===================
Sage Sample Package
===================
.. image:: https://mybinder.org/badge.svg
:target: https://mybinder.org/v2/gh/sagemath/sage_sample/master
This package is designed as a simple `SageMath <http://www.sagemath.org>`_ package
example to serve as a good practice reference for package developers. We follow
Python recommendations and adapt them to the SageMath community. You can find more
advanced documentation on Python package creation in
`How To Package Your Python Code <https://packaging.python.org/>`_.
This is still a work in progress. Once this example has
stabilized, the plan is to make a
`cookie cutter <https://cookiecutter.readthedocs.io/en/latest/>`_
template out of it.
Installation
------------
Try the `demo <https://mybinder.org/v2/gh/sagemath/sage_sample/master?filepath=demo.ipynb>`_ on binder
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Local install from source
^^^^^^^^^^^^^^^^^^^^^^^^^
Download the source from the git repository::
$ git clone https://github.com/sagemath/sage_sample.git
Change to the root directory and run::
$ sage -pip install --upgrade --no-index -v .
For convenience this package contains a `makefile <makefile>`_ with this
and other frequently used commands. Should you wish to, you can use the
shorthand::
$ make install
Install from PyPI
^^^^^^^^^^^^^^^^^^
sage_sample is distributed on PyPI. You can install it with the command::
$ sage -pip install sage_sample
To distribute your own package on PyPI, you will need an account on pypi.org
(and perhaps at first on test.pypi.org).
You also need to install ``setuptools``, ``wheel`` and ``twine``::
$ sage -pip install --upgrade setuptools wheel twine
Build the package::
$ python setup.py sdist bdist_wheel
Upload the package to the test PyPI repository and install it from there to test::
$ twine upload --repository-url https://test.pypi.org/legacy/ dist/*
$ sage -pip install -i https://test.pypi.org/simple sage_sample
Finally, upload your distribution to the real PyPI (optionally signing it with GPG)::
$ twine upload [-s] dist/*
Usage
-----
Once the package is installed, you can use it in Sage with::
sage: from sage_sample import answer_to_ultimate_question
sage: answer_to_ultimate_question()
42
See also the `demo notebook <demo.ipynb>`_.
Setup
------
All packaging setup is done through ``setup.py``. To create your own package,
follow the structure of the file and change the parameters accordingly.
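For orientation, the skeleton of such a ``setup.py`` might look like the following sketch. All metadata values are placeholders to adapt, not the actual ones used by sage_sample:

```python
# Hypothetical minimal setup.py for a package following this layout.
# Every value below is a placeholder; adapt it to your own project.
from setuptools import setup

metadata = dict(
    name='my_sage_package',          # distribution name on PyPI
    version='0.1.0',
    description='A sample SageMath package',
    packages=['my_sage_package'],    # folder containing __init__.py
    url='https://example.com/my_sage_package',
)

if __name__ == '__main__':
    setup(**metadata)
```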
Source code
-----------
All source code is stored in the folder ``sage_sample``, which uses the same name as
the package. This is not mandatory, but highly recommended for clarity. Every source
folder must contain an ``__init__.py`` file with the needed imports.
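Based on the usage example below (``from sage_sample import answer_to_ultimate_question``) and the module ``sage_sample/ultimate_question.py``, the ``__init__.py`` presumably just re-exports the public API, roughly:

```python
# sage_sample/__init__.py (sketch): re-export the public entry points so
# that `from sage_sample import answer_to_ultimate_question` works.
from .ultimate_question import answer_to_ultimate_question
```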
Tests
-----
This package is configured for tests written in the documentation
strings, also known as ``doctests``. For examples, see this
`source file <sage_sample/ultimate_question.py>`_. See also
`SageMath's coding conventions and best practices document <http://doc.sagemath.org/html/en/developer/coding_basics.html#writing-testable-examples>`_.
With additional configuration, it would be possible to include unit
tests as well.
Once the package is installed, one can use the SageMath test system
configured in ``setup.py`` to run the tests::
$ sage setup.py test
This is just calling ``sage -t`` with appropriate flags.
Shorthand::
$ make test
Documentation
-------------
The documentation of the package can be generated using Sage's
``Sphinx`` installation::
$ cd docs
$ sage -sh -c "make html"
Shorthand::
$ make doc
For this to work on your own package, make sure you follow the same
structure as we do here:
* Create a ``docs`` folder containing the exact same ``Makefile`` and a ``source``
folder.
* Copy and paste the ``docs/source/conf.py`` file from this package and update
the few project specific variables at the beginning of the file.
* Create an ``index.rst`` file as well as a ``<module name>.rst`` file for each
module you want on the documentation.
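For reference, the project-specific variables at the top of a hypothetical ``docs/source/conf.py`` might look like this; all values are placeholders to replace with your own:

```python
# Hypothetical project-specific header of docs/source/conf.py.
# Only these values normally need updating; the rest of the file
# can be copied verbatim from this package.
project = 'my_sage_package'
copyright = '2024, Your Name'
author = 'Your Name'
version = '0.1'      # short X.Y version
release = '0.1.0'    # full version string
```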
Travis CI integration
---------------------
.. image:: https://travis-ci.org/sagemath/sage_sample.svg?branch=master
:target: https://travis-ci.org/sagemath/sage_sample
Scripts that run ``make test`` on various SageMath versions on the
Travis CI system are included.
https://docs.travis-ci.com/user/for-beginners explains how to enable
automatic Travis CI builds for your GitHub-hosted project.
The scripts download and install binary releases (7.1-7.4) from a
SageMath mirror. Edit ``.travis-install.sh`` if some optional or
experimental SageMath packages need to be installed prior to running
your package. Edit ``.travis.yml`` to change the list of SageMath
versions used.
Automatically deploying documentation to GitHub pages using Travis CI
---------------------------------------------------------------------
* First do the steps described above to enable Travis CI integration
of your GitHub-hosted project.
* If you don't already have GitHub pages for your project: Create and
checkout a branch ``gh-pages`` in your repository and put an empty
file ``.nojekyll`` in it (see
https://help.github.com/articles/files-that-start-with-an-underscore-are-missing/).
Then commit it and push it to GitHub::
$ git clone --single-branch --depth 1 https://github.com/USER/PROJECT.git gh-pages
$ cd gh-pages
$ git checkout --orphan gh-pages
$ git rm -rf .
$ touch .nojekyll
$ git add .nojekyll
$ git commit -m "Initial commit"
$ git push -u origin gh-pages
$ cd ..
* (Back in your working copy:) Generate a new ssh key pair with an
empty passphrase::
$ ssh-keygen -t dsa -f .travis_ci_gh_pages_deploy_key
* Add the public ssh key (contents of the file
``.travis_ci_gh_pages_deploy_key.pub``) to your GitHub repository
as a deploy key (Settings/Deploy keys/Add deploy key).
Title: Key for deploying documentation to GitHub pages.
Check Allow write access.
* Install the Travis CI command-line client from
https://github.com/travis-ci/travis.rb::
$ gem install travis
* Log in to Travis CI using your GitHub credentials::
$ travis login
* Encrypt the private ssh key, add the decryption keys
as secure environment variables to Travis CI, and
add code to ``.travis.yml`` to decrypt it::
$ travis encrypt-file .travis_ci_gh_pages_deploy_key --add before_script
* Add the encrypted private ssh key to the repository::
$ git add .travis_ci_gh_pages_deploy_key.enc
* Have git ignore the other keys (and the gh-pages directory)::
$ echo >> .gitignore
$ echo "/.travis_ci_gh_pages_deploy_key" >> .gitignore
$ echo "/.travis_ci_gh_pages_deploy_key.pub" >> .gitignore
$ echo "/gh-pages" >> .gitignore
$ git add .gitignore
* Optionally, edit ``.travis.yml`` to adjust variables ``DEPLOY_DOC_...``
* Commit all changes and push them to GitHub. The Travis CI build should then
  run automatically and deploy the documentation::
$ git add .travis.yml
$ git commit -m "Deploy built documentation to GitHub"
$ git push
* The deployed documentation will be available at:
https://USER.github.io/PROJECT/
  This can be customized by changing ``DEPLOY_DOC_TO_DIRECTORY=/``
  to another directory in ``.travis.yml``.
For example, setting ``DEPLOY_DOC_TO_DIRECTORY=doc/html`` will make
the deployed documentation available at:
https://USER.github.io/PROJECT/doc/html/
import os
import importlib.util
from sage.misc.package_dir import SourceDistributionFilter
from sage_setup.find import installed_files_by_module, get_extensions
def _remove(file_set, module_base, to_remove):
"""
Helper to remove files from a set of filenames.
INPUT:
- ``file_set`` -- a set of filenames.
- ``module_base`` -- string. Name of a Python package/module.
- ``to_remove`` -- list/tuple/iterable of strings. Either
filenames or extensions (starting with ``'.'``)
OUTPUT:
This function does not return anything. The ``file_set`` parameter
is modified in place.
EXAMPLES::
sage: files = set(['a/b/c.py', 'a/b/d.py', 'a/b/c.pyx'])
sage: from sage_setup.clean import _remove
sage: _remove(files, 'a.b', ['c.py', 'd.py'])
sage: files
{'a/b/c.pyx'}
sage: files = set(['a/b/c.py', 'a/b/d.py', 'a/b/c.pyx'])
sage: _remove(files, 'a.b.c', ['.py', '.pyx'])
sage: files
{'a/b/d.py'}
"""
path = os.path.join(*module_base.split('.'))
for filename in to_remove:
if filename.startswith('.'):
filename = path + filename
else:
filename = os.path.join(path, filename)
remove = [filename]
remove.append(importlib.util.cache_from_source(filename))
file_set.difference_update(remove)
def _find_stale_files(site_packages, python_packages, python_modules, ext_modules, data_files, nobase_data_files=()):
"""
Find stale files
This method lists all files installed and then subtracts the ones
which are intentionally being installed.
EXAMPLES:
It is crucial that only truly stale files are being found, of
course. We check that when the doctest is being run, that is,
after installation, there are no stale files::
sage: from sage.env import SAGE_SRC, SAGE_LIB, SAGE_ROOT
sage: from sage_setup.find import _cythonized_dir
sage: cythonized_dir = _cythonized_dir(SAGE_SRC)
sage: from sage_setup.find import find_python_sources, find_extra_files
sage: python_packages, python_modules, cython_modules = find_python_sources(
....: SAGE_SRC, ['sage', 'sage_setup'])
sage: extra_files = find_extra_files(SAGE_SRC,
....: ['sage', 'sage_setup'], cythonized_dir, [])
sage: from importlib.metadata import files
sage: for f in files('sagemath-standard'):
....: dir = os.path.dirname(str(f))
....: extra_files[dir] = extra_files.get(dir, [])
....: extra_files[dir].append(str(f))
sage: extra_files = list(extra_files.items())
sage: from sage_setup.clean import _find_stale_files
TODO: Also check extension modules::
sage: stale_iter = _find_stale_files(SAGE_LIB, python_packages, python_modules, [], extra_files)
sage: from importlib.machinery import EXTENSION_SUFFIXES
sage: skip_extensions = tuple(EXTENSION_SUFFIXES)
sage: for f in stale_iter:
....: if f.endswith(skip_extensions): continue
....: if '/ext_data/' in f: continue
....: print('Found stale file: ' + f)
"""
PYMOD_EXTS = get_extensions('source') + get_extensions('bytecode')
CEXTMOD_EXTS = get_extensions('extension')
INIT_FILES = tuple('__init__' + x for x in PYMOD_EXTS)
module_files = installed_files_by_module(site_packages, ['sage'])
for mod in python_packages:
try:
files = module_files[mod]
except KeyError:
# the source module "mod" has not been previously installed, fine.
continue
_remove(files, mod, INIT_FILES)
for mod in python_modules:
try:
files = module_files[mod]
except KeyError:
continue
_remove(files, mod, PYMOD_EXTS)
for ext in ext_modules:
mod = ext.name
try:
files = module_files[mod]
except KeyError:
continue
_remove(files, mod, CEXTMOD_EXTS)
# Convert data_files to a set
installed_files = set()
for dir, files in data_files:
for f in files:
installed_files.add(os.path.join(dir, os.path.basename(f)))
for dir, files in nobase_data_files:
for f in files:
installed_files.add(f)
for files in module_files.values():
for f in files:
if f not in installed_files:
yield f
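The subtraction logic at the heart of ``_find_stale_files`` can be illustrated standalone: collect the files each installed module owns, subtract the intentionally installed ones, and whatever remains is stale. All file names in this sketch are made up for illustration:

```python
def find_stale(installed_by_module, intended):
    """Yield installed files that no intentionally-installed file accounts for."""
    intended = set(intended)
    for files in installed_by_module.values():
        for f in files:
            if f not in intended:
                yield f

# Hypothetical installation state: pkg/b.pyc was left behind by a
# module whose source has since been removed.
installed = {
    'pkg.a': {'pkg/a.py', 'pkg/a.pyc'},
    'pkg.b': {'pkg/b.pyc'},
}
intended_files = ['pkg/a.py', 'pkg/a.pyc']
stale = sorted(find_stale(installed, intended_files))
print(stale)  # ['pkg/b.pyc']
```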
def clean_install_dir(site_packages, python_packages, python_modules, ext_modules, data_files, nobase_data_files, *,
distributions=None, exclude_distributions=None):
"""
Delete all modules that are **not** being installed
If you switch branches it is common to (re)move the source for an
already installed module. Subsequent rebuilds will leave the stale
module in the install directory, which can break programs that try
to import all modules. In particular, the Sphinx autodoc builder
does this and is susceptible to hard-to-reproduce failures that
way. Hence we must make sure to delete all stale modules.
INPUT:
- ``site_packages`` -- the root Python path where the Sage library
is being installed.
- ``python_packages`` -- list of pure Python packages (directories
with ``__init__.py``).
- ``python_modules`` -- list of pure Python modules.
- ``ext_modules`` -- list of distutils ``Extension`` classes. The
output of ``cythonize``.
- ``data_files`` -- a list of (installation directory, files) pairs,
like the ``data_files`` argument to distutils' ``setup()``. Only
the basename of the files is used.
- ``nobase_data_files`` -- a list of (installation directory, files)
pairs. The files are expected to be in a subdirectory of the
installation directory; the filenames are used as is.
- ``distributions`` -- (default: ``None``) if not ``None``,
should be a sequence or set of strings: only clean files whose
``distribution`` (from a ``# sage_setup: distribution = PACKAGE``
directive in the file) is an element of ``distributions``.
"""
distribution_filter = SourceDistributionFilter(distributions, exclude_distributions)
stale_file_iter = _find_stale_files(
site_packages, python_packages, python_modules, ext_modules, data_files, nobase_data_files)
for f in stale_file_iter:
f = os.path.join(site_packages, f)
if f in distribution_filter:
print('Cleaning up stale file: {0}'.format(f))
            os.unlink(f)
import os
import sys
import time
keep_going = False
def run_command(cmd):
"""
INPUT:
- ``cmd`` -- a string; a command to run
OUTPUT: prints ``cmd`` to the console and then runs
``os.system(cmd)``.
"""
print(cmd)
sys.stdout.flush()
return os.system(cmd)
def apply_func_progress(p):
"""
Given a triple p consisting of a function, value and a string,
output the string and apply the function to the value.
The string could for example be some progress indicator.
This exists solely because we can't pickle an anonymous function
in execute_list_of_commands_in_parallel below.
"""
sys.stdout.write(p[2])
sys.stdout.flush()
return p[0](p[1])
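A quick way to see why this helper must be a module-level function: ``multiprocessing`` pickles the callable it sends to worker processes, and an anonymous function has no importable name to pickle by. This standalone check uses the built-in ``len`` as a stand-in for any importable function:

```python
import pickle

# A function with an importable name pickles by reference to that name:
assert pickle.loads(pickle.dumps(len)) is len

# An anonymous function does not -- there is no name to import it back by:
try:
    pickle.dumps(lambda x: x + 1)
    picklable = True
except Exception:
    picklable = False
print("lambda picklable:", picklable)  # lambda picklable: False
```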
def execute_list_of_commands_in_parallel(command_list, nthreads):
"""
Execute the given list of commands, possibly in parallel, using
``nthreads`` threads. Terminates ``setup.py`` with an exit code
of 1 if an error occurs in any subcommand.
INPUT:
- ``command_list`` -- a list of commands, each given as a pair of
the form ``[function, argument]`` of a function to call and its
argument
- ``nthreads`` -- integer; number of threads to use
WARNING: commands are run roughly in order, but of course successive
commands may be run at the same time.
"""
# Add progress indicator strings to the command_list
N = len(command_list)
progress_fmt = "[{:%i}/{}] " % len(str(N))
for i in range(N):
progress = progress_fmt.format(i+1, N)
command_list[i] = command_list[i] + (progress,)
from multiprocessing import Pool
# map_async handles KeyboardInterrupt correctly if an argument is
# given to get(). Plain map() and apply_async() do not work
# correctly, see Trac #16113.
pool = Pool(nthreads)
result = pool.map_async(apply_func_progress, command_list, 1).get(99999)
pool.close()
pool.join()
process_command_results(result)
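The ``map_async(...).get(timeout)`` idiom used above can be sketched in isolation. This sketch substitutes the thread-based pool so it stays self-contained (no subprocess pickling); ``multiprocessing.Pool`` exposes the identical interface, and the large timeout passed to ``get()`` is what keeps ``KeyboardInterrupt`` deliverable to the main process while it waits:

```python
from multiprocessing.pool import ThreadPool
from math import factorial

pool = ThreadPool(2)
# Passing a (large) timeout to .get() is the documented workaround so the
# main thread can still receive KeyboardInterrupt; a plain map() blocks it.
results = pool.map_async(factorial, [3, 4, 5], 1).get(99999)
pool.close()
pool.join()
print(results)  # [6, 24, 120]
```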
def process_command_results(result_values):
error = None
for r in result_values:
if r:
print("Error running command, failed with status %s."%r)
if not keep_going:
sys.exit(1)
error = r
if error:
sys.exit(1)
def execute_list_of_commands(command_list):
"""
INPUT:
- ``command_list`` -- a list of strings or pairs
OUTPUT:
For each entry in command_list, we attempt to run the command.
If it is a string, we call ``os.system()``. If it is a pair [f, v],
we call f(v).
If the environment variable :envvar:`SAGE_NUM_THREADS` is set, use
that many threads.
"""
t = time.time()
# Determine the number of threads from the environment variable
# SAGE_NUM_THREADS, which is set automatically by sage-env
try:
nthreads = int(os.environ['SAGE_NUM_THREADS'])
except KeyError:
nthreads = 1
# normalize the command_list to handle strings correctly
command_list = [ [run_command, x] if isinstance(x, str) else x for x in command_list ]
# No need for more threads than there are commands, but at least one
nthreads = min(len(command_list), nthreads)
nthreads = max(1, nthreads)
def plural(n,noun):
if n == 1:
return "1 %s"%noun
return "%i %ss"%(n,noun)
print("Executing %s (using %s)"%(plural(len(command_list),"command"), plural(nthreads,"thread")))
execute_list_of_commands_in_parallel(command_list, nthreads)
    print("Time to execute %s: %.2f seconds."%(plural(len(command_list),"command"), time.time() - t))
from __future__ import print_function, absolute_import
from .utils import je, reindent_lines as ri
def string_of_addr(a):
r"""
An address or a length from a parameter specification may be
either None, an integer, or a MemoryChunk. If the address or
length is an integer or a MemoryChunk, this function will convert
it to a string giving an expression that will evaluate to the correct
address or length. (See the docstring for params_gen for more
information on parameter specifications.)
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc_code = MemoryChunkConstants('code', ty_int)
sage: string_of_addr(mc_code)
'*code++'
sage: string_of_addr(42r)
'42'
"""
if isinstance(a, int):
return str(a)
assert(isinstance(a, MemoryChunk))
return '*%s++' % a.name
class MemoryChunk(object):
r"""
Memory chunks control allocation, deallocation, initialization,
etc. of the vectors and objects in the interpreter. Basically,
there is one memory chunk per argument to the C interpreter.
There are three "generic" varieties of memory chunk: "constants",
"arguments", and "scratch". These are named after their most
common use, but they could be used for other things in some
interpreters.
All three kinds of chunks are allocated in the wrapper class.
Constants are initialized when the wrapper is constructed;
arguments are initialized in the __call__ method, from the
caller's arguments. "scratch" chunks are not initialized at all;
they are used for scratch storage (often, but not necessarily, for
a stack) in the interpreter.
Interpreters which need memory chunks that don't fit into these
categories can create new subclasses of MemoryChunk.
"""
def __init__(self, name, storage_type):
r"""
Initialize an instance of MemoryChunk.
This sets the properties "name" (the name of this memory chunk;
used in generated variable names, etc.) and "storage_type",
which is a StorageType object.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkArguments('args', ty_mpfr)
sage: mc.name
'args'
sage: mc.storage_type is ty_mpfr
True
"""
self.name = name
self.storage_type = storage_type
def __repr__(self):
r"""
Give a string representation of this memory chunk.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkArguments('args', ty_mpfr)
sage: mc
{MC:args}
sage: mc.__repr__()
'{MC:args}'
"""
return '{MC:%s}' % self.name
def declare_class_members(self):
r"""
Return a string giving the declarations of the class members
in a wrapper class for this memory chunk.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkArguments('args', ty_mpfr)
sage: mc.declare_class_members()
' cdef int _n_args\n cdef mpfr_t* _args\n'
"""
return self.storage_type.declare_chunk_class_members(self.name)
def init_class_members(self):
r"""
Return a string to be put in the __init__ method of a wrapper
class using this memory chunk, to initialize the corresponding
class members.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkArguments('args', ty_mpfr)
sage: print(mc.init_class_members())
count = args['args']
self._n_args = count
self._args = <mpfr_t*>check_allocarray(self._n_args, sizeof(mpfr_t))
for i in range(count):
mpfr_init2(self._args[i], self.domain.prec())
<BLANKLINE>
"""
return ""
def dealloc_class_members(self):
r"""
Return a string to be put in the __dealloc__ method of a wrapper
class using this memory chunk, to deallocate the corresponding
class members.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkArguments('args', ty_mpfr)
sage: print(mc.dealloc_class_members())
if self._args:
for i in range(self._n_args):
mpfr_clear(self._args[i])
sig_free(self._args)
<BLANKLINE>
"""
return ""
def declare_parameter(self):
r"""
Return the string to use to declare the interpreter parameter
corresponding to this memory chunk.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkArguments('args', ty_mpfr)
sage: mc.declare_parameter()
'mpfr_t* args'
"""
return '%s %s' % (self.storage_type.c_ptr_type(), self.name)
def declare_call_locals(self):
r"""
Return a string to put in the __call__ method of a wrapper
class using this memory chunk, to allocate local variables.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkRRRetval('retval', ty_mpfr)
sage: mc.declare_call_locals()
' cdef RealNumber retval = (self.domain)()\n'
"""
return ""
def pass_argument(self):
r"""
Return the string to pass the argument corresponding to this
memory chunk to the interpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkConstants('constants', ty_mpfr)
sage: mc.pass_argument()
'self._constants'
"""
raise NotImplementedError
def pass_call_c_argument(self):
r"""
Return the string to pass the argument corresponding to this
memory chunk to the interpreter, for use in the call_c method.
Almost always the same as pass_argument.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkConstants('constants', ty_mpfr)
sage: mc.pass_call_c_argument()
'self._constants'
"""
return self.pass_argument()
def needs_cleanup_on_error(self):
r"""
In an interpreter that can terminate prematurely (due to an
exception from calling Python code, or divide by zero, or
whatever) it will just return at the end of the current instruction,
skipping the rest of the program. Thus, it may still have
values pushed on the stack, etc.
This method returns True if this memory chunk is modified by the
interpreter and needs some sort of cleanup when an error happens.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkConstants('constants', ty_mpfr)
sage: mc.needs_cleanup_on_error()
False
"""
return False
def is_stack(self):
r"""
Says whether this memory chunk is a stack. This affects code
generation for instructions using this memory chunk.
It would be nicer to make this object-oriented somehow, so
that the code generator called MemoryChunk methods instead of
using::
if ch.is_stack():
... hardcoded stack code
else:
... hardcoded non-stack code
but that hasn't been done yet.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkScratch('scratch', ty_mpfr)
sage: mc.is_stack()
False
sage: mc = MemoryChunkScratch('stack', ty_mpfr, is_stack=True)
sage: mc.is_stack()
True
"""
return False
def is_python_refcounted_stack(self):
r"""
Says whether this memory chunk refers to a stack where the entries
need to be INCREF/DECREF'ed.
It would be nice to make this object-oriented, so that the
code generator called MemoryChunk methods to do the potential
INCREF/DECREF and didn't have to explicitly test
is_python_refcounted_stack.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkScratch('args', ty_python)
sage: mc.is_python_refcounted_stack()
False
sage: mc = MemoryChunkScratch('args', ty_python, is_stack=True)
sage: mc.is_python_refcounted_stack()
True
sage: mc = MemoryChunkScratch('args', ty_mpfr, is_stack=True)
sage: mc.is_python_refcounted_stack()
False
"""
return self.is_stack() and self.storage_type.python_refcounted()
class MemoryChunkLonglivedArray(MemoryChunk):
r"""
MemoryChunkLonglivedArray is a subtype of MemoryChunk that deals
with memory chunks that are both 1) allocated as class members (rather
than being allocated in __call__) and 2) are arrays.
"""
def init_class_members(self):
r"""
Return a string to be put in the __init__ method of a wrapper
class using this memory chunk, to initialize the corresponding
class members.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkArguments('args', ty_double)
sage: print(mc.init_class_members())
count = args['args']
self._n_args = count
self._args = <double*>check_allocarray(self._n_args, sizeof(double))
<BLANKLINE>
"""
return je(ri(0, """
count = args['{{ myself.name }}']
{% print(myself.storage_type.alloc_chunk_data(myself.name, 'count')) %}
"""), myself=self)
def dealloc_class_members(self):
r"""
Return a string to be put in the __dealloc__ method of a wrapper
class using this memory chunk, to deallocate the corresponding
class members.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkArguments('args', ty_mpfr)
sage: print(mc.dealloc_class_members())
if self._args:
for i in range(self._n_args):
mpfr_clear(self._args[i])
sig_free(self._args)
<BLANKLINE>
"""
return self.storage_type.dealloc_chunk_data(self.name)
def pass_argument(self):
r"""
Return the string to pass the argument corresponding to this
memory chunk to the interpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkConstants('constants', ty_mpfr)
sage: mc.pass_argument()
'self._constants'
"""
return 'self._%s' % self.name
class MemoryChunkConstants(MemoryChunkLonglivedArray):
r"""
MemoryChunkConstants is a subtype of MemoryChunkLonglivedArray.
MemoryChunkConstants chunks have their contents set in the
wrapper's __init__ method (and not changed afterward).
"""
def init_class_members(self):
r"""
Return a string to be put in the __init__ method of a wrapper
class using this memory chunk, to initialize the corresponding
class members.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkConstants('constants', ty_mpfr)
sage: print(mc.init_class_members())
val = args['constants']
self._n_constants = len(val)
self._constants = <mpfr_t*>check_allocarray(self._n_constants, sizeof(mpfr_t))
for i in range(len(val)):
mpfr_init2(self._constants[i], self.domain.prec())
for i in range(len(val)):
rn = self.domain(val[i])
mpfr_set(self._constants[i], rn.value, MPFR_RNDN)
<BLANKLINE>
"""
return je(ri(0, """
val = args['{{ myself.name }}']
{% print(myself.storage_type.alloc_chunk_data(myself.name, 'len(val)')) %}
for i in range(len(val)):
{{ myself.storage_type.assign_c_from_py('self._%s[i]' % myself.name, 'val[i]') | i(12) }}
"""), myself=self)
class MemoryChunkArguments(MemoryChunkLonglivedArray):
r"""
MemoryChunkArguments is a subtype of MemoryChunkLonglivedArray,
for dealing with arguments to the wrapper's ``__call__`` method.
Currently the ``__call__`` method is declared to take a varargs
`*args` argument tuple. We assume that the MemoryChunk named `args`
will deal with that tuple.
"""
def setup_args(self):
r"""
Handle the arguments of __call__ -- copy them into a pre-allocated
array, ready to pass to the interpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkArguments('args', ty_mpfr)
sage: print(mc.setup_args())
cdef mpfr_t* c_args = self._args
cdef int i
for i from 0 <= i < len(args):
rn = self.domain(args[i])
mpfr_set(self._args[i], rn.value, MPFR_RNDN)
<BLANKLINE>
"""
return je(ri(0, """
cdef {{ myself.storage_type.c_ptr_type() }} c_args = self._args
cdef int i
for i from 0 <= i < len(args):
{{ myself.storage_type.assign_c_from_py('self._args[i]', 'args[i]') | i(4) }}
"""), myself=self)
def pass_argument(self):
r"""
Return the string to pass the argument corresponding to this
memory chunk to the interpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkArguments('args', ty_mpfr)
sage: mc.pass_argument()
'c_args'
"""
return 'c_args'
class MemoryChunkScratch(MemoryChunkLonglivedArray):
r"""
MemoryChunkScratch is a subtype of MemoryChunkLonglivedArray
for dealing with memory chunks that are allocated in the wrapper,
but only used in the interpreter -- stacks, scratch registers, etc.
(Currently these are only used as stacks.)
"""
def __init__(self, name, storage_type, is_stack=False):
r"""
Initialize an instance of MemoryChunkScratch.
Initializes the _is_stack property, as well as
the properties described in the documentation for
MemoryChunk.__init__.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkScratch('stack', ty_double, is_stack=True)
sage: mc.name
'stack'
sage: mc.storage_type is ty_double
True
sage: mc._is_stack
True
"""
super(MemoryChunkScratch, self).__init__(name, storage_type)
self._is_stack = is_stack
def is_stack(self):
r"""
Says whether this memory chunk is a stack. This affects code
generation for instructions using this memory chunk.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkScratch('stack', ty_mpfr, is_stack=True)
sage: mc.is_stack()
True
"""
return self._is_stack
def needs_cleanup_on_error(self):
r"""
In an interpreter that can terminate prematurely (due to an
exception from calling Python code, or divide by zero, or
whatever) it will just return at the end of the current instruction,
skipping the rest of the program. Thus, it may still have
values pushed on the stack, etc.
This method returns True if this memory chunk is modified by the
interpreter and needs some sort of cleanup when an error happens.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkScratch('registers', ty_python)
sage: mc.needs_cleanup_on_error()
True
"""
return self.storage_type.python_refcounted()
def handle_cleanup(self):
r"""
Handle the cleanup if the interpreter exits with an error.
For scratch/stack chunks that hold Python-refcounted values,
we assume that they are filled with NULL on every entry to the
interpreter. If the interpreter exited with an error, it may
have left values in the chunk, so we need to go through
the chunk and Py_CLEAR it.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkScratch('registers', ty_python)
sage: print(mc.handle_cleanup())
for i in range(self._n_registers):
Py_CLEAR(self._registers[i])
<BLANKLINE>
"""
# XXX This is a lot slower than it needs to be, because
# we don't have a "cdef int i" in scope here.
return je(ri(0, """
for i in range(self._n_{{ myself.name }}):
Py_CLEAR(self._{{ myself.name }}[i])
        """), myself=self)
from __future__ import print_function, absolute_import
import re
from .storage import ty_int
def params_gen(**chunks):
r"""
Instructions have a parameter specification that says where they get
their inputs and where their outputs go. Each parameter has
the same form: it is a triple (chunk, addr, len). The chunk says
where the parameter is read from/written to. The addr says which
value in the chunk is used. If the chunk is a stack chunk, then
addr must be null; the value will be read from/written to the top
of the stack. Otherwise, addr must be an integer, or another chunk;
if addr is another chunk, then the next value is read from that chunk
to be the address.
The len says how many values to read/write. It can be either None
(meaning to read/write only a single value), an integer, or
another chunk; if it is a chunk, then the next value is read from that
chunk to be the len. Note that specifying len changes the types
given to the instruction, so len=None is different than len=1 even
though both mean to use a single value.
These parameter specifications are cumbersome to write by hand, so
there's also a simple string format for them. This (curried)
function parses the simple string format and produces parameter
specifications. The params_gen function takes keyword arguments
mapping single-character names to memory chunks. The string format
uses these names. The params_gen function returns another function,
that takes two strings and returns a pair of lists of parameter
specifications.
Each string is the concatenation of arbitrarily many specifications.
Each specification consists of an address and a length. The
address is either a single character naming a stack chunk,
or a string of the form 'A[B]' where A names a non-stack chunk
and B names the code chunk. The length is either empty, or '@n'
for a number n (meaning to use that many arguments), or '@C', where
C is the code chunk.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc_stack = MemoryChunkScratch('stack', ty_double, is_stack=True)
sage: mc_args = MemoryChunkArguments('args', ty_double)
sage: mc_code = MemoryChunkConstants('code', ty_int)
sage: pg = params_gen(D=mc_code, A=mc_args, S=mc_stack)
sage: pg('S', '')
([({MC:stack}, None, None)], [])
sage: pg('A[D]', '')
([({MC:args}, {MC:code}, None)], [])
sage: pg('S@5', '')
([({MC:stack}, None, 5)], [])
sage: pg('S@D', '')
([({MC:stack}, None, {MC:code})], [])
sage: pg('A[D]@D', '')
([({MC:args}, {MC:code}, {MC:code})], [])
sage: pg('SSS@D', 'A[D]S@D')
([({MC:stack}, None, None), ({MC:stack}, None, None), ({MC:stack}, None, {MC:code})], [({MC:args}, {MC:code}, None), ({MC:stack}, None, {MC:code})])
"""
def make_params(s):
p = []
s = s.strip()
while s:
chunk_code = s[0]
s = s[1:]
chunk = chunks[chunk_code]
addr = None
ch_len = None
# shouldn't hardcode 'code' here
if chunk.is_stack() or chunk.name == 'code':
pass
else:
m = re.match(r'\[(?:([0-9]+)|([a-zA-Z]))\]', s)
if m.group(1):
addr = int(m.group(1))
else:
ch = chunks[m.group(2)]
assert ch.storage_type is ty_int
addr = ch
s = s[m.end():].strip()
if len(s) and s[0] == '@':
m = re.match(r'@(?:([0-9]+)|([a-zA-Z]))', s)
if m.group(1):
ch_len = int(m.group(1))
else:
ch = chunks[m.group(2)]
assert ch.storage_type is ty_int
ch_len = ch
s = s[m.end():].strip()
p.append((chunk, addr, ch_len))
return p
def params(s_ins, s_outs):
ins = make_params(s_ins)
outs = make_params(s_outs)
return (ins, outs)
return params
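For illustration, here is a minimal standalone sketch of the same spec-string parser, using a hypothetical ``Chunk`` stand-in instead of the real memory-chunk classes (the class and chunk names here are invented for the sketch, not part of the interpreter framework):

```python
import re

class Chunk:
    """Hypothetical stand-in for a memory chunk."""
    def __init__(self, name, is_stack=False):
        self.name = name
        self.is_stack = is_stack
    def __repr__(self):
        return '{MC:%s}' % self.name

def parse_params(s, chunks):
    """Parse a spec string like 'A[D]@D' into (chunk, addr, len) triples."""
    specs = []
    s = s.strip()
    while s:
        chunk = chunks[s[0]]
        s = s[1:]
        addr = None
        if not chunk.is_stack and s.startswith('['):
            m = re.match(r'\[(?:([0-9]+)|([a-zA-Z]))\]', s)
            addr = int(m.group(1)) if m.group(1) else chunks[m.group(2)]
            s = s[m.end():]
        length = None
        if s.startswith('@'):
            m = re.match(r'@(?:([0-9]+)|([a-zA-Z]))', s)
            length = int(m.group(1)) if m.group(1) else chunks[m.group(2)]
            s = s[m.end():]
        specs.append((chunk, addr, length))
    return specs

chunks = {'S': Chunk('stack', is_stack=True),
          'A': Chunk('args'), 'D': Chunk('code')}
print(parse_params('A[D]@D', chunks))   # → [({MC:args}, {MC:code}, {MC:code})]
```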
class InstrSpec(object):
r"""
Each instruction in an interpreter is represented as an InstrSpec.
This contains all the information that we need to generate code
to interpret the instruction; it also is used to build the tables
that fast_callable uses, so this is the nexus point between
users of the interpreter (possibly pure Python) and the
generated C interpreter.
The underlying instructions are matched to the caller by name.
For instance, fast_callable assumes that if the interpreter has an
instruction named 'cos', then it will take a single argument,
return a single result, and implement the cos() function.
The print representation of an instruction (which will probably
only be used when doctesting this file) consists of the name,
a simplified stack effect, and the code (truncated if it's long).
The stack effect has two parts, the input and the output, separated
by '->'; the input shows what will be popped from the stack,
the output what will be placed on the stack. Each consists of
a sequence of 'S' and '*' characters, where 'S' refers to a single
argument and '*' refers to a variable number of arguments.
The code for an instruction is a small snippet of C code. It has
available variables 'i0', 'i1', ..., 'o0', 'o1', ...; one variable
for each input and output; its job is to assign values to the output
variables, based on the values of the input variables.
Normally, in an interpreter that uses doubles, each of the input
and output variables will be a double. If i0 actually represents
a variable number of arguments, then it will be a pointer to
double instead, and there will be another variable n_i0 giving
the actual number of arguments.
When instructions refer to auto-reference types, they actually
get a pointer to the data in its original location; it is
not copied into a local variable. Mostly, this makes no difference,
but there is one potential problem to be aware of. It is possible
for an output variable to point to the same object as an input
variable; in fact, this usually will happen when you're working
with the stack. If the instruction maps to a single function call,
then this is fine; the standard auto-reference implementations
(GMP, MPFR, etc.) are careful to allow having the input and output
be the same. But if the instruction maps to multiple function
calls, you may need to use a temporary variable.
Here's an example of this issue. Suppose you want to make an
instruction that does ``out = a+b*c``. You write code like this::
out = b*c
out = a+out
But out will actually share the same storage as a; so the first line
modifies a, and you actually end up computing 2*(b*c). The fix
is to only write to the output once, at the very end of your
instruction.
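This pitfall can be made concrete in plain Python, with one-element lists standing in for auto-reference values (the ``mul``/``add`` helpers below are hypothetical stand-ins for the generated C calls, invented just for this sketch):

```python
def mul(out, a, b):
    # out, a, b are 1-element lists, standing in for mpfr_t-style references
    out[0] = a[0] * b[0]

def add(out, a, b):
    out[0] = a[0] + b[0]

a = [5.0]; b = [2.0]; c = [3.0]
out = a                # the output aliases an input, as happens on the stack
mul(out, b, c)         # clobbers a: a[0] is now 6.0
add(out, a, out)       # computes 6.0 + 6.0, not 5.0 + 6.0
print(out[0])          # → 12.0, i.e. 2*(b*c) instead of a + b*c (11.0)
```

Writing the result to ``out`` only once, at the very end, avoids the clobbering.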
Instructions are also allowed to access memory chunks (other than
the stack and code) directly. They are available as C variables
with the same name as the chunk. This is useful if some type of
memory chunk doesn't fit well with the params_gen interface.
There are additional reference-counting rules that must be
followed if your interpreter operates on Python objects; these
rules are described in the docstring of the PythonInterpreter
class.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: pg = RDFInterpreter().pg
sage: InstrSpec('add', pg('SS','S'), code='o0 = i0+i1;')
add: SS->S = 'o0 = i0+i1;'
"""
def __init__(self, name, io, code=None, uses_error_handler=False,
handles_own_decref=False):
r"""
Initialize an InstrSpec.
INPUT:
- name -- the name of the instruction
- io -- a pair of lists of parameter specifications for I/O of the
instruction
- code -- a string containing a snippet of C code to read
from the input variables and write to the output variables
- uses_error_handler -- True if the instruction calls Python
and jumps to error: on a Python error
- handles_own_decref -- True if the instruction handles Python
objects and includes its own
reference-counting
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: pg = RDFInterpreter().pg
sage: InstrSpec('add', pg('SS','S'), code='o0 = i0+i1;')
add: SS->S = 'o0 = i0+i1;'
sage: instr = InstrSpec('py_call', pg('P[D]S@D', 'S'), code=('This is very complicated. ' + 'blah ' * 30)); instr
py_call: *->S = 'This is very compli... blah blah blah '
sage: instr.name
'py_call'
sage: instr.inputs
[({MC:py_constants}, {MC:code}, None), ({MC:stack}, None, {MC:code})]
sage: instr.outputs
[({MC:stack}, None, None)]
sage: instr.code
'This is very complicated. blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah '
sage: instr.parameters
['py_constants', 'n_inputs']
sage: instr.n_inputs
0
sage: instr.n_outputs
1
"""
self.name = name
self.inputs = io[0]
self.outputs = io[1]
self.uses_error_handler = uses_error_handler
self.handles_own_decref = handles_own_decref
if code is not None:
self.code = code
# XXX We assume that there is only one stack
n_inputs = 0
n_outputs = 0
in_effect = ''
out_effect = ''
p = []
for (ch, addr, len) in self.inputs:
if ch.is_stack():
if len is None:
n_inputs += 1
in_effect += 'S'
elif isinstance(len, int):
n_inputs += len
in_effect += 'S%d' % len
else:
p.append('n_inputs')
in_effect += '*'
else:
p.append(ch.name)
for (ch, addr, len) in self.outputs:
if ch.is_stack():
if len is None:
n_outputs += 1
out_effect += 'S'
elif isinstance(len, int):
n_outputs += len
out_effect += 'S%d' % len
else:
p.append('n_outputs')
out_effect += '*'
else:
p.append(ch.name)
self.parameters = p
self.n_inputs = n_inputs
self.n_outputs = n_outputs
self.in_effect = in_effect
self.out_effect = out_effect
def __repr__(self):
r"""
Produce a string representing a given instruction, consisting
of its name, a brief stack specification, and its code
(possibly abbreviated).
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: pg = RDFInterpreter().pg
sage: InstrSpec('add', pg('SS','S'), code='o0 = i0+i1;')
add: SS->S = 'o0 = i0+i1;'
"""
rcode = repr(self.code)
if len(rcode) > 40:
rcode = rcode[:20] + '...' + rcode[-17:]
return '%s: %s->%s = %s' % \
(self.name, self.in_effect, self.out_effect, rcode)
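The truncation rule used by ``__repr__`` above (first 20 characters, an ellipsis, then the last 17) can be exercised on its own; a minimal sketch:

```python
def abbreviate(rcode, limit=40):
    """Abbreviate a long repr the way InstrSpec.__repr__ does:
    keep the first 20 and last 17 characters around an ellipsis."""
    if len(rcode) > limit:
        return rcode[:20] + '...' + rcode[-17:]
    return rcode

print(abbreviate(repr('o0 = i0+i1;')))   # short: unchanged
print(abbreviate(repr('This is very complicated. ' + 'blah ' * 30)))
# → 'This is very compli... blah blah blah '
```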
# Now we have a series of helper functions that make it slightly easier
# to create instructions.
def instr_infix(name, io, op):
r"""
A helper function for creating instructions implemented by
a single infix binary operator.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: pg = RDFInterpreter().pg
sage: instr_infix('mul', pg('SS', 'S'), '*')
mul: SS->S = 'o0 = i0 * i1;'
"""
return InstrSpec(name, io, code='o0 = i0 %s i1;' % op)
def instr_funcall_2args(name, io, op):
r"""
A helper function for creating instructions implemented by
a two-argument function call.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: pg = RDFInterpreter().pg
sage: instr_funcall_2args('atan2', pg('SS', 'S'), 'atan2')
atan2: SS->S = 'o0 = atan2(i0, i1);'
"""
return InstrSpec(name, io, code='o0 = %s(i0, i1);' % op)
def instr_unary(name, io, op):
r"""
A helper function for creating instructions with one input
and one output.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: pg = RDFInterpreter().pg
sage: instr_unary('sin', pg('S','S'), 'sin(i0)')
sin: S->S = 'o0 = sin(i0);'
sage: instr_unary('neg', pg('S','S'), '-i0')
neg: S->S = 'o0 = -i0;'
"""
return InstrSpec(name, io, code='o0 = ' + op + ';')
def instr_funcall_2args_mpfr(name, io, op):
r"""
A helper function for creating MPFR instructions with two inputs
and one output.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: pg = RRInterpreter().pg
sage: instr_funcall_2args_mpfr('add', pg('SS','S'), 'mpfr_add')
add: SS->S = 'mpfr_add(o0, i0, i1, MPFR_RNDN);'
"""
return InstrSpec(name, io, code='%s(o0, i0, i1, MPFR_RNDN);' % op)
def instr_funcall_1arg_mpfr(name, io, op):
r"""
A helper function for creating MPFR instructions with one input
and one output.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: pg = RRInterpreter().pg
sage: instr_funcall_1arg_mpfr('exp', pg('S','S'), 'mpfr_exp')
exp: S->S = 'mpfr_exp(o0, i0, MPFR_RNDN);'
"""
return InstrSpec(name, io, code='%s(o0, i0, MPFR_RNDN);' % op)
def instr_funcall_2args_mpc(name, io, op):
r"""
A helper function for creating MPC instructions with two inputs
and one output.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: pg = CCInterpreter().pg
sage: instr_funcall_2args_mpc('add', pg('SS','S'), 'mpc_add')
add: SS->S = 'mpc_add(o0, i0, i1, MPC_RNDNN);'
"""
return InstrSpec(name, io, code='%s(o0, i0, i1, MPC_RNDNN);' % op)
def instr_funcall_1arg_mpc(name, io, op):
r"""
A helper function for creating MPC instructions with one input
and one output.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: pg = CCInterpreter().pg
sage: instr_funcall_1arg_mpc('exp', pg('S','S'), 'mpc_exp')
exp: S->S = 'mpc_exp(o0, i0, MPC_RNDNN);'
"""
return InstrSpec(name, io, code='%s(o0, i0, MPC_RNDNN);' % op)

# source: sage_setup/autogen/interpreters/instructions.py (sage-setup 10.0b0, PyPI)
from __future__ import print_function, absolute_import
import os
import textwrap
from jinja2 import Environment
from jinja2.runtime import StrictUndefined
# We share a single jinja2 environment among all templating in this
# file. We use trim_blocks=True (which means that we ignore white
# space after "%}" jinja2 command endings), and set undefined to
# complain if we use an undefined variable.
JINJA_ENV = Environment(trim_blocks=True, undefined=StrictUndefined)
# Allow 'i' as a shorter alias for the built-in 'indent' filter.
JINJA_ENV.filters['i'] = JINJA_ENV.filters['indent']
def je(template, **kwargs):
r"""
A convenience method for creating strings with Jinja templates.
The name je stands for "Jinja evaluate".
The first argument is the template string; remaining keyword
arguments define Jinja variables.
If the first character in the template string is a newline, it is
removed (this feature is useful when using multi-line templates defined
with triple-quoted strings -- the first line doesn't have to be on
the same line as the quotes, which would screw up the indentation).
(This is very inefficient, because it recompiles the Jinja
template on each call; don't use it in situations where
performance is important.)
EXAMPLES::
sage: from sage_setup.autogen.interpreters import je
sage: je("{{ a }} > {{ b }} * {{ c }}", a='"a suffusion of yellow"', b=3, c=7)
'"a suffusion of yellow" > 3 * 7'
"""
if template and template[0] == '\n':
template = template[1:]
# It looks like Jinja2 automatically removes one trailing newline?
if template and template[-1] == '\n':
template = template + '\n'
tmpl = JINJA_ENV.from_string(template)
return tmpl.render(kwargs)
def indent_lines(n, text):
r"""
Indent each line in text by ``n`` spaces.
INPUT:
- ``n`` -- indentation amount
- ``text`` -- text to indent
EXAMPLES::
sage: from sage_setup.autogen.interpreters import indent_lines
sage: indent_lines(3, "foo")
' foo'
sage: indent_lines(3, "foo\nbar")
' foo\n bar'
sage: indent_lines(3, "foo\nbar\n")
' foo\n bar\n'
"""
lines = text.splitlines(True)
spaces = ' ' * n
return ''.join((spaces if line.strip() else '') + line
for line in lines)
def reindent_lines(n, text):
r"""
Strips any existing indentation from the given text (while keeping
relative indentation), then re-indents the text by ``n`` spaces.
INPUT:
- ``n`` -- indentation amount
- ``text`` -- text to indent
EXAMPLES::
sage: from sage_setup.autogen.interpreters import reindent_lines
sage: print(reindent_lines(3, " foo\n bar"))
foo
bar
"""
return indent_lines(n, textwrap.dedent(text))
def write_if_changed(fn, value):
r"""
Write value to the file named fn, if value differs from
the current contents.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: def last_modification(fn): return os.stat(fn).st_mtime
sage: fn = tmp_filename('gen_interp')
sage: write_if_changed(fn, 'Hello, world')
sage: t1 = last_modification(fn)
sage: open(fn).read()
'Hello, world'
sage: sleep(2) # long time
sage: write_if_changed(fn, 'Goodbye, world')
sage: t2 = last_modification(fn)
sage: open(fn).read()
'Goodbye, world'
sage: sleep(2) # long time
sage: write_if_changed(fn, 'Goodbye, world')
sage: t3 = last_modification(fn)
sage: open(fn).read()
'Goodbye, world'
sage: t1 == t2 # long time
False
sage: t2 == t3
True
"""
old_value = None
try:
with open(fn) as file:
old_value = file.read()
except IOError:
pass
if value != old_value:
# We try to remove the file, in case it exists. This is to
# automatically break hardlinks... see #5350 for motivation.
try:
os.remove(fn)
except OSError:
pass
with open(fn, 'w') as file:
file.write(value)

# source: sage_setup/autogen/interpreters/utils.py (sage-setup 10.0b0, PyPI)
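As a self-contained demonstration of the write-if-changed idea (re-implemented here with only the stdlib so it runs outside Sage; the temporary path below is created just for the demo):

```python
import os
import tempfile

def write_if_changed(fn, value):
    """Write value to fn only if it differs from the current contents,
    so the mtime (and any build-system timestamps) stays untouched
    when nothing changed."""
    try:
        with open(fn) as f:
            old = f.read()
    except IOError:
        old = None
    if value != old:
        try:
            os.remove(fn)   # break hardlinks before rewriting
        except OSError:
            pass
        with open(fn, 'w') as f:
            f.write(value)

fn = os.path.join(tempfile.mkdtemp(), 'demo.txt')
write_if_changed(fn, 'Hello, world')
t1 = os.stat(fn).st_mtime_ns
write_if_changed(fn, 'Hello, world')   # unchanged -> file not rewritten
assert os.stat(fn).st_mtime_ns == t1
print(open(fn).read())                 # → Hello, world
```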
from __future__ import print_function, absolute_import
from .utils import je, reindent_lines as ri
class StorageType(object):
r"""
A StorageType specifies the C types used to deal with values of a
given type.
We currently support three categories of types.
First are the "simple" types. These are types where: the
representation is small, functions expect arguments to be passed
by value, and the C/C++ assignment operator works. This would
include built-in C types (long, float, etc.) and small structs
(like gsl_complex).
Second is 'PyObject*'. This is just like a simple type, except
that we have to incref/decref at appropriate places.
Third is "auto-reference" types. This is how
GMP/MPIR/MPFR/MPFI/FLINT types work. For these types, functions
expect arguments to be passed by reference, and the C assignment
operator does not do what we want. In addition, they take
advantage of a quirk in C (where arrays are automatically
converted to pointers) to automatically pass arguments by
reference.
Support for further categories would not be difficult to add (such
as reference-counted types other than PyObject*, or
pass-by-reference types that don't use the GMP auto-reference
trick), if we ever run across a use for them.
"""
def __init__(self):
r"""
Initialize an instance of StorageType.
This sets several properties:
class_member_declarations:
A string giving variable declarations that must be members of any
wrapper class using this type.
class_member_initializations:
A string initializing the class_member_declarations; will be
inserted into the __init__ method of any wrapper class using this
type.
local_declarations:
A string giving variable declarations that must be local variables
in Cython methods using this storage type.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.class_member_declarations
''
sage: ty_double.class_member_initializations
''
sage: ty_double.local_declarations
''
sage: ty_mpfr.class_member_declarations
'cdef RealField_class domain\n'
sage: ty_mpfr.class_member_initializations
"self.domain = args['domain']\n"
sage: ty_mpfr.local_declarations
'cdef RealNumber rn\n'
"""
self.class_member_declarations = ''
self.class_member_initializations = ''
self.local_declarations = ''
def cheap_copies(self):
r"""
Returns True or False, depending on whether this StorageType
supports cheap copies -- whether it is cheap to copy values of
this type from one location to another. This is true for
primitive types, and for types like PyObject* (where you are only
copying a pointer, and possibly changing some reference counts).
It is false for types like mpz_t and mpfr_t, where copying values
can involve arbitrarily much work (including memory allocation).
The practical effect is that if cheap_copies is True,
instructions with outputs of this type write the results into
local variables, and the results are then copied to their
final locations. If cheap_copies is False, then the addresses
of output locations are passed into the instruction and the
instruction writes outputs directly in the final location.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.cheap_copies()
True
sage: ty_python.cheap_copies()
True
sage: ty_mpfr.cheap_copies()
False
"""
return False
def python_refcounted(self):
r"""
Says whether this storage type is a Python type, so we need to
use INCREF/DECREF.
(If we needed to support any non-Python refcounted types, it
might be better to make this object-oriented and have methods
like "generate an incref" and "generate a decref". But as
long as we only support Python, this way is probably simpler.)
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.python_refcounted()
False
sage: ty_python.python_refcounted()
True
"""
return False
def cython_decl_type(self):
r"""
Give the Cython type for a single value of this type (as a string).
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.cython_decl_type()
'double'
sage: ty_python.cython_decl_type()
'object'
sage: ty_mpfr.cython_decl_type()
'mpfr_t'
"""
return self.c_decl_type()
def cython_array_type(self):
r"""
Give the Cython type for referring to an array of values of
this type (as a string).
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.cython_array_type()
'double*'
sage: ty_python.cython_array_type()
'PyObject**'
sage: ty_mpfr.cython_array_type()
'mpfr_t*'
"""
return self.c_ptr_type()
def needs_cython_init_clear(self):
r"""
Says whether values/arrays of this type need to be initialized
before use and cleared before the underlying memory is freed.
(We could remove this method, always call .cython_init() to
generate initialization code, and just let .cython_init()
generate empty code if no initialization is required; that would
generate empty loops, which are ugly and potentially might not
be optimized away.)
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.needs_cython_init_clear()
False
sage: ty_mpfr.needs_cython_init_clear()
True
sage: ty_python.needs_cython_init_clear()
True
"""
return False
def c_decl_type(self):
r"""
Give the C type for a single value of this type (as a string).
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.c_decl_type()
'double'
sage: ty_python.c_decl_type()
'PyObject*'
sage: ty_mpfr.c_decl_type()
'mpfr_t'
"""
raise NotImplementedError
def c_ptr_type(self):
r"""
Give the C type for a pointer to this type (as a reference to
either a single value or an array) (as a string).
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.c_ptr_type()
'double*'
sage: ty_python.c_ptr_type()
'PyObject**'
sage: ty_mpfr.c_ptr_type()
'mpfr_t*'
"""
return self.c_decl_type() + '*'
def c_reference_type(self):
r"""
Give the C type which should be used for passing a reference
to a single value in a call. This is used as the type for the
return value.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.c_reference_type()
'double*'
sage: ty_python.c_reference_type()
'PyObject**'
"""
return self.c_ptr_type()
def c_local_type(self):
r"""
Give the C type used for a value of this type inside an
instruction. For assignable/cheap_copy types, this is the
same as c_decl_type; for auto-reference types, this is the
pointer type.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.c_local_type()
'double'
sage: ty_python.c_local_type()
'PyObject*'
sage: ty_mpfr.c_local_type()
'mpfr_ptr'
"""
raise NotImplementedError
def assign_c_from_py(self, c, py):
r"""
Given a Cython variable/array reference/etc. of this storage type,
and a Python expression, generate code to assign to the Cython
variable from the Python expression.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.assign_c_from_py('foo', 'bar')
'foo = bar'
sage: ty_python.assign_c_from_py('foo[i]', 'bar[j]')
'foo[i] = <PyObject *>bar[j]; Py_INCREF(foo[i])'
sage: ty_mpfr.assign_c_from_py('foo', 'bar')
'rn = self.domain(bar)\nmpfr_set(foo, rn.value, MPFR_RNDN)'
"""
return je("{{ c }} = {{ py }}", c=c, py=py)
def declare_chunk_class_members(self, name):
r"""
Return a string giving the declarations of the class members
in a wrapper class for a memory chunk with this storage type
and the given name.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpfr.declare_chunk_class_members('args')
' cdef int _n_args\n cdef mpfr_t* _args\n'
"""
return je(ri(0,
"""
{# XXX Variables here (and everywhere, really) should actually be Py_ssize_t #}
cdef int _n_{{ name }}
cdef {{ myself.cython_array_type() }} _{{ name }}
"""), myself=self, name=name)
def alloc_chunk_data(self, name, len):
r"""
Return a string allocating the memory for the class members for
a memory chunk with this storage type and the given name.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: print(ty_mpfr.alloc_chunk_data('args', 'MY_LENGTH'))
self._n_args = MY_LENGTH
self._args = <mpfr_t*>check_allocarray(self._n_args, sizeof(mpfr_t))
for i in range(MY_LENGTH):
mpfr_init2(self._args[i], self.domain.prec())
<BLANKLINE>
"""
return je(ri(0,
"""
self._n_{{ name }} = {{ len }}
self._{{ name }} = <{{ myself.c_ptr_type() }}>check_allocarray(self._n_{{ name }}, sizeof({{ myself.c_decl_type() }}))
{% if myself.needs_cython_init_clear() %}
for i in range({{ len }}):
{{ myself.cython_init('self._%s[i]' % name) }}
{% endif %}
"""), myself=self, name=name, len=len)
def dealloc_chunk_data(self, name):
r"""
Return a string to be put in the __dealloc__ method of a
wrapper class using a memory chunk with this storage type, to
deallocate the corresponding class members.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: print(ty_double.dealloc_chunk_data('args'))
if self._args:
sig_free(self._args)
<BLANKLINE>
sage: print(ty_mpfr.dealloc_chunk_data('constants'))
if self._constants:
for i in range(self._n_constants):
mpfr_clear(self._constants[i])
sig_free(self._constants)
<BLANKLINE>
"""
return je(ri(0, """
if self._{{ name }}:
{% if myself.needs_cython_init_clear() %}
for i in range(self._n_{{ name }}):
{{ myself.cython_clear('self._%s[i]' % name) }}
{% endif %}
sig_free(self._{{ name }})
"""), myself=self, name=name)
class StorageTypeAssignable(StorageType):
r"""
StorageTypeAssignable is a subtype of StorageType that deals with
types with cheap copies, like primitive types and PyObject*.
"""
def __init__(self, ty):
r"""
Initializes the property ``type`` (the C/Cython name for this type),
as well as the properties described in the documentation for
StorageType.__init__.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.class_member_declarations
''
sage: ty_double.class_member_initializations
''
sage: ty_double.local_declarations
''
sage: ty_double.type
'double'
sage: ty_python.type
'PyObject*'
"""
StorageType.__init__(self)
self.type = ty
def cheap_copies(self):
r"""
Returns True or False, depending on whether this StorageType
supports cheap copies -- whether it is cheap to copy values of
this type from one location to another. (See StorageType.cheap_copies
for more on this property.)
Since having cheap copies is essentially the definition of
StorageTypeAssignable, this always returns True.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.cheap_copies()
True
sage: ty_python.cheap_copies()
True
"""
return True
def c_decl_type(self):
r"""
Give the C type for a single value of this type (as a string).
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.c_decl_type()
'double'
sage: ty_python.c_decl_type()
'PyObject*'
"""
return self.type
def c_local_type(self):
r"""
Give the C type used for a value of this type inside an
instruction. For assignable/cheap_copy types, this is the
same as c_decl_type; for auto-reference types, this is the
pointer type.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_double.c_local_type()
'double'
sage: ty_python.c_local_type()
'PyObject*'
"""
return self.type
class StorageTypeSimple(StorageTypeAssignable):
r"""
StorageTypeSimple is a subtype of StorageTypeAssignable that deals
with non-reference-counted types with cheap copies, like primitive
types. As of yet, it has no functionality differences from
StorageTypeAssignable.
"""
pass
ty_int = StorageTypeSimple('int')
ty_double = StorageTypeSimple('double')
class StorageTypeDoubleComplex(StorageTypeSimple):
r"""
This is specific to the complex double type. It behaves exactly
like a StorageTypeSimple in C, but needs a little help to do
conversions in Cython.
This uses functions defined in CDFInterpreter, and is for use in
that context.
"""
def assign_c_from_py(self, c, py):
"""
sage: from sage_setup.autogen.interpreters import ty_double_complex
sage: ty_double_complex.assign_c_from_py('z_c', 'z_py')
'z_c = CDE_to_dz(z_py)'
"""
return je("{{ c }} = CDE_to_dz({{ py }})", c=c, py=py)
ty_double_complex = StorageTypeDoubleComplex('double_complex')
class StorageTypePython(StorageTypeAssignable):
r"""
StorageTypePython is a subtype of StorageTypeAssignable that deals
with Python objects.
Just allocating an array full of PyObject* leads to problems,
because the Python garbage collector must be able to get to every
Python object, and it wouldn't know how to get to these arrays.
So we allocate the array as a Python list, but then we immediately
pull the ob_item out of it and deal only with that from then on.
We often leave these lists with NULL entries. This is safe for
the garbage collector and the deallocator, which is all we care
about; but it would be unsafe to provide Python-level access to
these lists.
There is one special thing about StorageTypePython: memory that is
used by the interpreter as scratch space (for example, the stack)
must be cleared after each call (so we don't hold on to
potentially-large objects and waste memory). Since we have to do
this anyway, the interpreter gains a tiny bit of speed by assuming
that the scratch space is cleared on entry; for example, when
pushing a value onto the stack, it doesn't bother to XDECREF the
previous value because it's always NULL.
"""
def __init__(self):
r"""
Initializes the properties described in the documentation
for StorageTypeAssignable.__init__. The type is always
'PyObject*'.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_python.class_member_declarations
''
sage: ty_python.class_member_initializations
''
sage: ty_python.local_declarations
''
sage: ty_python.type
'PyObject*'
"""
super(StorageTypePython, self).__init__('PyObject*')
def python_refcounted(self):
r"""
Says whether this storage type is a Python type, so we need to
use INCREF/DECREF.
Returns True.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_python.python_refcounted()
True
"""
return True
def cython_decl_type(self):
r"""
Give the Cython type for a single value of this type (as a string).
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_python.cython_decl_type()
'object'
"""
return 'object'
def declare_chunk_class_members(self, name):
r"""
Return a string giving the declarations of the class members
in a wrapper class for a memory chunk with this storage type
and the given name.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_python.declare_chunk_class_members('args')
' cdef object _list_args\n cdef int _n_args\n cdef PyObject** _args\n'
"""
return je(ri(4,
"""
cdef object _list_{{ name }}
cdef int _n_{{ name }}
cdef {{ myself.cython_array_type() }} _{{ name }}
"""), myself=self, name=name)
def alloc_chunk_data(self, name, len):
r"""
Return a string allocating the memory for the class members for
a memory chunk with this storage type and the given name.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: print(ty_python.alloc_chunk_data('args', 'MY_LENGTH'))
self._n_args = MY_LENGTH
self._list_args = PyList_New(self._n_args)
self._args = (<PyListObject *>self._list_args).ob_item
<BLANKLINE>
"""
return je(ri(8,
"""
self._n_{{ name }} = {{ len }}
self._list_{{ name }} = PyList_New(self._n_{{ name }})
self._{{ name }} = (<PyListObject *>self._list_{{ name }}).ob_item
"""), myself=self, name=name, len=len)
def dealloc_chunk_data(self, name):
r"""
Return a string to be put in the __dealloc__ method of a
wrapper class using a memory chunk with this storage type, to
deallocate the corresponding class members.
Our array was allocated as a Python list; this means we actually
don't need to do anything to deallocate it.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_python.dealloc_chunk_data('args')
''
"""
return ''
def needs_cython_init_clear(self):
r"""
Says whether values/arrays of this type need to be initialized
before use and cleared before the underlying memory is freed.
Returns True.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_python.needs_cython_init_clear()
True
"""
return True
def assign_c_from_py(self, c, py):
r"""
Given a Cython variable/array reference/etc. of this storage type,
and a Python expression, generate code to assign to the Cython
variable from the Python expression.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_python.assign_c_from_py('foo[i]', 'bar[j]')
'foo[i] = <PyObject *>bar[j]; Py_INCREF(foo[i])'
"""
return je("""{{ c }} = <PyObject *>{{ py }}; Py_INCREF({{ c }})""",
c=c, py=py)
def cython_init(self, loc):
r"""
Generates code to initialize a variable (or array reference)
holding a PyObject*. Sets it to NULL.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_python.cython_init('foo[i]')
'foo[i] = NULL'
"""
return je("{{ loc }} = NULL", loc=loc)
def cython_clear(self, loc):
r"""
Generates code to clear a variable (or array reference) holding
a PyObject*.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_python.cython_clear('foo[i]')
'Py_CLEAR(foo[i])'
"""
return je("Py_CLEAR({{ loc }})", loc=loc)
ty_python = StorageTypePython()
class StorageTypeAutoReference(StorageType):
r"""
StorageTypeAutoReference is a subtype of StorageType that deals with
types in the style of GMP/MPIR/MPFR/MPFI/FLINT, where copies are
not cheap, functions expect arguments to be passed by reference,
and the API takes advantage of the C quirk where arrays are
automatically converted to pointers, so that arguments end up
being passed by reference.
"""
def __init__(self, decl_ty, ref_ty):
r"""
Initializes the properties decl_type and ref_type (the C type
names used when declaring variables and function parameters,
respectively), as well as the properties described in
the documentation for StorageType.__init__.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpfr.class_member_declarations
'cdef RealField_class domain\n'
sage: ty_mpfr.class_member_initializations
"self.domain = args['domain']\n"
sage: ty_mpfr.local_declarations
'cdef RealNumber rn\n'
sage: ty_mpfr.decl_type
'mpfr_t'
sage: ty_mpfr.ref_type
'mpfr_ptr'
"""
super(StorageTypeAutoReference, self).__init__()
self.decl_type = decl_ty
self.ref_type = ref_ty
def c_decl_type(self):
r"""
Give the C type for a single value of this type (as a string).
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpfr.c_decl_type()
'mpfr_t'
"""
return self.decl_type
def c_local_type(self):
r"""
Give the C type used for a value of this type inside an
instruction. For assignable/cheap_copy types, this is the
same as c_decl_type; for auto-reference types, this is the
pointer type.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpfr.c_local_type()
'mpfr_ptr'
"""
return self.ref_type
def c_reference_type(self):
r"""
Give the C type which should be used for passing a reference
to a single value in a call. This is used as the type for the
return value.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpfr.c_reference_type()
'mpfr_t'
"""
return self.decl_type
def needs_cython_init_clear(self):
r"""
Says whether values/arrays of this type need to be initialized
before use and cleared before the underlying memory is freed.
All known examples of auto-reference types do need a special
initialization call, so this always returns True.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpfr.needs_cython_init_clear()
True
"""
return True
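# Illustration (added commentary, not part of the original module): for an
# auto-reference storage type, the declaration type and the local type
# differ, mirroring the C array-to-pointer decay described above. Using
# ty_mpfr (defined below) as an example:
#
#     sage: ty_mpfr.c_decl_type()    # how storage is declared
#     'mpfr_t'
#     sage: ty_mpfr.c_local_type()   # how a value is held inside an instruction
#     'mpfr_ptr'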
class StorageTypeMPFR(StorageTypeAutoReference):
r"""
StorageTypeMPFR is a subtype of StorageTypeAutoReference that deals
with MPFR's mpfr_t type.
For any given program that we're interpreting, ty_mpfr can only
refer to a single precision. An interpreter that needs to use
two precisions of mpfr_t in the same program should instantiate two
separate instances of StorageTypeMPFR. (Interpreters that need
to handle arbitrarily many precisions in the same program are not
handled at all.)
"""
def __init__(self, id=''):
r"""
Initializes the id property, as well as the properties described
in the documentation for StorageTypeAutoReference.__init__.
The id property is used if you want to have an interpreter
that handles two instances of StorageTypeMPFR (that is,
handles mpfr_t variables at two different precisions
simultaneously). It's a string that's used to generate
variable names that don't conflict. (The id system has
never actually been used, so bugs probably remain.)
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpfr.class_member_declarations
'cdef RealField_class domain\n'
sage: ty_mpfr.class_member_initializations
"self.domain = args['domain']\n"
sage: ty_mpfr.local_declarations
'cdef RealNumber rn\n'
sage: ty_mpfr.decl_type
'mpfr_t'
sage: ty_mpfr.ref_type
'mpfr_ptr'
TESTS::
sage: ty_mpfr2 = StorageTypeMPFR(id='_the_second')
sage: ty_mpfr2.class_member_declarations
'cdef RealField_class domain_the_second\n'
sage: ty_mpfr2.class_member_initializations
"self.domain_the_second = args['domain_the_second']\n"
sage: ty_mpfr2.local_declarations
'cdef RealNumber rn_the_second\n'
"""
super(StorageTypeMPFR, self).__init__('mpfr_t', 'mpfr_ptr')
self.id = id
self.class_member_declarations = "cdef RealField_class domain%s\n" % self.id
self.class_member_initializations = \
"self.domain%s = args['domain%s']\n" % (self.id, self.id)
self.local_declarations = "cdef RealNumber rn%s\n" % self.id
def cython_init(self, loc):
r"""
Generates code to initialize an mpfr_t reference (a variable, an
array reference, etc.)
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpfr.cython_init('foo[i]')
'mpfr_init2(foo[i], self.domain.prec())'
"""
return je("mpfr_init2({{ loc }}, self.domain{{ myself.id }}.prec())",
myself=self, loc=loc)
def cython_clear(self, loc):
r"""
Generates code to clear an mpfr_t reference (a variable, an
array reference, etc.)
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpfr.cython_clear('foo[i]')
'mpfr_clear(foo[i])'
"""
return 'mpfr_clear(%s)' % loc
def assign_c_from_py(self, c, py):
r"""
Given a Cython variable/array reference/etc. of this storage type,
and a Python expression, generate code to assign to the Cython
variable from the Python expression.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpfr.assign_c_from_py('foo[i]', 'bar[j]')
'rn = self.domain(bar[j])\nmpfr_set(foo[i], rn.value, MPFR_RNDN)'
"""
return je(ri(0, """
rn{{ myself.id }} = self.domain({{ py }})
mpfr_set({{ c }}, rn.value, MPFR_RNDN)"""),
myself=self, c=c, py=py)
ty_mpfr = StorageTypeMPFR()
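# Illustration (added commentary, not part of the original module): a
# generated wrapper is expected to pair every cython_init with a matching
# cython_clear; the slot name "self._constants[i]" is hypothetical.
#
#     sage: ty_mpfr.cython_init('self._constants[i]')
#     'mpfr_init2(self._constants[i], self.domain.prec())'
#     sage: ty_mpfr.cython_clear('self._constants[i]')
#     'mpfr_clear(self._constants[i])'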
class StorageTypeMPC(StorageTypeAutoReference):
r"""
StorageTypeMPC is a subtype of StorageTypeAutoReference that deals
with MPC's mpc_t type.
For any given program that we're interpreting, ty_mpc can only
refer to a single precision. An interpreter that needs to use
two precisions of mpc_t in the same program should instantiate two
separate instances of StorageTypeMPC. (Interpreters that need
to handle arbitrarily many precisions in the same program are not
handled at all.)
"""
def __init__(self, id=''):
r"""
Initializes the id property, as well as the properties described
in the documentation for StorageTypeAutoReference.__init__.
The id property is used if you want to have an interpreter
that handles two instances of StorageTypeMPC (that is,
handles mpc_t variables at two different precisions
simultaneously). It's a string that's used to generate
variable names that don't conflict. (The id system has
never actually been used, so bugs probably remain.)
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpc.class_member_declarations
'cdef object domain\ncdef ComplexNumber domain_element\n'
sage: ty_mpc.class_member_initializations
"self.domain = args['domain']\nself.domain_element = self.domain.zero()\n"
sage: ty_mpc.local_declarations
'cdef ComplexNumber cn\n'
sage: ty_mpc.decl_type
'mpc_t'
sage: ty_mpc.ref_type
'mpc_ptr'
TESTS::
sage: ty_mpfr2 = StorageTypeMPC(id='_the_second')
sage: ty_mpfr2.class_member_declarations
'cdef object domain_the_second\ncdef ComplexNumber domain_element_the_second\n'
sage: ty_mpfr2.class_member_initializations
"self.domain_the_second = args['domain_the_second']\nself.domain_element_the_second = self.domain.zero()\n"
sage: ty_mpfr2.local_declarations
'cdef ComplexNumber cn_the_second\n'
"""
StorageTypeAutoReference.__init__(self, 'mpc_t', 'mpc_ptr')
self.id = id
self.class_member_declarations = "cdef object domain%s\ncdef ComplexNumber domain_element%s\n" % (self.id, self.id)
self.class_member_initializations = \
"self.domain%s = args['domain%s']\nself.domain_element%s = self.domain.zero()\n" % (self.id, self.id, self.id)
self.local_declarations = "cdef ComplexNumber cn%s\n" % self.id
def cython_init(self, loc):
r"""
Generates code to initialize an mpc_t reference (a variable, an
array reference, etc.)
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpc.cython_init('foo[i]')
'mpc_init2(foo[i], self.domain_element._prec)'
"""
return je("mpc_init2({{ loc }}, self.domain_element{{ myself.id }}._prec)",
myself=self, loc=loc)
def cython_clear(self, loc):
r"""
Generates code to clear an mpc_t reference (a variable, an
array reference, etc.)
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpc.cython_clear('foo[i]')
'mpc_clear(foo[i])'
"""
return 'mpc_clear(%s)' % loc
def assign_c_from_py(self, c, py):
r"""
Given a Cython variable/array reference/etc. of this storage type,
and a Python expression, generate code to assign to the Cython
variable from the Python expression.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: ty_mpc.assign_c_from_py('foo[i]', 'bar[j]')
'cn = self.domain(bar[j])\nmpc_set_fr_fr(foo[i], cn.__re, cn.__im, MPC_RNDNN)'
"""
return je("""
cn{{ myself.id }} = self.domain({{ py }})
mpc_set_fr_fr({{ c }}, cn.__re, cn.__im, MPC_RNDNN)""", myself=self, c=c, py=py)
ty_mpc = StorageTypeMPC()
from .base import StackInterpreter
from ..instructions import (params_gen, instr_funcall_2args, instr_unary,
InstrSpec)
from ..memory import MemoryChunk
from ..storage import ty_python
from ..utils import je, reindent_lines as ri
class MemoryChunkPythonArguments(MemoryChunk):
r"""
A special-purpose memory chunk, for the generic Python-object based
interpreter. Rather than copy the arguments into an array allocated
in the wrapper, we use the PyTupleObject internals and pass the array
that's inside the argument tuple.
"""
def declare_class_members(self):
r"""
Return a string giving the declarations of the class members
in a wrapper class for this memory chunk.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkPythonArguments('args', ty_python)
"""
return " cdef int _n_%s\n" % self.name
def init_class_members(self):
r"""
Return a string to be put in the __init__ method of a wrapper
class using this memory chunk, to initialize the corresponding
class members.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkPythonArguments('args', ty_python)
sage: mc.init_class_members()
" count = args['args']\n self._n_args = count\n"
"""
return je(ri(8,
"""
count = args['{{ myself.name }}']
self._n_args = count
"""), myself=self)
def setup_args(self):
r"""
Handle the arguments of __call__. Nothing to do.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkPythonArguments('args', ty_python)
sage: mc.setup_args()
''
"""
return ''
def pass_argument(self):
r"""
Pass the innards of the argument tuple to the interpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkPythonArguments('args', ty_python)
sage: mc.pass_argument()
'(<PyTupleObject*>args).ob_item'
"""
return "(<PyTupleObject*>args).ob_item"
class MemoryChunkPyConstant(MemoryChunk):
r"""
A special-purpose memory chunk, for holding a single Python constant
and passing it to the interpreter as a PyObject*.
"""
def __init__(self, name):
r"""
Initialize an instance of MemoryChunkPyConstant.
Always uses the type ty_python.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkPyConstant('domain')
sage: mc.name
'domain'
sage: mc.storage_type is ty_python
True
"""
super(MemoryChunkPyConstant, self).__init__(name, ty_python)
def declare_class_members(self):
r"""
Return a string giving the declarations of the class members
in a wrapper class for this memory chunk.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkPyConstant('domain')
sage: mc.declare_class_members()
' cdef object _domain\n'
"""
return je(ri(4,
"""
cdef object _{{ myself.name }}
"""), myself=self)
def init_class_members(self):
r"""
Return a string to be put in the __init__ method of a wrapper
class using this memory chunk, to initialize the corresponding
class members.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkPyConstant('domain')
sage: mc.init_class_members()
" self._domain = args['domain']\n"
"""
return je(ri(8,
"""
self._{{ myself.name }} = args['{{ myself.name }}']
"""), myself=self)
def declare_parameter(self):
r"""
Return the string to use to declare the interpreter parameter
corresponding to this memory chunk.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkPyConstant('domain')
sage: mc.declare_parameter()
'PyObject* domain'
"""
return 'PyObject* %s' % self.name
def pass_argument(self):
r"""
Return the string to pass the argument corresponding to this
memory chunk to the interpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkPyConstant('domain')
sage: mc.pass_argument()
'<PyObject*>self._domain'
"""
return '<PyObject*>self._%s' % self.name
class PythonInterpreter(StackInterpreter):
r"""
A subclass of StackInterpreter, specifying an interpreter over
Python objects.
Let's discuss how the reference-counting works in Python-object
based interpreters.
There is a simple rule to remember: when executing the code
snippets, the input variables contain borrowed references;
you must fill in the output variables with references you own.
As an optimization, an instruction may set .handles_own_decref; in
that case, it must decref any input variables that came from the
stack. (Input variables that came from arguments/constants chunks
must NOT be decref'ed!) In addition, with .handles_own_decref, if
any of your input variables are arbitrary-count, then you must
NULL out these variables as you decref them. (Use Py_CLEAR to do
this, unless you understand the documentation of Py_CLEAR and why
it's different from Py_XDECREF followed by assigning NULL.)
Note that as a tiny optimization, the interpreter always assumes
(and ensures) that empty parts of the stack contain NULL, so
it doesn't bother to Py_XDECREF before it pushes onto the stack.
"""
name = 'py'
def __init__(self):
r"""
Initialize a PythonInterpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: interp = PythonInterpreter()
sage: interp.name
'py'
sage: interp.mc_args
{MC:args}
sage: interp.chunks
[{MC:args}, {MC:constants}, {MC:stack}, {MC:code}]
sage: instrs = dict([(ins.name, ins) for ins in interp.instr_descs])
sage: instrs['add']
add: SS->S = 'o0 = PyNumber_Add(i0, i1);'
sage: instrs['py_call']
py_call: *->S = '\nPyObject *py_args...CREF(py_args);\n'
"""
super(PythonInterpreter, self).__init__(ty_python)
# StackInterpreter.__init__ gave us a MemoryChunkArguments.
# Override with MemoryChunkPythonArguments.
self.mc_args = MemoryChunkPythonArguments('args', ty_python)
self.chunks = [self.mc_args, self.mc_constants, self.mc_stack,
self.mc_code]
self.c_header = ri(0,
"""
#define CHECK(x) (x != NULL)
""")
self.pyx_header = ri(0,
"""\
from cpython.number cimport PyNumber_TrueDivide
""")
pg = params_gen(A=self.mc_args, C=self.mc_constants, D=self.mc_code,
S=self.mc_stack)
self.pg = pg
instrs = [
InstrSpec('load_arg', pg('A[D]', 'S'),
code='o0 = i0; Py_INCREF(o0);'),
InstrSpec('load_const', pg('C[D]', 'S'),
code='o0 = i0; Py_INCREF(o0);'),
InstrSpec('return', pg('S', ''),
code='return i0;',
handles_own_decref=True),
InstrSpec('py_call', pg('C[D]S@D', 'S'),
handles_own_decref=True,
code=ri(0, """
PyObject *py_args = PyTuple_New(n_i1);
if (py_args == NULL) goto error;
int i;
for (i = 0; i < n_i1; i++) {
PyObject *arg = i1[i];
PyTuple_SET_ITEM(py_args, i, arg);
i1[i] = NULL;
}
o0 = PyObject_CallObject(i0, py_args);
Py_DECREF(py_args);
"""))
]
binops = [
('add', 'PyNumber_Add'),
('sub', 'PyNumber_Subtract'),
('mul', 'PyNumber_Multiply'),
('div', 'PyNumber_TrueDivide'),
('floordiv', 'PyNumber_FloorDivide')
]
for (name, op) in binops:
instrs.append(instr_funcall_2args(name, pg('SS', 'S'), op))
instrs.append(InstrSpec('pow', pg('SS', 'S'),
code='o0 = PyNumber_Power(i0, i1, Py_None);'))
instrs.append(InstrSpec('ipow', pg('SC[D]', 'S'),
code='o0 = PyNumber_Power(i0, i1, Py_None);'))
for (name, op) in [('neg', 'PyNumber_Negative'),
('invert', 'PyNumber_Invert'),
('abs', 'PyNumber_Absolute')]:
instrs.append(instr_unary(name, pg('S', 'S'), '%s(i0)'%op))
self.instr_descs = instrs
self._set_opcodes()
# Always use ipow
self.ipow_range = True
# We don't yet support call_c for Python-object interpreters
# (the default implementation doesn't work, because of
# object vs. PyObject* confusion)
self.implement_call_c = False
from .base import StackInterpreter
from .python import MemoryChunkPyConstant
from ..instructions import (params_gen, instr_funcall_1arg_mpfr,
instr_funcall_2args_mpfr, InstrSpec)
from ..memory import MemoryChunk, MemoryChunkConstants
from ..storage import ty_mpfr, ty_python
from ..utils import je, reindent_lines as ri
class MemoryChunkRRRetval(MemoryChunk):
r"""
A special-purpose memory chunk, for dealing with the return value
of the RR-based interpreter.
"""
def declare_class_members(self):
r"""
Return a string giving the declarations of the class members
in a wrapper class for this memory chunk.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkRRRetval('retval', ty_mpfr)
sage: mc.declare_class_members()
''
"""
return ""
def declare_call_locals(self):
r"""
Return a string to put in the __call__ method of a wrapper
class using this memory chunk, to allocate local variables.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkRRRetval('retval', ty_mpfr)
sage: mc.declare_call_locals()
' cdef RealNumber retval = (self.domain)()\n'
"""
return je(ri(8,
"""
cdef RealNumber {{ myself.name }} = (self.domain)()
"""), myself=self)
def declare_parameter(self):
r"""
Return the string to use to declare the interpreter parameter
corresponding to this memory chunk.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkRRRetval('retval', ty_mpfr)
sage: mc.declare_parameter()
'mpfr_t retval'
"""
return '%s %s' % (self.storage_type.c_reference_type(), self.name)
def pass_argument(self):
r"""
Return the string to pass the argument corresponding to this
memory chunk to the interpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkRRRetval('retval', ty_mpfr)
sage: mc.pass_argument()
'retval.value'
"""
return je("""{{ myself.name }}.value""", myself=self)
def pass_call_c_argument(self):
r"""
Return the string to pass the argument corresponding to this
memory chunk to the interpreter, for use in the call_c method.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkRRRetval('retval', ty_mpfr)
sage: mc.pass_call_c_argument()
'result'
"""
return "result"
class RRInterpreter(StackInterpreter):
r"""
A subclass of StackInterpreter, specifying an interpreter over
MPFR arbitrary-precision floating-point numbers.
"""
name = 'rr'
def __init__(self):
r"""
Initialize an RRInterpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: interp = RRInterpreter()
sage: interp.name
'rr'
sage: interp.mc_py_constants
{MC:py_constants}
sage: interp.chunks
[{MC:args}, {MC:retval}, {MC:constants}, {MC:py_constants}, {MC:stack}, {MC:code}, {MC:domain}]
sage: interp.pg('A[D]', 'S')
([({MC:args}, {MC:code}, None)], [({MC:stack}, None, None)])
sage: instrs = dict([(ins.name, ins) for ins in interp.instr_descs])
sage: instrs['add']
add: SS->S = 'mpfr_add(o0, i0, i1, MPFR_RNDN);'
sage: instrs['py_call']
py_call: *->S = '\nif (!rr_py_call_h...goto error;\n}\n'
That py_call instruction is particularly interesting, and
demonstrates a useful technique to let you use Cython code
in an interpreter. Let's look more closely::
sage: print(instrs['py_call'].code)
if (!rr_py_call_helper(domain, i0, n_i1, i1, o0)) {
goto error;
}
This instruction makes use of the function ``rr_py_call_helper``,
which is declared in ``wrapper_rr.h``::
sage: print(interp.c_header)
<BLANKLINE>
#include <mpfr.h>
#include "sage/ext/interpreters/wrapper_rr.h"
<BLANKLINE>
The function ``rr_py_call_helper`` is implemented in Cython::
sage: print(interp.pyx_header)
cdef public bint rr_py_call_helper(object domain, object fn,
int n_args,
mpfr_t* args, mpfr_t retval) except 0:
py_args = []
cdef int i
cdef RealNumber rn
for i from 0 <= i < n_args:
rn = domain()
mpfr_set(rn.value, args[i], MPFR_RNDN)
py_args.append(rn)
cdef RealNumber result = domain(fn(*py_args))
mpfr_set(retval, result.value, MPFR_RNDN)
return 1
So instructions where you need to interact with Python can
call back into Cython code fairly easily.
"""
mc_retval = MemoryChunkRRRetval('retval', ty_mpfr)
super(RRInterpreter, self).__init__(ty_mpfr, mc_retval=mc_retval)
self.err_return = '0'
self.mc_py_constants = MemoryChunkConstants('py_constants', ty_python)
self.mc_domain = MemoryChunkPyConstant('domain')
self.chunks = [self.mc_args, self.mc_retval, self.mc_constants,
self.mc_py_constants,
self.mc_stack, self.mc_code, self.mc_domain]
pg = params_gen(A=self.mc_args, C=self.mc_constants, D=self.mc_code,
S=self.mc_stack,
P=self.mc_py_constants)
self.pg = pg
self.c_header = ri(0,
'''
#include <mpfr.h>
#include "sage/ext/interpreters/wrapper_rr.h"
''')
self.pxd_header = ri(0,
"""
from sage.rings.real_mpfr cimport RealField_class, RealNumber
from sage.libs.mpfr cimport *
""")
self.pyx_header = ri(0,
"""\
cdef public bint rr_py_call_helper(object domain, object fn,
int n_args,
mpfr_t* args, mpfr_t retval) except 0:
py_args = []
cdef int i
cdef RealNumber rn
for i from 0 <= i < n_args:
rn = domain()
mpfr_set(rn.value, args[i], MPFR_RNDN)
py_args.append(rn)
cdef RealNumber result = domain(fn(*py_args))
mpfr_set(retval, result.value, MPFR_RNDN)
return 1
""")
instrs = [
InstrSpec('load_arg', pg('A[D]', 'S'),
code='mpfr_set(o0, i0, MPFR_RNDN);'),
InstrSpec('load_const', pg('C[D]', 'S'),
code='mpfr_set(o0, i0, MPFR_RNDN);'),
InstrSpec('return', pg('S', ''),
code='mpfr_set(retval, i0, MPFR_RNDN);\nreturn 1;\n'),
InstrSpec('py_call', pg('P[D]S@D', 'S'),
uses_error_handler=True,
code=ri(0,
"""
if (!rr_py_call_helper(domain, i0, n_i1, i1, o0)) {
goto error;
}
"""))
]
for (name, op) in [('add', 'mpfr_add'), ('sub', 'mpfr_sub'),
('mul', 'mpfr_mul'), ('div', 'mpfr_div'),
('pow', 'mpfr_pow')]:
instrs.append(instr_funcall_2args_mpfr(name, pg('SS', 'S'), op))
instrs.append(instr_funcall_2args_mpfr('ipow', pg('SD', 'S'), 'mpfr_pow_si'))
for name in ['neg', 'abs',
'log', 'log2', 'log10',
'exp', 'exp2', 'exp10',
'cos', 'sin', 'tan',
'sec', 'csc', 'cot',
'acos', 'asin', 'atan',
'cosh', 'sinh', 'tanh',
'sech', 'csch', 'coth',
'acosh', 'asinh', 'atanh',
'log1p', 'expm1', 'eint',
'gamma', 'lngamma',
'zeta', 'erf', 'erfc',
'j0', 'j1', 'y0', 'y1']:
instrs.append(instr_funcall_1arg_mpfr(name, pg('S', 'S'), 'mpfr_' + name))
# mpfr_ui_div constructs a temporary mpfr_t and then calls mpfr_div;
# it would probably be (slightly) faster to use a permanent copy
# of "one" (on the other hand, the constructed temporary copy is
# on the stack, so it's very likely to be in the cache).
instrs.append(InstrSpec('invert', pg('S', 'S'),
code='mpfr_ui_div(o0, 1, i0, MPFR_RNDN);'))
self.instr_descs = instrs
self._set_opcodes()
# Supported for exponents that fit in a long, so we could use
# a much wider range on a 64-bit machine. On the other hand,
# it's easier to write the code this way, and constant integer
# exponents outside this range probably aren't very common anyway.
self.ipow_range = (int(-2**31), int(2**31-1))
from __future__ import print_function, absolute_import
from .base import StackInterpreter
from ..instructions import (params_gen, instr_infix, instr_funcall_2args,
instr_unary, InstrSpec)
from ..memory import MemoryChunkConstants
from ..storage import ty_double_complex, ty_python
from ..utils import reindent_lines as ri
class CDFInterpreter(StackInterpreter):
r"""
A subclass of StackInterpreter, specifying an interpreter over
complex machine-floating-point values (C doubles).
"""
name = 'cdf'
def __init__(self):
r"""
Initialize a CDFInterpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: interp = CDFInterpreter()
sage: interp.name
'cdf'
sage: interp.mc_py_constants
{MC:py_constants}
sage: interp.chunks
[{MC:args}, {MC:constants}, {MC:py_constants}, {MC:stack}, {MC:code}]
sage: interp.pg('A[D]', 'S')
([({MC:args}, {MC:code}, None)], [({MC:stack}, None, None)])
sage: instrs = dict([(ins.name, ins) for ins in interp.instr_descs])
sage: instrs['add']
add: SS->S = 'o0 = i0 + i1;'
sage: instrs['sin']
sin: S->S = 'o0 = csin(i0);'
sage: instrs['py_call']
py_call: *->S = '\nif (!cdf_py_call_...goto error;\n}\n'
A test of integer powers::
sage: f(x) = sum(x^k for k in [-20..20])
sage: f(CDF(1+2j)) # rel tol 4e-16
-10391778.999999996 + 3349659.499999962*I
sage: ff = fast_callable(f, CDF)
sage: ff(1 + 2j) # rel tol 1e-14
-10391779.000000004 + 3349659.49999997*I
sage: ff.python_calls()
[]
sage: f(x) = sum(x^k for k in [0..5])
sage: ff = fast_callable(f, CDF)
sage: ff(2)
63.0
sage: ff(2j)
13.0 + 26.0*I
"""
super(CDFInterpreter, self).__init__(ty_double_complex)
self.mc_py_constants = MemoryChunkConstants('py_constants', ty_python)
# See comment for RDFInterpreter
self.err_return = '-1094648119105371'
self.adjust_retval = "dz_to_CDE"
self.chunks = [self.mc_args, self.mc_constants, self.mc_py_constants,
self.mc_stack,
self.mc_code]
pg = params_gen(A=self.mc_args, C=self.mc_constants, D=self.mc_code,
S=self.mc_stack, P=self.mc_py_constants)
self.pg = pg
self.c_header = ri(0,"""
#include <stdlib.h>
#include <complex.h>
#include "sage/ext/interpreters/wrapper_cdf.h"
/* On Solaris, we need to define _Imaginary_I when compiling with GCC,
* otherwise the constant I doesn't work. The definition below is based
* on glibc. */
#ifdef __GNUC__
#undef _Imaginary_I
#define _Imaginary_I (__extension__ 1.0iF)
#endif
typedef double complex double_complex;
static inline double complex csquareX(double complex z) {
double complex res;
__real__(res) = __real__(z) * __real__(z) - __imag__(z) * __imag__(z);
__imag__(res) = 2 * __real__(z) * __imag__(z);
return res;
}
static inline double complex cpow_int(double complex z, int exp) {
if (exp < 0) return 1/cpow_int(z, -exp);
switch (exp) {
case 0: return 1;
case 1: return z;
case 2: return csquareX(z);
case 3: return csquareX(z) * z;
case 4:
case 5:
case 6:
case 7:
case 8:
{
double complex z2 = csquareX(z);
double complex z4 = csquareX(z2);
if (exp == 4) return z4;
if (exp == 5) return z4 * z;
if (exp == 6) return z4 * z2;
if (exp == 7) return z4 * z2 * z;
if (exp == 8) return z4 * z4;
}
}
if (cimag(z) == 0) return pow(creal(z), exp);
if (creal(z) == 0) {
double r = pow(cimag(z), exp);
switch (exp % 4) {
case 0:
return r;
case 1:
return r * I;
case 2:
return -r;
default /* case 3 */:
return -r * I;
}
}
return cpow(z, exp);
}
""")
self.pxd_header = ri(0, """
# This is to work around a header incompatibility with PARI using
# "I" as variable conflicting with the complex "I".
# If we cimport pari earlier, we avoid this problem.
cimport cypari2.types
# We need the type double_complex to work around
# http://trac.cython.org/ticket/869
# so this is a bit hackish.
cdef extern from "complex.h":
ctypedef double double_complex "double complex"
""")
self.pyx_header = ri(0, """
from sage.libs.gsl.complex cimport *
from sage.rings.complex_double cimport ComplexDoubleElement
import sage.rings.complex_double
cdef object CDF = sage.rings.complex_double.CDF
cdef extern from "complex.h":
cdef double creal(double_complex)
cdef double cimag(double_complex)
cdef double_complex _Complex_I
cdef inline double_complex CDE_to_dz(zz):
cdef ComplexDoubleElement z = <ComplexDoubleElement>(zz if isinstance(zz, ComplexDoubleElement) else CDF(zz))
return GSL_REAL(z._complex) + _Complex_I * GSL_IMAG(z._complex)
cdef inline ComplexDoubleElement dz_to_CDE(double_complex dz):
cdef ComplexDoubleElement z = <ComplexDoubleElement>ComplexDoubleElement.__new__(ComplexDoubleElement)
GSL_SET_COMPLEX(&z._complex, creal(dz), cimag(dz))
return z
cdef public bint cdf_py_call_helper(object fn,
int n_args,
double_complex* args, double_complex* retval) except 0:
py_args = []
cdef int i
for i from 0 <= i < n_args:
py_args.append(dz_to_CDE(args[i]))
py_result = fn(*py_args)
cdef ComplexDoubleElement result
if isinstance(py_result, ComplexDoubleElement):
result = <ComplexDoubleElement>py_result
else:
result = CDF(py_result)
retval[0] = CDE_to_dz(result)
return 1
"""[1:])
instrs = [
InstrSpec('load_arg', pg('A[D]', 'S'),
code='o0 = i0;'),
InstrSpec('load_const', pg('C[D]', 'S'),
code='o0 = i0;'),
InstrSpec('return', pg('S', ''),
code='return i0;'),
InstrSpec('py_call', pg('P[D]S@D', 'S'),
uses_error_handler=True,
code="""
if (!cdf_py_call_helper(i0, n_i1, i1, &o0)) {
goto error;
}
""")
]
for (name, op) in [('add', '+'), ('sub', '-'),
('mul', '*'), ('div', '/'),
('truediv', '/')]:
instrs.append(instr_infix(name, pg('SS', 'S'), op))
instrs.append(instr_funcall_2args('pow', pg('SS', 'S'), 'cpow'))
instrs.append(instr_funcall_2args('ipow', pg('SD', 'S'), 'cpow_int'))
for (name, op) in [('neg', '-i0'), ('invert', '1/i0'),
('abs', 'cabs(i0)')]:
instrs.append(instr_unary(name, pg('S', 'S'), op))
for name in ['sqrt', 'sin', 'cos', 'tan',
'asin', 'acos', 'atan', 'sinh', 'cosh', 'tanh',
'asinh', 'acosh', 'atanh', 'exp', 'log']:
instrs.append(instr_unary(name, pg('S', 'S'), "c%s(i0)" % name))
self.instr_descs = instrs
self._set_opcodes()
# supported for exponents that fit in an int
self.ipow_range = (int(-2**31), int(2**31-1))
from __future__ import print_function, absolute_import
from .base import StackInterpreter
from .python import (MemoryChunkPyConstant, MemoryChunkPythonArguments,
PythonInterpreter)
from ..storage import ty_python
from ..utils import reindent_lines as ri
class MemoryChunkElementArguments(MemoryChunkPythonArguments):
r"""
A special-purpose memory chunk, for the Python-object based
interpreters that want to process (and perhaps modify) the data.
We allocate a new list on every call to hold the modified arguments.
That's not strictly necessary -- we could pre-allocate a list and map into
it -- but this lets us use simpler code for a very-likely-negligible
efficiency cost. (The Element interpreter is going to allocate lots of
objects as it runs, anyway.)
"""
def setup_args(self):
r"""
Handle the arguments of __call__. Note: This hardcodes
"self._domain".
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkElementArguments('args', ty_python)
sage: mc.setup_args()
'mapped_args = [self._domain(a) for a in args]\n'
"""
return "mapped_args = [self._domain(a) for a in args]\n"
def pass_argument(self):
r"""
Pass the innards of the argument tuple to the interpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkElementArguments('args', ty_python)
sage: mc.pass_argument()
'(<PyListObject*>mapped_args).ob_item'
"""
return "(<PyListObject*>mapped_args).ob_item"
class ElementInterpreter(PythonInterpreter):
r"""
A subclass of PythonInterpreter, specifying an interpreter over
Sage elements with a particular parent.
This is very similar to the PythonInterpreter, but after every
instruction, the result is checked to make sure it is actually an
element with the correct parent; if not, we attempt to convert it.
Uses the same instructions (with the same implementation) as
PythonInterpreter.
"""
name = 'el'
def __init__(self):
r"""
Initialize an ElementInterpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: interp = ElementInterpreter()
sage: interp.name
'el'
sage: interp.mc_args
{MC:args}
sage: interp.chunks
[{MC:args}, {MC:constants}, {MC:stack}, {MC:domain}, {MC:code}]
sage: instrs = dict([(ins.name, ins) for ins in interp.instr_descs])
sage: instrs['add']
add: SS->S = 'o0 = PyNumber_Add(i0, i1);'
sage: instrs['py_call']
py_call: *->S = '\nPyObject *py_args...CREF(py_args);\n'
"""
super(ElementInterpreter, self).__init__()
# PythonInterpreter.__init__ gave us a MemoryChunkPythonArguments.
# Override with MemoryChunkElementArguments.
self.mc_args = MemoryChunkElementArguments('args', ty_python)
self.mc_domain_info = MemoryChunkPyConstant('domain')
self.chunks = [self.mc_args, self.mc_constants, self.mc_stack,
self.mc_domain_info, self.mc_code]
self.c_header = ri(0, """
#include "sage/ext/interpreters/wrapper_el.h"
#define CHECK(x) do_check(&(x), domain)
static inline int do_check(PyObject **x, PyObject *domain) {
if (*x == NULL) return 0;
PyObject *new_x = el_check_element(*x, domain);
Py_DECREF(*x);
*x = new_x;
if (*x == NULL) return 0;
return 1;
}
""")
self.pyx_header += ri(0, """
from sage.structure.element cimport Element
cdef public object el_check_element(object v, parent):
cdef Element v_el
if isinstance(v, Element):
v_el = <Element>v
if v_el._parent is parent:
return v_el
return parent(v)
"""[1:]) | /sage-setup-10.0b0.tar.gz/sage-setup-10.0b0/sage_setup/autogen/interpreters/specs/element.py | 0.709523 | 0.381738 | element.py | pypi |
from .base import StackInterpreter
from .python import MemoryChunkPyConstant
from ..instructions import (params_gen, instr_funcall_1arg_mpc,
instr_funcall_2args_mpc, InstrSpec)
from ..memory import MemoryChunk, MemoryChunkConstants
from ..storage import ty_mpc, ty_python
from ..utils import je, reindent_lines as ri
class MemoryChunkCCRetval(MemoryChunk):
r"""
A special-purpose memory chunk, for dealing with the return value
of the CC-based interpreter.
"""
def declare_class_members(self):
r"""
Return a string giving the declarations of the class members
in a wrapper class for this memory chunk.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkCCRetval('retval', ty_mpc)
sage: mc.declare_class_members()
''
"""
return ""
def declare_call_locals(self):
r"""
Return a string to put in the __call__ method of a wrapper
class using this memory chunk, to allocate local variables.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkCCRetval('retval', ty_mpc)
sage: mc.declare_call_locals()
' cdef ComplexNumber retval = (self.domain_element._new())\n'
"""
return je(ri(8,
"""
cdef ComplexNumber {{ myself.name }} = (self.domain_element._new())
"""), myself=self)
def declare_parameter(self):
r"""
Return the string to use to declare the interpreter parameter
corresponding to this memory chunk.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkCCRetval('retval', ty_mpc)
sage: mc.declare_parameter()
'mpc_t retval'
"""
return '%s %s' % (self.storage_type.c_reference_type(), self.name)
def pass_argument(self):
r"""
Return the string to pass the argument corresponding to this
memory chunk to the interpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkCCRetval('retval', ty_mpc)
sage: mc.pass_argument()
'(<mpc_t>(retval.__re))'
"""
return je("""(<mpc_t>({{ myself.name }}.__re))""", myself=self)
def pass_call_c_argument(self):
r"""
Return the string to pass the argument corresponding to this
memory chunk to the interpreter, for use in the call_c method.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: mc = MemoryChunkCCRetval('retval', ty_mpc)
sage: mc.pass_call_c_argument()
'result'
"""
return "result"
class CCInterpreter(StackInterpreter):
r"""
    A subclass of StackInterpreter, specifying an interpreter over
    MPC arbitrary-precision complex numbers.
"""
name = 'cc'
def __init__(self):
r"""
Initialize a CCInterpreter.
EXAMPLES::
sage: from sage_setup.autogen.interpreters import *
sage: interp = CCInterpreter()
sage: interp.name
'cc'
sage: interp.mc_py_constants
{MC:py_constants}
sage: interp.chunks
[{MC:args}, {MC:retval}, {MC:constants}, {MC:py_constants}, {MC:stack}, {MC:code}, {MC:domain}]
sage: interp.pg('A[D]', 'S')
([({MC:args}, {MC:code}, None)], [({MC:stack}, None, None)])
sage: instrs = dict([(ins.name, ins) for ins in interp.instr_descs])
sage: instrs['add']
add: SS->S = 'mpc_add(o0, i0, i1, MPC_RNDNN);'
sage: instrs['py_call']
py_call: *->S = '\n if (!cc_py_call...goto error;\n}\n'
That py_call instruction is particularly interesting, and
demonstrates a useful technique to let you use Cython code
in an interpreter. Let's look more closely::
sage: print(instrs['py_call'].code)
<BLANKLINE>
if (!cc_py_call_helper(domain, i0, n_i1, i1, o0)) {
goto error;
}
<BLANKLINE>
This instruction makes use of the function cc_py_call_helper,
which is declared::
sage: print(interp.c_header)
<BLANKLINE>
#include <mpc.h>
#include "sage/ext/interpreters/wrapper_cc.h"
<BLANKLINE>
So instructions where you need to interact with Python can
call back into Cython code fairly easily.
"""
mc_retval = MemoryChunkCCRetval('retval', ty_mpc)
super(CCInterpreter, self).__init__(ty_mpc, mc_retval=mc_retval)
self.err_return = '0'
self.mc_py_constants = MemoryChunkConstants('py_constants', ty_python)
self.mc_domain = MemoryChunkPyConstant('domain')
self.chunks = [self.mc_args, self.mc_retval, self.mc_constants,
self.mc_py_constants,
self.mc_stack, self.mc_code, self.mc_domain]
pg = params_gen(A=self.mc_args, C=self.mc_constants, D=self.mc_code,
S=self.mc_stack,
P=self.mc_py_constants)
self.pg = pg
self.c_header = ri(0,
'''
#include <mpc.h>
#include "sage/ext/interpreters/wrapper_cc.h"
''')
self.pxd_header = ri(0,
"""
from sage.rings.real_mpfr cimport RealNumber
from sage.libs.mpfr cimport *
from sage.rings.complex_mpfr cimport ComplexNumber
from sage.libs.mpc cimport *
""")
self.pyx_header = ri(0,
"""\
# distutils: libraries = mpfr mpc gmp
cdef public bint cc_py_call_helper(object domain, object fn,
int n_args,
mpc_t* args, mpc_t retval) except 0:
py_args = []
cdef int i
cdef ComplexNumber ZERO=domain.zero()
cdef ComplexNumber cn
for i from 0 <= i < n_args:
cn = ZERO._new()
mpfr_set(cn.__re, mpc_realref(args[i]), MPFR_RNDN)
mpfr_set(cn.__im, mpc_imagref(args[i]), MPFR_RNDN)
py_args.append(cn)
cdef ComplexNumber result = domain(fn(*py_args))
mpc_set_fr_fr(retval, result.__re,result.__im, MPC_RNDNN)
return 1
""")
instrs = [
InstrSpec('load_arg', pg('A[D]', 'S'),
code='mpc_set(o0, i0, MPC_RNDNN);'),
InstrSpec('load_const', pg('C[D]', 'S'),
code='mpc_set(o0, i0, MPC_RNDNN);'),
InstrSpec('return', pg('S', ''),
code='mpc_set(retval, i0, MPC_RNDNN);\nreturn 1;\n'),
InstrSpec('py_call', pg('P[D]S@D', 'S'),
uses_error_handler=True,
code="""
if (!cc_py_call_helper(domain, i0, n_i1, i1, o0)) {
goto error;
}
""")
]
for (name, op) in [('add', 'mpc_add'), ('sub', 'mpc_sub'),
('mul', 'mpc_mul'), ('div', 'mpc_div'),
('pow', 'mpc_pow')]:
instrs.append(instr_funcall_2args_mpc(name, pg('SS', 'S'), op))
instrs.append(instr_funcall_2args_mpc('ipow', pg('SD', 'S'), 'mpc_pow_si'))
for name in ['neg',
'log', 'log10',
'exp',
'cos', 'sin', 'tan',
'acos', 'asin', 'atan',
'cosh', 'sinh', 'tanh',
'acosh', 'asinh', 'atanh']:
instrs.append(instr_funcall_1arg_mpc(name, pg('S', 'S'), 'mpc_' + name))
# mpc_ui_div constructs a temporary mpc_t and then calls mpc_div;
# it would probably be (slightly) faster to use a permanent copy
# of "one" (on the other hand, the constructed temporary copy is
# on the stack, so it's very likely to be in the cache).
instrs.append(InstrSpec('invert', pg('S', 'S'),
code='mpc_ui_div(o0, 1, i0, MPC_RNDNN);'))
self.instr_descs = instrs
self._set_opcodes()
        # mpc_pow_si supports any exponent that fits in a C long, so we
        # could use a much wider range on a 64-bit machine. On the other
        # hand, it's easier to write the code this way, and constant
        # integer exponents outside this range probably aren't very
        # common anyway.
self.ipow_range = (int(-2**31), int(2**31-1)) | /sage-setup-10.0b0.tar.gz/sage-setup-10.0b0/sage_setup/autogen/interpreters/specs/cc.py | 0.691081 | 0.245752 | cc.py | pypi |
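The `ipow_range` above bounds which constant integer exponents get compiled to the fast `ipow` instruction (backed by `mpc_pow_si`, which takes a C `long`); anything outside the range falls back to the general `pow` path. A hedged sketch of that dispatch decision, with an assumed function name and return values that are illustrative rather than the generator's actual API:

```python
# Illustrative sketch (assumed names, not the generator's API): deciding
# whether a constant integer exponent can use the fast 'ipow' instruction,
# which calls mpc_pow_si and therefore needs the exponent to fit in a C long.
IPOW_RANGE = (-2**31, 2**31 - 1)

def choose_pow_instruction(exponent):
    lo, hi = IPOW_RANGE
    if isinstance(exponent, int) and lo <= exponent <= hi:
        return "ipow"  # exponent passed directly as a C long
    return "pow"       # general mpc_pow with an mpc_t exponent

assert choose_pow_instruction(10) == "ipow"
assert choose_pow_instruction(2**40) == "pow"
```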
from datetime import datetime
from typing import (
Any,
Callable,
Dict,
List,
Optional,
Sequence,
Tuple,
TYPE_CHECKING,
Union,
)
from flask import Flask
from flask_caching import Cache
from typing_extensions import Literal, TypedDict
from werkzeug.wrappers import Response
if TYPE_CHECKING:
from superset.utils.core import GenericDataType
class LegacyMetric(TypedDict):
label: Optional[str]
class AdhocMetricColumn(TypedDict, total=False):
column_name: Optional[str]
description: Optional[str]
expression: Optional[str]
filterable: bool
groupby: bool
id: int
is_dttm: bool
python_date_format: Optional[str]
type: str
type_generic: "GenericDataType"
verbose_name: Optional[str]
class AdhocMetric(TypedDict, total=False):
aggregate: str
column: Optional[AdhocMetricColumn]
expressionType: Literal["SIMPLE", "SQL"]
hasCustomLabel: Optional[bool]
label: Optional[str]
sqlExpression: Optional[str]
class AdhocColumn(TypedDict, total=False):
hasCustomLabel: Optional[bool]
label: Optional[str]
sqlExpression: Optional[str]
CacheConfig = Union[Callable[[Flask], Cache], Dict[str, Any]]
DbapiDescriptionRow = Tuple[
str, str, Optional[str], Optional[str], Optional[int], Optional[int], bool
]
DbapiDescription = Union[List[DbapiDescriptionRow], Tuple[DbapiDescriptionRow, ...]]
DbapiResult = Sequence[Union[List[Any], Tuple[Any, ...]]]
FilterValue = Union[bool, datetime, float, int, str]
FilterValues = Union[FilterValue, List[FilterValue], Tuple[FilterValue]]
FormData = Dict[str, Any]
Granularity = Union[str, Dict[str, Union[str, float]]]
Column = Union[AdhocColumn, str]
Metric = Union[AdhocMetric, str]
OrderBy = Tuple[Metric, bool]
QueryObjectDict = Dict[str, Any]
VizData = Optional[Union[List[Any], Dict[Any, Any]]]
VizPayload = Dict[str, Any]
# Flask response.
Base = Union[bytes, str]
Status = Union[int, str]
Headers = Dict[str, Any]
FlaskResponse = Union[
Response,
Base,
Tuple[Base, Status],
Tuple[Base, Status, Headers],
Tuple[Response, Status],
] | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/typing.py | 0.850033 | 0.291193 | typing.py | pypi |
import logging
from typing import Optional
from colorama import Fore, Style
logger = logging.getLogger(__name__)
class BaseStatsLogger:
"""Base class for logging realtime events"""
def __init__(self, prefix: str = "superset") -> None:
self.prefix = prefix
def key(self, key: str) -> str:
if self.prefix:
return self.prefix + key
return key
def incr(self, key: str) -> None:
"""Increment a counter"""
raise NotImplementedError()
def decr(self, key: str) -> None:
"""Decrement a counter"""
raise NotImplementedError()
def timing(self, key: str, value: float) -> None:
raise NotImplementedError()
def gauge(self, key: str, value: float) -> None:
"""Setup a gauge"""
raise NotImplementedError()
class DummyStatsLogger(BaseStatsLogger):
def incr(self, key: str) -> None:
logger.debug(Fore.CYAN + "[stats_logger] (incr) " + key + Style.RESET_ALL)
def decr(self, key: str) -> None:
logger.debug((Fore.CYAN + "[stats_logger] (decr) " + key + Style.RESET_ALL))
def timing(self, key: str, value: float) -> None:
logger.debug(
(Fore.CYAN + f"[stats_logger] (timing) {key} | {value} " + Style.RESET_ALL)
)
def gauge(self, key: str, value: float) -> None:
logger.debug(
(
Fore.CYAN
+ "[stats_logger] (gauge) "
+ f"{key}"
+ f"{value}"
+ Style.RESET_ALL
)
)
try:
from statsd import StatsClient
class StatsdStatsLogger(BaseStatsLogger):
def __init__( # pylint: disable=super-init-not-called
self,
host: str = "localhost",
port: int = 8125,
prefix: str = "superset",
statsd_client: Optional[StatsClient] = None,
) -> None:
"""
Initializes from either params or a supplied, pre-constructed statsd client.
            If the statsd_client argument is given, all other arguments are ignored and the
supplied client will be used to emit metrics.
"""
if statsd_client:
self.client = statsd_client
else:
self.client = StatsClient(host=host, port=port, prefix=prefix)
def incr(self, key: str) -> None:
self.client.incr(key)
def decr(self, key: str) -> None:
self.client.decr(key)
def timing(self, key: str, value: float) -> None:
self.client.timing(key, value)
def gauge(self, key: str, value: float) -> None:
self.client.gauge(key, value)
except Exception: # pylint: disable=broad-except
pass | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/stats_logger.py | 0.816223 | 0.175538 | stats_logger.py | pypi |
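Because `BaseStatsLogger` only requires `incr`/`decr`/`timing`/`gauge`, any object with those methods can serve as the stats logger (in Superset this is wired up via configuration). A toy in-memory implementation, useful for tests, is sketched below; note that `key()` concatenates prefix and key with no separator, mirroring the class above.

```python
# Toy in-memory stats logger (not part of Superset) implementing the same
# four-method interface as BaseStatsLogger.
from collections import Counter

class CountingStatsLogger:
    def __init__(self, prefix: str = "superset") -> None:
        self.prefix = prefix
        self.counters: Counter = Counter()

    def key(self, key: str) -> str:
        # Same behavior as BaseStatsLogger.key: plain concatenation.
        return self.prefix + key

    def incr(self, key: str) -> None:
        self.counters[self.key(key)] += 1

    def decr(self, key: str) -> None:
        self.counters[self.key(key)] -= 1

    def timing(self, key: str, value: float) -> None:
        self.counters[self.key(key) + ".timing"] = value

    def gauge(self, key: str, value: float) -> None:
        self.counters[self.key(key) + ".gauge"] = value

stats = CountingStatsLogger()
stats.incr("queries")
stats.incr("queries")
stats.decr("queries")
assert stats.counters["supersetqueries"] == 1
```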
# ATTENTION: If you change any constants, make sure to also change utils/common.js
# string to use when None values *need* to be converted to/from strings
from enum import Enum
NULL_STRING = "<NULL>"
EMPTY_STRING = "<empty string>"
CHANGE_ME_SECRET_KEY = "CHANGE_ME_TO_A_COMPLEX_RANDOM_SECRET"
# UUID for the examples database
EXAMPLES_DB_UUID = "a2dc77af-e654-49bb-b321-40f6b559a1ee"
class RouteMethod: # pylint: disable=too-few-public-methods
"""
Route methods are a FAB concept around ModelView and RestModelView
    classes in FAB. Derivatives can define `include_route_methods` and
    `exclude_route_methods` class attributes as a set of methods that
    will or won't get exposed.
    This class is a collection of static constants to reference common
    route methods, namely the ones defined in the base classes in FAB.
# ModelView specific
ACTION = "action"
ACTION_POST = "action_post"
ADD = "add"
API_CREATE = "api_create"
API_DELETE = "api_delete"
API_GET = "api_get"
API_READ = "api_read"
API_UPDATE = "api_update"
DELETE = "delete"
DOWNLOAD = "download"
EDIT = "edit"
LIST = "list"
SHOW = "show"
INFO = "info"
# RestModelView specific
EXPORT = "export"
IMPORT = "import_"
GET = "get"
GET_LIST = "get_list"
POST = "post"
PUT = "put"
RELATED = "related"
DISTINCT = "distinct"
# Commonly used sets
API_SET = {API_CREATE, API_DELETE, API_GET, API_READ, API_UPDATE}
CRUD_SET = {ADD, LIST, EDIT, DELETE, ACTION_POST, SHOW}
RELATED_VIEW_SET = {ADD, LIST, EDIT, DELETE}
REST_MODEL_VIEW_CRUD_SET = {DELETE, GET, GET_LIST, POST, PUT, INFO}
MODEL_VIEW_RW_METHOD_PERMISSION_MAP = {
"add": "write",
"api": "read",
"api_column_add": "write",
"api_column_edit": "write",
"api_create": "write",
"api_delete": "write",
"api_get": "read",
"api_read": "read",
"api_readvalues": "read",
"api_update": "write",
"annotation": "read",
"delete": "write",
"download": "read",
"download_dashboards": "read",
"edit": "write",
"list": "read",
"muldelete": "write",
"mulexport": "read",
"show": "read",
"new": "write",
"yaml_export": "read",
"refresh": "write",
}
MODEL_API_RW_METHOD_PERMISSION_MAP = {
"bulk_delete": "write",
"delete": "write",
"distinct": "read",
"get": "read",
"get_list": "read",
"info": "read",
"post": "write",
"put": "write",
"related": "read",
"related_objects": "read",
"schemas": "read",
"select_star": "read",
"table_metadata": "read",
"test_connection": "read",
"validate_parameters": "read",
"favorite_status": "read",
"thumbnail": "read",
"import_": "write",
"refresh": "write",
"cache_screenshot": "read",
"screenshot": "read",
"data": "read",
"data_from_cache": "read",
"get_charts": "read",
"get_datasets": "read",
"function_names": "read",
"available": "read",
"get_data": "read",
}
EXTRA_FORM_DATA_APPEND_KEYS = {
"adhoc_filters",
"filters",
"interactive_groupby",
"interactive_highlight",
"interactive_drilldown",
"custom_form_data",
}
EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS = {
"granularity": "granularity",
"granularity_sqla": "granularity",
"time_column": "time_column",
"time_grain": "time_grain",
"time_range": "time_range",
"druid_time_origin": "druid_time_origin",
"time_grain_sqla": "time_grain_sqla",
"time_range_endpoints": "time_range_endpoints",
}
EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS = {
"relative_start",
"relative_end",
}
EXTRA_FORM_DATA_OVERRIDE_KEYS = (
set(EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS.values())
| EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS
)
class PandasAxis(int, Enum):
ROW = 0
COLUMN = 1
class PandasPostprocessingCompare(str, Enum):
DIFF = "difference"
PCT = "percentage"
RAT = "ratio"
class CacheRegion(str, Enum):
DEFAULT = "default"
DATA = "data"
THUMBNAIL = "thumbnail" | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/constants.py | 0.71602 | 0.264198 | constants.py | pypi |
"""Defines the templating context for SQL Lab"""
import json
import re
from functools import partial
from typing import (
Any,
Callable,
cast,
Dict,
List,
Optional,
Tuple,
TYPE_CHECKING,
Union,
)
from flask import current_app, g, has_request_context, request
from flask_babel import gettext as _
from jinja2 import DebugUndefined
from jinja2.sandbox import SandboxedEnvironment
from sqlalchemy.engine.interfaces import Dialect
from sqlalchemy.types import String
from typing_extensions import TypedDict
from superset.exceptions import SupersetTemplateException
from superset.extensions import feature_flag_manager
from superset.utils.core import convert_legacy_filters_into_adhoc, merge_extra_filters
from superset.utils.memoized import memoized
if TYPE_CHECKING:
from superset.connectors.sqla.models import SqlaTable
from superset.models.core import Database
from superset.models.sql_lab import Query
NONE_TYPE = type(None).__name__
ALLOWED_TYPES = (
NONE_TYPE,
"bool",
"str",
"unicode",
"int",
"long",
"float",
"list",
"dict",
"tuple",
"set",
)
COLLECTION_TYPES = ("list", "dict", "tuple", "set")
@memoized
def context_addons() -> Dict[str, Any]:
return current_app.config.get("JINJA_CONTEXT_ADDONS", {})
class Filter(TypedDict):
op: str # pylint: disable=C0103
col: str
val: Union[None, Any, List[Any]]
class ExtraCache:
"""
Dummy class that exposes a method used to store additional values used in
calculation of query object cache keys.
"""
# Regular expression for detecting the presence of templated methods which could
# be added to the cache key.
regex = re.compile(
r"\{\{.*("
r"current_user_id\(.*\)|"
r"current_username\(.*\)|"
r"cache_key_wrapper\(.*\)|"
r"url_param\(.*\)"
r").*\}\}"
)
def __init__(
self,
extra_cache_keys: Optional[List[Any]] = None,
applied_filters: Optional[List[str]] = None,
removed_filters: Optional[List[str]] = None,
dialect: Optional[Dialect] = None,
):
self.extra_cache_keys = extra_cache_keys
self.applied_filters = applied_filters if applied_filters is not None else []
self.removed_filters = removed_filters if removed_filters is not None else []
self.dialect = dialect
def current_user_id(self, add_to_cache_keys: bool = True) -> Optional[int]:
"""
Return the user ID of the user who is currently logged in.
:param add_to_cache_keys: Whether the value should be included in the cache key
:returns: The user ID
"""
if hasattr(g, "user") and g.user:
if add_to_cache_keys:
self.cache_key_wrapper(g.user.get_id())
return g.user.get_id()
return None
def current_username(self, add_to_cache_keys: bool = True) -> Optional[str]:
"""
Return the username of the user who is currently logged in.
:param add_to_cache_keys: Whether the value should be included in the cache key
:returns: The username
"""
if g.user and hasattr(g.user, "username"):
if add_to_cache_keys:
self.cache_key_wrapper(g.user.username)
return g.user.username
return None
def cache_key_wrapper(self, key: Any) -> Any:
"""
Adds values to a list that is added to the query object used for calculating a
cache key.
This is needed if the following applies:
- Caching is enabled
- The query is dynamically generated using a jinja template
- A `JINJA_CONTEXT_ADDONS` or similar is used as a filter in the query
:param key: Any value that should be considered when calculating the cache key
:return: the original value ``key`` passed to the function
"""
if self.extra_cache_keys is not None:
self.extra_cache_keys.append(key)
return key
def url_param(
self,
param: str,
default: Optional[str] = None,
add_to_cache_keys: bool = True,
escape_result: bool = True,
) -> Optional[str]:
"""
Read a url or post parameter and use it in your SQL Lab query.
When in SQL Lab, it's possible to add arbitrary URL "query string" parameters,
and use those in your SQL code. For instance you can alter your url and add
`?foo=bar`, as in `{domain}/superset/sqllab?foo=bar`. Then if your query is
        something like SELECT * FROM foo WHERE bar = '{{ url_param('foo') }}',
        it will be parsed at runtime and replaced by the value in the URL.
        As you create a visualization from this SQL Lab query, you can pass parameters
in the explore view as well as from the dashboard, and it should carry through
to your queries.
Default values for URL parameters can be defined in chart metadata by adding the
key-value pair `url_params: {'foo': 'bar'}`
:param param: the parameter to lookup
:param default: the value to return in the absence of the parameter
:param add_to_cache_keys: Whether the value should be included in the cache key
:param escape_result: Should special characters in the result be escaped
:returns: The URL parameters
"""
# pylint: disable=import-outside-toplevel
from superset.views.utils import get_form_data
if has_request_context() and request.args.get(param): # type: ignore
return request.args.get(param, default)
form_data, _ = get_form_data()
url_params = form_data.get("url_params") or {}
result = url_params.get(param, default)
if result and escape_result and self.dialect:
# use the dialect specific quoting logic to escape string
result = String().literal_processor(dialect=self.dialect)(value=result)[
1:-1
]
if add_to_cache_keys:
self.cache_key_wrapper(result)
return result
def filter_values(
self, column: str, default: Optional[str] = None, remove_filter: bool = False
) -> List[Any]:
"""Gets a values for a particular filter as a list
This is useful if:
- you want to use a filter component to filter a query where the name of
filter component column doesn't match the one in the select statement
        - you want the ability to filter inside the main query for speed
          purposes
Usage example::
SELECT action, count(*) as times
FROM logs
WHERE
action in ({{ "'" + "','".join(filter_values('action_type')) + "'" }})
GROUP BY action
:param column: column/filter name to lookup
:param default: default value to return if there's no matching columns
:param remove_filter: When set to true, mark the filter as processed,
removing it from the outer query. Useful when a filter should
only apply to the inner query
:return: returns a list of filter values
"""
return_val: List[Any] = []
filters = self.get_filters(column, remove_filter)
for flt in filters:
val = flt.get("val")
if isinstance(val, list):
return_val.extend(val)
elif val:
return_val.append(val)
if (not return_val) and default:
# If no values are found, return the default provided.
return_val = [default]
return return_val
def get_filters(self, column: str, remove_filter: bool = False) -> List[Filter]:
"""Get the filters applied to the given column. In addition
        to returning values like the filter_values function,
        get_filters also returns the operator specified in the Explore UI.
This is useful if:
- you want to handle more than the IN operator in your SQL clause
- you want to handle generating custom SQL conditions for a filter
        - you want the ability to filter inside the main query for speed
          purposes
Usage example::
WITH RECURSIVE
superiors(employee_id, manager_id, full_name, level, lineage) AS (
SELECT
employee_id,
manager_id,
full_name,
1 as level,
employee_id as lineage
FROM
employees
WHERE
1=1
{# Render a blank line #}
{%- for filter in get_filters('full_name', remove_filter=True) -%}
{%- if filter.get('op') == 'IN' -%}
AND
full_name IN ( {{ "'" + "', '".join(filter.get('val')) + "'" }} )
{%- endif -%}
{%- if filter.get('op') == 'LIKE' -%}
AND
full_name LIKE {{ "'" + filter.get('val') + "'" }}
{%- endif -%}
{%- endfor -%}
UNION ALL
SELECT
e.employee_id,
e.manager_id,
e.full_name,
s.level + 1 as level,
s.lineage
FROM
employees e,
superiors s
WHERE s.manager_id = e.employee_id
)
SELECT
employee_id, manager_id, full_name, level, lineage
FROM
superiors
order by lineage, level
:param column: column/filter name to lookup
:param remove_filter: When set to true, mark the filter as processed,
removing it from the outer query. Useful when a filter should
only apply to the inner query
:return: returns a list of filters
"""
# pylint: disable=import-outside-toplevel
from superset.utils.core import FilterOperator
from superset.views.utils import get_form_data
form_data, _ = get_form_data()
convert_legacy_filters_into_adhoc(form_data)
merge_extra_filters(form_data)
filters: List[Filter] = []
for flt in form_data.get("adhoc_filters", []):
val: Union[Any, List[Any]] = flt.get("comparator")
            op: Optional[str] = flt["operator"].upper() if flt.get("operator") else None
# fltOpName: str = flt.get("filterOptionName")
if (
flt.get("expressionType") == "SIMPLE"
and flt.get("clause") == "WHERE"
and flt.get("subject") == column
and val
):
if remove_filter:
if column not in self.removed_filters:
self.removed_filters.append(column)
if column not in self.applied_filters:
self.applied_filters.append(column)
if op in (
FilterOperator.IN.value,
FilterOperator.NOT_IN.value,
) and not isinstance(val, list):
val = [val]
filters.append({"op": op, "col": column, "val": val})
return filters
def safe_proxy(func: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
return_value = func(*args, **kwargs)
value_type = type(return_value).__name__
if value_type not in ALLOWED_TYPES:
raise SupersetTemplateException(
_(
"Unsafe return type for function %(func)s: %(value_type)s",
func=func.__name__,
value_type=value_type,
)
)
if value_type in COLLECTION_TYPES:
try:
return_value = json.loads(json.dumps(return_value))
except TypeError as ex:
raise SupersetTemplateException(
_("Unsupported return value for method %(name)s", name=func.__name__,)
) from ex
return return_value
def validate_context_types(context: Dict[str, Any]) -> Dict[str, Any]:
for key in context:
arg_type = type(context[key]).__name__
if arg_type not in ALLOWED_TYPES and key not in context_addons():
if arg_type == "partial" and context[key].func.__name__ == "safe_proxy":
continue
raise SupersetTemplateException(
_(
"Unsafe template value for key %(key)s: %(value_type)s",
key=key,
value_type=arg_type,
)
)
if arg_type in COLLECTION_TYPES:
try:
context[key] = json.loads(json.dumps(context[key]))
except TypeError as ex:
raise SupersetTemplateException(
_("Unsupported template value for key %(key)s", key=key)
) from ex
return context
def validate_template_context(
engine: Optional[str], context: Dict[str, Any]
) -> Dict[str, Any]:
if engine and engine in context:
# validate engine context separately to allow for engine-specific methods
engine_context = validate_context_types(context.pop(engine))
valid_context = validate_context_types(context)
valid_context[engine] = engine_context
return valid_context
return validate_context_types(context)
class BaseTemplateProcessor:
"""
Base class for database-specific jinja context
"""
engine: Optional[str] = None
# pylint: disable=too-many-arguments
def __init__(
self,
database: "Database",
query: Optional["Query"] = None,
table: Optional["SqlaTable"] = None,
extra_cache_keys: Optional[List[Any]] = None,
removed_filters: Optional[List[str]] = None,
applied_filters: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
self._database = database
self._query = query
self._schema = None
if query and query.schema:
self._schema = query.schema
elif table:
self._schema = table.schema
self._extra_cache_keys = extra_cache_keys
self._applied_filters = applied_filters
self._removed_filters = removed_filters
self._context: Dict[str, Any] = {}
self._env = SandboxedEnvironment(undefined=DebugUndefined)
self.set_context(**kwargs)
def set_context(self, **kwargs: Any) -> None:
self._context.update(kwargs)
self._context.update(context_addons())
def process_template(self, sql: str, **kwargs: Any) -> str:
"""Processes a sql template
>>> sql = "SELECT '{{ datetime(2017, 1, 1).isoformat() }}'"
>>> process_template(sql)
"SELECT '2017-01-01T00:00:00'"
"""
template = self._env.from_string(sql)
kwargs.update(self._context)
context = validate_template_context(self.engine, kwargs)
return template.render(context)
class JinjaTemplateProcessor(BaseTemplateProcessor):
def set_context(self, **kwargs: Any) -> None:
super().set_context(**kwargs)
extra_cache = ExtraCache(
extra_cache_keys=self._extra_cache_keys,
applied_filters=self._applied_filters,
removed_filters=self._removed_filters,
dialect=self._database.get_dialect(),
)
self._context.update(
{
"url_param": partial(safe_proxy, extra_cache.url_param),
"current_user_id": partial(safe_proxy, extra_cache.current_user_id),
"current_username": partial(safe_proxy, extra_cache.current_username),
"cache_key_wrapper": partial(safe_proxy, extra_cache.cache_key_wrapper),
"filter_values": partial(safe_proxy, extra_cache.filter_values),
"get_filters": partial(safe_proxy, extra_cache.get_filters),
}
)
class NoOpTemplateProcessor(BaseTemplateProcessor):
def process_template(self, sql: str, **kwargs: Any) -> str:
"""
Makes processing a template a noop
"""
return sql
class PrestoTemplateProcessor(JinjaTemplateProcessor):
"""Presto Jinja context
The methods described here are namespaced under ``presto`` in the
jinja context as in ``SELECT '{{ presto.some_macro_call() }}'``
"""
engine = "presto"
def set_context(self, **kwargs: Any) -> None:
super().set_context(**kwargs)
self._context[self.engine] = {
"first_latest_partition": partial(safe_proxy, self.first_latest_partition),
"latest_partitions": partial(safe_proxy, self.latest_partitions),
"latest_sub_partition": partial(safe_proxy, self.latest_sub_partition),
"latest_partition": partial(safe_proxy, self.latest_partition),
}
@staticmethod
def _schema_table(
table_name: str, schema: Optional[str]
) -> Tuple[str, Optional[str]]:
if "." in table_name:
schema, table_name = table_name.split(".")
return table_name, schema
def first_latest_partition(self, table_name: str) -> Optional[str]:
"""
Gets the first value in the array of all latest partitions
:param table_name: table name in the format `schema.table`
:return: the first (or only) value in the latest partition array
:raises IndexError: If no partition exists
"""
latest_partitions = self.latest_partitions(table_name)
return latest_partitions[0] if latest_partitions else None
def latest_partitions(self, table_name: str) -> Optional[List[str]]:
"""
Gets the array of all latest partitions
:param table_name: table name in the format `schema.table`
:return: the latest partition array
"""
# pylint: disable=import-outside-toplevel
from superset.db_engine_specs.presto import PrestoEngineSpec
table_name, schema = self._schema_table(table_name, self._schema)
return cast(PrestoEngineSpec, self._database.db_engine_spec).latest_partition(
table_name, schema, self._database
)[1]
def latest_sub_partition(self, table_name: str, **kwargs: Any) -> Any:
table_name, schema = self._schema_table(table_name, self._schema)
# pylint: disable=import-outside-toplevel
from superset.db_engine_specs.presto import PrestoEngineSpec
return cast(
PrestoEngineSpec, self._database.db_engine_spec
).latest_sub_partition(
table_name=table_name, schema=schema, database=self._database, **kwargs
)
latest_partition = first_latest_partition
class HiveTemplateProcessor(PrestoTemplateProcessor):
engine = "hive"
DEFAULT_PROCESSORS = {"presto": PrestoTemplateProcessor, "hive": HiveTemplateProcessor}
@memoized
def get_template_processors() -> Dict[str, Any]:
processors = current_app.config.get("CUSTOM_TEMPLATE_PROCESSORS", {})
for engine, processor in DEFAULT_PROCESSORS.items():
# do not overwrite engine-specific CUSTOM_TEMPLATE_PROCESSORS
        if engine not in processors:
processors[engine] = processor
return processors
def get_template_processor(
database: "Database",
table: Optional["SqlaTable"] = None,
query: Optional["Query"] = None,
**kwargs: Any,
) -> BaseTemplateProcessor:
if feature_flag_manager.is_feature_enabled("ENABLE_TEMPLATE_PROCESSING"):
template_processor = get_template_processors().get(
database.backend, JinjaTemplateProcessor
)
else:
template_processor = NoOpTemplateProcessor
return template_processor(database=database, table=table, query=query, **kwargs) | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/jinja_context.py | 0.864024 | 0.297559 | jinja_context.py | pypi |
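The core safety mechanism in this module is the `safe_proxy`/`validate_context_types` pair: functions exposed to templates may only return JSON-safe types, and collections are round-tripped through JSON to reject nested unsafe values. A stripped-down, self-contained re-implementation of that idea (outside Superset, with abbreviated type lists and a plain `TypeError` standing in for `SupersetTemplateException`):

```python
# Stripped-down sketch of the safe_proxy idea: only JSON-safe return
# types are allowed, and collections are JSON round-tripped so nested
# unsafe values are rejected too.
import json

ALLOWED_TYPES = ("NoneType", "bool", "str", "int", "float",
                 "list", "dict", "tuple", "set")
COLLECTION_TYPES = ("list", "dict", "tuple", "set")

def safe_proxy(func, *args, **kwargs):
    value = func(*args, **kwargs)
    type_name = type(value).__name__
    if type_name not in ALLOWED_TYPES:
        raise TypeError(f"unsafe return type: {type_name}")
    if type_name in COLLECTION_TYPES:
        # Round-trip through JSON to reject nested unsafe values.
        value = json.loads(json.dumps(value))
    return value

assert safe_proxy(lambda: [1, 2]) == [1, 2]
assert safe_proxy(lambda: "ok") == "ok"
try:
    safe_proxy(object)  # returns an object(), which is not JSON-safe
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")
```

In the real module the proxied callables are installed into the Jinja context via `functools.partial(safe_proxy, ...)`, which is why `validate_context_types` special-cases values of type `partial`.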
""" Superset wrapper around pyarrow.Table.
"""
import datetime
import json
import logging
from typing import Any, Dict, List, Optional, Tuple, Type
import numpy as np
import pandas as pd
import pyarrow as pa
from superset.db_engine_specs import BaseEngineSpec
from superset.typing import DbapiDescription, DbapiResult
from superset.utils import core as utils
logger = logging.getLogger(__name__)
def dedup(l: List[str], suffix: str = "__", case_sensitive: bool = True) -> List[str]:
"""De-duplicates a list of string by suffixing a counter
Always returns the same number of entries as provided, and always returns
unique values. Case sensitive comparison by default.
>>> print(','.join(dedup(['foo', 'bar', 'bar', 'bar', 'Bar'])))
foo,bar,bar__1,bar__2,Bar
>>> print(
','.join(dedup(['foo', 'bar', 'bar', 'bar', 'Bar'], case_sensitive=False))
)
foo,bar,bar__1,bar__2,Bar__3
"""
new_l: List[str] = []
seen: Dict[str, int] = {}
for item in l:
s_fixed_case = item if case_sensitive else item.lower()
if s_fixed_case in seen:
seen[s_fixed_case] += 1
item += suffix + str(seen[s_fixed_case])
else:
seen[s_fixed_case] = 0
new_l.append(item)
return new_l
def stringify(obj: Any) -> str:
return json.dumps(obj, default=utils.json_iso_dttm_ser)
def stringify_values(array: np.ndarray) -> np.ndarray:
vstringify = np.vectorize(stringify)
return vstringify(array)
def destringify(obj: str) -> Any:
return json.loads(obj)
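`stringify`/`destringify` give a JSON round trip for values pyarrow cannot ingest directly. A dependency-free sketch with a stand-in for `utils.json_iso_dttm_ser` (the real serializer handles more types); note the round trip is lossy by design — temporal values come back as ISO strings, not datetimes:

```python
import datetime
import json

def iso_default(obj):
    # Stand-in for superset.utils.core.json_iso_dttm_ser: emit ISO-8601 strings.
    if isinstance(obj, (datetime.datetime, datetime.date)):
        return obj.isoformat()
    raise TypeError(f"Unserializable object {obj!r}")

def stringify(obj):
    return json.dumps(obj, default=iso_default)

def destringify(s):
    return json.loads(s)

row = {"ts": datetime.datetime(2021, 1, 1, 12, 0), "vals": [1, 2]}
assert destringify(stringify(row)) == {"ts": "2021-01-01T12:00:00", "vals": [1, 2]}
```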
class SupersetResultSet:
def __init__( # pylint: disable=too-many-locals
self,
data: DbapiResult,
cursor_description: DbapiDescription,
db_engine_spec: Type[BaseEngineSpec],
):
self.db_engine_spec = db_engine_spec
data = data or []
column_names: List[str] = []
pa_data: List[pa.Array] = []
deduped_cursor_desc: List[Tuple[Any, ...]] = []
numpy_dtype: List[Tuple[str, ...]] = []
stringified_arr: np.ndarray
if cursor_description:
# get deduped list of column names
column_names = dedup([col[0] for col in cursor_description])
# fix cursor descriptor with the deduped names
deduped_cursor_desc = [
tuple([column_name, *list(description)[1:]])
for column_name, description in zip(column_names, cursor_description)
]
# generate numpy structured array dtype
numpy_dtype = [(column_name, "object") for column_name in column_names]
# only do expensive recasting if datatype is not standard list of tuples
if data and (not isinstance(data, list) or not isinstance(data[0], tuple)):
data = [tuple(row) for row in data]
array = np.array(data, dtype=numpy_dtype)
if array.size > 0:
for column in column_names:
try:
pa_data.append(pa.array(array[column].tolist()))
except (
pa.lib.ArrowInvalid,
pa.lib.ArrowTypeError,
pa.lib.ArrowNotImplementedError,
                        TypeError,  # this is super hacky,
# https://issues.apache.org/jira/browse/ARROW-7855
):
# attempt serialization of values as strings
stringified_arr = stringify_values(array[column])
pa_data.append(pa.array(stringified_arr.tolist()))
if pa_data: # pylint: disable=too-many-nested-blocks
for i, column in enumerate(column_names):
if pa.types.is_nested(pa_data[i].type):
# TODO: revisit nested column serialization once nested types
# are added as a natively supported column type in Superset
# (superset.utils.core.GenericDataType).
stringified_arr = stringify_values(array[column])
pa_data[i] = pa.array(stringified_arr.tolist())
elif pa.types.is_temporal(pa_data[i].type):
# workaround for bug converting
# `psycopg2.tz.FixedOffsetTimezone` tzinfo values.
# related: https://issues.apache.org/jira/browse/ARROW-5248
sample = self.first_nonempty(array[column])
if sample and isinstance(sample, datetime.datetime):
try:
if sample.tzinfo:
tz = sample.tzinfo
series = pd.Series(
array[column], dtype="datetime64[ns]"
)
series = pd.to_datetime(series).dt.tz_localize(tz)
pa_data[i] = pa.Array.from_pandas(
series, type=pa.timestamp("ns", tz=tz)
)
except Exception as ex: # pylint: disable=broad-except
logger.exception(ex)
self.table = pa.Table.from_arrays(pa_data, names=column_names)
self._type_dict: Dict[str, Any] = {}
try:
# The driver may not be passing a cursor.description
self._type_dict = {
col: db_engine_spec.get_datatype(deduped_cursor_desc[i][1])
for i, col in enumerate(column_names)
if deduped_cursor_desc
}
except Exception as ex: # pylint: disable=broad-except
logger.exception(ex)
@staticmethod
def convert_pa_dtype(pa_dtype: pa.DataType) -> Optional[str]:
if pa.types.is_boolean(pa_dtype):
return "BOOL"
if pa.types.is_integer(pa_dtype):
return "INT"
if pa.types.is_floating(pa_dtype):
return "FLOAT"
if pa.types.is_string(pa_dtype):
return "STRING"
if pa.types.is_temporal(pa_dtype):
return "DATETIME"
return None
@staticmethod
def convert_table_to_df(table: pa.Table) -> pd.DataFrame:
return table.to_pandas(integer_object_nulls=True)
@staticmethod
def first_nonempty(items: List[Any]) -> Any:
return next((i for i in items if i), None)
def is_temporal(self, db_type_str: Optional[str]) -> bool:
column_spec = self.db_engine_spec.get_column_spec(db_type_str)
if column_spec is None:
return False
return column_spec.is_dttm
def data_type(self, col_name: str, pa_dtype: pa.DataType) -> Optional[str]:
"""Given a pyarrow data type, Returns a generic database type"""
set_type = self._type_dict.get(col_name)
if set_type:
return set_type
mapped_type = self.convert_pa_dtype(pa_dtype)
if mapped_type:
return mapped_type
return None
def to_pandas_df(self) -> pd.DataFrame:
return self.convert_table_to_df(self.table)
@property
def pa_table(self) -> pa.Table:
return self.table
@property
def size(self) -> int:
return self.table.num_rows
@property
def columns(self) -> List[Dict[str, Any]]:
if not self.table.column_names:
return []
columns = []
for col in self.table.schema:
db_type_str = self.data_type(col.name, col.type)
column = {
"name": col.name,
"type": db_type_str,
"is_date": self.is_temporal(db_type_str),
}
columns.append(column)
        return columns

# Source file: superset/result_set.py
from typing import Any, Dict, List, Optional, Type
from flask_appbuilder.models.filters import BaseFilter
from flask_appbuilder.models.sqla import Model
from flask_appbuilder.models.sqla.interface import SQLAInterface
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import Session
from superset.dao.exceptions import (
DAOConfigError,
DAOCreateFailedError,
DAODeleteFailedError,
DAOUpdateFailedError,
)
from superset.extensions import db
class BaseDAO:
"""
    Base DAO, implementing base CRUD SQLAlchemy operations
"""
model_cls: Optional[Type[Model]] = None
"""
Child classes need to state the Model class so they don't need to implement basic
create, update and delete methods
"""
base_filter: Optional[BaseFilter] = None
"""
    Child classes can register base filtering to be applied to all filter methods
"""
@classmethod
    def find_by_id(cls, model_id: int, session: Optional[Session] = None) -> Model:
"""
Find a model by id, if defined applies `base_filter`
"""
session = session or db.session
query = session.query(cls.model_cls)
if cls.base_filter:
data_model = SQLAInterface(cls.model_cls, session)
query = cls.base_filter( # pylint: disable=not-callable
"id", data_model
).apply(query, None)
return query.filter_by(id=model_id).one_or_none()
@classmethod
def find_by_ids(cls, model_ids: List[int]) -> List[Model]:
"""
Find a List of models by a list of ids, if defined applies `base_filter`
"""
id_col = getattr(cls.model_cls, "id", None)
if id_col is None:
return []
query = db.session.query(cls.model_cls).filter(id_col.in_(model_ids))
if cls.base_filter:
data_model = SQLAInterface(cls.model_cls, db.session)
query = cls.base_filter( # pylint: disable=not-callable
"id", data_model
).apply(query, None)
return query.all()
@classmethod
def find_all(cls) -> List[Model]:
"""
Get all that fit the `base_filter`
"""
query = db.session.query(cls.model_cls)
if cls.base_filter:
data_model = SQLAInterface(cls.model_cls, db.session)
query = cls.base_filter( # pylint: disable=not-callable
"id", data_model
).apply(query, None)
return query.all()
@classmethod
def find_one_or_none(cls, **filter_by: Any) -> Optional[Model]:
"""
Get the first that fit the `base_filter`
"""
query = db.session.query(cls.model_cls)
if cls.base_filter:
data_model = SQLAInterface(cls.model_cls, db.session)
query = cls.base_filter( # pylint: disable=not-callable
"id", data_model
).apply(query, None)
return query.filter_by(**filter_by).one_or_none()
@classmethod
def create(cls, properties: Dict[str, Any], commit: bool = True) -> Model:
"""
Generic for creating models
:raises: DAOCreateFailedError
"""
if cls.model_cls is None:
raise DAOConfigError()
model = cls.model_cls() # pylint: disable=not-callable
for key, value in properties.items():
setattr(model, key, value)
try:
db.session.add(model)
if commit:
db.session.commit()
except SQLAlchemyError as ex: # pragma: no cover
db.session.rollback()
raise DAOCreateFailedError(exception=ex) from ex
return model
@classmethod
def save(cls, instance_model: Model, commit: bool = True) -> Model:
"""
Generic for saving models
:raises: DAOCreateFailedError
"""
if cls.model_cls is None:
raise DAOConfigError()
if not isinstance(instance_model, cls.model_cls):
raise DAOCreateFailedError(
"the instance model is not a type of the model class"
)
try:
db.session.add(instance_model)
if commit:
db.session.commit()
except SQLAlchemyError as ex: # pragma: no cover
db.session.rollback()
raise DAOCreateFailedError(exception=ex) from ex
return instance_model
@classmethod
def update(
cls, model: Model, properties: Dict[str, Any], commit: bool = True
) -> Model:
"""
        Generic method to update a model
        :raises: DAOUpdateFailedError
"""
for key, value in properties.items():
setattr(model, key, value)
try:
db.session.merge(model)
if commit:
db.session.commit()
except SQLAlchemyError as ex: # pragma: no cover
db.session.rollback()
raise DAOUpdateFailedError(exception=ex) from ex
return model
@classmethod
def delete(cls, model: Model, commit: bool = True) -> Model:
"""
        Generic method to delete a model
        :raises: DAODeleteFailedError
"""
try:
db.session.delete(model)
if commit:
db.session.commit()
except SQLAlchemyError as ex: # pragma: no cover
db.session.rollback()
raise DAODeleteFailedError(exception=ex) from ex
        return model

# Source file: superset/dao/base.py
import logging
from dataclasses import dataclass
from typing import Dict, List, Tuple
from sqlalchemy import (
Column,
ForeignKey,
Integer,
Sequence,
String,
Table,
UniqueConstraint,
)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Load, relationship, Session
logger = logging.getLogger(__name__)
Base = declarative_base()
@dataclass(frozen=True)
class Pvm:
view: str
permission: str
PvmMigrationMapType = Dict[Pvm, Tuple[Pvm, ...]]
# Partial freeze of the current metadata db schema
class Permission(Base): # type: ignore
__tablename__ = "ab_permission"
id = Column(Integer, Sequence("ab_permission_id_seq"), primary_key=True)
name = Column(String(100), unique=True, nullable=False)
def __repr__(self) -> str:
return f"{self.name}"
class ViewMenu(Base): # type: ignore
__tablename__ = "ab_view_menu"
id = Column(Integer, Sequence("ab_view_menu_id_seq"), primary_key=True)
name = Column(String(250), unique=True, nullable=False)
def __repr__(self) -> str:
return f"{self.name}"
def __eq__(self, other: object) -> bool:
return (isinstance(other, self.__class__)) and (self.name == other.name)
    def __ne__(self, other: object) -> bool:
        return not self.__eq__(other)
assoc_permissionview_role = Table(
"ab_permission_view_role",
Base.metadata,
Column("id", Integer, Sequence("ab_permission_view_role_id_seq"), primary_key=True),
Column("permission_view_id", Integer, ForeignKey("ab_permission_view.id")),
Column("role_id", Integer, ForeignKey("ab_role.id")),
UniqueConstraint("permission_view_id", "role_id"),
)
class Role(Base): # type: ignore
__tablename__ = "ab_role"
id = Column(Integer, Sequence("ab_role_id_seq"), primary_key=True)
name = Column(String(64), unique=True, nullable=False)
permissions = relationship(
"PermissionView", secondary=assoc_permissionview_role, backref="role"
)
def __repr__(self) -> str:
return f"{self.name}"
class PermissionView(Base): # type: ignore
__tablename__ = "ab_permission_view"
__table_args__ = (UniqueConstraint("permission_id", "view_menu_id"),)
id = Column(Integer, Sequence("ab_permission_view_id_seq"), primary_key=True)
permission_id = Column(Integer, ForeignKey("ab_permission.id"))
permission = relationship("Permission")
view_menu_id = Column(Integer, ForeignKey("ab_view_menu.id"))
view_menu = relationship("ViewMenu")
def __repr__(self) -> str:
return f"{self.permission} {self.view_menu}"
def _add_view_menu(session: Session, view_name: str) -> ViewMenu:
"""
Check and add the new view menu
"""
new_view = session.query(ViewMenu).filter(ViewMenu.name == view_name).one_or_none()
if not new_view:
new_view = ViewMenu(name=view_name)
session.add(new_view)
return new_view
def _add_permission(session: Session, permission_name: str) -> Permission:
"""
Check and add the new Permission
"""
new_permission = (
session.query(Permission)
.filter(Permission.name == permission_name)
.one_or_none()
)
if not new_permission:
new_permission = Permission(name=permission_name)
session.add(new_permission)
return new_permission
def _add_permission_view(
session: Session, permission: Permission, view_menu: ViewMenu
) -> PermissionView:
"""
Check and add the new Permission View
"""
new_pvm = (
session.query(PermissionView)
.filter(
PermissionView.view_menu_id == view_menu.id,
PermissionView.permission_id == permission.id,
)
.one_or_none()
)
if not new_pvm:
new_pvm = PermissionView(view_menu=view_menu, permission=permission)
session.add(new_pvm)
return new_pvm
def _find_pvm(session: Session, view_name: str, permission_name: str) -> PermissionView:
return (
session.query(PermissionView)
.join(Permission)
.join(ViewMenu)
.filter(ViewMenu.name == view_name, Permission.name == permission_name)
).one_or_none()
def add_pvms(
session: Session, pvm_data: Dict[str, Tuple[str, ...]], commit: bool = False
) -> List[PermissionView]:
"""
    Checks for existing Permissions, ViewMenus and PermissionViews, adding any that are missing
"""
pvms = []
for view_name, permissions in pvm_data.items():
# Check and add the new View
new_view = _add_view_menu(session, view_name)
for permission_name in permissions:
new_permission = _add_permission(session, permission_name)
# Check and add the new PVM
pvms.append(_add_permission_view(session, new_permission, new_view))
if commit:
session.commit()
return pvms
def _delete_old_permissions(
session: Session, pvm_map: Dict[PermissionView, List[PermissionView]]
) -> None:
"""
Delete old permissions:
    - Deletes the PermissionView
- Deletes the Permission if it's an orphan now
- Deletes the ViewMenu if it's an orphan now
"""
# Delete old permissions
for old_pvm, new_pvms in pvm_map.items():
old_permission_name = old_pvm.permission.name
old_view_name = old_pvm.view_menu.name
logger.info(f"Going to delete pvm: {old_pvm}")
session.delete(old_pvm)
pvms_with_permission = (
session.query(PermissionView)
.join(Permission)
.filter(Permission.name == old_permission_name)
).first()
if not pvms_with_permission:
logger.info(f"Going to delete permission: {old_pvm.permission}")
session.delete(old_pvm.permission)
pvms_with_view_menu = (
session.query(PermissionView)
.join(ViewMenu)
.filter(ViewMenu.name == old_view_name)
).first()
if not pvms_with_view_menu:
logger.info(f"Going to delete view_menu: {old_pvm.view_menu}")
session.delete(old_pvm.view_menu)
def migrate_roles(
session: Session, pvm_key_map: PvmMigrationMapType, commit: bool = False,
) -> None:
"""
Migrates all existing roles that have the permissions to be migrated
"""
# Collect a map of PermissionView objects for migration
pvm_map: Dict[PermissionView, List[PermissionView]] = {}
for old_pvm_key, new_pvms_ in pvm_key_map.items():
old_pvm = _find_pvm(session, old_pvm_key.view, old_pvm_key.permission)
if old_pvm:
for new_pvm_key in new_pvms_:
new_pvm = _find_pvm(session, new_pvm_key.view, new_pvm_key.permission)
if old_pvm not in pvm_map:
pvm_map[old_pvm] = [new_pvm]
else:
pvm_map[old_pvm].append(new_pvm)
# Replace old permissions by the new ones on all existing roles
roles = session.query(Role).options(Load(Role).joinedload(Role.permissions)).all()
for role in roles:
for old_pvm, new_pvms in pvm_map.items():
if old_pvm in role.permissions:
logger.info(f"Removing {old_pvm} from {role}")
role.permissions.remove(old_pvm)
for new_pvm in new_pvms:
if new_pvm not in role.permissions:
logger.info(f"Add {new_pvm} to {role}")
role.permissions.append(new_pvm)
session.merge(role)
# Delete old permissions
_delete_old_permissions(session, pvm_map)
if commit:
session.commit()
def get_reversed_new_pvms(pvm_map: PvmMigrationMapType) -> Dict[str, Tuple[str, ...]]:
reversed_pvms: Dict[str, Tuple[str, ...]] = {}
for old_pvm, new_pvms in pvm_map.items():
if old_pvm.view not in reversed_pvms:
reversed_pvms[old_pvm.view] = (old_pvm.permission,)
else:
reversed_pvms[old_pvm.view] = reversed_pvms[old_pvm.view] + (
old_pvm.permission,
)
return reversed_pvms
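`get_reversed_new_pvms` collapses a migration map back into the `view -> permissions` shape accepted by `add_pvms`. A self-contained sketch of the same grouping logic, with a local copy of the `Pvm` dataclass so the snippet runs on its own:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass(frozen=True)
class Pvm:  # local copy of the frozen dataclass defined above
    view: str
    permission: str

def reverse_new_pvms(pvm_map: Dict[Pvm, Tuple[Pvm, ...]]) -> Dict[str, Tuple[str, ...]]:
    reversed_pvms: Dict[str, Tuple[str, ...]] = {}
    for old_pvm in pvm_map:
        # group the old permission names under their view menu name
        reversed_pvms[old_pvm.view] = reversed_pvms.get(old_pvm.view, ()) + (
            old_pvm.permission,
        )
    return reversed_pvms

# two old permissions that both migrate to a single new "can_read" permission
forward = {
    Pvm("SavedQuery", "can_list"): (Pvm("SavedQuery", "can_read"),),
    Pvm("SavedQuery", "can_show"): (Pvm("SavedQuery", "can_read"),),
}
assert reverse_new_pvms(forward) == {"SavedQuery": ("can_list", "can_show")}
```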
def get_reversed_pvm_map(pvm_map: PvmMigrationMapType) -> PvmMigrationMapType:
reversed_pvm_map: PvmMigrationMapType = {}
for old_pvm, new_pvms in pvm_map.items():
for new_pvm in new_pvms:
if new_pvm not in reversed_pvm_map:
reversed_pvm_map[new_pvm] = (old_pvm,)
else:
reversed_pvm_map[new_pvm] = reversed_pvm_map[new_pvm] + (old_pvm,)
    return reversed_pvm_map

# Source file: superset/migrations/shared/security_converge.py
# revision identifiers, used by Alembic.
revision = "fc3a3a8ff221"
down_revision = "085f06488938"
import json
from typing import Any, Dict, Iterable
from alembic import op
from sqlalchemy import Column, Integer, Text
from sqlalchemy.ext.declarative import declarative_base
from superset import db
Base = declarative_base()
class Dashboard(Base):
"""Declarative class to do query in upgrade"""
__tablename__ = "dashboards"
id = Column(Integer, primary_key=True)
json_metadata = Column(Text)
# these are copied over from `superset/constants.py` to make sure they stay unchanged
EXTRA_FORM_DATA_APPEND_KEYS = {
"adhoc_filters",
"filters",
"interactive_groupby",
"interactive_highlight",
"interactive_drilldown",
"custom_form_data",
}
EXTRA_FORM_DATA_OVERRIDE_REGULAR_KEYS = {
"granularity",
"granularity_sqla",
"time_column",
"time_grain",
"time_range",
}
EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS = {
"druid_time_origin",
"relative_start",
"relative_end",
"time_grain_sqla",
"time_range_endpoints",
}
EXTRA_FORM_DATA_OVERRIDE_KEYS = (
EXTRA_FORM_DATA_OVERRIDE_REGULAR_KEYS | EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS
)
def upgrade_select_filters(native_filters: Iterable[Dict[str, Any]]) -> None:
"""
Add `defaultToFirstItem` to `controlValues` of `select_filter` components
"""
for native_filter in native_filters:
filter_type = native_filter.get("filterType")
if filter_type == "filter_select":
control_values = native_filter.get("controlValues", {})
value = control_values.get("defaultToFirstItem", False)
control_values["defaultToFirstItem"] = value
def upgrade_filter_set(filter_set: Dict[str, Any]) -> int:
changed_filters = 0
upgrade_select_filters(filter_set.get("nativeFilters", {}).values())
data_mask = filter_set.get("dataMask", {})
native_filters = data_mask.pop("nativeFilters", {})
for filter_id, filter_obj in native_filters.items():
changed_filters += 1
# move filter up one level
data_mask[filter_id] = filter_obj
# rename currentState to filterState
current_state = filter_obj.pop("currentState", {})
filter_obj["filterState"] = current_state
# create new extraFormData field
old_extra_form_data = filter_obj.pop("extraFormData", {})
extra_form_data = {}
filter_obj["extraFormData"] = extra_form_data
# upgrade append filters
appends = old_extra_form_data.pop("append_form_data", {})
extra_form_data.update(appends)
# upgrade override filters
overrides = old_extra_form_data.pop("override_form_data", {})
for override_key, override_value in overrides.items():
# nested extras are also moved up to main object
if override_key == "extras":
for extra_key, extra_value in override_value.items():
extra_form_data[extra_key] = extra_value
else:
extra_form_data[override_key] = override_value
return changed_filters
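The dataMask reshaping inside `upgrade_filter_set` can be hard to follow from the pops alone; the sketch below mirrors its core steps on a sample payload (the field values are made up for illustration):

```python
def migrate_data_mask(filter_set):
    """Mirror of the dataMask upgrade performed by upgrade_filter_set above."""
    data_mask = filter_set.setdefault("dataMask", {})
    for filter_id, fltr in data_mask.pop("nativeFilters", {}).items():
        data_mask[filter_id] = fltr  # hoist the filter up one level
        fltr["filterState"] = fltr.pop("currentState", {})  # rename state key
        old = fltr.pop("extraFormData", {})
        new = {}
        new.update(old.pop("append_form_data", {}))  # append keys flatten in
        for key, value in old.pop("override_form_data", {}).items():
            if key == "extras":
                new.update(value)  # nested extras move up to the main object
            else:
                new[key] = value
        fltr["extraFormData"] = new
    return filter_set

sample = {
    "dataMask": {
        "nativeFilters": {
            "FILTER-1": {
                "currentState": {"value": ["a"]},
                "extraFormData": {
                    "append_form_data": {
                        "filters": [{"col": "x", "op": "IN", "val": ["a"]}]
                    },
                    "override_form_data": {
                        "time_range": "Last week",
                        "extras": {"time_grain_sqla": "P1D"},
                    },
                },
            }
        }
    }
}
migrated = migrate_data_mask(sample)
assert migrated["dataMask"]["FILTER-1"]["filterState"] == {"value": ["a"]}
assert migrated["dataMask"]["FILTER-1"]["extraFormData"]["time_grain_sqla"] == "P1D"
```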
def downgrade_filter_set(filter_set: Dict[str, Any]) -> int:
changed_filters = 0
old_data_mask = filter_set.pop("dataMask", {})
native_filters = {}
data_mask = {"nativeFilters": native_filters}
filter_set["dataMask"] = data_mask
for filter_id, filter_obj in old_data_mask.items():
changed_filters += 1
# move filter object down one level
native_filters[filter_id] = filter_obj
# downgrade filter state
filter_state = filter_obj.pop("filterState", {})
filter_obj["currentState"] = filter_state
old_extra_form_data = filter_obj.pop("extraFormData", {})
extra_form_data = {}
filter_obj["extraFormData"] = extra_form_data
# downgrade append keys
append_form_data = {}
extra_form_data["append_form_data"] = append_form_data
for key in EXTRA_FORM_DATA_APPEND_KEYS:
value = old_extra_form_data.pop(key, None)
if value is not None:
append_form_data[key] = value
if not append_form_data:
del extra_form_data["append_form_data"]
# downgrade override keys
override_form_data = {}
extra_form_data["override_form_data"] = override_form_data
for key in EXTRA_FORM_DATA_OVERRIDE_KEYS:
value = old_extra_form_data.pop(key, None)
if key in EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS:
extras = override_form_data.get("extras", {})
extras[key] = value
elif value is not None:
override_form_data[key] = value
if not override_form_data:
del extra_form_data["override_form_data"]
return changed_filters
def upgrade():
bind = op.get_bind()
session = db.Session(bind=bind)
dashboards = (
session.query(Dashboard)
.filter(Dashboard.json_metadata.like('%"filter_sets_configuration"%'))
.all()
)
changed_filter_sets, changed_filters = 0, 0
for dashboard in dashboards:
try:
json_metadata = json.loads(dashboard.json_metadata)
# upgrade native select filter metadata
native_filters = json_metadata.get("native_filter_configuration")
if native_filters:
upgrade_select_filters(native_filters)
# upgrade filter sets
filter_sets = json_metadata["filter_sets_configuration"]
for filter_set in filter_sets:
changed_filter_sets += 1
changed_filters += upgrade_filter_set(filter_set)
dashboard.json_metadata = json.dumps(json_metadata, sort_keys=True)
except Exception as e:
print(f"Parsing json_metadata for dashboard {dashboard.id} failed.")
raise e
session.commit()
session.close()
print(f"Updated {changed_filter_sets} filter sets with {changed_filters} filters.")
def downgrade():
bind = op.get_bind()
session = db.Session(bind=bind)
dashboards = (
session.query(Dashboard)
.filter(Dashboard.json_metadata.like('%"filter_sets_configuration"%'))
.all()
)
changed_filter_sets, changed_filters = 0, 0
for dashboard in dashboards:
try:
json_metadata = json.loads(dashboard.json_metadata)
filter_sets = json_metadata.get("filter_sets_configuration", {})
json_metadata["filter_sets_configuration"] = filter_sets
for filter_set in filter_sets:
changed_filter_sets += 1
changed_filters += downgrade_filter_set(filter_set)
dashboard.json_metadata = json.dumps(json_metadata, sort_keys=True)
except Exception as e:
print(f"Parsing json_metadata for dashboard {dashboard.id} failed.")
raise e
session.commit()
session.close()
print(f"Updated {changed_filter_sets} filter sets with {changed_filters} filters.") | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/migrations/versions/fc3a3a8ff221_migrate_filter_sets_to_new_format.py | 0.52975 | 0.201499 | fc3a3a8ff221_migrate_filter_sets_to_new_format.py | pypi |
# revision identifiers, used by Alembic.
revision = "c82ee8a39623"
down_revision = "c617da68de7d"
from datetime import datetime
from alembic import op
from flask_appbuilder.models.mixins import AuditMixin
from sqlalchemy import Column, DateTime, Enum, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base, declared_attr
from superset.models.tags import ObjectTypes, TagTypes
Base = declarative_base()
class AuditMixinNullable(AuditMixin):
"""Altering the AuditMixin to use nullable fields
Allows creating objects programmatically outside of CRUD
"""
created_on = Column(DateTime, default=datetime.now, nullable=True)
changed_on = Column(
DateTime, default=datetime.now, onupdate=datetime.now, nullable=True
)
@declared_attr
def created_by_fk(self) -> Column:
return Column(
Integer, ForeignKey("ab_user.id"), default=self.get_user_id, nullable=True,
)
@declared_attr
def changed_by_fk(self) -> Column:
return Column(
Integer,
ForeignKey("ab_user.id"),
default=self.get_user_id,
onupdate=self.get_user_id,
nullable=True,
)
class Tag(Base, AuditMixinNullable):
"""A tag attached to an object (query, chart or dashboard)."""
__tablename__ = "tag"
id = Column(Integer, primary_key=True)
name = Column(String(250), unique=True)
type = Column(Enum(TagTypes))
class TaggedObject(Base, AuditMixinNullable):
__tablename__ = "tagged_object"
id = Column(Integer, primary_key=True)
tag_id = Column(Integer, ForeignKey("tag.id"))
object_id = Column(Integer)
object_type = Column(Enum(ObjectTypes))
class User(Base):
"""Declarative class to do query in upgrade"""
__tablename__ = "ab_user"
id = Column(Integer, primary_key=True)
def upgrade():
bind = op.get_bind()
Tag.__table__.create(bind)
TaggedObject.__table__.create(bind)
def downgrade():
op.drop_table("tagged_object")
op.drop_table("tag") | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/migrations/versions/c82ee8a39623_add_implicit_tags.py | 0.576184 | 0.156814 | c82ee8a39623_add_implicit_tags.py | pypi |
import logging
import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import mysql
from superset import db
from superset.utils.core import generic_find_constraint_name
# revision identifiers, used by Alembic.
revision = "3b626e2a6783"
down_revision = "eca4694defa7"
def upgrade():
# cleanup after: https://github.com/airbnb/superset/pull/1078
try:
slices_ibfk_1 = generic_find_constraint_name(
table="slices",
columns={"druid_datasource_id"},
referenced="datasources",
database=db,
)
slices_ibfk_2 = generic_find_constraint_name(
table="slices", columns={"table_id"}, referenced="tables", database=db
)
with op.batch_alter_table("slices") as batch_op:
if slices_ibfk_1:
batch_op.drop_constraint(slices_ibfk_1, type_="foreignkey")
if slices_ibfk_2:
batch_op.drop_constraint(slices_ibfk_2, type_="foreignkey")
batch_op.drop_column("druid_datasource_id")
batch_op.drop_column("table_id")
except Exception as ex:
logging.warning(str(ex))
# fixed issue: https://github.com/airbnb/superset/issues/466
try:
with op.batch_alter_table("columns") as batch_op:
batch_op.create_foreign_key(
None, "datasources", ["datasource_name"], ["datasource_name"]
)
except Exception as ex:
logging.warning(str(ex))
try:
with op.batch_alter_table("query") as batch_op:
batch_op.create_unique_constraint("client_id", ["client_id"])
except Exception as ex:
logging.warning(str(ex))
try:
with op.batch_alter_table("query") as batch_op:
batch_op.drop_column("name")
except Exception as ex:
logging.warning(str(ex))
def downgrade():
try:
with op.batch_alter_table("tables") as batch_op:
batch_op.create_index("table_name", ["table_name"], unique=True)
except Exception as ex:
logging.warning(str(ex))
try:
with op.batch_alter_table("slices") as batch_op:
batch_op.add_column(
sa.Column(
"table_id",
mysql.INTEGER(display_width=11),
autoincrement=False,
nullable=True,
)
)
batch_op.add_column(
sa.Column(
"druid_datasource_id",
sa.Integer(),
autoincrement=False,
nullable=True,
)
)
batch_op.create_foreign_key(
"slices_ibfk_1", "datasources", ["druid_datasource_id"], ["id"]
)
batch_op.create_foreign_key("slices_ibfk_2", "tables", ["table_id"], ["id"])
except Exception as ex:
logging.warning(str(ex))
try:
fk_columns = generic_find_constraint_name(
table="columns",
columns={"datasource_name"},
referenced="datasources",
database=db,
)
with op.batch_alter_table("columns") as batch_op:
batch_op.drop_constraint(fk_columns, type_="foreignkey")
except Exception as ex:
logging.warning(str(ex))
op.add_column("query", sa.Column("name", sa.String(length=256), nullable=True))
try:
with op.batch_alter_table("query") as batch_op:
batch_op.drop_constraint("client_id", type_="unique")
except Exception as ex:
        logging.warning(str(ex))

# Source file: superset/migrations/versions/3b626e2a6783_sync_db_with_models.py
# revision identifiers, used by Alembic.
revision = "7f2635b51f5d"
down_revision = "937d04c16b64"
from alembic import op
from sqlalchemy import Column, engine, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from superset import db
from superset.utils.core import generic_find_uq_constraint_name
Base = declarative_base()
conv = {"uq": "uq_%(table_name)s_%(column_0_name)s"}
class BaseColumnMixin:
id = Column(Integer, primary_key=True)
class DruidColumn(BaseColumnMixin, Base):
__tablename__ = "columns"
datasource_id = Column(Integer)
class TableColumn(BaseColumnMixin, Base):
__tablename__ = "table_columns"
table_id = Column(Integer)
def upgrade():
bind = op.get_bind()
session = db.Session(bind=bind)
# Delete the orphaned columns records.
for record in session.query(DruidColumn).all():
if record.datasource_id is None:
session.delete(record)
session.commit()
# Enforce that the columns.column_name column be non-nullable.
with op.batch_alter_table("columns") as batch_op:
batch_op.alter_column("column_name", existing_type=String(255), nullable=False)
# Delete the orphaned table_columns records.
for record in session.query(TableColumn).all():
if record.table_id is None:
session.delete(record)
session.commit()
# Reduce the size of the table_columns.column_name column for constraint
# viability and enforce that it be non-nullable.
with op.batch_alter_table("table_columns") as batch_op:
batch_op.alter_column(
"column_name", existing_type=String(256), nullable=False, type_=String(255)
)
# Add the missing uniqueness constraint to the table_columns table.
with op.batch_alter_table("table_columns", naming_convention=conv) as batch_op:
batch_op.create_unique_constraint(
"uq_table_columns_column_name", ["column_name", "table_id"]
)
def downgrade():
bind = op.get_bind()
insp = engine.reflection.Inspector.from_engine(bind)
# Remove the missing uniqueness constraint from the table_columns table.
with op.batch_alter_table("table_columns", naming_convention=conv) as batch_op:
batch_op.drop_constraint(
generic_find_uq_constraint_name(
"table_columns", {"column_name", "table_id"}, insp
)
or "uq_table_columns_column_name",
type_="unique",
)
    # Restore the size of the table_columns.column_name column and allow it to
    # be nullable again.
with op.batch_alter_table("table_columns") as batch_op:
batch_op.alter_column(
"column_name", existing_type=String(255), nullable=True, type_=String(256)
)
    # Allow the columns.column_name column to be nullable again.
with op.batch_alter_table("columns") as batch_op:
        batch_op.alter_column("column_name", existing_type=String(255), nullable=True)

# Source file: superset/migrations/versions/7f2635b51f5d_update_base_columns.py
import sqlalchemy as sa
from alembic import op
from superset.utils.core import generic_find_fk_constraint_name
# revision identifiers, used by Alembic.
revision = "b6fa807eac07"
down_revision = "1495eb914ad3"
conv = {
"fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
"uq": "uq_%(table_name)s_%(column_0_name)s",
}
def upgrade():
bind = op.get_bind()
insp = sa.engine.reflection.Inspector.from_engine(bind)
# First, drop the foreign key constraint prior to altering columns.
fk_datasources_cluster_name_clusters = (
generic_find_fk_constraint_name(
"datasources", {"cluster_name"}, "clusters", insp
)
or "fk_datasources_cluster_name_clusters"
)
with op.batch_alter_table("datasources", naming_convention=conv) as batch_op:
batch_op.drop_constraint(
fk_datasources_cluster_name_clusters, type_="foreignkey"
)
# Second, make the columns non-nullable.
with op.batch_alter_table("datasources") as batch_op:
batch_op.alter_column(
"cluster_name", existing_type=sa.String(250), nullable=False
)
with op.batch_alter_table("clusters") as batch_op:
batch_op.alter_column(
"cluster_name", existing_type=sa.String(250), nullable=False
)
with op.batch_alter_table("dbs") as batch_op:
batch_op.alter_column(
"database_name", existing_type=sa.String(250), nullable=False
)
with op.batch_alter_table("tables") as batch_op:
batch_op.alter_column(
"table_name", existing_type=sa.String(250), nullable=False
)
# Finally, re-add the foreign key constraint.
with op.batch_alter_table("datasources") as batch_op:
batch_op.create_foreign_key(
fk_datasources_cluster_name_clusters,
"clusters",
["cluster_name"],
["cluster_name"],
)
def downgrade():
bind = op.get_bind()
insp = sa.engine.reflection.Inspector.from_engine(bind)
# First, drop the foreign key constraint prior to altering columns.
fk_datasources_cluster_name_clusters = (
generic_find_fk_constraint_name(
"datasources", {"cluster_name"}, "clusters", insp
)
or "fk_datasources_cluster_name_clusters"
)
with op.batch_alter_table("datasources", naming_convention=conv) as batch_op:
batch_op.drop_constraint(
fk_datasources_cluster_name_clusters, type_="foreignkey"
)
# Second, make the columns nullable.
with op.batch_alter_table("datasources") as batch_op:
batch_op.alter_column(
"cluster_name", existing_type=sa.String(250), nullable=True
)
with op.batch_alter_table("clusters") as batch_op:
batch_op.alter_column(
"cluster_name", existing_type=sa.String(250), nullable=True
)
with op.batch_alter_table("dbs") as batch_op:
batch_op.alter_column(
"database_name", existing_type=sa.String(250), nullable=True
)
with op.batch_alter_table("tables") as batch_op:
batch_op.alter_column("table_name", existing_type=sa.String(250), nullable=True)
# Finally, re-add the foreign key constraint.
with op.batch_alter_table("datasources") as batch_op:
batch_op.create_foreign_key(
fk_datasources_cluster_name_clusters,
"clusters",
["cluster_name"],
["cluster_name"],
        )
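The `conv` mapping used throughout these migrations follows SQLAlchemy's naming-convention templates. As a rough illustration of where the hard-coded fallback names come from (the `constraint_name` helper below is mine, not part of the migration — SQLAlchemy expands these tokens internally when the convention is attached to `MetaData`):

```python
# Naming-convention templates as declared in the migration above.
conv = {
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "uq": "uq_%(table_name)s_%(column_0_name)s",
}

def constraint_name(kind, **tokens):
    """Expand a template with %-style named substitution (illustrative only)."""
    return conv[kind] % tokens

# These reproduce the fallback names the migrations pass to drop_constraint().
fk = constraint_name(
    "fk",
    table_name="datasources",
    column_0_name="cluster_name",
    referred_table_name="clusters",
)
uq = constraint_name("uq", table_name="datasources", column_0_name="cluster_name")
```

This is why, when reflection fails to find a constraint, the code can fall back on a literal such as `"fk_datasources_cluster_name_clusters"`.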
# revision identifiers, used by Alembic.
revision = "d94d33dbe938"
down_revision = "80aa3f04bc82"
from alembic import op
from sqlalchemy import Column, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base
from superset import db
from superset.utils.core import MediumText
Base = declarative_base()
class BaseColumnMixin:
id = Column(Integer, primary_key=True)
column_name = Column(String(255))
description = Column(Text)
type = Column(String(32))
verbose_name = Column(String(1024))
class BaseDatasourceMixin:
id = Column(Integer, primary_key=True)
description = Column(Text)
class BaseMetricMixin:
id = Column(Integer, primary_key=True)
d3format = Column(String(128))
description = Column(Text)
metric_name = Column(String(512))
metric_type = Column(String(32))
verbose_name = Column(String(1024))
warning_text = Column(Text)
class Annotation(Base):
__tablename__ = "annotation"
id = Column(Integer, primary_key=True)
long_descr = Column(Text)
json_metadata = Column(Text)
short_descr = Column(String(500))
class Dashboard(Base):
__tablename__ = "dashboards"
id = Column(Integer, primary_key=True)
css = Column(Text)
dashboard_title = Column(String(500))
description = Column(Text)
json_metadata = Column(Text)
position_json = Column(MediumText())
slug = Column(String(255))
class Database(Base):
__tablename__ = "dbs"
id = Column(Integer, primary_key=True)
database_name = Column(String(250))
extra = Column(Text)
force_ctas_schema = Column(String(250))
sqlalchemy_uri = Column(String(1024))
verbose_name = Column(String(250))
class DruidCluster(Base):
__tablename__ = "clusters"
id = Column(Integer, primary_key=True)
broker_host = Column(String(255))
broker_endpoint = Column(String(255))
cluster_name = Column(String(250))
verbose_name = Column(String(250))
class DruidColumn(BaseColumnMixin, Base):
__tablename__ = "columns"
dimension_spec_json = Column(Text)
class DruidDatasource(BaseDatasourceMixin, Base):
__tablename__ = "datasources"
datasource_name = Column(String(255))
default_endpoint = Column(Text)
fetch_values_from = Column(String(100))
class DruidMetric(BaseMetricMixin, Base):
__tablename__ = "metrics"
json = Column(Text)
class Slice(Base):
__tablename__ = "slices"
id = Column(Integer, primary_key=True)
description = Column(Text)
params = Column(Text)
slice_name = Column(String(250))
viz_type = Column(String(250))
class SqlaTable(BaseDatasourceMixin, Base):
__tablename__ = "tables"
default_endpoint = Column(MediumText())
fetch_values_predicate = Column(String(1000))
main_dttm_col = Column(String(250))
schema = Column(String(255))
sql = Column(Text)
table_name = Column(String(250))
template_params = Column(Text)
class SqlMetric(BaseMetricMixin, Base):
__tablename__ = "sql_metrics"
expression = Column(Text)
class TableColumn(BaseColumnMixin, Base):
__tablename__ = "table_columns"
database_expression = Column(String(255))
expression = Column(Text)
python_date_format = Column(String(255))
def upgrade():
bind = op.get_bind()
session = db.Session(bind=bind)
tables = [
Annotation,
Dashboard,
Database,
DruidCluster,
DruidColumn,
DruidDatasource,
DruidMetric,
Slice,
SqlaTable,
SqlMetric,
TableColumn,
]
for table in tables:
for record in session.query(table).all():
for col in record.__table__.columns.values():
if not col.primary_key:
value = getattr(record, col.name)
if value is not None and value.strip() == "":
setattr(record, col.name, None)
session.commit()
session.close()
def downgrade():
    pass
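The loop above nulls out values that are empty once stripped, column by column. The rule in isolation (the `normalize_blank` name is mine; the migration applies the check inline on string columns):

```python
def normalize_blank(value):
    """Return None for strings that are empty after stripping; pass other values through."""
    if isinstance(value, str) and value.strip() == "":
        return None
    return value

# Whitespace-only and empty strings become None; everything else is untouched.
cleaned = [normalize_blank(v) for v in ["  ", "", "ok", None]]
```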
import logging
import sqlalchemy as sa
from alembic import op
from superset.utils.core import (
generic_find_fk_constraint_name,
generic_find_fk_constraint_names,
generic_find_uq_constraint_name,
)
# revision identifiers, used by Alembic.
revision = "4736ec66ce19"
down_revision = "f959a6652acd"
conv = {
"fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
"uq": "uq_%(table_name)s_%(column_0_name)s",
}
# Helper table for database migrations using minimal schema.
datasources = sa.Table(
"datasources",
sa.MetaData(),
sa.Column("id", sa.Integer, primary_key=True),
sa.Column("datasource_name", sa.String(255)),
)
def upgrade():
bind = op.get_bind()
insp = sa.engine.reflection.Inspector.from_engine(bind)
# Add the new less restrictive uniqueness constraint.
with op.batch_alter_table("datasources", naming_convention=conv) as batch_op:
batch_op.create_unique_constraint(
"uq_datasources_cluster_name", ["cluster_name", "datasource_name"]
)
# Augment the tables which have a foreign key constraint related to the
# datasources.datasource_name column.
for foreign in ["columns", "metrics"]:
with op.batch_alter_table(foreign, naming_convention=conv) as batch_op:
# Add the datasource_id column with the relevant constraints.
batch_op.add_column(sa.Column("datasource_id", sa.Integer))
batch_op.create_foreign_key(
"fk_{}_datasource_id_datasources".format(foreign),
"datasources",
["datasource_id"],
["id"],
)
# Helper table for database migration using minimal schema.
table = sa.Table(
foreign,
sa.MetaData(),
sa.Column("id", sa.Integer, primary_key=True),
sa.Column("datasource_name", sa.String(255)),
sa.Column("datasource_id", sa.Integer),
)
# Migrate the existing data.
for datasource in bind.execute(datasources.select()):
bind.execute(
table.update()
.where(table.c.datasource_name == datasource.datasource_name)
.values(datasource_id=datasource.id)
)
with op.batch_alter_table(foreign, naming_convention=conv) as batch_op:
# Drop the datasource_name column and associated constraints. Note
# due to prior revisions (1226819ee0e3, 3b626e2a6783) there may
            # incorrectly be multiple duplicate constraints.
names = generic_find_fk_constraint_names(
foreign, {"datasource_name"}, "datasources", insp
)
for name in names:
batch_op.drop_constraint(
name or "fk_{}_datasource_name_datasources".format(foreign),
type_="foreignkey",
)
batch_op.drop_column("datasource_name")
try:
# Drop the old more restrictive uniqueness constraint.
with op.batch_alter_table("datasources", naming_convention=conv) as batch_op:
batch_op.drop_constraint(
generic_find_uq_constraint_name(
"datasources", {"datasource_name"}, insp
)
or "uq_datasources_datasource_name",
type_="unique",
)
except Exception as ex:
logging.warning(
"Constraint drop failed, you may want to do this "
"manually on your database. For context, this is a known "
"issue around undeterministic contraint names on Postgres "
"and perhaps more databases through SQLAlchemy."
)
logging.exception(ex)
def downgrade():
bind = op.get_bind()
insp = sa.engine.reflection.Inspector.from_engine(bind)
# Add the new more restrictive uniqueness constraint which is required by
# the foreign key constraints. Note this operation will fail if the
# datasources.datasource_name column is no longer unique.
with op.batch_alter_table("datasources", naming_convention=conv) as batch_op:
batch_op.create_unique_constraint(
"uq_datasources_datasource_name", ["datasource_name"]
)
# Augment the tables which have a foreign key constraint related to the
# datasources.datasource_id column.
for foreign in ["columns", "metrics"]:
with op.batch_alter_table(foreign, naming_convention=conv) as batch_op:
# Add the datasource_name column with the relevant constraints.
batch_op.add_column(sa.Column("datasource_name", sa.String(255)))
batch_op.create_foreign_key(
"fk_{}_datasource_name_datasources".format(foreign),
"datasources",
["datasource_name"],
["datasource_name"],
)
# Helper table for database migration using minimal schema.
table = sa.Table(
foreign,
sa.MetaData(),
sa.Column("id", sa.Integer, primary_key=True),
sa.Column("datasource_name", sa.String(255)),
sa.Column("datasource_id", sa.Integer),
)
# Migrate the existing data.
for datasource in bind.execute(datasources.select()):
bind.execute(
table.update()
.where(table.c.datasource_id == datasource.id)
.values(datasource_name=datasource.datasource_name)
)
with op.batch_alter_table(foreign, naming_convention=conv) as batch_op:
# Drop the datasource_id column and associated constraint.
batch_op.drop_constraint(
"fk_{}_datasource_id_datasources".format(foreign), type_="foreignkey"
)
batch_op.drop_column("datasource_id")
with op.batch_alter_table("datasources", naming_convention=conv) as batch_op:
# Prior to dropping the uniqueness constraint, the foreign key
# associated with the cluster_name column needs to be dropped.
batch_op.drop_constraint(
generic_find_fk_constraint_name(
"datasources", {"cluster_name"}, "clusters", insp
)
or "fk_datasources_cluster_name_clusters",
type_="foreignkey",
)
# Drop the old less restrictive uniqueness constraint.
batch_op.drop_constraint(
generic_find_uq_constraint_name(
"datasources", {"cluster_name", "datasource_name"}, insp
)
or "uq_datasources_cluster_name",
type_="unique",
)
# Re-create the foreign key associated with the cluster_name column.
batch_op.create_foreign_key(
"fk_{}_datasource_id_datasources".format(foreign),
"clusters",
["cluster_name"],
["cluster_name"],
        )
# revision identifiers, used by Alembic.
revision = "ccb74baaa89b"
down_revision = "40f16acf1ba7"
from alembic import op
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import Session
from superset.migrations.shared.security_converge import (
add_pvms,
get_reversed_new_pvms,
get_reversed_pvm_map,
migrate_roles,
Pvm,
)
NEW_PVMS = {"Chart": ("can_read", "can_write",)}
PVM_MAP = {
Pvm("SliceModelView", "can_list"): (Pvm("Chart", "can_read"),),
Pvm("SliceModelView", "can_show"): (Pvm("Chart", "can_read"),),
Pvm("SliceModelView", "can_edit",): (Pvm("Chart", "can_write"),),
Pvm("SliceModelView", "can_delete",): (Pvm("Chart", "can_write"),),
Pvm("SliceModelView", "can_add",): (Pvm("Chart", "can_write"),),
Pvm("SliceModelView", "can_download",): (Pvm("Chart", "can_read"),),
Pvm("SliceModelView", "muldelete",): (Pvm("Chart", "can_write"),),
Pvm("SliceModelView", "can_mulexport",): (Pvm("Chart", "can_read"),),
Pvm("SliceModelView", "can_favorite_status",): (Pvm("Chart", "can_read"),),
Pvm("SliceModelView", "can_cache_screenshot",): (Pvm("Chart", "can_read"),),
Pvm("SliceModelView", "can_screenshot",): (Pvm("Chart", "can_read"),),
Pvm("SliceModelView", "can_data_from_cache",): (Pvm("Chart", "can_read"),),
Pvm("SliceAsync", "can_list",): (Pvm("Chart", "can_read"),),
Pvm("SliceAsync", "muldelete",): (Pvm("Chart", "can_write"),),
}
def upgrade():
bind = op.get_bind()
session = Session(bind=bind)
# Add the new permissions on the migration itself
add_pvms(session, NEW_PVMS)
migrate_roles(session, PVM_MAP)
try:
session.commit()
except SQLAlchemyError as ex:
print(f"An error occurred while upgrading permissions: {ex}")
session.rollback()
def downgrade():
bind = op.get_bind()
session = Session(bind=bind)
# Add the old permissions on the migration itself
add_pvms(session, get_reversed_new_pvms(PVM_MAP))
migrate_roles(session, get_reversed_pvm_map(PVM_MAP))
try:
session.commit()
except SQLAlchemyError as ex:
print(f"An error occurred while downgrading permissions: {ex}")
session.rollback()
# revision identifiers, used by Alembic.
revision = "c617da68de7d"
down_revision = "18dc26817ad2"
from alembic import op
from sqlalchemy import Column, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base
from superset import db
from superset.utils.core import MediumText
Base = declarative_base()
class BaseColumnMixin:
id = Column(Integer, primary_key=True)
column_name = Column(String(255))
description = Column(Text)
type = Column(String(32))
verbose_name = Column(String(1024))
class BaseDatasourceMixin:
id = Column(Integer, primary_key=True)
description = Column(Text)
class BaseMetricMixin:
id = Column(Integer, primary_key=True)
d3format = Column(String(128))
description = Column(Text)
metric_name = Column(String(512))
metric_type = Column(String(32))
verbose_name = Column(String(1024))
warning_text = Column(Text)
class Annotation(Base):
__tablename__ = "annotation"
id = Column(Integer, primary_key=True)
long_descr = Column(Text)
json_metadata = Column(Text)
short_descr = Column(String(500))
class Dashboard(Base):
__tablename__ = "dashboards"
id = Column(Integer, primary_key=True)
css = Column(Text)
dashboard_title = Column(String(500))
description = Column(Text)
json_metadata = Column(Text)
position_json = Column(MediumText())
slug = Column(String(255))
class Database(Base):
__tablename__ = "dbs"
id = Column(Integer, primary_key=True)
database_name = Column(String(250))
extra = Column(Text)
force_ctas_schema = Column(String(250))
sqlalchemy_uri = Column(String(1024))
verbose_name = Column(String(250))
class DruidCluster(Base):
__tablename__ = "clusters"
id = Column(Integer, primary_key=True)
broker_host = Column(String(255))
broker_endpoint = Column(String(255))
cluster_name = Column(String(250))
verbose_name = Column(String(250))
class DruidColumn(BaseColumnMixin, Base):
__tablename__ = "columns"
dimension_spec_json = Column(Text)
class DruidDatasource(BaseDatasourceMixin, Base):
__tablename__ = "datasources"
datasource_name = Column(String(255))
default_endpoint = Column(Text)
fetch_values_from = Column(String(100))
class DruidMetric(BaseMetricMixin, Base):
__tablename__ = "metrics"
json = Column(Text)
class Slice(Base):
__tablename__ = "slices"
id = Column(Integer, primary_key=True)
description = Column(Text)
params = Column(Text)
slice_name = Column(String(250))
viz_type = Column(String(250))
class SqlaTable(BaseDatasourceMixin, Base):
__tablename__ = "tables"
default_endpoint = Column(MediumText())
fetch_values_predicate = Column(String(1000))
main_dttm_col = Column(String(250))
schema = Column(String(255))
sql = Column(Text)
table_name = Column(String(250))
template_params = Column(Text)
class SqlMetric(BaseMetricMixin, Base):
__tablename__ = "sql_metrics"
expression = Column(Text)
class TableColumn(BaseColumnMixin, Base):
__tablename__ = "table_columns"
database_expression = Column(String(255))
expression = Column(Text)
python_date_format = Column(String(255))
def upgrade():
bind = op.get_bind()
session = db.Session(bind=bind)
tables = [
Annotation,
Dashboard,
Database,
DruidCluster,
DruidColumn,
DruidDatasource,
DruidMetric,
Slice,
SqlaTable,
SqlMetric,
TableColumn,
]
for table in tables:
for record in session.query(table).all():
for col in record.__table__.columns.values():
if not col.primary_key:
if getattr(record, col.name) == "":
setattr(record, col.name, None)
session.commit()
session.close()
def downgrade():
    pass
# revision identifiers, used by Alembic.
from alembic import op
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from superset import db
revision = "5afa9079866a"
down_revision = "db4b49eb0782"
Base = declarative_base()
class Sqlatable(Base):
__tablename__ = "tables"
id = Column(Integer, primary_key=True)
perm = Column(String(1000))
schema_perm = Column(String(1000))
schema = Column(String(255))
database_id = Column(Integer, ForeignKey("dbs.id"), nullable=False)
database = relationship("Database", foreign_keys=[database_id])
class Slice(Base):
__tablename__ = "slices"
id = Column(Integer, primary_key=True)
datasource_id = Column(Integer)
datasource_type = Column(String(200))
schema_perm = Column(String(1000))
class Database(Base):
__tablename__ = "dbs"
id = Column(Integer, primary_key=True)
database_name = Column(String(250))
verbose_name = Column(String(250), unique=True)
def upgrade():
op.add_column(
"datasources", Column("schema_perm", String(length=1000), nullable=True)
)
op.add_column("slices", Column("schema_perm", String(length=1000), nullable=True))
op.add_column("tables", Column("schema_perm", String(length=1000), nullable=True))
bind = op.get_bind()
session = db.Session(bind=bind)
for t in session.query(Sqlatable).all():
db_name = (
t.database.verbose_name
if t.database.verbose_name
else t.database.database_name
)
if t.schema:
t.schema_perm = f"[{db_name}].[{t.schema}]"
table_slices = (
session.query(Slice)
.filter_by(datasource_type="table")
.filter_by(datasource_id=t.id)
.all()
)
for s in table_slices:
s.schema_perm = t.schema_perm
session.commit()
def downgrade():
op.drop_column("tables", "schema_perm")
op.drop_column("datasources", "schema_perm")
op.drop_column("slices", "schema_perm") | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/migrations/versions/5afa9079866a_serialize_schema_permissions_py.py | 0.426441 | 0.17883 | 5afa9079866a_serialize_schema_permissions_py.py | pypi |
# revision identifiers, used by Alembic.
revision = "143b6f2815da"
down_revision = "e323605f370a"
import json
from typing import Any, Dict, List, Tuple
from alembic import op
from sqlalchemy import and_, Column, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base
from superset import db
Base = declarative_base()
class Slice(Base):
__tablename__ = "slices"
id = Column(Integer, primary_key=True)
viz_type = Column(String(250))
params = Column(Text)
VALID_RENDERERS = (
"Table With Subtotal",
"Table With Subtotal Heatmap",
"Table With Subtotal Col Heatmap",
"Table With Subtotal Row Heatmap",
"Table With Subtotal Barchart",
"Table With Subtotal Col Barchart",
"Table With Subtotal Row Barchart",
)
def upgrade():
bind = op.get_bind()
session = db.Session(bind=bind)
slices = (
session.query(Slice)
.filter(
and_(
Slice.viz_type == "pivot_table_v2",
Slice.params.like('%"tableRenderer%'),
)
)
.all()
)
changed_slices = 0
for slice in slices:
try:
params = json.loads(slice.params)
table_renderer = params.pop("tableRenderer", None)
conditional_formatting = params.get("conditional_formatting")
# don't update unless table_renderer is valid and
# conditional_formatting is undefined
if table_renderer in VALID_RENDERERS and conditional_formatting is None:
metric_labels = [
metric if isinstance(metric, str) else metric["label"]
for metric in params.get("metrics")
]
params["conditional_formatting"] = [
{
"colorScheme": "rgb(255,0,0)",
"column": metric_label,
"operator": "None",
}
for metric_label in metric_labels
]
changed_slices += 1
slice.params = json.dumps(params, sort_keys=True)
except Exception as e:
print(f"Parsing json_metadata for slice {slice.id} failed.")
raise e
session.commit()
session.close()
print(f"Upgraded {changed_slices} slices.")
def downgrade():
# slices can't be downgraded
    pass
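The upgrade above backfills one no-op formatting rule per metric so that charts migrated from the heatmap/barchart renderers keep a `conditional_formatting` entry. The label extraction and rule construction, isolated from the ORM plumbing (the helper name is mine):

```python
def default_conditional_formatting(metrics):
    """One placeholder red rule per metric label, matching the migration's backfill."""
    # Metrics are either plain label strings or dicts carrying a "label" key.
    labels = [m if isinstance(m, str) else m["label"] for m in metrics]
    return [
        {"colorScheme": "rgb(255,0,0)", "column": label, "operator": "None"}
        for label in labels
    ]

rules = default_conditional_formatting(
    ["count", {"label": "SUM(sales)", "expressionType": "SQL"}]
)
```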
# revision identifiers, used by Alembic.
revision = "f1410ed7ec95"
down_revision = "d416d0d715cc"
import json
from typing import Any, Dict, Iterable, Tuple
from alembic import op
from sqlalchemy import Column, Integer, Text
from sqlalchemy.ext.declarative import declarative_base
from superset import db
Base = declarative_base()
class Dashboard(Base):
"""Declarative class to do query in upgrade"""
__tablename__ = "dashboards"
id = Column(Integer, primary_key=True)
json_metadata = Column(Text)
def upgrade_filters(native_filters: Iterable[Dict[str, Any]]) -> int:
"""
Move `defaultValue` into `defaultDataMask.filterState`
"""
changed_filters = 0
for native_filter in native_filters:
default_value = native_filter.pop("defaultValue", None)
if default_value is not None:
changed_filters += 1
default_data_mask = {}
default_data_mask["filterState"] = {"value": default_value}
native_filter["defaultDataMask"] = default_data_mask
return changed_filters
def downgrade_filters(native_filters: Iterable[Dict[str, Any]]) -> int:
"""
Move `defaultDataMask.filterState` into `defaultValue`
"""
changed_filters = 0
for native_filter in native_filters:
default_data_mask = native_filter.pop("defaultDataMask", {})
filter_state = default_data_mask.get("filterState")
if filter_state is not None:
changed_filters += 1
value = filter_state["value"]
native_filter["defaultValue"] = value
return changed_filters
def upgrade_dashboard(dashboard: Dict[str, Any]) -> Tuple[int, int]:
changed_filters, changed_filter_sets = 0, 0
    # upgrade native select filter metadata
native_filters = dashboard.get("native_filter_configuration")
if native_filters:
changed_filters += upgrade_filters(native_filters)
# upgrade filter sets
filter_sets = dashboard.get("filter_sets_configuration", [])
for filter_set in filter_sets:
if upgrade_filters(filter_set.get("nativeFilters", {}).values()):
changed_filter_sets += 1
return changed_filters, changed_filter_sets
def upgrade():
bind = op.get_bind()
session = db.Session(bind=bind)
dashboards = (
session.query(Dashboard)
.filter(Dashboard.json_metadata.like('%"native_filter_configuration"%'))
.all()
)
changed_filters, changed_filter_sets = 0, 0
for dashboard in dashboards:
try:
json_metadata = json.loads(dashboard.json_metadata)
upgrades = upgrade_dashboard(json_metadata)
changed_filters += upgrades[0]
changed_filter_sets += upgrades[1]
dashboard.json_metadata = json.dumps(json_metadata, sort_keys=True)
except Exception as e:
print(f"Parsing json_metadata for dashboard {dashboard.id} failed.")
raise e
session.commit()
session.close()
print(f"Upgraded {changed_filters} filters and {changed_filter_sets} filter sets.")
def downgrade_dashboard(dashboard: Dict[str, Any]) -> Tuple[int, int]:
changed_filters, changed_filter_sets = 0, 0
    # downgrade native select filter metadata
native_filters = dashboard.get("native_filter_configuration")
if native_filters:
changed_filters += downgrade_filters(native_filters)
    # downgrade filter sets
filter_sets = dashboard.get("filter_sets_configuration", [])
for filter_set in filter_sets:
if downgrade_filters(filter_set.get("nativeFilters", {}).values()):
changed_filter_sets += 1
return changed_filters, changed_filter_sets
def downgrade():
bind = op.get_bind()
session = db.Session(bind=bind)
dashboards = (
session.query(Dashboard)
.filter(Dashboard.json_metadata.like('%"native_filter_configuration"%'))
.all()
)
changed_filters, changed_filter_sets = 0, 0
for dashboard in dashboards:
try:
json_metadata = json.loads(dashboard.json_metadata)
downgrades = downgrade_dashboard(json_metadata)
changed_filters += downgrades[0]
changed_filter_sets += downgrades[1]
dashboard.json_metadata = json.dumps(json_metadata, sort_keys=True)
except Exception as e:
print(f"Parsing json_metadata for dashboard {dashboard.id} failed.")
raise e
session.commit()
session.close()
print(
f"Downgraded {changed_filters} filters and {changed_filter_sets} filter sets."
    )
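`upgrade_filters` and `downgrade_filters` above are inverses for any filter that carries a default. A single-filter round trip on plain dicts (the standalone function names here are mine; the key names match the migration):

```python
def upgrade_filter(native_filter):
    # defaultValue -> defaultDataMask.filterState.value, as in upgrade_filters().
    default_value = native_filter.pop("defaultValue", None)
    if default_value is not None:
        native_filter["defaultDataMask"] = {"filterState": {"value": default_value}}
    return native_filter

def downgrade_filter(native_filter):
    # Inverse move, as in downgrade_filters().
    filter_state = native_filter.pop("defaultDataMask", {}).get("filterState")
    if filter_state is not None:
        native_filter["defaultValue"] = filter_state["value"]
    return native_filter

original = {"id": "NATIVE_FILTER-1", "defaultValue": ["US"]}
round_tripped = downgrade_filter(upgrade_filter(dict(original)))
```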
# revision identifiers, used by Alembic.
revision = "1f6dca87d1a2"
down_revision = "4b84f97828aa"
from alembic import op
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import Session
from superset.migrations.shared.security_converge import (
add_pvms,
get_reversed_new_pvms,
get_reversed_pvm_map,
migrate_roles,
Pvm,
)
NEW_PVMS = {"Dashboard": ("can_read", "can_write",)}
PVM_MAP = {
Pvm("DashboardModelView", "can_add"): (Pvm("Dashboard", "can_write"),),
Pvm("DashboardModelView", "can_delete"): (Pvm("Dashboard", "can_write"),),
Pvm("DashboardModelView", "can_download_dashboards",): (
Pvm("Dashboard", "can_read"),
),
Pvm("DashboardModelView", "can_edit",): (Pvm("Dashboard", "can_write"),),
Pvm("DashboardModelView", "can_favorite_status",): (Pvm("Dashboard", "can_read"),),
Pvm("DashboardModelView", "can_list",): (Pvm("Dashboard", "can_read"),),
Pvm("DashboardModelView", "can_mulexport",): (Pvm("Dashboard", "can_read"),),
Pvm("DashboardModelView", "can_show",): (Pvm("Dashboard", "can_read"),),
Pvm("DashboardModelView", "muldelete",): (Pvm("Dashboard", "can_write"),),
Pvm("DashboardModelView", "mulexport",): (Pvm("Dashboard", "can_read"),),
Pvm("DashboardModelViewAsync", "can_list",): (Pvm("Dashboard", "can_read"),),
Pvm("DashboardModelViewAsync", "muldelete",): (Pvm("Dashboard", "can_write"),),
Pvm("DashboardModelViewAsync", "mulexport",): (Pvm("Dashboard", "can_read"),),
Pvm("Dashboard", "can_new",): (Pvm("Dashboard", "can_write"),),
}
def upgrade():
bind = op.get_bind()
session = Session(bind=bind)
# Add the new permissions on the migration itself
add_pvms(session, NEW_PVMS)
migrate_roles(session, PVM_MAP)
try:
session.commit()
except SQLAlchemyError as ex:
print(f"An error occurred while upgrading permissions: {ex}")
session.rollback()
def downgrade():
bind = op.get_bind()
session = Session(bind=bind)
# Add the old permissions on the migration itself
add_pvms(session, get_reversed_new_pvms(PVM_MAP))
migrate_roles(session, get_reversed_pvm_map(PVM_MAP))
try:
session.commit()
except SQLAlchemyError as ex:
print(f"An error occurred while downgrading permissions: {ex}")
session.rollback()
import re
from alembic import op
from sqlalchemy import Column, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base
from superset import db
from superset.utils.core import MediumText
Base = declarative_base()
class BaseColumnMixin:
id = Column(Integer, primary_key=True)
column_name = Column(String(255))
description = Column(Text)
type = Column(String(32))
verbose_name = Column(String(1024))
class BaseDatasourceMixin:
id = Column(Integer, primary_key=True)
description = Column(Text)
class BaseMetricMixin:
id = Column(Integer, primary_key=True)
d3format = Column(String(128))
description = Column(Text)
metric_name = Column(String(512))
metric_type = Column(String(32))
verbose_name = Column(String(1024))
warning_text = Column(Text)
class Annotation(Base):
__tablename__ = "annotation"
id = Column(Integer, primary_key=True)
long_descr = Column(Text)
json_metadata = Column(Text)
short_descr = Column(String(500))
class Dashboard(Base):
__tablename__ = "dashboards"
id = Column(Integer, primary_key=True)
css = Column(Text)
dashboard_title = Column(String(500))
description = Column(Text)
json_metadata = Column(Text)
position_json = Column(MediumText())
slug = Column(String(255))
class Database(Base):
__tablename__ = "dbs"
id = Column(Integer, primary_key=True)
database_name = Column(String(250))
extra = Column(Text)
force_ctas_schema = Column(String(250))
sqlalchemy_uri = Column(String(1024))
verbose_name = Column(String(250))
class DruidCluster(Base):
__tablename__ = "clusters"
id = Column(Integer, primary_key=True)
broker_host = Column(String(255))
broker_endpoint = Column(String(255))
cluster_name = Column(String(250))
verbose_name = Column(String(250))
class DruidColumn(BaseColumnMixin, Base):
__tablename__ = "columns"
dimension_spec_json = Column(Text)
class DruidDatasource(BaseDatasourceMixin, Base):
__tablename__ = "datasources"
datasource_name = Column(String(255))
default_endpoint = Column(Text)
fetch_values_from = Column(String(100))
class DruidMetric(BaseMetricMixin, Base):
__tablename__ = "metrics"
json = Column(Text)
class Slice(Base):
__tablename__ = "slices"
id = Column(Integer, primary_key=True)
description = Column(Text)
params = Column(Text)
slice_name = Column(String(250))
viz_type = Column(String(250))
class SqlaTable(BaseDatasourceMixin, Base):
__tablename__ = "tables"
default_endpoint = Column(MediumText())
fetch_values_predicate = Column(String(1000))
main_dttm_col = Column(String(250))
schema = Column(String(255))
sql = Column(Text)
table_name = Column(String(250))
template_params = Column(Text)
class SqlMetric(BaseMetricMixin, Base):
__tablename__ = "sql_metrics"
expression = Column(Text)
class TableColumn(BaseColumnMixin, Base):
__tablename__ = "table_columns"
expression = Column(Text)
python_date_format = Column(String(255))
# revision identifiers, used by Alembic.
revision = "258b5280a45e"
down_revision = "11c737c17cc6"
def upgrade():
bind = op.get_bind()
session = db.Session(bind=bind)
tables = [
Annotation,
Dashboard,
Database,
DruidCluster,
DruidColumn,
DruidDatasource,
DruidMetric,
Slice,
SqlaTable,
SqlMetric,
TableColumn,
]
for table in tables:
for record in session.query(table).all():
for col in record.__table__.columns.values():
if not col.primary_key:
value = getattr(record, col.name)
if value is not None and re.search(r"^\s+|\s+$", value):
setattr(record, col.name, value.strip())
session.commit()
session.close()
def downgrade():
    pass
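The migration above triggers a `.strip()` only when `re.search(r"^\s+|\s+$", value)` finds leading or trailing whitespace. The predicate in isolation (the helper name is mine):

```python
import re

def has_outer_whitespace(value):
    """True when a string starts or ends with whitespace (the migration's trigger)."""
    return bool(re.search(r"^\s+|\s+$", value))

# Interior whitespace alone does not match, so "a b" is left untouched.
checks = [has_outer_whitespace(s) for s in [" x", "x ", "x", "a b"]]
```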
import sqlalchemy as sa
from alembic import op
from superset import db
from superset.utils.core import (
generic_find_fk_constraint_name,
generic_find_uq_constraint_name,
)
# revision identifiers, used by Alembic.
revision = "e96dbf2cfef0"
down_revision = "817e1c9b09d0"
def upgrade():
bind = op.get_bind()
insp = sa.engine.reflection.Inspector.from_engine(bind)
# Add cluster_id column
with op.batch_alter_table("datasources") as batch_op:
batch_op.add_column(sa.Column("cluster_id", sa.Integer()))
# Update cluster_id values
metadata = sa.MetaData(bind=bind)
datasources = sa.Table("datasources", metadata, autoload=True)
clusters = sa.Table("clusters", metadata, autoload=True)
statement = datasources.update().values(
cluster_id=sa.select([clusters.c.id])
.where(datasources.c.cluster_name == clusters.c.cluster_name)
.as_scalar()
)
bind.execute(statement)
with op.batch_alter_table("datasources") as batch_op:
# Drop cluster_name column
fk_constraint_name = generic_find_fk_constraint_name(
"datasources", {"cluster_name"}, "clusters", insp
)
uq_constraint_name = generic_find_uq_constraint_name(
"datasources", {"cluster_name", "datasource_name"}, insp
)
batch_op.drop_constraint(fk_constraint_name, type_="foreignkey")
batch_op.drop_constraint(uq_constraint_name, type_="unique")
batch_op.drop_column("cluster_name")
# Add constraints to cluster_id column
batch_op.alter_column("cluster_id", existing_type=sa.Integer, nullable=False)
batch_op.create_unique_constraint(
"uq_datasources_cluster_id", ["cluster_id", "datasource_name"]
)
batch_op.create_foreign_key(
"fk_datasources_cluster_id_clusters", "clusters", ["cluster_id"], ["id"]
)
def downgrade():
bind = op.get_bind()
insp = sa.engine.reflection.Inspector.from_engine(bind)
# Add cluster_name column
with op.batch_alter_table("datasources") as batch_op:
batch_op.add_column(sa.Column("cluster_name", sa.String(250)))
# Update cluster_name values
metadata = sa.MetaData(bind=bind)
datasources = sa.Table("datasources", metadata, autoload=True)
clusters = sa.Table("clusters", metadata, autoload=True)
statement = datasources.update().values(
cluster_name=sa.select([clusters.c.cluster_name])
.where(datasources.c.cluster_id == clusters.c.id)
.as_scalar()
)
bind.execute(statement)
with op.batch_alter_table("datasources") as batch_op:
# Drop cluster_id column
fk_constraint_name = generic_find_fk_constraint_name(
"datasources", {"id"}, "clusters", insp
)
uq_constraint_name = generic_find_uq_constraint_name(
"datasources", {"cluster_id", "datasource_name"}, insp
)
batch_op.drop_constraint(fk_constraint_name, type_="foreignkey")
batch_op.drop_constraint(uq_constraint_name, type_="unique")
batch_op.drop_column("cluster_id")
# Add constraints to cluster_name column
batch_op.alter_column(
"cluster_name", existing_type=sa.String(250), nullable=False
)
batch_op.create_unique_constraint(
"uq_datasources_cluster_name", ["cluster_name", "datasource_name"]
)
batch_op.create_foreign_key(
"fk_datasources_cluster_name_clusters",
"clusters",
["cluster_name"],
["cluster_name"],
        )

| /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/migrations/versions/e96dbf2cfef0_datasource_cluster_fk.py | 0.470737 | 0.232495 | e96dbf2cfef0_datasource_cluster_fk.py | pypi |
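The correlated-subquery UPDATE at the heart of this migration can be reproduced against an in-memory SQLite database. A sketch using the modern `scalar_subquery()` spelling (SQLAlchemy 1.4+) with hypothetical sample rows:

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
metadata = sa.MetaData()
clusters = sa.Table(
    "clusters", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("cluster_name", sa.String(250)),
)
datasources = sa.Table(
    "datasources", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("cluster_name", sa.String(250)),
    sa.Column("cluster_id", sa.Integer),
)
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(clusters.insert(), [{"id": 1, "cluster_name": "druid"}])
    conn.execute(datasources.insert(), [{"id": 10, "cluster_name": "druid"}])
    # Fill cluster_id from the clusters table, as upgrade() above does.
    conn.execute(
        datasources.update().values(
            cluster_id=sa.select(clusters.c.id)
            .where(datasources.c.cluster_name == clusters.c.cluster_name)
            .scalar_subquery()
        )
    )
    filled = conn.execute(sa.select(datasources.c.cluster_id)).scalar_one()
```

SQLAlchemy auto-correlates the `datasources` table out of the subquery because it is the target of the enclosing UPDATE, which is what makes the row-by-row match work.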
# revision identifiers, used by Alembic.
revision = "070c043f2fdb"
down_revision = "41ce8799acc3"
import json
from alembic import op
from sqlalchemy import and_, Boolean, Column, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base
from superset import db
Base = declarative_base()
class Slice(Base):
__tablename__ = "slices"
id = Column(Integer, primary_key=True)
params = Column(Text)
datasource_id = Column(Integer)
datasource_type = Column(String(200))
class SqlaTable(Base):
__tablename__ = "tables"
id = Column(Integer, primary_key=True)
main_dttm_col = Column(String(250))
class TableColumn(Base):
__tablename__ = "table_columns"
id = Column(Integer, primary_key=True)
table_id = Column(Integer)
is_dttm = Column(Boolean)
column_name = Column(String(255))
def upgrade():
"""
Adds the granularity param to charts without it populated. This is required for
time range filtering to work properly. Uses the following approach:
- Find all charts without a granularity or granularity_sqla param.
- Get the dataset that backs the chart.
- If the dataset has the main dttm column set, use it.
- Otherwise, find all the dttm columns in the dataset and use the first one (this
matches the behavior of Explore view on the frontend)
- If no dttm columns exist in the dataset, don't change the chart.
"""
bind = op.get_bind()
session = db.Session(bind=bind)
slices_changed = 0
for slc in (
session.query(Slice)
.filter(
and_(
Slice.datasource_type == "table", Slice.params.notlike('%"granularity%')
)
)
.all()
):
try:
params = json.loads(slc.params)
if "granularity" in params or "granularity_sqla" in params:
continue
table = session.query(SqlaTable).get(slc.datasource_id)
if not table:
continue
if table.main_dttm_col:
params["granularity"] = table.main_dttm_col
slc.params = json.dumps(params, sort_keys=True)
print(f"Set granularity for slice {slc.id} to {table.main_dttm_col}")
slices_changed += 1
continue
table_columns = (
session.query(TableColumn)
.filter(TableColumn.table_id == table.id)
.filter(TableColumn.is_dttm == True)
.all()
)
if len(table_columns):
params["granularity"] = table_columns[0].column_name
slc.params = json.dumps(params, sort_keys=True)
print(
f"Set granularity for slice {slc.id} to {table_columns[0].column_name}"
)
slices_changed += 1
except Exception as e:
print(e)
print(f"Parsing params for slice {slc.id} failed.")
print(f"{slices_changed} slices altered")
session.commit()
session.close()
def downgrade():
"""
It's impossible to downgrade this migration.
"""
    pass

| /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/migrations/versions/070c043f2fdb_add_granularity_to_charts_where_missing.py | 0.435061 | 0.250511 | 070c043f2fdb_add_granularity_to_charts_where_missing.py | pypi |
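The column-selection rule in `upgrade()` — prefer the dataset's main datetime column, otherwise fall back to its first datetime column, otherwise leave the chart alone — can be isolated as a pure function. A hypothetical sketch for illustration:

```python
import json

def fill_granularity(params_json, main_dttm_col, dttm_column_names):
    """Return updated chart params, or the input unchanged if no fix applies."""
    params = json.loads(params_json)
    if "granularity" in params or "granularity_sqla" in params:
        return params_json
    if main_dttm_col:
        params["granularity"] = main_dttm_col
    elif dttm_column_names:
        # Matches Explore's behavior: take the first datetime column.
        params["granularity"] = dttm_column_names[0]
    else:
        return params_json
    return json.dumps(params, sort_keys=True)
```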
# revision identifiers, used by Alembic.
revision = "e9df189e5c7e"
down_revision = "7f2635b51f5d"
from alembic import op
from sqlalchemy import Column, engine, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base
from superset import db
from superset.utils.core import generic_find_uq_constraint_name
Base = declarative_base()
conv = {"uq": "uq_%(table_name)s_%(column_0_name)s"}
class BaseMetricMixin:
id = Column(Integer, primary_key=True)
class DruidMetric(BaseMetricMixin, Base):
__tablename__ = "metrics"
datasource_id = Column(Integer)
class SqlMetric(BaseMetricMixin, Base):
__tablename__ = "sql_metrics"
table_id = Column(Integer)
def upgrade():
bind = op.get_bind()
session = db.Session(bind=bind)
# Delete the orphaned metrics records.
for record in session.query(DruidMetric).all():
if record.datasource_id is None:
session.delete(record)
session.commit()
# Enforce that metrics.metric_name column be non-nullable.
with op.batch_alter_table("metrics") as batch_op:
batch_op.alter_column("metric_name", existing_type=String(255), nullable=False)
# Enforce that metrics.json column be non-nullable.
with op.batch_alter_table("metrics") as batch_op:
batch_op.alter_column("json", existing_type=Text, nullable=False)
# Delete the orphaned sql_metrics records.
for record in session.query(SqlMetric).all():
if record.table_id is None:
session.delete(record)
session.commit()
    # Reduce the size of the sql_metrics.metric_name column for constraint
    # viability and enforce that it be non-nullable.
with op.batch_alter_table("sql_metrics") as batch_op:
batch_op.alter_column(
"metric_name", existing_type=String(512), nullable=False, type_=String(255)
)
# Enforce that sql_metrics.expression column be non-nullable.
with op.batch_alter_table("sql_metrics") as batch_op:
batch_op.alter_column("expression", existing_type=Text, nullable=False)
# Add the missing uniqueness constraint to the sql_metrics table.
with op.batch_alter_table("sql_metrics", naming_convention=conv) as batch_op:
batch_op.create_unique_constraint(
"uq_sql_metrics_metric_name", ["metric_name", "table_id"]
)
def downgrade():
bind = op.get_bind()
insp = engine.reflection.Inspector.from_engine(bind)
# Remove the missing uniqueness constraint from the sql_metrics table.
with op.batch_alter_table("sql_metrics", naming_convention=conv) as batch_op:
batch_op.drop_constraint(
generic_find_uq_constraint_name(
"sql_metrics", {"metric_name", "table_id"}, insp
)
or "uq_sql_metrics_table_id",
type_="unique",
)
# Restore the size of the sql_metrics.metric_name column and forego that it
# be non-nullable.
with op.batch_alter_table("sql_metrics") as batch_op:
batch_op.alter_column(
"metric_name", existing_type=String(255), nullable=True, type_=String(512)
)
# Forego that the sql_metrics.expression column be non-nullable.
with op.batch_alter_table("sql_metrics") as batch_op:
batch_op.alter_column("expression", existing_type=Text, nullable=True)
# Forego that the metrics.metric_name column be non-nullable.
with op.batch_alter_table("metrics") as batch_op:
batch_op.alter_column("metric_name", existing_type=String(255), nullable=True)
# Forego that the metrics.json column be non-nullable.
with op.batch_alter_table("metrics") as batch_op:
        batch_op.alter_column("json", existing_type=Text, nullable=True)

| /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/migrations/versions/e9df189e5c7e_update_base_metrics.py | 0.600071 | 0.199893 | e9df189e5c7e_update_base_metrics.py | pypi |
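The `conv` naming convention above is what produces names like `uq_sql_metrics_metric_name`. The template expansion can be checked by hand — this is illustrative only; SQLAlchemy applies the convention via `MetaData(naming_convention=...)`, and `expand` is a hypothetical helper:

```python
conv = {"uq": "uq_%(table_name)s_%(column_0_name)s"}

def expand(kind, table_name, column_0_name):
    # Simple %-style expansion of the convention template.
    return conv[kind] % {"table_name": table_name, "column_0_name": column_0_name}
```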
from alembic import op
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import Session
# revision identifiers, used by Alembic.
from superset.migrations.shared.security_converge import (
add_pvms,
get_reversed_new_pvms,
get_reversed_pvm_map,
migrate_roles,
Pvm,
)
revision = "c25cb2c78727"
down_revision = "ccb74baaa89b"
NEW_PVMS = {"Annotation": ("can_read", "can_write")}
PVM_MAP = {
    Pvm("AnnotationLayerModelView", "can_delete"): (Pvm("Annotation", "can_write"),),
    Pvm("AnnotationLayerModelView", "can_list"): (Pvm("Annotation", "can_read"),),
    Pvm("AnnotationLayerModelView", "can_show"): (Pvm("Annotation", "can_read"),),
    Pvm("AnnotationLayerModelView", "can_add"): (Pvm("Annotation", "can_write"),),
    Pvm("AnnotationLayerModelView", "can_edit"): (Pvm("Annotation", "can_write"),),
    Pvm("AnnotationModelView", "can_annotation"): (Pvm("Annotation", "can_read"),),
    Pvm("AnnotationModelView", "can_show"): (Pvm("Annotation", "can_read"),),
    Pvm("AnnotationModelView", "can_add"): (Pvm("Annotation", "can_write"),),
    Pvm("AnnotationModelView", "can_delete"): (Pvm("Annotation", "can_write"),),
    Pvm("AnnotationModelView", "can_edit"): (Pvm("Annotation", "can_write"),),
    Pvm("AnnotationModelView", "can_list"): (Pvm("Annotation", "can_read"),),
}
def upgrade():
bind = op.get_bind()
session = Session(bind=bind)
# Add the new permissions on the migration itself
add_pvms(session, NEW_PVMS)
migrate_roles(session, PVM_MAP)
try:
session.commit()
except SQLAlchemyError as ex:
print(f"An error occurred while upgrading annotation permissions: {ex}")
session.rollback()
def downgrade():
bind = op.get_bind()
session = Session(bind=bind)
# Add the old permissions on the migration itself
add_pvms(session, get_reversed_new_pvms(PVM_MAP))
migrate_roles(session, get_reversed_pvm_map(PVM_MAP))
try:
session.commit()
except SQLAlchemyError as ex:
print(f"An error occurred while downgrading annotation permissions: {ex}")
session.rollback()
| /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/migrations/versions/c25cb2c78727_security_converge_annotations.py | 0.442637 | 0.21684 | c25cb2c78727_security_converge_annotations.py | pypi |
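The reversal helpers imported above invert the PVM map so `downgrade()` can restore the old permissions. A simplified, hypothetical stand-in (using plain tuples instead of `Pvm`, not the actual `get_reversed_pvm_map` implementation) shows the shape of that inversion:

```python
def reverse_pvm_map(pvm_map):
    # Invert {old: (new, ...)} into {new: (old, ...)} so each new
    # permission points back at the old ones it replaced.
    reversed_map = {}
    for old, news in pvm_map.items():
        for new in news:
            reversed_map.setdefault(new, []).append(old)
    return {new: tuple(olds) for new, olds in reversed_map.items()}
```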
from typing import Dict, Optional, Tuple
import pandas as pd
from sqlalchemy import BigInteger, Date, DateTime, inspect, String
from superset import app, db
from superset.models.slice import Slice
from ..utils.database import get_example_database
from .helpers import (
get_example_data,
get_slice_json,
get_table_connector_registry,
merge_slice,
misc_dash_slices,
)
def load_multiformat_time_series( # pylint: disable=too-many-locals
only_metadata: bool = False, force: bool = False
) -> None:
"""Loading time series data from a zip file in the repo"""
tbl_name = "multiformat_time_series"
database = get_example_database()
engine = database.get_sqla_engine()
schema = inspect(engine).default_schema_name
table_exists = database.has_table_by_name(tbl_name)
if not only_metadata and (not table_exists or force):
data = get_example_data("multiformat_time_series.json.gz")
pdf = pd.read_json(data)
# TODO(bkyryliuk): move load examples data into the pytest fixture
if database.backend == "presto":
pdf.ds = pd.to_datetime(pdf.ds, unit="s")
pdf.ds = pdf.ds.dt.strftime("%Y-%m-%d")
pdf.ds2 = pd.to_datetime(pdf.ds2, unit="s")
            pdf.ds2 = pdf.ds2.dt.strftime("%Y-%m-%d %H:%M:%S")
else:
pdf.ds = pd.to_datetime(pdf.ds, unit="s")
pdf.ds2 = pd.to_datetime(pdf.ds2, unit="s")
pdf.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={
"ds": String(255) if database.backend == "presto" else Date,
"ds2": String(255) if database.backend == "presto" else DateTime,
"epoch_s": BigInteger,
"epoch_ms": BigInteger,
"string0": String(100),
"string1": String(100),
"string2": String(100),
"string3": String(100),
},
index=False,
)
print("Done loading table!")
print("-" * 80)
print(f"Creating table [{tbl_name}] reference")
table = get_table_connector_registry()
obj = db.session.query(table).filter_by(table_name=tbl_name).first()
if not obj:
obj = table(table_name=tbl_name, schema=schema)
obj.main_dttm_col = "ds"
obj.database = database
obj.filter_select_enabled = True
dttm_and_expr_dict: Dict[str, Tuple[Optional[str], None]] = {
"ds": (None, None),
"ds2": (None, None),
"epoch_s": ("epoch_s", None),
"epoch_ms": ("epoch_ms", None),
"string2": ("%Y%m%d-%H%M%S", None),
"string1": ("%Y-%m-%d^%H:%M:%S", None),
"string0": ("%Y-%m-%d %H:%M:%S.%f", None),
"string3": ("%Y/%m/%d%H:%M:%S.%f", None),
}
for col in obj.columns:
dttm_and_expr = dttm_and_expr_dict[col.column_name]
col.python_date_format = dttm_and_expr[0]
        col.expression = dttm_and_expr[1]
col.is_dttm = True
db.session.merge(obj)
db.session.commit()
obj.fetch_metadata()
tbl = obj
print("Creating Heatmap charts")
for i, col in enumerate(tbl.columns):
slice_data = {
"metrics": ["count"],
"granularity_sqla": col.column_name,
"row_limit": app.config["ROW_LIMIT"],
"since": "2015",
"until": "2016",
"viz_type": "cal_heatmap",
"domain_granularity": "month",
"subdomain_granularity": "day",
}
slc = Slice(
slice_name=f"Calendar Heatmap multiformat {i}",
viz_type="cal_heatmap",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
    misc_dash_slices.add("Calendar Heatmap multiformat 0")

| /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/examples/multiformat_time_series.py | 0.538498 | 0.282351 | multiformat_time_series.py | pypi |
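The Presto branch above converts epoch seconds to preformatted strings because the example table stores them as `String` columns on that backend. The conversion can be sanity-checked on a tiny frame (requires pandas; the sample timestamp is arbitrary):

```python
import pandas as pd

pdf = pd.DataFrame({"ds": [1420070400]})  # 2015-01-01 00:00:00 UTC
pdf.ds = pd.to_datetime(pdf.ds, unit="s")  # epoch seconds -> datetime64
pdf.ds = pdf.ds.dt.strftime("%Y-%m-%d")    # datetime64 -> date string
```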
"""Loads datasets, dashboards and slices in a new superset instance"""
import json
import textwrap
from superset import db
from superset.models.dashboard import Dashboard
from superset.models.slice import Slice
from .helpers import update_slice_ids
def load_tabbed_dashboard(_: bool = False) -> None:
"""Creating a tabbed dashboard"""
print("Creating a dashboard with nested tabs")
slug = "tabbed_dash"
dash = db.session.query(Dashboard).filter_by(slug=slug).first()
if not dash:
dash = Dashboard()
    # reuse charts in "World's Bank Data" and create a
    # new dashboard with nested tabs
tabbed_dash_slices = set()
tabbed_dash_slices.add("Region Filter")
tabbed_dash_slices.add("Growth Rate")
tabbed_dash_slices.add("Treemap")
tabbed_dash_slices.add("Box plot")
js = textwrap.dedent(
"""\
{
"CHART-c0EjR-OZ0n": {
"children": [],
"id": "CHART-c0EjR-OZ0n",
"meta": {
"chartId": 870,
"height": 50,
"sliceName": "Box plot",
"width": 4
},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1",
"TAB-NF3dlrWGS",
"ROW-7G2o5uDvfo"
],
"type": "CHART"
},
"CHART-dxV7Il74hH": {
"children": [],
"id": "CHART-dxV7Il74hH",
"meta": {
"chartId": 797,
"height": 50,
"sliceName": "Treemap",
"width": 4
},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1",
"TAB-gcQJxApOZS",
"TABS-afnrUvdxYF",
"TAB-jNNd4WWar1",
"ROW-7ygtDczaQ"
],
"type": "CHART"
},
"CHART-jJ5Yj1Ptaz": {
"children": [],
"id": "CHART-jJ5Yj1Ptaz",
"meta": {
"chartId": 789,
"height": 50,
"sliceName": "World's Population",
"width": 4
},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1",
"TAB-NF3dlrWGS",
"TABS-CSjo6VfNrj",
"TAB-z81Q87PD7",
"ROW-G73z9PIHn"
],
"type": "CHART"
},
"CHART-z4gmEuCqQ5": {
"children": [],
"id": "CHART-z4gmEuCqQ5",
"meta": {
"chartId": 788,
"height": 50,
"sliceName": "Region Filter",
"width": 4
},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1",
"TAB-NF3dlrWGS",
"TABS-CSjo6VfNrj",
"TAB-EcNm_wh922",
"ROW-LCjsdSetJ"
],
"type": "CHART"
},
"DASHBOARD_VERSION_KEY": "v2",
"GRID_ID": {
"children": [],
"id": "GRID_ID",
"type": "GRID"
},
"HEADER_ID": {
"id": "HEADER_ID",
"meta": {
"text": "Tabbed Dashboard"
},
"type": "HEADER"
},
"ROOT_ID": {
"children": [
"TABS-lV0r00f4H1"
],
"id": "ROOT_ID",
"type": "ROOT"
},
"ROW-7G2o5uDvfo": {
"children": [
"CHART-c0EjR-OZ0n"
],
"id": "ROW-7G2o5uDvfo",
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1",
"TAB-NF3dlrWGS"
],
"type": "ROW"
},
"ROW-7ygtDczaQ": {
"children": [
"CHART-dxV7Il74hH"
],
"id": "ROW-7ygtDczaQ",
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1",
"TAB-gcQJxApOZS",
"TABS-afnrUvdxYF",
"TAB-jNNd4WWar1"
],
"type": "ROW"
},
"ROW-G73z9PIHn": {
"children": [
"CHART-jJ5Yj1Ptaz"
],
"id": "ROW-G73z9PIHn",
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1",
"TAB-NF3dlrWGS",
"TABS-CSjo6VfNrj",
"TAB-z81Q87PD7"
],
"type": "ROW"
},
"ROW-LCjsdSetJ": {
"children": [
"CHART-z4gmEuCqQ5"
],
"id": "ROW-LCjsdSetJ",
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1",
"TAB-NF3dlrWGS",
"TABS-CSjo6VfNrj",
"TAB-EcNm_wh922"
],
"type": "ROW"
},
"TAB-EcNm_wh922": {
"children": [
"ROW-LCjsdSetJ"
],
"id": "TAB-EcNm_wh922",
"meta": {
"text": "row tab 1"
},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1",
"TAB-NF3dlrWGS",
"TABS-CSjo6VfNrj"
],
"type": "TAB"
},
"TAB-NF3dlrWGS": {
"children": [
"ROW-7G2o5uDvfo",
"TABS-CSjo6VfNrj"
],
"id": "TAB-NF3dlrWGS",
"meta": {
"text": "Tab A"
},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1"
],
"type": "TAB"
},
"TAB-gcQJxApOZS": {
"children": [
"TABS-afnrUvdxYF"
],
"id": "TAB-gcQJxApOZS",
"meta": {
"text": "Tab B"
},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1"
],
"type": "TAB"
},
"TAB-jNNd4WWar1": {
"children": [
"ROW-7ygtDczaQ"
],
"id": "TAB-jNNd4WWar1",
"meta": {
"text": "New Tab"
},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1",
"TAB-gcQJxApOZS",
"TABS-afnrUvdxYF"
],
"type": "TAB"
},
"TAB-z81Q87PD7": {
"children": [
"ROW-G73z9PIHn"
],
"id": "TAB-z81Q87PD7",
"meta": {
"text": "row tab 2"
},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1",
"TAB-NF3dlrWGS",
"TABS-CSjo6VfNrj"
],
"type": "TAB"
},
"TABS-CSjo6VfNrj": {
"children": [
"TAB-EcNm_wh922",
"TAB-z81Q87PD7"
],
"id": "TABS-CSjo6VfNrj",
"meta": {},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1",
"TAB-NF3dlrWGS"
],
"type": "TABS"
},
"TABS-afnrUvdxYF": {
"children": [
"TAB-jNNd4WWar1"
],
"id": "TABS-afnrUvdxYF",
"meta": {},
"parents": [
"ROOT_ID",
"TABS-lV0r00f4H1",
"TAB-gcQJxApOZS"
],
"type": "TABS"
},
"TABS-lV0r00f4H1": {
"children": [
"TAB-NF3dlrWGS",
"TAB-gcQJxApOZS"
],
"id": "TABS-lV0r00f4H1",
"meta": {},
"parents": [
"ROOT_ID"
],
"type": "TABS"
}
}
"""
)
pos = json.loads(js)
slices = [
db.session.query(Slice).filter_by(slice_name=name).first()
for name in tabbed_dash_slices
]
slices = sorted(slices, key=lambda x: x.id)
update_slice_ids(pos, slices)
dash.position_json = json.dumps(pos, indent=4)
dash.slices = slices
dash.dashboard_title = "Tabbed Dashboard"
dash.slug = slug
db.session.merge(dash)
    db.session.commit()

| /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/examples/tabbed_dashboard.py | 0.511961 | 0.393851 | tabbed_dashboard.py | pypi |
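The `position_json` blob above is a flat dict of layout nodes keyed by id, with `parents` paths and `children` links. A small reader — illustrative only, not a Superset API — can pull the chart names out of such a layout:

```python
def chart_names(position):
    # CHART nodes carry their slice name under meta.sliceName; non-dict
    # entries like DASHBOARD_VERSION_KEY are skipped.
    return sorted(
        node["meta"]["sliceName"]
        for node in position.values()
        if isinstance(node, dict) and node.get("type") == "CHART"
    )
```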
import datetime
import pandas as pd
from sqlalchemy import BigInteger, Date, inspect, String
from sqlalchemy.sql import column
import superset.utils.database as database_utils
from superset import db
from superset.connectors.sqla.models import SqlMetric
from superset.models.slice import Slice
from .helpers import (
get_example_data,
get_slice_json,
get_table_connector_registry,
merge_slice,
misc_dash_slices,
)
def load_country_map_data(only_metadata: bool = False, force: bool = False) -> None:
"""Loading data for map with country map"""
tbl_name = "birth_france_by_region"
database = database_utils.get_example_database()
engine = database.get_sqla_engine()
schema = inspect(engine).default_schema_name
table_exists = database.has_table_by_name(tbl_name)
if not only_metadata and (not table_exists or force):
csv_bytes = get_example_data(
"birth_france_data_for_country_map.csv", is_gzip=False, make_bytes=True
)
data = pd.read_csv(csv_bytes, encoding="utf-8")
data["dttm"] = datetime.datetime.now().date()
data.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={
"DEPT_ID": String(10),
"2003": BigInteger,
"2004": BigInteger,
"2005": BigInteger,
"2006": BigInteger,
"2007": BigInteger,
"2008": BigInteger,
"2009": BigInteger,
"2010": BigInteger,
"2011": BigInteger,
"2012": BigInteger,
"2013": BigInteger,
"2014": BigInteger,
"dttm": Date(),
},
index=False,
)
print("Done loading table!")
print("-" * 80)
print("Creating table reference")
table = get_table_connector_registry()
obj = db.session.query(table).filter_by(table_name=tbl_name).first()
if not obj:
obj = table(table_name=tbl_name, schema=schema)
obj.main_dttm_col = "dttm"
obj.database = database
obj.filter_select_enabled = True
if not any(col.metric_name == "avg__2004" for col in obj.metrics):
col = str(column("2004").compile(db.engine))
obj.metrics.append(SqlMetric(metric_name="avg__2004", expression=f"AVG({col})"))
db.session.merge(obj)
db.session.commit()
obj.fetch_metadata()
tbl = obj
slice_data = {
"granularity_sqla": "",
"since": "",
"until": "",
"viz_type": "country_map",
"entity": "DEPT_ID",
"metric": {
"expressionType": "SIMPLE",
"column": {"type": "INT", "column_name": "2004"},
"aggregate": "AVG",
"label": "Boys",
"optionName": "metric_112342",
},
"row_limit": 500000,
"select_country": "france",
}
print("Creating a slice")
slc = Slice(
slice_name="Birth in France by department in 2016",
viz_type="country_map",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
misc_dash_slices.add(slc.slice_name)
    merge_slice(slc)

| /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/examples/country_map.py | 0.416797 | 0.246273 | country_map.py | pypi |
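The `metric` entry in `slice_data` above is an ad-hoc "SIMPLE" metric dict. A helper to build such dicts — field names are taken from the example itself, and `optionName` is just an arbitrary identifier; this is a sketch, not a Superset utility:

```python
def simple_metric(column_name, column_type, aggregate, label, option_name):
    # Assemble an ad-hoc SIMPLE metric in the shape used by slice_data.
    return {
        "expressionType": "SIMPLE",
        "column": {"type": column_type, "column_name": column_name},
        "aggregate": aggregate,
        "label": label,
        "optionName": option_name,
    }
```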
import datetime
import random
import geohash
import pandas as pd
from sqlalchemy import DateTime, Float, inspect, String
import superset.utils.database as database_utils
from superset import db
from superset.models.slice import Slice
from .helpers import (
get_example_data,
get_slice_json,
get_table_connector_registry,
merge_slice,
misc_dash_slices,
)
def load_long_lat_data(only_metadata: bool = False, force: bool = False) -> None:
"""Loading lat/long data from a csv file in the repo"""
tbl_name = "long_lat"
database = database_utils.get_example_database()
engine = database.get_sqla_engine()
schema = inspect(engine).default_schema_name
table_exists = database.has_table_by_name(tbl_name)
if not only_metadata and (not table_exists or force):
data = get_example_data("san_francisco.csv.gz", make_bytes=True)
pdf = pd.read_csv(data, encoding="utf-8")
start = datetime.datetime.now().replace(
hour=0, minute=0, second=0, microsecond=0
)
pdf["datetime"] = [
start + datetime.timedelta(hours=i * 24 / (len(pdf) - 1))
for i in range(len(pdf))
]
pdf["occupancy"] = [random.randint(1, 6) for _ in range(len(pdf))]
pdf["radius_miles"] = [random.uniform(1, 3) for _ in range(len(pdf))]
pdf["geohash"] = pdf[["LAT", "LON"]].apply(lambda x: geohash.encode(*x), axis=1)
pdf["delimited"] = pdf["LAT"].map(str).str.cat(pdf["LON"].map(str), sep=",")
pdf.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={
"longitude": Float(),
"latitude": Float(),
"number": Float(),
"street": String(100),
"unit": String(10),
"city": String(50),
"district": String(50),
"region": String(50),
"postcode": Float(),
"id": String(100),
"datetime": DateTime(),
"occupancy": Float(),
"radius_miles": Float(),
"geohash": String(12),
"delimited": String(60),
},
index=False,
)
print("Done loading table!")
print("-" * 80)
print("Creating table reference")
table = get_table_connector_registry()
obj = db.session.query(table).filter_by(table_name=tbl_name).first()
if not obj:
obj = table(table_name=tbl_name, schema=schema)
obj.main_dttm_col = "datetime"
obj.database = database
obj.filter_select_enabled = True
db.session.merge(obj)
db.session.commit()
obj.fetch_metadata()
tbl = obj
slice_data = {
"granularity_sqla": "day",
"since": "2014-01-01",
"until": "now",
"viz_type": "mapbox",
"all_columns_x": "LON",
"all_columns_y": "LAT",
"mapbox_style": "mapbox://styles/mapbox/light-v9",
"all_columns": ["occupancy"],
"row_limit": 500000,
}
print("Creating a slice")
slc = Slice(
slice_name="Mapbox Long/Lat",
viz_type="mapbox",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
misc_dash_slices.add(slc.slice_name)
    merge_slice(slc)

| /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/examples/long_lat.py | 0.421552 | 0.221687 | long_lat.py | pypi |
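The synthetic `datetime` column above spreads the rows evenly across a 24-hour window. Isolated as a helper — a sketch that assumes at least two rows, since the expression divides by `n - 1`:

```python
import datetime

def spread_over_day(start, n):
    # i * 24 / (n - 1) hours puts row 0 at start and row n-1 exactly
    # 24 hours later, matching the loader's expression.
    return [start + datetime.timedelta(hours=i * 24 / (n - 1)) for i in range(n)]
```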
import pandas as pd
from sqlalchemy import DateTime, inspect, String
import superset.utils.database as database_utils
from superset import app, db
from superset.models.slice import Slice
from .helpers import (
get_example_data,
get_slice_json,
get_table_connector_registry,
merge_slice,
)
def load_random_time_series_data(
only_metadata: bool = False, force: bool = False
) -> None:
"""Loading random time series data from a zip file in the repo"""
tbl_name = "random_time_series"
database = database_utils.get_example_database()
engine = database.get_sqla_engine()
schema = inspect(engine).default_schema_name
table_exists = database.has_table_by_name(tbl_name)
if not only_metadata and (not table_exists or force):
data = get_example_data("random_time_series.json.gz")
pdf = pd.read_json(data)
if database.backend == "presto":
pdf.ds = pd.to_datetime(pdf.ds, unit="s")
            pdf.ds = pdf.ds.dt.strftime("%Y-%m-%d %H:%M:%S")
else:
pdf.ds = pd.to_datetime(pdf.ds, unit="s")
pdf.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={"ds": DateTime if database.backend != "presto" else String(255)},
index=False,
)
print("Done loading table!")
print("-" * 80)
print(f"Creating table [{tbl_name}] reference")
table = get_table_connector_registry()
obj = db.session.query(table).filter_by(table_name=tbl_name).first()
if not obj:
obj = table(table_name=tbl_name, schema=schema)
obj.main_dttm_col = "ds"
obj.database = database
obj.filter_select_enabled = True
db.session.merge(obj)
db.session.commit()
obj.fetch_metadata()
tbl = obj
slice_data = {
"granularity_sqla": "ds",
"row_limit": app.config["ROW_LIMIT"],
"since": "2019-01-01",
"until": "2019-02-01",
"metrics": ["count"],
"viz_type": "cal_heatmap",
"domain_granularity": "month",
"subdomain_granularity": "day",
}
print("Creating a slice")
slc = Slice(
slice_name="Calendar Heatmap",
viz_type="cal_heatmap",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
    merge_slice(slc)

| /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/examples/random_time_series.py | 0.478529 | 0.186077 | random_time_series.py | pypi |
"""Loads datasets, dashboards and slices in a new superset instance"""
import textwrap
import pandas as pd
from sqlalchemy import Float, inspect, String
from sqlalchemy.sql import column
import superset.utils.database as database_utils
from superset import db
from superset.connectors.sqla.models import SqlMetric
from superset.models.slice import Slice
from .helpers import (
get_example_data,
get_table_connector_registry,
merge_slice,
misc_dash_slices,
)
def load_energy(
only_metadata: bool = False, force: bool = False, sample: bool = False
) -> None:
"""Loads an energy related dataset to use with sankey and graphs"""
tbl_name = "energy_usage"
database = database_utils.get_example_database()
engine = database.get_sqla_engine()
schema = inspect(engine).default_schema_name
table_exists = database.has_table_by_name(tbl_name)
if not only_metadata and (not table_exists or force):
data = get_example_data("energy.json.gz")
pdf = pd.read_json(data)
pdf = pdf.head(100) if sample else pdf
pdf.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={"source": String(255), "target": String(255), "value": Float()},
index=False,
method="multi",
)
    print(f"Creating table [{tbl_name}] reference")
table = get_table_connector_registry()
tbl = db.session.query(table).filter_by(table_name=tbl_name).first()
if not tbl:
tbl = table(table_name=tbl_name, schema=schema)
tbl.description = "Energy consumption"
tbl.database = database
tbl.filter_select_enabled = True
if not any(col.metric_name == "sum__value" for col in tbl.metrics):
col = str(column("value").compile(db.engine))
tbl.metrics.append(
SqlMetric(metric_name="sum__value", expression=f"SUM({col})")
)
db.session.merge(tbl)
db.session.commit()
tbl.fetch_metadata()
slc = Slice(
slice_name="Energy Sankey",
viz_type="sankey",
datasource_type="table",
datasource_id=tbl.id,
params=textwrap.dedent(
"""\
{
"collapsed_fieldsets": "",
"groupby": [
"source",
"target"
],
"metric": "sum__value",
"row_limit": "5000",
"slice_name": "Energy Sankey",
"viz_type": "sankey"
}
"""
),
)
misc_dash_slices.add(slc.slice_name)
merge_slice(slc)
slc = Slice(
slice_name="Energy Force Layout",
viz_type="graph_chart",
datasource_type="table",
datasource_id=tbl.id,
params=textwrap.dedent(
"""\
{
"source": "source",
"target": "target",
"edgeLength": 400,
"repulsion": 1000,
"layout": "force",
"metric": "sum__value",
"row_limit": "5000",
"slice_name": "Force",
"viz_type": "graph_chart"
}
"""
),
)
misc_dash_slices.add(slc.slice_name)
merge_slice(slc)
slc = Slice(
slice_name="Heatmap",
viz_type="heatmap",
datasource_type="table",
datasource_id=tbl.id,
params=textwrap.dedent(
"""\
{
"all_columns_x": "source",
"all_columns_y": "target",
"canvas_image_rendering": "pixelated",
"collapsed_fieldsets": "",
"linear_color_scheme": "blue_white_yellow",
"metric": "sum__value",
"normalize_across": "heatmap",
"slice_name": "Heatmap",
"viz_type": "heatmap",
"xscale_interval": "1",
"yscale_interval": "1"
}
"""
),
)
misc_dash_slices.add(slc.slice_name)
    merge_slice(slc)

| /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/examples/energy.py | 0.628863 | 0.368718 | energy.py | pypi |
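`load_energy` guards its metric append with an `if not any(...)` check so reruns stay idempotent. The pattern in isolation, with a minimal hypothetical stand-in for `SqlMetric`:

```python
class FakeMetric:
    # Minimal stand-in for SqlMetric, for illustration only.
    def __init__(self, metric_name, expression):
        self.metric_name = metric_name
        self.expression = expression

def ensure_metric(metrics, name, expression):
    # Append only when no metric with that name exists yet, mirroring
    # the guard used before merge_slice above.
    if not any(m.metric_name == name for m in metrics):
        metrics.append(FakeMetric(name, expression))
    return metrics
```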
"""This module contains data related to countries and is used for geo mapping"""
# pylint: disable=too-many-lines
from typing import Any, Dict, List, Optional
countries: List[Dict[str, Any]] = [
{
"name": "Angola",
"area": 1246700,
"cioc": "ANG",
"cca2": "AO",
"capital": "Luanda",
"lat": -12.5,
"lng": 18.5,
"cca3": "AGO",
},
{
"name": "Algeria",
"area": 2381741,
"cioc": "ALG",
"cca2": "DZ",
"capital": "Algiers",
"lat": 28,
"lng": 3,
"cca3": "DZA",
},
{
"name": "Egypt",
"area": 1002450,
"cioc": "EGY",
"cca2": "EG",
"capital": "Cairo",
"lat": 27,
"lng": 30,
"cca3": "EGY",
},
{
"name": "Bangladesh",
"area": 147570,
"cioc": "BAN",
"cca2": "BD",
"capital": "Dhaka",
"lat": 24,
"lng": 90,
"cca3": "BGD",
},
{
"name": "Niger",
"area": 1267000,
"cioc": "NIG",
"cca2": "NE",
"capital": "Niamey",
"lat": 16,
"lng": 8,
"cca3": "NER",
},
{
"name": "Liechtenstein",
"area": 160,
"cioc": "LIE",
"cca2": "LI",
"capital": "Vaduz",
"lat": 47.26666666,
"lng": 9.53333333,
"cca3": "LIE",
},
{
"name": "Namibia",
"area": 825615,
"cioc": "NAM",
"cca2": "NA",
"capital": "Windhoek",
"lat": -22,
"lng": 17,
"cca3": "NAM",
},
{
"name": "Bulgaria",
"area": 110879,
"cioc": "BUL",
"cca2": "BG",
"capital": "Sofia",
"lat": 43,
"lng": 25,
"cca3": "BGR",
},
{
"name": "Bolivia",
"area": 1098581,
"cioc": "BOL",
"cca2": "BO",
"capital": "Sucre",
"lat": -17,
"lng": -65,
"cca3": "BOL",
},
{
"name": "Ghana",
"area": 238533,
"cioc": "GHA",
"cca2": "GH",
"capital": "Accra",
"lat": 8,
"lng": -2,
"cca3": "GHA",
},
{
"name": "Cocos (Keeling) Islands",
"area": 14,
"cioc": "",
"cca2": "CC",
"capital": "West Island",
"lat": -12.5,
"lng": 96.83333333,
"cca3": "CCK",
},
{
"name": "Pakistan",
"area": 881912,
"cioc": "PAK",
"cca2": "PK",
"capital": "Islamabad",
"lat": 30,
"lng": 70,
"cca3": "PAK",
},
{
"name": "Cape Verde",
"area": 4033,
"cioc": "CPV",
"cca2": "CV",
"capital": "Praia",
"lat": 16,
"lng": -24,
"cca3": "CPV",
},
{
"name": "Jordan",
"area": 89342,
"cioc": "JOR",
"cca2": "JO",
"capital": "Amman",
"lat": 31,
"lng": 36,
"cca3": "JOR",
},
{
"name": "Liberia",
"area": 111369,
"cioc": "LBR",
"cca2": "LR",
"capital": "Monrovia",
"lat": 6.5,
"lng": -9.5,
"cca3": "LBR",
},
{
"name": "Libya",
"area": 1759540,
"cioc": "LBA",
"cca2": "LY",
"capital": "Tripoli",
"lat": 25,
"lng": 17,
"cca3": "LBY",
},
{
"name": "Malaysia",
"area": 330803,
"cioc": "MAS",
"cca2": "MY",
"capital": "Kuala Lumpur",
"lat": 2.5,
"lng": 112.5,
"cca3": "MYS",
},
{
"name": "Dominican Republic",
"area": 48671,
"cioc": "DOM",
"cca2": "DO",
"capital": "Santo Domingo",
"lat": 19,
"lng": -70.66666666,
"cca3": "DOM",
},
{
"name": "Puerto Rico",
"area": 8870,
"cioc": "PUR",
"cca2": "PR",
"capital": "San Juan",
"lat": 18.25,
"lng": -66.5,
"cca3": "PRI",
},
{
"name": "Mayotte",
"area": 374,
"cioc": "",
"cca2": "YT",
"capital": "Mamoudzou",
"lat": -12.83333333,
"lng": 45.16666666,
"cca3": "MYT",
},
{
"name": "North Korea",
"area": 120538,
"cioc": "PRK",
"cca2": "KP",
"capital": "Pyongyang",
"lat": 40,
"lng": 127,
"cca3": "PRK",
},
{
"name": "Palestine",
"area": 6220,
"cioc": "PLE",
"cca2": "PS",
"capital": "Ramallah",
"lat": 31.9,
"lng": 35.2,
"cca3": "PSE",
},
{
"name": "Tanzania",
"area": 945087,
"cioc": "TAN",
"cca2": "TZ",
"capital": "Dodoma",
"lat": -6,
"lng": 35,
"cca3": "TZA",
},
{
"name": "Botswana",
"area": 582000,
"cioc": "BOT",
"cca2": "BW",
"capital": "Gaborone",
"lat": -22,
"lng": 24,
"cca3": "BWA",
},
{
"name": "Cambodia",
"area": 181035,
"cioc": "CAM",
"cca2": "KH",
"capital": "Phnom Penh",
"lat": 13,
"lng": 105,
"cca3": "KHM",
},
{
"name": "Nicaragua",
"area": 130373,
"cioc": "NCA",
"cca2": "NI",
"capital": "Managua",
"lat": 13,
"lng": -85,
"cca3": "NIC",
},
{
"name": "Trinidad and Tobago",
"area": 5130,
"cioc": "TTO",
"cca2": "TT",
"capital": "Port of Spain",
"lat": 11,
"lng": -61,
"cca3": "TTO",
},
{
"name": "Ethiopia",
"area": 1104300,
"cioc": "ETH",
"cca2": "ET",
"capital": "Addis Ababa",
"lat": 8,
"lng": 38,
"cca3": "ETH",
},
{
"name": "Paraguay",
"area": 406752,
"cioc": "PAR",
"cca2": "PY",
"capital": "Asuncion",
"lat": -23,
"lng": -58,
"cca3": "PRY",
},
{
"name": "Hong Kong",
"area": 1104,
"cioc": "HKG",
"cca2": "HK",
"capital": "City of Victoria",
"lat": 22.267,
"lng": 114.188,
"cca3": "HKG",
},
{
"name": "Saudi Arabia",
"area": 2149690,
"cioc": "KSA",
"cca2": "SA",
"capital": "Riyadh",
"lat": 25,
"lng": 45,
"cca3": "SAU",
},
{
"name": "Lebanon",
"area": 10452,
"cioc": "LIB",
"cca2": "LB",
"capital": "Beirut",
"lat": 33.83333333,
"lng": 35.83333333,
"cca3": "LBN",
},
{
"name": "Slovenia",
"area": 20273,
"cioc": "SLO",
"cca2": "SI",
"capital": "Ljubljana",
"lat": 46.11666666,
"lng": 14.81666666,
"cca3": "SVN",
},
{
"name": "Burkina Faso",
"area": 272967,
"cioc": "BUR",
"cca2": "BF",
"capital": "Ouagadougou",
"lat": 13,
"lng": -2,
"cca3": "BFA",
},
{
"name": "Switzerland",
"area": 41284,
"cioc": "SUI",
"cca2": "CH",
"capital": "Bern",
"lat": 47,
"lng": 8,
"cca3": "CHE",
},
{
"name": "Mauritania",
"area": 1030700,
"cioc": "MTN",
"cca2": "MR",
"capital": "Nouakchott",
"lat": 20,
"lng": -12,
"cca3": "MRT",
},
{
"name": "Croatia",
"area": 56594,
"cioc": "CRO",
"cca2": "HR",
"capital": "Zagreb",
"lat": 45.16666666,
"lng": 15.5,
"cca3": "HRV",
},
{
"name": "Chile",
"area": 756102,
"cioc": "CHI",
"cca2": "CL",
"capital": "Santiago",
"lat": -30,
"lng": -71,
"cca3": "CHL",
},
{
"name": "China",
"area": 9706961,
"cioc": "CHN",
"cca2": "CN",
"capital": "Beijing",
"lat": 35,
"lng": 105,
"cca3": "CHN",
},
{
"name": "Saint Kitts and Nevis",
"area": 261,
"cioc": "SKN",
"cca2": "KN",
"capital": "Basseterre",
"lat": 17.33333333,
"lng": -62.75,
"cca3": "KNA",
},
{
"name": "Sierra Leone",
"area": 71740,
"cioc": "SLE",
"cca2": "SL",
"capital": "Freetown",
"lat": 8.5,
"lng": -11.5,
"cca3": "SLE",
},
{
"name": "Jamaica",
"area": 10991,
"cioc": "JAM",
"cca2": "JM",
"capital": "Kingston",
"lat": 18.25,
"lng": -77.5,
"cca3": "JAM",
},
{
"name": "San Marino",
"area": 61,
"cioc": "SMR",
"cca2": "SM",
"capital": "City of San Marino",
"lat": 43.76666666,
"lng": 12.41666666,
"cca3": "SMR",
},
{
"name": "Gibraltar",
"area": 6,
"cioc": "",
"cca2": "GI",
"capital": "Gibraltar",
"lat": 36.13333333,
"lng": -5.35,
"cca3": "GIB",
},
{
"name": "Djibouti",
"area": 23200,
"cioc": "DJI",
"cca2": "DJ",
"capital": "Djibouti",
"lat": 11.5,
"lng": 43,
"cca3": "DJI",
},
{
"name": "Guinea",
"area": 245857,
"cioc": "GUI",
"cca2": "GN",
"capital": "Conakry",
"lat": 11,
"lng": -10,
"cca3": "GIN",
},
{
"name": "Finland",
"area": 338424,
"cioc": "FIN",
"cca2": "FI",
"capital": "Helsinki",
"lat": 64,
"lng": 26,
"cca3": "FIN",
},
{
"name": "Uruguay",
"area": 181034,
"cioc": "URU",
"cca2": "UY",
"capital": "Montevideo",
"lat": -33,
"lng": -56,
"cca3": "URY",
},
{
"name": "Thailand",
"area": 513120,
"cioc": "THA",
"cca2": "TH",
"capital": "Bangkok",
"lat": 15,
"lng": 100,
"cca3": "THA",
},
{
"name": "Sao Tome and Principe",
"area": 964,
"cioc": "STP",
"cca2": "ST",
"capital": "Sao Tome",
"lat": 1,
"lng": 7,
"cca3": "STP",
},
{
"name": "Seychelles",
"area": 452,
"cioc": "SEY",
"cca2": "SC",
"capital": "Victoria",
"lat": -4.58333333,
"lng": 55.66666666,
"cca3": "SYC",
},
{
"name": "Nepal",
"area": 147181,
"cioc": "NEP",
"cca2": "NP",
"capital": "Kathmandu",
"lat": 28,
"lng": 84,
"cca3": "NPL",
},
{
"name": "Christmas Island",
"area": 135,
"cioc": "",
"cca2": "CX",
"capital": "Flying Fish Cove",
"lat": -10.5,
"lng": 105.66666666,
"cca3": "CXR",
},
{
"name": "Laos",
"area": 236800,
"cioc": "LAO",
"cca2": "LA",
"capital": "Vientiane",
"lat": 18,
"lng": 105,
"cca3": "LAO",
},
{
"name": "Yemen",
"area": 527968,
"cioc": "YEM",
"cca2": "YE",
"capital": "Sana'a",
"lat": 15,
"lng": 48,
"cca3": "YEM",
},
{
"name": "Bouvet Island",
"area": 49,
"cioc": "",
"cca2": "BV",
"capital": "",
"lat": -54.43333333,
"lng": 3.4,
"cca3": "BVT",
},
{
"name": "South Africa",
"area": 1221037,
"cioc": "RSA",
"cca2": "ZA",
"capital": "Pretoria",
"lat": -29,
"lng": 24,
"cca3": "ZAF",
},
{
"name": "Kiribati",
"area": 811,
"cioc": "KIR",
"cca2": "KI",
"capital": "South Tarawa",
"lat": 1.41666666,
"lng": 173,
"cca3": "KIR",
},
{
"name": "Philippines",
"area": 342353,
"cioc": "PHI",
"cca2": "PH",
"capital": "Manila",
"lat": 13,
"lng": 122,
"cca3": "PHL",
},
{
"name": "Sint Maarten",
"area": 34,
"cioc": "",
"cca2": "SX",
"capital": "Philipsburg",
"lat": 18.033333,
"lng": -63.05,
"cca3": "SXM",
},
{
"name": "Romania",
"area": 238391,
"cioc": "ROU",
"cca2": "RO",
"capital": "Bucharest",
"lat": 46,
"lng": 25,
"cca3": "ROU",
},
{
"name": "United States Virgin Islands",
"area": 347,
"cioc": "ISV",
"cca2": "VI",
"capital": "Charlotte Amalie",
"lat": 18.35,
"lng": -64.933333,
"cca3": "VIR",
},
{
"name": "Syria",
"area": 185180,
"cioc": "SYR",
"cca2": "SY",
"capital": "Damascus",
"lat": 35,
"lng": 38,
"cca3": "SYR",
},
{
"name": "Macau",
"area": 30,
"cioc": "",
"cca2": "MO",
"capital": "",
"lat": 22.16666666,
"lng": 113.55,
"cca3": "MAC",
},
{
"name": "Saint Martin",
"area": 53,
"cioc": "",
"cca2": "MF",
"capital": "Marigot",
"lat": 18.08333333,
"lng": -63.95,
"cca3": "MAF",
},
{
"name": "Malta",
"area": 316,
"cioc": "MLT",
"cca2": "MT",
"capital": "Valletta",
"lat": 35.83333333,
"lng": 14.58333333,
"cca3": "MLT",
},
{
"name": "Kazakhstan",
"area": 2724900,
"cioc": "KAZ",
"cca2": "KZ",
"capital": "Astana",
"lat": 48,
"lng": 68,
"cca3": "KAZ",
},
{
"name": "Turks and Caicos Islands",
"area": 948,
"cioc": "",
"cca2": "TC",
"capital": "Cockburn Town",
"lat": 21.75,
"lng": -71.58333333,
"cca3": "TCA",
},
{
"name": "French Polynesia",
"area": 4167,
"cioc": "",
"cca2": "PF",
"capital": "Papeete",
"lat": -15,
"lng": -140,
"cca3": "PYF",
},
{
"name": "Niue",
"area": 260,
"cioc": "",
"cca2": "NU",
"capital": "Alofi",
"lat": -19.03333333,
"lng": -169.86666666,
"cca3": "NIU",
},
{
"name": "Dominica",
"area": 751,
"cioc": "DMA",
"cca2": "DM",
"capital": "Roseau",
"lat": 15.41666666,
"lng": -61.33333333,
"cca3": "DMA",
},
{
"name": "Benin",
"area": 112622,
"cioc": "BEN",
"cca2": "BJ",
"capital": "Porto-Novo",
"lat": 9.5,
"lng": 2.25,
"cca3": "BEN",
},
{
"name": "French Guiana",
"area": 83534,
"cioc": "",
"cca2": "GF",
"capital": "Cayenne",
"lat": 4,
"lng": -53,
"cca3": "GUF",
},
{
"name": "Belgium",
"area": 30528,
"cioc": "BEL",
"cca2": "BE",
"capital": "Brussels",
"lat": 50.83333333,
"lng": 4,
"cca3": "BEL",
},
{
"name": "Montserrat",
"area": 102,
"cioc": "",
"cca2": "MS",
"capital": "Plymouth",
"lat": 16.75,
"lng": -62.2,
"cca3": "MSR",
},
{
"name": "Togo",
"area": 56785,
"cioc": "TOG",
"cca2": "TG",
"capital": "Lome",
"lat": 8,
"lng": 1.16666666,
"cca3": "TGO",
},
{
"name": "Germany",
"area": 357114,
"cioc": "GER",
"cca2": "DE",
"capital": "Berlin",
"lat": 51,
"lng": 9,
"cca3": "DEU",
},
{
"name": "Guam",
"area": 549,
"cioc": "GUM",
"cca2": "GU",
"capital": "Hagatna",
"lat": 13.46666666,
"lng": 144.78333333,
"cca3": "GUM",
},
{
"name": "Sri Lanka",
"area": 65610,
"cioc": "SRI",
"cca2": "LK",
"capital": "Colombo",
"lat": 7,
"lng": 81,
"cca3": "LKA",
},
{
"name": "South Sudan",
"area": 619745,
"cioc": "",
"cca2": "SS",
"capital": "Juba",
"lat": 7,
"lng": 30,
"cca3": "SSD",
},
{
"name": "Falkland Islands",
"area": 12173,
"cioc": "",
"cca2": "FK",
"capital": "Stanley",
"lat": -51.75,
"lng": -59,
"cca3": "FLK",
},
{
"name": "United Kingdom",
"area": 242900,
"cioc": "GBR",
"cca2": "GB",
"capital": "London",
"lat": 54,
"lng": -2,
"cca3": "GBR",
},
{
"name": "Guyana",
"area": 214969,
"cioc": "GUY",
"cca2": "GY",
"capital": "Georgetown",
"lat": 5,
"lng": -59,
"cca3": "GUY",
},
{
"name": "Costa Rica",
"area": 51100,
"cioc": "CRC",
"cca2": "CR",
"capital": "San Jose",
"lat": 10,
"lng": -84,
"cca3": "CRI",
},
{
"name": "Cameroon",
"area": 475442,
"cioc": "CMR",
"cca2": "CM",
"capital": "Yaounde",
"lat": 6,
"lng": 12,
"cca3": "CMR",
},
{
"name": "Morocco",
"area": 446550,
"cioc": "MAR",
"cca2": "MA",
"capital": "Rabat",
"lat": 32,
"lng": -5,
"cca3": "MAR",
},
{
"name": "Northern Mariana Islands",
"area": 464,
"cioc": "",
"cca2": "MP",
"capital": "Saipan",
"lat": 15.2,
"lng": 145.75,
"cca3": "MNP",
},
{
"name": "Lesotho",
"area": 30355,
"cioc": "LES",
"cca2": "LS",
"capital": "Maseru",
"lat": -29.5,
"lng": 28.5,
"cca3": "LSO",
},
{
"name": "Hungary",
"area": 93028,
"cioc": "HUN",
"cca2": "HU",
"capital": "Budapest",
"lat": 47,
"lng": 20,
"cca3": "HUN",
},
{
"name": "Turkmenistan",
"area": 488100,
"cioc": "TKM",
"cca2": "TM",
"capital": "Ashgabat",
"lat": 40,
"lng": 60,
"cca3": "TKM",
},
{
"name": "Suriname",
"area": 163820,
"cioc": "SUR",
"cca2": "SR",
"capital": "Paramaribo",
"lat": 4,
"lng": -56,
"cca3": "SUR",
},
{
"name": "Netherlands",
"area": 41850,
"cioc": "NED",
"cca2": "NL",
"capital": "Amsterdam",
"lat": 52.5,
"lng": 5.75,
"cca3": "NLD",
},
{
"name": "Bermuda",
"area": 54,
"cioc": "BER",
"cca2": "BM",
"capital": "Hamilton",
"lat": 32.33333333,
"lng": -64.75,
"cca3": "BMU",
},
{
"name": "Heard Island and McDonald Islands",
"area": 412,
"cioc": "",
"cca2": "HM",
"capital": "",
"lat": -53.1,
"lng": 72.51666666,
"cca3": "HMD",
},
{
"name": "Chad",
"area": 1284000,
"cioc": "CHA",
"cca2": "TD",
"capital": "N'Djamena",
"lat": 15,
"lng": 19,
"cca3": "TCD",
},
{
"name": "Georgia",
"area": 69700,
"cioc": "GEO",
"cca2": "GE",
"capital": "Tbilisi",
"lat": 42,
"lng": 43.5,
"cca3": "GEO",
},
{
"name": "Montenegro",
"area": 13812,
"cioc": "MNE",
"cca2": "ME",
"capital": "Podgorica",
"lat": 42.5,
"lng": 19.3,
"cca3": "MNE",
},
{
"name": "Mongolia",
"area": 1564110,
"cioc": "MGL",
"cca2": "MN",
"capital": "Ulan Bator",
"lat": 46,
"lng": 105,
"cca3": "MNG",
},
{
"name": "Marshall Islands",
"area": 181,
"cioc": "MHL",
"cca2": "MH",
"capital": "Majuro",
"lat": 9,
"lng": 168,
"cca3": "MHL",
},
{
"name": "Martinique",
"area": 1128,
"cioc": "",
"cca2": "MQ",
"capital": "Fort-de-France",
"lat": 14.666667,
"lng": -61,
"cca3": "MTQ",
},
{
"name": "Belize",
"area": 22966,
"cioc": "BIZ",
"cca2": "BZ",
"capital": "Belmopan",
"lat": 17.25,
"lng": -88.75,
"cca3": "BLZ",
},
{
"name": "Norfolk Island",
"area": 36,
"cioc": "",
"cca2": "NF",
"capital": "Kingston",
"lat": -29.03333333,
"lng": 167.95,
"cca3": "NFK",
},
{
"name": "Myanmar",
"area": 676578,
"cioc": "MYA",
"cca2": "MM",
"capital": "Naypyidaw",
"lat": 22,
"lng": 98,
"cca3": "MMR",
},
{
"name": "Afghanistan",
"area": 652230,
"cioc": "AFG",
"cca2": "AF",
"capital": "Kabul",
"lat": 33,
"lng": 65,
"cca3": "AFG",
},
{
"name": "Burundi",
"area": 27834,
"cioc": "BDI",
"cca2": "BI",
"capital": "Bujumbura",
"lat": -3.5,
"lng": 30,
"cca3": "BDI",
},
{
"name": "British Virgin Islands",
"area": 151,
"cioc": "IVB",
"cca2": "VG",
"capital": "Road Town",
"lat": 18.431383,
"lng": -64.62305,
"cca3": "VGB",
},
{
"name": "Belarus",
"area": 207600,
"cioc": "BLR",
"cca2": "BY",
"capital": "Minsk",
"lat": 53,
"lng": 28,
"cca3": "BLR",
},
{
"name": "Saint Barthelemy",
"area": 21,
"cioc": "",
"cca2": "BL",
"capital": "Gustavia",
"lat": 18.5,
"lng": -63.41666666,
"cca3": "BLM",
},
{
"name": "Grenada",
"area": 344,
"cioc": "GRN",
"cca2": "GD",
"capital": "St. George's",
"lat": 12.11666666,
"lng": -61.66666666,
"cca3": "GRD",
},
{
"name": "Tokelau",
"area": 12,
"cioc": "",
"cca2": "TK",
"capital": "Fakaofo",
"lat": -9,
"lng": -172,
"cca3": "TKL",
},
{
"name": "Greece",
"area": 131990,
"cioc": "GRE",
"cca2": "GR",
"capital": "Athens",
"lat": 39,
"lng": 22,
"cca3": "GRC",
},
{
"name": "Russia",
"area": 17098242,
"cioc": "RUS",
"cca2": "RU",
"capital": "Moscow",
"lat": 60,
"lng": 100,
"cca3": "RUS",
},
{
"name": "Greenland",
"area": 2166086,
"cioc": "",
"cca2": "GL",
"capital": "Nuuk",
"lat": 72,
"lng": -40,
"cca3": "GRL",
},
{
"name": "Andorra",
"area": 468,
"cioc": "AND",
"cca2": "AD",
"capital": "Andorra la Vella",
"lat": 42.5,
"lng": 1.5,
"cca3": "AND",
},
{
"name": "Mozambique",
"area": 801590,
"cioc": "MOZ",
"cca2": "MZ",
"capital": "Maputo",
"lat": -18.25,
"lng": 35,
"cca3": "MOZ",
},
{
"name": "Tajikistan",
"area": 143100,
"cioc": "TJK",
"cca2": "TJ",
"capital": "Dushanbe",
"lat": 39,
"lng": 71,
"cca3": "TJK",
},
{
"name": "Haiti",
"area": 27750,
"cioc": "HAI",
"cca2": "HT",
"capital": "Port-au-Prince",
"lat": 19,
"lng": -72.41666666,
"cca3": "HTI",
},
{
"name": "Mexico",
"area": 1964375,
"cioc": "MEX",
"cca2": "MX",
"capital": "Mexico City",
"lat": 23,
"lng": -102,
"cca3": "MEX",
},
{
"name": "Zimbabwe",
"area": 390757,
"cioc": "ZIM",
"cca2": "ZW",
"capital": "Harare",
"lat": -20,
"lng": 30,
"cca3": "ZWE",
},
{
"name": "Saint Lucia",
"area": 616,
"cioc": "LCA",
"cca2": "LC",
"capital": "Castries",
"lat": 13.88333333,
"lng": -60.96666666,
"cca3": "LCA",
},
{
"name": "India",
"area": 3287590,
"cioc": "IND",
"cca2": "IN",
"capital": "New Delhi",
"lat": 20,
"lng": 77,
"cca3": "IND",
},
{
"name": "Latvia",
"area": 64559,
"cioc": "LAT",
"cca2": "LV",
"capital": "Riga",
"lat": 57,
"lng": 25,
"cca3": "LVA",
},
{
"name": "Bhutan",
"area": 38394,
"cioc": "BHU",
"cca2": "BT",
"capital": "Thimphu",
"lat": 27.5,
"lng": 90.5,
"cca3": "BTN",
},
{
"name": "Saint Vincent and the Grenadines",
"area": 389,
"cioc": "VIN",
"cca2": "VC",
"capital": "Kingstown",
"lat": 13.25,
"lng": -61.2,
"cca3": "VCT",
},
{
"name": "Vietnam",
"area": 331212,
"cioc": "VIE",
"cca2": "VN",
"capital": "Hanoi",
"lat": 16.16666666,
"lng": 107.83333333,
"cca3": "VNM",
},
{
"name": "Norway",
"area": 323802,
"cioc": "NOR",
"cca2": "NO",
"capital": "Oslo",
"lat": 62,
"lng": 10,
"cca3": "NOR",
},
{
"name": "Czech Republic",
"area": 78865,
"cioc": "CZE",
"cca2": "CZ",
"capital": "Prague",
"lat": 49.75,
"lng": 15.5,
"cca3": "CZE",
},
{
"name": "French Southern and Antarctic Lands",
"area": 7747,
"cioc": "",
"cca2": "TF",
"capital": "Port-aux-Francais",
"lat": -49.25,
"lng": 69.167,
"cca3": "ATF",
},
{
"name": "Antigua and Barbuda",
"area": 442,
"cioc": "ANT",
"cca2": "AG",
"capital": "Saint John's",
"lat": 17.05,
"lng": -61.8,
"cca3": "ATG",
},
{
"name": "Fiji",
"area": 18272,
"cioc": "FIJ",
"cca2": "FJ",
"capital": "Suva",
"lat": -18,
"lng": 175,
"cca3": "FJI",
},
{
"name": "British Indian Ocean Territory",
"area": 60,
"cioc": "",
"cca2": "IO",
"capital": "Diego Garcia",
"lat": -6,
"lng": 71.5,
"cca3": "IOT",
},
{
"name": "Honduras",
"area": 112492,
"cioc": "HON",
"cca2": "HN",
"capital": "Tegucigalpa",
"lat": 15,
"lng": -86.5,
"cca3": "HND",
},
{
"name": "Mauritius",
"area": 2040,
"cioc": "MRI",
"cca2": "MU",
"capital": "Port Louis",
"lat": -20.28333333,
"lng": 57.55,
"cca3": "MUS",
},
{
"name": "Antarctica",
"area": 14000000,
"cioc": "",
"cca2": "AQ",
"capital": "",
"lat": -90,
"lng": 0,
"cca3": "ATA",
},
{
"name": "Luxembourg",
"area": 2586,
"cioc": "LUX",
"cca2": "LU",
"capital": "Luxembourg",
"lat": 49.75,
"lng": 6.16666666,
"cca3": "LUX",
},
{
"name": "Israel",
"area": 20770,
"cioc": "ISR",
"cca2": "IL",
"capital": "Jerusalem",
"lat": 31.47,
"lng": 35.13,
"cca3": "ISR",
},
{
"name": "Micronesia",
"area": 702,
"cioc": "FSM",
"cca2": "FM",
"capital": "Palikir",
"lat": 6.91666666,
"lng": 158.25,
"cca3": "FSM",
},
{
"name": "Peru",
"area": 1285216,
"cioc": "PER",
"cca2": "PE",
"capital": "Lima",
"lat": -10,
"lng": -76,
"cca3": "PER",
},
{
"name": "Reunion",
"area": 2511,
"cioc": "",
"cca2": "RE",
"capital": "Saint-Denis",
"lat": -21.15,
"lng": 55.5,
"cca3": "REU",
},
{
"name": "Indonesia",
"area": 1904569,
"cioc": "INA",
"cca2": "ID",
"capital": "Jakarta",
"lat": -5,
"lng": 120,
"cca3": "IDN",
},
{
"name": "Vanuatu",
"area": 12189,
"cioc": "VAN",
"cca2": "VU",
"capital": "Port Vila",
"lat": -16,
"lng": 167,
"cca3": "VUT",
},
{
"name": "Macedonia",
"area": 25713,
"cioc": "MKD",
"cca2": "MK",
"capital": "Skopje",
"lat": 41.83333333,
"lng": 22,
"cca3": "MKD",
},
{
"name": "DR Congo",
"area": 2344858,
"cioc": "COD",
"cca2": "CD",
"capital": "Kinshasa",
"lat": 0,
"lng": 25,
"cca3": "COD",
},
{
"name": "Republic of the Congo",
"area": 342000,
"cioc": "CGO",
"cca2": "CG",
"capital": "Brazzaville",
"lat": -1,
"lng": 15,
"cca3": "COG",
},
{
"name": "Iceland",
"area": 103000,
"cioc": "ISL",
"cca2": "IS",
"capital": "Reykjavik",
"lat": 65,
"lng": -18,
"cca3": "ISL",
},
{
"name": "Guadeloupe",
"area": 1628,
"cioc": "",
"cca2": "GP",
"capital": "Basse-Terre",
"lat": 16.25,
"lng": -61.583333,
"cca3": "GLP",
},
{
"name": "Cook Islands",
"area": 236,
"cioc": "COK",
"cca2": "CK",
"capital": "Avarua",
"lat": -21.23333333,
"lng": -159.76666666,
"cca3": "COK",
},
{
"name": "Comoros",
"area": 1862,
"cioc": "COM",
"cca2": "KM",
"capital": "Moroni",
"lat": -12.16666666,
"lng": 44.25,
"cca3": "COM",
},
{
"name": "Colombia",
"area": 1141748,
"cioc": "COL",
"cca2": "CO",
"capital": "Bogota",
"lat": 4,
"lng": -72,
"cca3": "COL",
},
{
"name": "Nigeria",
"area": 923768,
"cioc": "NGR",
"cca2": "NG",
"capital": "Abuja",
"lat": 10,
"lng": 8,
"cca3": "NGA",
},
{
"name": "Timor-Leste",
"area": 14874,
"cioc": "TLS",
"cca2": "TL",
"capital": "Dili",
"lat": -8.83333333,
"lng": 125.91666666,
"cca3": "TLS",
},
{
"name": "Taiwan",
"area": 36193,
"cioc": "TPE",
"cca2": "TW",
"capital": "Taipei",
"lat": 23.5,
"lng": 121,
"cca3": "TWN",
},
{
"name": "Portugal",
"area": 92090,
"cioc": "POR",
"cca2": "PT",
"capital": "Lisbon",
"lat": 39.5,
"lng": -8,
"cca3": "PRT",
},
{
"name": "Moldova",
"area": 33846,
"cioc": "MDA",
"cca2": "MD",
"capital": "Chisinau",
"lat": 47,
"lng": 29,
"cca3": "MDA",
},
{
"name": "Guernsey",
"area": 78,
"cioc": "",
"cca2": "GG",
"capital": "St. Peter Port",
"lat": 49.46666666,
"lng": -2.58333333,
"cca3": "GGY",
},
{
"name": "Madagascar",
"area": 587041,
"cioc": "MAD",
"cca2": "MG",
"capital": "Antananarivo",
"lat": -20,
"lng": 47,
"cca3": "MDG",
},
{
"name": "Ecuador",
"area": 276841,
"cioc": "ECU",
"cca2": "EC",
"capital": "Quito",
"lat": -2,
"lng": -77.5,
"cca3": "ECU",
},
{
"name": "Senegal",
"area": 196722,
"cioc": "SEN",
"cca2": "SN",
"capital": "Dakar",
"lat": 14,
"lng": -14,
"cca3": "SEN",
},
{
"name": "New Zealand",
"area": 270467,
"cioc": "NZL",
"cca2": "NZ",
"capital": "Wellington",
"lat": -41,
"lng": 174,
"cca3": "NZL",
},
{
"name": "Maldives",
"area": 300,
"cioc": "MDV",
"cca2": "MV",
"capital": "Male",
"lat": 3.25,
"lng": 73,
"cca3": "MDV",
},
{
"name": "American Samoa",
"area": 199,
"cioc": "ASA",
"cca2": "AS",
"capital": "Pago Pago",
"lat": -14.33333333,
"lng": -170,
"cca3": "ASM",
},
{
"name": "Saint Pierre and Miquelon",
"area": 242,
"cioc": "",
"cca2": "PM",
"capital": "Saint-Pierre",
"lat": 46.83333333,
"lng": -56.33333333,
"cca3": "SPM",
},
{
"name": "Curacao",
"area": 444,
"cioc": "",
"cca2": "CW",
"capital": "Willemstad",
"lat": 12.116667,
"lng": -68.933333,
"cca3": "CUW",
},
{
"name": "France",
"area": 551695,
"cioc": "FRA",
"cca2": "FR",
"capital": "Paris",
"lat": 46,
"lng": 2,
"cca3": "FRA",
},
{
"name": "Lithuania",
"area": 65300,
"cioc": "LTU",
"cca2": "LT",
"capital": "Vilnius",
"lat": 56,
"lng": 24,
"cca3": "LTU",
},
{
"name": "Rwanda",
"area": 26338,
"cioc": "RWA",
"cca2": "RW",
"capital": "Kigali",
"lat": -2,
"lng": 30,
"cca3": "RWA",
},
{
"name": "Zambia",
"area": 752612,
"cioc": "ZAM",
"cca2": "ZM",
"capital": "Lusaka",
"lat": -15,
"lng": 30,
"cca3": "ZMB",
},
{
"name": "Gambia",
"area": 10689,
"cioc": "GAM",
"cca2": "GM",
"capital": "Banjul",
"lat": 13.46666666,
"lng": -16.56666666,
"cca3": "GMB",
},
{
"name": "Wallis and Futuna",
"area": 142,
"cioc": "",
"cca2": "WF",
"capital": "Mata-Utu",
"lat": -13.3,
"lng": -176.2,
"cca3": "WLF",
},
{
"name": "Jersey",
"area": 116,
"cioc": "",
"cca2": "JE",
"capital": "Saint Helier",
"lat": 49.25,
"lng": -2.16666666,
"cca3": "JEY",
},
{
"name": "Faroe Islands",
"area": 1393,
"cioc": "",
"cca2": "FO",
"capital": "Torshavn",
"lat": 62,
"lng": -7,
"cca3": "FRO",
},
{
"name": "Guatemala",
"area": 108889,
"cioc": "GUA",
"cca2": "GT",
"capital": "Guatemala City",
"lat": 15.5,
"lng": -90.25,
"cca3": "GTM",
},
{
"name": "Denmark",
"area": 43094,
"cioc": "DEN",
"cca2": "DK",
"capital": "Copenhagen",
"lat": 56,
"lng": 10,
"cca3": "DNK",
},
{
"name": "Isle of Man",
"area": 572,
"cioc": "",
"cca2": "IM",
"capital": "Douglas",
"lat": 54.25,
"lng": -4.5,
"cca3": "IMN",
},
{
"name": "Australia",
"area": 7692024,
"cioc": "AUS",
"cca2": "AU",
"capital": "Canberra",
"lat": -27,
"lng": 133,
"cca3": "AUS",
},
{
"name": "Austria",
"area": 83871,
"cioc": "AUT",
"cca2": "AT",
"capital": "Vienna",
"lat": 47.33333333,
"lng": 13.33333333,
"cca3": "AUT",
},
{
"name": "Svalbard and Jan Mayen",
"area": -1,
"cioc": "",
"cca2": "SJ",
"capital": "Longyearbyen",
"lat": 78,
"lng": 20,
"cca3": "SJM",
},
{
"name": "Venezuela",
"area": 916445,
"cioc": "VEN",
"cca2": "VE",
"capital": "Caracas",
"lat": 8,
"lng": -66,
"cca3": "VEN",
},
{
"name": "Kosovo",
"area": 10908,
"cioc": "KOS",
"cca2": "XK",
"capital": "Pristina",
"lat": 42.666667,
"lng": 21.166667,
"cca3": "UNK",
},
{
"name": "Palau",
"area": 459,
"cioc": "PLW",
"cca2": "PW",
"capital": "Ngerulmud",
"lat": 7.5,
"lng": 134.5,
"cca3": "PLW",
},
{
"name": "Kenya",
"area": 580367,
"cioc": "KEN",
"cca2": "KE",
"capital": "Nairobi",
"lat": 1,
"lng": 38,
"cca3": "KEN",
},
{
"name": "Samoa",
"area": 2842,
"cioc": "SAM",
"cca2": "WS",
"capital": "Apia",
"lat": -13.58333333,
"lng": -172.33333333,
"cca3": "WSM",
},
{
"name": "Turkey",
"area": 783562,
"cioc": "TUR",
"cca2": "TR",
"capital": "Ankara",
"lat": 39,
"lng": 35,
"cca3": "TUR",
},
{
"name": "Albania",
"area": 28748,
"cioc": "ALB",
"cca2": "AL",
"capital": "Tirana",
"lat": 41,
"lng": 20,
"cca3": "ALB",
},
{
"name": "Oman",
"area": 309500,
"cioc": "OMA",
"cca2": "OM",
"capital": "Muscat",
"lat": 21,
"lng": 57,
"cca3": "OMN",
},
{
"name": "Tuvalu",
"area": 26,
"cioc": "TUV",
"cca2": "TV",
"capital": "Funafuti",
"lat": -8,
"lng": 178,
"cca3": "TUV",
},
{
"name": "Aland Islands",
"area": 1580,
"cioc": "",
"cca2": "AX",
"capital": "Mariehamn",
"lat": 60.116667,
"lng": 19.9,
"cca3": "ALA",
},
{
"name": "Brunei",
"area": 5765,
"cioc": "BRU",
"cca2": "BN",
"capital": "Bandar Seri Begawan",
"lat": 4.5,
"lng": 114.66666666,
"cca3": "BRN",
},
{
"name": "Tunisia",
"area": 163610,
"cioc": "TUN",
"cca2": "TN",
"capital": "Tunis",
"lat": 34,
"lng": 9,
"cca3": "TUN",
},
{
"name": "Pitcairn Islands",
"area": 47,
"cioc": "",
"cca2": "PN",
"capital": "Adamstown",
"lat": -25.06666666,
"lng": -130.1,
"cca3": "PCN",
},
{
"name": "Barbados",
"area": 430,
"cioc": "BAR",
"cca2": "BB",
"capital": "Bridgetown",
"lat": 13.16666666,
"lng": -59.53333333,
"cca3": "BRB",
},
{
"name": "Brazil",
"area": 8515767,
"cioc": "BRA",
"cca2": "BR",
"capital": "Brasilia",
"lat": -10,
"lng": -55,
"cca3": "BRA",
},
{
"name": "Ivory Coast",
"area": 322463,
"cioc": "CIV",
"cca2": "CI",
"capital": "Yamoussoukro",
"lat": 8,
"lng": -5,
"cca3": "CIV",
},
{
"name": "Serbia",
"area": 88361,
"cioc": "SRB",
"cca2": "RS",
"capital": "Belgrade",
"lat": 44,
"lng": 21,
"cca3": "SRB",
},
{
"name": "Equatorial Guinea",
"area": 28051,
"cioc": "GEQ",
"cca2": "GQ",
"capital": "Malabo",
"lat": 2,
"lng": 10,
"cca3": "GNQ",
},
{
"name": "United States",
"area": 9372610,
"cioc": "USA",
"cca2": "US",
"capital": "Washington D.C.",
"lat": 38,
"lng": -97,
"cca3": "USA",
},
{
"name": "Qatar",
"area": 11586,
"cioc": "QAT",
"cca2": "QA",
"capital": "Doha",
"lat": 25.5,
"lng": 51.25,
"cca3": "QAT",
},
{
"name": "Sweden",
"area": 450295,
"cioc": "SWE",
"cca2": "SE",
"capital": "Stockholm",
"lat": 62,
"lng": 15,
"cca3": "SWE",
},
{
"name": "Azerbaijan",
"area": 86600,
"cioc": "AZE",
"cca2": "AZ",
"capital": "Baku",
"lat": 40.5,
"lng": 47.5,
"cca3": "AZE",
},
{
"name": "Guinea-Bissau",
"area": 36125,
"cioc": "GBS",
"cca2": "GW",
"capital": "Bissau",
"lat": 12,
"lng": -15,
"cca3": "GNB",
},
{
"name": "Swaziland",
"area": 17364,
"cioc": "SWZ",
"cca2": "SZ",
"capital": "Lobamba",
"lat": -26.5,
"lng": 31.5,
"cca3": "SWZ",
},
{
"name": "Tonga",
"area": 747,
"cioc": "TGA",
"cca2": "TO",
"capital": "Nuku'alofa",
"lat": -20,
"lng": -175,
"cca3": "TON",
},
{
"name": "Canada",
"area": 9984670,
"cioc": "CAN",
"cca2": "CA",
"capital": "Ottawa",
"lat": 60,
"lng": -95,
"cca3": "CAN",
},
{
"name": "Ukraine",
"area": 603500,
"cioc": "UKR",
"cca2": "UA",
"capital": "Kiev",
"lat": 49,
"lng": 32,
"cca3": "UKR",
},
{
"name": "South Korea",
"area": 100210,
"cioc": "KOR",
"cca2": "KR",
"capital": "Seoul",
"lat": 37,
"lng": 127.5,
"cca3": "KOR",
},
{
"name": "Anguilla",
"area": 91,
"cioc": "",
"cca2": "AI",
"capital": "The Valley",
"lat": 18.25,
"lng": -63.16666666,
"cca3": "AIA",
},
{
"name": "Central African Republic",
"area": 622984,
"cioc": "CAF",
"cca2": "CF",
"capital": "Bangui",
"lat": 7,
"lng": 21,
"cca3": "CAF",
},
{
"name": "Slovakia",
"area": 49037,
"cioc": "SVK",
"cca2": "SK",
"capital": "Bratislava",
"lat": 48.66666666,
"lng": 19.5,
"cca3": "SVK",
},
{
"name": "Cyprus",
"area": 9251,
"cioc": "CYP",
"cca2": "CY",
"capital": "Nicosia",
"lat": 35,
"lng": 33,
"cca3": "CYP",
},
{
"name": "Bosnia and Herzegovina",
"area": 51209,
"cioc": "BIH",
"cca2": "BA",
"capital": "Sarajevo",
"lat": 44,
"lng": 18,
"cca3": "BIH",
},
{
"name": "Singapore",
"area": 710,
"cioc": "SIN",
"cca2": "SG",
"capital": "Singapore",
"lat": 1.36666666,
"lng": 103.8,
"cca3": "SGP",
},
{
"name": "South Georgia",
"area": 3903,
"cioc": "",
"cca2": "GS",
"capital": "King Edward Point",
"lat": -54.5,
"lng": -37,
"cca3": "SGS",
},
{
"name": "Somalia",
"area": 637657,
"cioc": "SOM",
"cca2": "SO",
"capital": "Mogadishu",
"lat": 10,
"lng": 49,
"cca3": "SOM",
},
{
"name": "Uzbekistan",
"area": 447400,
"cioc": "UZB",
"cca2": "UZ",
"capital": "Tashkent",
"lat": 41,
"lng": 64,
"cca3": "UZB",
},
{
"name": "Eritrea",
"area": 117600,
"cioc": "ERI",
"cca2": "ER",
"capital": "Asmara",
"lat": 15,
"lng": 39,
"cca3": "ERI",
},
{
"name": "Poland",
"area": 312679,
"cioc": "POL",
"cca2": "PL",
"capital": "Warsaw",
"lat": 52,
"lng": 20,
"cca3": "POL",
},
{
"name": "Kuwait",
"area": 17818,
"cioc": "KUW",
"cca2": "KW",
"capital": "Kuwait City",
"lat": 29.5,
"lng": 45.75,
"cca3": "KWT",
},
{
"name": "Gabon",
"area": 267668,
"cioc": "GAB",
"cca2": "GA",
"capital": "Libreville",
"lat": -1,
"lng": 11.75,
"cca3": "GAB",
},
{
"name": "Cayman Islands",
"area": 264,
"cioc": "CAY",
"cca2": "KY",
"capital": "George Town",
"lat": 19.5,
"lng": -80.5,
"cca3": "CYM",
},
{
"name": "Vatican City",
"area": 0.44,
"cioc": "",
"cca2": "VA",
"capital": "Vatican City",
"lat": 41.9,
"lng": 12.45,
"cca3": "VAT",
},
{
"name": "Estonia",
"area": 45227,
"cioc": "EST",
"cca2": "EE",
"capital": "Tallinn",
"lat": 59,
"lng": 26,
"cca3": "EST",
},
{
"name": "Malawi",
"area": 118484,
"cioc": "MAW",
"cca2": "MW",
"capital": "Lilongwe",
"lat": -13.5,
"lng": 34,
"cca3": "MWI",
},
{
"name": "Spain",
"area": 505992,
"cioc": "ESP",
"cca2": "ES",
"capital": "Madrid",
"lat": 40,
"lng": -4,
"cca3": "ESP",
},
{
"name": "Iraq",
"area": 438317,
"cioc": "IRQ",
"cca2": "IQ",
"capital": "Baghdad",
"lat": 33,
"lng": 44,
"cca3": "IRQ",
},
{
"name": "El Salvador",
"area": 21041,
"cioc": "ESA",
"cca2": "SV",
"capital": "San Salvador",
"lat": 13.83333333,
"lng": -88.91666666,
"cca3": "SLV",
},
{
"name": "Mali",
"area": 1240192,
"cioc": "MLI",
"cca2": "ML",
"capital": "Bamako",
"lat": 17,
"lng": -4,
"cca3": "MLI",
},
{
"name": "Ireland",
"area": 70273,
"cioc": "IRL",
"cca2": "IE",
"capital": "Dublin",
"lat": 53,
"lng": -8,
"cca3": "IRL",
},
{
"name": "Iran",
"area": 1648195,
"cioc": "IRI",
"cca2": "IR",
"capital": "Tehran",
"lat": 32,
"lng": 53,
"cca3": "IRN",
},
{
"name": "Aruba",
"area": 180,
"cioc": "ARU",
"cca2": "AW",
"capital": "Oranjestad",
"lat": 12.5,
"lng": -69.96666666,
"cca3": "ABW",
},
{
"name": "Papua New Guinea",
"area": 462840,
"cioc": "PNG",
"cca2": "PG",
"capital": "Port Moresby",
"lat": -6,
"lng": 147,
"cca3": "PNG",
},
{
"name": "Panama",
"area": 75417,
"cioc": "PAN",
"cca2": "PA",
"capital": "Panama City",
"lat": 9,
"lng": -80,
"cca3": "PAN",
},
{
"name": "Sudan",
"area": 1886068,
"cioc": "SUD",
"cca2": "SD",
"capital": "Khartoum",
"lat": 15,
"lng": 30,
"cca3": "SDN",
},
{
"name": "Solomon Islands",
"area": 28896,
"cioc": "SOL",
"cca2": "SB",
"capital": "Honiara",
"lat": -8,
"lng": 159,
"cca3": "SLB",
},
{
"name": "Western Sahara",
"area": 266000,
"cioc": "",
"cca2": "EH",
"capital": "El Aaiun",
"lat": 24.5,
"lng": -13,
"cca3": "ESH",
},
{
"name": "Monaco",
"area": 2.02,
"cioc": "MON",
"cca2": "MC",
"capital": "Monaco",
"lat": 43.73333333,
"lng": 7.4,
"cca3": "MCO",
},
{
"name": "Italy",
"area": 301336,
"cioc": "ITA",
"cca2": "IT",
"capital": "Rome",
"lat": 42.83333333,
"lng": 12.83333333,
"cca3": "ITA",
},
{
"name": "Japan",
"area": 377930,
"cioc": "JPN",
"cca2": "JP",
"capital": "Tokyo",
"lat": 36,
"lng": 138,
"cca3": "JPN",
},
{
"name": "Kyrgyzstan",
"area": 199951,
"cioc": "KGZ",
"cca2": "KG",
"capital": "Bishkek",
"lat": 41,
"lng": 75,
"cca3": "KGZ",
},
{
"name": "Uganda",
"area": 241550,
"cioc": "UGA",
"cca2": "UG",
"capital": "Kampala",
"lat": 1,
"lng": 32,
"cca3": "UGA",
},
{
"name": "New Caledonia",
"area": 18575,
"cioc": "",
"cca2": "NC",
"capital": "Noumea",
"lat": -21.5,
"lng": 165.5,
"cca3": "NCL",
},
{
"name": "United Arab Emirates",
"area": 83600,
"cioc": "UAE",
"cca2": "AE",
"capital": "Abu Dhabi",
"lat": 24,
"lng": 54,
"cca3": "ARE",
},
{
"name": "Argentina",
"area": 2780400,
"cioc": "ARG",
"cca2": "AR",
"capital": "Buenos Aires",
"lat": -34,
"lng": -64,
"cca3": "ARG",
},
{
"name": "Bahamas",
"area": 13943,
"cioc": "BAH",
"cca2": "BS",
"capital": "Nassau",
"lat": 24.25,
"lng": -76,
"cca3": "BHS",
},
{
"name": "Bahrain",
"area": 765,
"cioc": "BRN",
"cca2": "BH",
"capital": "Manama",
"lat": 26,
"lng": 50.55,
"cca3": "BHR",
},
{
"name": "Armenia",
"area": 29743,
"cioc": "ARM",
"cca2": "AM",
"capital": "Yerevan",
"lat": 40,
"lng": 45,
"cca3": "ARM",
},
{
"name": "Nauru",
"area": 21,
"cioc": "NRU",
"cca2": "NR",
"capital": "Yaren",
"lat": -0.53333333,
"lng": 166.91666666,
"cca3": "NRU",
},
{
"name": "Cuba",
"area": 109884,
"cioc": "CUB",
"cca2": "CU",
"capital": "Havana",
"lat": 21.5,
"lng": -80,
"cca3": "CUB",
},
]
all_lookups: Dict[str, Dict[str, Dict[str, Any]]] = {}
lookups = ["cioc", "cca2", "cca3", "name"]
for lookup in lookups:
all_lookups[lookup] = {}
    for country in countries:
        # Note: entries sharing a value for this field (e.g. an empty
        # "cioc" code) collide on the same key; the last entry wins.
        all_lookups[lookup][country[lookup].lower()] = country
def get(field: str, symbol: str) -> Optional[Dict[str, Any]]:
    """
    Look up a country record by one of the supported fields
    ("cioc", "cca2", "cca3" or "name"). The symbol is matched
    case-insensitively; returns None if no match is found.
    """
return all_lookups[field].get(symbol.lower()) | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/examples/countries.py | 0.696887 | 0.666687 | countries.py | pypi |
import json
import textwrap
from typing import Dict, List, Tuple, Union
import pandas as pd
from flask_appbuilder.security.sqla.models import User
from sqlalchemy import DateTime, inspect, String
from sqlalchemy.sql import column
from superset import app, db, security_manager
from superset.connectors.sqla.models import SqlaTable, SqlMetric, TableColumn
from superset.exceptions import NoDataException
from superset.models.core import Database
from superset.models.dashboard import Dashboard
from superset.models.slice import Slice
from ..utils.database import get_example_database
from .helpers import (
get_example_data,
get_slice_json,
get_table_connector_registry,
merge_slice,
misc_dash_slices,
update_slice_ids,
)
def get_admin_user() -> User:
admin = security_manager.find_user("admin")
if admin is None:
raise NoDataException(
"Admin user does not exist. "
"Please, check if test users are properly loaded "
"(`superset load_test_users`)."
)
return admin
def gen_filter(
    subject: str, comparator: str, operator: str = "=="
) -> Dict[str, Union[bool, str]]:
    """Build a SIMPLE adhoc-filter clause for a slice definition"""
    return {
        "clause": "WHERE",
        "comparator": comparator,
        "expressionType": "SIMPLE",
        "operator": operator,
        "subject": subject,
    }
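# Illustrative usage (comment only, so nothing runs at import time):
#
#     gen_filter("gender", "girl")
#     # -> {"clause": "WHERE", "comparator": "girl",
#     #     "expressionType": "SIMPLE", "operator": "==",
#     #     "subject": "gender"}
#
# The resulting dict is passed as an entry of `adhoc_filters` in the
# slice params built below.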
def load_data(tbl_name: str, database: Database, sample: bool = False) -> None:
pdf = pd.read_json(get_example_data("birth_names2.json.gz"))
# TODO(bkyryliuk): move load examples data into the pytest fixture
if database.backend == "presto":
pdf.ds = pd.to_datetime(pdf.ds, unit="ms")
        pdf.ds = pdf.ds.dt.strftime("%Y-%m-%d %H:%M:%S")
else:
pdf.ds = pd.to_datetime(pdf.ds, unit="ms")
pdf = pdf.head(100) if sample else pdf
engine = database.get_sqla_engine()
schema = inspect(engine).default_schema_name
pdf.to_sql(
tbl_name,
        engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={
# TODO(bkyryliuk): use TIMESTAMP type for presto
"ds": DateTime if database.backend != "presto" else String(255),
"gender": String(16),
"state": String(10),
"name": String(255),
},
method="multi",
index=False,
)
print("Done loading table!")
print("-" * 80)
def load_birth_names(
only_metadata: bool = False, force: bool = False, sample: bool = False
) -> None:
"""Loading birth name dataset from a zip file in the repo"""
database = get_example_database()
engine = database.get_sqla_engine()
schema = inspect(engine).default_schema_name
tbl_name = "birth_names"
table_exists = database.has_table_by_name(tbl_name, schema=schema)
if not only_metadata and (not table_exists or force):
load_data(tbl_name, database, sample=sample)
table = get_table_connector_registry()
obj = db.session.query(table).filter_by(table_name=tbl_name, schema=schema).first()
if not obj:
print(f"Creating table [{tbl_name}] reference")
obj = table(table_name=tbl_name, schema=schema)
db.session.add(obj)
_set_table_metadata(obj, database)
_add_table_metrics(obj)
db.session.commit()
slices, _ = create_slices(obj, admin_owner=True)
create_dashboard(slices)
def _set_table_metadata(datasource: SqlaTable, database: "Database") -> None:
datasource.main_dttm_col = "ds"
datasource.database = database
datasource.filter_select_enabled = True
datasource.fetch_metadata()
def _add_table_metrics(datasource: SqlaTable) -> None:
if not any(col.column_name == "num_california" for col in datasource.columns):
col_state = str(column("state").compile(db.engine))
col_num = str(column("num").compile(db.engine))
datasource.columns.append(
TableColumn(
column_name="num_california",
expression=f"CASE WHEN {col_state} = 'CA' THEN {col_num} ELSE 0 END",
)
)
if not any(col.metric_name == "sum__num" for col in datasource.metrics):
col = str(column("num").compile(db.engine))
datasource.metrics.append(
SqlMetric(metric_name="sum__num", expression=f"SUM({col})")
)
for col in datasource.columns:
if col.column_name == "ds":
col.is_dttm = True
break
def create_slices(tbl: SqlaTable, admin_owner: bool) -> Tuple[List[Slice], List[Slice]]:
metrics = [
{
"expressionType": "SIMPLE",
"column": {"column_name": "num", "type": "BIGINT"},
"aggregate": "SUM",
"label": "Births",
"optionName": "metric_11",
}
]
metric = "sum__num"
defaults = {
"compare_lag": "10",
"compare_suffix": "o10Y",
"limit": "25",
"time_range": "No filter",
"time_range_endpoints": ["inclusive", "exclusive"],
"granularity_sqla": "ds",
"groupby": [],
"row_limit": app.config["ROW_LIMIT"],
"since": "100 years ago",
"until": "now",
"viz_type": "table",
"markup_type": "markdown",
}
default_query_context = {
"result_format": "json",
"result_type": "full",
"datasource": {"id": tbl.id, "type": "table",},
"queries": [{"columns": [], "metrics": [],},],
}
admin = get_admin_user()
if admin_owner:
slice_props = dict(
datasource_id=tbl.id,
datasource_type="table",
owners=[admin],
created_by=admin,
)
else:
slice_props = dict(
datasource_id=tbl.id, datasource_type="table", owners=[], created_by=admin
)
print("Creating some slices")
slices = [
Slice(
**slice_props,
slice_name="Participants",
viz_type="big_number",
params=get_slice_json(
defaults,
viz_type="big_number",
granularity_sqla="ds",
compare_lag="5",
compare_suffix="over 5Y",
metric=metric,
),
),
Slice(
**slice_props,
slice_name="Genders",
viz_type="pie",
params=get_slice_json(
defaults, viz_type="pie", groupby=["gender"], metric=metric
),
),
Slice(
**slice_props,
slice_name="Trends",
viz_type="line",
params=get_slice_json(
defaults,
viz_type="line",
groupby=["name"],
granularity_sqla="ds",
rich_tooltip=True,
show_legend=True,
metrics=metrics,
),
),
Slice(
**slice_props,
slice_name="Genders by State",
viz_type="dist_bar",
params=get_slice_json(
defaults,
adhoc_filters=[
{
"clause": "WHERE",
"expressionType": "SIMPLE",
"filterOptionName": "2745eae5",
"comparator": ["other"],
"operator": "NOT IN",
"subject": "state",
}
],
viz_type="dist_bar",
metrics=[
{
"expressionType": "SIMPLE",
"column": {"column_name": "num_boys", "type": "BIGINT(20)"},
"aggregate": "SUM",
"label": "Boys",
"optionName": "metric_11",
},
{
"expressionType": "SIMPLE",
"column": {"column_name": "num_girls", "type": "BIGINT(20)"},
"aggregate": "SUM",
"label": "Girls",
"optionName": "metric_12",
},
],
groupby=["state"],
),
),
Slice(
**slice_props,
slice_name="Girls",
viz_type="table",
params=get_slice_json(
defaults,
groupby=["name"],
adhoc_filters=[gen_filter("gender", "girl")],
row_limit=50,
timeseries_limit_metric=metric,
metrics=[metric],
),
),
Slice(
**slice_props,
slice_name="Girl Name Cloud",
viz_type="word_cloud",
params=get_slice_json(
defaults,
viz_type="word_cloud",
size_from="10",
series="name",
size_to="70",
rotation="square",
limit="100",
adhoc_filters=[gen_filter("gender", "girl")],
metric=metric,
),
),
Slice(
**slice_props,
slice_name="Boys",
viz_type="table",
params=get_slice_json(
defaults,
groupby=["name"],
adhoc_filters=[gen_filter("gender", "boy")],
row_limit=50,
timeseries_limit_metric=metric,
metrics=[metric],
),
),
Slice(
**slice_props,
slice_name="Boy Name Cloud",
viz_type="word_cloud",
params=get_slice_json(
defaults,
viz_type="word_cloud",
size_from="10",
series="name",
size_to="70",
rotation="square",
limit="100",
adhoc_filters=[gen_filter("gender", "boy")],
metric=metric,
),
),
Slice(
**slice_props,
slice_name="Top 10 Girl Name Share",
viz_type="area",
params=get_slice_json(
defaults,
adhoc_filters=[gen_filter("gender", "girl")],
comparison_type="values",
groupby=["name"],
limit=10,
stacked_style="expand",
time_grain_sqla="P1D",
viz_type="area",
                x_axis_format="smart_date",
metrics=metrics,
),
),
Slice(
**slice_props,
slice_name="Top 10 Boy Name Share",
viz_type="area",
params=get_slice_json(
defaults,
adhoc_filters=[gen_filter("gender", "boy")],
comparison_type="values",
groupby=["name"],
limit=10,
stacked_style="expand",
time_grain_sqla="P1D",
viz_type="area",
                x_axis_format="smart_date",
metrics=metrics,
),
),
Slice(
**slice_props,
slice_name="Pivot Table v2",
viz_type="pivot_table_v2",
params=get_slice_json(
defaults,
viz_type="pivot_table_v2",
groupbyRows=["name"],
groupbyColumns=["state"],
metrics=[metric],
),
query_context=get_slice_json(
default_query_context,
                queries=[{"columns": ["name", "state"], "metrics": [metric]}],
),
),
]
misc_slices = [
Slice(
**slice_props,
slice_name="Average and Sum Trends",
viz_type="dual_line",
params=get_slice_json(
defaults,
viz_type="dual_line",
metric={
"expressionType": "SIMPLE",
"column": {"column_name": "num", "type": "BIGINT(20)"},
"aggregate": "AVG",
"label": "AVG(num)",
"optionName": "metric_vgops097wej_g8uff99zhk7",
},
metric_2="sum__num",
granularity_sqla="ds",
metrics=metrics,
),
),
Slice(
**slice_props,
slice_name="Num Births Trend",
viz_type="line",
params=get_slice_json(defaults, viz_type="line", metrics=metrics),
),
Slice(
**slice_props,
slice_name="Daily Totals",
viz_type="table",
params=get_slice_json(
defaults,
groupby=["ds"],
since="40 years ago",
until="now",
viz_type="table",
metrics=metrics,
),
),
Slice(
**slice_props,
slice_name="Number of California Births",
viz_type="big_number_total",
params=get_slice_json(
defaults,
metric={
"expressionType": "SIMPLE",
"column": {
"column_name": "num_california",
"expression": "CASE WHEN state = 'CA' THEN num ELSE 0 END",
},
"aggregate": "SUM",
"label": "SUM(num_california)",
},
viz_type="big_number_total",
granularity_sqla="ds",
),
),
Slice(
**slice_props,
slice_name="Top 10 California Names Timeseries",
viz_type="line",
params=get_slice_json(
defaults,
metrics=[
{
"expressionType": "SIMPLE",
"column": {
"column_name": "num_california",
"expression": "CASE WHEN state = 'CA' THEN num ELSE 0 END",
},
"aggregate": "SUM",
"label": "SUM(num_california)",
}
],
viz_type="line",
granularity_sqla="ds",
groupby=["name"],
timeseries_limit_metric={
"expressionType": "SIMPLE",
"column": {
"column_name": "num_california",
"expression": "CASE WHEN state = 'CA' THEN num ELSE 0 END",
},
"aggregate": "SUM",
"label": "SUM(num_california)",
},
limit="10",
),
),
Slice(
**slice_props,
slice_name="Names Sorted by Num in California",
viz_type="table",
params=get_slice_json(
defaults,
metrics=metrics,
groupby=["name"],
row_limit=50,
timeseries_limit_metric={
"expressionType": "SIMPLE",
"column": {
"column_name": "num_california",
"expression": "CASE WHEN state = 'CA' THEN num ELSE 0 END",
},
"aggregate": "SUM",
"label": "SUM(num_california)",
},
),
),
Slice(
**slice_props,
slice_name="Number of Girls",
viz_type="big_number_total",
params=get_slice_json(
defaults,
metric=metric,
viz_type="big_number_total",
granularity_sqla="ds",
adhoc_filters=[gen_filter("gender", "girl")],
subheader="total female participants",
),
),
Slice(
**slice_props,
slice_name="Pivot Table",
viz_type="pivot_table",
params=get_slice_json(
defaults,
viz_type="pivot_table",
groupby=["name"],
columns=["state"],
metrics=metrics,
),
),
]
for slc in slices:
merge_slice(slc)
for slc in misc_slices:
merge_slice(slc)
misc_dash_slices.add(slc.slice_name)
return slices, misc_slices
def create_dashboard(slices: List[Slice]) -> Dashboard:
print("Creating a dashboard")
admin = get_admin_user()
dash = db.session.query(Dashboard).filter_by(slug="births").first()
if not dash:
dash = Dashboard()
dash.owners = [admin]
dash.created_by = admin
db.session.add(dash)
dash.published = True
dash.json_metadata = textwrap.dedent(
"""\
{
"label_colors": {
"Girls": "#FF69B4",
"Boys": "#ADD8E6",
"girl": "#FF69B4",
"boy": "#ADD8E6"
}
}"""
)
# pylint: disable=line-too-long
pos = json.loads(
textwrap.dedent(
"""\
{
"CHART-6GdlekVise": {
"children": [],
"id": "CHART-6GdlekVise",
"meta": {
"chartId": 5547,
"height": 50,
"sliceName": "Top 10 Girl Name Share",
"width": 5
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-eh0w37bWbR"
],
"type": "CHART"
},
"CHART-6n9jxb30JG": {
"children": [],
"id": "CHART-6n9jxb30JG",
"meta": {
"chartId": 5540,
"height": 36,
"sliceName": "Genders by State",
"width": 5
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW--EyBZQlDi"
],
"type": "CHART"
},
"CHART-Jj9qh1ol-N": {
"children": [],
"id": "CHART-Jj9qh1ol-N",
"meta": {
"chartId": 5545,
"height": 50,
"sliceName": "Boy Name Cloud",
"width": 4
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-kzWtcvo8R1"
],
"type": "CHART"
},
"CHART-ODvantb_bF": {
"children": [],
"id": "CHART-ODvantb_bF",
"meta": {
"chartId": 5548,
"height": 50,
"sliceName": "Top 10 Boy Name Share",
"width": 5
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-kzWtcvo8R1"
],
"type": "CHART"
},
"CHART-PAXUUqwmX9": {
"children": [],
"id": "CHART-PAXUUqwmX9",
"meta": {
"chartId": 5538,
"height": 34,
"sliceName": "Genders",
"width": 3
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-2n0XgiHDgs"
],
"type": "CHART"
},
"CHART-_T6n_K9iQN": {
"children": [],
"id": "CHART-_T6n_K9iQN",
"meta": {
"chartId": 5539,
"height": 36,
"sliceName": "Trends",
"width": 7
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW--EyBZQlDi"
],
"type": "CHART"
},
"CHART-eNY0tcE_ic": {
"children": [],
"id": "CHART-eNY0tcE_ic",
"meta": {
"chartId": 5537,
"height": 34,
"sliceName": "Participants",
"width": 3
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-2n0XgiHDgs"
],
"type": "CHART"
},
"CHART-g075mMgyYb": {
"children": [],
"id": "CHART-g075mMgyYb",
"meta": {
"chartId": 5541,
"height": 50,
"sliceName": "Girls",
"width": 3
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-eh0w37bWbR"
],
"type": "CHART"
},
"CHART-n-zGGE6S1y": {
"children": [],
"id": "CHART-n-zGGE6S1y",
"meta": {
"chartId": 5542,
"height": 50,
"sliceName": "Girl Name Cloud",
"width": 4
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-eh0w37bWbR"
],
"type": "CHART"
},
"CHART-vJIPjmcbD3": {
"children": [],
"id": "CHART-vJIPjmcbD3",
"meta": {
"chartId": 5543,
"height": 50,
"sliceName": "Boys",
"width": 3
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-kzWtcvo8R1"
],
"type": "CHART"
},
"DASHBOARD_VERSION_KEY": "v2",
"GRID_ID": {
"children": [
"ROW-2n0XgiHDgs",
"ROW--EyBZQlDi",
"ROW-eh0w37bWbR",
"ROW-kzWtcvo8R1"
],
"id": "GRID_ID",
"parents": [
"ROOT_ID"
],
"type": "GRID"
},
"HEADER_ID": {
"id": "HEADER_ID",
"meta": {
"text": "Births"
},
"type": "HEADER"
},
"MARKDOWN-zaflB60tbC": {
"children": [],
"id": "MARKDOWN-zaflB60tbC",
"meta": {
"code": "<div style=\\"text-align:center\\"> <h1>Birth Names Dashboard</h1> <img src=\\"/static/assets/images/babies.png\\" style=\\"width:50%;\\"></div>",
"height": 34,
"width": 6
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-2n0XgiHDgs"
],
"type": "MARKDOWN"
},
"ROOT_ID": {
"children": [
"GRID_ID"
],
"id": "ROOT_ID",
"type": "ROOT"
},
"ROW--EyBZQlDi": {
"children": [
"CHART-_T6n_K9iQN",
"CHART-6n9jxb30JG"
],
"id": "ROW--EyBZQlDi",
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"parents": [
"ROOT_ID",
"GRID_ID"
],
"type": "ROW"
},
"ROW-2n0XgiHDgs": {
"children": [
"CHART-eNY0tcE_ic",
"MARKDOWN-zaflB60tbC",
"CHART-PAXUUqwmX9"
],
"id": "ROW-2n0XgiHDgs",
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"parents": [
"ROOT_ID",
"GRID_ID"
],
"type": "ROW"
},
"ROW-eh0w37bWbR": {
"children": [
"CHART-g075mMgyYb",
"CHART-n-zGGE6S1y",
"CHART-6GdlekVise"
],
"id": "ROW-eh0w37bWbR",
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"parents": [
"ROOT_ID",
"GRID_ID"
],
"type": "ROW"
},
"ROW-kzWtcvo8R1": {
"children": [
"CHART-vJIPjmcbD3",
"CHART-Jj9qh1ol-N",
"CHART-ODvantb_bF"
],
"id": "ROW-kzWtcvo8R1",
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"parents": [
"ROOT_ID",
"GRID_ID"
],
"type": "ROW"
}
}
"""
)
)
# pylint: enable=line-too-long
    # dashboard v2 doesn't allow adding markup slices
dash.slices = [slc for slc in slices if slc.viz_type != "markup"]
update_slice_ids(pos, dash.slices)
dash.dashboard_title = "USA Births Names"
dash.position_json = json.dumps(pos, indent=4)
dash.slug = "births"
db.session.commit()
return dash | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/examples/birth_names.py | 0.414662 | 0.214301 | birth_names.py | pypi |
"""Loads datasets, dashboards and slices in a new superset instance"""
import json
import os
from typing import List
import pandas as pd
from sqlalchemy import DateTime, inspect, String
from sqlalchemy.sql import column
import superset.utils.database
from superset import app, db
from superset.connectors.sqla.models import SqlMetric
from superset.models.dashboard import Dashboard
from superset.models.slice import Slice
from superset.utils import core as utils
from ..connectors.base.models import BaseDatasource
from .helpers import (
get_example_data,
get_examples_folder,
get_slice_json,
get_table_connector_registry,
merge_slice,
misc_dash_slices,
update_slice_ids,
)
def load_world_bank_health_n_pop( # pylint: disable=too-many-locals, too-many-statements
only_metadata: bool = False, force: bool = False, sample: bool = False,
) -> None:
"""Loads the world bank health dataset, slices and a dashboard"""
tbl_name = "wb_health_population"
database = superset.utils.database.get_example_database()
engine = database.get_sqla_engine()
schema = inspect(engine).default_schema_name
    table_exists = database.has_table_by_name(tbl_name, schema=schema)
if not only_metadata and (not table_exists or force):
data = get_example_data("countries.json.gz")
pdf = pd.read_json(data)
pdf.columns = [col.replace(".", "_") for col in pdf.columns]
if database.backend == "presto":
pdf.year = pd.to_datetime(pdf.year)
            pdf.year = pdf.year.dt.strftime("%Y-%m-%d %H:%M:%S")
else:
pdf.year = pd.to_datetime(pdf.year)
pdf = pdf.head(100) if sample else pdf
pdf.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=50,
dtype={
# TODO(bkyryliuk): use TIMESTAMP type for presto
"year": DateTime if database.backend != "presto" else String(255),
"country_code": String(3),
"country_name": String(255),
"region": String(255),
},
method="multi",
index=False,
)
print("Creating table [wb_health_population] reference")
table = get_table_connector_registry()
tbl = db.session.query(table).filter_by(table_name=tbl_name).first()
if not tbl:
tbl = table(table_name=tbl_name, schema=schema)
tbl.description = utils.readfile(
os.path.join(get_examples_folder(), "countries.md")
)
tbl.main_dttm_col = "year"
tbl.database = database
tbl.filter_select_enabled = True
metrics = [
"sum__SP_POP_TOTL",
"sum__SH_DYN_AIDS",
"sum__SH_DYN_AIDS",
"sum__SP_RUR_TOTL_ZS",
"sum__SP_DYN_LE00_IN",
"sum__SP_RUR_TOTL",
]
for metric in metrics:
if not any(col.metric_name == metric for col in tbl.metrics):
aggr_func = metric[:3]
col = str(column(metric[5:]).compile(db.engine))
tbl.metrics.append(
SqlMetric(metric_name=metric, expression=f"{aggr_func}({col})")
)
db.session.merge(tbl)
db.session.commit()
tbl.fetch_metadata()
slices = create_slices(tbl)
misc_dash_slices.add(slices[-1].slice_name)
for slc in slices:
merge_slice(slc)
print("Creating a World's Health Bank dashboard")
dash_name = "World Bank's Data"
slug = "world_health"
dash = db.session.query(Dashboard).filter_by(slug=slug).first()
if not dash:
dash = Dashboard()
dash.published = True
pos = dashboard_positions
update_slice_ids(pos, slices)
dash.dashboard_title = dash_name
dash.position_json = json.dumps(pos, indent=4)
dash.slug = slug
dash.slices = slices[:-1]
db.session.merge(dash)
db.session.commit()
def create_slices(tbl: BaseDatasource) -> List[Slice]:
metric = "sum__SP_POP_TOTL"
metrics = ["sum__SP_POP_TOTL"]
secondary_metric = {
"aggregate": "SUM",
"column": {
"column_name": "SP_RUR_TOTL",
"optionName": "_col_SP_RUR_TOTL",
"type": "DOUBLE",
},
"expressionType": "SIMPLE",
"hasCustomLabel": True,
"label": "Rural Population",
}
defaults = {
"compare_lag": "10",
"compare_suffix": "o10Y",
"limit": "25",
"granularity_sqla": "year",
"groupby": [],
"row_limit": app.config["ROW_LIMIT"],
"since": "2014-01-01",
"until": "2014-01-02",
"time_range": "2014-01-01 : 2014-01-02",
"time_range_endpoints": ["inclusive", "exclusive"],
"markup_type": "markdown",
"country_fieldtype": "cca3",
"entity": "country_code",
"show_bubbles": True,
}
return [
Slice(
slice_name="Region Filter",
viz_type="filter_box",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(
defaults,
viz_type="filter_box",
date_filter=False,
filter_configs=[
{
"asc": False,
"clearable": True,
"column": "region",
"key": "2s98dfu",
"metric": "sum__SP_POP_TOTL",
"multiple": False,
},
{
"asc": False,
"clearable": True,
"key": "li3j2lk",
"column": "country_name",
"metric": "sum__SP_POP_TOTL",
"multiple": True,
},
],
),
),
Slice(
slice_name="World's Population",
viz_type="big_number",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(
defaults,
since="2000",
viz_type="big_number",
compare_lag="10",
metric="sum__SP_POP_TOTL",
compare_suffix="over 10Y",
),
),
Slice(
slice_name="Most Populated Countries",
viz_type="table",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(
defaults,
viz_type="table",
metrics=["sum__SP_POP_TOTL"],
groupby=["country_name"],
),
),
Slice(
slice_name="Growth Rate",
viz_type="line",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(
defaults,
viz_type="line",
since="1960-01-01",
metrics=["sum__SP_POP_TOTL"],
num_period_compare="10",
groupby=["country_name"],
),
),
Slice(
slice_name="% Rural",
viz_type="world_map",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(
defaults,
viz_type="world_map",
metric="sum__SP_RUR_TOTL_ZS",
num_period_compare="10",
secondary_metric=secondary_metric,
),
),
Slice(
slice_name="Life Expectancy VS Rural %",
viz_type="bubble",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(
defaults,
viz_type="bubble",
since="2011-01-01",
until="2011-01-02",
series="region",
limit=0,
entity="country_name",
x="sum__SP_RUR_TOTL_ZS",
y="sum__SP_DYN_LE00_IN",
size="sum__SP_POP_TOTL",
max_bubble_size="50",
adhoc_filters=[
{
"clause": "WHERE",
"expressionType": "SIMPLE",
"filterOptionName": "2745eae5",
"comparator": [
"TCA",
"MNP",
"DMA",
"MHL",
"MCO",
"SXM",
"CYM",
"TUV",
"IMY",
"KNA",
"ASM",
"ADO",
"AMA",
"PLW",
],
"operator": "NOT IN",
"subject": "country_code",
}
],
),
),
Slice(
slice_name="Rural Breakdown",
viz_type="sunburst",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(
defaults,
viz_type="sunburst",
groupby=["region", "country_name"],
since="2011-01-01",
until="2011-01-02",
metric=metric,
secondary_metric=secondary_metric,
),
),
Slice(
slice_name="World's Pop Growth",
viz_type="area",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(
defaults,
since="1960-01-01",
until="now",
viz_type="area",
groupby=["region"],
metrics=metrics,
),
),
Slice(
slice_name="Box plot",
viz_type="box_plot",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(
defaults,
since="1960-01-01",
until="now",
whisker_options="Min/max (no outliers)",
x_ticks_layout="staggered",
viz_type="box_plot",
groupby=["region"],
metrics=metrics,
),
),
Slice(
slice_name="Treemap",
viz_type="treemap",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(
defaults,
since="1960-01-01",
until="now",
viz_type="treemap",
metrics=["sum__SP_POP_TOTL"],
groupby=["region", "country_code"],
),
),
Slice(
slice_name="Parallel Coordinates",
viz_type="para",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(
defaults,
since="2011-01-01",
until="2012-01-01",
viz_type="para",
limit=100,
metrics=["sum__SP_POP_TOTL", "sum__SP_RUR_TOTL_ZS", "sum__SH_DYN_AIDS"],
secondary_metric="sum__SP_POP_TOTL",
series="country_name",
),
),
]
dashboard_positions = {
"CHART-36bfc934": {
"children": [],
"id": "CHART-36bfc934",
"meta": {"chartId": 40, "height": 25, "sliceName": "Region Filter", "width": 2},
"type": "CHART",
},
"CHART-37982887": {
"children": [],
"id": "CHART-37982887",
"meta": {
"chartId": 41,
"height": 25,
"sliceName": "World's Population",
"width": 2,
},
"type": "CHART",
},
"CHART-17e0f8d8": {
"children": [],
"id": "CHART-17e0f8d8",
"meta": {
"chartId": 42,
"height": 92,
"sliceName": "Most Populated Countries",
"width": 3,
},
"type": "CHART",
},
"CHART-2ee52f30": {
"children": [],
"id": "CHART-2ee52f30",
"meta": {"chartId": 43, "height": 38, "sliceName": "Growth Rate", "width": 6},
"type": "CHART",
},
"CHART-2d5b6871": {
"children": [],
"id": "CHART-2d5b6871",
"meta": {"chartId": 44, "height": 52, "sliceName": "% Rural", "width": 7},
"type": "CHART",
},
"CHART-0fd0d252": {
"children": [],
"id": "CHART-0fd0d252",
"meta": {
"chartId": 45,
"height": 50,
"sliceName": "Life Expectancy VS Rural %",
"width": 8,
},
"type": "CHART",
},
"CHART-97f4cb48": {
"children": [],
"id": "CHART-97f4cb48",
"meta": {
"chartId": 46,
"height": 38,
"sliceName": "Rural Breakdown",
"width": 3,
},
"type": "CHART",
},
"CHART-b5e05d6f": {
"children": [],
"id": "CHART-b5e05d6f",
"meta": {
"chartId": 47,
"height": 50,
"sliceName": "World's Pop Growth",
"width": 4,
},
"type": "CHART",
},
"CHART-e76e9f5f": {
"children": [],
"id": "CHART-e76e9f5f",
"meta": {"chartId": 48, "height": 50, "sliceName": "Box plot", "width": 4},
"type": "CHART",
},
"CHART-a4808bba": {
"children": [],
"id": "CHART-a4808bba",
"meta": {"chartId": 49, "height": 50, "sliceName": "Treemap", "width": 8},
"type": "CHART",
},
"CHART-3nc0d8sk": {
"children": [],
"id": "CHART-3nc0d8sk",
"meta": {"chartId": 50, "height": 50, "sliceName": "Treemap", "width": 8},
"type": "CHART",
},
"COLUMN-071bbbad": {
"children": ["ROW-1e064e3c", "ROW-afdefba9"],
"id": "COLUMN-071bbbad",
"meta": {"background": "BACKGROUND_TRANSPARENT", "width": 9},
"type": "COLUMN",
},
"COLUMN-fe3914b8": {
"children": ["CHART-36bfc934", "CHART-37982887"],
"id": "COLUMN-fe3914b8",
"meta": {"background": "BACKGROUND_TRANSPARENT", "width": 2},
"type": "COLUMN",
},
"GRID_ID": {
"children": ["ROW-46632bc2", "ROW-3fa26c5d", "ROW-812b3f13"],
"id": "GRID_ID",
"type": "GRID",
},
"HEADER_ID": {
"id": "HEADER_ID",
"meta": {"text": "World's Bank Data"},
"type": "HEADER",
},
"ROOT_ID": {"children": ["GRID_ID"], "id": "ROOT_ID", "type": "ROOT"},
"ROW-1e064e3c": {
"children": ["COLUMN-fe3914b8", "CHART-2d5b6871"],
"id": "ROW-1e064e3c",
"meta": {"background": "BACKGROUND_TRANSPARENT"},
"type": "ROW",
},
"ROW-3fa26c5d": {
"children": ["CHART-b5e05d6f", "CHART-0fd0d252"],
"id": "ROW-3fa26c5d",
"meta": {"background": "BACKGROUND_TRANSPARENT"},
"type": "ROW",
},
"ROW-46632bc2": {
"children": ["COLUMN-071bbbad", "CHART-17e0f8d8"],
"id": "ROW-46632bc2",
"meta": {"background": "BACKGROUND_TRANSPARENT"},
"type": "ROW",
},
"ROW-812b3f13": {
"children": ["CHART-a4808bba", "CHART-e76e9f5f"],
"id": "ROW-812b3f13",
"meta": {"background": "BACKGROUND_TRANSPARENT"},
"type": "ROW",
},
"ROW-afdefba9": {
"children": ["CHART-2ee52f30", "CHART-97f4cb48"],
"id": "ROW-afdefba9",
"meta": {"background": "BACKGROUND_TRANSPARENT"},
"type": "ROW",
},
"DASHBOARD_VERSION_KEY": "v2",
} | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/examples/world_bank.py | 0.571049 | 0.282358 | world_bank.py | pypi |
import json
from superset import db
from superset.models.dashboard import Dashboard
from superset.models.slice import Slice
from .helpers import (
get_slice_json,
get_table_connector_registry,
merge_slice,
update_slice_ids,
)
COLOR_RED = {"r": 205, "g": 0, "b": 3, "a": 0.82}
POSITION_JSON = """\
{
"CHART-3afd9d70": {
"meta": {
"chartId": 66,
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-3afd9d70",
"children": []
},
"CHART-2ee7fa5e": {
"meta": {
"chartId": 67,
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-2ee7fa5e",
"children": []
},
"CHART-201f7715": {
"meta": {
"chartId": 68,
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-201f7715",
"children": []
},
"CHART-d02f6c40": {
"meta": {
"chartId": 69,
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-d02f6c40",
"children": []
},
"CHART-2673431d": {
"meta": {
"chartId": 70,
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-2673431d",
"children": []
},
"CHART-85265a60": {
"meta": {
"chartId": 71,
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-85265a60",
"children": []
},
"CHART-2b87513c": {
"meta": {
"chartId": 72,
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-2b87513c",
"children": []
},
"GRID_ID": {
"type": "GRID",
"id": "GRID_ID",
"children": [
"ROW-a7b16cb5",
"ROW-72c218a5",
"ROW-957ba55b",
"ROW-af041bdd"
]
},
"HEADER_ID": {
"meta": {
"text": "deck.gl Demo"
},
"type": "HEADER",
"id": "HEADER_ID"
},
"ROOT_ID": {
"type": "ROOT",
"id": "ROOT_ID",
"children": [
"GRID_ID"
]
},
"ROW-72c218a5": {
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"type": "ROW",
"id": "ROW-72c218a5",
"children": [
"CHART-d02f6c40",
"CHART-201f7715"
]
},
"ROW-957ba55b": {
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"type": "ROW",
"id": "ROW-957ba55b",
"children": [
"CHART-2673431d",
"CHART-85265a60"
]
},
"ROW-a7b16cb5": {
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"type": "ROW",
"id": "ROW-a7b16cb5",
"children": [
"CHART-3afd9d70",
"CHART-2ee7fa5e"
]
},
"ROW-af041bdd": {
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"type": "ROW",
"id": "ROW-af041bdd",
"children": [
"CHART-2b87513c"
]
},
"DASHBOARD_VERSION_KEY": "v2"
}"""
def load_deck_dash() -> None: # pylint: disable=too-many-statements
print("Loading deck.gl dashboard")
slices = []
table = get_table_connector_registry()
tbl = db.session.query(table).filter_by(table_name="long_lat").first()
slice_data = {
"spatial": {"type": "latlong", "lonCol": "LON", "latCol": "LAT"},
"color_picker": COLOR_RED,
"datasource": "5__table",
"granularity_sqla": None,
"groupby": [],
"mapbox_style": "mapbox://styles/mapbox/light-v9",
"multiplier": 10,
"point_radius_fixed": {"type": "metric", "value": "count"},
"point_unit": "square_m",
"min_radius": 1,
"max_radius": 250,
"row_limit": 5000,
"time_range": " : ",
"time_range_endpoints": ["inclusive", "exclusive"],
"size": "count",
"time_grain_sqla": None,
"viewport": {
"bearing": -4.952916738791771,
"latitude": 37.78926922909199,
"longitude": -122.42613341901688,
"pitch": 4.750411100577438,
"zoom": 12.729132798697304,
},
"viz_type": "deck_scatter",
}
print("Creating Scatterplot slice")
slc = Slice(
slice_name="Scatterplot",
viz_type="deck_scatter",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
slice_data = {
"point_unit": "square_m",
"row_limit": 5000,
"spatial": {"type": "latlong", "lonCol": "LON", "latCol": "LAT"},
"mapbox_style": "mapbox://styles/mapbox/dark-v9",
"granularity_sqla": None,
"size": "count",
"viz_type": "deck_screengrid",
"time_range": "No filter",
"point_radius": "Auto",
"color_picker": {"a": 1, "r": 14, "b": 0, "g": 255},
"grid_size": 20,
"viewport": {
"zoom": 14.161641703941438,
"longitude": -122.41827069521386,
"bearing": -4.952916738791771,
"latitude": 37.76024135844065,
"pitch": 4.750411100577438,
},
"point_radius_fixed": {"type": "fix", "value": 2000},
"datasource": "5__table",
"time_grain_sqla": None,
"groupby": [],
}
print("Creating Screen Grid slice")
slc = Slice(
slice_name="Screen grid",
viz_type="deck_screengrid",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
slice_data = {
"spatial": {"type": "latlong", "lonCol": "LON", "latCol": "LAT"},
"row_limit": 5000,
"mapbox_style": "mapbox://styles/mapbox/streets-v9",
"granularity_sqla": None,
"size": "count",
"viz_type": "deck_hex",
"time_range": "No filter",
"point_radius_unit": "Pixels",
"point_radius": "Auto",
"color_picker": {"a": 1, "r": 14, "b": 0, "g": 255},
"grid_size": 40,
"extruded": True,
"viewport": {
"latitude": 37.789795085160335,
"pitch": 54.08961642447763,
"zoom": 13.835465702403654,
"longitude": -122.40632230075536,
"bearing": -2.3984797349335167,
},
"point_radius_fixed": {"type": "fix", "value": 2000},
"datasource": "5__table",
"time_grain_sqla": None,
"groupby": [],
}
print("Creating Hex slice")
slc = Slice(
slice_name="Hexagons",
viz_type="deck_hex",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
slice_data = {
"autozoom": False,
"spatial": {"type": "latlong", "lonCol": "LON", "latCol": "LAT"},
"row_limit": 5000,
"mapbox_style": "mapbox://styles/mapbox/satellite-streets-v9",
"granularity_sqla": None,
"size": "count",
"viz_type": "deck_grid",
"point_radius_unit": "Pixels",
"point_radius": "Auto",
"time_range": "No filter",
"color_picker": {"a": 1, "r": 14, "b": 0, "g": 255},
"grid_size": 120,
"extruded": True,
"viewport": {
"longitude": -122.42066918995666,
"bearing": 155.80099696026355,
"zoom": 12.699690845482069,
"latitude": 37.7942314882596,
"pitch": 53.470800300695146,
},
"point_radius_fixed": {"type": "fix", "value": 2000},
"datasource": "5__table",
"time_grain_sqla": None,
"groupby": [],
}
print("Creating Grid slice")
slc = Slice(
slice_name="Grid",
viz_type="deck_grid",
datasource_type="table",
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
polygon_tbl = (
db.session.query(table).filter_by(table_name="sf_population_polygons").first()
)
slice_data = {
"datasource": "11__table",
"viz_type": "deck_polygon",
"slice_id": 41,
"granularity_sqla": None,
"time_grain_sqla": None,
"time_range": " : ",
"line_column": "contour",
"metric": {
"aggregate": "SUM",
"column": {
"column_name": "population",
"description": None,
"expression": None,
"filterable": True,
"groupby": True,
"id": 1332,
"is_dttm": False,
"optionName": "_col_population",
"python_date_format": None,
"type": "BIGINT",
"verbose_name": None,
},
"expressionType": "SIMPLE",
"hasCustomLabel": True,
"label": "Population",
"optionName": "metric_t2v4qbfiz1_w6qgpx4h2p",
"sqlExpression": None,
},
"line_type": "json",
"linear_color_scheme": "oranges",
"mapbox_style": "mapbox://styles/mapbox/light-v9",
"viewport": {
"longitude": -122.43388541747726,
"latitude": 37.752020331384834,
"zoom": 11.133995608594631,
"bearing": 37.89506450385642,
"pitch": 60,
"width": 667,
"height": 906,
"altitude": 1.5,
"maxZoom": 20,
"minZoom": 0,
"maxPitch": 60,
"minPitch": 0,
"maxLatitude": 85.05113,
"minLatitude": -85.05113,
},
"reverse_long_lat": False,
"fill_color_picker": {"r": 3, "g": 65, "b": 73, "a": 1},
"stroke_color_picker": {"r": 0, "g": 122, "b": 135, "a": 1},
"filled": True,
"stroked": False,
"extruded": True,
"multiplier": 0.1,
"point_radius_fixed": {
"type": "metric",
"value": {
"aggregate": None,
"column": None,
"expressionType": "SQL",
"hasCustomLabel": None,
"label": "Density",
"optionName": "metric_c5rvwrzoo86_293h6yrv2ic",
"sqlExpression": "SUM(population)/SUM(area)",
},
},
"js_columns": [],
"js_data_mutator": "",
"js_tooltip": "",
"js_onclick_href": "",
"legend_format": ".1s",
"legend_position": "tr",
}
print("Creating Polygon slice")
slc = Slice(
slice_name="Polygons",
viz_type="deck_polygon",
datasource_type="table",
datasource_id=polygon_tbl.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
slice_data = {
"datasource": "10__table",
"viz_type": "deck_arc",
"slice_id": 42,
"granularity_sqla": None,
"time_grain_sqla": None,
"time_range": " : ",
"start_spatial": {
"type": "latlong",
"latCol": "LATITUDE",
"lonCol": "LONGITUDE",
},
"end_spatial": {
"type": "latlong",
"latCol": "LATITUDE_DEST",
"lonCol": "LONGITUDE_DEST",
},
"row_limit": 5000,
"mapbox_style": "mapbox://styles/mapbox/light-v9",
"viewport": {
"altitude": 1.5,
"bearing": 8.546256357301871,
"height": 642,
"latitude": 44.596651438714254,
"longitude": -91.84340711201104,
"maxLatitude": 85.05113,
"maxPitch": 60,
"maxZoom": 20,
"minLatitude": -85.05113,
"minPitch": 0,
"minZoom": 0,
"pitch": 60,
"width": 997,
"zoom": 2.929837070560775,
},
"color_picker": {"r": 0, "g": 122, "b": 135, "a": 1},
"stroke_width": 1,
}
print("Creating Arc slice")
slc = Slice(
slice_name="Arcs",
viz_type="deck_arc",
datasource_type="table",
datasource_id=db.session.query(table)
.filter_by(table_name="flights")
.first()
.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
slice_data = {
"datasource": "12__table",
"slice_id": 43,
"viz_type": "deck_path",
"time_grain_sqla": None,
"time_range": " : ",
"line_column": "path_json",
"line_type": "json",
"row_limit": 5000,
"mapbox_style": "mapbox://styles/mapbox/light-v9",
"viewport": {
"longitude": -122.18885402582598,
"latitude": 37.73671752604488,
"zoom": 9.51847667620428,
"bearing": 0,
"pitch": 0,
"width": 669,
"height": 1094,
"altitude": 1.5,
"maxZoom": 20,
"minZoom": 0,
"maxPitch": 60,
"minPitch": 0,
"maxLatitude": 85.05113,
"minLatitude": -85.05113,
},
"color_picker": {"r": 0, "g": 122, "b": 135, "a": 1},
"line_width": 150,
"reverse_long_lat": False,
"js_columns": ["color"],
"js_data_mutator": "data => data.map(d => ({\n"
" ...d,\n"
" color: colors.hexToRGB(d.extraProps.color)\n"
"}));",
"js_tooltip": "",
"js_onclick_href": "",
}
print("Creating Path slice")
slc = Slice(
slice_name="Path",
viz_type="deck_path",
datasource_type="table",
datasource_id=db.session.query(table)
.filter_by(table_name="bart_lines")
.first()
.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
slug = "deck"
print("Creating a dashboard")
title = "deck.gl Demo"
dash = db.session.query(Dashboard).filter_by(slug=slug).first()
if not dash:
dash = Dashboard()
dash.published = True
js = POSITION_JSON
pos = json.loads(js)
update_slice_ids(pos, slices)
dash.position_json = json.dumps(pos, indent=4)
dash.dashboard_title = title
dash.slug = slug
dash.slices = slices
db.session.merge(dash)
    db.session.commit()
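The dashboard block above follows a get-or-create-then-upsert pattern: look the dashboard up by slug, build a fresh one only if it is missing, then merge it back so a rerun updates rather than duplicates. A minimal standalone sketch of that pattern, with an in-memory dict standing in for `db.session` (all names here are hypothetical, not Superset APIs):

```python
# Standalone sketch of the get-or-create-then-upsert pattern used for the
# dashboard above; the "session" dict stands in for the database session.
session = {}  # slug -> dashboard record

def get_or_create_dashboard(slug: str, title: str) -> dict:
    dash = session.get(slug)
    if not dash:
        # Only build a new record when no row exists for this slug.
        dash = {"slug": slug, "published": False}
    dash["dashboard_title"] = title
    dash["published"] = True
    session[slug] = dash  # merge() equivalent: upsert keyed by slug
    return dash

d1 = get_or_create_dashboard("deck", "deck.gl Demo")
d2 = get_or_create_dashboard("deck", "deck.gl Demo")  # rerun reuses the record
```

Running the loader twice therefore leaves a single dashboard row, which is what makes the example data scripts safe to re-execute.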
"""A set of constants and methods to manage permissions and security"""
import logging
import re
import time
from collections import defaultdict
from typing import (
Any,
Callable,
cast,
Dict,
List,
NamedTuple,
Optional,
Set,
TYPE_CHECKING,
Union,
)
import jwt
from flask import current_app, Flask, g, Request
from flask_appbuilder import Model
from flask_appbuilder.models.sqla.interface import SQLAInterface
from flask_appbuilder.security.sqla.manager import SecurityManager
from flask_appbuilder.security.sqla.models import (
assoc_permissionview_role,
assoc_user_role,
PermissionView,
Role,
User,
)
from flask_appbuilder.security.views import (
PermissionModelView,
PermissionViewModelView,
RoleModelView,
UserModelView,
ViewMenuModelView,
)
from flask_appbuilder.widgets import ListWidget
from flask_login import AnonymousUserMixin, LoginManager
from sqlalchemy import and_, or_
from sqlalchemy.engine.base import Connection
from sqlalchemy.orm import Session
from sqlalchemy.orm.mapper import Mapper
from sqlalchemy.orm.query import Query as SqlaQuery
from superset import sql_parse
from superset.connectors.connector_registry import ConnectorRegistry
from superset.constants import RouteMethod
from superset.errors import ErrorLevel, SupersetError, SupersetErrorType
from superset.exceptions import SupersetSecurityException
from superset.security.guest_token import (
GuestToken,
GuestTokenResources,
GuestTokenResourceType,
GuestTokenRlsRule,
GuestTokenUser,
GuestUser,
)
from superset.utils.core import DatasourceName, RowLevelSecurityFilterType
if TYPE_CHECKING:
from superset.common.query_context import QueryContext
from superset.connectors.base.models import BaseDatasource
from superset.connectors.druid.models import DruidCluster
from superset.models.core import Database
from superset.models.dashboard import Dashboard
from superset.models.sql_lab import Query
from superset.sql_parse import Table
from superset.viz import BaseViz
logger = logging.getLogger(__name__)
class DatabaseAndSchema(NamedTuple):
database: str
schema: str
class SupersetSecurityListWidget(ListWidget): # pylint: disable=too-few-public-methods
"""
Redeclaring to avoid circular imports
"""
template = "superset/fab_overrides/list.html"
class SupersetRoleListWidget(ListWidget): # pylint: disable=too-few-public-methods
"""
Role model view from FAB already uses a custom list widget override
So we override the override
"""
template = "superset/fab_overrides/list_role.html"
def __init__(self, **kwargs: Any) -> None:
kwargs["appbuilder"] = current_app.appbuilder
super().__init__(**kwargs)
UserModelView.list_widget = SupersetSecurityListWidget
RoleModelView.list_widget = SupersetRoleListWidget
PermissionViewModelView.list_widget = SupersetSecurityListWidget
PermissionModelView.list_widget = SupersetSecurityListWidget
# Limiting routes on FAB model views
UserModelView.include_route_methods = RouteMethod.CRUD_SET | {
RouteMethod.ACTION,
RouteMethod.API_READ,
RouteMethod.ACTION_POST,
"userinfo",
}
RoleModelView.include_route_methods = RouteMethod.CRUD_SET
PermissionViewModelView.include_route_methods = {RouteMethod.LIST}
PermissionModelView.include_route_methods = {RouteMethod.LIST}
ViewMenuModelView.include_route_methods = {RouteMethod.LIST}
RoleModelView.list_columns = ["name"]
RoleModelView.edit_columns = ["name", "permissions", "user"]
RoleModelView.related_views = []
class SupersetSecurityManager( # pylint: disable=too-many-public-methods
SecurityManager
):
userstatschartview = None
READ_ONLY_MODEL_VIEWS = {"Database", "DruidClusterModelView", "DynamicPlugin"}
USER_MODEL_VIEWS = {
"UserDBModelView",
"UserLDAPModelView",
"UserOAuthModelView",
"UserOIDModelView",
"UserRemoteUserModelView",
}
GAMMA_READ_ONLY_MODEL_VIEWS = {
"Dataset",
"DruidColumnInlineView",
"DruidDatasourceModelView",
"DruidMetricInlineView",
"Datasource",
} | READ_ONLY_MODEL_VIEWS
ADMIN_ONLY_VIEW_MENUS = {
"AccessRequestsModelView",
"SQL Lab",
"Refresh Druid Metadata",
"ResetPasswordView",
"RoleModelView",
"Log",
"Security",
"Row Level Security",
"Row Level Security Filters",
"RowLevelSecurityFiltersModelView",
} | USER_MODEL_VIEWS
ALPHA_ONLY_VIEW_MENUS = {
"Manage",
"CSS Templates",
"Queries",
"Import dashboards",
"Upload a CSV",
}
ADMIN_ONLY_PERMISSIONS = {
"can_sql_json", # TODO: move can_sql_json to sql_lab role
"can_override_role_permissions",
"can_sync_druid_source",
"can_override_role_permissions",
"can_approve",
"can_update_role",
"all_query_access",
"can_grant_guest_token",
}
READ_ONLY_PERMISSION = {
"can_show",
"can_list",
"can_get",
"can_external_metadata",
"can_external_metadata_by_name",
"can_read",
}
ALPHA_ONLY_PERMISSIONS = {
"muldelete",
"all_database_access",
"all_datasource_access",
}
OBJECT_SPEC_PERMISSIONS = {
"database_access",
"schema_access",
"datasource_access",
"metric_access",
}
ACCESSIBLE_PERMS = {"can_userinfo", "resetmypassword"}
SQLLAB_PERMISSION_VIEWS = {
("can_csv", "Superset"),
("can_read", "SavedQuery"),
("can_read", "Database"),
("can_sql_json", "Superset"),
("can_sqllab_viz", "Superset"),
("can_sqllab_table_viz", "Superset"),
("can_sqllab", "Superset"),
("menu_access", "SQL Lab"),
("menu_access", "SQL Editor"),
("menu_access", "Saved Queries"),
("menu_access", "Query Search"),
}
data_access_permissions = (
"database_access",
"schema_access",
"datasource_access",
"all_datasource_access",
"all_database_access",
"all_query_access",
)
guest_user_cls = GuestUser
def create_login_manager(self, app: Flask) -> LoginManager:
lm = super().create_login_manager(app)
lm.request_loader(self.request_loader)
return lm
def request_loader(self, request: Request) -> Optional[User]:
# pylint: disable=import-outside-toplevel
from superset.extensions import feature_flag_manager
if feature_flag_manager.is_feature_enabled("EMBEDDED_SUPERSET"):
return self.get_guest_user_from_request(request)
return None
def get_schema_perm( # pylint: disable=no-self-use
self, database: Union["Database", str], schema: Optional[str] = None
) -> Optional[str]:
"""
Return the database specific schema permission.
:param database: The Superset database or database name
:param schema: The Superset schema name
:return: The database specific schema permission
"""
if schema:
return f"[{database}].[{schema}]"
return None
def unpack_database_and_schema( # pylint: disable=no-self-use
self, schema_permission: str
) -> DatabaseAndSchema:
        # schema_permission is of the form "[database_name].[schema_name]"
        database_part, schema_part = schema_permission.split(".", 1)
        return DatabaseAndSchema(database_part[1:-1], schema_part[1:-1])
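The two methods above are inverses of each other around the `"[database].[schema]"` permission-string format. A self-contained sketch (plain functions, not the Superset classes) showing the round trip:

```python
# Standalone illustration of the "[database].[schema]" permission-string
# format built by get_schema_perm and parsed by unpack_database_and_schema.
from typing import NamedTuple, Optional

class PermParts(NamedTuple):
    database: str
    schema: str

def get_schema_perm(database: str, schema: Optional[str] = None) -> Optional[str]:
    # Only emit a permission string when a schema is given.
    if schema:
        return f"[{database}].[{schema}]"
    return None

def unpack_database_and_schema(schema_permission: str) -> PermParts:
    # Inverse operation: split on the separating dot, strip the brackets.
    database_part, schema_part = schema_permission.split(".", 1)
    return PermParts(database_part[1:-1], schema_part[1:-1])

perm = get_schema_perm("examples", "public")   # "[examples].[public]"
unpacked = unpack_database_and_schema(perm)    # ("examples", "public")
```

Note the format assumes the database name itself contains no dot; splitting with a maxsplit of 1 at least keeps dotted schema names intact.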
def can_access(self, permission_name: str, view_name: str) -> bool:
"""
Return True if the user can access the FAB permission/view, False otherwise.
Note this method adds protection from has_access failing from missing
permission/view entries.
:param permission_name: The FAB permission name
:param view_name: The FAB view-menu name
:returns: Whether the user can access the FAB permission/view
"""
user = g.user
if user.is_anonymous:
return self.is_item_public(permission_name, view_name)
return self._has_view_access(user, permission_name, view_name)
def can_access_all_queries(self) -> bool:
"""
Return True if the user can access all SQL Lab queries, False otherwise.
:returns: Whether the user can access all queries
"""
return self.can_access("all_query_access", "all_query_access")
def can_access_all_datasources(self) -> bool:
"""
Return True if the user can fully access all the Superset datasources, False
otherwise.
:returns: Whether the user can fully access all Superset datasources
"""
return self.can_access("all_datasource_access", "all_datasource_access")
def can_access_all_databases(self) -> bool:
"""
Return True if the user can fully access all the Superset databases, False
otherwise.
:returns: Whether the user can fully access all Superset databases
"""
return self.can_access("all_database_access", "all_database_access")
def can_access_database(self, database: Union["Database", "DruidCluster"]) -> bool:
"""
Return True if the user can fully access the Superset database, False otherwise.
Note for Druid the database is akin to the Druid cluster.
:param database: The Superset database
:returns: Whether the user can fully access the Superset database
"""
return (
self.can_access_all_datasources()
or self.can_access_all_databases()
or self.can_access("database_access", database.perm) # type: ignore
)
def can_access_schema(self, datasource: "BaseDatasource") -> bool:
"""
Return True if the user can fully access the schema associated with the Superset
datasource, False otherwise.
Note for Druid datasources the database and schema are akin to the Druid cluster
and datasource name prefix respectively, i.e., [schema.]datasource.
:param datasource: The Superset datasource
:returns: Whether the user can fully access the datasource's schema
"""
return (
self.can_access_all_datasources()
or self.can_access_database(datasource.database)
or self.can_access("schema_access", datasource.schema_perm or "")
)
def can_access_datasource(self, datasource: "BaseDatasource") -> bool:
"""
        Return True if the user can fully access the Superset datasource, False
otherwise.
:param datasource: The Superset datasource
:returns: Whether the user can fully access the Superset datasource
"""
try:
self.raise_for_access(datasource=datasource)
except SupersetSecurityException:
return False
return True
@staticmethod
def get_datasource_access_error_msg(datasource: "BaseDatasource") -> str:
"""
Return the error message for the denied Superset datasource.
:param datasource: The denied Superset datasource
:returns: The error message
"""
return f"""This endpoint requires the datasource {datasource.name}, database or
`all_datasource_access` permission"""
@staticmethod
def get_datasource_access_link( # pylint: disable=unused-argument
datasource: "BaseDatasource",
) -> Optional[str]:
"""
Return the link for the denied Superset datasource.
:param datasource: The denied Superset datasource
:returns: The access URL
"""
return current_app.config.get("PERMISSION_INSTRUCTIONS_LINK")
def get_datasource_access_error_object( # pylint: disable=invalid-name
self, datasource: "BaseDatasource"
) -> SupersetError:
"""
Return the error object for the denied Superset datasource.
:param datasource: The denied Superset datasource
:returns: The error object
"""
return SupersetError(
error_type=SupersetErrorType.DATASOURCE_SECURITY_ACCESS_ERROR,
message=self.get_datasource_access_error_msg(datasource),
level=ErrorLevel.ERROR,
extra={
"link": self.get_datasource_access_link(datasource),
"datasource": datasource.name,
},
)
def get_table_access_error_msg( # pylint: disable=no-self-use
self, tables: Set["Table"]
) -> str:
"""
Return the error message for the denied SQL tables.
:param tables: The set of denied SQL tables
:returns: The error message
"""
quoted_tables = [f"`{table}`" for table in tables]
return f"""You need access to the following tables: {", ".join(quoted_tables)},
`all_database_access` or `all_datasource_access` permission"""
def get_table_access_error_object(self, tables: Set["Table"]) -> SupersetError:
"""
Return the error object for the denied SQL tables.
:param tables: The set of denied SQL tables
:returns: The error object
"""
return SupersetError(
error_type=SupersetErrorType.TABLE_SECURITY_ACCESS_ERROR,
message=self.get_table_access_error_msg(tables),
level=ErrorLevel.ERROR,
extra={
"link": self.get_table_access_link(tables),
"tables": [str(table) for table in tables],
},
)
def get_table_access_link( # pylint: disable=unused-argument,no-self-use
self, tables: Set["Table"]
) -> Optional[str]:
"""
Return the access link for the denied SQL tables.
:param tables: The set of denied SQL tables
:returns: The access URL
"""
return current_app.config.get("PERMISSION_INSTRUCTIONS_LINK")
def get_user_datasources(self) -> List["BaseDatasource"]:
"""
Collect datasources which the user has explicit permissions to.
:returns: The list of datasources
"""
user_perms = self.user_view_menu_names("datasource_access")
schema_perms = self.user_view_menu_names("schema_access")
user_datasources = set()
for datasource_class in ConnectorRegistry.sources.values():
user_datasources.update(
self.get_session.query(datasource_class)
.filter(
or_(
datasource_class.perm.in_(user_perms),
datasource_class.schema_perm.in_(schema_perms),
)
)
.all()
)
# group all datasources by database
all_datasources = ConnectorRegistry.get_all_datasources(self.get_session)
datasources_by_database: Dict["Database", Set["BaseDatasource"]] = defaultdict(
set
)
for datasource in all_datasources:
datasources_by_database[datasource.database].add(datasource)
# add datasources with implicit permission (eg, database access)
for database, datasources in datasources_by_database.items():
if self.can_access_database(database):
user_datasources.update(datasources)
return list(user_datasources)
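The grouping step inside `get_user_datasources` buckets every datasource under its database so that one database-level grant can unlock the whole bucket. A hedged sketch of that step with plain tuples in place of ORM objects:

```python
# Sketch of the datasources_by_database grouping above, using
# (database, table) tuples instead of Superset ORM models.
from collections import defaultdict

datasources = [
    ("examples", "birth_names"),
    ("examples", "flights"),
    ("analytics", "events"),
]

by_database = defaultdict(set)
for database, name in datasources:
    by_database[database].add(name)

# A user with access to the "examples" database implicitly gets every
# datasource in its bucket.
examples_tables = by_database["examples"]
```

Using `defaultdict(set)` keeps the loop branch-free and deduplicates datasources that appear more than once.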
def can_access_table(self, database: "Database", table: "Table") -> bool:
"""
Return True if the user can access the SQL table, False otherwise.
:param database: The SQL database
:param table: The SQL table
:returns: Whether the user can access the SQL table
"""
try:
self.raise_for_access(database=database, table=table)
except SupersetSecurityException:
return False
return True
def user_view_menu_names(self, permission_name: str) -> Set[str]:
base_query = (
self.get_session.query(self.viewmenu_model.name)
.join(self.permissionview_model)
.join(self.permission_model)
.join(assoc_permissionview_role)
.join(self.role_model)
)
if not g.user.is_anonymous:
# filter by user id
view_menu_names = (
base_query.join(assoc_user_role)
.join(self.user_model)
.filter(self.user_model.id == g.user.get_id())
.filter(self.permission_model.name == permission_name)
).all()
return {s.name for s in view_menu_names}
# Properly treat anonymous user
public_role = self.get_public_role()
if public_role:
# filter by public role
view_menu_names = (
base_query.filter(self.role_model.id == public_role.id).filter(
self.permission_model.name == permission_name
)
).all()
return {s.name for s in view_menu_names}
return set()
def get_schemas_accessible_by_user(
self, database: "Database", schemas: List[str], hierarchical: bool = True
) -> List[str]:
"""
Return the list of SQL schemas accessible by the user.
:param database: The SQL database
:param schemas: The list of eligible SQL schemas
:param hierarchical: Whether to check using the hierarchical permission logic
:returns: The list of accessible SQL schemas
"""
# pylint: disable=import-outside-toplevel
from superset.connectors.sqla.models import SqlaTable
if hierarchical and self.can_access_database(database):
return schemas
# schema_access
accessible_schemas = {
self.unpack_database_and_schema(s).schema
for s in self.user_view_menu_names("schema_access")
if s.startswith(f"[{database}].")
}
# datasource_access
perms = self.user_view_menu_names("datasource_access")
if perms:
tables = (
self.get_session.query(SqlaTable.schema)
.filter(SqlaTable.database_id == database.id)
.filter(SqlaTable.schema.isnot(None))
.filter(SqlaTable.schema != "")
.filter(or_(SqlaTable.perm.in_(perms)))
.distinct()
)
accessible_schemas.update([table.schema for table in tables])
return [s for s in schemas if s in accessible_schemas]
def get_datasources_accessible_by_user( # pylint: disable=invalid-name
self,
database: "Database",
datasource_names: List[DatasourceName],
schema: Optional[str] = None,
) -> List[DatasourceName]:
"""
Return the list of SQL tables accessible by the user.
:param database: The SQL database
:param datasource_names: The list of eligible SQL tables w/ schema
:param schema: The fallback SQL schema if not present in the table name
:returns: The list of accessible SQL tables w/ schema
"""
if self.can_access_database(database):
return datasource_names
if schema:
schema_perm = self.get_schema_perm(database, schema)
if schema_perm and self.can_access("schema_access", schema_perm):
return datasource_names
user_perms = self.user_view_menu_names("datasource_access")
schema_perms = self.user_view_menu_names("schema_access")
user_datasources = ConnectorRegistry.query_datasources_by_permissions(
self.get_session, database, user_perms, schema_perms
)
if schema:
names = {d.table_name for d in user_datasources if d.schema == schema}
return [d for d in datasource_names if d.table in names]
full_names = {d.full_name for d in user_datasources}
return [d for d in datasource_names if f"[{database}].[{d}]" in full_names]
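The final filter above keeps only those candidate names whose `"[database].[name]"` form appears in the set the user was granted. A minimal standalone sketch of that comprehension (sample data is illustrative, not from Superset):

```python
# Minimal sketch of the name-filtering step above: keep only the table
# names whose "[database].[name]" form appears in the user's grants.
database = "examples"
datasource_names = ["birth_names", "flights", "secret_table"]
accessible_full_names = {"[examples].[birth_names]", "[examples].[flights]"}

visible = [
    name for name in datasource_names
    if f"[{database}].[{name}]" in accessible_full_names
]
# "secret_table" is dropped because no matching grant exists.
```

Building the grants as a set keeps the membership test O(1) per candidate name.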
def merge_perm(self, permission_name: str, view_menu_name: str) -> None:
"""
Add the FAB permission/view-menu.
:param permission_name: The FAB permission name
        :param view_menu_name: The FAB view-menu name
:see: SecurityManager.add_permission_view_menu
"""
        logger.warning(
            "The 'merge_perm' method is deprecated; use 'add_permission_view_menu' instead"
        )
self.add_permission_view_menu(permission_name, view_menu_name)
def _is_user_defined_permission(self, perm: Model) -> bool:
"""
Return True if the FAB permission is user defined, False otherwise.
:param perm: The FAB permission
:returns: Whether the FAB permission is user defined
"""
return perm.permission.name in self.OBJECT_SPEC_PERMISSIONS
def create_custom_permissions(self) -> None:
"""
Create custom FAB permissions.
"""
self.add_permission_view_menu("all_datasource_access", "all_datasource_access")
self.add_permission_view_menu("all_database_access", "all_database_access")
self.add_permission_view_menu("all_query_access", "all_query_access")
self.add_permission_view_menu("can_share_dashboard", "Superset")
self.add_permission_view_menu("can_share_chart", "Superset")
def create_missing_perms(self) -> None:
"""
        Create missing FAB permissions for datasources, schemas and metrics.
"""
# pylint: disable=import-outside-toplevel
from superset.models import core as models
logger.info("Fetching a set of all perms to lookup which ones are missing")
all_pvs = set()
for pv in self.get_session.query(self.permissionview_model).all():
if pv.permission and pv.view_menu:
all_pvs.add((pv.permission.name, pv.view_menu.name))
        def merge_pv(perm_name: str, view_menu_name: str) -> None:
            """Create the permission/view-menu pair only if it doesn't already exist"""
            if perm_name and view_menu_name and (perm_name, view_menu_name) not in all_pvs:
                self.add_permission_view_menu(perm_name, view_menu_name)
logger.info("Creating missing datasource permissions.")
datasources = ConnectorRegistry.get_all_datasources(self.get_session)
for datasource in datasources:
merge_pv("datasource_access", datasource.get_perm())
merge_pv("schema_access", datasource.get_schema_perm())
logger.info("Creating missing database permissions.")
databases = self.get_session.query(models.Database).all()
for database in databases:
merge_pv("database_access", database.perm)
def clean_perms(self) -> None:
"""
Clean up the FAB faulty permissions.
"""
logger.info("Cleaning faulty perms")
sesh = self.get_session
pvms = sesh.query(PermissionView).filter(
or_(
PermissionView.permission # pylint: disable=singleton-comparison
== None,
PermissionView.view_menu # pylint: disable=singleton-comparison
== None,
)
)
deleted_count = pvms.delete()
sesh.commit()
if deleted_count:
logger.info("Deleted %i faulty permissions", deleted_count)
def sync_role_definitions(self) -> None:
"""
Initialize the Superset application with security roles and such.
"""
logger.info("Syncing role definition")
self.create_custom_permissions()
# Creating default roles
self.set_role("Admin", self._is_admin_pvm)
self.set_role("Alpha", self._is_alpha_pvm)
self.set_role("Gamma", self._is_gamma_pvm)
self.set_role("granter", self._is_granter_pvm)
self.set_role("sql_lab", self._is_sql_lab_pvm)
# Configure public role
if current_app.config["PUBLIC_ROLE_LIKE"]:
self.copy_role(
current_app.config["PUBLIC_ROLE_LIKE"],
self.auth_role_public,
merge=True,
)
if current_app.config.get("PUBLIC_ROLE_LIKE_GAMMA", False):
logger.warning(
"The config `PUBLIC_ROLE_LIKE_GAMMA` is deprecated and will be removed "
"in Superset 1.0. Please use `PUBLIC_ROLE_LIKE` instead."
)
self.copy_role("Gamma", self.auth_role_public, merge=True)
self.create_missing_perms()
# commit role and view menu updates
self.get_session.commit()
self.clean_perms()
def _get_pvms_from_builtin_role(self, role_name: str) -> List[PermissionView]:
"""
        Get a list of PermissionView permissions inferred from a builtin role
        definition
"""
role_from_permissions_names = self.builtin_roles.get(role_name, [])
all_pvms = self.get_session.query(PermissionView).all()
role_from_permissions = []
for pvm_regex in role_from_permissions_names:
view_name_regex = pvm_regex[0]
permission_name_regex = pvm_regex[1]
for pvm in all_pvms:
if re.match(view_name_regex, pvm.view_menu.name) and re.match(
permission_name_regex, pvm.permission.name
):
if pvm not in role_from_permissions:
role_from_permissions.append(pvm)
return role_from_permissions
def find_roles_by_id(self, role_ids: List[int]) -> List[Role]:
"""
        Find a list of Role models by a list of ids
"""
query = self.get_session.query(Role).filter(Role.id.in_(role_ids))
return query.all()
def copy_role(
self, role_from_name: str, role_to_name: str, merge: bool = True
) -> None:
"""
        Copy permissions from one role to another.
        Note: supports regex-defined builtin roles
        :param role_from_name: The FAB role name from which the permissions are taken
        :param role_to_name: The FAB role name to which the permissions are copied
        :param merge: If True, keep data access permissions
            that already exist on the target role
"""
logger.info("Copy/Merge %s to %s", role_from_name, role_to_name)
# If it's a builtin role extract permissions from it
if role_from_name in self.builtin_roles:
role_from_permissions = self._get_pvms_from_builtin_role(role_from_name)
else:
role_from_permissions = list(self.find_role(role_from_name).permissions)
role_to = self.add_role(role_to_name)
# If merge, recover existing data access permissions
if merge:
for permission_view in role_to.permissions:
if (
permission_view not in role_from_permissions
and permission_view.permission.name in self.data_access_permissions
):
role_from_permissions.append(permission_view)
role_to.permissions = role_from_permissions
self.get_session.merge(role_to)
self.get_session.commit()
def set_role(
self, role_name: str, pvm_check: Callable[[PermissionView], bool]
) -> None:
"""
Set the FAB permission/views for the role.
:param role_name: The FAB role name
:param pvm_check: The FAB permission/view check
"""
logger.info("Syncing %s perms", role_name)
pvms = self.get_session.query(PermissionView).all()
pvms = [p for p in pvms if p.permission and p.view_menu]
role = self.add_role(role_name)
role_pvms = [
permission_view for permission_view in pvms if pvm_check(permission_view)
]
role.permissions = role_pvms
self.get_session.merge(role)
self.get_session.commit()
def _is_admin_only(self, pvm: PermissionView) -> bool:
"""
Return True if the FAB permission/view is accessible to only Admin users,
False otherwise.
        Note that read-only operations on read-only model views are allowed only for
        Admin users.
:param pvm: The FAB permission/view
:returns: Whether the FAB object is accessible to only Admin users
"""
if (
pvm.view_menu.name in self.READ_ONLY_MODEL_VIEWS
and pvm.permission.name not in self.READ_ONLY_PERMISSION
):
return True
return (
pvm.view_menu.name in self.ADMIN_ONLY_VIEW_MENUS
or pvm.permission.name in self.ADMIN_ONLY_PERMISSIONS
)
def _is_alpha_only(self, pvm: PermissionView) -> bool:
"""
Return True if the FAB permission/view is accessible to only Alpha users,
False otherwise.
:param pvm: The FAB permission/view
:returns: Whether the FAB object is accessible to only Alpha users
"""
if (
pvm.view_menu.name in self.GAMMA_READ_ONLY_MODEL_VIEWS
and pvm.permission.name not in self.READ_ONLY_PERMISSION
):
return True
return (
pvm.view_menu.name in self.ALPHA_ONLY_VIEW_MENUS
or pvm.permission.name in self.ALPHA_ONLY_PERMISSIONS
)
def _is_accessible_to_all(self, pvm: PermissionView) -> bool:
"""
Return True if the FAB permission/view is accessible to all, False
otherwise.
:param pvm: The FAB permission/view
:returns: Whether the FAB object is accessible to all users
"""
return pvm.permission.name in self.ACCESSIBLE_PERMS
def _is_admin_pvm(self, pvm: PermissionView) -> bool:
"""
Return True if the FAB permission/view is Admin user related, False
otherwise.
:param pvm: The FAB permission/view
:returns: Whether the FAB object is Admin related
"""
return not self._is_user_defined_permission(pvm)
def _is_alpha_pvm(self, pvm: PermissionView) -> bool:
"""
Return True if the FAB permission/view is Alpha user related, False
otherwise.
:param pvm: The FAB permission/view
:returns: Whether the FAB object is Alpha related
"""
return not (
self._is_user_defined_permission(pvm) or self._is_admin_only(pvm)
) or self._is_accessible_to_all(pvm)
def _is_gamma_pvm(self, pvm: PermissionView) -> bool:
"""
Return True if the FAB permission/view is Gamma user related, False
otherwise.
:param pvm: The FAB permission/view
:returns: Whether the FAB object is Gamma related
"""
return not (
self._is_user_defined_permission(pvm)
or self._is_admin_only(pvm)
or self._is_alpha_only(pvm)
) or self._is_accessible_to_all(pvm)
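The three predicates above layer into nested role scopes: Gamma excludes everything Alpha excludes plus the Alpha-only set, and Alpha excludes everything Admin-only, so Gamma's permissions form a subset of Alpha's, which form a subset of Admin's. A hedged sketch of that layering with plain permission-name strings in place of `PermissionView` objects (the sample sets are illustrative):

```python
# Sketch of the layered role predicates above, on plain strings.
ADMIN_ONLY = {"can_approve"}
ALPHA_ONLY = {"muldelete"}
USER_DEFINED = {"datasource_access"}
ACCESSIBLE_TO_ALL = {"can_userinfo"}

def is_admin_pvm(perm: str) -> bool:
    # Admin gets everything that is not a user-defined object permission.
    return perm not in USER_DEFINED

def is_alpha_pvm(perm: str) -> bool:
    # Alpha additionally excludes the Admin-only set.
    return (
        perm not in USER_DEFINED and perm not in ADMIN_ONLY
    ) or perm in ACCESSIBLE_TO_ALL

def is_gamma_pvm(perm: str) -> bool:
    # Gamma additionally excludes the Alpha-only set.
    return (
        perm not in USER_DEFINED
        and perm not in ADMIN_ONLY
        and perm not in ALPHA_ONLY
    ) or perm in ACCESSIBLE_TO_ALL
```

With these definitions, `"muldelete"` passes the Alpha check but not the Gamma check, while `"can_userinfo"` passes all three via the accessible-to-all escape hatch.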
def _is_sql_lab_pvm(self, pvm: PermissionView) -> bool:
"""
Return True if the FAB permission/view is SQL Lab related, False
otherwise.
:param pvm: The FAB permission/view
:returns: Whether the FAB object is SQL Lab related
"""
return (pvm.permission.name, pvm.view_menu.name) in self.SQLLAB_PERMISSION_VIEWS
def _is_granter_pvm( # pylint: disable=no-self-use
self, pvm: PermissionView
) -> bool:
"""
Return True if the user can grant the FAB permission/view, False
otherwise.
:param pvm: The FAB permission/view
:returns: Whether the user can grant the FAB permission/view
"""
return pvm.permission.name in {"can_override_role_permissions", "can_approve"}
def set_perm( # pylint: disable=unused-argument
self, mapper: Mapper, connection: Connection, target: "BaseDatasource"
) -> None:
"""
Set the datasource permissions.
:param mapper: The table mapper
:param connection: The DB-API connection
:param target: The mapped instance being persisted
"""
link_table = target.__table__
if target.perm != target.get_perm():
connection.execute(
link_table.update()
.where(link_table.c.id == target.id)
.values(perm=target.get_perm())
)
if (
hasattr(target, "schema_perm")
and target.schema_perm != target.get_schema_perm()
):
connection.execute(
link_table.update()
.where(link_table.c.id == target.id)
.values(schema_perm=target.get_schema_perm())
)
pvm_names = []
if target.__tablename__ in {"dbs", "clusters"}:
pvm_names.append(("database_access", target.get_perm()))
else:
pvm_names.append(("datasource_access", target.get_perm()))
if target.schema:
pvm_names.append(("schema_access", target.get_schema_perm()))
# TODO(bogdan): modify slice permissions as well.
for permission_name, view_menu_name in pvm_names:
permission = self.find_permission(permission_name)
view_menu = self.find_view_menu(view_menu_name)
pv = None
if not permission:
permission_table = (
self.permission_model.__table__ # pylint: disable=no-member
)
connection.execute(
permission_table.insert().values(name=permission_name)
)
permission = self.find_permission(permission_name)
if not view_menu:
view_menu_table = (
self.viewmenu_model.__table__ # pylint: disable=no-member
)
connection.execute(view_menu_table.insert().values(name=view_menu_name))
view_menu = self.find_view_menu(view_menu_name)
if permission and view_menu:
pv = (
self.get_session.query(self.permissionview_model)
.filter_by(permission=permission, view_menu=view_menu)
.first()
)
if not pv and permission and view_menu:
permission_view_table = (
self.permissionview_model.__table__ # pylint: disable=no-member
)
connection.execute(
permission_view_table.insert().values(
permission_id=permission.id, view_menu_id=view_menu.id
)
)
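`set_perm` follows a find-or-create flow: look up the permission and view-menu rows, insert whichever is missing, then link them exactly once. A dict-backed sketch of that flow (hypothetical stand-ins for the FAB tables, not the real SQLAlchemy Core inserts):

```python
# In-memory stand-ins for the FAB permission / view-menu / link tables.
permissions: dict = {}
view_menus: dict = {}
permission_views: set = set()

def ensure_pvm(permission_name: str, view_menu_name: str) -> None:
    """Find-or-create the permission, the view menu, and their link."""
    permission = permissions.setdefault(permission_name, {"name": permission_name})
    view_menu = view_menus.setdefault(view_menu_name, {"name": view_menu_name})
    # The link is a set member, so re-running is idempotent.
    permission_views.add((permission["name"], view_menu["name"]))

ensure_pvm("datasource_access", "[examples].[birth_names](id:1)")
ensure_pvm("datasource_access", "[examples].[birth_names](id:1)")  # no duplicate
ensure_pvm("schema_access", "[examples].[public]")
```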
def raise_for_access(
# pylint: disable=too-many-arguments,too-many-locals
self,
database: Optional["Database"] = None,
datasource: Optional["BaseDatasource"] = None,
query: Optional["Query"] = None,
query_context: Optional["QueryContext"] = None,
table: Optional["Table"] = None,
viz: Optional["BaseViz"] = None,
) -> None:
"""
Raise an exception if the user cannot access the resource.
:param database: The Superset database
:param datasource: The Superset datasource
:param query: The SQL Lab query
:param query_context: The query context
:param table: The Superset table (requires database)
:param viz: The visualization
:raises SupersetSecurityException: If the user cannot access the resource
"""
# pylint: disable=import-outside-toplevel
from superset.connectors.sqla.models import SqlaTable
from superset.extensions import feature_flag_manager
from superset.sql_parse import Table
        if (database and table) or query:
if query:
database = query.database
database = cast("Database", database)
if self.can_access_database(database):
return
if query:
tables = {
Table(table_.table, table_.schema or query.schema)
for table_ in sql_parse.ParsedQuery(query.sql).tables
}
elif table:
tables = {table}
denied = set()
for table_ in tables:
schema_perm = self.get_schema_perm(database, schema=table_.schema)
if not (schema_perm and self.can_access("schema_access", schema_perm)):
datasources = SqlaTable.query_datasources_by_name(
self.get_session, database, table_.table, schema=table_.schema
)
                    # Access to any one of the datasources is sufficient.
for datasource_ in datasources:
if self.can_access("datasource_access", datasource_.perm):
break
else:
denied.add(table_)
if denied:
raise SupersetSecurityException(
self.get_table_access_error_object(denied)
)
if datasource or query_context or viz:
if query_context:
datasource = query_context.datasource
elif viz:
datasource = viz.datasource
assert datasource
should_check_dashboard_access = (
feature_flag_manager.is_feature_enabled("DASHBOARD_RBAC")
or self.is_guest_user()
)
if not (
self.can_access_schema(datasource)
or self.can_access("datasource_access", datasource.perm or "")
or (
should_check_dashboard_access
and self.can_access_based_on_dashboard(datasource)
)
):
raise SupersetSecurityException(
self.get_datasource_access_error_object(datasource)
)
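The table-access branch of `raise_for_access` denies a table only when neither its schema-level grant nor any matching datasource-level grant applies. A plain-Python sketch of that denial logic, with grant sets standing in for `can_access()` lookups (the perm-string formats are only illustrative):

```python
# Stand-ins for the user's granted view-menu names.
schema_grants = {"[examples].[public]"}
datasource_grants = {"[examples].[public.logs](id:2)"}

def denied_tables(tables, table_datasources):
    """Return the subset of tables the user cannot reach by any grant."""
    denied = set()
    for table in tables:
        schema_perm = f"[examples].[{table.split('.')[0]}]"
        if schema_perm in schema_grants:
            continue  # schema-level grant covers the table
        # Access to any one matching datasource is sufficient.
        if not any(p in datasource_grants for p in table_datasources.get(table, [])):
            denied.add(table)
    return denied

tables = {"public.users", "private.secrets"}
table_datasources = {"private.secrets": ["[examples].[private.secrets](id:9)"]}
```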
    def get_user_by_username(
        self, username: str, session: Optional[Session] = None
    ) -> Optional[User]:
        """
        Retrieve a user by username (case-sensitive). The optional session
        parameter is a utility normally useful for Celery tasks, where the
        session needs to be scoped.
        """
session = session or self.get_session
return (
session.query(self.user_model)
.filter(self.user_model.username == username)
.one_or_none()
)
def get_anonymous_user(self) -> User: # pylint: disable=no-self-use
return AnonymousUserMixin()
def get_user_roles(self, user: Optional[User] = None) -> List[Role]:
if not user:
user = g.user
if user.is_anonymous:
public_role = current_app.config.get("AUTH_ROLE_PUBLIC")
return [self.get_public_role()] if public_role else []
return user.roles
def get_guest_rls_filters(
self, dataset: "BaseDatasource"
) -> List[GuestTokenRlsRule]:
"""
Retrieves the row level security filters for the current user and the dataset,
if the user is authenticated with a guest token.
:param dataset: The dataset to check against
:return: A list of filters
"""
guest_user = self.get_current_guest_user_if_guest()
if guest_user:
return [
rule
for rule in guest_user.rls
if not rule.get("dataset")
or str(rule.get("dataset")) == str(dataset.id)
]
return []
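The guest-token rule selection above reduces to a small predicate: a rule applies when it names no dataset (global) or names this dataset's id, with both sides coerced to strings. A standalone sketch:

```python
def applicable_rls_rules(rls_rules, dataset_id):
    """Select guest-token RLS rules that apply to the given dataset."""
    return [
        rule
        for rule in rls_rules
        if not rule.get("dataset") or str(rule.get("dataset")) == str(dataset_id)
    ]

rules = [
    {"clause": "tenant = 'acme'"},               # global rule, always applies
    {"dataset": 7, "clause": "region = 'EU'"},   # dataset-specific
    {"dataset": "8", "clause": "region = 'US'"}, # string id still matches int 8
]
```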
def get_rls_filters(self, table: "BaseDatasource") -> List[SqlaQuery]:
"""
Retrieves the appropriate row level security filters for the current user and
the passed table.
:param table: The table to check against
:returns: A list of filters
"""
if hasattr(g, "user"):
# pylint: disable=import-outside-toplevel
from superset.connectors.sqla.models import (
RLSFilterRoles,
RLSFilterTables,
RowLevelSecurityFilter,
)
user_roles = [role.id for role in self.get_user_roles()]
regular_filter_roles = (
self.get_session.query(RLSFilterRoles.c.rls_filter_id)
.join(RowLevelSecurityFilter)
.filter(
RowLevelSecurityFilter.filter_type
== RowLevelSecurityFilterType.REGULAR
)
.filter(RLSFilterRoles.c.role_id.in_(user_roles))
.subquery()
)
base_filter_roles = (
self.get_session.query(RLSFilterRoles.c.rls_filter_id)
.join(RowLevelSecurityFilter)
.filter(
RowLevelSecurityFilter.filter_type
== RowLevelSecurityFilterType.BASE
)
.filter(RLSFilterRoles.c.role_id.in_(user_roles))
.subquery()
)
filter_tables = (
self.get_session.query(RLSFilterTables.c.rls_filter_id)
.filter(RLSFilterTables.c.table_id == table.id)
.subquery()
)
query = (
self.get_session.query(
RowLevelSecurityFilter.id,
RowLevelSecurityFilter.group_key,
RowLevelSecurityFilter.clause,
)
.filter(RowLevelSecurityFilter.id.in_(filter_tables))
.filter(
or_(
and_(
RowLevelSecurityFilter.filter_type
== RowLevelSecurityFilterType.REGULAR,
RowLevelSecurityFilter.id.in_(regular_filter_roles),
),
and_(
RowLevelSecurityFilter.filter_type
== RowLevelSecurityFilterType.BASE,
RowLevelSecurityFilter.id.notin_(base_filter_roles),
),
)
)
)
return query.all()
return []
def get_rls_ids(self, table: "BaseDatasource") -> List[int]:
"""
Retrieves the appropriate row level security filters IDs for the current user
and the passed table.
:param table: The table to check against
:returns: A list of IDs
"""
ids = [f.id for f in self.get_rls_filters(table)]
ids.sort() # Combinations rather than permutations
return ids
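The sort in `get_rls_ids` matters because the ID list typically feeds a cache key: sorting makes equal *sets* of filters produce equal keys regardless of the order the database returned them in. A minimal illustration:

```python
def rls_cache_ids(filter_ids):
    """Order-insensitive ID list suitable for use in a cache key."""
    return sorted(filter_ids)
```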
@staticmethod
def raise_for_user_activity_access(user_id: int) -> None:
user = g.user if g.user and g.user.get_id() else None
if not user or (
not current_app.config["ENABLE_BROAD_ACTIVITY_ACCESS"]
and user_id != user.id
):
raise SupersetSecurityException(
SupersetError(
error_type=SupersetErrorType.USER_ACTIVITY_SECURITY_ACCESS_ERROR,
message="Access to user's activity data is restricted",
level=ErrorLevel.ERROR,
)
)
def raise_for_dashboard_access(self, dashboard: "Dashboard") -> None:
"""
Raise an exception if the user cannot access the dashboard.
This does not check for the required role/permission pairs,
it only concerns itself with entity relationships.
:param dashboard: Dashboard the user wants access to
:raises DashboardAccessDeniedError: If the user cannot access the resource
"""
# pylint: disable=import-outside-toplevel
from superset import is_feature_enabled
from superset.dashboards.commands.exceptions import DashboardAccessDeniedError
from superset.views.base import is_user_admin
from superset.views.utils import is_owner
def has_rbac_access() -> bool:
return (not is_feature_enabled("DASHBOARD_RBAC")) or any(
dashboard_role.id
in [user_role.id for user_role in self.get_user_roles()]
for dashboard_role in dashboard.roles
)
if self.is_guest_user():
can_access = self.has_guest_access(
GuestTokenResourceType.DASHBOARD, dashboard.id
)
else:
can_access = (
is_user_admin()
or is_owner(dashboard, g.user)
or (dashboard.published and has_rbac_access())
or (not dashboard.published and not dashboard.roles)
)
if not can_access:
raise DashboardAccessDeniedError()
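For non-guest users the access decision above is a pure boolean over four conditions. A truth-table sketch with stub booleans in place of the real helper calls:

```python
def can_access_dashboard(is_admin, is_owner, published, has_rbac_access, has_roles):
    """Mirror of the non-guest branch of raise_for_dashboard_access."""
    return (
        is_admin
        or is_owner
        or (published and has_rbac_access)
        or (not published and not has_roles)
    )
```

Note the last clause: a draft (unpublished) dashboard with no roles attached is visible to everyone, which is why attaching roles to an unpublished dashboard locks it down.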
@staticmethod
def can_access_based_on_dashboard(datasource: "BaseDatasource") -> bool:
# pylint: disable=import-outside-toplevel
from superset import db
from superset.dashboards.filters import DashboardAccessFilter
from superset.models.dashboard import Dashboard
from superset.models.slice import Slice
datasource_class = type(datasource)
query = (
db.session.query(datasource_class)
.join(Slice.table)
.filter(datasource_class.id == datasource.id)
)
query = DashboardAccessFilter("id", SQLAInterface(Dashboard, db.session)).apply(
query, None
)
exists = db.session.query(query.exists()).scalar()
return exists
@staticmethod
def _get_current_epoch_time() -> float:
""" This is used so the tests can mock time """
return time.time()
def create_guest_access_token(
self,
user: GuestTokenUser,
resources: GuestTokenResources,
rls: List[GuestTokenRlsRule],
) -> bytes:
secret = current_app.config["GUEST_TOKEN_JWT_SECRET"]
algo = current_app.config["GUEST_TOKEN_JWT_ALGO"]
exp_seconds = current_app.config["GUEST_TOKEN_JWT_EXP_SECONDS"]
        # calculate expiration time (time.time() and the configured lifetime
        # are both in seconds, so no unit conversion is needed)
        now = self._get_current_epoch_time()
        exp = now + exp_seconds
claims = {
"user": user,
"resources": resources,
"rls_rules": rls,
# standard jwt claims:
"iat": now, # issued at
"exp": exp, # expiration time
}
token = jwt.encode(claims, secret, algorithm=algo)
return token
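The real implementation delegates to PyJWT. As a rough stdlib-only sketch of what `jwt.encode(..., algorithm="HS256")` produces for a guest token (the secret and claims below are made up, and a real deployment should use PyJWT, not this):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def encode_hs256(claims: dict, secret: str) -> bytes:
    """Stdlib-only sketch of an HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    signature = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return signing_input + b"." + b64url(signature)

def decode_payload(token: bytes) -> dict:
    """Read back the (unverified) payload segment for inspection."""
    payload = token.split(b".")[1]
    payload += b"=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

now = int(time.time())  # integer seconds keep the claim arithmetic exact
claims = {"user": {"username": "guest"}, "iat": now, "exp": now + 300}
token = encode_hs256(claims, "not-a-real-secret")
```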
def get_guest_user_from_request(self, req: Request) -> Optional[GuestUser]:
"""
If there is a guest token in the request (used for embedded),
parses the token and returns the guest user.
This is meant to be used as a request loader for the LoginManager.
The LoginManager will only call this if an active session cannot be found.
:return: A guest user object
"""
raw_token = req.headers.get(current_app.config["GUEST_TOKEN_HEADER_NAME"])
if raw_token is None:
return None
try:
token = self.parse_jwt_guest_token(raw_token)
if token.get("user") is None:
raise ValueError("Guest token does not contain a user claim")
if token.get("resources") is None:
raise ValueError("Guest token does not contain a resources claim")
if token.get("rls_rules") is None:
raise ValueError("Guest token does not contain an rls_rules claim")
except Exception: # pylint: disable=broad-except
# The login manager will handle sending 401s.
# We don't need to send a special error message.
logger.warning("Invalid guest token", exc_info=True)
return None
else:
return self.get_guest_user_from_token(cast(GuestToken, token))
def get_guest_user_from_token(self, token: GuestToken) -> GuestUser:
return self.guest_user_cls(
token=token, roles=[self.find_role(current_app.config["GUEST_ROLE_NAME"])],
)
@staticmethod
def parse_jwt_guest_token(raw_token: str) -> Dict[str, Any]:
"""
        Parses a guest token. Raises an error if the JWT fails standard claims checks.
        :param raw_token: the raw token string taken from the request
        :return: the same token that was passed in, validated but unchanged
"""
secret = current_app.config["GUEST_TOKEN_JWT_SECRET"]
algo = current_app.config["GUEST_TOKEN_JWT_ALGO"]
return jwt.decode(raw_token, secret, algorithms=[algo])
@staticmethod
def is_guest_user(user: Optional[Any] = None) -> bool:
# pylint: disable=import-outside-toplevel
from superset import is_feature_enabled
if not is_feature_enabled("EMBEDDED_SUPERSET"):
return False
if not user:
user = g.user
return hasattr(user, "is_guest_user") and user.is_guest_user
def get_current_guest_user_if_guest(self) -> Optional[GuestUser]:
if self.is_guest_user():
return g.user
return None
def has_guest_access(
self, resource_type: GuestTokenResourceType, resource_id: Union[str, int]
) -> bool:
user = self.get_current_guest_user_if_guest()
if not user:
return False
strid = str(resource_id)
for resource in user.resources:
if resource["type"] == resource_type.value and str(resource["id"]) == strid:
return True
        return False
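`has_guest_access` compares resource ids as strings, so a token carrying `"42"` still matches a dashboard whose integer id is 42. A standalone sketch of that matching:

```python
def has_resource(resources, resource_type, resource_id):
    """True if any listed resource matches the type and (stringified) id."""
    strid = str(resource_id)
    return any(
        r["type"] == resource_type and str(r["id"]) == strid for r in resources
    )

resources = [{"type": "dashboard", "id": "42"}]
```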
from typing import Any, Optional
from flask import g
from flask_appbuilder.security.sqla.models import Role
from flask_babel import lazy_gettext as _
from sqlalchemy import and_, or_
from sqlalchemy.orm.query import Query
from superset import db, is_feature_enabled, security_manager
from superset.models.core import FavStar
from superset.models.dashboard import Dashboard
from superset.models.slice import Slice
from superset.security.guest_token import GuestTokenResourceType, GuestUser
from superset.views.base import BaseFilter, is_user_admin
from superset.views.base_api import BaseFavoriteFilter
class DashboardTitleOrSlugFilter(BaseFilter): # pylint: disable=too-few-public-methods
name = _("Title or Slug")
arg_name = "title_or_slug"
def apply(self, query: Query, value: Any) -> Query:
if not value:
return query
ilike_value = f"%{value}%"
return query.filter(
or_(
Dashboard.dashboard_title.ilike(ilike_value),
Dashboard.slug.ilike(ilike_value),
)
)
class DashboardFavoriteFilter( # pylint: disable=too-few-public-methods
BaseFavoriteFilter
):
"""
    Custom filter for the GET list that filters all dashboards that a user has favorited
"""
arg_name = "dashboard_is_favorite"
class_name = "Dashboard"
model = Dashboard
class DashboardAccessFilter(BaseFilter): # pylint: disable=too-few-public-methods
"""
List dashboards with the following criteria:
1. Those which the user owns
2. Those which the user has favorited
3. Those which have been published (if they have access to at least one slice)
If the user is an admin then show all dashboards.
This means they do not get curation but can still sort by "published"
if they wish to see those dashboards which are published first.
"""
def apply(self, query: Query, value: Any) -> Query:
if is_user_admin():
return query
datasource_perms = security_manager.user_view_menu_names("datasource_access")
schema_perms = security_manager.user_view_menu_names("schema_access")
is_rbac_disabled_filter = []
dashboard_has_roles = Dashboard.roles.any()
if is_feature_enabled("DASHBOARD_RBAC"):
is_rbac_disabled_filter.append(~dashboard_has_roles)
datasource_perm_query = (
db.session.query(Dashboard.id)
.join(Dashboard.slices)
.filter(
and_(
Dashboard.published.is_(True),
*is_rbac_disabled_filter,
or_(
Slice.perm.in_(datasource_perms),
Slice.schema_perm.in_(schema_perms),
security_manager.can_access_all_datasources(),
),
)
)
)
users_favorite_dash_query = db.session.query(FavStar.obj_id).filter(
and_(
FavStar.user_id == security_manager.user_model.get_user_id(),
FavStar.class_name == "Dashboard",
)
)
owner_ids_query = (
db.session.query(Dashboard.id)
.join(Dashboard.owners)
.filter(
security_manager.user_model.id
== security_manager.user_model.get_user_id()
)
)
feature_flagged_filters = []
if is_feature_enabled("DASHBOARD_RBAC"):
roles_based_query = (
db.session.query(Dashboard.id)
.join(Dashboard.roles)
.filter(
and_(
Dashboard.published.is_(True),
dashboard_has_roles,
Role.id.in_([x.id for x in security_manager.get_user_roles()]),
),
)
)
feature_flagged_filters.append(Dashboard.id.in_(roles_based_query))
if is_feature_enabled("EMBEDDED_SUPERSET") and security_manager.is_guest_user(
g.user
):
guest_user: GuestUser = g.user
embedded_dashboard_ids = [
r["id"]
for r in guest_user.resources
if r["type"] == GuestTokenResourceType.DASHBOARD.value
]
if len(embedded_dashboard_ids) != 0:
feature_flagged_filters.append(Dashboard.id.in_(embedded_dashboard_ids))
query = query.filter(
or_(
Dashboard.id.in_(owner_ids_query),
Dashboard.id.in_(datasource_perm_query),
Dashboard.id.in_(users_favorite_dash_query),
*feature_flagged_filters,
)
)
return query
class FilterRelatedRoles(BaseFilter): # pylint: disable=too-few-public-methods
"""
A filter to allow searching for related roles of a resource.
Use in the api by adding something like:
related_field_filters = {
"roles": RelatedFieldFilter("name", FilterRelatedRoles),
}
"""
name = _("Role")
arg_name = "roles"
def apply(self, query: Query, value: Optional[Any]) -> Query:
role_model = security_manager.role_model
if value:
return query.filter(role_model.name.ilike(f"%{value}%"),)
return query
class DashboardCertifiedFilter(BaseFilter): # pylint: disable=too-few-public-methods
"""
Custom filter for the GET list that filters all certified dashboards
"""
name = _("Is certified")
arg_name = "dashboard_is_certified"
def apply(self, query: Query, value: Any) -> Query:
if value is True:
return query.filter(and_(Dashboard.certified_by.isnot(None),))
if value is False:
return query.filter(and_(Dashboard.certified_by.is_(None),))
        return query
import json
import re
from typing import Any, Dict, Union
from marshmallow import fields, post_load, Schema
from marshmallow.validate import Length, ValidationError
from superset.exceptions import SupersetException
from superset.utils import core as utils
get_delete_ids_schema = {"type": "array", "items": {"type": "integer"}}
get_export_ids_schema = {"type": "array", "items": {"type": "integer"}}
get_fav_star_ids_schema = {"type": "array", "items": {"type": "integer"}}
thumbnail_query_schema = {
"type": "object",
"properties": {"force": {"type": "boolean"}},
}
dashboard_title_description = "A title for the dashboard."
slug_description = "Unique identifying part for the web address of the dashboard."
owners_description = (
"Owner are users ids allowed to delete or change this dashboard. "
"If left empty you will be one of the owners of the dashboard."
)
roles_description = (
"Roles is a list which defines access to the dashboard. "
"These roles are always applied in addition to restrictions on dataset "
"level access. "
"If no roles defined then the dashboard is available to all roles."
)
position_json_description = (
"This json object describes the positioning of the widgets "
"in the dashboard. It is dynamically generated when "
"adjusting the widgets size and positions by using "
"drag & drop in the dashboard view"
)
css_description = "Override CSS for the dashboard."
json_metadata_description = (
"This JSON object is generated dynamically when clicking "
"the save or overwrite button in the dashboard view. "
"It is exposed here for reference and for power users who may want to alter "
" specific parameters."
)
published_description = (
"Determines whether or not this dashboard is visible in "
"the list of all dashboards."
)
charts_description = (
"The names of the dashboard's charts. Names are used for legacy reasons."
)
certified_by_description = "Person or group that has certified this dashboard"
certification_details_description = "Details of the certification"
openapi_spec_methods_override = {
"get": {"get": {"description": "Get a dashboard detail information."}},
"get_list": {
"get": {
"description": "Get a list of dashboards, use Rison or JSON query "
"parameters for filtering, sorting, pagination and "
" for selecting specific columns and metadata.",
}
},
"info": {
"get": {
"description": "Several metadata information about dashboard API "
"endpoints.",
}
},
"related": {
"get": {"description": "Get a list of all possible owners for a dashboard."}
},
}
def validate_json(value: Union[bytes, bytearray, str]) -> None:
try:
utils.validate_json(value)
except SupersetException as ex:
raise ValidationError("JSON not valid") from ex
def validate_json_metadata(value: Union[bytes, bytearray, str]) -> None:
if not value:
return
try:
value_obj = json.loads(value)
except json.decoder.JSONDecodeError as ex:
raise ValidationError("JSON not valid") from ex
errors = DashboardJSONMetadataSchema().validate(value_obj, partial=False)
if errors:
raise ValidationError(errors)
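`validate_json_metadata` validates in two stages: first that the value parses as JSON at all, then that the parsed object satisfies the schema. A stdlib-only sketch of that shape, with a hand-rolled type check standing in for the marshmallow schema:

```python
import json

def validate_json_metadata_sketch(value: str) -> list:
    """Parse-then-validate; returns a list of error messages (empty = valid)."""
    if not value:
        return []
    try:
        obj = json.loads(value)
    except json.decoder.JSONDecodeError:
        return ["JSON not valid"]
    errors = []
    # Stand-in for DashboardJSONMetadataSchema().validate(obj, partial=False)
    if "refresh_frequency" in obj and not isinstance(obj["refresh_frequency"], int):
        errors.append("refresh_frequency: Not a valid integer.")
    return errors
```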
class DashboardJSONMetadataSchema(Schema):
show_native_filters = fields.Boolean()
# native_filter_configuration is for dashboard-native filters
native_filter_configuration = fields.List(fields.Dict(), allow_none=True)
# chart_configuration for now keeps data about cross-filter scoping for charts
chart_configuration = fields.Dict()
# filter_sets_configuration is for dashboard-native filters
filter_sets_configuration = fields.List(fields.Dict(), allow_none=True)
timed_refresh_immune_slices = fields.List(fields.Integer())
# deprecated wrt dashboard-native filters
filter_scopes = fields.Dict()
expanded_slices = fields.Dict()
refresh_frequency = fields.Integer()
# deprecated wrt dashboard-native filters
default_filters = fields.Str()
stagger_refresh = fields.Boolean()
stagger_time = fields.Integer()
color_scheme = fields.Str(allow_none=True)
color_namespace = fields.Str(allow_none=True)
positions = fields.Dict(allow_none=True)
label_colors = fields.Dict()
# used for v0 import/export
import_time = fields.Integer()
remote_id = fields.Integer()
class UserSchema(Schema):
id = fields.Int()
username = fields.String()
first_name = fields.String()
last_name = fields.String()
class RolesSchema(Schema):
id = fields.Int()
name = fields.String()
class DashboardGetResponseSchema(Schema):
id = fields.Int()
slug = fields.String()
url = fields.String()
dashboard_title = fields.String(description=dashboard_title_description)
thumbnail_url = fields.String()
published = fields.Boolean()
css = fields.String(description=css_description)
json_metadata = fields.String(description=json_metadata_description)
position_json = fields.String(description=position_json_description)
certified_by = fields.String(description=certified_by_description)
certification_details = fields.String(description=certification_details_description)
changed_by_name = fields.String()
changed_by_url = fields.String()
changed_by = fields.Nested(UserSchema)
changed_on = fields.DateTime()
charts = fields.List(fields.String(description=charts_description))
owners = fields.List(fields.Nested(UserSchema))
roles = fields.List(fields.Nested(RolesSchema))
changed_on_humanized = fields.String(data_key="changed_on_delta_humanized")
class DatabaseSchema(Schema):
id = fields.Int()
name = fields.String()
backend = fields.String()
allow_multi_schema_metadata_fetch = fields.Bool() # pylint: disable=invalid-name
allows_subquery = fields.Bool()
allows_cost_estimate = fields.Bool()
allows_virtual_table_explore = fields.Bool()
explore_database_id = fields.Int()
class DashboardDatasetSchema(Schema):
id = fields.Int()
uid = fields.Str()
column_formats = fields.Dict()
database = fields.Nested(DatabaseSchema)
default_endpoint = fields.String()
filter_select = fields.Bool()
filter_select_enabled = fields.Bool()
is_sqllab_view = fields.Bool()
name = fields.Str()
datasource_name = fields.Str()
table_name = fields.Str()
type = fields.Str()
schema = fields.Str()
offset = fields.Int()
cache_timeout = fields.Int()
params = fields.Str()
perm = fields.Str()
edit_url = fields.Str()
sql = fields.Str()
select_star = fields.Str()
main_dttm_col = fields.Str()
health_check_message = fields.Str()
fetch_values_predicate = fields.Str()
template_params = fields.Str()
owners = fields.List(fields.Int())
columns = fields.List(fields.Dict())
column_types = fields.List(fields.Int())
metrics = fields.List(fields.Dict())
order_by_choices = fields.List(fields.List(fields.Str()))
verbose_map = fields.Dict(fields.Str(), fields.Str())
time_grain_sqla = fields.List(fields.List(fields.Str()))
granularity_sqla = fields.List(fields.List(fields.Str()))
class BaseDashboardSchema(Schema):
# pylint: disable=no-self-use,unused-argument
@post_load
def post_load(self, data: Dict[str, Any], **kwargs: Any) -> Dict[str, Any]:
if data.get("slug"):
data["slug"] = data["slug"].strip()
data["slug"] = data["slug"].replace(" ", "-")
data["slug"] = re.sub(r"[^\w\-]+", "", data["slug"])
return data
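The `post_load` hook above normalizes incoming slugs in three steps: trim, replace spaces with hyphens, then drop anything outside `[A-Za-z0-9_-]`. The same pipeline as a standalone function:

```python
import re

def normalize_slug(slug: str) -> str:
    """Trim, hyphenate spaces, and strip characters invalid in a slug."""
    slug = slug.strip().replace(" ", "-")
    return re.sub(r"[^\w\-]+", "", slug)
```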
class DashboardPostSchema(BaseDashboardSchema):
dashboard_title = fields.String(
description=dashboard_title_description,
allow_none=True,
validate=Length(0, 500),
)
slug = fields.String(
description=slug_description, allow_none=True, validate=[Length(1, 255)]
)
owners = fields.List(fields.Integer(description=owners_description))
roles = fields.List(fields.Integer(description=roles_description))
position_json = fields.String(
description=position_json_description, validate=validate_json
)
css = fields.String()
json_metadata = fields.String(
description=json_metadata_description, validate=validate_json_metadata,
)
published = fields.Boolean(description=published_description)
certified_by = fields.String(description=certified_by_description, allow_none=True)
certification_details = fields.String(
description=certification_details_description, allow_none=True
)
class DashboardPutSchema(BaseDashboardSchema):
dashboard_title = fields.String(
description=dashboard_title_description,
allow_none=True,
validate=Length(0, 500),
)
slug = fields.String(
description=slug_description, allow_none=True, validate=Length(0, 255)
)
owners = fields.List(
fields.Integer(description=owners_description, allow_none=True)
)
roles = fields.List(fields.Integer(description=roles_description, allow_none=True))
position_json = fields.String(
description=position_json_description, allow_none=True, validate=validate_json
)
css = fields.String(description=css_description, allow_none=True)
json_metadata = fields.String(
description=json_metadata_description,
allow_none=True,
validate=validate_json_metadata,
)
published = fields.Boolean(description=published_description, allow_none=True)
certified_by = fields.String(description=certified_by_description, allow_none=True)
certification_details = fields.String(
description=certification_details_description, allow_none=True
)
class ChartFavStarResponseResult(Schema):
id = fields.Integer(description="The Chart id")
value = fields.Boolean(description="The FaveStar value")
class GetFavStarIdsSchema(Schema):
result = fields.List(
fields.Nested(ChartFavStarResponseResult),
description="A list of results for each corresponding chart in the request",
)
class ImportV1DashboardSchema(Schema):
dashboard_title = fields.String(required=True)
description = fields.String(allow_none=True)
css = fields.String(allow_none=True)
slug = fields.String(allow_none=True)
uuid = fields.UUID(required=True)
position = fields.Dict()
metadata = fields.Dict()
    version = fields.String(required=True)
import json
import logging
from datetime import datetime
from typing import Any, Dict, List, Optional, Union
from sqlalchemy.exc import SQLAlchemyError
from superset import security_manager
from superset.dao.base import BaseDAO
from superset.dashboards.commands.exceptions import DashboardNotFoundError
from superset.dashboards.filters import DashboardAccessFilter
from superset.extensions import db
from superset.models.core import FavStar, FavStarClassName
from superset.models.dashboard import Dashboard
from superset.models.slice import Slice
from superset.utils.dashboard_filter_scopes_converter import copy_filter_scopes
logger = logging.getLogger(__name__)
class DashboardDAO(BaseDAO):
model_cls = Dashboard
base_filter = DashboardAccessFilter
@staticmethod
def get_by_id_or_slug(id_or_slug: str) -> Dashboard:
dashboard = Dashboard.get(id_or_slug)
if not dashboard:
raise DashboardNotFoundError()
security_manager.raise_for_dashboard_access(dashboard)
return dashboard
@staticmethod
def get_datasets_for_dashboard(id_or_slug: str) -> List[Any]:
dashboard = DashboardDAO.get_by_id_or_slug(id_or_slug)
return dashboard.datasets_trimmed_for_slices()
@staticmethod
def get_charts_for_dashboard(id_or_slug: str) -> List[Slice]:
return DashboardDAO.get_by_id_or_slug(id_or_slug).slices
@staticmethod
def get_dashboard_changed_on(
id_or_slug_or_dashboard: Union[str, Dashboard]
) -> datetime:
"""
Get latest changed datetime for a dashboard.
:param id_or_slug_or_dashboard: A dashboard or the ID or slug of the dashboard.
:returns: The datetime the dashboard was last changed.
"""
dashboard = (
DashboardDAO.get_by_id_or_slug(id_or_slug_or_dashboard)
if isinstance(id_or_slug_or_dashboard, str)
else id_or_slug_or_dashboard
)
# drop microseconds in datetime to match with last_modified header
return dashboard.changed_on.replace(microsecond=0)
@staticmethod
def get_dashboard_and_slices_changed_on( # pylint: disable=invalid-name
id_or_slug_or_dashboard: Union[str, Dashboard]
) -> datetime:
"""
Get latest changed datetime for a dashboard. The change could be a dashboard
metadata change, or a change to one of its dependent slices.
:param id_or_slug_or_dashboard: A dashboard or the ID or slug of the dashboard.
:returns: The datetime the dashboard was last changed.
"""
dashboard = (
DashboardDAO.get_by_id_or_slug(id_or_slug_or_dashboard)
if isinstance(id_or_slug_or_dashboard, str)
else id_or_slug_or_dashboard
)
dashboard_changed_on = DashboardDAO.get_dashboard_changed_on(dashboard)
slices = dashboard.slices
slices_changed_on = max(
[slc.changed_on for slc in slices]
+ ([datetime.fromtimestamp(0)] if len(slices) == 0 else [])
)
# drop microseconds in datetime to match with last_modified header
return max(dashboard_changed_on, slices_changed_on).replace(microsecond=0)
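The "latest change" computation above has to tolerate a dashboard with no slices, which is why the epoch is appended as a floor so `max()` never sees an empty sequence. A standalone sketch of that pattern:

```python
from datetime import datetime

def latest_changed_on(dashboard_changed_on, slice_changed_ons):
    """Latest of the dashboard's and its slices' change times, second precision."""
    slices_changed_on = max(
        list(slice_changed_ons)
        + ([datetime.fromtimestamp(0)] if not slice_changed_ons else [])
    )
    # Drop microseconds to match the Last-Modified header's resolution.
    return max(dashboard_changed_on, slices_changed_on).replace(microsecond=0)

dash = datetime(2021, 5, 1, 12, 0, 0, 123456)
```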
@staticmethod
def get_dashboard_and_datasets_changed_on( # pylint: disable=invalid-name
id_or_slug_or_dashboard: Union[str, Dashboard]
) -> datetime:
"""
Get latest changed datetime for a dashboard. The change could be a dashboard
metadata change, a change to one of its dependent datasets.
:param id_or_slug_or_dashboard: A dashboard or the ID or slug of the dashboard.
:returns: The datetime the dashboard was last changed.
"""
dashboard = (
DashboardDAO.get_by_id_or_slug(id_or_slug_or_dashboard)
if isinstance(id_or_slug_or_dashboard, str)
else id_or_slug_or_dashboard
)
dashboard_changed_on = DashboardDAO.get_dashboard_changed_on(dashboard)
datasources = dashboard.datasources
datasources_changed_on = max(
[datasource.changed_on for datasource in datasources]
+ ([datetime.fromtimestamp(0)] if len(datasources) == 0 else [])
)
# drop microseconds in datetime to match with last_modified header
return max(dashboard_changed_on, datasources_changed_on).replace(microsecond=0)
@staticmethod
def validate_slug_uniqueness(slug: str) -> bool:
if not slug:
return True
dashboard_query = db.session.query(Dashboard).filter(Dashboard.slug == slug)
return not db.session.query(dashboard_query.exists()).scalar()
@staticmethod
def validate_update_slug_uniqueness(dashboard_id: int, slug: Optional[str]) -> bool:
if slug is not None:
dashboard_query = db.session.query(Dashboard).filter(
Dashboard.slug == slug, Dashboard.id != dashboard_id
)
return not db.session.query(dashboard_query.exists()).scalar()
return True
@staticmethod
def update_charts_owners(model: Dashboard, commit: bool = True) -> Dashboard:
owners = list(model.owners)
for slc in model.slices:
slc.owners = list(set(owners) | set(slc.owners))
if commit:
db.session.commit()
return model
@staticmethod
def bulk_delete(models: Optional[List[Dashboard]], commit: bool = True) -> None:
item_ids = [model.id for model in models] if models else []
# bulk delete, first delete related data
if models:
for model in models:
model.slices = []
model.owners = []
db.session.merge(model)
# bulk delete itself
try:
db.session.query(Dashboard).filter(Dashboard.id.in_(item_ids)).delete(
synchronize_session="fetch"
)
if commit:
db.session.commit()
except SQLAlchemyError as ex:
if commit:
db.session.rollback()
raise ex
@staticmethod
def set_dash_metadata( # pylint: disable=too-many-locals
dashboard: Dashboard,
data: Dict[Any, Any],
old_to_new_slice_ids: Optional[Dict[int, int]] = None,
commit: bool = False,
) -> Dashboard:
positions = data.get("positions")
new_filter_scopes = {}
md = dashboard.params_dict
if positions is not None:
# find slices in the position data
slice_ids = [
value.get("meta", {}).get("chartId")
for value in positions.values()
if isinstance(value, dict)
]
session = db.session()
current_slices = session.query(Slice).filter(Slice.id.in_(slice_ids)).all()
dashboard.slices = current_slices
# add UUID to positions
uuid_map = {slice.id: str(slice.uuid) for slice in current_slices}
for obj in positions.values():
if (
isinstance(obj, dict)
and obj["type"] == "CHART"
and obj["meta"]["chartId"]
):
chart_id = obj["meta"]["chartId"]
obj["meta"]["uuid"] = uuid_map.get(chart_id)
# remove leading and trailing white spaces in the dumped json
dashboard.position_json = json.dumps(
positions, indent=None, separators=(",", ":"), sort_keys=True
)
if "filter_scopes" in data:
# replace filter_id and immune ids from old slice id to new slice id:
# and remove slice ids that are not in dash anymore
slc_id_dict: Dict[int, int] = {}
if old_to_new_slice_ids:
slc_id_dict = {
old: new
for old, new in old_to_new_slice_ids.items()
if new in slice_ids
}
else:
slc_id_dict = {sid: sid for sid in slice_ids}
new_filter_scopes = copy_filter_scopes(
old_to_new_slc_id_dict=slc_id_dict,
old_filter_scopes=json.loads(data["filter_scopes"] or "{}")
if isinstance(data["filter_scopes"], str)
else data["filter_scopes"],
)
default_filters_data = json.loads(data.get("default_filters", "{}"))
applicable_filters = {
key: v
for key, v in default_filters_data.items()
if int(key) in slice_ids
}
md["default_filters"] = json.dumps(applicable_filters)
# positions have its own column, no need to store it in metadata
md.pop("positions", None)
# The css and dashboard_title properties are not part of the metadata
# TODO (geido): remove by refactoring/deprecating save_dash endpoint
if data.get("css") is not None:
dashboard.css = data.get("css")
if data.get("dashboard_title") is not None:
dashboard.dashboard_title = data.get("dashboard_title")
if new_filter_scopes:
md["filter_scopes"] = new_filter_scopes
else:
md.pop("filter_scopes", None)
md.setdefault("timed_refresh_immune_slices", [])
if data.get("color_namespace") is None:
md.pop("color_namespace", None)
else:
md["color_namespace"] = data.get("color_namespace")
md["expanded_slices"] = data.get("expanded_slices", {})
md["refresh_frequency"] = data.get("refresh_frequency", 0)
md["color_scheme"] = data.get("color_scheme", "")
md["label_colors"] = data.get("label_colors", {})
dashboard.json_metadata = json.dumps(md)
if commit:
db.session.commit()
return dashboard
@staticmethod
def favorited_ids(
dashboards: List[Dashboard], current_user_id: int
) -> List[int]:
ids = [dash.id for dash in dashboards]
return [
star.obj_id
for star in db.session.query(FavStar.obj_id)
.filter(
FavStar.class_name == FavStarClassName.DASHBOARD,
FavStar.obj_id.in_(ids),
FavStar.user_id == current_user_id,
)
.all()
] | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/dashboards/dao.py | 0.824073 | 0.203787 | dao.py | pypi |
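A minimal stdlib sketch of the whitespace-free serialization that `set_dash_metadata` above applies to `position_json`. The payload here is hypothetical, not taken from the original source; it only illustrates how `indent=None` plus tight separators and `sort_keys` produce a compact, deterministic JSON string.

```python
import json

# Hypothetical positions payload, mirroring the dashboard layout structure.
positions = {
    "CHART-1": {"type": "CHART", "meta": {"chartId": 42, "uuid": "abc"}},
    "ROOT_ID": {"type": "ROOT"},
}

# indent=None with tight separators drops all whitespace; sort_keys makes
# the output deterministic, so repeated saves produce identical strings.
compact = json.dumps(positions, indent=None, separators=(",", ":"), sort_keys=True)
print(compact)
```

Deterministic output matters here because `position_json` is compared and diffed as a plain text column.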
from typing import Any, cast, Dict, Mapping
from marshmallow import fields, post_load, Schema, ValidationError
from marshmallow.validate import Length, OneOf
from superset.dashboards.filter_sets.consts import (
DASHBOARD_OWNER_TYPE,
JSON_METADATA_FIELD,
OWNER_ID_FIELD,
OWNER_TYPE_FIELD,
USER_OWNER_TYPE,
)
class JsonMetadataSchema(Schema):
nativeFilters = fields.Mapping(required=True, allow_none=False)
dataMask = fields.Mapping(required=False, allow_none=False)
class FilterSetSchema(Schema):
json_metadata_schema: JsonMetadataSchema = JsonMetadataSchema()
def _validate_json_meta_data(self, json_meta_data: str) -> None:
try:
self.json_metadata_schema.loads(json_meta_data)
except Exception as ex:
raise ValidationError("failed to parse json_metadata to json") from ex
class FilterSetPostSchema(FilterSetSchema):
json_metadata_schema: JsonMetadataSchema = JsonMetadataSchema()
# pylint: disable=W0613
name = fields.String(required=True, allow_none=False, validate=Length(0, 500))
description = fields.String(
required=False, allow_none=True, validate=[Length(1, 1000)]
)
json_metadata = fields.String(allow_none=False, required=True)
owner_type = fields.String(
required=True, validate=OneOf([USER_OWNER_TYPE, DASHBOARD_OWNER_TYPE])
)
owner_id = fields.Int(required=False)
@post_load
def validate(
self, data: Mapping[Any, Any], *, many: Any, partial: Any
) -> Dict[str, Any]:
self._validate_json_meta_data(data[JSON_METADATA_FIELD])
if data[OWNER_TYPE_FIELD] == USER_OWNER_TYPE and OWNER_ID_FIELD not in data:
raise ValidationError("owner_id is mandatory when owner_type is User")
return cast(Dict[str, Any], data)
class FilterSetPutSchema(FilterSetSchema):
name = fields.String(required=False, allow_none=False, validate=Length(0, 500))
description = fields.String(
required=False, allow_none=False, validate=[Length(1, 1000)]
)
json_metadata = fields.String(required=False, allow_none=False)
owner_type = fields.String(
allow_none=False, required=False, validate=OneOf([DASHBOARD_OWNER_TYPE])
)
@post_load
def validate( # pylint: disable=unused-argument
self, data: Mapping[Any, Any], *, many: Any, partial: Any
) -> Dict[str, Any]:
if JSON_METADATA_FIELD in data:
self._validate_json_meta_data(data[JSON_METADATA_FIELD])
return cast(Dict[str, Any], data)
def validate_pair(first_field: str, second_field: str, data: Dict[str, Any]) -> None:
if first_field in data and second_field not in data:
raise ValidationError(
"{} must be included alongside {}".format(first_field, second_field)
) | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/dashboards/filter_sets/schemas.py | 0.686685 | 0.172485 | schemas.py | pypi |
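A self-contained sketch of the validation pattern used by `FilterSetSchema._validate_json_meta_data` above: parse the raw string first, then re-raise with a schema-level message. `ValueError` stands in for marshmallow's `ValidationError` here so the sketch needs no third-party imports.

```python
import json

def validate_json_metadata(raw: str) -> dict:
    # Parse the JSON-string field; on failure, chain the decode error into
    # a single, user-facing validation message (as the schema above does).
    try:
        return json.loads(raw)
    except json.JSONDecodeError as ex:
        raise ValueError("failed to parse json_metadata to json") from ex

print(validate_json_metadata('{"nativeFilters": {}}'))
```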
from __future__ import absolute_import, division, print_function, unicode_literals
import enum
from typing import List, Optional, TYPE_CHECKING, Union
from flask_appbuilder import Model
from sqlalchemy import Column, Enum, ForeignKey, Integer, String
from sqlalchemy.engine.base import Connection
from sqlalchemy.orm import relationship, Session, sessionmaker
from sqlalchemy.orm.mapper import Mapper
from superset.models.helpers import AuditMixinNullable
if TYPE_CHECKING:
from superset.models.core import FavStar
from superset.models.dashboard import Dashboard
from superset.models.slice import Slice
from superset.models.sql_lab import Query
Session = sessionmaker(autoflush=False)
class TagTypes(enum.Enum):
"""
Types for tags.
Objects (queries, charts and dashboards) are tagged with implicit tags based
on metadata: types, owners and who favorited them. This way, user "alice"
can find all their objects by querying for the tag `owner:alice`.
"""
# pylint: disable=invalid-name
# explicit tags, added manually by the owner
custom = 1
# implicit tags, generated automatically
type = 2
owner = 3
favorited_by = 4
class ObjectTypes(enum.Enum):
"""Object types."""
# pylint: disable=invalid-name
query = 1
chart = 2
dashboard = 3
class Tag(Model, AuditMixinNullable):
"""A tag attached to an object (query, chart or dashboard)."""
__tablename__ = "tag"
id = Column(Integer, primary_key=True)
name = Column(String(250), unique=True)
type = Column(Enum(TagTypes))
class TaggedObject(Model, AuditMixinNullable):
"""An association between an object and a tag."""
__tablename__ = "tagged_object"
id = Column(Integer, primary_key=True)
tag_id = Column(Integer, ForeignKey("tag.id"))
object_id = Column(Integer)
object_type = Column(Enum(ObjectTypes))
tag = relationship("Tag", backref="objects")
def get_tag(name: str, session: Session, type_: TagTypes) -> Tag:
tag = session.query(Tag).filter_by(name=name, type=type_).one_or_none()
if tag is None:
tag = Tag(name=name, type=type_)
session.add(tag)
session.commit()
return tag
def get_object_type(class_name: str) -> ObjectTypes:
mapping = {
"slice": ObjectTypes.chart,
"dashboard": ObjectTypes.dashboard,
"query": ObjectTypes.query,
}
try:
return mapping[class_name.lower()]
except KeyError as ex:
raise Exception("No mapping found for {0}".format(class_name)) from ex
class ObjectUpdater:
object_type: Optional[str] = None
@classmethod
def get_owners_ids(
cls, target: Union["Dashboard", "FavStar", "Slice"]
) -> List[int]:
raise NotImplementedError("Subclass should implement `get_owners_ids`")
@classmethod
def _add_owners(
cls, session: Session, target: Union["Dashboard", "FavStar", "Slice"]
) -> None:
for owner_id in cls.get_owners_ids(target):
name = "owner:{0}".format(owner_id)
tag = get_tag(name, session, TagTypes.owner)
tagged_object = TaggedObject(
tag_id=tag.id, object_id=target.id, object_type=cls.object_type
)
session.add(tagged_object)
@classmethod
def after_insert(
cls,
_mapper: Mapper,
connection: Connection,
target: Union["Dashboard", "FavStar", "Slice"],
) -> None:
session = Session(bind=connection)
# add `owner:` tags
cls._add_owners(session, target)
# add `type:` tags
tag = get_tag("type:{0}".format(cls.object_type), session, TagTypes.type)
tagged_object = TaggedObject(
tag_id=tag.id, object_id=target.id, object_type=cls.object_type
)
session.add(tagged_object)
session.commit()
@classmethod
def after_update(
cls,
_mapper: Mapper,
connection: Connection,
target: Union["Dashboard", "FavStar", "Slice"],
) -> None:
session = Session(bind=connection)
# delete current `owner:` tags
query = (
session.query(TaggedObject.id)
.join(Tag)
.filter(
TaggedObject.object_type == cls.object_type,
TaggedObject.object_id == target.id,
Tag.type == TagTypes.owner,
)
)
ids = [row[0] for row in query]
session.query(TaggedObject).filter(TaggedObject.id.in_(ids)).delete(
synchronize_session=False
)
# add `owner:` tags
cls._add_owners(session, target)
session.commit()
@classmethod
def after_delete(
cls,
_mapper: Mapper,
connection: Connection,
target: Union["Dashboard", "FavStar", "Slice"],
) -> None:
session = Session(bind=connection)
# delete row from `tagged_objects`
session.query(TaggedObject).filter(
TaggedObject.object_type == cls.object_type,
TaggedObject.object_id == target.id,
).delete()
session.commit()
class ChartUpdater(ObjectUpdater):
object_type = "chart"
@classmethod
def get_owners_ids(cls, target: "Slice") -> List[int]:
return [owner.id for owner in target.owners]
class DashboardUpdater(ObjectUpdater):
object_type = "dashboard"
@classmethod
def get_owners_ids(cls, target: "Dashboard") -> List[int]:
return [owner.id for owner in target.owners]
class QueryUpdater(ObjectUpdater):
object_type = "query"
@classmethod
def get_owners_ids(cls, target: "Query") -> List[int]:
return [target.user_id]
class FavStarUpdater:
@classmethod
def after_insert(
cls, _mapper: Mapper, connection: Connection, target: "FavStar"
) -> None:
session = Session(bind=connection)
name = "favorited_by:{0}".format(target.user_id)
tag = get_tag(name, session, TagTypes.favorited_by)
tagged_object = TaggedObject(
tag_id=tag.id,
object_id=target.obj_id,
object_type=get_object_type(target.class_name),
)
session.add(tagged_object)
session.commit()
@classmethod
def after_delete(
cls, _mapper: Mapper, connection: Connection, target: "FavStar"
) -> None:
session = Session(bind=connection)
name = "favorited_by:{0}".format(target.user_id)
query = (
session.query(TaggedObject.id)
.join(Tag)
.filter(
TaggedObject.object_id == target.obj_id,
Tag.type == TagTypes.favorited_by,
Tag.name == name,
)
)
ids = [row[0] for row in query]
session.query(TaggedObject).filter(TaggedObject.id.in_(ids)).delete(
synchronize_session=False
)
session.commit() | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/models/tags.py | 0.884052 | 0.17172 | tags.py | pypi |
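A stripped-down version of the `get_object_type` dispatch from `tags.py` above, runnable without SQLAlchemy. The enum is redeclared locally so the sketch is self-contained; the notable behavior is the case-insensitive lookup and the `"slice"` → `chart` translation.

```python
import enum

class ObjectTypes(enum.Enum):
    query = 1
    chart = 2
    dashboard = 3

def get_object_type(class_name: str) -> ObjectTypes:
    # SQLAlchemy model names ("Slice", "Dashboard", "Query") map onto tag
    # object types; note "slice" is translated to the chart object type.
    mapping = {
        "slice": ObjectTypes.chart,
        "dashboard": ObjectTypes.dashboard,
        "query": ObjectTypes.query,
    }
    try:
        return mapping[class_name.lower()]
    except KeyError as ex:
        raise Exception("No mapping found for {0}".format(class_name)) from ex

print(get_object_type("Slice"))  # ObjectTypes.chart
```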
"""A collection of ORM sqlalchemy models for Superset"""
import enum
import json
from typing import Any, Dict, Optional
from cron_descriptor import get_description
from flask_appbuilder import Model
from flask_appbuilder.models.decorators import renders
from sqlalchemy import (
Boolean,
Column,
DateTime,
Float,
ForeignKey,
Integer,
String,
Table,
Text,
)
from sqlalchemy.orm import backref, relationship, validates
from sqlalchemy.schema import UniqueConstraint
from sqlalchemy_utils import UUIDType
from superset.extensions import security_manager
from superset.models.core import Database
from superset.models.dashboard import Dashboard
from superset.models.helpers import AuditMixinNullable
from superset.models.slice import Slice
metadata = Model.metadata # pylint: disable=no-member
class ReportScheduleType(str, enum.Enum):
ALERT = "Alert"
REPORT = "Report"
class ReportScheduleValidatorType(str, enum.Enum):
"""Validator types for alerts"""
NOT_NULL = "not null"
OPERATOR = "operator"
class ReportRecipientType(str, enum.Enum):
EMAIL = "Email"
SLACK = "Slack"
class ReportState(str, enum.Enum):
SUCCESS = "Success"
WORKING = "Working"
ERROR = "Error"
NOOP = "Not triggered"
GRACE = "On Grace"
class ReportDataFormat(str, enum.Enum):
VISUALIZATION = "PNG"
DATA = "CSV"
TEXT = "TEXT"
class ReportCreationMethodType(str, enum.Enum):
CHARTS = "charts"
DASHBOARDS = "dashboards"
ALERTS_REPORTS = "alerts_reports"
report_schedule_user = Table(
"report_schedule_user",
metadata,
Column("id", Integer, primary_key=True),
Column("user_id", Integer, ForeignKey("ab_user.id"), nullable=False),
Column(
"report_schedule_id", Integer, ForeignKey("report_schedule.id"), nullable=False
),
UniqueConstraint("user_id", "report_schedule_id"),
)
class ReportSchedule(Model, AuditMixinNullable):
"""
Report Schedules, supports alerts and reports
"""
__tablename__ = "report_schedule"
__table_args__ = (UniqueConstraint("name", "type"),)
id = Column(Integer, primary_key=True)
type = Column(String(50), nullable=False)
name = Column(String(150), nullable=False)
description = Column(Text)
context_markdown = Column(Text)
active = Column(Boolean, default=True, index=True)
crontab = Column(String(1000), nullable=False)
creation_method = Column(
String(255), server_default=ReportCreationMethodType.ALERTS_REPORTS
)
timezone = Column(String(100), default="UTC", nullable=False)
report_format = Column(String(50), default=ReportDataFormat.VISUALIZATION)
sql = Column(Text())
# (Alerts/Reports) M-O to chart
chart_id = Column(Integer, ForeignKey("slices.id"), nullable=True)
chart = relationship(Slice, backref="report_schedules", foreign_keys=[chart_id])
# (Alerts/Reports) M-O to dashboard
dashboard_id = Column(Integer, ForeignKey("dashboards.id"), nullable=True)
dashboard = relationship(
Dashboard, backref="report_schedules", foreign_keys=[dashboard_id]
)
# (Alerts) M-O to database
database_id = Column(Integer, ForeignKey("dbs.id"), nullable=True)
database = relationship(Database, foreign_keys=[database_id])
owners = relationship(security_manager.user_model, secondary=report_schedule_user)
# (Alerts) Stamped last observations
last_eval_dttm = Column(DateTime)
last_state = Column(String(50), default=ReportState.NOOP)
last_value = Column(Float)
last_value_row_json = Column(Text)
# (Alerts) Observed value validation related columns
validator_type = Column(String(100))
validator_config_json = Column(Text, default="{}")
# Log retention
log_retention = Column(Integer, default=90)
# (Alerts) After a success how long to wait for a new trigger (seconds)
grace_period = Column(Integer, default=60 * 60 * 4)
# (Alerts/Reports) Unlock a possible stalled working state
working_timeout = Column(Integer, default=60 * 60 * 1)
# Store the selected dashboard tabs etc.
extra = Column(Text, default="{}")
# (Reports) When generating a screenshot, bypass the cache?
force_screenshot = Column(Boolean, default=False)
def __repr__(self) -> str:
return str(self.name)
@renders("crontab")
def crontab_humanized(self) -> str:
return get_description(self.crontab)
@validates("extra")
# pylint: disable=unused-argument,no-self-use
def validate_extra(self, key: str, value: Dict[Any, Any]) -> Optional[str]:
if value is not None:
return json.dumps(value)
return None
class ReportRecipients(Model, AuditMixinNullable):
"""
Report Recipients, meant to support multiple notification types, e.g. Slack and email
"""
__tablename__ = "report_recipient"
id = Column(Integer, primary_key=True)
type = Column(String(50), nullable=False)
recipient_config_json = Column(Text, default="{}")
report_schedule_id = Column(
Integer, ForeignKey("report_schedule.id"), nullable=False
)
report_schedule = relationship(
ReportSchedule,
backref=backref("recipients", cascade="all,delete,delete-orphan"),
foreign_keys=[report_schedule_id],
)
class ReportExecutionLog(Model): # pylint: disable=too-few-public-methods
"""
Report Execution Log, holds the result of the report execution with timestamps,
last observation and possible error messages
"""
__tablename__ = "report_execution_log"
id = Column(Integer, primary_key=True)
uuid = Column(UUIDType(binary=True))
# Timestamps
scheduled_dttm = Column(DateTime, nullable=False)
start_dttm = Column(DateTime)
end_dttm = Column(DateTime)
# (Alerts) Observed values
value = Column(Float)
value_row_json = Column(Text)
state = Column(String(50), nullable=False)
error_message = Column(Text)
report_schedule_id = Column(
Integer, ForeignKey("report_schedule.id"), nullable=False
)
report_schedule = relationship(
ReportSchedule,
backref=backref("logs", cascade="all,delete,delete-orphan"),
foreign_keys=[report_schedule_id],
) | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/models/reports.py | 0.833019 | 0.218962 | reports.py | pypi |
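A tiny sketch of the pattern behind `ReportSchedule.validate_extra` above: dict values assigned to the JSON-text `extra` column are serialized on the way in, while `None` passes through untouched. The function name is illustrative, not part of the original source.

```python
import json

def serialize_extra(value):
    # Mirrors the @validates("extra") hook: store dicts as JSON text in the
    # Text column; leave None alone so the column default can apply.
    if value is not None:
        return json.dumps(value)
    return None

print(serialize_extra({"dashboard": {"activeTabs": []}}))
```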
"""A collection of ORM sqlalchemy models for SQL Lab"""
import re
from datetime import datetime
from typing import Any, Dict, List
import simplejson as json
import sqlalchemy as sqla
from flask import Markup
from flask_appbuilder import Model
from flask_appbuilder.models.decorators import renders
from humanize import naturaltime
from sqlalchemy import (
Boolean,
Column,
DateTime,
Enum,
ForeignKey,
Integer,
Numeric,
String,
Text,
)
from sqlalchemy.engine.url import URL
from sqlalchemy.orm import backref, relationship
from superset import security_manager
from superset.models.helpers import (
AuditMixinNullable,
ExtraJSONMixin,
ImportExportMixin,
)
from superset.models.tags import QueryUpdater
from superset.sql_parse import CtasMethod, ParsedQuery, Table
from superset.sqllab.limiting_factor import LimitingFactor
from superset.utils.core import QueryStatus, user_label
class Query(Model, ExtraJSONMixin):
"""ORM model for SQL query
Now that SQL Lab supports multi-statement execution, an entry in this
table may represent multiple SQL statements executed sequentially"""
__tablename__ = "query"
id = Column(Integer, primary_key=True)
client_id = Column(String(11), unique=True, nullable=False)
database_id = Column(Integer, ForeignKey("dbs.id"), nullable=False)
# Store the tmp table into the DB only if the user asks for it.
tmp_table_name = Column(String(256))
tmp_schema_name = Column(String(256))
user_id = Column(Integer, ForeignKey("ab_user.id"), nullable=True)
status = Column(String(16), default=QueryStatus.PENDING)
tab_name = Column(String(256))
sql_editor_id = Column(String(256))
schema = Column(String(256))
sql = Column(Text)
# Query to retrieve the results,
# used only in case of select_as_cta_used is true.
select_sql = Column(Text)
executed_sql = Column(Text)
# Could be configured in the superset config.
limit = Column(Integer)
limiting_factor = Column(
Enum(LimitingFactor), server_default=LimitingFactor.UNKNOWN
)
select_as_cta = Column(Boolean)
select_as_cta_used = Column(Boolean, default=False)
ctas_method = Column(String(16), default=CtasMethod.TABLE)
progress = Column(Integer, default=0) # 1..100
# # of rows in the result set or rows modified.
rows = Column(Integer)
error_message = Column(Text)
# key used to store the results in the results backend
results_key = Column(String(64), index=True)
# Using Numeric in place of DateTime for sub-second precision
# stored as seconds since epoch, allowing for milliseconds
start_time = Column(Numeric(precision=20, scale=6))
start_running_time = Column(Numeric(precision=20, scale=6))
end_time = Column(Numeric(precision=20, scale=6))
end_result_backend_time = Column(Numeric(precision=20, scale=6))
tracking_url = Column(Text)
changed_on = Column(
DateTime, default=datetime.utcnow, onupdate=datetime.utcnow, nullable=True
)
database = relationship(
"Database",
foreign_keys=[database_id],
backref=backref("queries", cascade="all, delete-orphan"),
)
user = relationship(security_manager.user_model, foreign_keys=[user_id])
__table_args__ = (sqla.Index("ti_user_id_changed_on", user_id, changed_on),)
def to_dict(self) -> Dict[str, Any]:
return {
"changedOn": self.changed_on,
"changed_on": self.changed_on.isoformat(),
"dbId": self.database_id,
"db": self.database.database_name,
"endDttm": self.end_time,
"errorMessage": self.error_message,
"executedSql": self.executed_sql,
"id": self.client_id,
"queryId": self.id,
"limit": self.limit,
"limitingFactor": self.limiting_factor,
"progress": self.progress,
"rows": self.rows,
"schema": self.schema,
"ctas": self.select_as_cta,
"serverId": self.id,
"sql": self.sql,
"sqlEditorId": self.sql_editor_id,
"startDttm": self.start_time,
"state": self.status.lower(),
"tab": self.tab_name,
"tempSchema": self.tmp_schema_name,
"tempTable": self.tmp_table_name,
"userId": self.user_id,
"user": user_label(self.user),
"resultsKey": self.results_key,
"trackingUrl": self.tracking_url,
"extra": self.extra,
}
@property
def name(self) -> str:
"""Name property"""
ts = datetime.now().isoformat()
ts = ts.replace("-", "").replace(":", "").split(".")[0]
tab = self.tab_name.replace(" ", "_").lower() if self.tab_name else "notab"
tab = re.sub(r"\W+", "", tab)
return f"sqllab_{tab}_{ts}"
@property
def database_name(self) -> str:
return self.database.name
@property
def username(self) -> str:
return self.user.username
@property
def sql_tables(self) -> List[Table]:
return list(ParsedQuery(self.sql).tables)
def raise_for_access(self) -> None:
"""
Raise an exception if the user cannot access the resource.
:raises SupersetSecurityException: If the user cannot access the resource
"""
security_manager.raise_for_access(query=self)
class SavedQuery(Model, AuditMixinNullable, ExtraJSONMixin, ImportExportMixin):
"""ORM model for SQL query"""
__tablename__ = "saved_query"
id = Column(Integer, primary_key=True)
user_id = Column(Integer, ForeignKey("ab_user.id"), nullable=True)
db_id = Column(Integer, ForeignKey("dbs.id"), nullable=True)
schema = Column(String(128))
label = Column(String(256))
description = Column(Text)
sql = Column(Text)
user = relationship(
security_manager.user_model,
backref=backref("saved_queries", cascade="all, delete-orphan"),
foreign_keys=[user_id],
)
database = relationship(
"Database",
foreign_keys=[db_id],
backref=backref("saved_queries", cascade="all, delete-orphan"),
)
rows = Column(Integer, nullable=True)
last_run = Column(DateTime, nullable=True)
export_parent = "database"
export_fields = [
"schema",
"label",
"description",
"sql",
]
def __repr__(self) -> str:
return str(self.label)
def to_dict(self) -> Dict[str, Any]:
return {
"id": self.id,
}
@property
def pop_tab_link(self) -> Markup:
return Markup(
f"""
<a href="/superset/sqllab?savedQueryId={self.id}">
<i class="fa fa-link"></i>
</a>
"""
)
@property
def user_email(self) -> str:
return self.user.email
@property
def sqlalchemy_uri(self) -> URL:
return self.database.sqlalchemy_uri
def url(self) -> str:
return "/superset/sqllab?savedQueryId={0}".format(self.id)
@property
def sql_tables(self) -> List[Table]:
return list(ParsedQuery(self.sql).tables)
@property
def last_run_humanized(self) -> str:
return naturaltime(datetime.now() - self.changed_on)
@property
def _last_run_delta_humanized(self) -> str:
return naturaltime(datetime.now() - self.changed_on)
@renders("changed_on")
def last_run_delta_humanized(self) -> str:
return self._last_run_delta_humanized
class TabState(Model, AuditMixinNullable, ExtraJSONMixin):
__tablename__ = "tab_state"
# basic info
id = Column(Integer, primary_key=True, autoincrement=True)
user_id = Column(Integer, ForeignKey("ab_user.id"))
label = Column(String(256))
active = Column(Boolean, default=False)
# selected DB and schema
database_id = Column(Integer, ForeignKey("dbs.id"))
database = relationship("Database", foreign_keys=[database_id])
schema = Column(String(256))
# tables that are open in the schema browser and their data previews
table_schemas = relationship(
"TableSchema",
cascade="all, delete-orphan",
backref="tab_state",
passive_deletes=True,
)
# the query in the textarea, and results (if any)
sql = Column(Text)
query_limit = Column(Integer)
# latest query that was run
latest_query_id = Column(Integer, ForeignKey("query.client_id"))
latest_query = relationship("Query")
# other properties
autorun = Column(Boolean, default=False)
template_params = Column(Text)
hide_left_bar = Column(Boolean, default=False)
# any saved queries that are associated with the Tab State
saved_query_id = Column(Integer, ForeignKey("saved_query.id"), nullable=True)
saved_query = relationship("SavedQuery", foreign_keys=[saved_query_id])
def to_dict(self) -> Dict[str, Any]:
return {
"id": self.id,
"user_id": self.user_id,
"label": self.label,
"active": self.active,
"database_id": self.database_id,
"schema": self.schema,
"table_schemas": [ts.to_dict() for ts in self.table_schemas],
"sql": self.sql,
"query_limit": self.query_limit,
"latest_query": self.latest_query.to_dict() if self.latest_query else None,
"autorun": self.autorun,
"template_params": self.template_params,
"hide_left_bar": self.hide_left_bar,
"saved_query": self.saved_query.to_dict() if self.saved_query else None,
}
class TableSchema(Model, AuditMixinNullable, ExtraJSONMixin):
__tablename__ = "table_schema"
id = Column(Integer, primary_key=True, autoincrement=True)
tab_state_id = Column(Integer, ForeignKey("tab_state.id", ondelete="CASCADE"))
database_id = Column(Integer, ForeignKey("dbs.id"), nullable=False)
database = relationship("Database", foreign_keys=[database_id])
schema = Column(String(256))
table = Column(String(256))
# JSON describing the schema, partitions, latest partition, etc.
description = Column(Text)
expanded = Column(Boolean, default=False)
def to_dict(self) -> Dict[str, Any]:
try:
description = json.loads(self.description)
except json.JSONDecodeError:
description = None
return {
"id": self.id,
"tab_state_id": self.tab_state_id,
"database_id": self.database_id,
"schema": self.schema,
"table": self.table,
"description": description,
"expanded": self.expanded,
}
# events for updating tags
sqla.event.listen(SavedQuery, "after_insert", QueryUpdater.after_insert)
sqla.event.listen(SavedQuery, "after_update", QueryUpdater.after_update)
sqla.event.listen(SavedQuery, "after_delete", QueryUpdater.after_delete) | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/models/sql_lab.py | 0.766643 | 0.33372 | sql_lab.py | pypi |
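A self-contained replay of the string munging inside `Query.name` above, with the timestamp passed in explicitly so the result is reproducible (the real property uses `datetime.now()`). The helper name is hypothetical.

```python
import re

def sqllab_result_name(tab_name, ts_iso):
    # Replicates Query.name: strip the ISO timestamp of separators and
    # sub-second precision, and reduce the tab label to word characters.
    ts = ts_iso.replace("-", "").replace(":", "").split(".")[0]
    tab = tab_name.replace(" ", "_").lower() if tab_name else "notab"
    tab = re.sub(r"\W+", "", tab)
    return f"sqllab_{tab}_{ts}"

print(sqllab_result_name("My Tab!", "2021-05-01T12:34:56.789"))
# sqllab_my_tab_20210501T123456
```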
"""Models for scheduled execution of jobs"""
import json
import textwrap
from datetime import datetime
from typing import Any, Optional
from flask_appbuilder import Model
from sqlalchemy import (
Boolean,
Column,
DateTime,
Float,
ForeignKey,
Integer,
String,
Table,
Text,
)
from sqlalchemy.ext.declarative import declared_attr
from sqlalchemy.orm import backref, relationship, RelationshipProperty
from superset import db, security_manager
from superset.models.helpers import AuditMixinNullable
metadata = Model.metadata # pylint: disable=no-member
alert_owner = Table(
"alert_owner",
metadata,
Column("id", Integer, primary_key=True),
Column("user_id", Integer, ForeignKey("ab_user.id")),
Column("alert_id", Integer, ForeignKey("alerts.id")),
)
class Alert(Model, AuditMixinNullable):
"""Schedules for emailing slices / dashboards"""
__tablename__ = "alerts"
id = Column(Integer, primary_key=True)
label = Column(String(150), nullable=False)
active = Column(Boolean, default=True, index=True)
# TODO(bkyryliuk): enforce minimal supported frequency
crontab = Column(String(50), nullable=False)
alert_type = Column(String(50))
owners = relationship(security_manager.user_model, secondary=alert_owner)
recipients = Column(Text)
slack_channel = Column(Text)
# TODO(bkyryliuk): implement log_retention
log_retention = Column(Integer, default=90)
grace_period = Column(Integer, default=60 * 60 * 24)
slice_id = Column(Integer, ForeignKey("slices.id"))
slice = relationship("Slice", backref="alerts", foreign_keys=[slice_id])
dashboard_id = Column(Integer, ForeignKey("dashboards.id"))
dashboard = relationship("Dashboard", backref="alert", foreign_keys=[dashboard_id])
last_eval_dttm = Column(DateTime, default=datetime.utcnow)
last_state = Column(String(10))
# Observation related columns
sql = Column(Text, nullable=False)
# Validation related columns
validator_type = Column(String(100), nullable=False)
validator_config = Column(
Text,
default=textwrap.dedent(
"""
{
}
"""
),
)
@declared_attr
def database_id(self) -> int:
return Column(Integer, ForeignKey("dbs.id"), nullable=False)
@declared_attr
def database(self) -> RelationshipProperty:
return relationship(
"Database",
foreign_keys=[self.database_id],
backref=backref("sql_observers", cascade="all, delete-orphan"),
)
def get_last_observation(self) -> Optional[Any]:
observations = list(
db.session.query(SQLObservation)
.filter_by(alert_id=self.id)
.order_by(SQLObservation.dttm.desc())
.limit(1)
)
if observations:
return observations[0]
return None
def __str__(self) -> str:
return f"<{self.id}:{self.label}>"
@property
def pretty_config(self) -> str:
"""String representing the comparison that will trigger a validator"""
config = json.loads(self.validator_config)
if self.validator_type.lower() == "operator":
return f"{config['op']} {config['threshold']}"
if self.validator_type.lower() == "not null":
return "!= Null or 0"
return ""
class AlertLog(Model):
"""Keeps track of alert-related operations"""
__tablename__ = "alert_logs"
id = Column(Integer, primary_key=True)
scheduled_dttm = Column(DateTime)
dttm_start = Column(DateTime, default=datetime.utcnow)
dttm_end = Column(DateTime, default=datetime.utcnow)
alert_id = Column(Integer, ForeignKey("alerts.id"))
alert = relationship("Alert", backref="logs", foreign_keys=[alert_id])
state = Column(String(10))
@property
def duration(self) -> float:
return (self.dttm_end - self.dttm_start).total_seconds()
# TODO: Currently SQLObservation table will constantly grow with no limit,
# add some retention restriction or move to a more scalable db e.g.
# https://github.com/apache/superset/blob/master/superset/utils/log.py#L32
class SQLObservation(Model): # pylint: disable=too-few-public-methods
"""Keeps track of the collected observations for alerts."""
__tablename__ = "sql_observations"
id = Column(Integer, primary_key=True)
dttm = Column(DateTime, default=datetime.utcnow, index=True)
alert_id = Column(Integer, ForeignKey("alerts.id"))
alert = relationship(
"Alert",
foreign_keys=[alert_id],
backref=backref("observations", cascade="all, delete-orphan"),
)
value = Column(Float)
error_msg = Column(String(500)) | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/models/alerts.py | 0.692434 | 0.204401 | alerts.py | pypi |
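A plain-function version of `Alert.pretty_config` above, kept free of the ORM so it can run standalone. It shows how the stored `validator_config` JSON is rendered into a human-readable trigger condition.

```python
import json

def pretty_config(validator_type, validator_config):
    # Render the comparison that will trigger a validator, matching the
    # Alert.pretty_config property: operator validators show "op threshold".
    config = json.loads(validator_config)
    if validator_type.lower() == "operator":
        return f"{config['op']} {config['threshold']}"
    if validator_type.lower() == "not null":
        return "!= Null or 0"
    return ""

print(pretty_config("operator", '{"op": ">=", "threshold": 10}'))  # >= 10
```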
"""Models for scheduled execution of jobs"""
import enum
from typing import Optional, Type
from flask_appbuilder import Model
from sqlalchemy import Boolean, Column, Enum, ForeignKey, Integer, String, Text
from sqlalchemy.ext.declarative import declared_attr
from sqlalchemy.orm import relationship, RelationshipProperty
from superset import security_manager
from superset.models.alerts import Alert
from superset.models.helpers import AuditMixinNullable, ImportExportMixin
metadata = Model.metadata # pylint: disable=no-member
class ScheduleType(str, enum.Enum):
# pylint: disable=invalid-name
slice = "slice"
dashboard = "dashboard"
alert = "alert"
class EmailDeliveryType(str, enum.Enum):
# pylint: disable=invalid-name
attachment = "Attachment"
inline = "Inline"
class SliceEmailReportFormat(str, enum.Enum):
# pylint: disable=invalid-name
visualization = "Visualization"
data = "Raw data"
class EmailSchedule:
"""Schedules for emailing slices / dashboards"""
__tablename__ = "email_schedules"
id = Column(Integer, primary_key=True)
active = Column(Boolean, default=True, index=True)
crontab = Column(String(50))
@declared_attr
def user_id(self) -> int:
return Column(Integer, ForeignKey("ab_user.id"))
@declared_attr
def user(self) -> RelationshipProperty:
return relationship(
security_manager.user_model,
backref=self.__tablename__,
foreign_keys=[self.user_id],
)
recipients = Column(Text)
slack_channel = Column(Text)
deliver_as_group = Column(Boolean, default=False)
delivery_type = Column(Enum(EmailDeliveryType))
class DashboardEmailSchedule(
Model, AuditMixinNullable, ImportExportMixin, EmailSchedule
):
__tablename__ = "dashboard_email_schedules"
dashboard_id = Column(Integer, ForeignKey("dashboards.id"))
dashboard = relationship(
"Dashboard", backref="email_schedules", foreign_keys=[dashboard_id]
)
class SliceEmailSchedule(Model, AuditMixinNullable, ImportExportMixin, EmailSchedule):
__tablename__ = "slice_email_schedules"
slice_id = Column(Integer, ForeignKey("slices.id"))
slice = relationship("Slice", backref="email_schedules", foreign_keys=[slice_id])
email_format = Column(Enum(SliceEmailReportFormat))
def get_scheduler_model(report_type: str) -> Optional[Type[EmailSchedule]]:
if report_type == ScheduleType.dashboard:
return DashboardEmailSchedule
if report_type == ScheduleType.slice:
return SliceEmailSchedule
if report_type == ScheduleType.alert:
return Alert
return None | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/models/schedules.py | 0.781164 | 0.171477 | schedules.py | pypi |
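The dispatch in `get_scheduler_model` works because `ScheduleType` subclasses both `str` and `Enum`, so a plain string compares equal to the enum member. A self-contained sketch of that mechanism (the string return values stand in for the real model classes):

```python
import enum
from typing import Optional

class ScheduleType(str, enum.Enum):
    slice = "slice"
    dashboard = "dashboard"
    alert = "alert"

def get_scheduler_model(report_type: str) -> Optional[str]:
    # Because ScheduleType is a str subclass, report_type may be either a
    # bare string or an enum member; both compare equal here.
    if report_type == ScheduleType.dashboard:
        return "DashboardEmailSchedule"
    if report_type == ScheduleType.slice:
        return "SliceEmailSchedule"
    if report_type == ScheduleType.alert:
        return "Alert"
    return None
```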
import logging
from datetime import datetime
from typing import Optional
import pandas as pd
from sqlalchemy.orm import Session
from superset import jinja_context
from superset.models.alerts import Alert, SQLObservation
logger = logging.getLogger("tasks.email_reports")
# Session needs to be passed along in the celery workers and db.session cannot be used.
# For more info see: https://github.com/apache/superset/issues/10530
def observe(alert_id: int, session: Session) -> Optional[str]:
"""Collect observations for the alert.
Returns an error message if the observer value was not valid
"""
alert = session.query(Alert).filter_by(id=alert_id).one()
value = None
tp = jinja_context.get_template_processor(database=alert.database)
rendered_sql = tp.process_template(alert.sql)
df = alert.database.get_df(rendered_sql)
error_msg = validate_observer_result(df, alert.id, alert.label)
if not error_msg and not df.empty and df.to_records()[0][1] is not None:
value = float(df.to_records()[0][1])
observation = SQLObservation(
alert_id=alert_id, dttm=datetime.utcnow(), value=value, error_msg=error_msg,
)
session.add(observation)
session.commit()
return error_msg
def validate_observer_result(
sql_result: pd.DataFrame, alert_id: int, alert_label: str
) -> Optional[str]:
"""
Verifies a DataFrame SQL query result to see if
it contains a valid value for a SQLObservation.
Returns an error message if the result is invalid.
"""
try:
if sql_result.empty:
# empty results are used for the not null validator
return None
rows = sql_result.to_records()
assert (
len(rows) == 1
), f"Observer for alert <{alert_id}:{alert_label}> returned more than 1 row"
assert (
len(rows[0]) == 2
), f"Observer for alert <{alert_id}:{alert_label}> returned more than 1 column"
if rows[0][1] is None:
return None
float(rows[0][1])
except AssertionError as error:
return str(error)
except (TypeError, ValueError):
return (
f"Observer for alert <{alert_id}:{alert_label}> returned a non-number value"
)
return None | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/tasks/alerts/observer.py | 0.900333 | 0.356251 | observer.py | pypi |
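The validation rules in `validate_observer_result` can be sketched without pandas: the observer query must return at most one row with exactly one numeric value column. This pandas-free re-implementation operates on plain row tuples (with a leading index, mirroring `DataFrame.to_records()`) and is an illustration, not the real helper:

```python
from typing import Optional, Sequence, Tuple

def validate_rows(
    rows: Sequence[Tuple], alert_id: int, label: str
) -> Optional[str]:
    if not rows:
        # Empty results are allowed; they feed the "not null" validator.
        return None
    if len(rows) != 1:
        return f"Observer for alert <{alert_id}:{label}> returned more than 1 row"
    if len(rows[0]) != 2:
        return (
            f"Observer for alert <{alert_id}:{label}> returned more than 1 column"
        )
    if rows[0][1] is None:
        return None
    try:
        float(rows[0][1])  # the single value must be coercible to a number
    except (TypeError, ValueError):
        return (
            f"Observer for alert <{alert_id}:{label}> returned a non-number value"
        )
    return None
```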
import enum
import json
from operator import eq, ge, gt, le, lt, ne
from typing import Callable, Optional
import numpy as np
from superset.exceptions import SupersetException
from superset.models.alerts import Alert
OPERATOR_FUNCTIONS = {">=": ge, ">": gt, "<=": le, "<": lt, "==": eq, "!=": ne}
class AlertValidatorType(str, enum.Enum):
NOT_NULL = "not null"
OPERATOR = "operator"
@classmethod
def valid_type(cls, validator_type: str) -> bool:
return any(val_type.value == validator_type for val_type in cls)
def check_validator(validator_type: str, config: str) -> None:
if not AlertValidatorType.valid_type(validator_type):
raise SupersetException(
f"Error: {validator_type} is not a valid validator type."
)
config_dict = json.loads(config)
if validator_type == AlertValidatorType.OPERATOR.value:
if not (config_dict.get("op") and config_dict.get("threshold") is not None):
raise SupersetException(
"Error: Operator Validator needs specified operator and threshold "
'values. Add "op" and "threshold" to config.'
)
if config_dict["op"] not in OPERATOR_FUNCTIONS:
raise SupersetException(
f'Error: {config_dict["op"]} is an invalid operator type. Change '
f'the "op" value in the config to one of '
f'["<", "<=", ">", ">=", "==", "!="]'
)
if not isinstance(config_dict["threshold"], (int, float)):
raise SupersetException(
f'Error: {config_dict["threshold"]} is an invalid threshold value.'
f' Change the "threshold" value in the config.'
)
def not_null_validator(
alert: Alert, validator_config: str # pylint: disable=unused-argument
) -> bool:
"""Returns True if a recent observation is not NULL"""
observation = alert.get_last_observation()
# TODO: Validate malformed observations/observations with errors separately
if (
not observation
or observation.error_msg
or observation.value in (0, None, np.nan)
):
return False
return True
def operator_validator(alert: Alert, validator_config: str) -> bool:
"""
Returns True if a recent observation is greater than or equal to
the value given in the validator config
"""
observation = alert.get_last_observation()
if not observation or observation.value in (None, np.nan):
return False
operator = json.loads(validator_config)["op"]
threshold = json.loads(validator_config)["threshold"]
return OPERATOR_FUNCTIONS[operator](observation.value, threshold)
def get_validator_function(
validator_type: str,
) -> Optional[Callable[[Alert, str], bool]]:
"""Returns a validation function based on validator_type"""
alert_validators = {
AlertValidatorType.NOT_NULL.value: not_null_validator,
AlertValidatorType.OPERATOR.value: operator_validator,
}
if alert_validators.get(validator_type.lower()):
return alert_validators[validator_type.lower()]
return None | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/tasks/alerts/validator.py | 0.695131 | 0.215702 | validator.py | pypi |
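The heart of `operator_validator` is a table-driven comparison: the validator config is a JSON blob carrying `"op"` and `"threshold"`, dispatched through the stdlib `operator` functions. A standalone sketch of just that dispatch:

```python
import json
from operator import eq, ge, gt, le, lt, ne

# Same operator table as the module above.
OPERATOR_FUNCTIONS = {">=": ge, ">": gt, "<=": le, "<": lt, "==": eq, "!=": ne}

def check_threshold(observed_value: float, validator_config: str) -> bool:
    # Parse the config and apply the chosen comparison to the observation.
    config = json.loads(validator_config)
    return OPERATOR_FUNCTIONS[config["op"]](observed_value, config["threshold"])
```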
from typing import Union
from marshmallow import fields, Schema, ValidationError
from marshmallow.validate import Length
from superset.exceptions import SupersetException
from superset.utils import core as utils
openapi_spec_methods_override = {
"get": {"get": {"description": "Get an Annotation layer"}},
"get_list": {
"get": {
"description": "Get a list of Annotation layers, use Rison or JSON "
"query parameters for filtering, sorting,"
" pagination and for selecting specific"
" columns and metadata.",
}
},
"post": {"post": {"description": "Create an Annotation layer"}},
"put": {"put": {"description": "Update an Annotation layer"}},
"delete": {"delete": {"description": "Delete Annotation layer"}},
}
get_delete_ids_schema = {"type": "array", "items": {"type": "integer"}}
annotation_start_dttm = "The annotation start date time"
annotation_end_dttm = "The annotation end date time"
annotation_layer = "The annotation layer id"
annotation_short_descr = "A short description"
annotation_long_descr = "A long description"
annotation_json_metadata = "JSON metadata"
def validate_json(value: Union[bytes, bytearray, str]) -> None:
try:
utils.validate_json(value)
except SupersetException as ex:
raise ValidationError("JSON not valid") from ex
class AnnotationPostSchema(Schema):
short_descr = fields.String(
description=annotation_short_descr,
required=True,
allow_none=False,
validate=[Length(1, 500)],
)
long_descr = fields.String(description=annotation_long_descr, allow_none=True)
start_dttm = fields.DateTime(
description=annotation_start_dttm, required=True, allow_none=False,
)
end_dttm = fields.DateTime(
description=annotation_end_dttm, required=True, allow_none=False
)
json_metadata = fields.String(
description=annotation_json_metadata, validate=validate_json, allow_none=True,
)
class AnnotationPutSchema(Schema):
short_descr = fields.String(
description=annotation_short_descr, required=False, validate=[Length(1, 500)]
)
long_descr = fields.String(
description=annotation_long_descr, required=False, allow_none=True
)
start_dttm = fields.DateTime(description=annotation_start_dttm, required=False)
end_dttm = fields.DateTime(description=annotation_end_dttm, required=False)
json_metadata = fields.String(
description=annotation_json_metadata,
validate=validate_json,
required=False,
allow_none=True,
) | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/annotation_layers/annotations/schemas.py | 0.801625 | 0.301812 | schemas.py | pypi |
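The schemas above delegate JSON checking to `utils.validate_json` and translate the failure into a marshmallow `ValidationError`. A minimal stdlib equivalent of that pattern (an assumption about the helper's behavior, using `ValueError` in place of the marshmallow type):

```python
import json
from typing import Union

def validate_json_metadata(value: Union[bytes, bytearray, str]) -> None:
    # Attempt a parse; convert the low-level failure into a simple message,
    # chaining the original exception for debugging.
    try:
        json.loads(value)
    except json.JSONDecodeError as ex:
        raise ValueError("JSON not valid") from ex
```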
from typing import Any
from flask_babel import lazy_gettext as _
from sqlalchemy import and_, or_
from sqlalchemy.orm.query import Query
from superset import security_manager
from superset.connectors.sqla.models import SqlaTable
from superset.models.slice import Slice
from superset.views.base import BaseFilter
from superset.views.base_api import BaseFavoriteFilter
class ChartAllTextFilter(BaseFilter): # pylint: disable=too-few-public-methods
name = _("All Text")
arg_name = "chart_all_text"
def apply(self, query: Query, value: Any) -> Query:
if not value:
return query
ilike_value = f"%{value}%"
return query.filter(
or_(
Slice.slice_name.ilike(ilike_value),
Slice.description.ilike(ilike_value),
Slice.viz_type.ilike(ilike_value),
SqlaTable.table_name.ilike(ilike_value),
)
)
class ChartFavoriteFilter(BaseFavoriteFilter): # pylint: disable=too-few-public-methods
"""
Custom filter for the GET list that filters all charts that a user has favored
"""
arg_name = "chart_is_favorite"
class_name = "slice"
model = Slice
class ChartCertifiedFilter(BaseFilter): # pylint: disable=too-few-public-methods
"""
Custom filter for the GET list that filters all certified charts
"""
name = _("Is certified")
arg_name = "chart_is_certified"
def apply(self, query: Query, value: Any) -> Query:
if value is True:
return query.filter(and_(Slice.certified_by.isnot(None)))
if value is False:
return query.filter(and_(Slice.certified_by.is_(None)))
return query
class ChartFilter(BaseFilter): # pylint: disable=too-few-public-methods
def apply(self, query: Query, value: Any) -> Query:
if security_manager.can_access_all_datasources():
return query
perms = security_manager.user_view_menu_names("datasource_access")
schema_perms = security_manager.user_view_menu_names("schema_access")
return query.filter(
or_(self.model.perm.in_(perms), self.model.schema_perm.in_(schema_perms))
) | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/charts/filters.py | 0.806853 | 0.164047 | filters.py | pypi |
import logging
import re
from datetime import datetime
from typing import Any, Dict, List, Optional, Pattern, Tuple
from flask_babel import gettext as __
from superset.db_engine_specs.base import BaseEngineSpec, LimitMethod
from superset.errors import SupersetErrorType
from superset.utils import core as utils
logger = logging.getLogger(__name__)
# Regular expressions to catch custom errors
CONNECTION_ACCESS_DENIED_REGEX = re.compile("Adaptive Server connection failed")
CONNECTION_INVALID_HOSTNAME_REGEX = re.compile(
r"Adaptive Server is unavailable or does not exist \((?P<hostname>.*?)\)"
"(?!.*Net-Lib error).*$"
)
CONNECTION_PORT_CLOSED_REGEX = re.compile(
r"Net-Lib error during Connection refused \(61\)"
)
CONNECTION_HOST_DOWN_REGEX = re.compile(
r"Net-Lib error during Operation timed out \(60\)"
)
class MssqlEngineSpec(BaseEngineSpec):
engine = "mssql"
engine_name = "Microsoft SQL Server"
limit_method = LimitMethod.WRAP_SQL
max_column_name_length = 128
allows_cte_in_subquery = False
_time_grain_expressions = {
None: "{col}",
"PT1S": "DATEADD(SECOND, DATEDIFF(SECOND, '2000-01-01', {col}), '2000-01-01')",
"PT1M": "DATEADD(MINUTE, DATEDIFF(MINUTE, 0, {col}), 0)",
"PT5M": "DATEADD(MINUTE, DATEDIFF(MINUTE, 0, {col}) / 5 * 5, 0)",
"PT10M": "DATEADD(MINUTE, DATEDIFF(MINUTE, 0, {col}) / 10 * 10, 0)",
"PT15M": "DATEADD(MINUTE, DATEDIFF(MINUTE, 0, {col}) / 15 * 15, 0)",
"PT30M": "DATEADD(MINUTE, DATEDIFF(MINUTE, 0, {col}) / 30 * 30, 0)",
"PT1H": "DATEADD(HOUR, DATEDIFF(HOUR, 0, {col}), 0)",
"P1D": "DATEADD(DAY, DATEDIFF(DAY, 0, {col}), 0)",
"P1W": "DATEADD(DAY, 1 - DATEPART(WEEKDAY, {col}),"
" DATEADD(DAY, DATEDIFF(DAY, 0, {col}), 0))",
"P1M": "DATEADD(MONTH, DATEDIFF(MONTH, 0, {col}), 0)",
"P3M": "DATEADD(QUARTER, DATEDIFF(QUARTER, 0, {col}), 0)",
"P1Y": "DATEADD(YEAR, DATEDIFF(YEAR, 0, {col}), 0)",
"1969-12-28T00:00:00Z/P1W": "DATEADD(DAY, -1,"
" DATEADD(WEEK, DATEDIFF(WEEK, 0, {col}), 0))",
"1969-12-29T00:00:00Z/P1W": "DATEADD(WEEK,"
" DATEDIFF(WEEK, 0, DATEADD(DAY, -1, {col})), 0)",
}
custom_errors: Dict[Pattern[str], Tuple[str, SupersetErrorType, Dict[str, Any]]] = {
CONNECTION_ACCESS_DENIED_REGEX: (
__(
'Either the username "%(username)s", password, '
'or database name "%(database)s" is incorrect.'
),
SupersetErrorType.CONNECTION_ACCESS_DENIED_ERROR,
{},
),
CONNECTION_INVALID_HOSTNAME_REGEX: (
__('The hostname "%(hostname)s" cannot be resolved.'),
SupersetErrorType.CONNECTION_INVALID_HOSTNAME_ERROR,
{},
),
CONNECTION_PORT_CLOSED_REGEX: (
__('Port %(port)s on hostname "%(hostname)s" refused the connection.'),
SupersetErrorType.CONNECTION_PORT_CLOSED_ERROR,
{},
),
CONNECTION_HOST_DOWN_REGEX: (
__(
'The host "%(hostname)s" might be down, and can\'t be '
"reached on port %(port)s."
),
SupersetErrorType.CONNECTION_HOST_DOWN_ERROR,
{},
),
}
@classmethod
def epoch_to_dttm(cls) -> str:
return "dateadd(S, {col}, '1970-01-01')"
@classmethod
def convert_dttm(
cls, target_type: str, dttm: datetime, db_extra: Optional[Dict[str, Any]] = None
) -> Optional[str]:
tt = target_type.upper()
if tt == utils.TemporalType.DATE:
return f"CONVERT(DATE, '{dttm.date().isoformat()}', 23)"
if tt == utils.TemporalType.DATETIME:
datetime_formatted = dttm.isoformat(timespec="milliseconds")
return f"""CONVERT(DATETIME, '{datetime_formatted}', 126)"""
if tt == utils.TemporalType.SMALLDATETIME:
datetime_formatted = dttm.isoformat(sep=" ", timespec="seconds")
return f"""CONVERT(SMALLDATETIME, '{datetime_formatted}', 20)"""
return None
@classmethod
def fetch_data(
cls, cursor: Any, limit: Optional[int] = None
) -> List[Tuple[Any, ...]]:
data = super().fetch_data(cursor, limit)
# Lists of `pyodbc.Row` need to be unpacked further
return cls.pyodbc_rows_to_tuples(data)
@classmethod
def extract_error_message(cls, ex: Exception) -> str:
if str(ex).startswith("(8155,"):
return (
f"{cls.engine} error: All your SQL functions need to "
"have an alias on MSSQL. For example: SELECT COUNT(*) AS C1 FROM TABLE1"
)
return f"{cls.engine} error: {cls._extract_error_message(ex)}"
class AzureSynapseSpec(MssqlEngineSpec):
engine = "mssql"
engine_name = "Azure Synapse"
default_driver = "pyodbc" | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/db_engine_specs/mssql.py | 0.759671 | 0.154504 | mssql.py | pypi |
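The `_time_grain_expressions` tables used throughout these engine specs are plain `str.format` templates keyed by ISO 8601 durations; expanding one is a single format call with the column name. A small sketch using two of the MSSQL entries:

```python
# Two entries copied from the MSSQL grain table above.
time_grains = {
    "PT1H": "DATEADD(HOUR, DATEDIFF(HOUR, 0, {col}), 0)",
    "P1D": "DATEADD(DAY, DATEDIFF(DAY, 0, {col}), 0)",
}

def apply_time_grain(expr_template: str, col: str) -> str:
    # Substitute the physical column name into the grain template.
    return expr_template.format(col=col)
```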
from datetime import datetime
from typing import Any, Dict, List, Optional, Type
from superset.db_engine_specs.base import BaseEngineSpec, LimitMethod
from superset.db_engine_specs.exceptions import (
SupersetDBAPIDatabaseError,
SupersetDBAPIOperationalError,
SupersetDBAPIProgrammingError,
)
from superset.sql_parse import ParsedQuery
from superset.utils import core as utils
class KustoSqlEngineSpec(BaseEngineSpec): # pylint: disable=abstract-method
limit_method = LimitMethod.WRAP_SQL
engine = "kustosql"
engine_name = "KustoSQL"
time_groupby_inline = True
time_secondary_columns = True
allows_joins = True
allows_subqueries = True
allows_sql_comments = False
_time_grain_expressions = {
None: "{col}",
"PT1S": "DATEADD(second, DATEDIFF(second, '2000-01-01', {col}), '2000-01-01')",
"PT1M": "DATEADD(minute, DATEDIFF(minute, 0, {col}), 0)",
"PT5M": "DATEADD(minute, DATEDIFF(minute, 0, {col}) / 5 * 5, 0)",
"PT10M": "DATEADD(minute, DATEDIFF(minute, 0, {col}) / 10 * 10, 0)",
"PT15M": "DATEADD(minute, DATEDIFF(minute, 0, {col}) / 15 * 15, 0)",
"PT0.5H": "DATEADD(minute, DATEDIFF(minute, 0, {col}) / 30 * 30, 0)",
"PT1H": "DATEADD(hour, DATEDIFF(hour, 0, {col}), 0)",
"P1D": "DATEADD(day, DATEDIFF(day, 0, {col}), 0)",
"P1W": "DATEADD(day, -1, DATEADD(week, DATEDIFF(week, 0, {col}), 0))",
"P1M": "DATEADD(month, DATEDIFF(month, 0, {col}), 0)",
"P0.25Y": "DATEADD(quarter, DATEDIFF(quarter, 0, {col}), 0)",
"P1Y": "DATEADD(year, DATEDIFF(year, 0, {col}), 0)",
"1969-12-28T00:00:00Z/P1W": "DATEADD(day, -1,"
" DATEADD(week, DATEDIFF(week, 0, {col}), 0))",
"1969-12-29T00:00:00Z/P1W": "DATEADD(week,"
" DATEDIFF(week, 0, DATEADD(day, -1, {col})), 0)",
}
type_code_map: Dict[int, str] = {} # loaded from get_datatype only if needed
@classmethod
def get_dbapi_exception_mapping(cls) -> Dict[Type[Exception], Type[Exception]]:
# pylint: disable=import-outside-toplevel,import-error
import sqlalchemy_kusto.errors as kusto_exceptions
return {
kusto_exceptions.DatabaseError: SupersetDBAPIDatabaseError,
kusto_exceptions.OperationalError: SupersetDBAPIOperationalError,
kusto_exceptions.ProgrammingError: SupersetDBAPIProgrammingError,
}
@classmethod
def convert_dttm(
cls, target_type: str, dttm: datetime, db_extra: Optional[Dict[str, Any]] = None
) -> Optional[str]:
tt = target_type.upper()
if tt == utils.TemporalType.DATE:
return f"CONVERT(DATE, '{dttm.date().isoformat()}', 23)"
if tt == utils.TemporalType.DATETIME:
datetime_formatted = dttm.isoformat(timespec="milliseconds")
return f"""CONVERT(DATETIME, '{datetime_formatted}', 126)"""
if tt == utils.TemporalType.SMALLDATETIME:
datetime_formatted = dttm.isoformat(sep=" ", timespec="seconds")
return f"""CONVERT(SMALLDATETIME, '{datetime_formatted}', 20)"""
if tt == utils.TemporalType.TIMESTAMP:
datetime_formatted = dttm.isoformat(sep=" ", timespec="seconds")
return f"""CONVERT(TIMESTAMP, '{datetime_formatted}', 20)"""
return None
@classmethod
def is_readonly_query(cls, parsed_query: ParsedQuery) -> bool:
"""Pessimistic readonly, 100% sure statement won't mutate anything"""
return parsed_query.sql.lower().startswith("select")
class KustoKqlEngineSpec(BaseEngineSpec): # pylint: disable=abstract-method
limit_method = LimitMethod.WRAP_SQL
engine = "kustokql"
engine_name = "KustoKQL"
time_groupby_inline = True
time_secondary_columns = True
allows_joins = True
allows_subqueries = True
allows_sql_comments = False
run_multiple_statements_as_one = True
_time_grain_expressions = {
None: "{col}",
"PT1S": "{col}/ time(1s)",
"PT1M": "{col}/ time(1min)",
"PT1H": "{col}/ time(1h)",
"P1D": "{col}/ time(1d)",
"P1M": "datetime_diff('month',CreateDate, datetime(0001-01-01 00:00:00))+1",
"P1Y": "datetime_diff('year',CreateDate, datetime(0001-01-01 00:00:00))+1",
}
type_code_map: Dict[int, str] = {} # loaded from get_datatype only if needed
@classmethod
def get_dbapi_exception_mapping(cls) -> Dict[Type[Exception], Type[Exception]]:
# pylint: disable=import-outside-toplevel,import-error
import sqlalchemy_kusto.errors as kusto_exceptions
return {
kusto_exceptions.DatabaseError: SupersetDBAPIDatabaseError,
kusto_exceptions.OperationalError: SupersetDBAPIOperationalError,
kusto_exceptions.ProgrammingError: SupersetDBAPIProgrammingError,
}
@classmethod
def convert_dttm(
cls, target_type: str, dttm: datetime, db_extra: Optional[Dict[str, Any]] = None
) -> Optional[str]:
if target_type.upper() in [
utils.TemporalType.DATETIME,
utils.TemporalType.TIMESTAMP,
]:
return f"""datetime({dttm.isoformat(timespec="microseconds")})"""
if target_type.upper() == utils.TemporalType.DATE:
return f"""datetime({dttm.date().isoformat()})"""
return None
@classmethod
def is_readonly_query(cls, parsed_query: ParsedQuery) -> bool:
"""
Pessimistic readonly, 100% sure statement won't mutate anything.
"""
return KustoKqlEngineSpec.is_select_query(
parsed_query
) or parsed_query.sql.startswith(".show")
@classmethod
def is_select_query(cls, parsed_query: ParsedQuery) -> bool:
return not parsed_query.sql.startswith(".")
@classmethod
def parse_sql(cls, sql: str) -> List[str]:
"""
Kusto supports a single query statement, but it can include subqueries
and variables declared via the `let` keyword.
"""
return [sql] | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/db_engine_specs/kusto.py | 0.815122 | 0.17515 | kusto.py | pypi |
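The KQL read-only check above boils down to two string tests: a statement is a query unless it starts with a dot (a management command), and `.show` commands are additionally treated as read-only. A standalone sketch of that logic, with a `strip()` added for illustration (the original operates on the raw SQL):

```python
def is_select_query(sql: str) -> bool:
    # Dot-prefixed statements are Kusto management commands, not queries.
    return not sql.strip().startswith(".")

def is_readonly_query(sql: str) -> bool:
    sql = sql.strip()
    return is_select_query(sql) or sql.startswith(".show")
```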
import json
import re
from datetime import datetime
from typing import Any, Dict, List, Optional, Pattern, Tuple, TYPE_CHECKING
from urllib import parse
from apispec import APISpec
from apispec.ext.marshmallow import MarshmallowPlugin
from flask_babel import gettext as __
from marshmallow import fields, Schema
from sqlalchemy.engine.url import make_url, URL
from typing_extensions import TypedDict
from superset.db_engine_specs.postgres import PostgresBaseEngineSpec
from superset.errors import ErrorLevel, SupersetError, SupersetErrorType
from superset.models.sql_lab import Query
from superset.utils import core as utils
if TYPE_CHECKING:
from superset.models.core import Database
# Regular expressions to catch custom errors
OBJECT_DOES_NOT_EXIST_REGEX = re.compile(
r"Object (?P<object>.*?) does not exist or not authorized."
)
SYNTAX_ERROR_REGEX = re.compile(
"syntax error line (?P<line>.+?) at position (?P<position>.+?) "
"unexpected '(?P<syntax_error>.+?)'."
)
class SnowflakeParametersSchema(Schema):
username = fields.Str(required=True)
password = fields.Str(required=True)
account = fields.Str(required=True)
database = fields.Str(required=True)
role = fields.Str(required=True)
warehouse = fields.Str(required=True)
class SnowflakeParametersType(TypedDict):
username: str
password: str
account: str
database: str
role: str
warehouse: str
class SnowflakeEngineSpec(PostgresBaseEngineSpec):
engine = "snowflake"
engine_name = "Snowflake"
force_column_alias_quotes = True
max_column_name_length = 256
parameters_schema = SnowflakeParametersSchema()
default_driver = "snowflake"
sqlalchemy_uri_placeholder = "snowflake://"
_time_grain_expressions = {
None: "{col}",
"PT1S": "DATE_TRUNC('SECOND', {col})",
"PT1M": "DATE_TRUNC('MINUTE', {col})",
"PT5M": "DATEADD(MINUTE, FLOOR(DATE_PART(MINUTE, {col}) / 5) * 5, \
DATE_TRUNC('HOUR', {col}))",
"PT10M": "DATEADD(MINUTE, FLOOR(DATE_PART(MINUTE, {col}) / 10) * 10, \
DATE_TRUNC('HOUR', {col}))",
"PT15M": "DATEADD(MINUTE, FLOOR(DATE_PART(MINUTE, {col}) / 15) * 15, \
DATE_TRUNC('HOUR', {col}))",
"PT30M": "DATEADD(MINUTE, FLOOR(DATE_PART(MINUTE, {col}) / 30) * 30, \
DATE_TRUNC('HOUR', {col}))",
"PT1H": "DATE_TRUNC('HOUR', {col})",
"P1D": "DATE_TRUNC('DAY', {col})",
"P1W": "DATE_TRUNC('WEEK', {col})",
"P1M": "DATE_TRUNC('MONTH', {col})",
"P3M": "DATE_TRUNC('QUARTER', {col})",
"P1Y": "DATE_TRUNC('YEAR', {col})",
}
custom_errors: Dict[Pattern[str], Tuple[str, SupersetErrorType, Dict[str, Any]]] = {
OBJECT_DOES_NOT_EXIST_REGEX: (
__("%(object)s does not exist in this database."),
SupersetErrorType.OBJECT_DOES_NOT_EXIST_ERROR,
{},
),
SYNTAX_ERROR_REGEX: (
__(
"Please check your query for syntax errors at or "
'near "%(syntax_error)s". Then, try running your query again.'
),
SupersetErrorType.SYNTAX_ERROR,
{},
),
}
@classmethod
def adjust_database_uri(
cls, uri: URL, selected_schema: Optional[str] = None
) -> None:
database = uri.database
if "/" in uri.database:
database = uri.database.split("/")[0]
if selected_schema:
selected_schema = parse.quote(selected_schema, safe="")
uri.database = database + "/" + selected_schema
@classmethod
def epoch_to_dttm(cls) -> str:
return "DATEADD(S, {col}, '1970-01-01')"
@classmethod
def epoch_ms_to_dttm(cls) -> str:
return "DATEADD(MS, {col}, '1970-01-01')"
@classmethod
def convert_dttm(
cls, target_type: str, dttm: datetime, db_extra: Optional[Dict[str, Any]] = None
) -> Optional[str]:
tt = target_type.upper()
if tt == utils.TemporalType.DATE:
return f"TO_DATE('{dttm.date().isoformat()}')"
if tt == utils.TemporalType.DATETIME:
return f"""CAST('{dttm.isoformat(timespec="microseconds")}' AS DATETIME)"""
if tt == utils.TemporalType.TIMESTAMP:
return f"""TO_TIMESTAMP('{dttm.isoformat(timespec="microseconds")}')"""
return None
@staticmethod
def mutate_db_for_connection_test(database: "Database") -> None:
"""
By default, Snowflake doesn't validate whether the user/role has access to
the chosen database.
:param database: instance to be mutated
"""
extra = json.loads(database.extra or "{}")
engine_params = extra.get("engine_params", {})
connect_args = engine_params.get("connect_args", {})
connect_args["validate_default_parameters"] = True
engine_params["connect_args"] = connect_args
extra["engine_params"] = engine_params
database.extra = json.dumps(extra)
@classmethod
def get_cancel_query_id(cls, cursor: Any, query: Query) -> Optional[str]:
"""
Get Snowflake session ID that will be used to cancel all other running
queries in the same session.
:param cursor: Cursor instance in which the query will be executed
:param query: Query instance
:return: Snowflake Session ID
"""
cursor.execute("SELECT CURRENT_SESSION()")
row = cursor.fetchone()
return row[0]
@classmethod
def cancel_query(cls, cursor: Any, query: Query, cancel_query_id: str) -> bool:
"""
Cancel query in the underlying database.
:param cursor: New cursor instance to the db of the query
:param query: Query instance
:param cancel_query_id: Snowflake Session ID
:return: True if query cancelled successfully, False otherwise
"""
try:
cursor.execute(f"SELECT SYSTEM$CANCEL_ALL_QUERIES({cancel_query_id})")
except Exception: # pylint: disable=broad-except
return False
return True
@classmethod
def build_sqlalchemy_uri(
cls,
parameters: SnowflakeParametersType,
encrypted_extra: Optional[ # pylint: disable=unused-argument
Dict[str, Any]
] = None,
) -> str:
return str(
URL(
"snowflake",
username=parameters.get("username"),
password=parameters.get("password"),
host=parameters.get("account"),
database=parameters.get("database"),
query={
"role": parameters.get("role"),
"warehouse": parameters.get("warehouse"),
},
)
)
@classmethod
def get_parameters_from_uri(
cls,
uri: str,
encrypted_extra: Optional[ # pylint: disable=unused-argument
Dict[str, str]
] = None,
) -> Any:
url = make_url(uri)
query = dict(url.query.items())
return {
"username": url.username,
"password": url.password,
"account": url.host,
"database": url.database,
"role": query.get("role"),
"warehouse": query.get("warehouse"),
}
@classmethod
def validate_parameters(
cls, parameters: SnowflakeParametersType
) -> List[SupersetError]:
errors: List[SupersetError] = []
required = {
"warehouse",
"username",
"database",
"account",
"role",
"password",
}
present = {key for key in parameters if parameters.get(key, ())}
missing = sorted(required - present)
if missing:
errors.append(
SupersetError(
message=f'One or more parameters are missing: {", ".join(missing)}',
error_type=SupersetErrorType.CONNECTION_MISSING_PARAMETERS_ERROR,
level=ErrorLevel.WARNING,
extra={"missing": missing},
),
)
return errors
@classmethod
def parameters_json_schema(cls) -> Any:
"""
Return configuration parameters as OpenAPI.
"""
if not cls.parameters_schema:
return None
ma_plugin = MarshmallowPlugin()
spec = APISpec(
title="Database Parameters",
version="1.0.0",
openapi_version="3.0.0",
plugins=[ma_plugin],
)
spec.components.schema(cls.__name__, schema=cls.parameters_schema)
return spec.to_dict()["components"]["schemas"][cls.__name__] | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/db_engine_specs/snowflake.py | 0.780871 | 0.19031 | snowflake.py | pypi |
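The shape of `build_sqlalchemy_uri` above can be shown without SQLAlchemy: the account becomes the host, the database is the path, and role and warehouse travel in the query string. A stdlib-only sketch (credentials are placeholders, and the real implementation uses `sqlalchemy.engine.url.URL`):

```python
from urllib.parse import quote_plus, urlencode

def build_snowflake_uri(params: dict) -> str:
    # Role and warehouse are carried as query parameters, as in the spec above.
    query = urlencode({"role": params["role"], "warehouse": params["warehouse"]})
    return (
        f"snowflake://{quote_plus(params['username'])}:"
        f"{quote_plus(params['password'])}@{params['account']}/"
        f"{params['database']}?{query}"
    )

uri = build_snowflake_uri(
    {
        "username": "u",
        "password": "p",
        "account": "acct",
        "database": "db",
        "role": "r",
        "warehouse": "w",
    }
)
```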
import re
from datetime import datetime
from typing import Any, Dict, Optional, Pattern, Tuple
from flask_babel import gettext as __
from superset.db_engine_specs.base import BaseEngineSpec
from superset.errors import SupersetErrorType
from superset.utils import core as utils
SYNTAX_ERROR_REGEX = re.compile(
": mismatched input '(?P<syntax_error>.*?)'. Expecting: "
)
class AthenaEngineSpec(BaseEngineSpec):
engine = "awsathena"
engine_name = "Amazon Athena"
allows_escaped_colons = False
_time_grain_expressions = {
None: "{col}",
"PT1S": "date_trunc('second', CAST({col} AS TIMESTAMP))",
"PT1M": "date_trunc('minute', CAST({col} AS TIMESTAMP))",
"PT1H": "date_trunc('hour', CAST({col} AS TIMESTAMP))",
"P1D": "date_trunc('day', CAST({col} AS TIMESTAMP))",
"P1W": "date_trunc('week', CAST({col} AS TIMESTAMP))",
"P1M": "date_trunc('month', CAST({col} AS TIMESTAMP))",
"P3M": "date_trunc('quarter', CAST({col} AS TIMESTAMP))",
"P1Y": "date_trunc('year', CAST({col} AS TIMESTAMP))",
"P1W/1970-01-03T00:00:00Z": "date_add('day', 5, date_trunc('week', \
date_add('day', 1, CAST({col} AS TIMESTAMP))))",
"1969-12-28T00:00:00Z/P1W": "date_add('day', -1, date_trunc('week', \
date_add('day', 1, CAST({col} AS TIMESTAMP))))",
}
custom_errors: Dict[Pattern[str], Tuple[str, SupersetErrorType, Dict[str, Any]]] = {
SYNTAX_ERROR_REGEX: (
__(
"Please check your query for syntax errors at or "
'near "%(syntax_error)s". Then, try running your query again.'
),
SupersetErrorType.SYNTAX_ERROR,
{},
),
}
@classmethod
def convert_dttm(
cls, target_type: str, dttm: datetime, db_extra: Optional[Dict[str, Any]] = None
) -> Optional[str]:
tt = target_type.upper()
if tt == utils.TemporalType.DATE:
return f"from_iso8601_date('{dttm.date().isoformat()}')"
if tt == utils.TemporalType.TIMESTAMP:
datetime_formatted = dttm.isoformat(timespec="microseconds")
return f"""from_iso8601_timestamp('{datetime_formatted}')"""
return None
@classmethod
def epoch_to_dttm(cls) -> str:
return "from_unixtime({col})"
@staticmethod
def _mutate_label(label: str) -> str:
"""
Athena only supports lowercase column names and aliases.
:param label: Expected expression label
:return: Conditionally mutated label
"""
return label.lower() | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/db_engine_specs/athena.py | 0.774413 | 0.179674 | athena.py | pypi |
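Two of the small Athena behaviors above are easy to see in isolation: labels are forced to lowercase, and datetimes become `from_iso8601_*` literals. A standalone sketch of both:

```python
from datetime import datetime

def mutate_label(label: str) -> str:
    # Athena only supports lowercase column names and aliases.
    return label.lower()

def timestamp_literal(dttm: datetime) -> str:
    # Render a TIMESTAMP value as an Athena from_iso8601_timestamp literal.
    return f"from_iso8601_timestamp('{dttm.isoformat(timespec='microseconds')}')"
```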
from datetime import datetime
from typing import Any, Dict, Optional
from superset.db_engine_specs.base import BaseEngineSpec, LimitMethod
from superset.utils import core as utils
class FirebirdEngineSpec(BaseEngineSpec):
"""Engine for Firebird"""
engine = "firebird"
engine_name = "Firebird"
# Firebird uses FIRST to limit: `SELECT FIRST 10 * FROM table`
limit_method = LimitMethod.FETCH_MANY
_time_grain_expressions = {
None: "{col}",
"PT1S": (
"CAST(CAST({col} AS DATE) "
"|| ' ' "
"|| EXTRACT(HOUR FROM {col}) "
"|| ':' "
"|| EXTRACT(MINUTE FROM {col}) "
"|| ':' "
"|| FLOOR(EXTRACT(SECOND FROM {col})) AS TIMESTAMP)"
),
"PT1M": (
"CAST(CAST({col} AS DATE) "
"|| ' ' "
"|| EXTRACT(HOUR FROM {col}) "
"|| ':' "
"|| EXTRACT(MINUTE FROM {col}) "
"|| ':00' AS TIMESTAMP)"
),
"PT1H": (
"CAST(CAST({col} AS DATE) "
"|| ' ' "
"|| EXTRACT(HOUR FROM {col}) "
"|| ':00:00' AS TIMESTAMP)"
),
"P1D": "CAST({col} AS DATE)",
"P1M": (
"CAST(EXTRACT(YEAR FROM {col}) "
"|| '-' "
"|| EXTRACT(MONTH FROM {col}) "
"|| '-01' AS DATE)"
),
"P1Y": "CAST(EXTRACT(YEAR FROM {col}) || '-01-01' AS DATE)",
}
@classmethod
def epoch_to_dttm(cls) -> str:
return "DATEADD(second, {col}, CAST('00:00:00' AS TIMESTAMP))"
@classmethod
def convert_dttm(
cls, target_type: str, dttm: datetime, db_extra: Optional[Dict[str, Any]] = None
) -> Optional[str]:
tt = target_type.upper()
if tt == utils.TemporalType.TIMESTAMP:
dttm_formatted = dttm.isoformat(sep=" ")
dttm_valid_precision = dttm_formatted[: len("YYYY-MM-DD HH:MM:SS.MMMM")]
return f"CAST('{dttm_valid_precision}' AS TIMESTAMP)"
if tt == utils.TemporalType.DATE:
return f"CAST('{dttm.date().isoformat()}' AS DATE)"
if tt == utils.TemporalType.TIME:
return f"CAST('{dttm.time().isoformat()}' AS TIME)"
return None | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/db_engine_specs/firebird.py | 0.865523 | 0.174903 | firebird.py | pypi |
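The TIMESTAMP branch of `convert_dttm` above relies on slicing the ISO string to a fixed width, since Firebird accepts at most four fractional-second digits. A sketch of just that truncation:

```python
from datetime import datetime

def firebird_timestamp_literal(dttm: datetime) -> str:
    formatted = dttm.isoformat(sep=" ")
    # Keep at most four fractional-second digits: "YYYY-MM-DD HH:MM:SS.MMMM".
    truncated = formatted[: len("YYYY-MM-DD HH:MM:SS.MMMM")]
    return f"CAST('{truncated}' AS TIMESTAMP)"
```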
import inspect
import logging
import pkgutil
from collections import defaultdict
from importlib import import_module
from pathlib import Path
from typing import Any, Dict, List, Set, Type
import sqlalchemy.databases
import sqlalchemy.dialects
from pkg_resources import iter_entry_points
from sqlalchemy.engine.default import DefaultDialect
from superset.db_engine_specs.base import BaseEngineSpec
logger = logging.getLogger(__name__)
def is_engine_spec(attr: Any) -> bool:
return (
inspect.isclass(attr)
and issubclass(attr, BaseEngineSpec)
and attr != BaseEngineSpec
)
def load_engine_specs() -> List[Type[BaseEngineSpec]]:
engine_specs: List[Type[BaseEngineSpec]] = []
# load standard engines
db_engine_spec_dir = str(Path(__file__).parent)
for module_info in pkgutil.iter_modules([db_engine_spec_dir], prefix="."):
module = import_module(module_info.name, package=__name__)
engine_specs.extend(
getattr(module, attr)
for attr in module.__dict__
if is_engine_spec(getattr(module, attr))
)
# load additional engines from external modules
for ep in iter_entry_points("superset.db_engine_specs"):
try:
engine_spec = ep.load()
except Exception: # pylint: disable=broad-except
            logger.warning("Unable to load Superset DB engine spec: %s", ep.name)
continue
engine_specs.append(engine_spec)
return engine_specs
def get_engine_specs() -> Dict[str, Type[BaseEngineSpec]]:
engine_specs = load_engine_specs()
# build map from name/alias -> spec
engine_specs_map: Dict[str, Type[BaseEngineSpec]] = {}
for engine_spec in engine_specs:
names = [engine_spec.engine]
if engine_spec.engine_aliases:
names.extend(engine_spec.engine_aliases)
for name in names:
engine_specs_map[name] = engine_spec
return engine_specs_map
# there's a mismatch between the dialect name reported by the driver in these
# libraries and the dialect name used in the URI
backend_replacements = {
"drilldbapi": "drill",
"exasol": "exa",
}
def get_available_engine_specs() -> Dict[Type[BaseEngineSpec], Set[str]]:
"""
Return available engine specs and installed drivers for them.
"""
drivers: Dict[str, Set[str]] = defaultdict(set)
# native SQLAlchemy dialects
for attr in sqlalchemy.databases.__all__:
dialect = getattr(sqlalchemy.dialects, attr)
for attribute in dialect.__dict__.values():
if (
hasattr(attribute, "dialect")
and inspect.isclass(attribute.dialect)
and issubclass(attribute.dialect, DefaultDialect)
):
try:
attribute.dialect.dbapi()
except ModuleNotFoundError:
continue
except Exception as ex: # pylint: disable=broad-except
logger.warning(
"Unable to load dialect %s: %s", attribute.dialect, ex
)
continue
drivers[attr].add(attribute.dialect.driver)
# installed 3rd-party dialects
for ep in iter_entry_points("sqlalchemy.dialects"):
try:
dialect = ep.load()
except Exception as ex: # pylint: disable=broad-except
            logger.warning("Unable to load SQLAlchemy dialect %s: %s", ep.name, ex)
else:
backend = dialect.name
if isinstance(backend, bytes):
backend = backend.decode()
backend = backend_replacements.get(backend, backend)
driver = getattr(dialect, "driver", dialect.name)
if isinstance(driver, bytes):
driver = driver.decode()
drivers[backend].add(driver)
available_engines = {}
for engine_spec in load_engine_specs():
driver = drivers[engine_spec.engine]
# lookup driver by engine aliases.
if not driver and engine_spec.engine_aliases:
for alias in engine_spec.engine_aliases:
driver = drivers[alias]
if driver:
break
available_engines[engine_spec] = driver
    return available_engines
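The name/alias resolution inside `get_engine_specs()` above maps both the canonical engine name and every alias to the same spec class. A hypothetical, self-contained sketch of that pattern (`FakeSpec` is a stand-in, not a real Superset class):

```python
# Stand-in for a BaseEngineSpec subclass with one alias.
class FakeSpec:
    engine = "postgresql"
    engine_aliases = {"postgres"}

def build_specs_map(specs):
    # Every name (canonical + aliases) points at the same spec class.
    specs_map = {}
    for spec in specs:
        names = [spec.engine]
        if spec.engine_aliases:
            names.extend(spec.engine_aliases)
        for name in names:
            specs_map[name] = spec
    return specs_map

specs_map = build_specs_map([FakeSpec])
```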
from datetime import datetime
from typing import Any, Dict, List, Optional, Tuple
from superset.db_engine_specs.base import BaseEngineSpec, LimitMethod
from superset.utils import core as utils
class OracleEngineSpec(BaseEngineSpec):
engine = "oracle"
engine_name = "Oracle"
limit_method = LimitMethod.WRAP_SQL
force_column_alias_quotes = True
max_column_name_length = 30
_time_grain_expressions = {
None: "{col}",
"PT1S": "CAST({col} as DATE)",
"PT1M": "TRUNC(CAST({col} as DATE), 'MI')",
"PT1H": "TRUNC(CAST({col} as DATE), 'HH')",
"P1D": "TRUNC(CAST({col} as DATE), 'DDD')",
"P1W": "TRUNC(CAST({col} as DATE), 'WW')",
"P1M": "TRUNC(CAST({col} as DATE), 'MONTH')",
"P3M": "TRUNC(CAST({col} as DATE), 'Q')",
"P1Y": "TRUNC(CAST({col} as DATE), 'YEAR')",
}
@classmethod
def convert_dttm(
cls, target_type: str, dttm: datetime, db_extra: Optional[Dict[str, Any]] = None
) -> Optional[str]:
tt = target_type.upper()
if tt == utils.TemporalType.DATE:
return f"TO_DATE('{dttm.date().isoformat()}', 'YYYY-MM-DD')"
if tt == utils.TemporalType.DATETIME:
datetime_formatted = dttm.isoformat(timespec="seconds")
return f"""TO_DATE('{datetime_formatted}', 'YYYY-MM-DD"T"HH24:MI:SS')"""
if tt == utils.TemporalType.TIMESTAMP:
            datetime_formatted = dttm.isoformat(timespec="microseconds")
            return f"""TO_TIMESTAMP('{datetime_formatted}', 'YYYY-MM-DD"T"HH24:MI:SS.ff6')"""
return None
@classmethod
def epoch_to_dttm(cls) -> str:
return "TO_DATE('1970-01-01','YYYY-MM-DD')+(1/24/60/60)*{col}"
@classmethod
def epoch_ms_to_dttm(cls) -> str:
return "TO_DATE('1970-01-01','YYYY-MM-DD')+(1/24/60/60/1000)*{col}"
@classmethod
def fetch_data(
cls, cursor: Any, limit: Optional[int] = None
) -> List[Tuple[Any, ...]]:
"""
:param cursor: Cursor instance
:param limit: Maximum number of rows to be returned by the cursor
:return: Result of query
"""
if not cursor.description:
return []
        return super().fetch_data(cursor, limit)
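Oracle's `epoch_to_dttm()` above returns a template that converts epoch seconds by adding fractional days to `1970-01-01`; the caller formats it with a column name. A small sketch of that step (the column name `created_ts` is illustrative):

```python
# The epoch-seconds template returned by the Oracle spec above.
EPOCH_TEMPLATE = "TO_DATE('1970-01-01','YYYY-MM-DD')+(1/24/60/60)*{col}"

# Formatting with a column name yields the final SQL fragment.
sql = EPOCH_TEMPLATE.format(col="created_ts")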
from datetime import datetime
from typing import Any, Dict, Optional
from urllib import parse
from sqlalchemy.engine.url import URL
from superset.db_engine_specs.base import BaseEngineSpec
from superset.utils import core as utils
class DrillEngineSpec(BaseEngineSpec):
"""Engine spec for Apache Drill"""
engine = "drill"
engine_name = "Apache Drill"
default_driver = "sadrill"
_time_grain_expressions = {
None: "{col}",
"PT1S": "NEARESTDATE({col}, 'SECOND')",
"PT1M": "NEARESTDATE({col}, 'MINUTE')",
"PT15M": "NEARESTDATE({col}, 'QUARTER_HOUR')",
"PT30M": "NEARESTDATE({col}, 'HALF_HOUR')",
"PT1H": "NEARESTDATE({col}, 'HOUR')",
"P1D": "NEARESTDATE({col}, 'DAY')",
"P1W": "NEARESTDATE({col}, 'WEEK_SUNDAY')",
"P1M": "NEARESTDATE({col}, 'MONTH')",
"P3M": "NEARESTDATE({col}, 'QUARTER')",
"P1Y": "NEARESTDATE({col}, 'YEAR')",
}
    # Returns a SQL expression converting a Unix timestamp in *seconds* to a
    # date, by scaling to milliseconds and reusing epoch_ms_to_dttm's template.
@classmethod
def epoch_to_dttm(cls) -> str:
return cls.epoch_ms_to_dttm().replace("{col}", "({col}*1000)")
@classmethod
def epoch_ms_to_dttm(cls) -> str:
return "TO_DATE({col})"
@classmethod
def convert_dttm(
cls, target_type: str, dttm: datetime, db_extra: Optional[Dict[str, Any]] = None
) -> Optional[str]:
tt = target_type.upper()
if tt == utils.TemporalType.DATE:
return f"TO_DATE('{dttm.date().isoformat()}', 'yyyy-MM-dd')"
if tt == utils.TemporalType.TIMESTAMP:
datetime_formatted = dttm.isoformat(sep=" ", timespec="seconds")
return f"""TO_TIMESTAMP('{datetime_formatted}', 'yyyy-MM-dd HH:mm:ss')"""
return None
@classmethod
def adjust_database_uri(cls, uri: URL, selected_schema: Optional[str]) -> None:
if selected_schema:
uri.database = parse.quote(selected_schema, safe="")
@classmethod
def modify_url_for_impersonation(
cls, url: URL, impersonate_user: bool, username: Optional[str]
) -> None:
"""
Modify the SQL Alchemy URL object with the user to impersonate if applicable.
:param url: SQLAlchemy URL object
:param impersonate_user: Flag indicating if impersonation is enabled
:param username: Effective username
"""
if impersonate_user and username is not None:
if url.drivername == "drill+odbc":
url.query["DelegationUID"] = username
elif url.drivername == "drill+jdbc":
url.query["impersonation_target"] = username
else:
                url.username = username
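The Drill spec derives `epoch_to_dttm()` from `epoch_ms_to_dttm()` by rewriting the `{col}` placeholder so the value is scaled to milliseconds before `TO_DATE` sees it. A self-contained reproduction of that derivation:

```python
def epoch_ms_to_dttm() -> str:
    # Milliseconds variant: TO_DATE takes epoch milliseconds directly.
    return "TO_DATE({col})"

def epoch_to_dttm() -> str:
    # Seconds variant: scale the column to milliseconds inside the template.
    return epoch_ms_to_dttm().replace("{col}", "({col}*1000)")
```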
from typing import Optional, Set
import sqlparse
from sqlparse.sql import (
Identifier,
IdentifierList,
Parenthesis,
remove_quotes,
Token,
TokenList,
)
from sqlparse.tokens import Keyword, Name, Punctuation, String, Whitespace
from sqlparse.utils import imt
from superset.db_engine_specs.base import BaseEngineSpec, LimitMethod
from superset.sql_parse import Table
PRECEDES_TABLE_NAME = {"FROM", "JOIN", "DESCRIBE", "WITH", "LEFT JOIN", "RIGHT JOIN"}
CTE_PREFIX = "CTE__"
JOIN = " JOIN"
def _extract_limit_from_query_td(statement: TokenList) -> Optional[int]:
    td_limit_keywords = {"TOP", "SAMPLE"}
    str_statement = str(statement)
    str_statement = str_statement.replace("\n", " ").replace("\r", "")
    token = str_statement.rstrip().split(" ")
    token = [part for part in token if part]
    limit = None
    for i, _ in enumerate(token):
        if token[i].upper() in td_limit_keywords and len(token) - 1 > i:
try:
limit = int(token[i + 1])
except ValueError:
limit = None
break
return limit
class ParsedQueryTeradata:
def __init__(
self, sql_statement: str, strip_comments: bool = False, uri_type: str = "None"
):
if strip_comments:
sql_statement = sqlparse.format(sql_statement, strip_comments=True)
self.sql: str = sql_statement
self._tables: Set[Table] = set()
self._alias_names: Set[str] = set()
self._limit: Optional[int] = None
self.uri_type: str = uri_type
self._parsed = sqlparse.parse(self.stripped())
for statement in self._parsed:
self._limit = _extract_limit_from_query_td(statement)
@property
def tables(self) -> Set[Table]:
if not self._tables:
for statement in self._parsed:
self._extract_from_token(statement)
self._tables = {
table for table in self._tables if str(table) not in self._alias_names
}
return self._tables
def stripped(self) -> str:
return self.sql.strip(" \t\n;")
def _extract_from_token(self, token: Token) -> None:
"""
        <Identifier> stores a list of subtokens and <IdentifierList> stores
        lists of subtoken lists.
It extracts <IdentifierList> and <Identifier> from :param token: and loops
through all subtokens recursively. It finds table_name_preceding_token and
passes <IdentifierList> and <Identifier> to self._process_tokenlist to populate
self._tables.
:param token: instance of Token or child class, e.g. TokenList, to be processed
"""
if not hasattr(token, "tokens"):
return
table_name_preceding_token = False
for item in token.tokens:
if item.is_group and (
not self._is_identifier(item) or isinstance(item.tokens[0], Parenthesis)
):
self._extract_from_token(item)
if item.ttype in Keyword and (
item.normalized in PRECEDES_TABLE_NAME or item.normalized.endswith(JOIN)
):
table_name_preceding_token = True
continue
if item.ttype in Keyword:
table_name_preceding_token = False
continue
if table_name_preceding_token:
if isinstance(item, Identifier):
self._process_tokenlist(item)
elif isinstance(item, IdentifierList):
for item_list in item.get_identifiers():
if isinstance(item_list, TokenList):
self._process_tokenlist(item_list)
elif isinstance(item, IdentifierList):
                if any(not self._is_identifier(token2) for token2 in item.tokens):
self._extract_from_token(item)
@staticmethod
def _get_table(tlist: TokenList) -> Optional[Table]:
"""
Return the table if valid, i.e., conforms to the [[catalog.]schema.]table
construct.
:param tlist: The SQL tokens
:returns: The table if the name conforms
"""
# Strip the alias if present.
idx = len(tlist.tokens)
if tlist.has_alias():
ws_idx, _ = tlist.token_next_by(t=Whitespace)
if ws_idx != -1:
idx = ws_idx
tokens = tlist.tokens[:idx]
odd_token_number = len(tokens) in (1, 3, 5)
qualified_name_parts = all(
imt(token, t=[Name, String]) for token in tokens[::2]
)
dot_separators = all(imt(token, m=(Punctuation, ".")) for token in tokens[1::2])
if odd_token_number and qualified_name_parts and dot_separators:
return Table(*[remove_quotes(token.value) for token in tokens[::-2]])
return None
@staticmethod
def _is_identifier(token: Token) -> bool:
return isinstance(token, (IdentifierList, Identifier))
def _process_tokenlist(self, token_list: TokenList) -> None:
"""
Add table names to table set
:param token_list: TokenList to be processed
"""
# exclude subselects
if "(" not in str(token_list):
table = self._get_table(token_list)
if table and not table.table.startswith(CTE_PREFIX):
self._tables.add(table)
return
# store aliases
if token_list.has_alias():
self._alias_names.add(token_list.get_alias())
# some aliases are not parsed properly
if token_list.tokens[0].ttype == Name:
self._alias_names.add(token_list.tokens[0].value)
self._extract_from_token(token_list)
def set_or_update_query_limit_td(self, new_limit: int) -> str:
td_sel_keywords = {"SELECT", "SEL"}
td_limit_keywords = {"TOP", "SAMPLE"}
statement = self._parsed[0]
if not self._limit:
final_limit = new_limit
elif new_limit < self._limit:
final_limit = new_limit
else:
final_limit = self._limit
str_statement = str(statement)
str_statement = str_statement.replace("\n", " ").replace("\r", "")
tokens = str_statement.rstrip().split(" ")
tokens = [token for token in tokens if token]
if limit_not_in_sql(str_statement, td_limit_keywords):
selects = [i for i, word in enumerate(tokens) if word in td_sel_keywords]
first_select = selects[0]
tokens.insert(first_select + 1, "TOP")
tokens.insert(first_select + 2, str(final_limit))
next_is_limit_token = False
new_tokens = []
for token in tokens:
if token.upper() in td_limit_keywords:
next_is_limit_token = True
elif next_is_limit_token:
if token.isdigit():
token = str(final_limit)
next_is_limit_token = False
new_tokens.append(token)
return " ".join(new_tokens)
class TeradataEngineSpec(BaseEngineSpec):
"""Dialect for Teradata DB."""
engine = "teradatasql"
engine_name = "Teradata"
limit_method = LimitMethod.WRAP_SQL
max_column_name_length = 30 # since 14.10 this is 128
_time_grain_expressions = {
None: "{col}",
"PT1M": "TRUNC(CAST({col} as DATE), 'MI')",
"PT1H": "TRUNC(CAST({col} as DATE), 'HH')",
"P1D": "TRUNC(CAST({col} as DATE), 'DDD')",
"P1W": "TRUNC(CAST({col} as DATE), 'WW')",
"P1M": "TRUNC(CAST({col} as DATE), 'MONTH')",
"P0.25Y": "TRUNC(CAST({col} as DATE), 'Q')",
"P1Y": "TRUNC(CAST({col} as DATE), 'YEAR')",
}
@classmethod
def epoch_to_dttm(cls) -> str:
return (
"CAST(((CAST(DATE '1970-01-01' + ({col} / 86400) AS TIMESTAMP(0) "
"AT 0)) AT 0) + (({col} MOD 86400) * INTERVAL '00:00:01' "
"HOUR TO SECOND) AS TIMESTAMP(0))"
)
@classmethod
def apply_limit_to_sql(
cls, sql: str, limit: int, database: str = "Database", force: bool = False
) -> str:
"""
Alters the SQL statement to apply a TOP clause
The function overwrites similar function in base.py because Teradata doesn't
support LIMIT syntax
:param sql: SQL query
:param limit: Maximum number of rows to be returned by the query
:param database: Database instance
:return: SQL query with limit clause
"""
parsed_query = ParsedQueryTeradata(sql)
sql = parsed_query.set_or_update_query_limit_td(limit)
return sql
def limit_not_in_sql(sql: str, limit_words: Set[str]) -> bool:
for limit_word in limit_words:
if limit_word in sql:
return False
    return True
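The `_extract_limit_from_query_td` helper above scans whitespace-split tokens for a Teradata `TOP`/`SAMPLE` keyword and parses the value that follows. A self-contained sketch of the same scan on a plain SQL string (a simplification: it returns on the first keyword rather than tracking break semantics):

```python
def extract_td_limit(sql: str):
    # Teradata uses TOP/SAMPLE instead of LIMIT.
    keywords = {"TOP", "SAMPLE"}
    tokens = [t for t in sql.replace("\n", " ").split(" ") if t]
    for i, tok in enumerate(tokens):
        if tok.upper() in keywords and i < len(tokens) - 1:
            try:
                return int(tokens[i + 1])
            except ValueError:
                return None
    return None
```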
from typing import Dict, Optional
from sqlalchemy.sql.expression import ColumnClause
from superset.db_engine_specs.base import BaseEngineSpec, TimestampExpression
class PinotEngineSpec(BaseEngineSpec): # pylint: disable=abstract-method
engine = "pinot"
engine_name = "Apache Pinot"
allows_subqueries = False
allows_joins = False
allows_alias_in_select = False
allows_alias_in_orderby = False
# Pinot does its own conversion below
_time_grain_expressions: Dict[Optional[str], str] = {
"PT1S": "1:SECONDS",
"PT1M": "1:MINUTES",
"PT1H": "1:HOURS",
"P1D": "1:DAYS",
"P1W": "week",
"P1M": "month",
        "P3M": "quarter",
"P1Y": "year",
}
_python_to_java_time_patterns: Dict[str, str] = {
"%Y": "yyyy",
"%m": "MM",
"%d": "dd",
"%H": "HH",
"%M": "mm",
"%S": "ss",
}
_use_date_trunc_function: Dict[str, bool] = {
"PT1S": False,
"PT1M": False,
"PT1H": False,
"P1D": False,
"P1W": True,
"P1M": True,
"P3M": True,
"P1Y": True,
}
@classmethod
def get_timestamp_expr(
cls,
col: ColumnClause,
pdf: Optional[str],
time_grain: Optional[str],
type_: Optional[str] = None,
) -> TimestampExpression:
if not pdf:
raise NotImplementedError(f"Empty date format for '{col}'")
is_epoch = pdf in ("epoch_s", "epoch_ms")
        # The DATETIMECONVERT Pinot UDF is documented at
        # https://github.com/apache/incubator-pinot/wiki/dateTimeConvert-UDF
        # We are not really converting any time units, just bucketing them.
tf = ""
java_date_format = ""
if not is_epoch:
java_date_format = pdf
for (
python_pattern,
java_pattern,
) in cls._python_to_java_time_patterns.items():
java_date_format = java_date_format.replace(
python_pattern, java_pattern
)
tf = f"1:SECONDS:SIMPLE_DATE_FORMAT:{java_date_format}"
else:
seconds_or_ms = "MILLISECONDS" if pdf == "epoch_ms" else "SECONDS"
tf = f"1:{seconds_or_ms}:EPOCH"
if time_grain:
granularity = cls.get_time_grain_expressions().get(time_grain)
if not granularity:
raise NotImplementedError(f"No pinot grain spec for '{time_grain}'")
else:
return TimestampExpression("{{col}}", col)
# In pinot the output is a string since there is no timestamp column like pg
if cls._use_date_trunc_function.get(time_grain):
if is_epoch:
time_expr = f"DATETRUNC('{granularity}', {{col}}, '{seconds_or_ms}')"
else:
time_expr = (
f"ToDateTime(DATETRUNC('{granularity}', "
+ f"FromDateTime({{col}}, '{java_date_format}'), "
+ f"'MILLISECONDS'), '{java_date_format}')"
)
else:
time_expr = f"DATETIMECONVERT({{col}}, '{tf}', '{tf}', '{granularity}')"
        return TimestampExpression(time_expr, col)
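The `_python_to_java_time_patterns` table above rewrites Python strftime directives into Java `SimpleDateFormat` tokens via sequential string replacement (the mapping is case-sensitive, so `%m` and `%M` stay distinct). A self-contained sketch:

```python
# Same directive mapping as the Pinot spec above.
PY_TO_JAVA = {"%Y": "yyyy", "%m": "MM", "%d": "dd", "%H": "HH", "%M": "mm", "%S": "ss"}

def to_java_format(python_format: str) -> str:
    # Replace each strftime directive with its SimpleDateFormat equivalent.
    out = python_format
    for py, java in PY_TO_JAVA.items():
        out = out.replace(py, java)
    return out
```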
import re
from datetime import datetime
from typing import Any, Dict, List, Optional, Pattern, Tuple, TYPE_CHECKING
from flask_babel import gettext as __
from sqlalchemy.engine.reflection import Inspector
from superset.db_engine_specs.base import BaseEngineSpec
from superset.errors import SupersetErrorType
from superset.utils import core as utils
if TYPE_CHECKING:
# prevent circular imports
from superset.models.core import Database
COLUMN_DOES_NOT_EXIST_REGEX = re.compile("no such column: (?P<column_name>.+)")
class SqliteEngineSpec(BaseEngineSpec):
engine = "sqlite"
engine_name = "SQLite"
_time_grain_expressions = {
None: "{col}",
"PT1S": "DATETIME(STRFTIME('%Y-%m-%dT%H:%M:%S', {col}))",
"PT1M": "DATETIME(STRFTIME('%Y-%m-%dT%H:%M:00', {col}))",
"PT1H": "DATETIME(STRFTIME('%Y-%m-%dT%H:00:00', {col}))",
"P1D": "DATE({col})",
"P1W": "DATE({col}, -strftime('%w', {col}) || ' days')",
"P1M": "DATE({col}, -strftime('%d', {col}) || ' days', '+1 day')",
"P3M": (
"DATETIME(STRFTIME('%Y-', {col}) || " # year
"SUBSTR('00' || " # pad with zeros to 2 chars
"((CAST(STRFTIME('%m', {col}) AS INTEGER)) - " # month as integer
"(((CAST(STRFTIME('%m', {col}) AS INTEGER)) - 1) % 3)), " # month in quarter
"-2) || " # close pad
"'-01T00:00:00')"
),
"P1Y": "DATETIME(STRFTIME('%Y-01-01T00:00:00', {col}))",
"P1W/1970-01-03T00:00:00Z": "DATE({col}, 'weekday 6')",
"1969-12-28T00:00:00Z/P1W": "DATE({col}, 'weekday 0', '-7 days')",
}
custom_errors: Dict[Pattern[str], Tuple[str, SupersetErrorType, Dict[str, Any]]] = {
COLUMN_DOES_NOT_EXIST_REGEX: (
__('We can\'t seem to resolve the column "%(column_name)s"'),
SupersetErrorType.COLUMN_DOES_NOT_EXIST_ERROR,
{},
),
}
@classmethod
def epoch_to_dttm(cls) -> str:
return "datetime({col}, 'unixepoch')"
@classmethod
def get_all_datasource_names(
cls, database: "Database", datasource_type: str
) -> List[utils.DatasourceName]:
schemas = database.get_all_schema_names(
cache=database.schema_cache_enabled,
cache_timeout=database.schema_cache_timeout,
force=True,
)
schema = schemas[0]
if datasource_type == "table":
return database.get_all_table_names_in_schema(
schema=schema,
force=True,
cache=database.table_cache_enabled,
cache_timeout=database.table_cache_timeout,
)
if datasource_type == "view":
return database.get_all_view_names_in_schema(
schema=schema,
force=True,
cache=database.table_cache_enabled,
cache_timeout=database.table_cache_timeout,
)
raise Exception(f"Unsupported datasource_type: {datasource_type}")
@classmethod
def convert_dttm(
cls, target_type: str, dttm: datetime, db_extra: Optional[Dict[str, Any]] = None
) -> Optional[str]:
tt = target_type.upper()
if tt in (utils.TemporalType.TEXT, utils.TemporalType.DATETIME):
return f"""'{dttm.isoformat(sep=" ", timespec="microseconds")}'"""
return None
@classmethod
def get_table_names(
cls, database: "Database", inspector: Inspector, schema: Optional[str]
) -> List[str]:
"""Need to disregard the schema for Sqlite"""
        return sorted(inspector.get_table_names())
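SQLite's `convert_dttm` above emits a quoted ISO-8601 literal with microsecond precision, matching the lexicographic-equals-chronological ordering SQLite relies on for TEXT timestamps. A small sketch of that formatting step:

```python
from datetime import datetime

def sqlite_dttm_literal(dttm: datetime) -> str:
    # Quoted ISO-8601 literal, space separator, microsecond precision.
    return f"""'{dttm.isoformat(sep=" ", timespec="microseconds")}'"""

lit = sqlite_dttm_literal(datetime(2021, 1, 2, 3, 4, 5))
```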
from typing import Dict, List, Optional, Set, Type, TYPE_CHECKING
from flask_babel import _
from sqlalchemy import or_
from sqlalchemy.orm import Session, subqueryload
from sqlalchemy.orm.exc import NoResultFound
from superset.datasets.commands.exceptions import DatasetNotFoundError
if TYPE_CHECKING:
from collections import OrderedDict
from superset.connectors.base.models import BaseDatasource
from superset.models.core import Database
class ConnectorRegistry:
"""Central Registry for all available datasource engines"""
sources: Dict[str, Type["BaseDatasource"]] = {}
@classmethod
def register_sources(cls, datasource_config: "OrderedDict[str, List[str]]") -> None:
for module_name, class_names in datasource_config.items():
class_names = [str(s) for s in class_names]
module_obj = __import__(module_name, fromlist=class_names)
for class_name in class_names:
source_class = getattr(module_obj, class_name)
cls.sources[source_class.type] = source_class
@classmethod
def get_datasource(
cls, datasource_type: str, datasource_id: int, session: Session
) -> "BaseDatasource":
"""Safely get a datasource instance, raises `DatasetNotFoundError` if
`datasource_type` is not registered or `datasource_id` does not
exist."""
if datasource_type not in cls.sources:
raise DatasetNotFoundError()
datasource = (
session.query(cls.sources[datasource_type])
.filter_by(id=datasource_id)
.one_or_none()
)
if not datasource:
raise DatasetNotFoundError()
return datasource
@classmethod
def get_all_datasources(cls, session: Session) -> List["BaseDatasource"]:
datasources: List["BaseDatasource"] = []
for source_class in ConnectorRegistry.sources.values():
qry = session.query(source_class)
qry = source_class.default_query(qry)
datasources.extend(qry.all())
return datasources
@classmethod
def get_datasource_by_id(
cls, session: Session, datasource_id: int
) -> "BaseDatasource":
"""
Find a datasource instance based on the unique id.
:param session: Session to use
:param datasource_id: unique id of datasource
:return: Datasource corresponding to the id
:raises NoResultFound: if no datasource is found corresponding to the id
"""
for datasource_class in ConnectorRegistry.sources.values():
try:
return (
session.query(datasource_class)
.filter(datasource_class.id == datasource_id)
.one()
)
except NoResultFound:
# proceed to next datasource type
pass
raise NoResultFound(_("Datasource id not found: %(id)s", id=datasource_id))
@classmethod
def get_datasource_by_name( # pylint: disable=too-many-arguments
cls,
session: Session,
datasource_type: str,
datasource_name: str,
schema: str,
database_name: str,
) -> Optional["BaseDatasource"]:
datasource_class = ConnectorRegistry.sources[datasource_type]
return datasource_class.get_datasource_by_name(
session, datasource_name, schema, database_name
)
@classmethod
def query_datasources_by_permissions( # pylint: disable=invalid-name
cls,
session: Session,
database: "Database",
permissions: Set[str],
schema_perms: Set[str],
) -> List["BaseDatasource"]:
# TODO(bogdan): add unit test
datasource_class = ConnectorRegistry.sources[database.type]
return (
session.query(datasource_class)
.filter_by(database_id=database.id)
.filter(
or_(
datasource_class.perm.in_(permissions),
datasource_class.schema_perm.in_(schema_perms),
)
)
.all()
)
@classmethod
def get_eager_datasource(
cls, session: Session, datasource_type: str, datasource_id: int
) -> "BaseDatasource":
"""Returns datasource with columns and metrics."""
datasource_class = ConnectorRegistry.sources[datasource_type]
return (
session.query(datasource_class)
.options(
subqueryload(datasource_class.columns),
subqueryload(datasource_class.metrics),
)
.filter_by(id=datasource_id)
.one()
)
@classmethod
def query_datasources_by_name(
cls,
session: Session,
database: "Database",
datasource_name: str,
schema: Optional[str] = None,
) -> List["BaseDatasource"]:
datasource_class = ConnectorRegistry.sources[database.type]
return datasource_class.query_datasources_by_name(
session, database, datasource_name, schema=schema
        )
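`ConnectorRegistry` above is a class-level map from a datasource type string to the class that handles it, populated once at startup by `register_sources`. A minimal sketch of that registry pattern (all names here are illustrative, not Superset classes):

```python
class MiniRegistry:
    # type string -> handler class, shared at class level.
    sources: dict = {}

    @classmethod
    def register(cls, source_class) -> None:
        cls.sources[source_class.type] = source_class

class TableSource:
    type = "table"

MiniRegistry.register(TableSource)
```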
"""Views used by the SqlAlchemy connector"""
import logging
import re
from dataclasses import dataclass, field
from typing import Any, cast, Dict, List, Union
from flask import current_app, flash, Markup, redirect
from flask_appbuilder import CompactCRUDMixin, expose
from flask_appbuilder.actions import action
from flask_appbuilder.fieldwidgets import Select2Widget
from flask_appbuilder.hooks import before_request
from flask_appbuilder.models.sqla.interface import SQLAInterface
from flask_appbuilder.security.decorators import has_access
from flask_babel import gettext as __, lazy_gettext as _
from werkzeug.exceptions import NotFound
from wtforms.ext.sqlalchemy.fields import QuerySelectField
from wtforms.validators import Regexp
from superset import app, db, is_feature_enabled
from superset.connectors.base.views import DatasourceModelView
from superset.connectors.sqla import models
from superset.constants import MODEL_VIEW_RW_METHOD_PERMISSION_MAP, RouteMethod
from superset.typing import FlaskResponse
from superset.utils import core as utils
from superset.views.base import (
check_ownership,
create_table_permissions,
DatasourceFilter,
DeleteMixin,
ListWidgetWithCheckboxes,
SupersetListWidget,
SupersetModelView,
validate_sqlatable,
YamlExportMixin,
)
logger = logging.getLogger(__name__)
class TableColumnInlineView(CompactCRUDMixin, SupersetModelView):
datamodel = SQLAInterface(models.TableColumn)
# TODO TODO, review need for this on related_views
class_permission_name = "Dataset"
method_permission_name = MODEL_VIEW_RW_METHOD_PERMISSION_MAP
include_route_methods = RouteMethod.RELATED_VIEW_SET | RouteMethod.API_SET
list_title = _("Columns")
show_title = _("Show Column")
add_title = _("Add Column")
edit_title = _("Edit Column")
can_delete = False
list_widget = ListWidgetWithCheckboxes
edit_columns = [
"column_name",
"verbose_name",
"description",
"type",
"groupby",
"filterable",
"table",
"expression",
"is_dttm",
"python_date_format",
"extra",
]
add_columns = edit_columns
list_columns = [
"column_name",
"verbose_name",
"type",
"groupby",
"filterable",
"is_dttm",
]
page_size = 500
description_columns = {
"is_dttm": _(
"Whether to make this column available as a "
"[Time Granularity] option, column has to be DATETIME or "
"DATETIME-like"
),
"filterable": _(
"Whether this column is exposed in the `Filters` section "
"of the explore view."
),
"type": _(
"The data type that was inferred by the database. "
"It may be necessary to input a type manually for "
            "expression-defined columns in some cases. In most cases "
"users should not need to alter this."
),
"expression": utils.markdown(
"a valid, *non-aggregating* SQL expression as supported by the "
"underlying backend. Example: `substr(name, 1, 1)`",
True,
),
"python_date_format": utils.markdown(
Markup(
"The pattern of timestamp format. For strings use "
'<a href="https://docs.python.org/2/library/'
'datetime.html#strftime-strptime-behavior">'
"python datetime string pattern</a> expression which needs to "
'adhere to the <a href="https://en.wikipedia.org/wiki/ISO_8601">'
"ISO 8601</a> standard to ensure that the lexicographical ordering "
"coincides with the chronological ordering. If the timestamp "
"format does not adhere to the ISO 8601 standard you will need to "
"define an expression and type for transforming the string into a "
"date or timestamp. Note currently time zones are not supported. "
                "If time is stored in epoch format, put `epoch_s` or `epoch_ms`. "
                "If no pattern is specified we fall back to using the optional "
                "defaults on a per database/column name level via the extra parameter."
),
True,
),
"extra": utils.markdown(
"Extra data to specify column metadata. Currently supports "
            'certification data of the format: `{ "certification": { "certified_by": '
'"Taylor Swift", "details": "This column is the source of truth." '
"} }`. This should be modified from the edit datasource model in "
"Explore to ensure correct formatting.",
True,
),
}
label_columns = {
"column_name": _("Column"),
"verbose_name": _("Verbose Name"),
"description": _("Description"),
"groupby": _("Groupable"),
"filterable": _("Filterable"),
"table": _("Table"),
"expression": _("Expression"),
"is_dttm": _("Is temporal"),
"python_date_format": _("Datetime Format"),
"type": _("Type"),
}
validators_columns = {
"python_date_format": [
# Restrict viable values to epoch_s, epoch_ms, or a strftime format
# which adhere's to the ISO 8601 format (without time zone).
Regexp(
re.compile(
r"""
^(
epoch_s|epoch_ms|
(?P<date>%Y(-%m(-%d)?)?)([\sT](?P<time>%H(:%M(:%S(\.%f)?)?)?))?
)$
""",
re.VERBOSE,
),
message=_("Invalid date/timestamp format"),
)
]
}
add_form_extra_fields = {
"table": QuerySelectField(
"Table",
query_factory=lambda: db.session.query(models.SqlaTable),
allow_blank=True,
widget=Select2Widget(extra_classes="readonly"),
)
}
edit_form_extra_fields = add_form_extra_fields
    def pre_add(self, item: "models.TableColumn") -> None:
logger.warning(
"This endpoint is deprecated and will be removed in version 2.0.0"
)
if app.config["OLD_API_CHECK_DATASET_OWNERSHIP"]:
check_ownership(item.table)
    def pre_update(self, item: "models.TableColumn") -> None:
logger.warning(
"This endpoint is deprecated and will be removed in version 2.0.0"
)
if app.config["OLD_API_CHECK_DATASET_OWNERSHIP"]:
check_ownership(item.table)
    def pre_delete(self, item: "models.TableColumn") -> None:
logger.warning(
"This endpoint is deprecated and will be removed in version 2.0.0"
)
if app.config["OLD_API_CHECK_DATASET_OWNERSHIP"]:
check_ownership(item.table)
class SqlMetricInlineView(CompactCRUDMixin, SupersetModelView):
datamodel = SQLAInterface(models.SqlMetric)
class_permission_name = "Dataset"
method_permission_name = MODEL_VIEW_RW_METHOD_PERMISSION_MAP
include_route_methods = RouteMethod.RELATED_VIEW_SET | RouteMethod.API_SET
list_title = _("Metrics")
show_title = _("Show Metric")
add_title = _("Add Metric")
edit_title = _("Edit Metric")
list_columns = ["metric_name", "verbose_name", "metric_type"]
edit_columns = [
"metric_name",
"description",
"verbose_name",
"metric_type",
"expression",
"table",
"d3format",
"extra",
"warning_text",
]
description_columns = {
"expression": utils.markdown(
"a valid, *aggregating* SQL expression as supported by the "
"underlying backend. Example: `count(DISTINCT userid)`",
True,
),
"d3format": utils.markdown(
"d3 formatting string as defined [here]"
"(https://github.com/d3/d3-format/blob/master/README.md#format). "
"For instance, this default formatting applies in the Table "
            "visualization and allows for different metrics to use different "
"formats",
True,
),
"extra": utils.markdown(
"Extra data to specify metric metadata. Currently supports "
'metadata of the format: `{ "certification": { "certified_by": '
'"Data Platform Team", "details": "This metric is the source of truth." '
'}, "warning_markdown": "This is a warning." }`. This should be modified '
"from the edit datasource model in Explore to ensure correct formatting.",
True,
),
}
add_columns = edit_columns
page_size = 500
label_columns = {
"metric_name": _("Metric"),
"description": _("Description"),
"verbose_name": _("Verbose Name"),
"metric_type": _("Type"),
"expression": _("SQL Expression"),
"table": _("Table"),
"d3format": _("D3 Format"),
"extra": _("Extra"),
"warning_text": _("Warning Message"),
}
add_form_extra_fields = {
"table": QuerySelectField(
"Table",
query_factory=lambda: db.session.query(models.SqlaTable),
allow_blank=True,
widget=Select2Widget(extra_classes="readonly"),
)
}
edit_form_extra_fields = add_form_extra_fields
def pre_add(self, item: "models.SqlMetric") -> None:
logger.warning(
"This endpoint is deprecated and will be removed in version 2.0.0"
)
if app.config["OLD_API_CHECK_DATASET_OWNERSHIP"]:
check_ownership(item.table)
def pre_update(self, item: "models.SqlMetric") -> None:
logger.warning(
"This endpoint is deprecated and will be removed in version 2.0.0"
)
if app.config["OLD_API_CHECK_DATASET_OWNERSHIP"]:
check_ownership(item.table)
def pre_delete(self, item: "models.SqlMetric") -> None:
logger.warning(
"This endpoint is deprecated and will be removed in version 2.0.0"
)
if app.config["OLD_API_CHECK_DATASET_OWNERSHIP"]:
check_ownership(item.table)
class RowLevelSecurityListWidget(
SupersetListWidget
): # pylint: disable=too-few-public-methods
template = "superset/models/rls/list.html"
def __init__(self, **kwargs: Any):
kwargs["appbuilder"] = current_app.appbuilder
super().__init__(**kwargs)
class RowLevelSecurityFiltersModelView(SupersetModelView, DeleteMixin):
datamodel = SQLAInterface(models.RowLevelSecurityFilter)
list_widget = cast(SupersetListWidget, RowLevelSecurityListWidget)
list_title = _("Row level security filter")
show_title = _("Show Row level security filter")
add_title = _("Add Row level security filter")
edit_title = _("Edit Row level security filter")
list_columns = [
"filter_type",
"tables",
"roles",
"group_key",
"clause",
"creator",
"modified",
]
order_columns = ["filter_type", "group_key", "clause", "modified"]
edit_columns = ["filter_type", "tables", "roles", "group_key", "clause"]
show_columns = edit_columns
search_columns = ("filter_type", "tables", "roles", "group_key", "clause")
add_columns = edit_columns
base_order = ("changed_on", "desc")
description_columns = {
"filter_type": _(
"Regular filters add where clauses to queries if a user belongs to a "
"role referenced in the filter. Base filters apply filters to all queries "
"except the roles defined in the filter, and can be used to define what "
"users can see if no RLS filters within a filter group apply to them."
),
"tables": _("These are the tables this filter will be applied to."),
"roles": _(
"For regular filters, these are the roles this filter will be "
"applied to. For base filters, these are the roles that the "
"filter DOES NOT apply to, e.g. Admin if admin should see all "
"data."
),
"group_key": _(
"Filters with the same group key will be ORed together within the group, "
"while different filter groups will be ANDed together. Undefined group "
"keys are treated as unique groups, i.e. are not grouped together. "
"For example, if a table has three filters, of which two are for "
"departments Finance and Marketing (group key = 'department'), and one "
"refers to the region Europe (group key = 'region'), the filter clause "
"would apply the filter (department = 'Finance' OR department = "
"'Marketing') AND (region = 'Europe')."
),
"clause": _(
"This is the condition that will be added to the WHERE clause. "
"For example, to only return rows for a particular client, "
"you might define a regular filter with the clause `client_id = 9`. To "
"display no rows unless a user belongs to a RLS filter role, a base "
"filter can be created with the clause `1 = 0` (always false)."
),
}
label_columns = {
"tables": _("Tables"),
"roles": _("Roles"),
"clause": _("Clause"),
"creator": _("Creator"),
"modified": _("Modified"),
}
if app.config["RLS_FORM_QUERY_REL_FIELDS"]:
add_form_query_rel_fields = app.config["RLS_FORM_QUERY_REL_FIELDS"]
edit_form_query_rel_fields = add_form_query_rel_fields
@staticmethod
def is_enabled() -> bool:
return is_feature_enabled("ROW_LEVEL_SECURITY")
@before_request
def ensure_enabled(self) -> None:
if not self.is_enabled():
raise NotFound()
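The `group_key` help text above describes how RLS clauses combine: clauses sharing a group key are OR-ed together, and the resulting groups are AND-ed. A minimal standalone sketch of that combination logic (this is not Superset's actual query-rewriting code; the function name and tuple layout are illustrative):

```python
from collections import defaultdict
from typing import List, Optional, Tuple

def combine_rls_clauses(filters: List[Tuple[Optional[str], str]]) -> str:
    """Each filter is a (group_key, clause) pair; a None key forms its own group."""
    groups = defaultdict(list)
    for index, (group_key, clause) in enumerate(filters):
        # Undefined group keys are treated as unique groups (see help text above).
        groups[group_key if group_key is not None else f"__unique_{index}"].append(clause)
    # OR within a group, AND across groups.
    return " AND ".join(
        "(" + " OR ".join(clauses) + ")" for _, clauses in sorted(groups.items())
    )
```

This reproduces the example from the description: two `department` filters and one `region` filter yield `(department = 'Finance' OR department = 'Marketing') AND (region = 'Europe')`.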
class TableModelView( # pylint: disable=too-many-ancestors
DatasourceModelView, DeleteMixin, YamlExportMixin
):
datamodel = SQLAInterface(models.SqlaTable)
class_permission_name = "Dataset"
method_permission_name = MODEL_VIEW_RW_METHOD_PERMISSION_MAP
include_route_methods = RouteMethod.CRUD_SET
list_title = _("Tables")
show_title = _("Show Table")
add_title = _("Import a table definition")
edit_title = _("Edit Table")
list_columns = ["link", "database_name", "changed_by_", "modified"]
order_columns = ["modified"]
add_columns = ["database", "schema", "table_name"]
edit_columns = [
"table_name",
"sql",
"filter_select_enabled",
"fetch_values_predicate",
"database",
"schema",
"description",
"owners",
"main_dttm_col",
"default_endpoint",
"offset",
"cache_timeout",
"is_sqllab_view",
"template_params",
"extra",
]
base_filters = [["id", DatasourceFilter, lambda: []]]
show_columns = edit_columns + ["perm", "slices"]
related_views = [
TableColumnInlineView,
SqlMetricInlineView,
]
base_order = ("changed_on", "desc")
search_columns = ("database", "schema", "table_name", "owners", "is_sqllab_view")
description_columns = {
"slices": _(
"The list of charts associated with this table. By "
"altering this datasource, you may change how these associated "
"charts behave. "
"Also note that charts need to point to a datasource, so "
"this form will fail at saving if removing charts from a "
"datasource. If you want to change the datasource for a chart, "
"overwrite the chart from the 'explore view'"
),
"offset": _("Timezone offset (in hours) for this datasource"),
"table_name": _("Name of the table that exists in the source database"),
"schema": _(
"Schema, as used only in some databases like Postgres, Redshift " "and DB2"
),
"description": Markup(
'Supports <a href="https://daringfireball.net/projects/markdown/">'
"markdown</a>"
),
"sql": _(
            "This field acts as a Superset view, meaning that Superset will "
            "run a query against this string as a subquery."
),
"fetch_values_predicate": _(
            "Predicate applied when fetching distinct values to "
"populate the filter control component. Supports "
"jinja template syntax. Applies only when "
"`Enable Filter Select` is on."
),
"default_endpoint": _(
"Redirects to this endpoint when clicking on the table "
"from the table list"
),
"filter_select_enabled": _(
"Whether to populate the filter's dropdown in the explore "
"view's filter section with a list of distinct values fetched "
"from the backend on the fly"
),
"is_sqllab_view": _(
"Whether the table was generated by the 'Visualize' flow " "in SQL Lab"
),
"template_params": _(
"A set of parameters that become available in the query using "
"Jinja templating syntax"
),
"cache_timeout": _(
"Duration (in seconds) of the caching timeout for this table. "
"A timeout of 0 indicates that the cache never expires. "
"Note this defaults to the database timeout if undefined."
),
"extra": utils.markdown(
"Extra data to specify table metadata. Currently supports "
'metadata of the format: `{ "certification": { "certified_by": '
'"Data Platform Team", "details": "This table is the source of truth." '
'}, "warning_markdown": "This is a warning." }`.',
True,
),
}
label_columns = {
"slices": _("Associated Charts"),
"link": _("Table"),
"changed_by_": _("Changed By"),
"database": _("Database"),
"database_name": _("Database"),
"changed_on_": _("Last Changed"),
"filter_select_enabled": _("Enable Filter Select"),
"schema": _("Schema"),
"default_endpoint": _("Default Endpoint"),
"offset": _("Offset"),
"cache_timeout": _("Cache Timeout"),
"table_name": _("Table Name"),
"fetch_values_predicate": _("Fetch Values Predicate"),
"owners": _("Owners"),
"main_dttm_col": _("Main Datetime Column"),
"description": _("Description"),
"is_sqllab_view": _("SQL Lab View"),
"template_params": _("Template parameters"),
"extra": _("Extra"),
"modified": _("Modified"),
}
edit_form_extra_fields = {
"database": QuerySelectField(
"Database",
query_factory=lambda: db.session.query(models.Database),
widget=Select2Widget(extra_classes="readonly"),
)
}
def pre_add(self, item: "TableModelView") -> None:
logger.warning(
"This endpoint is deprecated and will be removed in version 2.0.0"
)
validate_sqlatable(item)
def pre_update(self, item: "TableModelView") -> None:
logger.warning(
"This endpoint is deprecated and will be removed in version 2.0.0"
)
if app.config["OLD_API_CHECK_DATASET_OWNERSHIP"]:
check_ownership(item)
def post_add( # pylint: disable=arguments-differ
self,
item: "TableModelView",
flash_message: bool = True,
fetch_metadata: bool = True,
) -> None:
if fetch_metadata:
item.fetch_metadata()
create_table_permissions(item)
if flash_message:
flash(
_(
"The table was created. "
"As part of this two-phase configuration "
"process, you should now click the edit button by "
"the new table to configure it."
),
"info",
)
def post_update(self, item: "TableModelView") -> None:
self.post_add(item, flash_message=False, fetch_metadata=False)
def _delete(self, pk: int) -> None:
DeleteMixin._delete(self, pk)
@expose("/edit/<pk>", methods=["GET", "POST"])
@has_access
def edit(self, pk: str) -> FlaskResponse:
"""Simple hack to redirect to explore view after saving"""
resp = super().edit(pk)
if isinstance(resp, str):
return resp
return redirect("/superset/explore/table/{}/".format(pk))
@action(
"refresh", __("Refresh Metadata"), __("Refresh column metadata"), "fa-refresh"
)
    def refresh(  # pylint: disable=no-self-use
self, tables: Union["TableModelView", List["TableModelView"]]
) -> FlaskResponse:
logger.warning(
"This endpoint is deprecated and will be removed in version 2.0.0"
)
if not isinstance(tables, list):
tables = [tables]
@dataclass
class RefreshResults:
successes: List[TableModelView] = field(default_factory=list)
failures: List[TableModelView] = field(default_factory=list)
added: Dict[str, List[str]] = field(default_factory=dict)
removed: Dict[str, List[str]] = field(default_factory=dict)
modified: Dict[str, List[str]] = field(default_factory=dict)
results = RefreshResults()
for table_ in tables:
try:
metadata_results = table_.fetch_metadata()
if metadata_results.added:
results.added[table_.table_name] = metadata_results.added
if metadata_results.removed:
results.removed[table_.table_name] = metadata_results.removed
if metadata_results.modified:
results.modified[table_.table_name] = metadata_results.modified
results.successes.append(table_)
except Exception: # pylint: disable=broad-except
results.failures.append(table_)
if len(results.successes) > 0:
success_msg = _(
"Metadata refreshed for the following table(s): %(tables)s",
tables=", ".join([t.table_name for t in results.successes]),
)
flash(success_msg, "info")
if results.added:
added_tables = []
for table, cols in results.added.items():
added_tables.append(f"{table} ({', '.join(cols)})")
flash(
_(
"The following tables added new columns: %(tables)s",
tables=", ".join(added_tables),
),
"info",
)
if results.removed:
removed_tables = []
for table, cols in results.removed.items():
removed_tables.append(f"{table} ({', '.join(cols)})")
flash(
_(
"The following tables removed columns: %(tables)s",
tables=", ".join(removed_tables),
),
"info",
)
if results.modified:
modified_tables = []
for table, cols in results.modified.items():
modified_tables.append(f"{table} ({', '.join(cols)})")
flash(
_(
                    "The following tables updated column metadata: %(tables)s",
tables=", ".join(modified_tables),
),
"info",
)
if len(results.failures) > 0:
failure_msg = _(
"Unable to refresh metadata for the following table(s): %(tables)s",
tables=", ".join([t.table_name for t in results.failures]),
)
flash(failure_msg, "danger")
return redirect("/tablemodelview/list/")
@expose("/list/")
@has_access
def list(self) -> FlaskResponse:
if not is_feature_enabled("ENABLE_REACT_CRUD_VIEWS"):
return super().list()
        return super().render_app_template()

# File: superset/connectors/sqla/views.py (from sage-superset-1.0.0)
import json
import re
from typing import Any, Dict
from flask_babel import lazy_gettext as _
from marshmallow import fields, pre_load, Schema, ValidationError
from marshmallow.validate import Length
get_delete_ids_schema = {"type": "array", "items": {"type": "integer"}}
get_export_ids_schema = {"type": "array", "items": {"type": "integer"}}
def validate_python_date_format(value: str) -> None:
regex = re.compile(
r"""
^(
epoch_s|epoch_ms|
(?P<date>%Y(-%m(-%d)?)?)([\sT](?P<time>%H(:%M(:%S(\.%f)?)?)?))?
)$
""",
re.VERBOSE,
)
match = regex.match(value or "")
if not match:
raise ValidationError(_("Invalid date/timestamp format"))
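As a quick illustration of what `validate_python_date_format` accepts, the same regex can be exercised standalone: the special tokens `epoch_s`/`epoch_ms`, or a `%Y`-rooted date that may be progressively refined with month, day, and a time component. The helper name below is hypothetical:

```python
import re

# Same pattern as validate_python_date_format above, reused outside marshmallow.
DATE_FORMAT_REGEX = re.compile(
    r"""
    ^(
        epoch_s|epoch_ms|
        (?P<date>%Y(-%m(-%d)?)?)([\sT](?P<time>%H(:%M(:%S(\.%f)?)?)?))?
    )$
    """,
    re.VERBOSE,
)

def is_valid_python_date_format(value: str) -> bool:
    # The validator raises ValidationError; this sketch just reports a boolean.
    return bool(DATE_FORMAT_REGEX.match(value or ""))
```

Note the hierarchy: `%Y-%m` passes but a day without a month (`%Y-%d`) does not, and a time component is only accepted after a date prefix.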
class DatasetColumnsPutSchema(Schema):
id = fields.Integer()
column_name = fields.String(required=True, validate=Length(1, 255))
type = fields.String(allow_none=True)
    verbose_name = fields.String(allow_none=True, validate=Length(1, 1024))
description = fields.String(allow_none=True)
expression = fields.String(allow_none=True)
extra = fields.String(allow_none=True)
filterable = fields.Boolean()
groupby = fields.Boolean()
is_active = fields.Boolean()
is_dttm = fields.Boolean(default=False)
python_date_format = fields.String(
allow_none=True, validate=[Length(1, 255), validate_python_date_format]
)
uuid = fields.UUID(allow_none=True)
class DatasetMetricsPutSchema(Schema):
id = fields.Integer()
expression = fields.String(required=True)
description = fields.String(allow_none=True)
extra = fields.String(allow_none=True)
metric_name = fields.String(required=True, validate=Length(1, 255))
metric_type = fields.String(allow_none=True, validate=Length(1, 32))
d3format = fields.String(allow_none=True, validate=Length(1, 128))
    verbose_name = fields.String(allow_none=True, validate=Length(1, 1024))
warning_text = fields.String(allow_none=True)
uuid = fields.UUID(allow_none=True)
class DatasetPostSchema(Schema):
database = fields.Integer(required=True)
schema = fields.String(validate=Length(0, 250))
table_name = fields.String(required=True, allow_none=False, validate=Length(1, 250))
owners = fields.List(fields.Integer())
class DatasetPutSchema(Schema):
table_name = fields.String(allow_none=True, validate=Length(1, 250))
database_id = fields.Integer()
sql = fields.String(allow_none=True)
filter_select_enabled = fields.Boolean(allow_none=True)
fetch_values_predicate = fields.String(allow_none=True, validate=Length(0, 1000))
schema = fields.String(allow_none=True, validate=Length(0, 255))
description = fields.String(allow_none=True)
main_dttm_col = fields.String(allow_none=True)
offset = fields.Integer(allow_none=True)
default_endpoint = fields.String(allow_none=True)
cache_timeout = fields.Integer(allow_none=True)
is_sqllab_view = fields.Boolean(allow_none=True)
template_params = fields.String(allow_none=True)
owners = fields.List(fields.Integer())
columns = fields.List(fields.Nested(DatasetColumnsPutSchema))
metrics = fields.List(fields.Nested(DatasetMetricsPutSchema))
extra = fields.String(allow_none=True)
class DatasetRelatedChart(Schema):
id = fields.Integer()
slice_name = fields.String()
viz_type = fields.String()
class DatasetRelatedDashboard(Schema):
id = fields.Integer()
json_metadata = fields.Dict()
slug = fields.String()
title = fields.String()
class DatasetRelatedCharts(Schema):
count = fields.Integer(description="Chart count")
result = fields.List(
        fields.Nested(DatasetRelatedChart), description="A list of charts"
)
class DatasetRelatedDashboards(Schema):
count = fields.Integer(description="Dashboard count")
result = fields.List(
fields.Nested(DatasetRelatedDashboard), description="A list of dashboards"
)
class DatasetRelatedObjectsResponse(Schema):
charts = fields.Nested(DatasetRelatedCharts)
dashboards = fields.Nested(DatasetRelatedDashboards)
class ImportV1ColumnSchema(Schema):
# pylint: disable=no-self-use, unused-argument
@pre_load
def fix_extra(self, data: Dict[str, Any], **kwargs: Any) -> Dict[str, Any]:
"""
        Fix for extra initially being exported as a string.
"""
if isinstance(data.get("extra"), str):
data["extra"] = json.loads(data["extra"])
return data
column_name = fields.String(required=True)
extra = fields.Dict(allow_none=True)
verbose_name = fields.String(allow_none=True)
is_dttm = fields.Boolean(default=False, allow_none=True)
is_active = fields.Boolean(default=True, allow_none=True)
type = fields.String(allow_none=True)
groupby = fields.Boolean()
filterable = fields.Boolean()
expression = fields.String(allow_none=True)
description = fields.String(allow_none=True)
python_date_format = fields.String(allow_none=True)
class ImportV1MetricSchema(Schema):
# pylint: disable=no-self-use, unused-argument
@pre_load
def fix_extra(self, data: Dict[str, Any], **kwargs: Any) -> Dict[str, Any]:
"""
        Fix for extra initially being exported as a string.
"""
if isinstance(data.get("extra"), str):
data["extra"] = json.loads(data["extra"])
return data
metric_name = fields.String(required=True)
verbose_name = fields.String(allow_none=True)
metric_type = fields.String(allow_none=True)
expression = fields.String(required=True)
description = fields.String(allow_none=True)
d3format = fields.String(allow_none=True)
extra = fields.Dict(allow_none=True)
warning_text = fields.String(allow_none=True)
class ImportV1DatasetSchema(Schema):
# pylint: disable=no-self-use, unused-argument
@pre_load
def fix_extra(self, data: Dict[str, Any], **kwargs: Any) -> Dict[str, Any]:
"""
        Fix for extra initially being exported as a string.
"""
if isinstance(data.get("extra"), str):
data["extra"] = json.loads(data["extra"])
return data
table_name = fields.String(required=True)
main_dttm_col = fields.String(allow_none=True)
description = fields.String(allow_none=True)
default_endpoint = fields.String(allow_none=True)
offset = fields.Integer()
cache_timeout = fields.Integer(allow_none=True)
schema = fields.String(allow_none=True)
sql = fields.String(allow_none=True)
params = fields.Dict(allow_none=True)
template_params = fields.Dict(allow_none=True)
filter_select_enabled = fields.Boolean()
fetch_values_predicate = fields.String(allow_none=True)
extra = fields.Dict(allow_none=True)
uuid = fields.UUID(required=True)
columns = fields.List(fields.Nested(ImportV1ColumnSchema))
metrics = fields.List(fields.Nested(ImportV1MetricSchema))
version = fields.String(required=True)
database_uuid = fields.UUID(required=True)
    data = fields.URL()

# File: superset/datasets/schemas.py (from sage-superset-1.0.0)
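The three `fix_extra` `@pre_load` hooks in the import schemas above all perform the same coercion: older exports serialized `extra` as a JSON string, so it is decoded back into a dict before schema validation runs. A stripped-down, framework-free sketch of that coercion:

```python
import json
from typing import Any, Dict

def fix_extra(data: Dict[str, Any]) -> Dict[str, Any]:
    # Mirrors the @pre_load hooks above: decode a legacy string-encoded
    # "extra" field; dicts (and missing keys) pass through untouched.
    if isinstance(data.get("extra"), str):
        data["extra"] = json.loads(data["extra"])
    return data
```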
import logging
from typing import Any, Dict, List, Optional
from flask import current_app
from sqlalchemy.exc import SQLAlchemyError
from superset.connectors.sqla.models import SqlaTable, SqlMetric, TableColumn
from superset.dao.base import BaseDAO
from superset.extensions import db
from superset.models.core import Database
from superset.models.dashboard import Dashboard
from superset.models.slice import Slice
from superset.views.base import DatasourceFilter
logger = logging.getLogger(__name__)
class DatasetDAO(BaseDAO): # pylint: disable=too-many-public-methods
model_cls = SqlaTable
base_filter = DatasourceFilter
@staticmethod
def get_owner_by_id(owner_id: int) -> Optional[object]:
return (
db.session.query(current_app.appbuilder.sm.user_model)
.filter_by(id=owner_id)
.one_or_none()
)
@staticmethod
def get_database_by_id(database_id: int) -> Optional[Database]:
try:
return db.session.query(Database).filter_by(id=database_id).one_or_none()
except SQLAlchemyError as ex: # pragma: no cover
logger.error("Could not get database by id: %s", str(ex), exc_info=True)
return None
@staticmethod
def get_related_objects(database_id: int) -> Dict[str, Any]:
charts = (
db.session.query(Slice)
.filter(
Slice.datasource_id == database_id, Slice.datasource_type == "table"
)
.all()
)
chart_ids = [chart.id for chart in charts]
dashboards = (
(
db.session.query(Dashboard)
.join(Dashboard.slices)
.filter(Slice.id.in_(chart_ids))
)
.distinct()
.all()
)
return dict(charts=charts, dashboards=dashboards)
@staticmethod
def validate_table_exists(
database: Database, table_name: str, schema: Optional[str]
) -> bool:
try:
database.get_table(table_name, schema=schema)
return True
except SQLAlchemyError as ex: # pragma: no cover
logger.warning("Got an error %s validating table: %s", str(ex), table_name)
return False
@staticmethod
def validate_uniqueness(
database_id: int,
schema: Optional[str],
name: str,
dataset_id: Optional[int] = None,
) -> bool:
dataset_query = db.session.query(SqlaTable).filter(
SqlaTable.table_name == name,
SqlaTable.schema == schema,
SqlaTable.database_id == database_id,
)
if dataset_id:
# make sure the dataset found is different from the target (if any)
dataset_query = dataset_query.filter(SqlaTable.id != dataset_id)
return not db.session.query(dataset_query.exists()).scalar()
@staticmethod
def validate_update_uniqueness(
database_id: int, dataset_id: int, name: str
) -> bool:
dataset_query = db.session.query(SqlaTable).filter(
SqlaTable.table_name == name,
SqlaTable.database_id == database_id,
SqlaTable.id != dataset_id,
)
return not db.session.query(dataset_query.exists()).scalar()
@staticmethod
def validate_columns_exist(dataset_id: int, columns_ids: List[int]) -> bool:
dataset_query = (
db.session.query(TableColumn.id).filter(
TableColumn.table_id == dataset_id, TableColumn.id.in_(columns_ids)
)
).all()
return len(columns_ids) == len(dataset_query)
@staticmethod
def validate_columns_uniqueness(dataset_id: int, columns_names: List[str]) -> bool:
dataset_query = (
db.session.query(TableColumn.id).filter(
TableColumn.table_id == dataset_id,
TableColumn.column_name.in_(columns_names),
)
).all()
return len(dataset_query) == 0
@staticmethod
def validate_metrics_exist(dataset_id: int, metrics_ids: List[int]) -> bool:
dataset_query = (
db.session.query(SqlMetric.id).filter(
SqlMetric.table_id == dataset_id, SqlMetric.id.in_(metrics_ids)
)
).all()
return len(metrics_ids) == len(dataset_query)
@staticmethod
def validate_metrics_uniqueness(dataset_id: int, metrics_names: List[str]) -> bool:
dataset_query = (
db.session.query(SqlMetric.id).filter(
SqlMetric.table_id == dataset_id,
SqlMetric.metric_name.in_(metrics_names),
)
).all()
return len(dataset_query) == 0
@classmethod
def update(
cls, model: SqlaTable, properties: Dict[str, Any], commit: bool = True
) -> Optional[SqlaTable]:
"""
Updates a Dataset model on the metadata DB
"""
if "columns" in properties:
properties["columns"] = cls.update_columns(
model, properties.get("columns", []), commit=commit
)
if "metrics" in properties:
properties["metrics"] = cls.update_metrics(
model, properties.get("metrics", []), commit=commit
)
return super().update(model, properties, commit=False)
@classmethod
def update_columns(
cls,
model: SqlaTable,
property_columns: List[Dict[str, Any]],
commit: bool = True,
) -> List[TableColumn]:
"""
Creates/updates and/or deletes a list of columns, based on a
list of Dict.
- If a column Dict has an `id` property then we update.
        - If a column Dict does not have an `id` then we create a new column.
- If there are extra columns on the metadata db that are not defined on the List
then we delete.
"""
new_columns = []
for column in property_columns:
column_id = column.get("id")
if column_id:
column_obj = db.session.query(TableColumn).get(column_id)
column_obj = DatasetDAO.update_column(column_obj, column, commit=commit)
else:
column_obj = DatasetDAO.create_column(column, commit=commit)
new_columns.append(column_obj)
        # Check if an existing column is missing from the properties and delete it
for existing_column in model.columns:
if existing_column.id not in [column.id for column in new_columns]:
DatasetDAO.delete_column(existing_column)
return new_columns
@classmethod
def update_metrics(
cls,
model: SqlaTable,
property_metrics: List[Dict[str, Any]],
commit: bool = True,
) -> List[SqlMetric]:
"""
Creates/updates and/or deletes a list of metrics, based on a
list of Dict.
- If a metric Dict has an `id` property then we update.
- If a metric Dict does not have an `id` then we create a new metric.
- If there are extra metrics on the metadata db that are not defined on the List
then we delete.
"""
new_metrics = []
for metric in property_metrics:
metric_id = metric.get("id")
            if metric_id:
metric_obj = db.session.query(SqlMetric).get(metric_id)
metric_obj = DatasetDAO.update_metric(metric_obj, metric, commit=commit)
else:
metric_obj = DatasetDAO.create_metric(metric, commit=commit)
new_metrics.append(metric_obj)
        # Check if an existing metric is missing from the properties and delete it
for existing_metric in model.metrics:
if existing_metric.id not in [metric.id for metric in new_metrics]:
DatasetDAO.delete_metric(existing_metric)
return new_metrics
@classmethod
def find_dataset_column(
cls, dataset_id: int, column_id: int
) -> Optional[TableColumn]:
# We want to apply base dataset filters
dataset = DatasetDAO.find_by_id(dataset_id)
if not dataset:
return None
return (
db.session.query(TableColumn)
.filter(TableColumn.table_id == dataset_id, TableColumn.id == column_id)
.one_or_none()
)
@classmethod
def update_column(
cls, model: TableColumn, properties: Dict[str, Any], commit: bool = True
) -> Optional[TableColumn]:
return DatasetColumnDAO.update(model, properties, commit=commit)
@classmethod
def create_column(
cls, properties: Dict[str, Any], commit: bool = True
) -> Optional[TableColumn]:
"""
        Creates a Dataset column on the metadata DB
"""
return DatasetColumnDAO.create(properties, commit=commit)
@classmethod
def delete_column(
cls, model: TableColumn, commit: bool = True
) -> Optional[TableColumn]:
"""
Deletes a Dataset column
"""
return cls.delete(model, commit=commit)
@classmethod
def find_dataset_metric(
cls, dataset_id: int, metric_id: int
) -> Optional[SqlMetric]:
# We want to apply base dataset filters
dataset = DatasetDAO.find_by_id(dataset_id)
if not dataset:
return None
return db.session.query(SqlMetric).get(metric_id)
@classmethod
def delete_metric(
cls, model: SqlMetric, commit: bool = True
    ) -> Optional[SqlMetric]:
"""
Deletes a Dataset metric
"""
return cls.delete(model, commit=commit)
@classmethod
def update_metric(
cls, model: SqlMetric, properties: Dict[str, Any], commit: bool = True
) -> Optional[SqlMetric]:
return DatasetMetricDAO.update(model, properties, commit=commit)
@classmethod
def create_metric(
cls, properties: Dict[str, Any], commit: bool = True
) -> Optional[SqlMetric]:
"""
        Creates a Dataset metric on the metadata DB
"""
return DatasetMetricDAO.create(properties, commit=commit)
@staticmethod
def bulk_delete(models: Optional[List[SqlaTable]], commit: bool = True) -> None:
item_ids = [model.id for model in models] if models else []
# bulk delete, first delete related data
if models:
for model in models:
model.owners = []
db.session.merge(model)
db.session.query(SqlMetric).filter(SqlMetric.table_id.in_(item_ids)).delete(
synchronize_session="fetch"
)
db.session.query(TableColumn).filter(
TableColumn.table_id.in_(item_ids)
).delete(synchronize_session="fetch")
# bulk delete itself
try:
db.session.query(SqlaTable).filter(SqlaTable.id.in_(item_ids)).delete(
synchronize_session="fetch"
)
if commit:
db.session.commit()
except SQLAlchemyError as ex:
if commit:
db.session.rollback()
raise ex
class DatasetColumnDAO(BaseDAO):
model_cls = TableColumn
class DatasetMetricDAO(BaseDAO):
    model_cls = SqlMetric

# File: superset/datasets/dao.py (from sage-superset-1.0.0)
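The `update_columns`/`update_metrics` methods in `DatasetDAO` above follow a reconcile-by-id pattern: payload items with an `id` update an existing row, items without one create a new row, and existing rows absent from the payload are deleted. A minimal in-memory sketch of the same pattern using plain dicts (the ORM, sessions, and commits are omitted; names are illustrative):

```python
from typing import Any, Dict, List

def reconcile(existing: List[Dict[str, Any]], payload: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    by_id = {item["id"]: item for item in existing}
    next_id = max(by_id, default=0) + 1  # stand-in for DB-assigned ids
    result = []
    for item in payload:
        if "id" in item:
            by_id[item["id"]].update(item)    # update the existing record
            result.append(by_id[item["id"]])
        else:
            created = dict(item, id=next_id)  # create a new record
            next_id += 1
            result.append(created)
    # Anything in `existing` whose id is absent from `result` would be deleted,
    # exactly like the trailing delete loops in update_columns/update_metrics.
    return result
```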
import logging
from collections import Counter
from typing import Any, Dict, List, Optional
from flask_appbuilder.models.sqla import Model
from flask_appbuilder.security.sqla.models import User
from marshmallow import ValidationError
from superset.commands.base import BaseCommand, UpdateMixin
from superset.connectors.sqla.models import SqlaTable
from superset.dao.exceptions import DAOUpdateFailedError
from superset.datasets.commands.exceptions import (
DatabaseChangeValidationError,
DatasetColumnNotFoundValidationError,
DatasetColumnsDuplicateValidationError,
DatasetColumnsExistsValidationError,
DatasetExistsValidationError,
DatasetForbiddenError,
DatasetInvalidError,
DatasetMetricsDuplicateValidationError,
DatasetMetricsExistsValidationError,
DatasetMetricsNotFoundValidationError,
DatasetNotFoundError,
DatasetUpdateFailedError,
)
from superset.datasets.dao import DatasetDAO
from superset.exceptions import SupersetSecurityException
from superset.views.base import check_ownership
logger = logging.getLogger(__name__)
class UpdateDatasetCommand(UpdateMixin, BaseCommand):
def __init__(
self,
user: User,
model_id: int,
data: Dict[str, Any],
override_columns: bool = False,
):
self._actor = user
self._model_id = model_id
self._properties = data.copy()
self._model: Optional[SqlaTable] = None
self.override_columns = override_columns
def run(self) -> Model:
self.validate()
if self._model:
try:
dataset = DatasetDAO.update(
model=self._model, properties=self._properties,
)
return dataset
except DAOUpdateFailedError as ex:
logger.exception(ex.exception)
raise DatasetUpdateFailedError() from ex
raise DatasetUpdateFailedError()
def validate(self) -> None:
exceptions: List[ValidationError] = []
owner_ids: Optional[List[int]] = self._properties.get("owners")
# Validate/populate model exists
self._model = DatasetDAO.find_by_id(self._model_id)
if not self._model:
raise DatasetNotFoundError()
# Check ownership
try:
check_ownership(self._model)
except SupersetSecurityException as ex:
raise DatasetForbiddenError() from ex
database_id = self._properties.get("database", None)
table_name = self._properties.get("table_name", None)
# Validate uniqueness
if not DatasetDAO.validate_update_uniqueness(
self._model.database_id, self._model_id, table_name
):
exceptions.append(DatasetExistsValidationError(table_name))
# Validate/Populate database not allowed to change
        if database_id and database_id != self._model.database_id:
exceptions.append(DatabaseChangeValidationError())
# Validate/Populate owner
try:
owners = self.populate_owners(self._actor, owner_ids)
self._properties["owners"] = owners
except ValidationError as ex:
exceptions.append(ex)
# Validate columns
columns = self._properties.get("columns")
if columns:
self._validate_columns(columns, exceptions)
# Validate metrics
metrics = self._properties.get("metrics")
if metrics:
self._validate_metrics(metrics, exceptions)
if exceptions:
exception = DatasetInvalidError()
exception.add_list(exceptions)
raise exception
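`validate()` above collects every failed check into a list and raises a single aggregate error at the end, so API callers see all problems in one response rather than one at a time. A minimal framework-free sketch of that collect-then-raise pattern (class and field names are illustrative):

```python
from typing import List

class AggregateValidationError(Exception):
    def __init__(self, errors: List[str]):
        super().__init__("; ".join(errors))
        self.errors = errors

def validate_dataset(name: str, owner_ids: List[int]) -> None:
    errors: List[str] = []
    if not name:
        errors.append("table_name is required")
    if not owner_ids:
        errors.append("at least one owner is required")
    # Raise once, carrying every collected error.
    if errors:
        raise AggregateValidationError(errors)
```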
def _validate_columns(
self, columns: List[Dict[str, Any]], exceptions: List[ValidationError]
) -> None:
# Validate duplicates on data
if self._get_duplicates(columns, "column_name"):
exceptions.append(DatasetColumnsDuplicateValidationError())
else:
# validate invalid id's
columns_ids: List[int] = [
column["id"] for column in columns if "id" in column
]
if not DatasetDAO.validate_columns_exist(self._model_id, columns_ids):
exceptions.append(DatasetColumnNotFoundValidationError())
# validate new column names uniqueness
if not self.override_columns:
columns_names: List[str] = [
column["column_name"] for column in columns if "id" not in column
]
if not DatasetDAO.validate_columns_uniqueness(
self._model_id, columns_names
):
exceptions.append(DatasetColumnsExistsValidationError())
def _validate_metrics(
self, metrics: List[Dict[str, Any]], exceptions: List[ValidationError]
) -> None:
if self._get_duplicates(metrics, "metric_name"):
exceptions.append(DatasetMetricsDuplicateValidationError())
else:
# validate invalid id's
metrics_ids: List[int] = [
metric["id"] for metric in metrics if "id" in metric
]
if not DatasetDAO.validate_metrics_exist(self._model_id, metrics_ids):
exceptions.append(DatasetMetricsNotFoundValidationError())
# validate new metric names uniqueness
metric_names: List[str] = [
metric["metric_name"] for metric in metrics if "id" not in metric
]
if not DatasetDAO.validate_metrics_uniqueness(self._model_id, metric_names):
exceptions.append(DatasetMetricsExistsValidationError())
@staticmethod
def _get_duplicates(data: List[Dict[str, Any]], key: str) -> List[str]:
duplicates = [
name
for name, count in Counter([item[key] for item in data]).items()
if count > 1
]
        return duplicates

# File: superset/datasets/commands/update.py (from sage-superset-1.0.0)
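`_get_duplicates` in `UpdateDatasetCommand` above relies on `collections.Counter` to find repeated names; the same logic works standalone:

```python
from collections import Counter
from typing import Any, Dict, List

def get_duplicates(data: List[Dict[str, Any]], key: str) -> List[str]:
    # Any value of `key` that appears more than once is reported.
    return [name for name, count in Counter(item[key] for item in data).items() if count > 1]
```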
from flask_babel import lazy_gettext as _
from marshmallow.validate import ValidationError
from superset.commands.exceptions import (
CommandException,
CommandInvalidError,
CreateFailedError,
DeleteFailedError,
ForbiddenError,
ImportFailedError,
UpdateFailedError,
)
def get_dataset_exist_error_msg(full_name: str) -> str:
return _("Dataset %(name)s already exists", name=full_name)
class DatabaseNotFoundValidationError(ValidationError):
"""
Marshmallow validation error for database does not exist
"""
def __init__(self) -> None:
super().__init__([_("Database does not exist")], field_name="database")
class DatabaseChangeValidationError(ValidationError):
"""
Marshmallow validation error database changes are not allowed on update
"""
def __init__(self) -> None:
super().__init__([_("Database not allowed to change")], field_name="database")
class DatasetExistsValidationError(ValidationError):
"""
Marshmallow validation error for dataset already exists
"""
def __init__(self, table_name: str) -> None:
super().__init__(
[get_dataset_exist_error_msg(table_name)], field_name="table_name"
)
class DatasetColumnNotFoundValidationError(ValidationError):
"""
Marshmallow validation error when dataset column for update does not exist
"""
def __init__(self) -> None:
super().__init__([_("One or more columns do not exist")], field_name="columns")
class DatasetColumnsDuplicateValidationError(ValidationError):
"""
Marshmallow validation error when dataset columns have a duplicate on the list
"""
def __init__(self) -> None:
super().__init__(
[_("One or more columns are duplicated")], field_name="columns"
)
class DatasetColumnsExistsValidationError(ValidationError):
"""
Marshmallow validation error when dataset columns already exist
"""
def __init__(self) -> None:
super().__init__([_("One or more columns already exist")], field_name="columns")
class DatasetMetricsNotFoundValidationError(ValidationError):
"""
Marshmallow validation error when dataset metric for update does not exist
"""
def __init__(self) -> None:
super().__init__([_("One or more metrics do not exist")], field_name="metrics")
class DatasetMetricsDuplicateValidationError(ValidationError):
"""
Marshmallow validation error when dataset metrics have a duplicate on the list
"""
def __init__(self) -> None:
super().__init__(
[_("One or more metrics are duplicated")], field_name="metrics"
)
class DatasetMetricsExistsValidationError(ValidationError):
"""
Marshmallow validation error when dataset metrics already exist
"""
def __init__(self) -> None:
super().__init__([_("One or more metrics already exist")], field_name="metrics")
class TableNotFoundValidationError(ValidationError):
"""
Marshmallow validation error when a table does not exist on the database
"""
def __init__(self, table_name: str) -> None:
super().__init__(
[
_(
"Table [%(table_name)s] could not be found, "
"please double check your "
"database connection, schema, and "
"table name",
table_name=table_name,
)
],
field_name="table_name",
)
class OwnersNotFoundValidationError(ValidationError):
def __init__(self) -> None:
super().__init__([_("Owners are invalid")], field_name="owners")
class DatasetNotFoundError(CommandException):
status = 404
message = _("Dataset does not exist")
class DatasetInvalidError(CommandInvalidError):
message = _("Dataset parameters are invalid.")
class DatasetCreateFailedError(CreateFailedError):
message = _("Dataset could not be created.")
class DatasetUpdateFailedError(UpdateFailedError):
message = _("Dataset could not be updated.")
class DatasetDeleteFailedError(DeleteFailedError):
message = _("Dataset could not be deleted.")
class DatasetBulkDeleteFailedError(DeleteFailedError):
message = _("Dataset(s) could not be bulk deleted.")
class DatasetRefreshFailedError(UpdateFailedError):
message = _("Dataset could not be updated.")
class DatasetForbiddenError(ForbiddenError):
message = _("Changing this dataset is forbidden")
class DatasetImportError(ImportFailedError):
message = _("Import dataset failed for an unknown reason")
class DatasetAccessDeniedError(ForbiddenError):
    message = _("You don't have access to this dataset.")


# Source: superset/datasets/commands/exceptions.py
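The command code in `update.py` above collects every field-level problem before failing. A standard-library-only sketch of that collect-then-raise pattern, with minimal stand-ins for the marshmallow `ValidationError` and the command exception base (class bodies and field names here are illustrative):

```python
from typing import Any, Dict, List

class ValidationError(Exception):
    # Minimal stand-in for marshmallow.validate.ValidationError.
    def __init__(self, messages: List[str], field_name: str) -> None:
        super().__init__(messages)
        self.messages = messages
        self.field_name = field_name

class CommandInvalidError(Exception):
    def __init__(self, message: str) -> None:
        super().__init__(message)
        self.errors: List[ValidationError] = []

    def add_list(self, errors: List[ValidationError]) -> None:
        self.errors.extend(errors)

def validate(payload: Dict[str, Any]) -> None:
    # Collect every problem, then raise a single aggregate exception.
    exceptions: List[ValidationError] = []
    if not payload.get("table_name"):
        exceptions.append(ValidationError(["Missing table name"], "table_name"))
    if "database" in payload:
        exceptions.append(
            ValidationError(["Database not allowed to change"], "database")
        )
    if exceptions:
        exception = CommandInvalidError("Dataset parameters are invalid.")
        exception.add_list(exceptions)
        raise exception
```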
import json
import logging
from typing import Any, Callable, Dict, List, Optional
import yaml
from flask_appbuilder import Model
from sqlalchemy.orm import Session
from sqlalchemy.orm.session import make_transient
from superset import db
from superset.commands.base import BaseCommand
from superset.commands.importers.exceptions import IncorrectVersionError
from superset.connectors.base.models import BaseColumn, BaseDatasource, BaseMetric
from superset.connectors.druid.models import (
DruidCluster,
DruidColumn,
DruidDatasource,
DruidMetric,
)
from superset.connectors.sqla.models import SqlaTable, SqlMetric, TableColumn
from superset.databases.commands.exceptions import DatabaseNotFoundError
from superset.models.core import Database
from superset.utils.dict_import_export import DATABASES_KEY, DRUID_CLUSTERS_KEY
logger = logging.getLogger(__name__)
def lookup_sqla_table(table: SqlaTable) -> Optional[SqlaTable]:
return (
db.session.query(SqlaTable)
.join(Database)
.filter(
SqlaTable.table_name == table.table_name,
SqlaTable.schema == table.schema,
Database.id == table.database_id,
)
.first()
)
def lookup_sqla_database(table: SqlaTable) -> Optional[Database]:
database = (
db.session.query(Database)
.filter_by(database_name=table.params_dict["database_name"])
.one_or_none()
)
if database is None:
raise DatabaseNotFoundError
return database
def lookup_druid_cluster(datasource: DruidDatasource) -> Optional[DruidCluster]:
return db.session.query(DruidCluster).filter_by(id=datasource.cluster_id).first()
def lookup_druid_datasource(datasource: DruidDatasource) -> Optional[DruidDatasource]:
return (
db.session.query(DruidDatasource)
.filter(
DruidDatasource.datasource_name == datasource.datasource_name,
DruidDatasource.cluster_id == datasource.cluster_id,
)
.first()
)
def import_dataset(
i_datasource: BaseDatasource,
database_id: Optional[int] = None,
import_time: Optional[int] = None,
) -> int:
    """Imports the datasource from the object to the database.

    Metrics, columns and the datasource itself will be overridden if they
    already exist. This function can be used to import/export datasources
    between multiple Superset instances. Audit metadata isn't copied over.
    """
lookup_database: Callable[[BaseDatasource], Optional[Database]]
lookup_datasource: Callable[[BaseDatasource], Optional[BaseDatasource]]
if isinstance(i_datasource, SqlaTable):
lookup_database = lookup_sqla_database
lookup_datasource = lookup_sqla_table
else:
lookup_database = lookup_druid_cluster
lookup_datasource = lookup_druid_datasource
return import_datasource(
db.session,
i_datasource,
lookup_database,
lookup_datasource,
import_time,
database_id,
)
def lookup_sqla_metric(session: Session, metric: SqlMetric) -> SqlMetric:
return (
session.query(SqlMetric)
.filter(
SqlMetric.table_id == metric.table_id,
SqlMetric.metric_name == metric.metric_name,
)
.first()
)
def lookup_druid_metric(session: Session, metric: DruidMetric) -> DruidMetric:
return (
session.query(DruidMetric)
.filter(
DruidMetric.datasource_id == metric.datasource_id,
DruidMetric.metric_name == metric.metric_name,
)
.first()
)
def import_metric(session: Session, metric: BaseMetric) -> BaseMetric:
if isinstance(metric, SqlMetric):
lookup_metric = lookup_sqla_metric
else:
lookup_metric = lookup_druid_metric
return import_simple_obj(session, metric, lookup_metric)
def lookup_sqla_column(session: Session, column: TableColumn) -> TableColumn:
return (
session.query(TableColumn)
.filter(
TableColumn.table_id == column.table_id,
TableColumn.column_name == column.column_name,
)
.first()
)
def lookup_druid_column(session: Session, column: DruidColumn) -> DruidColumn:
return (
session.query(DruidColumn)
.filter(
DruidColumn.datasource_id == column.datasource_id,
DruidColumn.column_name == column.column_name,
)
.first()
)
def import_column(session: Session, column: BaseColumn) -> BaseColumn:
if isinstance(column, TableColumn):
lookup_column = lookup_sqla_column
else:
lookup_column = lookup_druid_column
return import_simple_obj(session, column, lookup_column)
def import_datasource( # pylint: disable=too-many-arguments
session: Session,
i_datasource: Model,
lookup_database: Callable[[Model], Optional[Model]],
lookup_datasource: Callable[[Model], Optional[Model]],
import_time: Optional[int] = None,
database_id: Optional[int] = None,
) -> int:
    """Imports the datasource from the object to the database.

    Metrics, columns and the datasource itself will be overridden if they
    already exist. This function can be used to import/export datasources
    between multiple Superset instances. Audit metadata isn't copied over.
    """
make_transient(i_datasource)
logger.info("Started import of the datasource: %s", i_datasource.to_json())
i_datasource.id = None
i_datasource.database_id = (
database_id
if database_id
else getattr(lookup_database(i_datasource), "id", None)
)
i_datasource.alter_params(import_time=import_time)
# override the datasource
datasource = lookup_datasource(i_datasource)
if datasource:
datasource.override(i_datasource)
session.flush()
else:
datasource = i_datasource.copy()
session.add(datasource)
session.flush()
for metric in i_datasource.metrics:
new_m = metric.copy()
new_m.table_id = datasource.id
logger.info(
"Importing metric %s from the datasource: %s",
new_m.to_json(),
i_datasource.full_name,
)
imported_m = import_metric(session, new_m)
if imported_m.metric_name not in [m.metric_name for m in datasource.metrics]:
datasource.metrics.append(imported_m)
for column in i_datasource.columns:
new_c = column.copy()
new_c.table_id = datasource.id
logger.info(
"Importing column %s from the datasource: %s",
new_c.to_json(),
i_datasource.full_name,
)
imported_c = import_column(session, new_c)
if imported_c.column_name not in [c.column_name for c in datasource.columns]:
datasource.columns.append(imported_c)
session.flush()
return datasource.id
def import_simple_obj(
session: Session, i_obj: Model, lookup_obj: Callable[[Session, Model], Model]
) -> Model:
make_transient(i_obj)
i_obj.id = None
i_obj.table = None
# find if the column was already imported
existing_column = lookup_obj(session, i_obj)
i_obj.table = None
if existing_column:
existing_column.override(i_obj)
session.flush()
return existing_column
session.add(i_obj)
session.flush()
return i_obj
def import_from_dict(
session: Session, data: Dict[str, Any], sync: Optional[List[str]] = None
) -> None:
"""Imports databases and druid clusters from dictionary"""
if not sync:
sync = []
if isinstance(data, dict):
logger.info("Importing %d %s", len(data.get(DATABASES_KEY, [])), DATABASES_KEY)
for database in data.get(DATABASES_KEY, []):
Database.import_from_dict(session, database, sync=sync)
logger.info(
"Importing %d %s", len(data.get(DRUID_CLUSTERS_KEY, [])), DRUID_CLUSTERS_KEY
)
for datasource in data.get(DRUID_CLUSTERS_KEY, []):
DruidCluster.import_from_dict(session, datasource, sync=sync)
session.commit()
else:
logger.info("Supplied object is not a dictionary.")
class ImportDatasetsCommand(BaseCommand):
"""
Import datasources in YAML format.
This is the original unversioned format used to export and import datasources
in Superset.
"""
# pylint: disable=unused-argument
def __init__(
self, contents: Dict[str, str], *args: Any, **kwargs: Any,
):
self.contents = contents
self._configs: Dict[str, Any] = {}
self.sync = []
if kwargs.get("sync_columns"):
self.sync.append("columns")
if kwargs.get("sync_metrics"):
self.sync.append("metrics")
def run(self) -> None:
self.validate()
# TODO (betodealmeida): add rollback in case of error
for file_name, config in self._configs.items():
logger.info("Importing dataset from file %s", file_name)
if isinstance(config, dict):
import_from_dict(db.session, config, sync=self.sync)
else: # list
for dataset in config:
# UI exports don't have the database metadata, so we assume
# the DB exists and has the same name
params = json.loads(dataset["params"])
database = (
db.session.query(Database)
.filter_by(database_name=params["database_name"])
.one()
)
dataset["database_id"] = database.id
SqlaTable.import_from_dict(db.session, dataset, sync=self.sync)
def validate(self) -> None:
# ensure all files are YAML
for file_name, content in self.contents.items():
try:
config = yaml.safe_load(content)
except yaml.parser.ParserError as ex:
logger.exception("Invalid YAML file")
raise IncorrectVersionError(
f"{file_name} is not a valid YAML file"
) from ex
# CLI export
if isinstance(config, dict):
# TODO (betodealmeida): validate with Marshmallow
if DATABASES_KEY not in config and DRUID_CLUSTERS_KEY not in config:
raise IncorrectVersionError(f"{file_name} has no valid keys")
# UI export
elif isinstance(config, list):
# TODO (betodealmeida): validate with Marshmallow
pass
else:
raise IncorrectVersionError(f"{file_name} is not a valid file")
            self._configs[file_name] = config


# Source: superset/datasets/commands/importers/v0.py
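`ImportDatasetsCommand.validate()` dispatches on whether the parsed file is a mapping (CLI export) or a list (UI export). A sketch of that dispatch, using `json` instead of `yaml` so it needs no third-party package (the function name and return values are illustrative):

```python
import json

def classify_export(file_name: str, content: str) -> str:
    # A mapping is treated as a CLI export, a list as a UI export,
    # anything else is rejected, mirroring validate() above.
    try:
        config = json.loads(content)
    except json.JSONDecodeError as ex:
        raise ValueError(f"{file_name} is not a valid file") from ex
    if isinstance(config, dict):
        return "cli"
    if isinstance(config, list):
        return "ui"
    raise ValueError(f"{file_name} is not a valid file")
```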
from __future__ import annotations
import time
from contextlib import contextmanager
from functools import wraps
from typing import Any, Callable, Dict, Iterator, TYPE_CHECKING, Union
from flask import current_app, Response
from superset import is_feature_enabled
from superset.dashboards.commands.exceptions import DashboardAccessDeniedError
from superset.utils import core as utils
from superset.utils.dates import now_as_float
if TYPE_CHECKING:
from superset.stats_logger import BaseStatsLogger
@contextmanager
def stats_timing(stats_key: str, stats_logger: BaseStatsLogger) -> Iterator[float]:
"""Provide a transactional scope around a series of operations."""
start_ts = now_as_float()
    try:
        yield start_ts
    finally:
        stats_logger.timing(stats_key, now_as_float() - start_ts)
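A minimal usage sketch of `stats_timing`; `RecordingStatsLogger` is a hypothetical stand-in for `BaseStatsLogger`, and `time.monotonic` replaces `now_as_float`:

```python
import time
from contextlib import contextmanager
from typing import Iterator, List, Tuple

class RecordingStatsLogger:
    # Illustrative stand-in for BaseStatsLogger: records timing calls.
    def __init__(self) -> None:
        self.timings: List[Tuple[str, float]] = []

    def timing(self, key: str, value: float) -> None:
        self.timings.append((key, value))

@contextmanager
def stats_timing(stats_key: str, stats_logger: RecordingStatsLogger) -> Iterator[float]:
    # Same shape as the context manager above: the timing is emitted in the
    # `finally` block, so it is recorded even if the body raises.
    start = time.monotonic()
    try:
        yield start
    finally:
        stats_logger.timing(stats_key, time.monotonic() - start)

logger = RecordingStatsLogger()
with stats_timing("sqllab.query.time", logger):
    time.sleep(0.01)
```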
def arghash(args: Any, kwargs: Any) -> int:
"""Simple argument hash with kwargs sorted."""
    sorted_args = tuple([*args, *sorted(kwargs.items())])
    return hash(sorted_args)
def debounce(duration: Union[float, int] = 0.1) -> Callable[..., Any]:
"""Ensure a function called with the same arguments executes only once
per `duration` (default: 100ms).
"""
def decorate(f: Callable[..., Any]) -> Callable[..., Any]:
last: Dict[str, Any] = {"t": None, "input": None, "output": None}
def wrapped(*args: Any, **kwargs: Any) -> Any:
now = time.time()
updated_hash = arghash(args, kwargs)
if (
last["t"] is None
or now - last["t"] >= duration
or last["input"] != updated_hash
):
result = f(*args, **kwargs)
last["t"] = time.time()
last["input"] = updated_hash
last["output"] = result
return result
return last["output"]
return wrapped
return decorate
def on_security_exception(self: Any, ex: Exception) -> Response:
return self.response(403, **{"message": utils.error_msg_from_exception(ex)})
# noinspection PyPackageRequirements
def check_dashboard_access(
on_error: Callable[..., Any] = on_security_exception
) -> Callable[..., Any]:
def decorator(f: Callable[..., Any]) -> Callable[..., Any]:
@wraps(f)
def wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
# pylint: disable=import-outside-toplevel
from superset.models.dashboard import Dashboard
dashboard = Dashboard.get(str(kwargs["dashboard_id_or_slug"]))
if is_feature_enabled("DASHBOARD_RBAC"):
                try:
                    current_app.appbuilder.sm.raise_for_dashboard_access(dashboard)
                except DashboardAccessDeniedError as ex:
                    return on_error(self, ex)
            return f(self, *args, dashboard=dashboard, **kwargs)
return wrapper
    return decorator


# Source: superset/utils/decorators.py
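A self-contained sketch of the `debounce` decorator defined above, re-implemented here so the example runs standalone (the hashing shortcut below differs slightly from `arghash`):

```python
import time
from typing import Any, Callable, Dict

def debounce(duration: float = 0.1) -> Callable[..., Any]:
    # Calls with the same arguments within `duration` seconds return the
    # cached result instead of re-running the function.
    def decorate(f: Callable[..., Any]) -> Callable[..., Any]:
        last: Dict[str, Any] = {"t": None, "input": None, "output": None}

        def wrapped(*args: Any, **kwargs: Any) -> Any:
            now = time.monotonic()
            key = hash((args, tuple(sorted(kwargs.items()))))
            if (
                last["t"] is None
                or now - last["t"] >= duration
                or last["input"] != key
            ):
                last["output"] = f(*args, **kwargs)
                last["t"] = time.monotonic()
                last["input"] = key
            return last["output"]

        return wrapped

    return decorate

calls = []

@debounce(0.2)
def double(x: int) -> int:
    calls.append(x)
    return x * 2
```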
import functools
from typing import Any, Callable, Dict, Optional, Tuple, Type
class _memoized:
"""Decorator that caches a function's return value each time it is called
If called later with the same arguments, the cached value is returned, and
not re-evaluated.
Define ``watch`` as a tuple of attribute names if this Decorator
should account for instance variable changes.
"""
def __init__(
self, func: Callable[..., Any], watch: Optional[Tuple[str, ...]] = None
) -> None:
self.func = func
self.cache: Dict[Any, Any] = {}
self.is_method = False
self.watch = watch or ()
def __call__(self, *args: Any, **kwargs: Any) -> Any:
key = [args, frozenset(kwargs.items())]
if self.is_method:
key.append(tuple(getattr(args[0], v, None) for v in self.watch))
key = tuple(key) # type: ignore
try:
if key in self.cache:
return self.cache[key]
except TypeError as ex:
# Uncachable -- for instance, passing a list as an argument.
raise TypeError("Function cannot be memoized") from ex
value = self.func(*args, **kwargs)
try:
self.cache[key] = value
except TypeError as ex:
raise TypeError("Function cannot be memoized") from ex
return value
def __repr__(self) -> str:
"""Return the function's docstring."""
return self.func.__doc__ or ""
def __get__(
self, obj: Any, objtype: Type[Any]
) -> functools.partial: # type: ignore
if not self.is_method:
self.is_method = True
# Support instance methods.
func = functools.partial(self.__call__, obj)
func.__func__ = self.func # type: ignore
return func
def memoized(
func: Optional[Callable[..., Any]] = None, watch: Optional[Tuple[str, ...]] = None
) -> Callable[..., Any]:
if func:
return _memoized(func)
def wrapper(f: Callable[..., Any]) -> Callable[..., Any]:
return _memoized(f, watch)
    return wrapper


# Source: superset/utils/memoized.py
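The `watch` tuple is the interesting part of `memoized`: the cache key includes the watched instance attributes, so mutating one invalidates the cached value. A simplified standalone sketch of that behaviour (`memoized_watch`, `Query` and the attribute names are all illustrative):

```python
import functools
from typing import Any, Callable, Dict, Tuple

def memoized_watch(watch: Tuple[str, ...] = ()) -> Callable[..., Any]:
    def decorate(func: Callable[..., Any]) -> Callable[..., Any]:
        cache: Dict[Any, Any] = {}

        @functools.wraps(func)
        def wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
            # The key mixes positional args, kwargs, and the current values
            # of the watched attributes, as _memoized.__call__ does above.
            key = (
                args,
                frozenset(kwargs.items()),
                tuple(getattr(self, attr, None) for attr in watch),
            )
            if key not in cache:
                cache[key] = func(self, *args, **kwargs)
            return cache[key]

        return wrapper

    return decorate

class Query:
    def __init__(self, schema: str) -> None:
        self.schema = schema
        self.compile_count = 0

    @memoized_watch(watch=("schema",))
    def compiled(self) -> str:
        self.compile_count += 1
        return f"SELECT * FROM {self.schema}.logs"

q = Query("main")
q.compiled()
q.compiled()          # cache hit: compile_count stays at 1
q.schema = "staging"  # watched attribute changed: next call recomputes
q.compiled()
```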
import logging
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional
from flask import Flask
from sqlalchemy import text, TypeDecorator
from sqlalchemy.engine import Connection, Dialect, RowProxy
from sqlalchemy_utils import EncryptedType
logger = logging.getLogger(__name__)
class AbstractEncryptedFieldAdapter(ABC): # pylint: disable=too-few-public-methods
@abstractmethod
def create(
self,
app_config: Optional[Dict[str, Any]],
*args: List[Any],
**kwargs: Optional[Dict[str, Any]],
) -> TypeDecorator:
pass
class SQLAlchemyUtilsAdapter( # pylint: disable=too-few-public-methods
AbstractEncryptedFieldAdapter
):
def create(
self,
app_config: Optional[Dict[str, Any]],
*args: List[Any],
**kwargs: Optional[Dict[str, Any]],
) -> TypeDecorator:
if app_config:
return EncryptedType(*args, app_config["SECRET_KEY"], **kwargs)
raise Exception("Missing app_config kwarg")
class EncryptedFieldFactory:
def __init__(self) -> None:
self._concrete_type_adapter: Optional[AbstractEncryptedFieldAdapter] = None
self._config: Optional[Dict[str, Any]] = None
def init_app(self, app: Flask) -> None:
self._config = app.config
self._concrete_type_adapter = self._config[
"SQLALCHEMY_ENCRYPTED_FIELD_TYPE_ADAPTER"
]()
def create(
self, *args: List[Any], **kwargs: Optional[Dict[str, Any]]
) -> TypeDecorator:
if self._concrete_type_adapter:
return self._concrete_type_adapter.create(self._config, *args, **kwargs)
raise Exception("App not initialized yet. Please call init_app first")
class SecretsMigrator:
def __init__(self, previous_secret_key: str) -> None:
from superset import db # pylint: disable=import-outside-toplevel
self._db = db
self._previous_secret_key = previous_secret_key
self._dialect: Dialect = db.engine.url.get_dialect()
def discover_encrypted_fields(self) -> Dict[str, Dict[str, EncryptedType]]:
        """
        Iterates over SQLAlchemy's metadata, looking for EncryptedType
        columns along the way.

        :return: A dict of table_name -> {col_name: EncryptedType instance}
        """
meta_info: Dict[str, Any] = {}
for table_name, table in self._db.metadata.tables.items():
for col_name, col in table.columns.items():
if isinstance(col.type, EncryptedType):
cols = meta_info.get(table_name, {})
cols[col_name] = col.type
meta_info[table_name] = cols
return meta_info
@staticmethod
def _read_bytes(col_name: str, value: Any) -> Optional[bytes]:
if value is None or isinstance(value, bytes):
return value
        # Note that the Postgres driver returns memoryview objects for BLOB types
if isinstance(value, memoryview):
return value.tobytes()
if isinstance(value, str):
return bytes(value.encode("utf8"))
# Just bail if we haven't seen this type before...
raise ValueError(f"DB column {col_name} has unknown type: {type(value)}")
@staticmethod
def _select_columns_from_table(
conn: Connection, column_names: List[str], table_name: str
) -> RowProxy:
return conn.execute(f"SELECT id, {','.join(column_names)} FROM {table_name}")
def _re_encrypt_row(
self,
conn: Connection,
row: RowProxy,
table_name: str,
columns: Dict[str, EncryptedType],
) -> None:
        """
        Re-encrypts all columns in a row.

        :param row: Current row to re-encrypt
        :param columns: Meta info from columns
        """
re_encrypted_columns = {}
for column_name, encrypted_type in columns.items():
previous_encrypted_type = EncryptedType(
type_in=encrypted_type.underlying_type, key=self._previous_secret_key
)
try:
unencrypted_value = previous_encrypted_type.process_result_value(
self._read_bytes(column_name, row[column_name]), self._dialect
)
except ValueError as exc:
# Failed to unencrypt
try:
encrypted_type.process_result_value(
self._read_bytes(column_name, row[column_name]), self._dialect
)
logger.info(
"Current secret is able to decrypt value on column [%s.%s],"
" nothing to do",
table_name,
column_name,
)
return
                except Exception:
                    raise Exception(
                        f"Unable to decrypt column [{table_name}.{column_name}]"
                        " with either the previous or the current secret key"
                    ) from exc
re_encrypted_columns[column_name] = encrypted_type.process_bind_param(
unencrypted_value, self._dialect,
)
set_cols = ",".join(
[f"{name} = :{name}" for name in list(re_encrypted_columns.keys())]
)
logger.info("Processing table: %s", table_name)
conn.execute(
text(f"UPDATE {table_name} SET {set_cols} WHERE id = :id"),
id=row["id"],
**re_encrypted_columns,
)
def run(self) -> None:
encrypted_meta_info = self.discover_encrypted_fields()
with self._db.engine.begin() as conn:
            logger.info("Collecting info for re-encryption")
for table_name, columns in encrypted_meta_info.items():
column_names = list(columns.keys())
rows = self._select_columns_from_table(conn, column_names, table_name)
for row in rows:
self._re_encrypt_row(conn, row, table_name, columns)
            logger.info("All tables processed")


# Source: superset/utils/encrypt.py
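`SecretsMigrator` boils down to: decrypt each value with the previous key, write it back encrypted with the current one. A toy sketch of that loop; the XOR "cipher" is an insecure stand-in for `sqlalchemy_utils.EncryptedType`, purely for illustration:

```python
from typing import Dict

def toy_encrypt(value: str, key: str) -> bytes:
    # NOT secure; exists only to show the shape of the migration.
    return bytes(
        b ^ ord(key[i % len(key)]) for i, b in enumerate(value.encode("utf8"))
    )

def toy_decrypt(blob: bytes, key: str) -> str:
    return bytes(
        b ^ ord(key[i % len(key)]) for i, b in enumerate(blob)
    ).decode("utf8")

# One "table" of encrypted rows keyed by primary key.
rows: Dict[int, bytes] = {1: toy_encrypt("db-password", "old-secret")}

# The migration loop: decrypt with the previous key, re-encrypt with the
# current one, as SecretsMigrator._re_encrypt_row does per column.
for pk, blob in rows.items():
    plaintext = toy_decrypt(blob, "old-secret")
    rows[pk] = toy_encrypt(plaintext, "new-secret")
```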
from typing import Any, Dict, Union
from croniter import croniter
from flask_babel import gettext as _
from marshmallow import fields, Schema, validate, validates_schema
from marshmallow.validate import Length, Range, ValidationError
from marshmallow_enum import EnumField
from pytz import all_timezones
from superset.models.reports import (
ReportCreationMethodType,
ReportDataFormat,
ReportRecipientType,
ReportScheduleType,
ReportScheduleValidatorType,
)
openapi_spec_methods_override = {
"get": {"get": {"description": "Get a report schedule"}},
"get_list": {
"get": {
"description": "Get a list of report schedules, use Rison or JSON "
"query parameters for filtering, sorting,"
" pagination and for selecting specific"
" columns and metadata.",
}
},
"post": {"post": {"description": "Create a report schedule"}},
"put": {"put": {"description": "Update a report schedule"}},
"delete": {"delete": {"description": "Delete a report schedule"}},
}
get_delete_ids_schema = {"type": "array", "items": {"type": "integer"}}
type_description = "The report schedule type"
name_description = "The report schedule name."
# :)
description_description = "Use a nice description to give context to this Alert/Report"
context_markdown_description = "Markdown description"
crontab_description = (
    "A CRON expression. "
"[Crontab Guru](https://crontab.guru/) is "
"a helpful resource that can help you craft a CRON expression."
)
timezone_description = "A timezone string identifying the timezone the schedule runs in."
sql_description = (
"A SQL statement that defines whether the alert should get triggered or "
"not. The query is expected to return either NULL or a number value."
)
owners_description = (
    "Owners are user ids allowed to delete or change this report. "
"If left empty you will be one of the owners of the report."
)
validator_type_description = (
"Determines when to trigger alert based off value from alert query. "
"Alerts will be triggered with these validator types:\n"
"- Not Null - When the return value is Not NULL, Empty, or 0\n"
"- Operator - When `sql_return_value comparison_operator threshold`"
" is True e.g. `50 <= 75`<br>Supports the comparison operators <, <=, "
">, >=, ==, and !="
)
validator_config_json_op_description = (
"The operation to compare with a threshold to apply to the SQL output\n"
)
log_retention_description = "How long to keep the logs around for this report (in days)"
grace_period_description = (
"Once an alert is triggered, how long, in seconds, before "
    "Superset nags you again."
)
working_timeout_description = (
    "If an alert is stalled in a working state, how long until its state is "
    "reset to error"
)
creation_method_description = (
"Creation method is used to inform the frontend whether the report/alert was "
"created in the dashboard, chart, or alerts and reports UI."
)
def validate_crontab(value: Union[bytes, bytearray, str]) -> None:
if not croniter.is_valid(str(value)):
raise ValidationError("Cron expression is not valid")
class ValidatorConfigJSONSchema(Schema):
op = fields.String( # pylint: disable=invalid-name
description=validator_config_json_op_description,
validate=validate.OneOf(choices=["<", "<=", ">", ">=", "==", "!="]),
)
threshold = fields.Float()
class ReportRecipientConfigJSONSchema(Schema):
# TODO if email check validity
target = fields.String()
class ReportRecipientSchema(Schema):
type = fields.String(
description="The recipient type, check spec for valid options",
allow_none=False,
required=True,
validate=validate.OneOf(
choices=tuple(key.value for key in ReportRecipientType)
),
)
recipient_config_json = fields.Nested(ReportRecipientConfigJSONSchema)
class ReportSchedulePostSchema(Schema):
type = fields.String(
description=type_description,
allow_none=False,
required=True,
validate=validate.OneOf(choices=tuple(key.value for key in ReportScheduleType)),
)
name = fields.String(
description=name_description,
allow_none=False,
required=True,
validate=[Length(1, 150)],
example="Daily dashboard email",
)
description = fields.String(
description=description_description,
allow_none=True,
required=False,
example="Daily sales dashboard to marketing",
)
context_markdown = fields.String(
description=context_markdown_description, allow_none=True, required=False
)
active = fields.Boolean()
crontab = fields.String(
description=crontab_description,
validate=[validate_crontab, Length(1, 1000)],
example="*/5 * * * *",
allow_none=False,
required=True,
)
timezone = fields.String(
description=timezone_description,
default="UTC",
validate=validate.OneOf(choices=tuple(all_timezones)),
)
sql = fields.String(
description=sql_description, example="SELECT value FROM time_series_table"
)
chart = fields.Integer(required=False, allow_none=True)
creation_method = EnumField(
ReportCreationMethodType,
by_value=True,
required=False,
description=creation_method_description,
)
dashboard = fields.Integer(required=False, allow_none=True)
selected_tabs = fields.List(fields.Integer(), required=False, allow_none=True)
database = fields.Integer(required=False)
owners = fields.List(fields.Integer(description=owners_description))
validator_type = fields.String(
description=validator_type_description,
validate=validate.OneOf(
choices=tuple(key.value for key in ReportScheduleValidatorType)
),
)
validator_config_json = fields.Nested(ValidatorConfigJSONSchema)
log_retention = fields.Integer(
description=log_retention_description,
example=90,
validate=[Range(min=1, error=_("Value must be greater than 0"))],
)
grace_period = fields.Integer(
description=grace_period_description,
example=60 * 60 * 4,
default=60 * 60 * 4,
validate=[Range(min=1, error=_("Value must be greater than 0"))],
)
working_timeout = fields.Integer(
description=working_timeout_description,
example=60 * 60 * 1,
default=60 * 60 * 1,
validate=[Range(min=1, error=_("Value must be greater than 0"))],
)
recipients = fields.List(fields.Nested(ReportRecipientSchema))
report_format = fields.String(
default=ReportDataFormat.VISUALIZATION,
validate=validate.OneOf(choices=tuple(key.value for key in ReportDataFormat)),
)
extra = fields.Dict(default=None,)
force_screenshot = fields.Boolean(default=False)
@validates_schema
def validate_report_references( # pylint: disable=unused-argument,no-self-use
self, data: Dict[str, Any], **kwargs: Any
) -> None:
if data["type"] == ReportScheduleType.REPORT:
if "database" in data:
raise ValidationError(
{"database": ["Database reference is not allowed on a report"]}
)
class ReportSchedulePutSchema(Schema):
type = fields.String(
description=type_description,
required=False,
validate=validate.OneOf(choices=tuple(key.value for key in ReportScheduleType)),
)
name = fields.String(
description=name_description, required=False, validate=[Length(1, 150)]
)
description = fields.String(
description=description_description,
allow_none=True,
required=False,
example="Daily sales dashboard to marketing",
)
context_markdown = fields.String(
description=context_markdown_description, allow_none=True, required=False
)
active = fields.Boolean(required=False)
crontab = fields.String(
description=crontab_description,
validate=[validate_crontab, Length(1, 1000)],
required=False,
)
timezone = fields.String(
description=timezone_description,
default="UTC",
validate=validate.OneOf(choices=tuple(all_timezones)),
)
sql = fields.String(
description=sql_description,
example="SELECT value FROM time_series_table",
required=False,
allow_none=True,
)
chart = fields.Integer(required=False, allow_none=True)
creation_method = EnumField(
ReportCreationMethodType,
by_value=True,
allow_none=True,
description=creation_method_description,
)
dashboard = fields.Integer(required=False, allow_none=True)
database = fields.Integer(required=False)
owners = fields.List(fields.Integer(description=owners_description), required=False)
validator_type = fields.String(
description=validator_type_description,
validate=validate.OneOf(
choices=tuple(key.value for key in ReportScheduleValidatorType)
),
allow_none=True,
required=False,
)
validator_config_json = fields.Nested(ValidatorConfigJSONSchema, required=False)
log_retention = fields.Integer(
description=log_retention_description,
example=90,
required=False,
validate=[Range(min=1, error=_("Value must be greater than 0"))],
)
grace_period = fields.Integer(
description=grace_period_description,
example=60 * 60 * 4,
required=False,
validate=[Range(min=1, error=_("Value must be greater than 0"))],
)
working_timeout = fields.Integer(
description=working_timeout_description,
example=60 * 60 * 1,
allow_none=True,
required=False,
validate=[Range(min=1, error=_("Value must be greater than 0"))],
)
recipients = fields.List(fields.Nested(ReportRecipientSchema), required=False)
report_format = fields.String(
default=ReportDataFormat.VISUALIZATION,
validate=validate.OneOf(choices=tuple(key.value for key in ReportDataFormat)),
)
    force_screenshot = fields.Boolean(default=False)


# Source: superset/reports/schemas.py
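`validate_crontab` above delegates real validation to `croniter.is_valid`, which is not in the standard library. The hedged sketch below only checks the five/six-field crontab shape and is far more permissive than the real validator (it rejects named fields like `MON` that croniter accepts):

```python
def looks_like_crontab(value: str) -> bool:
    # Structural check only: five or six whitespace-separated fields, each
    # non-empty and drawn from a basic crontab character set.
    allowed = set("0123456789*/,-")
    fields = value.split()
    return len(fields) in (5, 6) and all(
        field and set(field) <= allowed for field in fields
    )
```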
import json
import logging
from io import IOBase
from typing import Sequence, Union
import backoff
from flask_babel import gettext as __
from slack import WebClient
from slack.errors import SlackApiError, SlackClientError
from superset import app
from superset.models.reports import ReportRecipientType
from superset.reports.notifications.base import BaseNotification
from superset.reports.notifications.exceptions import NotificationError
logger = logging.getLogger(__name__)
# Slack only allows Markdown messages up to 4k chars
MAXIMUM_MESSAGE_SIZE = 4000
class SlackNotification(BaseNotification): # pylint: disable=too-few-public-methods
"""
Sends a slack notification for a report recipient
"""
type = ReportRecipientType.SLACK
def _get_channel(self) -> str:
return json.loads(self._recipient.recipient_config_json)["target"]
def _message_template(self, table: str = "") -> str:
return __(
"""*%(name)s*
%(description)s
<%(url)s|Explore in Superset>
%(table)s
""",
name=self._content.name,
description=self._content.description or "",
url=self._content.url,
table=table,
)
@staticmethod
def _error_template(name: str, description: str, text: str) -> str:
return __(
"""*%(name)s*
%(description)s
Error: %(text)s
""",
name=name,
description=description,
text=text,
)
def _get_body(self) -> str:
if self._content.text:
return self._error_template(
self._content.name, self._content.description or "", self._content.text
)
if self._content.embedded_data is None:
return self._message_template()
# Embed data in the message
df = self._content.embedded_data
# Flatten columns/index so they show up nicely in the table
df.columns = [
" ".join(str(name) for name in column).strip()
if isinstance(column, tuple)
else column
for column in df.columns
]
df.index = [
" ".join(str(name) for name in index).strip()
if isinstance(index, tuple)
else index
for index in df.index
]
# Slack Markdown only works on messages shorter than 4k chars, so we might
# need to truncate the data
for i in range(len(df) - 1):
truncated_df = df[: i + 1].fillna("")
truncated_df = truncated_df.append(
{k: "..." for k in df.columns}, ignore_index=True
)
tabulated = truncated_df.to_markdown()
table = f"```\n{tabulated}\n```\n\n(table was truncated)"
message = self._message_template(table)
if len(message) > MAXIMUM_MESSAGE_SIZE:
# Decrement i and build a message that is under the limit
truncated_df = df[:i].fillna("")
truncated_df = truncated_df.append(
{k: "..." for k in df.columns}, ignore_index=True
)
tabulated = truncated_df.to_markdown()
table = (
f"```\n{tabulated}\n```\n\n(table was truncated)"
if len(truncated_df) > 0
else ""
)
break
# Send full data
else:
tabulated = df.to_markdown()
table = f"```\n{tabulated}\n```"
return self._message_template(table)
def _get_inline_files(self) -> Sequence[Union[str, IOBase, bytes]]:
if self._content.csv:
return [self._content.csv]
if self._content.screenshots:
return self._content.screenshots
return []
@backoff.on_exception(backoff.expo, SlackApiError, factor=10, base=2, max_tries=5)
def send(self) -> None:
files = self._get_inline_files()
title = self._content.name
channel = self._get_channel()
body = self._get_body()
file_type = "csv" if self._content.csv else "png"
try:
token = app.config["SLACK_API_TOKEN"]
if callable(token):
token = token()
client = WebClient(token=token, proxy=app.config["SLACK_PROXY"])
# files_upload returns SlackResponse as we run it in sync mode.
if files:
for file in files:
client.files_upload(
channels=channel,
file=file,
initial_comment=body,
title=title,
filetype=file_type,
)
else:
client.chat_postMessage(channel=channel, text=body)
logger.info("Report sent to slack")
except SlackClientError as ex:
raise NotificationError(ex) from ex | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/reports/notifications/slack.py | 0.699152 | 0.244944 | slack.py | pypi |
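The truncation loop in `_get_body` grows the embedded table one row at a time until the rendered message would exceed `MAXIMUM_MESSAGE_SIZE`, then backs off one row and appends a truncation marker. A simplified standalone sketch of that search, using plain strings instead of a DataFrame (the function name and shape are illustrative, not Superset's API):

```python
MAXIMUM_MESSAGE_SIZE = 4000  # Slack's Markdown limit, as in the module above

def build_message(header: str, rows: list, limit: int = MAXIMUM_MESSAGE_SIZE) -> str:
    """Render header + as many rows as fit, appending '...' when truncated."""
    full = header + "\n".join(rows)
    if len(full) <= limit:
        return full  # everything fits; no truncation marker needed
    # Back off until the truncated table (plus marker) fits within the limit
    for keep in range(len(rows), -1, -1):
        candidate = (
            header + "\n".join(rows[:keep] + ["..."]) + "\n(table was truncated)"
        )
        if len(candidate) <= limit:
            return candidate
    return header  # degenerate case: nothing but the header fits

msg = build_message("*report*\n", ["row-%04d" % i for i in range(1000)], limit=200)
print(len(msg) <= 200)  # True
```

The linear back-off mirrors the loop above; a binary search would be faster for large tables but the messages are capped at 4k characters anyway.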
import json
import logging
from operator import eq, ge, gt, le, lt, ne
from timeit import default_timer
from typing import Optional
import numpy as np
import pandas as pd
from celery.exceptions import SoftTimeLimitExceeded
from flask_babel import lazy_gettext as _
from superset import app, jinja_context
from superset.commands.base import BaseCommand
from superset.models.reports import ReportSchedule, ReportScheduleValidatorType
from superset.reports.commands.exceptions import (
AlertQueryError,
AlertQueryInvalidTypeError,
AlertQueryMultipleColumnsError,
AlertQueryMultipleRowsError,
AlertQueryTimeout,
AlertValidatorConfigError,
)
logger = logging.getLogger(__name__)
ALERT_SQL_LIMIT = 2
# All SQL statements have a LIMIT applied
# to avoid heavy loads caused by user mistakes
OPERATOR_FUNCTIONS = {">=": ge, ">": gt, "<=": le, "<": lt, "==": eq, "!=": ne}
class AlertCommand(BaseCommand):
def __init__(self, report_schedule: ReportSchedule):
self._report_schedule = report_schedule
self._result: Optional[float] = None
def run(self) -> bool:
"""
Executes an alert SQL query and validates it.
Sets report_schedule.last_value or last_value_row_json
to the query result.
:return: bool, if the alert triggered or not
:raises AlertQueryError: SQL query is not valid
:raises AlertQueryInvalidTypeError: The output from the SQL query
is not an allowed type
:raises AlertQueryMultipleColumnsError: The SQL query returned multiple columns
:raises AlertQueryMultipleRowsError: The SQL query returned multiple rows
:raises AlertQueryTimeout: The SQL query received a celery soft timeout
:raises AlertValidatorConfigError: The validator query data is not valid
"""
self.validate()
if self._is_validator_not_null:
self._report_schedule.last_value_row_json = str(self._result)
return self._result not in (0, None, np.nan)
self._report_schedule.last_value = self._result
try:
operator = json.loads(self._report_schedule.validator_config_json)["op"]
threshold = json.loads(self._report_schedule.validator_config_json)[
"threshold"
]
return OPERATOR_FUNCTIONS[operator](self._result, threshold)
except (KeyError, json.JSONDecodeError) as ex:
raise AlertValidatorConfigError() from ex
def _validate_not_null(self, rows: np.recarray) -> None:
self._validate_result(rows)
self._result = rows[0][1]
@staticmethod
def _validate_result(rows: np.recarray) -> None:
# check if the query returned more than one row
if len(rows) > 1:
raise AlertQueryMultipleRowsError(
message=_(
"Alert query returned more than one row. %s rows returned"
% len(rows),
)
)
# check if query returned more then one column
if len(rows[0]) > 2:
raise AlertQueryMultipleColumnsError(
# subtract 1 from len to discard the pandas index column
_(
"Alert query returned more than one column. %s columns returned"
% (len(rows[0]) - 1)
)
)
def _validate_operator(self, rows: np.recarray) -> None:
self._validate_result(rows)
if rows[0][1] in (0, None, np.nan):
self._result = 0.0
return
try:
# Check if it's float or if we can convert it
self._result = float(rows[0][1])
return
except (AssertionError, TypeError, ValueError) as ex:
raise AlertQueryInvalidTypeError() from ex
@property
def _is_validator_not_null(self) -> bool:
return (
self._report_schedule.validator_type == ReportScheduleValidatorType.NOT_NULL
)
@property
def _is_validator_operator(self) -> bool:
return (
self._report_schedule.validator_type == ReportScheduleValidatorType.OPERATOR
)
def _execute_query(self) -> pd.DataFrame:
"""
Executes the actual alert SQL query template
:return: A pandas dataframe
:raises AlertQueryError: SQL query is not valid
:raises AlertQueryTimeout: The SQL query received a celery soft timeout
"""
sql_template = jinja_context.get_template_processor(
database=self._report_schedule.database
)
rendered_sql = sql_template.process_template(self._report_schedule.sql)
try:
limited_rendered_sql = self._report_schedule.database.apply_limit_to_sql(
rendered_sql, ALERT_SQL_LIMIT
)
query_username = app.config["THUMBNAIL_SELENIUM_USER"]
start = default_timer()
df = self._report_schedule.database.get_df(
sql=limited_rendered_sql, username=query_username
)
stop = default_timer()
logger.info(
"Query for %s took %.2f ms",
self._report_schedule.name,
(stop - start) * 1000.0,
)
return df
except SoftTimeLimitExceeded as ex:
logger.warning("A timeout occurred while executing the alert query: %s", ex)
raise AlertQueryTimeout() from ex
except Exception as ex:
raise AlertQueryError(message=str(ex)) from ex
def validate(self) -> None:
"""
Validate the query result as a Pandas DataFrame
"""
df = self._execute_query()
if df.empty and self._is_validator_not_null:
self._result = None
return
if df.empty and self._is_validator_operator:
self._result = 0.0
return
rows = df.to_records()
if self._is_validator_not_null:
self._validate_not_null(rows)
return
self._validate_operator(rows) | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/reports/commands/alert.py | 0.809916 | 0.172904 | alert.py | pypi |
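The `OPERATOR_FUNCTIONS` map plus the `op`/`threshold` JSON config amounts to a tiny comparison interpreter. A minimal sketch of the same pattern, with a hypothetical config payload (here a plain `ValueError` stands in for `AlertValidatorConfigError`):

```python
import json
from operator import eq, ge, gt, le, lt, ne

OPERATOR_FUNCTIONS = {">=": ge, ">": gt, "<=": le, "<": lt, "==": eq, "!=": ne}

def alert_triggered(result: float, validator_config_json: str) -> bool:
    """Apply the configured comparison operator to the query result.

    Raises ValueError on malformed config, mirroring the
    AlertValidatorConfigError path in the command above.
    """
    try:
        config = json.loads(validator_config_json)
        operator = config["op"]          # e.g. ">="
        threshold = config["threshold"]  # e.g. 10
        return OPERATOR_FUNCTIONS[operator](result, threshold)
    except (KeyError, json.JSONDecodeError) as ex:
        raise ValueError("invalid validator config") from ex

print(alert_triggered(42.0, '{"op": ">=", "threshold": 10}'))  # True
print(alert_triggered(5.0, '{"op": "==", "threshold": 10}'))   # False
```

Dispatching through a dict of `operator` functions keeps the config declarative and avoids `eval` on user input.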
import inspect
from flask import Markup
from flask_babel import lazy_gettext as _
from sqlalchemy import MetaData
from sqlalchemy.engine.url import make_url
from superset import app, security_manager
from superset.databases.filters import DatabaseFilter
from superset.exceptions import SupersetException
from superset.models.core import Database
from superset.security.analytics_db_safety import check_sqlalchemy_uri
from superset.utils import core as utils
class DatabaseMixin:
list_title = _("Databases")
show_title = _("Show Database")
add_title = _("Add Database")
edit_title = _("Edit Database")
list_columns = [
"database_name",
"backend",
"expose_in_sqllab",
"allow_run_async",
"creator",
"modified",
]
order_columns = [
"database_name",
"allow_run_async",
"allow_dml",
"modified",
"allow_file_upload",
"expose_in_sqllab",
]
add_columns = [
"database_name",
"sqlalchemy_uri",
"cache_timeout",
"expose_in_sqllab",
"allow_run_async",
"allow_file_upload",
"allow_ctas",
"allow_cvas",
"allow_dml",
"force_ctas_schema",
"impersonate_user",
"allow_multi_schema_metadata_fetch",
"extra",
"encrypted_extra",
"server_cert",
]
search_exclude_columns = (
"password",
"tables",
"created_by",
"changed_by",
"queries",
"saved_queries",
"encrypted_extra",
"server_cert",
)
edit_columns = add_columns
show_columns = [
"tables",
"cache_timeout",
"extra",
"database_name",
"sqlalchemy_uri",
"perm",
"created_by",
"created_on",
"changed_by",
"changed_on",
]
base_order = ("changed_on", "desc")
description_columns = {
"sqlalchemy_uri": utils.markdown(
"Refer to the "
"[SqlAlchemy docs]"
"(https://docs.sqlalchemy.org/en/rel_1_2/core/engines.html#"
"database-urls) "
"for more information on how to structure your URI.",
True,
),
"expose_in_sqllab": _("Expose this DB in SQL Lab"),
"allow_run_async": _(
"Operate the database in asynchronous mode, meaning "
"that the queries are executed on remote workers as opposed "
"to on the web server itself. "
"This assumes that you have a Celery worker setup as well "
"as a results backend. Refer to the installation docs "
"for more information."
),
"allow_ctas": _("Allow CREATE TABLE AS option in SQL Lab"),
"allow_cvas": _("Allow CREATE VIEW AS option in SQL Lab"),
"allow_dml": _(
"Allow users to run non-SELECT statements "
"(UPDATE, DELETE, CREATE, ...) "
"in SQL Lab"
),
"force_ctas_schema": _(
"When allowing CREATE TABLE AS option in SQL Lab, "
"this option forces the table to be created in this schema"
),
"extra": utils.markdown(
"JSON string containing extra configuration elements.<br/>"
"1. The ``engine_params`` object gets unpacked into the "
"[sqlalchemy.create_engine]"
"(https://docs.sqlalchemy.org/en/latest/core/engines.html#"
"sqlalchemy.create_engine) call, while the ``metadata_params`` "
"gets unpacked into the [sqlalchemy.MetaData]"
"(https://docs.sqlalchemy.org/en/rel_1_0/core/metadata.html"
"#sqlalchemy.schema.MetaData) call.<br/>"
"2. The ``metadata_cache_timeout`` is a cache timeout setting "
"in seconds for metadata fetch of this database. Specify it as "
'**"metadata_cache_timeout": {"schema_cache_timeout": 600, '
'"table_cache_timeout": 600}**. '
"If unset, cache will not be enabled for the functionality. "
"A timeout of 0 indicates that the cache never expires.<br/>"
"3. The ``schemas_allowed_for_file_upload`` is a comma separated list "
"of schemas that CSVs are allowed to upload to. "
'Specify it as **"schemas_allowed_for_file_upload": '
'["public", "csv_upload"]**. '
"If database flavor does not support schema or any schema is allowed "
"to be accessed, just leave the list empty<br/>"
"4. the ``version`` field is a string specifying the this db's version. "
"This should be used with Presto DBs so that the syntax is correct<br/>"
"5. The ``allows_virtual_table_explore`` field is a boolean specifying "
"whether or not the Explore button in SQL Lab results is shown.",
True,
),
"encrypted_extra": utils.markdown(
"JSON string containing additional connection configuration.<br/>"
"This is used to provide connection information for systems like "
"Hive, Presto, and BigQuery, which do not conform to the username:password "
"syntax normally used by SQLAlchemy.",
True,
),
"server_cert": utils.markdown(
"Optional CA_BUNDLE contents to validate HTTPS requests. Only available "
"on certain database engines.",
True,
),
"impersonate_user": _(
"If Presto, all the queries in SQL Lab are going to be executed as the "
"currently logged on user who must have permission to run them.<br/>"
"If Hive and hive.server2.enable.doAs is enabled, will run the queries as "
"service account, but impersonate the currently logged on user "
"via hive.server2.proxy.user property."
),
"allow_multi_schema_metadata_fetch": _(
"Allow SQL Lab to fetch a list of all tables and all views across "
"all database schemas. For large data warehouse with thousands of "
"tables, this can be expensive and put strain on the system."
),
"cache_timeout": _(
"Duration (in seconds) of the caching timeout for charts of this database. "
"A timeout of 0 indicates that the cache never expires. "
"Note this defaults to the global timeout if undefined."
),
"allow_file_upload": _(
"If selected, please set the schemas allowed for csv upload in Extra."
),
}
base_filters = [["id", DatabaseFilter, lambda: []]]
label_columns = {
"expose_in_sqllab": _("Expose in SQL Lab"),
"allow_ctas": _("Allow CREATE TABLE AS"),
"allow_cvas": _("Allow CREATE VIEW AS"),
"allow_dml": _("Allow DML"),
"force_ctas_schema": _("CTAS Schema"),
"database_name": _("Database"),
"creator": _("Creator"),
"changed_on_": _("Last Changed"),
"sqlalchemy_uri": _("SQLAlchemy URI"),
"cache_timeout": _("Chart Cache Timeout"),
"extra": _("Extra"),
"encrypted_extra": _("Secure Extra"),
"server_cert": _("Root certificate"),
"allow_run_async": _("Async Execution"),
"impersonate_user": _("Impersonate the logged on user"),
"allow_file_upload": _("Allow Csv Upload"),
"modified": _("Modified"),
"allow_multi_schema_metadata_fetch": _("Allow Multi Schema Metadata Fetch"),
"backend": _("Backend"),
}
def _pre_add_update(self, database: Database) -> None:
if app.config["PREVENT_UNSAFE_DB_CONNECTIONS"]:
check_sqlalchemy_uri(make_url(database.sqlalchemy_uri))
self.check_extra(database)
self.check_encrypted_extra(database)
if database.server_cert:
utils.parse_ssl_cert(database.server_cert)
database.set_sqlalchemy_uri(database.sqlalchemy_uri)
security_manager.add_permission_view_menu("database_access", database.perm)
# when adding a new database, always force a refresh of the schema list
for schema in database.get_all_schema_names():
security_manager.add_permission_view_menu(
"schema_access", security_manager.get_schema_perm(database, schema)
)
def pre_add(self, database: Database) -> None:
self._pre_add_update(database)
def pre_update(self, database: Database) -> None:
self._pre_add_update(database)
def pre_delete(self, database: Database) -> None: # pylint: disable=no-self-use
if database.tables:
raise SupersetException(
Markup(
"Cannot delete a database that has tables attached. "
"Here's the list of associated tables: "
+ ", ".join("{}".format(table) for table in database.tables)
)
)
def check_extra(self, database: Database) -> None: # pylint: disable=no-self-use
# this will check whether json.loads(extra) can succeed
try:
extra = database.get_extra()
except Exception as ex:
raise Exception(
_("Extra field cannot be decoded by JSON. %(msg)s", msg=str(ex))
) from ex
# this will check whether 'metadata_params' is configured correctly
metadata_signature = inspect.signature(MetaData)
for key in extra.get("metadata_params", {}):
if key not in metadata_signature.parameters:
raise Exception(
_(
"The metadata_params in Extra field "
"is not configured correctly. The key "
"%{key}s is invalid.",
key=key,
)
)
def check_encrypted_extra( # pylint: disable=no-self-use
self, database: Database
) -> None:
# this will check whether json.loads(encrypted_extra) can succeed
try:
database.get_encrypted_extra()
except Exception as ex:
raise Exception(
_("Extra field cannot be decoded by JSON. %(msg)s", msg=str(ex))
) from ex | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/views/database/mixins.py | 0.596081 | 0.185836 | mixins.py | pypi |
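`check_extra` validates user-supplied `metadata_params` keys against the keyword parameters that `sqlalchemy.MetaData` actually accepts, via `inspect.signature`. The same technique works against any callable; in this sketch a plain function stands in for the `MetaData` constructor:

```python
import inspect

def metadata_stub(bind=None, schema=None, naming_convention=None):
    """Illustrative stand-in for sqlalchemy.MetaData's constructor."""

def check_metadata_params(params: dict) -> None:
    """Raise ValueError for any key the target callable does not accept."""
    signature = inspect.signature(metadata_stub)
    for key in params:
        if key not in signature.parameters:
            raise ValueError(
                "The metadata_params in Extra field is not configured "
                f"correctly. The key {key} is invalid."
            )

check_metadata_params({"schema": "public"})  # ok: 'schema' is a real parameter
try:
    check_metadata_params({"shcema": "public"})  # typo'd key is rejected
except ValueError as ex:
    print(ex)
```

Checking against the live signature means the validation stays correct even when the target library adds or removes constructor parameters.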
"""Contains the logic to create cohesive forms on the explore view"""
from typing import List
from flask_appbuilder.fieldwidgets import BS3TextFieldWidget
from flask_appbuilder.forms import DynamicForm
from flask_babel import lazy_gettext as _
from flask_wtf.file import FileAllowed, FileField, FileRequired
from wtforms import (
BooleanField,
IntegerField,
MultipleFileField,
SelectField,
StringField,
)
from wtforms.ext.sqlalchemy.fields import QuerySelectField
from wtforms.validators import DataRequired, Length, NumberRange, Optional
from superset import app, db, security_manager
from superset.forms import (
CommaSeparatedListField,
filter_not_empty_values,
JsonListField,
)
from superset.models.core import Database
config = app.config
class UploadToDatabaseForm(DynamicForm):
# pylint: disable=E0211
def file_allowed_dbs() -> List[Database]: # type: ignore
file_enabled_dbs = (
db.session.query(Database).filter_by(allow_file_upload=True).all()
)
return [
file_enabled_db
for file_enabled_db in file_enabled_dbs
if UploadToDatabaseForm.at_least_one_schema_is_allowed(file_enabled_db)
]
@staticmethod
def at_least_one_schema_is_allowed(database: Database) -> bool:
"""
If the user has access to the database or all datasources
1. if schemas_allowed_for_file_upload is empty
a) if database does not support schema
user is able to upload csv without specifying schema name
b) if database supports schema
user is able to upload csv to any schema
2. if schemas_allowed_for_file_upload is not empty
a) if database does not support schema
This situation is impossible and upload will fail
b) if database supports schema
user is able to upload to schema in schemas_allowed_for_file_upload
If the user does not have access to the database or all datasources
1. if schemas_allowed_for_file_upload is empty
a) if database does not support schema
user is unable to upload csv
b) if database supports schema
user is unable to upload csv
2. if schemas_allowed_for_file_upload is not empty
a) if database does not support schema
This situation is impossible and user is unable to upload csv
b) if database supports schema
user is able to upload to schema in schemas_allowed_for_file_upload
"""
if security_manager.can_access_database(database):
return True
schemas = database.get_schema_access_for_file_upload()
if schemas and security_manager.get_schemas_accessible_by_user(
database, schemas, False
):
return True
return False
class CsvToDatabaseForm(UploadToDatabaseForm):
name = StringField(
_("Table Name"),
description=_("Name of table to be created from csv data."),
validators=[DataRequired()],
widget=BS3TextFieldWidget(),
)
csv_file = FileField(
_("CSV File"),
description=_("Select a CSV file to be uploaded to a database."),
validators=[
FileRequired(),
FileAllowed(
config["ALLOWED_EXTENSIONS"].intersection(config["CSV_EXTENSIONS"]),
_(
"Only the following file extensions are allowed: "
"%(allowed_extensions)s",
allowed_extensions=", ".join(
config["ALLOWED_EXTENSIONS"].intersection(
config["CSV_EXTENSIONS"]
)
),
),
),
],
)
con = QuerySelectField(
_("Database"),
query_factory=UploadToDatabaseForm.file_allowed_dbs,
get_pk=lambda a: a.id,
get_label=lambda a: a.database_name,
)
schema = StringField(
_("Schema"),
description=_("Specify a schema (if database flavor supports this)."),
validators=[Optional()],
widget=BS3TextFieldWidget(),
)
sep = StringField(
_("Delimiter"),
description=_("Delimiter used by CSV file (for whitespace use \\s+)."),
validators=[DataRequired()],
widget=BS3TextFieldWidget(),
)
if_exists = SelectField(
_("Table Exists"),
description=_(
"If table exists do one of the following: "
"Fail (do nothing), Replace (drop and recreate table) "
"or Append (insert data)."
),
choices=[
("fail", _("Fail")),
("replace", _("Replace")),
("append", _("Append")),
],
validators=[DataRequired()],
)
header = IntegerField(
_("Header Row"),
description=_(
"Row containing the headers to use as "
"column names (0 is first line of data). "
"Leave empty if there is no header row."
),
validators=[Optional(), NumberRange(min=0)],
widget=BS3TextFieldWidget(),
)
index_col = IntegerField(
_("Index Column"),
description=_(
"Column to use as the row labels of the "
"dataframe. Leave empty if no index column."
),
validators=[Optional(), NumberRange(min=0)],
widget=BS3TextFieldWidget(),
)
mangle_dupe_cols = BooleanField(
_("Mangle Duplicate Columns"),
description=_('Specify duplicate columns as "X.0, X.1".'),
)
usecols = JsonListField(
_("Use Columns"),
default=None,
description=_(
"Json list of the column names that should be read. "
"If not None, only these columns will be read from the file."
),
validators=[Optional()],
)
skipinitialspace = BooleanField(
_("Skip Initial Space"), description=_("Skip spaces after delimiter.")
)
skiprows = IntegerField(
_("Skip Rows"),
description=_("Number of rows to skip at start of file."),
validators=[Optional(), NumberRange(min=0)],
widget=BS3TextFieldWidget(),
)
nrows = IntegerField(
_("Rows to Read"),
description=_("Number of rows of file to read."),
validators=[Optional(), NumberRange(min=0)],
widget=BS3TextFieldWidget(),
)
skip_blank_lines = BooleanField(
_("Skip Blank Lines"),
description=_("Skip blank lines rather than interpreting them as NaN values."),
)
parse_dates = CommaSeparatedListField(
_("Parse Dates"),
description=_(
"A comma separated list of columns that should be parsed as dates."
),
filters=[filter_not_empty_values],
)
infer_datetime_format = BooleanField(
_("Infer Datetime Format"),
description=_("Use Pandas to interpret the datetime format automatically."),
)
decimal = StringField(
_("Decimal Character"),
default=".",
description=_("Character to interpret as decimal point."),
validators=[Optional(), Length(min=1, max=1)],
widget=BS3TextFieldWidget(),
)
index = BooleanField(
_("Dataframe Index"), description=_("Write dataframe index as a column.")
)
index_label = StringField(
_("Column Label(s)"),
description=_(
"Column label for index column(s). If None is given "
"and Dataframe Index is True, Index Names are used."
),
validators=[Optional()],
widget=BS3TextFieldWidget(),
)
null_values = JsonListField(
_("Null values"),
default=config["CSV_DEFAULT_NA_NAMES"],
description=_(
"Json list of the values that should be treated as null. "
'Examples: [""], ["None", "N/A"], ["nan", "null"]. '
"Warning: Hive database supports only single value. "
'Use [""] for empty string.'
),
)
class ExcelToDatabaseForm(UploadToDatabaseForm):
name = StringField(
_("Table Name"),
description=_("Name of table to be created from excel data."),
validators=[DataRequired()],
widget=BS3TextFieldWidget(),
)
excel_file = FileField(
_("Excel File"),
description=_("Select a Excel file to be uploaded to a database."),
validators=[
FileRequired(),
FileAllowed(
config["ALLOWED_EXTENSIONS"].intersection(config["EXCEL_EXTENSIONS"]),
_(
"Only the following file extensions are allowed: "
"%(allowed_extensions)s",
allowed_extensions=", ".join(
config["ALLOWED_EXTENSIONS"].intersection(
config["EXCEL_EXTENSIONS"]
)
),
),
),
],
)
sheet_name = StringField(
_("Sheet Name"),
description=_("Strings used for sheet names (default is the first sheet)."),
validators=[Optional()],
widget=BS3TextFieldWidget(),
)
con = QuerySelectField(
_("Database"),
query_factory=UploadToDatabaseForm.file_allowed_dbs,
get_pk=lambda a: a.id,
get_label=lambda a: a.database_name,
)
schema = StringField(
_("Schema"),
description=_("Specify a schema (if database flavor supports this)."),
validators=[Optional()],
widget=BS3TextFieldWidget(),
)
if_exists = SelectField(
_("Table Exists"),
description=_(
"If table exists do one of the following: "
"Fail (do nothing), Replace (drop and recreate table) "
"or Append (insert data)."
),
choices=[
("fail", _("Fail")),
("replace", _("Replace")),
("append", _("Append")),
],
validators=[DataRequired()],
)
header = IntegerField(
_("Header Row"),
description=_(
"Row containing the headers to use as "
"column names (0 is first line of data). "
"Leave empty if there is no header row."
),
validators=[Optional(), NumberRange(min=0)],
widget=BS3TextFieldWidget(),
)
index_col = IntegerField(
_("Index Column"),
description=_(
"Column to use as the row labels of the "
"dataframe. Leave empty if no index column."
),
validators=[Optional(), NumberRange(min=0)],
widget=BS3TextFieldWidget(),
)
mangle_dupe_cols = BooleanField(
_("Mangle Duplicate Columns"),
description=_('Specify duplicate columns as "X.0, X.1".'),
)
skiprows = IntegerField(
_("Skip Rows"),
description=_("Number of rows to skip at start of file."),
validators=[Optional(), NumberRange(min=0)],
widget=BS3TextFieldWidget(),
)
nrows = IntegerField(
_("Rows to Read"),
description=_("Number of rows of file to read."),
validators=[Optional(), NumberRange(min=0)],
widget=BS3TextFieldWidget(),
)
parse_dates = CommaSeparatedListField(
_("Parse Dates"),
description=_(
"A comma separated list of columns that should be parsed as dates."
),
filters=[filter_not_empty_values],
)
decimal = StringField(
_("Decimal Character"),
default=".",
description=_("Character to interpret as decimal point."),
validators=[Optional(), Length(min=1, max=1)],
widget=BS3TextFieldWidget(),
)
index = BooleanField(
_("Dataframe Index"), description=_("Write dataframe index as a column.")
)
index_label = StringField(
_("Column Label(s)"),
description=_(
"Column label for index column(s). If None is given "
"and Dataframe Index is True, Index Names are used."
),
validators=[Optional()],
widget=BS3TextFieldWidget(),
)
null_values = JsonListField(
_("Null values"),
default=config["CSV_DEFAULT_NA_NAMES"],
description=_(
"Json list of the values that should be treated as null. "
'Examples: [""], ["None", "N/A"], ["nan", "null"]. '
"Warning: Hive database supports only single value. "
'Use [""] for empty string.'
),
)
class ColumnarToDatabaseForm(UploadToDatabaseForm):
name = StringField(
_("Table Name"),
description=_("Name of table to be created from columnar data."),
validators=[DataRequired()],
widget=BS3TextFieldWidget(),
)
columnar_file = MultipleFileField(
_("Columnar File"),
description=_("Select a Columnar file to be uploaded to a database."),
validators=[
DataRequired(),
FileAllowed(
config["ALLOWED_EXTENSIONS"].intersection(
config["COLUMNAR_EXTENSIONS"]
),
_(
"Only the following file extensions are allowed: "
"%(allowed_extensions)s",
allowed_extensions=", ".join(
config["ALLOWED_EXTENSIONS"].intersection(
config["COLUMNAR_EXTENSIONS"]
)
),
),
),
],
)
con = QuerySelectField(
_("Database"),
query_factory=UploadToDatabaseForm.file_allowed_dbs,
get_pk=lambda a: a.id,
get_label=lambda a: a.database_name,
)
schema = StringField(
_("Schema"),
description=_("Specify a schema (if database flavor supports this)."),
validators=[Optional()],
widget=BS3TextFieldWidget(),
)
if_exists = SelectField(
_("Table Exists"),
description=_(
"If table exists do one of the following: "
"Fail (do nothing), Replace (drop and recreate table) "
"or Append (insert data)."
),
choices=[
("fail", _("Fail")),
("replace", _("Replace")),
("append", _("Append")),
],
validators=[DataRequired()],
)
usecols = JsonListField(
_("Use Columns"),
default=None,
description=_(
"Json list of the column names that should be read. "
"If not None, only these columns will be read from the file."
),
validators=[Optional()],
)
index = BooleanField(
_("Dataframe Index"), description=_("Write dataframe index as a column.")
)
index_label = StringField(
_("Column Label(s)"),
description=_(
"Column label for index column(s). If None is given "
"and Dataframe Index is True, Index Names are used."
),
validators=[Optional()],
widget=BS3TextFieldWidget(),
) | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/views/database/forms.py | 0.846419 | 0.290666 | forms.py | pypi |
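Each upload form above restricts files to the intersection of the global `ALLOWED_EXTENSIONS` and a per-format set (`CSV_EXTENSIONS`, `EXCEL_EXTENSIONS`, `COLUMNAR_EXTENSIONS`). The set arithmetic, shown here with made-up example values (Superset reads the real ones from `app.config`):

```python
# Hypothetical config values for illustration only
ALLOWED_EXTENSIONS = {"csv", "tsv", "xlsx", "xls", "parquet"}
CSV_EXTENSIONS = {"csv", "tsv"}
EXCEL_EXTENSIONS = {"xlsx", "xls"}

# The FileAllowed validator receives only extensions enabled in BOTH sets
csv_allowed = ALLOWED_EXTENSIONS.intersection(CSV_EXTENSIONS)
excel_allowed = ALLOWED_EXTENSIONS.intersection(EXCEL_EXTENSIONS)

# The validator's error message then lists them comma-separated
print("Only the following file extensions are allowed: "
      + ", ".join(sorted(csv_allowed)))
```

Using an intersection lets an administrator disable an extension globally without touching each per-format set.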
from flask import Markup
from flask_babel import lazy_gettext as _
from superset.dashboards.filters import DashboardAccessFilter
from superset.views.chart.filters import SliceFilter
class SliceMixin: # pylint: disable=too-few-public-methods
list_title = _("Charts")
show_title = _("Show Chart")
add_title = _("Add Chart")
edit_title = _("Edit Chart")
can_add = False
search_columns = (
"slice_name",
"description",
"viz_type",
"datasource_name",
"owners",
)
list_columns = ["slice_link", "viz_type", "datasource_link", "creator", "modified"]
order_columns = [
"slice_name",
"viz_type",
"datasource_link",
"modified",
"changed_on",
]
edit_columns = [
"slice_name",
"description",
"viz_type",
"owners",
"dashboards",
"params",
"cache_timeout",
]
base_order = ("changed_on", "desc")
description_columns = {
"description": Markup(
"The content here can be displayed as widget headers in the "
"dashboard view. Supports "
'<a href="https://daringfireball.net/projects/markdown/">'
"markdown</a>"
),
"params": _(
"These parameters are generated dynamically when clicking "
"the save or overwrite button in the explore view. This JSON "
"object is exposed here for reference and for power users who may "
"want to alter specific parameters."
),
"cache_timeout": _(
"Duration (in seconds) of the caching timeout for this chart. "
"Note this defaults to the datasource/table timeout if undefined."
),
}
base_filters = [["id", SliceFilter, lambda: []]]
label_columns = {
"cache_timeout": _("Cache Timeout"),
"creator": _("Creator"),
"dashboards": _("Dashboards"),
"datasource_link": _("Datasource"),
"description": _("Description"),
"modified": _("Last Modified"),
"owners": _("Owners"),
"params": _("Parameters"),
"slice_link": _("Chart"),
"slice_name": _("Name"),
"table": _("Table"),
"viz_type": _("Visualization Type"),
}
add_form_query_rel_fields = {"dashboards": [["name", DashboardAccessFilter, None]]}
edit_form_query_rel_fields = add_form_query_rel_fields | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/views/chart/mixin.py | 0.616936 | 0.245718 | mixin.py | pypi |
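The `cache_timeout` description says the chart value "defaults to the datasource/table timeout if undefined", which in turn falls back to a global default. A sketch of that resolution chain — the names and the global value are illustrative assumptions, not Superset's actual config:

```python
from typing import Optional

GLOBAL_CACHE_TIMEOUT = 86400  # hypothetical global default, in seconds

def effective_cache_timeout(
    chart_timeout: Optional[int],
    datasource_timeout: Optional[int],
    global_timeout: int = GLOBAL_CACHE_TIMEOUT,
) -> int:
    """First non-None value wins: chart, then datasource, then global."""
    for timeout in (chart_timeout, datasource_timeout):
        if timeout is not None:
            return timeout
    return global_timeout

print(effective_cache_timeout(None, 600))  # 600: falls through to the datasource
print(effective_cache_timeout(0, 600))     # 0: explicit "never expires" is kept
```

Note that `0` is a meaningful explicit value (cache never expires), which is why the check must be `is not None` rather than truthiness.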
from flask_babel import lazy_gettext as _
from ...dashboards.filters import DashboardAccessFilter
from ..base import check_ownership
class DashboardMixin: # pylint: disable=too-few-public-methods
list_title = _("Dashboards")
show_title = _("Show Dashboard")
add_title = _("Add Dashboard")
edit_title = _("Edit Dashboard")
list_columns = ["dashboard_link", "creator", "published", "modified"]
order_columns = ["dashboard_link", "modified", "published"]
edit_columns = [
"dashboard_title",
"slug",
"owners",
"roles",
"position_json",
"css",
"json_metadata",
"published",
]
show_columns = edit_columns + ["charts"]
search_columns = ("dashboard_title", "slug", "owners", "published")
add_columns = edit_columns
base_order = ("changed_on", "desc")
description_columns = {
"position_json": _(
"This json object describes the positioning of the widgets in "
"the dashboard. It is dynamically generated when adjusting "
"the widgets size and positions by using drag & drop in "
"the dashboard view"
),
"css": _(
"The CSS for individual dashboards can be altered here, or "
"in the dashboard view where changes are immediately "
"visible"
),
"slug": _("To get a readable URL for your dashboard"),
"json_metadata": _(
"This JSON object is generated dynamically when clicking "
"the save or overwrite button in the dashboard view. It "
"is exposed here for reference and for power users who may "
"want to alter specific parameters."
),
"owners": _("Owners is a list of users who can alter the dashboard."),
"roles": _(
"Roles is a list which defines access to the dashboard. "
"Granting a role access to a dashboard will bypass dataset level checks."
"If no roles defined then the dashboard is available to all roles."
),
"published": _(
"Determines whether or not this dashboard is "
"visible in the list of all dashboards"
),
}
base_filters = [["slice", DashboardAccessFilter, lambda: []]]
label_columns = {
"dashboard_link": _("Dashboard"),
"dashboard_title": _("Title"),
"slug": _("Slug"),
"charts": _("Charts"),
"owners": _("Owners"),
"roles": _("Roles"),
"published": _("Published"),
"creator": _("Creator"),
"modified": _("Modified"),
"position_json": _("Position JSON"),
"css": _("CSS"),
"json_metadata": _("JSON Metadata"),
}
def pre_delete(self, item: "DashboardMixin") -> None: # pylint: disable=no-self-use
check_ownership(item) | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/views/dashboard/mixin.py | 0.564339 | 0.207917 | mixin.py | pypi |
from abc import ABC, abstractmethod
from typing import Any, List, Optional
from flask_appbuilder.security.sqla.models import User
from superset.commands.utils import populate_owners
class BaseCommand(ABC):
"""
Base class for all Command like Superset Logic objects
"""
@abstractmethod
def run(self) -> Any:
"""
Run executes the command. Can raise command exceptions
:raises: CommandException
"""
@abstractmethod
def validate(self) -> None:
"""
Validate is normally called by run to validate data.
Will raise exception if validation fails
:raises: CommandException
"""
class CreateMixin: # pylint: disable=too-few-public-methods
@staticmethod
def populate_owners(
user: User, owner_ids: Optional[List[int]] = None
) -> List[User]:
"""
Populate list of owners, defaulting to the current user if `owner_ids` is
undefined or empty. If current user is missing in `owner_ids`, current user
is added unless belonging to the Admin role.
:param user: current user
:param owner_ids: list of owners by id's
:raises OwnersNotFoundValidationError: if at least one owner can't be resolved
:returns: Final list of owners
"""
return populate_owners(user, owner_ids, default_to_user=True)
class UpdateMixin: # pylint: disable=too-few-public-methods
@staticmethod
def populate_owners(
user: User, owner_ids: Optional[List[int]] = None
) -> List[User]:
"""
Populate list of owners. If current user is missing in `owner_ids`, current user
is added unless belonging to the Admin role.
:param user: current user
:param owner_ids: list of owners by id's
:raises OwnersNotFoundValidationError: if at least one owner can't be resolved
:returns: Final list of owners
"""
return populate_owners(user, owner_ids, default_to_user=False) | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/commands/base.py | 0.92153 | 0.225097 | base.py | pypi |
from typing import Any, Dict, List, Optional
from superset import app
from superset.models.core import Database
custom_password_store = app.config["SQLALCHEMY_CUSTOM_PASSWORD_STORE"]
def get_foreign_keys_metadata(
database: Database, table_name: str, schema_name: Optional[str]
) -> List[Dict[str, Any]]:
foreign_keys = database.get_foreign_keys(table_name, schema_name)
for fk in foreign_keys:
fk["column_names"] = fk.pop("constrained_columns")
fk["type"] = "fk"
return foreign_keys
def get_indexes_metadata(
database: Database, table_name: str, schema_name: Optional[str]
) -> List[Dict[str, Any]]:
indexes = database.get_indexes(table_name, schema_name)
for idx in indexes:
idx["type"] = "index"
return indexes
def get_col_type(col: Dict[Any, Any]) -> str:
try:
dtype = f"{col['type']}"
except Exception: # pylint: disable=broad-except
# sqla.types.JSON __str__ has a bug, so using __class__.
dtype = col["type"].__class__.__name__
return dtype
def get_table_metadata(
database: Database, table_name: str, schema_name: Optional[str]
) -> Dict[str, Any]:
"""
Get table metadata information, including type, pk, fks.
This function raises SQLAlchemyError when a schema is not found.
:param database: The database model
:param table_name: Table name
:param schema_name: schema name
:return: Dict table metadata ready for API response
"""
keys = []
columns = database.get_columns(table_name, schema_name)
primary_key = database.get_pk_constraint(table_name, schema_name)
if primary_key and primary_key.get("constrained_columns"):
primary_key["column_names"] = primary_key.pop("constrained_columns")
primary_key["type"] = "pk"
keys += [primary_key]
foreign_keys = get_foreign_keys_metadata(database, table_name, schema_name)
indexes = get_indexes_metadata(database, table_name, schema_name)
keys += foreign_keys + indexes
payload_columns: List[Dict[str, Any]] = []
table_comment = database.get_table_comment(table_name, schema_name)
for col in columns:
dtype = get_col_type(col)
payload_columns.append(
{
"name": col["name"],
"type": dtype.split("(")[0] if "(" in dtype else dtype,
"longType": dtype,
"keys": [k for k in keys if col["name"] in k["column_names"]],
"comment": col.get("comment"),
}
)
return {
"name": table_name,
"columns": payload_columns,
"selectStar": database.select_star(
table_name,
schema=schema_name,
show_cols=True,
indent=True,
cols=columns,
latest_partition=True,
),
"primaryKey": primary_key,
"foreignKeys": foreign_keys,
"indexes": keys,
"comment": table_comment,
} | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/databases/utils.py | 0.811974 | 0.167695 | utils.py | pypi |
from __future__ import annotations
import logging
from datetime import datetime, timedelta
from typing import Any, Dict, List, NamedTuple, Optional, TYPE_CHECKING
from flask_babel import gettext as _
from pandas import DataFrame
from superset.common.chart_data import ChartDataResultType
from superset.exceptions import (
QueryClauseValidationException,
QueryObjectValidationError,
)
from superset.sql_parse import validate_filter_clause
from superset.typing import Column, Metric, OrderBy
from superset.utils import pandas_postprocessing
from superset.utils.core import (
DTTM_ALIAS,
find_duplicates,
get_column_names,
get_metric_names,
is_adhoc_metric,
json_int_dttm_ser,
QueryObjectFilterClause,
)
from superset.utils.date_parser import parse_human_timedelta
from superset.utils.hashing import md5_sha_from_dict
if TYPE_CHECKING:
from superset.connectors.base.models import BaseDatasource
logger = logging.getLogger(__name__)
# TODO: Type Metrics dictionary with TypedDict when it becomes a vanilla python type
# https://github.com/python/mypy/issues/5288
class DeprecatedField(NamedTuple):
old_name: str
new_name: str
DEPRECATED_FIELDS = (
DeprecatedField(old_name="granularity_sqla", new_name="granularity"),
DeprecatedField(old_name="groupby", new_name="columns"),
DeprecatedField(old_name="timeseries_limit", new_name="series_limit"),
DeprecatedField(old_name="timeseries_limit_metric", new_name="series_limit_metric"),
)
DEPRECATED_EXTRAS_FIELDS = (
DeprecatedField(old_name="where", new_name="where"),
DeprecatedField(old_name="having", new_name="having"),
DeprecatedField(old_name="having_filters", new_name="having_druid"),
DeprecatedField(old_name="druid_time_origin", new_name="druid_time_origin"),
)
class QueryObject: # pylint: disable=too-many-instance-attributes
"""
The query object's schema matches the interfaces of DB connectors like sqla
and druid. The query objects are constructed on the client.
"""
annotation_layers: List[Dict[str, Any]]
applied_time_extras: Dict[str, str]
apply_fetch_values_predicate: bool
columns: List[Column]
datasource: Optional[BaseDatasource]
extras: Dict[str, Any]
filter: List[QueryObjectFilterClause]
from_dttm: Optional[datetime]
granularity: Optional[str]
inner_from_dttm: Optional[datetime]
inner_to_dttm: Optional[datetime]
is_rowcount: bool
is_timeseries: bool
metrics: Optional[List[Metric]]
order_desc: bool
orderby: List[OrderBy]
post_processing: List[Dict[str, Any]]
result_type: Optional[ChartDataResultType]
row_limit: int
row_offset: int
series_columns: List[Column]
series_limit: int
series_limit_metric: Optional[Metric]
time_offsets: List[str]
time_shift: Optional[timedelta]
time_range: Optional[str]
to_dttm: Optional[datetime]
def __init__( # pylint: disable=too-many-locals
self,
*,
annotation_layers: Optional[List[Dict[str, Any]]] = None,
applied_time_extras: Optional[Dict[str, str]] = None,
apply_fetch_values_predicate: bool = False,
columns: Optional[List[Column]] = None,
datasource: Optional[BaseDatasource] = None,
extras: Optional[Dict[str, Any]] = None,
filters: Optional[List[QueryObjectFilterClause]] = None,
granularity: Optional[str] = None,
is_rowcount: bool = False,
is_timeseries: Optional[bool] = None,
metrics: Optional[List[Metric]] = None,
order_desc: bool = True,
orderby: Optional[List[OrderBy]] = None,
post_processing: Optional[List[Optional[Dict[str, Any]]]] = None,
row_limit: int,
row_offset: Optional[int] = None,
series_columns: Optional[List[Column]] = None,
series_limit: int = 0,
series_limit_metric: Optional[Metric] = None,
time_range: Optional[str] = None,
time_shift: Optional[str] = None,
**kwargs: Any,
):
self._set_annotation_layers(annotation_layers)
self.applied_time_extras = applied_time_extras or {}
self.apply_fetch_values_predicate = apply_fetch_values_predicate or False
self.columns = columns or []
self.datasource = datasource
self.extras = extras or {}
self.filter = filters or []
self.granularity = granularity
self.is_rowcount = is_rowcount
self._set_is_timeseries(is_timeseries)
self._set_metrics(metrics)
self.order_desc = order_desc
self.orderby = orderby or []
self._set_post_processing(post_processing)
self.row_limit = row_limit
self.row_offset = row_offset or 0
self._init_series_columns(series_columns, metrics, is_timeseries)
self.series_limit = series_limit
self.series_limit_metric = series_limit_metric
self.time_range = time_range
self.time_shift = parse_human_timedelta(time_shift)
self.from_dttm = kwargs.get("from_dttm")
self.to_dttm = kwargs.get("to_dttm")
self.result_type = kwargs.get("result_type")
self.time_offsets = kwargs.get("time_offsets", [])
self.inner_from_dttm = kwargs.get("inner_from_dttm")
self.inner_to_dttm = kwargs.get("inner_to_dttm")
self._rename_deprecated_fields(kwargs)
self._move_deprecated_extra_fields(kwargs)
def _set_annotation_layers(
self, annotation_layers: Optional[List[Dict[str, Any]]]
) -> None:
self.annotation_layers = [
layer
for layer in (annotation_layers or [])
# formula annotations don't affect the payload, hence can be dropped
if layer["annotationType"] != "FORMULA"
]
def _set_is_timeseries(self, is_timeseries: Optional[bool]) -> None:
# is_timeseries is True if time column is in either columns or groupby
# (both are dimensions)
self.is_timeseries = (
is_timeseries if is_timeseries is not None else DTTM_ALIAS in self.columns
)
def _set_metrics(self, metrics: Optional[List[Metric]] = None) -> None:
# Support metric reference/definition in the format of
# 1. 'metric_name' - name of predefined metric
# 2. { label: 'label_name' } - legacy format for a predefined metric
# 3. { expressionType: 'SIMPLE' | 'SQL', ... } - adhoc metric
def is_str_or_adhoc(metric: Metric) -> bool:
return isinstance(metric, str) or is_adhoc_metric(metric)
self.metrics = metrics and [
x if is_str_or_adhoc(x) else x["label"] for x in metrics # type: ignore
]
def _set_post_processing(
self, post_processing: Optional[List[Optional[Dict[str, Any]]]]
) -> None:
post_processing = post_processing or []
self.post_processing = [post_proc for post_proc in post_processing if post_proc]
def _init_series_columns(
self,
series_columns: Optional[List[Column]],
metrics: Optional[List[Metric]],
is_timeseries: Optional[bool],
) -> None:
if series_columns:
self.series_columns = series_columns
elif is_timeseries and metrics:
self.series_columns = self.columns
else:
self.series_columns = []
def _rename_deprecated_fields(self, kwargs: Dict[str, Any]) -> None:
# rename deprecated fields
for field in DEPRECATED_FIELDS:
if field.old_name in kwargs:
logger.warning(
"The field `%s` is deprecated, please use `%s` instead.",
field.old_name,
field.new_name,
)
value = kwargs[field.old_name]
if value:
if hasattr(self, field.new_name):
logger.warning(
"The field `%s` is already populated, "
"replacing value with contents from `%s`.",
field.new_name,
field.old_name,
)
setattr(self, field.new_name, value)
def _move_deprecated_extra_fields(self, kwargs: Dict[str, Any]) -> None:
# move deprecated extras fields to extras
for field in DEPRECATED_EXTRAS_FIELDS:
if field.old_name in kwargs:
logger.warning(
"The field `%s` is deprecated and should "
"be passed to `extras` via the `%s` property.",
field.old_name,
field.new_name,
)
value = kwargs[field.old_name]
if value:
if hasattr(self.extras, field.new_name):
logger.warning(
"The field `%s` is already populated in "
"`extras`, replacing value with contents "
"from `%s`.",
field.new_name,
field.old_name,
)
self.extras[field.new_name] = value
@property
def metric_names(self) -> List[str]:
"""Return metrics names (labels), coerce adhoc metrics to strings."""
return get_metric_names(self.metrics or [])
@property
def column_names(self) -> List[str]:
"""Return column names (labels). Gives priority to groupbys if both groupbys
and metrics are non-empty, otherwise returns column labels."""
return get_column_names(self.columns)
def validate(
self, raise_exceptions: Optional[bool] = True
) -> Optional[QueryObjectValidationError]:
"""Validate query object"""
try:
self._validate_there_are_no_missing_series()
self._validate_no_have_duplicate_labels()
self._validate_filters()
return None
except QueryObjectValidationError as ex:
if raise_exceptions:
raise ex
return ex
def _validate_no_have_duplicate_labels(self) -> None:
all_labels = self.metric_names + self.column_names
if len(set(all_labels)) < len(all_labels):
dup_labels = find_duplicates(all_labels)
raise QueryObjectValidationError(
_(
"Duplicate column/metric labels: %(labels)s. Please make "
"sure all columns and metrics have a unique label.",
labels=", ".join(f'"{x}"' for x in dup_labels),
)
)
def _validate_filters(self) -> None:
for param in ("where", "having"):
clause = self.extras.get(param)
if clause:
try:
validate_filter_clause(clause)
except QueryClauseValidationException as ex:
raise QueryObjectValidationError(ex.message) from ex
def _validate_there_are_no_missing_series(self) -> None:
missing_series = [col for col in self.series_columns if col not in self.columns]
if missing_series:
raise QueryObjectValidationError(
_(
"The following entries in `series_columns` are missing "
"in `columns`: %(columns)s. ",
columns=", ".join(f'"{x}"' for x in missing_series),
)
)
def to_dict(self) -> Dict[str, Any]:
query_object_dict = {
"apply_fetch_values_predicate": self.apply_fetch_values_predicate,
"columns": self.columns,
"extras": self.extras,
"filter": self.filter,
"from_dttm": self.from_dttm,
"granularity": self.granularity,
"inner_from_dttm": self.inner_from_dttm,
"inner_to_dttm": self.inner_to_dttm,
"is_rowcount": self.is_rowcount,
"is_timeseries": self.is_timeseries,
"metrics": self.metrics,
"order_desc": self.order_desc,
"orderby": self.orderby,
"row_limit": self.row_limit,
"row_offset": self.row_offset,
"series_columns": self.series_columns,
"series_limit": self.series_limit,
"series_limit_metric": self.series_limit_metric,
"to_dttm": self.to_dttm,
}
return query_object_dict
def cache_key(self, **extra: Any) -> str:
"""
The cache key is made out of the key/values from to_dict(), plus any
other key/values in `extra`
We remove datetime bounds that are hard values, and replace them with
the user-provided inputs to bounds, which may be time-relative (as in
"5 days ago" or "now").
"""
cache_dict = self.to_dict()
cache_dict.update(extra)
# TODO: the below KVs can all be cleaned up and moved to `to_dict()` at some
# predetermined point in time when orgs are aware that the previously
# cached results will be invalidated.
if not self.apply_fetch_values_predicate:
del cache_dict["apply_fetch_values_predicate"]
if self.datasource:
cache_dict["datasource"] = self.datasource.uid
if self.result_type:
cache_dict["result_type"] = self.result_type
if self.time_range:
cache_dict["time_range"] = self.time_range
if self.post_processing:
cache_dict["post_processing"] = self.post_processing
if self.time_offsets:
cache_dict["time_offsets"] = self.time_offsets
for k in ["from_dttm", "to_dttm"]:
del cache_dict[k]
annotation_fields = [
"annotationType",
"descriptionColumns",
"intervalEndColumn",
"name",
"overrides",
"sourceType",
"timeColumn",
"titleColumn",
"value",
]
annotation_layers = [
{field: layer[field] for field in annotation_fields if field in layer}
for layer in self.annotation_layers
]
# only add to key if there are annotations present that affect the payload
if annotation_layers:
cache_dict["annotation_layers"] = annotation_layers
return md5_sha_from_dict(cache_dict, default=json_int_dttm_ser, ignore_nan=True)
def exec_post_processing(self, df: DataFrame) -> DataFrame:
"""
Perform post processing operations on DataFrame.
:param df: DataFrame returned from database model.
:return: new DataFrame to which all post processing operations have been
applied
:raises QueryObjectValidationError: If the post processing operation
is incorrect
"""
for post_process in self.post_processing:
operation = post_process.get("operation")
if not operation:
raise QueryObjectValidationError(
_("`operation` property of post processing object undefined")
)
if not hasattr(pandas_postprocessing, operation):
raise QueryObjectValidationError(
_(
"Unsupported post processing operation: %(operation)s",
operation=operation,
)
)
options = post_process.get("options", {})
df = getattr(pandas_postprocessing, operation)(df, **options)
return df | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/common/query_object.py | 0.721253 | 0.19031 | query_object.py | pypi |
from __future__ import annotations
import logging
from typing import Any, ClassVar, Dict, List, Optional, TYPE_CHECKING, Union
import pandas as pd
from superset.common.chart_data import ChartDataResultFormat, ChartDataResultType
from superset.common.query_context_processor import (
CachedTimeOffset,
QueryContextProcessor,
)
from superset.common.query_object import QueryObject
if TYPE_CHECKING:
from superset.connectors.base.models import BaseDatasource
from superset.models.helpers import QueryResult
logger = logging.getLogger(__name__)
class QueryContext:
"""
The query context contains the query object and additional fields necessary
to retrieve the data payload for a given viz.
"""
cache_type: ClassVar[str] = "df"
enforce_numerical_metrics: ClassVar[bool] = True
datasource: BaseDatasource
queries: List[QueryObject]
form_data: Optional[Dict[str, Any]]
result_type: ChartDataResultType
result_format: ChartDataResultFormat
force: bool
custom_cache_timeout: Optional[int]
cache_values: Dict[str, Any]
_processor: QueryContextProcessor
# TODO: Type datasource and query_object dictionary with TypedDict when it becomes
# a vanilla python type https://github.com/python/mypy/issues/5288
def __init__(
self,
*,
datasource: BaseDatasource,
queries: List[QueryObject],
form_data: Optional[Dict[str, Any]],
result_type: ChartDataResultType,
result_format: ChartDataResultFormat,
force: bool = False,
custom_cache_timeout: Optional[int] = None,
cache_values: Dict[str, Any]
) -> None:
self.datasource = datasource
self.result_type = result_type
self.result_format = result_format
self.queries = queries
self.form_data = form_data
self.force = force
self.custom_cache_timeout = custom_cache_timeout
self.cache_values = cache_values
self._processor = QueryContextProcessor(self)
def get_data(self, df: pd.DataFrame,) -> Union[str, List[Dict[str, Any]]]:
return self._processor.get_data(df)
def get_payload(
self, cache_query_context: Optional[bool] = False, force_cached: bool = False,
) -> Dict[str, Any]:
"""Returns the query results with both metadata and data"""
return self._processor.get_payload(cache_query_context, force_cached)
def get_cache_timeout(self) -> Optional[int]:
if self.custom_cache_timeout is not None:
return self.custom_cache_timeout
if self.datasource.cache_timeout is not None:
return self.datasource.cache_timeout
if hasattr(self.datasource, "database"):
return self.datasource.database.cache_timeout
return None
def query_cache_key(self, query_obj: QueryObject, **kwargs: Any) -> Optional[str]:
return self._processor.query_cache_key(query_obj, **kwargs)
def get_df_payload(
self, query_obj: QueryObject, force_cached: Optional[bool] = False,
) -> Dict[str, Any]:
return self._processor.get_df_payload(query_obj, force_cached)
def get_query_result(self, query_object: QueryObject) -> QueryResult:
return self._processor.get_query_result(query_object)
def processing_time_offsets(
self, df: pd.DataFrame, query_object: QueryObject,
) -> CachedTimeOffset:
return self._processor.processing_time_offsets(df, query_object)
def raise_for_access(self) -> None:
self._processor.raise_for_access() | /sage-superset-1.0.0.tar.gz/sage-superset-1.0.0/superset/common/query_context.py | 0.833866 | 0.236549 | query_context.py | pypi |
import atexit
import __main__
# The original rlcompleter simply imports the builtins module.
# However, we need to maintain compatibility with python 2 also.
try:
import builtins
except ImportError:
import __builtin__ as builtins
__all__ = ["Completer"]
class Completer:
def __init__(self, namespace = None):
"""Create a new completer for the command line.
Completer([namespace]) -> completer instance.
If unspecified, the default namespace where completions are performed
is __main__ (technically, __main__.__dict__). Namespaces should be
given as dictionaries.
Completer instances should be used as the completion mechanism of
readline via the set_completer() call:
readline.set_completer(Completer(my_namespace).complete)
"""
if namespace and not isinstance(namespace, dict):
raise TypeError('namespace must be a dictionary')
# Don't bind to namespace quite yet, but flag whether the user wants a
# specific namespace or to use __main__.__dict__. This will allow us
# to bind to __main__.__dict__ at completion time, not now.
if namespace is None:
self.use_main_ns = 1
else:
self.use_main_ns = 0
self.namespace = namespace
def complete(self, text, state):
"""Return the next possible completion for 'text'.
This is called successively with state == 0, 1, 2, ... until it
returns None. The completion should begin with 'text'.
"""
if self.use_main_ns:
self.namespace = __main__.__dict__
if not text.strip():
if state == 0:
if _readline_available:
readline.insert_text('\t')
readline.redisplay()
return ''
else:
return '\t'
else:
return None
if state == 0:
if "." in text:
self.matches = self.attr_matches(text)
else:
self.matches = self.global_matches(text)
try:
return self.matches[state]
except IndexError:
return None
def _callable_postfix(self, val, word):
if callable(val):
word = word + "("
return word
def global_matches(self, text):
"""Compute matches when text is a simple name.
Return a list of all keywords, built-in functions and names currently
defined in self.namespace that match.
"""
import keyword
matches = []
seen = {"__builtins__"}
n = len(text)
for word in keyword.kwlist:
if word[:n] == text:
seen.add(word)
if word in {'finally', 'try'}:
word = word + ':'
elif word not in {'False', 'None', 'True',
'break', 'continue', 'pass',
'else'}:
word = word + ' '
matches.append(word)
for nspace in [self.namespace, builtins.__dict__]:
for word, val in nspace.items():
if word[:n] == text and word not in seen:
seen.add(word)
matches.append(self._callable_postfix(val, word))
return matches
def attr_matches(self, text):
"""Compute matches when text contains a dot.
Assuming the text is of the form NAME.NAME....[NAME], and is
evaluable in self.namespace, it will be evaluated and its attributes
(as revealed by dir()) are used as possible completions. (For class
instances, class members are also considered.)
WARNING: this can still invoke arbitrary C code, if an object
with a __getattr__ hook is evaluated.
"""
import re
m = re.match(r"(\w+(\.\w+)*)\.(\w*)", text)
if not m:
return []
expr, attr = m.group(1, 3)
try:
thisobject = eval(expr, self.namespace)
except Exception:
return []
# get the content of the object, except __builtins__
words = set(dir(thisobject))
words.discard("__builtins__")
if hasattr(thisobject, '__class__'):
words.add('__class__')
words.update(get_class_members(thisobject.__class__))
matches = []
n = len(attr)
if attr == '':
noprefix = '_'
elif attr == '_':
noprefix = '__'
else:
noprefix = None
while True:
for word in words:
if (word[:n] == attr and
not (noprefix and word[:n+1] == noprefix)):
match = "%s.%s" % (expr, word)
try:
val = getattr(thisobject, word)
except Exception:
pass # Include even if attribute not set
else:
match = self._callable_postfix(val, match)
matches.append(match)
if matches or not noprefix:
break
if noprefix == '_':
noprefix = '__'
else:
noprefix = None
matches.sort()
return matches
def get_class_members(klass):
ret = dir(klass)
if hasattr(klass,'__bases__'):
for base in klass.__bases__:
ret = ret + get_class_members(base)
return ret
# Regina's edits are here.
# Instead of trying to import the readline module, we just ignore it
# and treat this module as a standalone completion tool.
_readline_available = False | /sageRegina-6.0.1.tar.gz/sageRegina-6.0.1/regina_2bbddde/python/regina/plainCompleter.py | 0.426322 | 0.22396 | plainCompleter.py | pypi |
# sagecipher
[](https://pypi.python.org/pypi/sagecipher)
[](https://codecov.io/gh/p-sherratt/sagecipher)
[](https://travis-ci.org/p-sherratt/sagecipher)
**sagecipher** (**s**sh **age**nt **cipher**) provides an AES cipher, whose key is obtained by signing nonce data via SSH agent. This is illustrated below.

This can be used in turn by the `keyring` library, and by `ansible-vault` to encrypt/decrypt files or secrets via the users' local or forwarded ssh-agent session.
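The key-derivation idea described above can be sketched in a few lines of Python. This is a simplified model rather than sagecipher's actual implementation: the `sign` callable stands in for an ssh-agent signing operation (with a real agent it would be something like paramiko's `key.sign_ssh_data`), and the HMAC-based `fake_agent_sign` is purely illustrative. It relies on the signature being deterministic for a given nonce, as RSA PKCS#1 v1.5 signatures are.

```python
import hashlib
import hmac
import os

def derive_key(sign, nonce=None):
    """Derive a 32-byte AES key from an ssh-agent style signature.

    `sign` is any callable that signs bytes and returns bytes; here it is
    an assumption used to keep the sketch self-contained.
    """
    if nonce is None:
        nonce = os.urandom(16)  # fresh nonce, stored alongside the ciphertext
    signature = sign(nonce)
    # Hash the deterministic signature down to a fixed-size key.
    return nonce, hashlib.sha256(signature).digest()

# Stand-in for an agent-backed signing operation (HMAC with a fixed secret);
# a real deployment would call out to ssh-agent instead.
def fake_agent_sign(data):
    return hmac.new(b"ssh-key-material", data, hashlib.sha256).digest()

nonce, key = derive_key(fake_agent_sign)
# Re-signing the stored nonce yields the same key, so the ciphertext can be
# decrypted later as long as the same ssh key is available in the agent.
assert derive_key(fake_agent_sign, nonce)[1] == key
```

Because only the nonce is stored with the ciphertext, decryption requires access to the same ssh key (locally or via a forwarded agent) to regenerate the signature and hence the key.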
## Contents
* [Installation](#installation)
* [Usage](#usage)
* [Using the keyring backend](#keyring)
* [Using with ansible-vault](#ansible)
* [Using sagecipher directly in Python](#using-in-python)
* [Using the sagecipher CLI tool](#cli)
## Installation
```sh
pip install sagecipher
```
## Usage <a name='usage'></a>
Before using, `ssh-agent` must be running with at least one ssh-key available for producing cipher key material:
```console
$ source <(ssh-agent)
Agent pid 3710
$ ssh-add
Enter passphrase for /home/somebody/.ssh/id_rsa:
Identity added: /home/somebody/.ssh/id_rsa (/home/somebody/.ssh/id_rsa)
```
### Using the keyring backend <a name='keyring'></a>
Here we will set the following environment variables:
| Environment Variable | Value | Description |
|----------------------------------------|------------------------------|-------------------------------------------------------------|
| `PYTHON_KEYRING_BACKEND` | `sagecipher.keyring.Keyring` | Tells `keyring` explicitly to use the `sagecipher` backend |
| `KEYRING_PROPERTY_SSH_KEY_FINGERPRINT` | `<hex fingerprint of ssh key>` | Pre-selects the SSH key for the `sagecipher` backend to use |
If no other keyring backends are available, sagecipher will be selected as the default backend with a `priority` of 1. The `PYTHON_KEYRING_BACKEND` environment variable can be set to explicitly set the backend. See the [keyring docs](https://keyring.readthedocs.io/en/latest/) for more help using the keyring library.
```console
$ sagecipher list-keys # paramiko does not yet expose key comments, unfortunately..
[ssh-rsa] e8:19:fe:c5:0a:b4:57:5d:96:27:b3:e3:ec:ba:24:3c
[ssh-rsa] 38:c5:94:45:ca:01:65:d1:d0:c5:ee:5e:cd:b3:94:39
$ export PYTHON_KEYRING_BACKEND=sagecipher.keyring.Keyring
$ keyring set svc user1
Password for 'user1' in 'svc':
Please select from the following keys...
[1] ssh-rsa e8:19:fe:c5:0a:b4:57:5d:96:27:b3:e3:ec:ba:24:3c
[2] ssh-rsa 38:c5:94:45:ca:01:65:d1:d0:c5:ee:5e:cd:b3:94:39
Selection (1..2): 1
$ keyring get svc user1
password1
$ export KEYRING_PROPERTY_SSH_KEY_FINGERPRINT=e8:19:fe:c5:0a:b4:57:5d:96:27:b3:e3:ec:ba:24:3c
$ keyring get svc user2
password2
$ python
Python 3.6.8 (default, Jan 14 2019, 11:02:34)
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import keyring
>>> keyring.get_password('svc', 'user1')
'password1'
>>> keyring.get_password('svc', 'user2')
'password2'
```
### Using with ansible-vault <a name='ansible'></a>
In this example we create a secret key in the keyring for use with `ansible-vault`.
This process will work with any keyring backend, but it's assumed we are up and
running with the `sagecipher` keyring backend per the previous section.
For more information, see:
<https://docs.ansible.com/ansible/latest/user_guide/vault.html>
1. Set up environment variables
| Environment Variable | Value | Description |
|----------------------------------------|------------------------------|-----------------------------------------------------------------------|
| `PYTHON_KEYRING_BACKEND` | `sagecipher.keyring.Keyring` | Tells `keyring` to use the `sagecipher` backend |
| `KEYRING_PROPERTY_SSH_KEY_FINGERPRINT` | `<hex fingerprint of ssh key>` | Pre-selects the SSH key for the `sagecipher` backend to use |
| `ANSIBLE_VAULT_PASSWORD_FILE` | `<path to password script>` | `ansible-vault` will use this script to find the vault encryption key |
Replace the key fingerprint below with your own.
```sh
export PYTHON_KEYRING_BACKEND=sagecipher.keyring.Keyring
export KEYRING_PROPERTY_SSH_KEY_FINGERPRINT=e8:19:fe:c5:0a:b4:57:5d:96:27:b3:e3:ec:ba:24:3c
export ANSIBLE_VAULT_PASSWORD_FILE=~/vault-pass.sh
```
2. Generate a random key for ansible-vault and store in the keyring
```sh
keyring set ansible-vault key < <(dd if=/dev/urandom bs=32 count=1 | base64)
```
3. Create the vault password script to retrieve the vault key
```console
$ cat <<EOF > ~/vault-pass.sh
#!/bin/sh
keyring get ansible-vault key
EOF
$ chmod +x ~/vault-pass.sh
```
4. Test it out with `ansible-vault`
```console
$ ansible-vault encrypt_string "secret_password" --name "secret_attribute" > secrets.yml
$ ansible localhost -m debug -a var="secret_attribute" -e "@secrets.yml"
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | SUCCESS => {
"secret_attribute": "secret_password"
}
```
### Using sagecipher directly in Python <a name='using-in-python'></a>
```python
>>> from sagecipher import Cipher
>>>
>>> # Encrypts using the first SSH key available from SSH agent...
>>> enc_text = Cipher.encrypt_string("hello, world")
>>> text = Cipher.decrypt_string(enc_text)
>>> text
"hello, world"
```
### Using the sagecipher CLI tool <a name='cli'></a>
Check `sagecipher --help` for usage. By default, the 'decrypt' operation will create a FIFO file, and then start a loop to decrypt out to the FIFO whenever it is opened.
The FIFO is created with mode 600 by default, and if the permissions are altered or the parent shell is terminated then the sagecipher background session will end.
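The FIFO-serving pattern can be sketched as below. This is a minimal model assuming a POSIX system, not sagecipher's actual code (which also monitors the FIFO's permissions and the parent shell): opening a FIFO for writing blocks until a reader appears, so the plaintext only enters the pipe while someone is actively reading it.

```python
import os
import tempfile
import threading

def serve_plaintext(fifo_path, plaintext, times=1):
    """Serve `plaintext` to each reader that opens `fifo_path`."""
    os.mkfifo(fifo_path, 0o600)  # owner-only permissions, like sagecipher
    try:
        for _ in range(times):
            # open() blocks here until a reader opens the FIFO;
            # each open/write/close cycle serves exactly one reader.
            with open(fifo_path, "w") as fifo:
                fifo.write(plaintext)
    finally:
        os.unlink(fifo_path)

# Demonstration: serve a single read from a background thread.
path = os.path.join(tempfile.mkdtemp(), "decfile")
worker = threading.Thread(target=serve_plaintext, args=(path, "secret sauce\n"))
worker.start()
while not os.path.exists(path):  # wait for the FIFO to be created
    pass
with open(path) as f:
    print(f.read(), end="")
worker.join()
```

In the real tool the loop decrypts on demand rather than holding a plaintext string, so the secret is never written to disk.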
```console
$ sagecipher encrypt - encfile
Please select from the following keys...
[1] ssh-rsa e8:19:fe:c5:0a:b4:57:5d:96:27:b3:e3:ec:ba:24:3c
[2] ssh-rsa 38:c5:94:45:ca:01:65:d1:d0:c5:ee:5e:cd:b3:94:39
Selection (1..2): 1
Reading from STDIN...
secret sauce
(CTRL-D)
$ sagecipher decrypt encfile
secret sauce
$ mkfifo decfile
$ sagecipher decrypt encfile decfile &
[1] 16753
$ cat decfile # decfile is just a FIFO
secret sauce
$
```
| /sagecipher-0.7.5.tar.gz/sagecipher-0.7.5/README.md | 0.530236 | 0.914061 | README.md | pypi |
```
import matplotlib
import tensorflow as tf
from tensorflow.keras.datasets import fashion_mnist
matplotlib.use('TkAgg')
from tensorflow.keras.callbacks import ModelCheckpoint
def train(x_train, y_train, x_test, y_test):
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]
# Reshape input data from (28, 28) to (28, 28, 1)
w, h = 28, 28
x_train = x_train.reshape(x_train.shape[0], w, h, 1)
x_valid = x_valid.reshape(x_valid.shape[0], w, h, 1)
x_test = x_test.reshape(x_test.shape[0], w, h, 1)
# One-hot encode the labels
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_valid = tf.keras.utils.to_categorical(y_valid, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)
# Print training set shape
print("x_train shape:", x_train.shape, "y_train shape:", y_train.shape)
# Print the number of training, validation, and test datasets
print(x_train.shape[0], 'train set')
print(x_valid.shape[0], 'validation set')
print(x_test.shape[0], 'test set')
model = tf.keras.Sequential()
# Must define the input shape in the first layer of the neural network
model.add(
tf.keras.layers.Conv2D(filters=64, kernel_size=2, padding='same', activation='relu', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
checkpointer = ModelCheckpoint(filepath='model.weights.best.hdf5', verbose=1, save_best_only=True)
model.fit(x_train, y_train,
batch_size=64,
epochs=10,
validation_data=(x_valid, y_valid),
callbacks=[checkpointer])
model.load_weights('model.weights.best.hdf5')
# Evaluate the model on test set
score = model.evaluate(x_test, y_test, verbose=0)
# Print test accuracy
print('\n', 'Test accuracy:', score[1])
def main():
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
print("x_train shape:", x_train.shape, "y_train shape:", y_train.shape)
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
train(x_train, y_train, x_test, y_test)
main()
```
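The `to_categorical` calls in the notebook one-hot encode the integer labels. The transformation is easy to reproduce with plain NumPy (a sketch of the idea, not the Keras implementation):

```python
import numpy as np

def one_hot(labels, num_classes):
    # each integer label becomes a row with a single 1.0 at its index
    out = np.zeros((len(labels), num_classes), dtype="float32")
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([0, 2, 1], 3))
```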
| /sagecreator-0.1.1.6.tar.gz/sagecreator-0.1.1.6/sagebase/roles/sage.installer/files/sample_notebook.ipynb | 0.899927 | 0.812496 | sample_notebook.ipynb | pypi |
import json

EPHEMERAL_STORAGE_DEVICES = {
'c1.medium': 1,
'c1.xlarge': 4,
'c3.large': 2,
'c3.xlarge': 2,
'c3.2xlarge': 2,
'c3.4xlarge': 2,
'c3.8xlarge': 2,
'cc2.8xlarge': 4,
'cg1.4xlarge': 2,
'cr1.8xlarge': 2,
'd2.xlarge': 3,
'd2.2xlarge': 6,
'd2.4xlarge': 12,
'd2.8xlarge': 24,
'g2.2xlarge': 1,
'g2.8xlarge': 2,
'hi1.4xlarge': 2,
'hs1.8xlarge': 24,
'i2.xlarge': 1,
'i2.2xlarge': 2,
'i2.4xlarge': 4,
'i2.8xlarge': 8,
'm1.small': 1,
'm1.medium': 2,
'm1.large': 2,
'm1.xlarge': 4,
'm2.xlarge': 1,
'm2.2xlarge': 1,
'm2.4xlarge': 2,
'm3.medium': 1,
'm3.large': 1,
'm3.xlarge': 2,
'm3.2xlarge': 2,
'r3.large': 1,
'r3.xlarge': 1,
'r3.2xlarge': 1,
'r3.4xlarge': 1,
'r3.8xlarge': 2
}
DEVICE_SEQUENCE = 'bcdefghijklmnopqrstuvwxy'
class Application(object):
def __init__(self, **kwargs):
# Set all given parameters
for key, val in kwargs.items():
setattr(self, key, val)
def get_ephemeral_block_mapping(self):
bdm = {}
device_map = {k: v for k, v in EPHEMERAL_STORAGE_DEVICES.items() if v > 1}
if self.instance_type in device_map:
for i in range(0, device_map[self.instance_type]):
device = {'ephemeral_name': "ephemeral{0}".format(i)}
bdm['/dev/sd{0}'.format(DEVICE_SEQUENCE[i])] = device
return bdm
def get_block_mapping_ansible(self):
bdm = list()
root_device = {
'volume_type': self.root_volume_type,
'volume_size': self.root_volume_size,
'delete_on_termination': self.ebs_delete_on_termination
}
if self.os_type == 'debian':
root_device['device_name'] = '/dev/xvda'
elif self.os_type == 'ubuntu' or self.os_type == 'bionic':
root_device['device_name'] = '/dev/sda1'
bdm.append(root_device)
for (k, v) in self.get_ephemeral_block_mapping().items():
d = dict(device_name=k, ephemeral=v['ephemeral_name'])
bdm.append(d)
if self.ebs_create_volumes:
device_count = len(bdm) - 1 # For zero-based index
for vol_count in range(0, self.ebs_volume_count):
d = dict(device_name=self.device_name(DEVICE_SEQUENCE[device_count]))
d['volume_type'] = self.ebs_volume_type
if self.ebs_volume_iops:
d['iops'] = self.ebs_volume_iops
if self.ebs_volume_size:
d['volume_size'] = self.ebs_volume_size
d['delete_on_termination'] = self.ebs_delete_on_termination
device_count = device_count + 1
bdm.append(d)
return bdm
def device_name(self, sequence_combination):
if self.os_type == 'debian':
return '/dev/xvd{}'.format(sequence_combination)
elif self.os_type == 'ubuntu' or self.os_type == 'bionic':
return '/dev/sd{}'.format(sequence_combination)
raise Exception("Unknown OS type")
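# As a quick sanity check, the ephemeral-device naming rule used by
# get_ephemeral_block_mapping can be reproduced standalone (a sketch
# mirroring the logic above, not the module itself):

```python
DEVICE_SEQUENCE = 'bcdefghijklmnopqrstuvwxy'

def ephemeral_mapping(device_count):
    # ephemeral0..ephemeralN-1 map onto /dev/sdb, /dev/sdc, ...
    return {
        '/dev/sd{0}'.format(DEVICE_SEQUENCE[i]): {'ephemeral_name': 'ephemeral{0}'.format(i)}
        for i in range(device_count)
    }

# e.g. an m1.xlarge exposes 4 ephemeral devices
print(sorted(ephemeral_mapping(4).items()))
```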
def run(argument_spec, argv):
print(argv)
from argparse import ArgumentParser
ap = ArgumentParser(prog="block_device_mapping")
for k, v in argument_spec.items():
ap.add_argument("--{}".format(k), required=v['required'])
args = ap.parse_args(argv)
app = Application(instance_type=args.instance_type)
print(json.dumps(app.get_ephemeral_block_mapping(), sort_keys=True, indent=4))
print(json.dumps(app.get_block_mapping_ansible(), sort_keys=True, indent=4))
def ansible_run(argument_spec):
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=False,
)
try:
app = Application(
module=module,
**module.params
)
module.exit_json(changed=False, result=app.get_block_mapping_ansible())
except Exception as e:
module.fail_json(msg=str(e))
from ansible.module_utils.basic import *
def main():
argument_spec = dict(
instance_type=dict(required=True, type='str'),
os_type=dict(required=True, type='str', choices=['debian', 'ubuntu', 'bionic']),
root_volume_size=dict(required=False, type='str'),
root_volume_type=dict(required=False, type='str', choices=['gp2', 'io1', 'standard'], default='standard'),
ebs_create_volumes=dict(required=False, type='bool'),
ebs_volume_count=dict(required=False, type='int'),
ebs_volume_type=dict(required=False, type='str', choices=['gp2', 'io1', 'standard'], default='standard'),
ebs_volume_size=dict(required=False, type='int'),
ebs_volume_iops=dict(required=False, type='int'),
ebs_delete_on_termination=dict(required=False, type='bool', default=True),
)
ansible_run(argument_spec)
if __name__ == '__main__':
main() | /sagecreator-0.1.1.6.tar.gz/sagecreator-0.1.1.6/sagebase/library/block_device_mapping.py | 0.433022 | 0.277497 | block_device_mapping.py | pypi |
"""Placeholder docstring"""
from __future__ import absolute_import
import textwrap
import six
class ClientError(Exception):
"""Error class used to separate framework and user errors."""
class _CalledProcessError(ClientError):
"""This exception is raised when a process run by check_call() or
check_output() returns a non-zero exit status.
Attributes:
cmd, return_code, output
"""
def __init__(self, cmd, return_code=None, output=None):
self.return_code = return_code
self.cmd = cmd
self.output = output
super(_CalledProcessError, self).__init__()
def __str__(self):
if six.PY3 and self.output:
error_msg = "\n%s" % self.output.decode("latin1")
elif self.output:
error_msg = "\n%s" % self.output
else:
error_msg = ""
message = '%s:\nCommand "%s"%s' % (type(self).__name__, self.cmd, error_msg)
return message.strip()
class InstallModuleError(_CalledProcessError):
"""Error class indicating a module failed to install."""
class ImportModuleError(ClientError):
"""Error class indicating a module failed to import."""
class ExecuteUserScriptError(_CalledProcessError):
"""Error class indicating a user script failed to execute."""
class ChannelDoesNotExistException(Exception):
"""Error class indicating a channel does not exist."""
def __init__(self, channel_name):
super(ChannelDoesNotExistException, self).__init__(
"Channel %s is not a valid channel" % channel_name
)
class UnsupportedFormatError(Exception):
"""Error class indicating a content type is not supported by the current framework."""
def __init__(self, content_type, **kwargs):
self.message = textwrap.dedent(
"""Content type %s is not supported by this framework.
Please implement input_fn to deserialize the request data or an output_fn to
serialize the response. For more information, see the SageMaker Python SDK README."""
% content_type
)
super(UnsupportedFormatError, self).__init__(self.message, **kwargs) | /sagemaker_containers-2.8.6.post0.tar.gz/sagemaker_containers-2.8.6.post0/src/sagemaker_containers/_errors.py | 0.867429 | 0.160266 | _errors.py | pypi |
"""Placeholder docstring"""
from __future__ import absolute_import
import csv
import io
import json
from typing import Iterable
import numpy as np
from scipy.sparse import issparse
from six import BytesIO, StringIO
from sagemaker_containers import _content_types, _errors
from sagemaker_containers._recordio import (
_write_numpy_to_dense_tensor,
_write_spmatrix_to_sparse_tensor,
)
def array_to_npy(array_like): # type: (np.array or Iterable or int or float) -> object
"""Convert an array like object to the NPY format.
To understand better what an array like object is see:
https://docs.scipy.org/doc/numpy/user/basics.creation.html#converting-python-array-like-objects-to-numpy-arrays
Args:
array_like (np.array or Iterable or int or float): array like object to be converted to NPY.
Returns:
(obj): NPY array.
"""
buffer = BytesIO()
np.save(buffer, array_like)
return buffer.getvalue()
def npy_to_numpy(npy_array): # type: (object) -> np.array
"""Convert an NPY array into numpy.
Args:
npy_array (npy array): to be converted to numpy array
Returns:
(np.array): converted numpy array.
"""
stream = BytesIO(npy_array)
return np.load(stream, allow_pickle=True)
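# A round trip through the two NPY helpers reduces to np.save/np.load on an
# in-memory buffer, as this standalone sketch shows:

```python
import numpy as np
from io import BytesIO

arr = np.array([[1, 2], [3, 4]])
buf = BytesIO()
np.save(buf, arr)  # what array_to_npy does
restored = np.load(BytesIO(buf.getvalue()), allow_pickle=True)  # npy_to_numpy
print(restored.tolist())
```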
def array_to_json(array_like): # type: (np.array or Iterable or int or float) -> str
"""Convert an array like object to JSON.
To understand better what an array like object is see:
https://docs.scipy.org/doc/numpy/user/basics.creation.html#converting-python-array-like-objects-to-numpy-arrays
Args:
array_like (np.array or Iterable or int or float): array like object to be
converted to JSON.
Returns:
(str): object serialized to JSON
"""
def default(_array_like):
if hasattr(_array_like, "tolist"):
return _array_like.tolist()
return json.JSONEncoder().default(_array_like)
return json.dumps(array_like, default=default)
def json_to_numpy(string_like, dtype=None): # type: (str) -> np.array
"""Convert a JSON object to a numpy array.
Args:
string_like (str): JSON string.
dtype (dtype, optional): Data type of the resulting array. If None,
the dtypes will be determined by the
contents of each column, individually.
This argument can only be used to
'upcast' the array. For downcasting,
use the .astype(t) method.
Returns:
(np.array): numpy array
"""
data = json.loads(string_like)
return np.array(data, dtype=dtype)
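# The JSON pair behaves the same way: tolist() on the way out, np.array on
# the way back. A standalone sketch of the round trip:

```python
import json
import numpy as np

arr = np.array([1, 2, 3])
serialized = json.dumps(arr, default=lambda a: a.tolist())  # array_to_json's fallback
restored = np.array(json.loads(serialized))                 # json_to_numpy
print(serialized, restored.tolist())
```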
def csv_to_numpy(string_like, dtype=None): # type: (str) -> np.array
"""Convert a CSV object to a numpy array.
Args:
string_like (str): CSV string.
dtype (dtype, optional): Data type of the resulting array. If None, the
dtypes will be determined by the contents of
each column, individually. This argument can
only be used to 'upcast' the array. For
downcasting, use the .astype(t) method.
Returns:
(np.array): numpy array
"""
try:
stream = StringIO(string_like)
reader = csv.reader(stream, delimiter=",", quotechar='"', doublequote=True, strict=True)
array = np.array([row for row in reader]).squeeze()
array = array.astype(dtype)
except ValueError as e:
if dtype is not None:
raise _errors.ClientError(
"Error while writing numpy array: {}. dtype is: {}".format(e, dtype)
)
except Exception as e:
raise _errors.ClientError("Error while decoding csv: {}".format(e))
return array
def array_to_csv(array_like): # type: (np.array or Iterable or int or float) -> str
"""Convert an array like object to CSV.
To understand better what an array like object is see:
https://docs.scipy.org/doc/numpy/user/basics.creation.html#converting-python-array-like-objects-to-numpy-arrays
Args:
array_like (np.array or Iterable or int or float): array like object to be converted to CSV.
Returns:
(str): object serialized to CSV
"""
array = np.array(array_like)
if len(array.shape) == 1:
array = np.reshape(array, (array.shape[0], 1)) # pylint: disable=unsubscriptable-object
try:
stream = StringIO()
writer = csv.writer(
stream, lineterminator="\n", delimiter=",", quotechar='"', doublequote=True, strict=True
)
writer.writerows(array)
return stream.getvalue()
except csv.Error as e:
raise _errors.ClientError("Error while encoding csv: {}".format(e))
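# A CSV round trip with the csv module shows why the encoder reshapes 1-D
# input first: writerows expects one iterable per row. A standalone sketch:

```python
import csv
import io
import numpy as np

rows = np.array([[1.0, 2.0], [3.0, 4.0]])
stream = io.StringIO()
csv.writer(stream, lineterminator="\n").writerows(rows)
text = stream.getvalue()  # each numeric row becomes one CSV line
restored = np.array(list(csv.reader(io.StringIO(text)))).astype("float64")
print(restored.tolist())
```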
def array_to_recordio_protobuf(array_like, labels=None):
"""Convert an array like object to recordio-protobuf format.
To understand better what an array like object is see:
https://docs.scipy.org/doc/numpy/user/basics.creation.html#converting-python-array-like-objects-to-numpy-arrays
Args:
array_like (np.array or scipy.sparse.csr_matrix): array like object to be
converted to recordio-protobuf.
labels (np.array or scipy.sparse.csr_matrix): array like object representing
the labels to be encoded
Returns:
buffer: bytes buffer recordio-protobuf
"""
if len(array_like.shape) == 1:
array_like = array_like.reshape(1, array_like.shape[0])
assert len(array_like.shape) == 2, "Expecting a 1 or 2 dimensional array"
buffer = io.BytesIO()
if issparse(array_like):
_write_spmatrix_to_sparse_tensor(buffer, array_like, labels)
else:
_write_numpy_to_dense_tensor(buffer, array_like, labels)
buffer.seek(0)
return buffer.getvalue()
_encoders_map = {
_content_types.NPY: array_to_npy,
_content_types.CSV: array_to_csv,
_content_types.JSON: array_to_json,
}
_decoders_map = {
_content_types.NPY: npy_to_numpy,
_content_types.CSV: csv_to_numpy,
_content_types.JSON: json_to_numpy,
}
def decode(obj, content_type):
# type: (np.array or Iterable or int or float, str) -> np.array
"""Decode an object ton a one of the default content types to a numpy array.
Args:
obj (object): to be decoded.
content_type (str): content type to be used.
Returns:
np.array: decoded object.
"""
try:
decoder = _decoders_map[content_type]
return decoder(obj)
except KeyError:
raise _errors.UnsupportedFormatError(content_type)
def encode(array_like, content_type):
# type: (np.array or Iterable or int or float, str) -> np.array
"""Encode an array like object in a specific content_type to a numpy array.
To understand better what an array like object is see:
https://docs.scipy.org/doc/numpy/user/basics.creation.html#converting-python-array-like-objects-to-numpy-arrays
Args:
array_like (np.array or Iterable or int or float): to be converted to numpy.
content_type (str): content type to be used.
Returns:
(np.array): object converted as numpy array.
"""
try:
encoder = _encoders_map[content_type]
return encoder(array_like)
except KeyError:
raise _errors.UnsupportedFormatError(content_type) | /sagemaker_containers-2.8.6.post0.tar.gz/sagemaker_containers-2.8.6.post0/src/sagemaker_containers/_encoders.py | 0.90619 | 0.555134 | _encoders.py | pypi |
"""Placeholder docstring"""
from __future__ import absolute_import
import os
import socket
import sys
from typing import Dict, List # noqa ignore=F401 imported but unused
from retrying import retry
from sagemaker_containers import _entry_point_type, _env, _files, _modules, _runner
def run(
uri,
user_entry_point,
args,
env_vars=None,
wait=True,
capture_error=False,
runner=_runner.ProcessRunnerType,
extra_opts=None,
):
# type: (str, str, List[str], Dict[str, str], bool, bool, _runner.RunnerType,Dict[str, str]) -> None # pylint: disable=line-too-long # noqa ignore=E501
"""Download, prepare and executes a compressed tar file from S3 or provided directory as an user
entrypoint. Runs the user entry point, passing env_vars as environment variables and args
as command arguments.
If the entry point is:
- A Python package: executes the packages as >>> env_vars python -m module_name + args
- A Python script: executes the script as >>> env_vars python module_name + args
- Any other: executes the command as >>> env_vars /bin/sh -c ./module_name + args
Example:
>>>import sagemaker_containers
>>>from sagemaker_containers.beta.framework import entry_point
>>>env = sagemaker_containers.training_env()
{'channel-input-dirs': {'training': '/opt/ml/input/training'},
'model_dir': '/opt/ml/model', ...}
>>>hyperparameters = env.hyperparameters
{'batch-size': 128, 'model_dir': '/opt/ml/model'}
>>>args = mapping.to_cmd_args(hyperparameters)
['--batch-size', '128', '--model_dir', '/opt/ml/model']
>>>env_vars = mapping.to_env_vars()
['SAGEMAKER_CHANNELS':'training', 'SAGEMAKER_CHANNEL_TRAINING':'/opt/ml/input/training',
'MODEL_DIR':'/opt/ml/model', ...}
>>>entry_point.run('user_script', args, env_vars)
SAGEMAKER_CHANNELS=training SAGEMAKER_CHANNEL_TRAINING=/opt/ml/input/training \
SAGEMAKER_MODEL_DIR=/opt/ml/model python -m user_script --batch-size 128
--model_dir /opt/ml/model
Args:
uri (str): the location of the module.
user_entry_point (str): name of the user provided entry point
args (list): A list of program arguments.
env_vars (dict): A map containing the environment variables to be written (default: None).
wait (bool): If the user entry point should be run to completion before this method returns
(default: True).
capture_error (bool): Default false. If True, the running process captures the
stderr, and appends it to the returned Exception message in case of errors.
runner (sagemaker_containers.beta.framework.runner.RunnerType): the type of runner object to
be created (default: sagemaker_containers.beta.framework.runner.ProcessRunnerType).
extra_opts (dict): Additional options for running the entry point (default: None).
Currently, this only applies for MPI.
Returns:
sagemaker_containers.beta.framework.process.ProcessRunner: the runner object responsible for
executing the entry point.
"""
env_vars = env_vars or {}
env_vars = env_vars.copy()
_files.download_and_extract(uri, _env.code_dir)
install(user_entry_point, _env.code_dir, capture_error)
_env.write_env_vars(env_vars)
_wait_hostname_resolution()
return _runner.get(runner, user_entry_point, args, env_vars, extra_opts).run(
wait, capture_error
)
def install(name, dst, capture_error=False):
"""Install the user provided entry point to be executed as follow:
- add the path to sys path
- if the user entry point is a command, gives exec permissions to the script
Args:
name (str): name of the script or module.
dst (str): path to directory with the script or module.
capture_error (bool): Default false. If True, the running process captures the
stderr, and appends it to the returned Exception message in case of errors.
"""
if dst not in sys.path:
sys.path.insert(0, dst)
entrypoint_type = _entry_point_type.get(dst, name)
if entrypoint_type is _entry_point_type.PYTHON_PACKAGE:
_modules.install(dst, capture_error)
if entrypoint_type is _entry_point_type.COMMAND:
os.chmod(os.path.join(dst, name), 0o777)  # 511 decimal == 0o777: make the script executable
@retry(stop_max_delay=1000 * 60 * 15, wait_exponential_multiplier=100, wait_exponential_max=30000)
def _dns_lookup(host):
""" Retrying dns lookup on host """
return socket.gethostbyname(host)
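# The decorator above retries with exponential backoff (100 ms base, each
# wait capped at 30 s, up to 15 minutes overall). A dependency-free sketch
# of that policy, using hypothetical names rather than the retrying library:

```python
import time

def retry_call(fn, max_delay=900.0, base=0.1, max_wait=30.0, sleep=time.sleep):
    """Call fn, retrying with capped exponential backoff until max_delay elapses."""
    attempt, elapsed = 0, 0.0
    while True:
        try:
            return fn()
        except Exception:
            wait = min(base * (2 ** attempt), max_wait)
            if elapsed + wait > max_delay:
                raise  # give up: total retry budget exhausted
            sleep(wait)
            elapsed += wait
            attempt += 1

calls = {'n': 0}
def flaky_lookup():
    # fails twice, then succeeds, like a host resolving as the cluster boots
    calls['n'] += 1
    if calls['n'] < 3:
        raise OSError("name resolution not ready")
    return "10.0.0.1"

result = retry_call(flaky_lookup, sleep=lambda s: None)
print(result)
```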
def _wait_hostname_resolution():
"""Wait for the hostname resolution of the container. This is known behavior as the cluster
boots up and has been documented here:
https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo-running-container.html#your-algorithms-training-algo-running-container-dist-training
"""
for host in _env.TrainingEnv().hosts:
_dns_lookup(host) | /sagemaker_containers-2.8.6.post0.tar.gz/sagemaker_containers-2.8.6.post0/src/sagemaker_containers/entry_point.py | 0.640636 | 0.154887 | entry_point.py | pypi |