| INSTRUCTION | RESPONSE |
|---|---|
Quality function using muparser to generate new Quality for every vertex.<br>It's possible to use the following per-vertex variables in the expression:<br>x, y, z, nx, ny, nz (normal), r, g, b (color), q (quality), rad, vi,<br>and all custom <i>vertex attributes</i> already defined by user. | def vq_function(script, function='vi', normalize=False, color=False):
""":Quality function using muparser to generate new Quality for every vertex<br>It's possible to use the following per-vertex variables in the expression:<br>x, y, z, nx, ny, nz (normal), r, g, b (color), q (quality), rad, vi, <br>and all custom... |
Rainbow colored voronoi quatrefoil (3,4) torus knot | def quatrefoil():
""" Rainbow colored voronoi quatrefoil (3,4) torus knot """
start_time = time.time()
os.chdir(THIS_SCRIPTPATH)
#ml_version = '1.3.4BETA'
ml_version = '2016.12'
# Add meshlabserver directory to OS PATH; omit this if it is already in
# your PATH
meshlabserver_path = 'C:... |
Invert faces orientation, flipping the normals of the mesh. | def flip(script, force_flip=False, selected=False):
""" Invert faces orientation, flipping the normals of the mesh.
If requested, it tries to guess the right orientation; mainly it decides to
flip all the faces if the minimum/maximum vertices do not have
outward-pointing normals for a few directions. Works w... |
Compute the normals of the vertices of a mesh without exploiting the triangle connectivity, useful for datasets with no faces. | def point_sets(script, neighbors=10, smooth_iteration=0, flip=False,
viewpoint_pos=(0.0, 0.0, 0.0)):
""" Compute the normals of the vertices of a mesh without exploiting the
triangle connectivity, useful for datasets with no faces.
Args:
script: the FilterScript object or script f... |
Laplacian smooth of the mesh: for each vertex it calculates the average position of the nearest vertices. | def laplacian(script, iterations=1, boundary=True, cotangent_weight=True,
selected=False):
""" Laplacian smooth of the mesh: for each vertex it calculates the average
position with nearest vertex
Args:
script: the FilterScript object or script filename to write
the fil... |
The lambda & mu Taubin smoothing makes two steps of smoothing, forward and backward, for each iteration. | def taubin(script, iterations=10, t_lambda=0.5, t_mu=-0.53, selected=False):
    """ The lambda & mu Taubin smoothing makes two steps of smoothing, forward
    and backward, for each iteration.
Based on:
Gabriel Taubin
"A signal processing approach to fair surface design"
Siggraph 1995
Args:
... |
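The lambda-mu scheme this filter documents can be illustrated outside MeshLab with a tiny self-contained sketch: each iteration applies a shrinking Laplacian step (factor `t_lambda > 0`) followed by an inflating step (factor `t_mu < -t_lambda`), which damps noise without the shrinkage of plain Laplacian smoothing. The closed-2D-polyline setting and all names below are illustrative, not the filter's actual implementation:

```python
import math
import random

def taubin_smooth(points, iterations=10, t_lambda=0.5, t_mu=-0.53):
    """Lambda-mu Taubin smoothing on a closed 2D polyline (illustrative).

    Each iteration applies a shrinking Laplacian step (t_lambda) and then
    an inflating step (t_mu), per Taubin, Siggraph 1995."""
    pts = [tuple(p) for p in points]
    n = len(pts)
    for _ in range(iterations):
        for factor in (t_lambda, t_mu):
            new = []
            for i, (x, y) in enumerate(pts):
                (ax, ay), (bx, by) = pts[i - 1], pts[(i + 1) % n]
                mx, my = (ax + bx) / 2, (ay + by) / 2  # neighbor midpoint
                new.append((x + factor * (mx - x), y + factor * (my - y)))
            pts = new
    return pts

# demo: a noisy circle comes out visibly smoother
random.seed(0)
noisy = [(math.cos(2 * math.pi * k / 100) + random.gauss(0, 0.05),
          math.sin(2 * math.pi * k / 100) + random.gauss(0, 0.05))
         for k in range(100)]
smooth = taubin_smooth(noisy)
```

With the default `t_lambda=0.5`, the highest-frequency (alternating) component is annihilated in a single shrinking step, which is why the defaults tame per-vertex noise so quickly.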
Two Step Smoothing, a feature preserving/enhancing fairing filter. | def twostep(script, iterations=3, angle_threshold=60, normal_steps=20, fit_steps=20,
selected=False):
""" Two Step Smoothing, a feature preserving/enhancing fairing filter.
It is based on a Normal Smoothing step where similar normals are averaged
together and a step where the vertexes are fitte... |
A laplacian smooth that is constrained to move vertices only along the view direction. | def depth(script, iterations=3, viewpoint=(0, 0, 0), selected=False):
""" A laplacian smooth that is constrained to move vertices only along the
view direction.
Args:
script: the FilterScript object or script filename to write
the filter to.
iterations (int): The number of t... |
Measure the axis aligned bounding box (aabb) of a mesh in multiple coordinate systems. | def measure_aabb(fbasename=None, log=None, coord_system='CARTESIAN'):
""" Measure the axis aligned bounding box (aabb) of a mesh
in multiple coordinate systems.
Args:
fbasename (str): filename of input model
log (str): filename of log file
coord_system (enum in ['CARTESIAN', 'CYLIND... |
Measure a cross section of a mesh. Perform a plane cut in one of the major axes (X, Y, Z). If you want to cut on a different plane you will need to rotate the model in place, perform the cut, and rotate it back. Args: fbasename (str): filename of input model log (str): filename of log file axis (str): axis perpendic... | def measure_section(fbasename=None, log=None, axis='z', offset=0.0,
rotate_x_angle=None, ml_version=ml_version):
"""Measure a cross section of a mesh
Perform a plane cut in one of the major axes (X, Y, Z). If you want to cut on
a different plane you will need to rotate the model in ... |
Sort separate line segments in obj format into a continuous polyline or polylines. NOT FINISHED; DO NOT USE | def polylinesort(fbasename=None, log=None):
"""Sort separate line segments in obj format into a continuous polyline or polylines.
NOT FINISHED; DO NOT USE
Also measures the length of each polyline
Return polyline and polylineMeta (lengths)
"""
fext = os.path.splitext(fbasename)[1][1:].strip()... |
Measures mesh topology | def measure_topology(fbasename=None, log=None, ml_version=ml_version):
"""Measures mesh topology
Args:
fbasename (str): input filename.
log (str): filename to log output
Returns:
dict: dictionary with the following keys:
vert_num (int): number of vertices
ed... |
Measures mesh geometry aabb and topology. | def measure_all(fbasename=None, log=None, ml_version=ml_version):
"""Measures mesh geometry, aabb and topology."""
ml_script1_file = 'TEMP3D_measure_gAndT.mlx'
if ml_version == '1.3.4BETA':
file_out = 'TEMP3D_aabb.xyz'
else:
file_out = None
ml_script1 = mlx.FilterScript(file_in=fbas... |
Measure a dimension of a mesh | def measure_dimension(fbasename=None, log=None, axis1=None, offset1=0.0,
axis2=None, offset2=0.0, ml_version=ml_version):
"""Measure a dimension of a mesh"""
axis1 = axis1.lower()
axis2 = axis2.lower()
ml_script1_file = 'TEMP3D_measure_dimension.mlx'
file_out = 'TEMP3D_measure_... |
This is a helper used by UploadSet.save to provide lowercase extensions for all processed files, to compare with configured extensions in the same case. | def lowercase_ext(filename):
"""
This is a helper used by UploadSet.save to provide lowercase extensions for
all processed files, to compare with configured extensions in the same
case.
.. versionchanged:: 0.1.4
Filenames without extensions are no longer lowercased, only the
extension... |
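The documented behavior (lowercase only the extension, leave extensionless names untouched) can be sketched as follows; this is an illustrative reimplementation, not the Flask-Uploads source:

```python
import os

def lowercase_ext(filename):
    """Lowercases only the extension; names without a dot pass through."""
    if '.' in filename:
        main, ext = os.path.splitext(filename)
        return main + ext.lower()
    return filename

print(lowercase_ext('Photo.JPG'))  # Photo.jpg
print(lowercase_ext('README'))     # README
```

Lowercasing only the extension keeps the user-visible base name intact while still allowing a case-insensitive comparison against the configured extension set.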
By default, Flask will accept uploads to an arbitrary size. While Werkzeug switches uploads from memory to a temporary file when they hit 500 KiB, it's still possible for someone to overload your disk space with a gigantic file. | def patch_request_class(app, size=64 * 1024 * 1024):
"""
By default, Flask will accept uploads to an arbitrary size. While Werkzeug
switches uploads from memory to a temporary file when they hit 500 KiB,
it's still possible for someone to overload your disk space with a
gigantic file.
This patc... |
This is a helper function for configure_uploads that extracts the configuration for a single set. | def config_for_set(uset, app, defaults=None):
"""
This is a helper function for `configure_uploads` that extracts the
configuration for a single set.
:param uset: The upload set.
:param app: The app to load the configuration from.
:param defaults: A dict with keys `url` and `dest` from the
... |
Call this after the app has been configured. It will go through all the upload sets, get their configuration, and store the configuration on the app. It will also register the uploads module if it hasn't been set. This can be called multiple times with different upload sets. | def configure_uploads(app, upload_sets):
"""
Call this after the app has been configured. It will go through all the
upload sets, get their configuration, and store the configuration on the
app. It will also register the uploads module if it hasn't been set. This
can be called multiple times with di... |
This gets the current configuration. By default, it looks up the current application and gets the configuration from there. But if you don't want to go to the full effort of setting an application, or it's otherwise outside of a request context, set the _config attribute to an UploadConfiguration instance, then set it back... | def config(self):
"""
This gets the current configuration. By default, it looks up the
current application and gets the configuration from there. But if you
don't want to go to the full effort of setting an application, or it's
otherwise outside of a request context, set the `_co... |
This function gets the URL a file uploaded to this set would be accessed at. It doesn't check whether said file exists. | def url(self, filename):
"""
This function gets the URL a file uploaded to this set would be
accessed at. It doesn't check whether said file exists.
:param filename: The filename to return the URL for.
"""
base = self.config.base_url
if base is None:
... |
This returns the absolute path of a file uploaded to this set. It doesn't actually check whether said file exists. | def path(self, filename, folder=None):
"""
This returns the absolute path of a file uploaded to this set. It
doesn't actually check whether said file exists.
:param filename: The filename to return the path for.
:param folder: The subfolder within the upload set previously used
... |
This determines whether a specific extension is allowed. It is called by file_allowed, so if you override that but still want to check extensions, call back into this. | def extension_allowed(self, ext):
"""
This determines whether a specific extension is allowed. It is called
by `file_allowed`, so if you override that but still want to check
extensions, call back into this.
:param ext: The extension to check, without the dot.
"""
... |
This saves a werkzeug.FileStorage into this upload set. If the upload is not allowed, an UploadNotAllowed error will be raised. Otherwise the file will be saved and its name (including the folder) will be returned. | def save(self, storage, folder=None, name=None):
"""
This saves a `werkzeug.FileStorage` into this upload set. If the
upload is not allowed, an `UploadNotAllowed` error will be raised.
Otherwise, the file will be saved and its name (including the folder)
will be returned.
... |
If a file with the selected name already exists in the target folder, this method is called to resolve the conflict. It should return a new basename for the file. | def resolve_conflict(self, target_folder, basename):
"""
If a file with the selected name already exists in the target folder,
this method is called to resolve the conflict. It should return a new
basename for the file.
The default implementation splits the name and extension an... |
Returns actual version specified in filename. | def get_vprof_version(filename):
"""Returns actual version specified in filename."""
with open(filename) as src_file:
version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
src_file.read(), re.M)
if version_match:
return version_match.group... |
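The version-extraction regex can be exercised on a string directly, without the file I/O:

```python
import re

source = "__version__ = '1.2.3'\nother = 1\n"
# re.M makes ^ match at the start of each line, not just the string
match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", source, re.M)
print(match.group(1))  # 1.2.3
```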
Removes duplicate objects. | def _remove_duplicates(objects):
"""Removes duplicate objects.
http://www.peterbe.com/plog/uniqifiers-benchmark.
"""
seen, uniq = set(), []
for obj in objects:
obj_id = id(obj)
if obj_id in seen:
continue
seen.add(obj_id)
uniq.append(obj)
return uniq |
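Note the dedup key is id(), i.e. object identity: two equal but distinct objects both survive, while repeated references to the same object collapse to one. A quick demonstration of the same technique:

```python
def remove_duplicates(objects):
    """Keeps the first occurrence of each object identity, preserving order."""
    seen, uniq = set(), []
    for obj in objects:
        if id(obj) in seen:
            continue
        seen.add(id(obj))
        uniq.append(obj)
    return uniq

a = [1, 2, 3]
result = remove_duplicates([a, a, [1, 2, 3]])
print(len(result))  # 2: second reference to `a` dropped, equal copy kept
```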
Returns count difference in two collections of Python objects. | def _get_obj_count_difference(objs1, objs2):
"""Returns count difference in two collections of Python objects."""
clean_obj_list1 = _process_in_memory_objects(objs1)
clean_obj_list2 = _process_in_memory_objects(objs2)
obj_count_1 = _get_object_count_by_type(clean_obj_list1)
obj_count_2 = _get_object... |
Formats object count. | def _format_obj_count(objects):
"""Formats object count."""
result = []
regex = re.compile(r'<(?P<type>\w+) \'(?P<name>\S+)\'>')
for obj_type, obj_count in objects.items():
if obj_count != 0:
match = re.findall(regex, repr(obj_type))
if match:
obj_type, ob... |
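The regex here parses type reprs like "<class 'int'>" into a (type, name) pair; it can be checked standalone:

```python
import re

regex = re.compile(r"<(?P<type>\w+) '(?P<name>\S+)'>")
print(re.findall(regex, repr(int)))  # [('class', 'int')]
```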
Checks memory usage when a 'line' event occurs. | def _trace_memory_usage(self, frame, event, arg): #pylint: disable=unused-argument
    """Checks memory usage when a 'line' event occurs."""
if event == 'line' and frame.f_code.co_filename in self.target_modules:
self._events_list.append(
(frame.f_lineno, self._process.memory_info(... |
Returns processed memory usage. | def code_events(self):
"""Returns processed memory usage."""
if self._resulting_events:
return self._resulting_events
for i, (lineno, mem, func, fname) in enumerate(self._events_list):
mem_in_mb = float(mem - self.mem_overhead) / _BYTES_IN_MB
if (self._resulti... |
Returns all objects that are considered a profiler overhead. Objects are hardcoded for convenience. | def obj_overhead(self):
"""Returns all objects that are considered a profiler overhead.
Objects are hardcoded for convenience.
"""
overhead = [
self,
self._resulting_events,
self._events_list,
self._process
]
overhead_count ... |
Returns memory overhead. | def compute_mem_overhead(self):
"""Returns memory overhead."""
self.mem_overhead = (self._process.memory_info().rss -
builtins.initial_rss_size) |
Returns memory stats for a package. | def profile_package(self):
"""Returns memory stats for a package."""
target_modules = base_profiler.get_pkg_module_names(self._run_object)
try:
with _CodeEventsTracker(target_modules) as prof:
prof.compute_mem_overhead()
runpy.run_path(self._run_object... |
Returns memory stats for a module. | def profile_module(self):
"""Returns memory stats for a module."""
target_modules = {self._run_object}
try:
with open(self._run_object, 'rb') as srcfile,\
_CodeEventsTracker(target_modules) as prof:
code = compile(srcfile.read(), self._run_object, 'exe... |
Returns memory stats for a function. | def profile_function(self):
"""Returns memory stats for a function."""
target_modules = {self._run_object.__code__.co_filename}
with _CodeEventsTracker(target_modules) as prof:
prof.compute_mem_overhead()
result = self._run_object(*self._run_args, **self._run_kwargs)
... |
Collects memory stats for specified Python program. | def run(self):
"""Collects memory stats for specified Python program."""
existing_objects = _get_in_memory_objects()
prof, result = self.profile()
new_objects = _get_in_memory_objects()
new_obj_count = _get_obj_count_difference(new_objects, existing_objects)
result_obj_c... |
Returns module filenames from package. | def get_pkg_module_names(package_path):
"""Returns module filenames from package.
Args:
package_path: Path to Python package.
Returns:
A set of module filenames.
"""
module_names = set()
for fobj, modname, _ in pkgutil.iter_modules(path=[package_path]):
filename = os.pat... |
Runs function in separate process. | def run_in_separate_process(func, *args, **kwargs):
"""Runs function in separate process.
This function is used instead of a decorator, since Python multiprocessing
module can't serialize decorated function on all platforms.
"""
manager = multiprocessing.Manager()
manager_dict = manager.dict()
... |
Determines run object type. | def get_run_object_type(run_object):
"""Determines run object type."""
if isinstance(run_object, tuple):
return 'function'
run_object, _, _ = run_object.partition(' ')
if os.path.isdir(run_object):
return 'package'
return 'module' |
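The dispatch rule is simple: a tuple means a function, a string whose first token is a directory means a package, anything else a module. It is easy to check in isolation:

```python
import os

def get_run_object_type(run_object):
    """tuple -> function; first token is a directory -> package; else module."""
    if isinstance(run_object, tuple):
        return 'function'
    run_object, _, _ = run_object.partition(' ')
    if os.path.isdir(run_object):
        return 'package'
    return 'module'

print(get_run_object_type((len, (), {})))       # function
print(get_run_object_type('script.py --flag'))  # module
```

The partition(' ') strips trailing CLI arguments before the directory check, so 'mypkg --flag' is still classified by the path 'mypkg'.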
Initializes profiler with a module. | def init_module(self, run_object):
"""Initializes profiler with a module."""
self.profile = self.profile_module
self._run_object, _, self._run_args = run_object.partition(' ')
self._object_name = '%s (module)' % self._run_object
self._globs = {
'__file__': self._run_o... |
Initializes profiler with a package. | def init_package(self, run_object):
"""Initializes profiler with a package."""
self.profile = self.profile_package
self._run_object, _, self._run_args = run_object.partition(' ')
self._object_name = '%s (package)' % self._run_object
self._replace_sysargs() |
Initializes profiler with a function. | def init_function(self, run_object):
"""Initializes profiler with a function."""
self.profile = self.profile_function
self._run_object, self._run_args, self._run_kwargs = run_object
filename = inspect.getsourcefile(self._run_object)
self._object_name = '%s @ %s (function)' % (
... |
Replaces sys.argv with proper args to pass to script. | def _replace_sysargs(self):
"""Replaces sys.argv with proper args to pass to script."""
sys.argv[:] = [self._run_object]
if self._run_args:
sys.argv += self._run_args.split() |
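The same argv surgery, pulled out of the class for illustration — sys.argv is rewritten in place so the profiled script sees its own command line:

```python
import sys

def replace_sysargs(run_object, run_args):
    """Rewrites sys.argv in place; run_args is a single space-separated string."""
    sys.argv[:] = [run_object]
    if run_args:
        sys.argv += run_args.split()

replace_sysargs('script.py', '--verbose 3')
print(sys.argv)  # ['script.py', '--verbose', '3']
```

The slice assignment `sys.argv[:] = ...` mutates the existing list object, so any code that captured a reference to sys.argv sees the new values too.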
Samples current stack and adds result in self._stats. | def sample(self, signum, frame): #pylint: disable=unused-argument
"""Samples current stack and adds result in self._stats.
Args:
signum: Signal that activates handler.
frame: Frame on top of the stack when signal is handled.
"""
stack = []
while frame an... |
Inserts stack into the call tree. | def _insert_stack(stack, sample_count, call_tree):
"""Inserts stack into the call tree.
Args:
stack: Call stack.
sample_count: Sample count of call stack.
call_tree: Call tree.
"""
curr_level = call_tree
for func in stack:
next_lev... |
Counts and fills sample counts inside call tree. | def _fill_sample_count(self, node):
"""Counts and fills sample counts inside call tree."""
node['sampleCount'] += sum(
self._fill_sample_count(child) for child in node['children'])
return node['sampleCount'] |
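On a toy call tree, the recursion propagates child sample counts up into each ancestor:

```python
def fill_sample_count(node):
    """Adds each subtree's samples into its root; returns the subtree total."""
    node['sampleCount'] += sum(
        fill_sample_count(child) for child in node['children'])
    return node['sampleCount']

tree = {'sampleCount': 0, 'children': [
    {'sampleCount': 2, 'children': []},
    {'sampleCount': 1, 'children': [{'sampleCount': 3, 'children': []}]},
]}
print(fill_sample_count(tree))  # 6: 0 + 2 + (1 + 3)
```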
Reformats call tree for the UI. | def _format_tree(self, node, total_samples):
"""Reformats call tree for the UI."""
funcname, filename, _ = node['stack']
sample_percent = self._get_percentage(
node['sampleCount'], total_samples)
color_hash = base_profiler.hash_name('%s @ %s' % (funcname, filename))
r... |
Returns call tree. | def call_tree(self):
"""Returns call tree."""
call_tree = {'stack': 'base', 'sampleCount': 0, 'children': []}
for stack, sample_count in self._stats.items():
self._insert_stack(reversed(stack), sample_count, call_tree)
self._fill_sample_count(call_tree)
if not call_tr... |
Runs statistical profiler on a package. | def _profile_package(self):
"""Runs statistical profiler on a package."""
with _StatProfiler() as prof:
prof.base_frame = inspect.currentframe()
try:
runpy.run_path(self._run_object, run_name='__main__')
except SystemExit:
pass
... |
Runs statistical profiler on a module. | def _profile_module(self):
"""Runs statistical profiler on a module."""
with open(self._run_object, 'rb') as srcfile, _StatProfiler() as prof:
code = compile(srcfile.read(), self._run_object, 'exec')
prof.base_frame = inspect.currentframe()
try:
exec(c... |
Runs statistical profiler on a function. | def profile_function(self):
"""Runs statistical profiler on a function."""
with _StatProfiler() as prof:
result = self._run_object(*self._run_args, **self._run_kwargs)
call_tree = prof.call_tree
return {
'objectName': self._object_name,
'sampleInterva... |
Processes collected stats for UI. | def _transform_stats(prof):
"""Processes collected stats for UI."""
records = []
for info, params in prof.stats.items():
filename, lineno, funcname = info
cum_calls, num_calls, time_per_call, cum_time, _ = params
if prof.total_tt == 0:
percenta... |
Runs cProfile on a package. | def _profile_package(self):
"""Runs cProfile on a package."""
prof = cProfile.Profile()
prof.enable()
try:
runpy.run_path(self._run_object, run_name='__main__')
except SystemExit:
pass
prof.disable()
prof_stats = pstats.Stats(prof)
... |
Runs cProfile on a module. | def _profile_module(self):
"""Runs cProfile on a module."""
prof = cProfile.Profile()
try:
with open(self._run_object, 'rb') as srcfile:
code = compile(srcfile.read(), self._run_object, 'exec')
prof.runctx(code, self._globs, None)
except SystemExit... |
Runs cProfile on a function. | def profile_function(self):
"""Runs cProfile on a function."""
prof = cProfile.Profile()
prof.enable()
result = self._run_object(*self._run_args, **self._run_kwargs)
prof.disable()
prof_stats = pstats.Stats(prof)
prof_stats.calc_callees()
return {
... |
Initializes DB. | def init_db():
"""Initializes DB."""
with contextlib.closing(connect_to_db()) as db:
db.cursor().executescript(DB_SCHEMA)
db.commit() |
Returns all existing guestbook records. | def show_guestbook():
"""Returns all existing guestbook records."""
cursor = flask.g.db.execute(
'SELECT name, message FROM entry ORDER BY id DESC;')
entries = [{'name': row[0], 'message': row[1]} for row in cursor.fetchall()]
return jinja2.Template(LAYOUT).render(entries=entries) |
Adds single guestbook record. | def add_entry():
"""Adds single guestbook record."""
name, msg = flask.request.form['name'], flask.request.form['message']
flask.g.db.execute(
'INSERT INTO entry (name, message) VALUES (?, ?)', (name, msg))
flask.g.db.commit()
return flask.redirect('/') |
Profiler handler. | def profiler_handler(uri):
"""Profiler handler."""
# HTTP method should be GET.
if uri == 'main':
runner.run(show_guestbook, 'cmhp')
    # In this case the HTTP method should be POST since add_entry uses POST
elif uri == 'add':
runner.run(add_entry, 'cmhp')
return flask.redirect('/') |
Starts HTTP server with specified parameters. | def start(host, port, profiler_stats, dont_start_browser, debug_mode):
"""Starts HTTP server with specified parameters.
Args:
host: Server host name.
port: Server port.
profiler_stats: A dict with collected program stats.
dont_start_browser: Whether to open browser after profili... |
Handles index. html requests. | def _handle_root():
"""Handles index.html requests."""
res_filename = os.path.join(
os.path.dirname(__file__), _PROFILE_HTML)
with io.open(res_filename, 'rb') as res_file:
content = res_file.read()
return content, 'text/html' |
Handles static files requests. | def _handle_other(self):
"""Handles static files requests."""
res_filename = os.path.join(
os.path.dirname(__file__), _STATIC_DIR, self.path[1:])
with io.open(res_filename, 'rb') as res_file:
content = res_file.read()
_, extension = os.path.splitext(self.path)
... |
Handles HTTP GET requests. | def do_GET(self):
"""Handles HTTP GET requests."""
handler = self.uri_map.get(self.path) or self._handle_other
content, content_type = handler()
compressed_content = gzip.compress(content)
self._send_response(
200, headers=(('Content-type', '%s; charset=utf-8' % conte... |
Handles HTTP POST requests. | def do_POST(self):
"""Handles HTTP POST requests."""
post_data = self.rfile.read(int(self.headers['Content-Length']))
json_data = gzip.decompress(post_data)
self._profile_json.update(json.loads(json_data.decode('utf-8')))
self._send_response(
200, headers=(('Content-t... |
Sends HTTP response code, message, and headers. | def _send_response(self, http_code, message=None, headers=None):
"""Sends HTTP response code, message and headers."""
self.send_response(http_code, message)
if headers:
for header in headers:
self.send_header(*header)
self.end_headers() |
Main function of the module. | def main():
"""Main function of the module."""
parser = argparse.ArgumentParser(
prog=_PROGRAN_NAME, description=_MODULE_DESC,
formatter_class=argparse.RawTextHelpFormatter)
launch_modes = parser.add_mutually_exclusive_group(required=True)
launch_modes.add_argument('-r', '--remote', dest... |
Checks whether path belongs to standard library or installed modules. | def check_standard_dir(module_path):
"""Checks whether path belongs to standard library or installed modules."""
if 'site-packages' in module_path:
return True
for stdlib_path in _STDLIB_PATHS:
if fnmatch.fnmatchcase(module_path, stdlib_path + '*'):
return True
return False |
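The check combines a substring test for site-packages with fnmatch prefix matching against known stdlib paths. A standalone sketch (the STDLIB_PATHS value here is hypothetical; in vprof, _STDLIB_PATHS is module-level state):

```python
import fnmatch

# hypothetical stdlib prefix for the demo
STDLIB_PATHS = ['/usr/lib/python3.11']

def check_standard_dir(module_path):
    """True for site-packages paths or anything under a known stdlib prefix."""
    if 'site-packages' in module_path:
        return True
    for stdlib_path in STDLIB_PATHS:
        if fnmatch.fnmatchcase(module_path, stdlib_path + '*'):
            return True
    return False

print(check_standard_dir('/venv/lib/site-packages/requests/api.py'))  # True
print(check_standard_dir('/usr/lib/python3.11/json/decoder.py'))      # True
print(check_standard_dir('/home/me/project/app.py'))                  # False
```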
Records line execution time. | def record_line(self, frame, event, arg): # pylint: disable=unused-argument
"""Records line execution time."""
if event == 'line':
if self.prev_timestamp:
runtime = time.time() - self.prev_timestamp
self.lines.append([self.prev_path, self.prev_lineno, runtime... |
Filters code from standard library from self.lines. | def lines_without_stdlib(self):
"""Filters code from standard library from self.lines."""
prev_line = None
current_module_path = inspect.getabsfile(inspect.currentframe())
for module_path, lineno, runtime in self.lines:
module_abspath = os.path.abspath(module_path)
... |
Fills code heatmap and execution count dictionaries. | def fill_heatmap(self):
"""Fills code heatmap and execution count dictionaries."""
for module_path, lineno, runtime in self.lines_without_stdlib:
self._execution_count[module_path][lineno] += 1
self._heatmap[module_path][lineno] += runtime |
Calculates skip map for large sources. Skip map is a list of tuples where the first element is a line number and the second is the length of the skip region: [(1, 10), (15, 10)] means skipping 10 lines after line 1 and 10 lines after line 15. | def _calc_skips(self, heatmap, num_lines):
"""Calculates skip map for large sources.
Skip map is a list of tuples where first element of tuple is line
number and second is length of the skip region:
[(1, 10), (15, 10)] means skipping 10 lines after line 1 and
10 lines aft... |
Skips lines in src_code specified by skip map. | def _skip_lines(src_code, skip_map):
"""Skips lines in src_code specified by skip map."""
if not skip_map:
return [['line', j + 1, l] for j, l in enumerate(src_code)]
code_with_skips, i = [], 0
for line, length in skip_map:
code_with_skips.extend(
... |
Calculates heatmap for package. | def _profile_package(self):
"""Calculates heatmap for package."""
with _CodeHeatmapCalculator() as prof:
try:
runpy.run_path(self._run_object, run_name='__main__')
except SystemExit:
pass
heatmaps = []
for filename, heatmap in prof... |
Formats heatmap for UI. | def _format_heatmap(self, filename, heatmap, execution_count):
"""Formats heatmap for UI."""
with open(filename) as src_file:
file_source = src_file.read().split('\n')
skip_map = self._calc_skips(heatmap, len(file_source))
        run_time = sum(heatmap.values())... |
Calculates heatmap for module. | def _profile_module(self):
"""Calculates heatmap for module."""
with open(self._run_object, 'r') as srcfile:
src_code = srcfile.read()
code = compile(src_code, self._run_object, 'exec')
try:
with _CodeHeatmapCalculator() as prof:
exec(code, sel... |
Calculates heatmap for function. | def profile_function(self):
"""Calculates heatmap for function."""
with _CodeHeatmapCalculator() as prof:
result = self._run_object(*self._run_args, **self._run_kwargs)
code_lines, start_line = inspect.getsourcelines(self._run_object)
source_lines = []
for line in co... |
Runs profilers on run_object. | def run_profilers(run_object, prof_config, verbose=False):
"""Runs profilers on run_object.
Args:
run_object: An object (string or tuple) for profiling.
prof_config: A string with profilers configuration.
verbose: True if info about running profilers should be shown.
Returns:
... |
Runs profilers on a function. | def run(func, options, args=(), kwargs={}, host='localhost', port=8000): # pylint: disable=dangerous-default-value
"""Runs profilers on a function.
Args:
func: A Python function.
options: A string with profilers configuration (i.e. 'cmh').
args: func non-keyword arguments.
kwar... |
Return probability estimates for the RDD containing test vector X. | def predict_proba(self, X):
"""
Return probability estimates for the RDD containing test vector X.
Parameters
----------
X : RDD containing array-like items, shape = [m_samples, n_features]
Returns
-------
    C : RDD with array-like items, shape = [n_sampl... |
Return log - probability estimates for the RDD containing the test vector X. | def predict_log_proba(self, X):
"""
Return log-probability estimates for the RDD containing the
test vector X.
Parameters
----------
X : RDD containing array-like items, shape = [m_samples, n_features]
Returns
-------
C : RDD with array-like item... |
Fit Gaussian Naive Bayes according to X, y | def fit(self, Z, classes=None):
"""Fit Gaussian Naive Bayes according to X, y
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : arr... |
TODO fulibacsi fix docstring. Fit Multinomial Naive Bayes according to (X, y) pair which is zipped into TupleRDD Z. | def fit(self, Z, classes=None):
"""
TODO fulibacsi fix docstring
Fit Multinomial Naive Bayes according to (X,y) pair
which is zipped into TupleRDD Z.
Parameters
----------
Z : TupleRDD containing X [array-like, shape (m_samples, n_features)]
and y [ar... |
Create vocabulary | def _init_vocab(self, analyzed_docs):
"""Create vocabulary
"""
class SetAccum(AccumulatorParam):
def zero(self, initialValue):
return set(initialValue)
def addInPlace(self, v1, v2):
v1 |= v2
return v1
if not self.... |
Create sparse feature matrix and vocabulary where fixed_vocab=False | def _count_vocab(self, analyzed_docs):
"""Create sparse feature matrix, and vocabulary where fixed_vocab=False
"""
vocabulary = self.vocabulary_
j_indices = _make_int_array()
indptr = _make_int_array()
indptr.append(0)
for doc in analyzed_docs:
for fea... |
Sort features by name | def _sort_features(self, vocabulary):
"""Sort features by name
Returns a reordered matrix and modifies the vocabulary in place
"""
sorted_features = sorted(six.iteritems(vocabulary))
map_index = np.empty(len(sorted_features), dtype=np.int32)
for new_val, (term, old_val) ... |
Remove too rare or too common features. | def _limit_features(self, X, vocabulary, high=None, low=None,
limit=None):
"""Remove too rare or too common features.
Prune features that are non zero in more samples than high or less
documents than low, modifying the vocabulary, and restricting it to
at most th... |
Learn the vocabulary dictionary and return term - document matrix. | def fit_transform(self, Z):
"""Learn the vocabulary dictionary and return term-document matrix.
This is equivalent to fit followed by transform, but more efficiently
implemented.
Parameters
----------
Z : iterable or DictRDD with column 'X'
An iterable of ra... |
Transform documents to document - term matrix. | def transform(self, Z):
"""Transform documents to document-term matrix.
Extract token counts out of raw text documents using the vocabulary
fitted with fit or the one provided to the constructor.
Parameters
----------
raw_documents : iterable
An iterable whi... |
Transform an ArrayRDD (or DictRDD with column 'X') containing a sequence of documents to a document-term matrix. | def transform(self, Z):
"""Transform an ArrayRDD (or DictRDD with column 'X') containing
sequence of documents to a document-term matrix.
Parameters
----------
Z : ArrayRDD or DictRDD with raw text documents
Samples. Each sample must be a text document (either bytes ... |
Learn the idf vector (global term weights) | def fit(self, Z):
"""Learn the idf vector (global term weights)
Parameters
----------
Z : ArrayRDD or DictRDD containing (sparse matrices|ndarray)
a matrix of term/token counts
Returns
-------
self : TfidfVectorizer
"""
X = Z[:, 'X']... |
Compute the mean and std to be used for later scaling. Parameters ---------- Z: DictRDD containing (X, y) pairs X - Training vector. {array-like, sparse matrix}, shape [n_samples, n_features] The data used to compute the mean and standard deviation used for later scaling along the features axis. y - Target labels P... | def fit(self, Z):
"""Compute the mean and std to be used for later scaling.
Parameters
----------
Z : DictRDD containing (X, y) pairs
X - Training vector.
{array-like, sparse matrix}, shape [n_samples, n_features]
The data used to compute the m... |
Perform standardization by centering and scaling. Parameters ---------- Z: DictRDD containing (X, y) pairs X - Training vector y - Target labels Returns ------- C: DictRDD containing (X, y) pairs X - Training vector, standardized y - Target labels | def transform(self, Z):
"""Perform standardization by centering and scaling
Parameters
----------
Z : DictRDD containing (X, y) pairs
X - Training vector
y - Target labels
Returns
-------
C : DictRDD containing (X, y) pairs
X - ... |
Convert to equivalent StandardScaler | def to_scikit(self):
"""
Convert to equivalent StandardScaler
"""
scaler = StandardScaler(with_mean=self.with_mean,
with_std=self.with_std,
copy=self.copy)
scaler.__dict__ = self.__dict__
return scaler |
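The conversion works by transplanting the fitted attributes wholesale via __dict__ onto a freshly built scikit-learn object. The mechanism, with stand-in classes instead of sklearn's StandardScaler (all names below are illustrative):

```python
class DistributedScaler:
    """Stand-in for the Spark-side scaler holding fitted state."""
    def __init__(self):
        self.mean_, self.scale_ = 2.0, 0.5

class LocalScaler:
    """Stand-in for a local scikit-learn-style scaler."""
    def transform(self, x):
        return (x - self.mean_) / self.scale_

def to_scikit(dist):
    # same trick as above: the local object adopts all fitted attributes
    local = LocalScaler()
    local.__dict__ = dist.__dict__
    return local

print(to_scikit(DistributedScaler()).transform(3.0))  # (3.0 - 2.0) / 0.5 = 2.0
```

Assigning `__dict__` directly shares (not copies) the attribute dictionary, so later mutation of the distributed object would be visible on the converted one as well.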
Wraps a Scikit-learn linear model's fit method to use with RDD input. | def _spark_fit(self, cls, Z, *args, **kwargs):
"""Wraps a Scikit-learn Linear model's fit method to use with RDD
input.
Parameters
----------
cls : class object
The sklearn linear model's class to wrap.
Z : TupleRDD or DictRDD
The distributed trai... |
Wraps a Scikit-learn linear model's predict method to use with RDD input. | def _spark_predict(self, cls, X, *args, **kwargs):
"""Wraps a Scikit-learn Linear model's predict method to use with RDD
input.
Parameters
----------
cls : class object
The sklearn linear model's class to wrap.
Z : ArrayRDD
The distributed data to... |
Fit linear model. | def fit(self, Z):
"""
Fit linear model.
Parameters
----------
Z : DictRDD with (X, y) values
X containing numpy array or sparse matrix - The training data
y containing the target values
Returns
-------
self : returns an instance o... |
Fit all the transforms one after the other and transform the data, then fit the transformed data using the final estimator. | def fit(self, Z, **fit_params):
"""Fit all the transforms one after the other and transform the
data, then fit the transformed data using the final estimator.
Parameters
----------
Z : ArrayRDD, TupleRDD or DictRDD
Input data in blocked distributed format.
R... |
Fit all the transforms one after the other and transform the data, then use fit_transform on transformed data using the final estimator. | def fit_transform(self, Z, **fit_params):
"""Fit all the transforms one after the other and transform the
data, then use fit_transform on transformed data using the final
estimator."""
Zt, fit_params = self._pre_transform(Z, **fit_params)
if hasattr(self.steps[-1][-1], 'fit_trans... |
Applies transforms to the data and the score method of the final estimator. Valid only if the final estimator implements score. | def score(self, Z):
"""Applies transforms to the data, and the score method of the
final estimator. Valid only if the final estimator implements
score."""
Zt = Z
for name, transform in self.steps[:-1]:
Zt = transform.transform(Zt)
return self.steps[-1][-1].sco... |
TODO: rewrite docstring. Fit all transformers using X. Parameters ---------- X: array-like or sparse matrix, shape (n_samples, n_features) Input data used to fit transformers. | def fit(self, Z, **fit_params):
"""TODO: rewrite docstring
Fit all transformers using X.
Parameters
----------
X : array-like or sparse matrix, shape (n_samples, n_features)
Input data, used to fit transformers.
"""
fit_params_steps = dict((step, {})
... |