Columns:
func_code_string — string (lengths 52 to 1.94M)
func_documentation_string — string (lengths 1 to 47.2k)
def query_directory(self, pattern, file_information_class, flags=None, file_index=0, max_output=65536, send=True): query = SMB2QueryDirectoryRequest() query['file_information_class'] = file_information_class query['flags'] = flags query['file_index'] = fi...
Run a Query/Find on an opened directory based on the params passed in. Supports an out-of-band send function: call this function with send=False to return a tuple of (SMB2QueryDirectoryRequest, receive_func) instead of sending the request and waiting for the response. The receive_func ...
def close(self, get_attributes=False, send=True): # it is already closed and this isn't for an out of band request if not self._connected and send: return close = SMB2CloseRequest() close['file_id'] = self.file_id if get_attributes: close['flags']...
Closes an opened file. Supports an out-of-band send function: call this function with send=False to return a tuple of (SMB2CloseRequest, receive_func) instead of sending the request and waiting for the response. The receive_func can be used to get the response from the server by passin...
def from_string(self, sid_string): if not sid_string.startswith("S-"): raise ValueError("A SID string must start with S-") sid_entries = sid_string.split("-") if len(sid_entries) < 3: raise ValueError("A SID string must start with S and contain a " ...
Used to set the structure parameters based on the input string :param sid_string: String of the sid in S-x-x-x-x form
def poll(self): if not self.status["done"]: r = Request("get", self.url + ".json", {"token": self.payload["token"]}) for param, when in MessageRequest.params.iteritems(): self.status[param] = bool(r.answer[param]) self.status[when] = int(r.answer[...
If the message request has a priority of 2, Pushover keeps sending the same notification until the client acknowledges it. Calling the :func:`poll` function fetches the status of the :class:`MessageRequest` object until the notification either expires, is acknowledged by the client, or ...
def sounds(self): if not Pushover._SOUNDS: request = Request("get", SOUND_URL, {"token": self.token}) Pushover._SOUNDS = request.answer["sounds"] return Pushover._SOUNDS
Return a dictionary of the sounds recognized by Pushover that can be used in a notification message.
def verify(self, user, device=None): payload = {"user": user, "token": self.token} if device: payload["device"] = device try: request = Request("post", USER_URL, payload) except RequestError: return None else: return reques...
Verify that the `user` and optional `device` exist. Returns `None` when the user/device does not exist, otherwise a list of the user's devices.
def message(self, user, message, **kwargs): payload = {"message": message, "user": user, "token": self.token} for key, value in kwargs.iteritems(): if key not in Pushover.message_keywords: raise ValueError("{0}: invalid message parameter".format(key)) eli...
Send `message` to the user specified by `user`. It is possible to specify additional properties of the message by passing keyword arguments. The list of valid keywords is ``title, priority, sound, callback, timestamp, url, url_title, device, retry, expire and html`` which are described i...
def glance(self, user, **kwargs): payload = {"user": user, "token": self.token} for key, value in kwargs.iteritems(): if key not in Pushover.glance_keywords: raise ValueError("{0}: invalid glance parameter".format(key)) else: payload[key] ...
Send a glance to the user. The default property is ``text``, as this is used on most glances; however, a valid glance does not require text and can be constructed using any combination of valid keyword properties. The list of valid keywords is ``title, text, subtext, count, percen...
def mswe(w, v): # Ensure inputs are numpy arrays w = np.array(w) v = np.array(v) # Check dimensions if(len(w.shape) != 2): raise TypeError('Estimated coefficients must be in NxM matrix') if(len(v.shape) != 1): raise TypeError('Real coefficients must be in 1d array') # En...
Calculate the mean squared weight error between estimated and true filter coefficients, with respect to iteration. Parameters ---------- v : array-like True coefficients used to generate the desired signal; must be a one-dimensional array. w : array-like Estimated coefficients from a...
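A dependency-free sketch of the computation the docstring describes, using plain lists instead of NumPy arrays:

```python
def mswe(w, v):
    # w: per-iteration coefficient estimates, an N x M list of lists
    # v: true coefficients, a length-M list
    if any(len(row) != len(v) for row in w):
        raise TypeError("Estimated coefficients must be in NxM matrix")
    # Mean squared weight error at each iteration
    return [sum((wi - vi) ** 2 for wi, vi in zip(row, v)) / len(v)
            for row in w]

print(mswe([[0, 0], [1, 2]], [1, 2]))  # [2.5, 0.0]
```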
def nlms(u, d, M, step, eps=0.001, leak=0, initCoeffs=None, N=None, returnCoeffs=False): # Check epsilon _pchk.checkRegFactor(eps) # Num taps check _pchk.checkNumTaps(M) # Max iteration check if N is None: N = len(u)-M+1 _pchk.checkIter(N, len(u)-M+1) # Check len(d)...
Perform normalized least-mean-squares (NLMS) adaptive filtering on u to minimize error given by e=d-y, where y is the output of the adaptive filter. Parameters ---------- u : array-like One-dimensional filter input. d : array-like One-dimensional desired signal, i.e., the output...
def ap(u, d, M, step, K, eps=0.001, leak=0, initCoeffs=None, N=None, returnCoeffs=False): # Check epsilon _pchk.checkRegFactor(eps) # Check projection order _pchk.checkProjectOrder(K) # Num taps check _pchk.checkNumTaps(M) # Max iteration check if N is None: N = len(u...
Perform affine projection (AP) adaptive filtering on u to minimize error given by e=d-y, where y is the output of the adaptive filter. Parameters ---------- u : array-like One-dimensional filter input. d : array-like One-dimensional desired signal, i.e., the output of the unknown FI...
def bugreport(dest_file="default.log"): adb_full_cmd = [v.ADB_COMMAND_PREFIX, v.ADB_COMMAND_BUGREPORT] try: dest_file_handler = open(dest_file, "w") except IOError: print("IOError: Failed to create a log file") # We have to check if device is available or not before executing t...
Prints dumpsys, dumpstate, and logcat data to the screen, for the purposes of bug reporting :return: result of _exec_command() execution
def push(src, dest): adb_full_cmd = [v.ADB_COMMAND_PREFIX, v.ADB_COMMAND_PUSH, src, dest] return _exec_command(adb_full_cmd)
Push object from host to target :param src: string path to source object on host :param dest: string destination path on target :return: result of _exec_command() execution
def pull(src, dest): adb_full_cmd = [v.ADB_COMMAND_PREFIX, v.ADB_COMMAND_PULL, src, dest] return _exec_command(adb_full_cmd)
Pull object from target to host :param src: string path of object on target :param dest: string destination path on host :return: result of _exec_command() execution
def devices(opts=[]): adb_full_cmd = [v.ADB_COMMAND_PREFIX, v.ADB_COMMAND_DEVICES, _convert_opts(opts)] return _exec_command(adb_full_cmd)
Get list of all available devices including emulators :param opts: list command options (e.g. ["-r", "-a"]) :return: result of _exec_command() execution
def shell(cmd): adb_full_cmd = [v.ADB_COMMAND_PREFIX, v.ADB_COMMAND_SHELL, cmd] return _exec_command(adb_full_cmd)
Execute shell command on target :param cmd: string shell command to execute :return: result of _exec_command() execution
def install(apk, opts=[]): adb_full_cmd = [v.ADB_COMMAND_PREFIX, v.ADB_COMMAND_INSTALL, _convert_opts(opts), apk] return _exec_command(adb_full_cmd)
Install *.apk on target :param apk: string path to apk on host to install :param opts: list command options (e.g. ["-r", "-a"]) :return: result of _exec_command() execution
def uninstall(app, opts=[]): adb_full_cmd = [v.ADB_COMMAND_PREFIX, v.ADB_COMMAND_UNINSTALL, _convert_opts(opts), app] return _exec_command(adb_full_cmd)
Uninstall app from target :param app: app name to uninstall from target (e.g. "com.example.android.valid") :param opts: list command options (e.g. ["-r", "-a"]) :return: result of _exec_command() execution
def sync(): adb_full_cmd = [v.ADB_COMMAND_PREFIX, v.ADB_COMMAND_SHELL ,v.ADB_COMMAND_SYNC] return _exec_command(adb_full_cmd)
Copy host->device only if changed :return: result of _exec_command() execution
def _exec_command(adb_cmd): t = tempfile.TemporaryFile() final_adb_cmd = [] for e in adb_cmd: if e != '': # avoid items with empty string... final_adb_cmd.append(e) # ... so that final command doesn't # contain extra spaces print('\n*** Executing ' + ' '.join(adb_c...
Format an adb command and execute it in a shell :param adb_cmd: list adb command to execute :return: string '0' and shell command output if successful; otherwise raises a CalledProcessError exception and returns the error code
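A simplified, POSIX-flavored sketch of the same pattern; it omits the temporary-file buffering and command echo of the original, and the helper name is illustrative:

```python
import subprocess

def exec_command(cmd):
    # Drop empty items so the final command contains no empty arguments,
    # then run it and return '0' together with the captured output.
    # check_output raises CalledProcessError on a non-zero exit code.
    final_cmd = [e for e in cmd if e != ""]
    output = subprocess.check_output(final_cmd, universal_newlines=True)
    return "0", output

print(exec_command(["echo", "", "hello"]))  # ('0', 'hello\n')
```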
def _exec_command_to_file(adb_cmd, dest_file_handler): t = tempfile.TemporaryFile() final_adb_cmd = [] for e in adb_cmd: if e != '': # avoid items with empty string... final_adb_cmd.append(e) # ... so that final command doesn't # contain extra spaces print('\n*** E...
Format an adb command, execute it in a shell, and redirect the output to a file :param adb_cmd: list adb command to execute :param dest_file_handler: file handler to which output will be redirected :return: string '0' and writes shell command output to file if successful, otherwise raises CalledProcessError exception...
def has_changed(self, initial, data): "Detects if the data was changed. This is added in 1.6." if initial is None and data is None: return False if data and not hasattr(data, '__iter__'): data = self.widget.decompress(data) initial = self.to_python(initial) ...
Detects if the data was changed. This is added in 1.6.
def results_decorator(func): # Wrap function to maintain the original doc string, etc @wraps(func) def decorator(lookup_cls): # Construct a class decorator from the original function original = lookup_cls.results def inner(self, request): # Wrap lookup_cls.results by...
Helper for constructing simple decorators around Lookup.results. func is a function which takes a request as the first parameter. If func returns an HttpResponse it is returned; otherwise the original Lookup.results is returned.
def login_required(request): "Lookup decorator to require the user to be authenticated." user = getattr(request, 'user', None) if user is None or not user.is_authenticated: return HttpResponse(status=401)
Lookup decorator to require the user to be authenticated.
def staff_member_required(request): "Lookup decorator to require the user is a staff member." user = getattr(request, 'user', None) if user is None or not user.is_authenticated: return HttpResponse(status=401) # Unauthorized elif not user.is_staff: return HttpResponseForbidden()
Lookup decorator to require the user is a staff member.
def format_item(self, item): "Construct result dictionary for the match item." result = { 'id': self.get_item_id(item), 'value': self.get_item_value(item), 'label': self.get_item_label(item), } for key in settings.SELECTABLE_ESCAPED_KEYS: i...
Construct result dictionary for the match item.
def paginate_results(self, results, options): "Return a django.core.paginator.Page of results." limit = options.get('limit', settings.SELECTABLE_MAX_LIMIT) paginator = Paginator(results, limit) page = options.get('page', 1) try: results = paginator.page(page) ...
Return a django.core.paginator.Page of results.
def results(self, request): "Match results to given term and return the serialized HttpResponse." results = {} form = self.form(request.GET) if form.is_valid(): options = form.cleaned_data term = options.get('term', '') raw_data = self.get_query(reques...
Match results to given term and return the serialized HttpResponse.
def format_results(self, raw_data, options): page_data = self.paginate_results(raw_data, options) results = {} meta = options.copy() meta['more'] = _('Show more results') if page_data and page_data.has_next(): meta['next_page'] = page_data.next_page_number() ...
Returns a Python structure that later gets serialized. raw_data: full list of objects matching the search term. options: a dictionary of the given options.
def import_lookup_class(lookup_class): from selectable.base import LookupBase if isinstance(lookup_class, string_types): mod_str, cls_str = lookup_class.rsplit('.', 1) mod = import_module(mod_str) lookup_class = getattr(mod, cls_str) if not issubclass(lookup_class, LookupBase): ...
Import lookup_class as a dotted base and ensure it extends LookupBase
def clean_limit(self): "Ensure given limit is less than default if defined" limit = self.cleaned_data.get('limit', None) if (settings.SELECTABLE_MAX_LIMIT is not None and (not limit or limit > settings.SELECTABLE_MAX_LIMIT)): limit = settings.SELECTABLE_MAX_LIMIT ...
Ensure given limit is less than default if defined
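Extracted from the form method, the clamping rule can be sketched as a plain function (the standalone signature here is illustrative; the original reads both values from the form and settings):

```python
def clean_limit(limit, max_limit):
    # Fall back to the configured maximum when no limit is given
    # or the requested limit exceeds it.
    if max_limit is not None and (not limit or limit > max_limit):
        limit = max_limit
    return limit

print(clean_limit(500, 100))   # 100
print(clean_limit(50, 100))    # 50
print(clean_limit(None, 100))  # 100
```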
def getSearchUrl(self, album, artist): # build request url params = collections.OrderedDict() params["gbv"] = "2" params["q"] = "\"%s\" \"%s\" front cover" % (artist, album) if abs(self.target_size - 500) < 300: params["tbs"] = "isz:m" elif self.target_size > 800: params["tbs"] ...
See CoverSource.getSearchUrl.
async def parseResults(self, api_data): results = [] # parse HTML and get results parser = lxml.etree.HTMLParser() html = lxml.etree.XML(api_data.decode("latin-1"), parser) for rank, result in enumerate(__class__.RESULTS_SELECTOR(html), 1): # extract url metadata_div = result.find("...
See CoverSource.parseResults.
async def waitAccessAsync(self): async with self.lock: while True: last_access_ts = self.__getLastAccess() if last_access_ts is not None: now = time.time() last_access_ts = last_access_ts[0] time_since_last_access = now - last_access_ts if time_sinc...
Wait the needed time before sending a request to honor the rate limit.
def __access(self, ts): with self.connection: self.connection.execute("INSERT OR REPLACE INTO access_timestamp (timestamp, domain) VALUES (?, ?)", (ts, self.domain))
Record an API access.
def aiohttp_socket_timeout(socket_timeout_s): return aiohttp.ClientTimeout(total=None, connect=None, sock_connect=socket_timeout_s, sock_read=socket_timeout_s)
Return an aiohttp.ClientTimeout object with only socket timeouts set.
async def query(self, url, *, post_data=None, headers=None, verify=True, cache=None, pre_cache_callback=None): async def store_in_cache_callback(): pass if cache is not None: # try from cache first if post_data is not None: if (url, post_data) in cache: self.logger.debug...
Send a GET/POST request or get data from the cache, retrying on failure, and return a tuple of (store-in-cache callback, response content).
async def isReachable(self, url, *, headers=None, verify=True, response_headers=None, cache=None): if (cache is not None) and (url in cache): # try from cache first self.logger.debug("Got headers for URL '%s' from cache" % (url)) resp_ok, response_headers = pickle.loads(cache[url]) retu...
Send a HEAD request with a short timeout or get data from the cache; return True if the resource has a 2xx status code, False otherwise.
async def fastStreamedQuery(self, url, *, headers=None, verify=True): response = await self.session.get(url, headers=self._buildHeaders(headers), timeout=HTTP_SHORT_TIMEOUT, ssl=verify) respons...
Send a GET request with a short timeout, do not retry, and return the streamed response.
def getSearchUrl(self, album, artist): # build request url params = collections.OrderedDict() params["method"] = "album.getinfo" params["api_key"] = __class__.API_KEY params["album"] = album params["artist"] = artist return __class__.assembleUrl(__class__.BASE_URL, params)
See CoverSource.getSearchUrl.
def processQueryString(self, s): char_blacklist = set(string.punctuation) char_blacklist.remove("'") char_blacklist.remove("&") char_blacklist = frozenset(char_blacklist) return __class__.unpunctuate(s.lower(), char_blacklist=char_blacklist)
See CoverSource.processQueryString.
async def parseResults(self, api_data): results = [] # get xml results list xml_text = api_data.decode("utf-8") xml_root = xml.etree.ElementTree.fromstring(xml_text) status = xml_root.get("status") if status != "ok": raise Exception("Unexpected Last.fm response status: %s" % (status))...
See CoverSource.parseResults.
async def search_and_download(album, artist, format, size, out_filepath, *, size_tolerance_prct, amazon_tlds, no_lq_sources, async_loop): # register sources source_args = (size, size_tolerance_prct) cover_sources = [sources.LastFmCoverSource(*source_args), sourc...
Search and download a cover; return True on success, False otherwise.
def getSearchUrl(self, album, artist): url = "%s/search" % (__class__.BASE_URL) params = collections.OrderedDict() params["search-alias"] = "digital-music" params["field-keywords"] = " ".join((artist, album)) params["sort"] = "relevancerank" return __class__.assembleUrl(url, params)
See CoverSource.getSearchUrl.
async def parseResults(self, api_data): results = [] # parse page parser = lxml.etree.HTMLParser() html = lxml.etree.XML(api_data.decode("utf-8"), parser) for page_struct_version, result_selector in enumerate(__class__.RESULTS_SELECTORS): result_nodes = result_selector(html) if resu...
See CoverSource.parseResults.
def generateImgUrls(self, product_id, dynapi_key, format_id, slice_count): for x in range(slice_count): for y in range(slice_count): yield ("http://z2-ec2.images-amazon.com/R/1/a=" + product_id + "+c=" + dynapi_key + "+d=_SCR%28" + str(format_id) + "," + str(x) + ","...
Generate URLs for slice_count^2 subimages of a product.
def retrier(*, max_attempts, sleeptime, max_sleeptime, sleepscale=1.5, jitter=0.2): assert(max_attempts > 1) assert(sleeptime >= 0) assert(0 <= jitter <= sleeptime) assert(sleepscale >= 1) cur_sleeptime = min(max_sleeptime, sleeptime) for attempt in range(max_attempts): cur_jitter = random.randint(in...
Generator yielding time to wait for, after the attempt, if it failed.
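A simplified sketch of the generator (the original jitters the delay with random.randint; this version uses random.uniform, so with jitter=0 the delays are deterministic):

```python
import random

def retrier(*, max_attempts, sleeptime, max_sleeptime, sleepscale=1.5, jitter=0.2):
    # After each failed attempt, yield the time to wait before retrying:
    # the delay grows geometrically by `sleepscale`, is capped at
    # `max_sleeptime`, and gets +/- `jitter` seconds of random noise.
    cur_sleeptime = min(max_sleeptime, sleeptime)
    for _ in range(max_attempts):
        yield max(0.0, cur_sleeptime + random.uniform(-jitter, jitter))
        cur_sleeptime = min(max_sleeptime, cur_sleeptime * sleepscale)

print(list(retrier(max_attempts=4, sleeptime=1, max_sleeptime=5,
                   sleepscale=2, jitter=0)))  # [1.0, 2.0, 4.0, 5.0]
```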
async def get(self, target_format, target_size, size_tolerance_prct, out_filepath): if self.source_quality.value <= CoverSourceQuality.LOW.value: logging.getLogger("Cover").warning("Cover is from a potentially unreliable source and may be unrelated to the search") images_data = [] for i, url in e...
Download cover and process it.
def postProcess(self, images_data, new_format, new_size): if len(images_data) == 1: in_bytes = io.BytesIO(images_data[0]) img = PIL.Image.open(in_bytes) if img.mode != "RGB": img = img.convert("RGB") else: # images need to be joined before further processing logging.ge...
Convert image binary data to a target format and/or size (None if no conversion needed), and return the processed data.
async def updateImageMetadata(self): assert(self.needMetadataUpdate()) width_sum, height_sum = 0, 0 # only download metadata for the needed images to get full size idxs = [] assert(is_square(len(self.urls))) sq = int(math.sqrt(len(self.urls))) for x in range(sq): for y in range(sq...
Partially download image file(s) to get its real metadata, or get it from cache.
def setFormatMetadata(self, format): assert((self.needMetadataUpdate(CoverImageMetadata.FORMAT)) or (self.format is format)) self.format = format self.check_metadata &= ~CoverImageMetadata.FORMAT
Set format image metadata to what has been reliably identified.
def setSizeMetadata(self, size): assert((self.needMetadataUpdate(CoverImageMetadata.SIZE)) or (self.size == size)) self.size = size self.check_metadata &= ~CoverImageMetadata.SIZE
Set size image metadata to what has been reliably identified.
async def updateSignature(self): assert(self.thumbnail_sig is None) if self.thumbnail_url is None: logging.getLogger("Cover").warning("No thumbnail available for %s" % (self)) return # download logging.getLogger("Cover").debug("Downloading cover thumbnail '%s'..." % (self.thumbnail_url)...
Calculate a cover's "signature" using its thumbnail url.
def compare(first, second, *, target_size, size_tolerance_prct): for c in (first, second): assert(c.format is not None) assert(isinstance(c.size[0], int) and isinstance(c.size[1], int)) # prefer square covers #1 delta_ratio1 = abs(first.size[0] / first.size[1] - 1) delta_ratio2 = abs(se...
Compare cover relevance/quality. Return -1 if first is a worse match than second, 1 otherwise, or 0 if the covers can't be discriminated. This code is responsible for comparing two cover results to identify the best one, and is used to sort all results. It is probably the most important piece of code of this t...
async def crunch(image_data, format, silent=False): if (((format is CoverImageFormat.PNG) and (not HAS_OPTIPNG)) or ((format is CoverImageFormat.JPEG) and (not HAS_JPEGOPTIM))): return image_data with mkstemp_ctx.mkstemp(suffix=".%s" % (format.name.lower())) as tmp_out_filepath: if ...
Crunch image data and return the processed data, or the original data if the operation failed.
def guessImageMetadataFromData(img_data): format, width, height = None, None, None img_stream = io.BytesIO(img_data) try: img = PIL.Image.open(img_stream) except IOError: format = imghdr.what(None, h=img_data) format = SUPPORTED_IMG_FORMATS.get(format, None) else: format...
Identify an image format and size from its first bytes.
async def guessImageMetadataFromHttpData(response): metadata = None img_data = bytearray() while len(img_data) < CoverSourceResult.MAX_FILE_METADATA_PEEK_SIZE: new_img_data = await response.content.read(__class__.METADATA_PEEK_SIZE_INCREMENT) if not new_img_data: break img_dat...
Identify an image format and size from the beginning of its HTTP data.
def guessImageFormatFromHttpResponse(response): extensions = [] # try to guess extension from response content-type header try: content_type = response.headers["Content-Type"] except KeyError: pass else: ext = mimetypes.guess_extension(content_type, strict=False) if ext ...
Guess file format from HTTP response, return format or None.
async def preProcessForComparison(results, target_size, size_tolerance_prct): # find reference (=image most likely to match target cover ignoring factors like size and format) reference = None for result in results: if result.source_quality is CoverSourceQuality.REFERENCE: if ((reference ...
Process results to prepare them for future comparison and sorting.
def computeImgSignature(image_data): parser = PIL.ImageFile.Parser() parser.feed(image_data) img = parser.close() target_size = (__class__.IMG_SIG_SIZE, __class__.IMG_SIG_SIZE) img.thumbnail(target_size, PIL.Image.BICUBIC) if img.size != target_size: logging.getLogger("Cover").debug("...
Calculate an image signature. This is similar to ahash but uses 3 color components. See: https://github.com/JohannesBuchner/imagehash/blob/4.0/imagehash/__init__.py#L125
def analyze_lib(lib_dir, cover_filename, *, ignore_existing=False): work = {} stats = collections.OrderedDict(((k, 0) for k in("files", "albums", "missing covers", "errors"))) with tqdm.tqdm(desc="Analyzing library", unit="dir", postfix=stats) as progress, \ tqdm_log...
Recursively analyze library, and return a dict of path -> (artist, album).
def get_metadata(audio_filepaths): artist, album, has_embedded_album_art = None, None, None for audio_filepath in audio_filepaths: try: mf = mutagen.File(audio_filepath) except Exception: continue if mf is None: continue # artist for key in ("albumartist", "artist", # ogg ...
Return a tuple of album, artist, has_embedded_album_art from a list of audio files.
def analyze_dir(stats, parent_dir, rel_filepaths, cover_filename, *, ignore_existing=False): no_metadata = None, None, None metadata = no_metadata audio_filepaths = [] for rel_filepath in rel_filepaths: stats["files"] += 1 try: ext = os.path.splitext(rel_filepath)[1][1:].lower() except Inde...
Analyze a directory (non recursively) to get its album metadata if it is one.
def embed_album_art(cover_filepath, path): with open(cover_filepath, "rb") as f: cover_data = f.read() for filename in os.listdir(path): try: ext = os.path.splitext(filename)[1][1:].lower() except IndexError: continue if ext in AUDIO_EXTENSIONS: filepath = os.path.join(path, fil...
Embed album art into audio files.
def ichunk(iterable, n): it = iter(iterable) while True: chunk = tuple(itertools.islice(it, n)) if not chunk: return yield chunk
Split an iterable into n-sized chunks.
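The chunking generator above is already self-contained; reproduced here with a usage example:

```python
import itertools

def ichunk(iterable, n):
    # Lazily yield successive n-sized tuples; the last chunk may be shorter.
    it = iter(iterable)
    while True:
        chunk = tuple(itertools.islice(it, n))
        if not chunk:
            return
        yield chunk

print(list(ichunk(range(7), 3)))  # [(0, 1, 2), (3, 4, 5), (6,)]
```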
def get_covers(work, args): with contextlib.ExitStack() as cm: if args.filename == EMBEDDED_ALBUM_ART_SYMBOL: tmp_prefix = "%s_" % (os.path.splitext(os.path.basename(inspect.getfile(inspect.currentframe())))[0]) tmp_dir = cm.enter_context(tempfile.TemporaryDirectory(prefix=tmp_prefix)) # setup ...
Get missing covers.
def mkstemp(*args, **kwargs): fd, filename = tempfile.mkstemp(*args, **kwargs) os.close(fd) try: yield filename finally: os.remove(filename)
Context manager similar to tempfile.NamedTemporaryFile, except the file is not deleted on close and only the filepath is returned. .. warning:: Unlike tempfile.mkstemp, this is not secure
def redirect_logging(tqdm_obj, logger=logging.getLogger()): # remove current handler assert(len(logger.handlers) == 1) prev_handler = logger.handlers[0] logger.removeHandler(prev_handler) # add tqdm handler tqdm_handler = TqdmLoggingHandler(tqdm_obj) if prev_handler.formatter is not None: tqdm_hand...
Context manager to redirect logging to a TqdmLoggingHandler object and then restore the original.
async def search(self, album, artist): self.logger.debug("Searching with source '%s'..." % (self.__class__.__name__)) album = self.processAlbumString(album) artist = self.processArtistString(artist) url_data = self.getSearchUrl(album, artist) if isinstance(url_data, tuple): url, post_data...
Search for a given album/artist and return an iterable of CoverSourceResult.
async def fetchResults(self, url, post_data=None): if post_data is not None: self.logger.debug("Querying URL '%s' %s..." % (url, dict(post_data))) else: self.logger.debug("Querying URL '%s'..." % (url)) headers = {} self.updateHttpHeaders(headers) return await self.http.query(url, ...
Get a (store-in-cache callback, search results) tuple from a URL.
async def probeUrl(self, url, response_headers=None): self.logger.debug("Probing URL '%s'..." % (url)) headers = {} self.updateHttpHeaders(headers) resp_headers = {} resp_ok = await self.http.isReachable(url, headers=headers, ...
Probe URL reachability from cache or HEAD request.
def unaccentuate(s): return "".join(c for c in unicodedata.normalize("NFKD", s) if not unicodedata.combining(c))
Replace accented characters in a string with their non-accented equivalents.
def unpunctuate(s, *, char_blacklist=string.punctuation): # remove punctuation s = "".join(c for c in s if c not in char_blacklist) # remove consecutive spaces return " ".join(filter(None, s.split(" ")))
Remove punctuation from string s.
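Both string helpers above are self-contained; combined, they normalize a query string like so:

```python
import string
import unicodedata

def unaccentuate(s):
    # Decompose accented characters (NFKD), then drop the combining marks.
    return "".join(c for c in unicodedata.normalize("NFKD", s)
                   if not unicodedata.combining(c))

def unpunctuate(s, *, char_blacklist=string.punctuation):
    # Remove punctuation, then collapse consecutive spaces.
    s = "".join(c for c in s if c not in char_blacklist)
    return " ".join(filter(None, s.split(" ")))

print(unpunctuate(unaccentuate("Café, Noël & Björk!")))  # Cafe Noel Bjork
```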
def getSearchUrl(self, album, artist): params = collections.OrderedDict() params["search-alias"] = "popular" params["field-artist"] = artist params["field-title"] = album params["sort"] = "relevancerank" return __class__.assembleUrl(self.base_url, params)
See CoverSource.getSearchUrl.
async def parseResults(self, api_data): results = [] # parse page parser = lxml.etree.HTMLParser() html = lxml.etree.XML(api_data.decode("utf-8", "ignore"), parser) for page_struct_version, result_selector in enumerate(__class__.RESULTS_SELECTORS): result_nodes = result_selector(html) ...
See CoverSource.parseResults.
def delete_all(obj): types = tuple([ Shader, Mesh, VertexBuffer, IndexBuffer, Texture, Program, Context, ]) for name in dir(obj): child = getattr(obj, name) if isinstance(child, types): child.delete()
Calls `delete()` on all members of `obj` that are recognized as instances of `pg` objects.
def _glfw_get_version(filename): version_checker_source = args = [sys.executable, '-c', textwrap.dedent(version_checker_source)] process = subprocess.Popen(args, universal_newlines=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE) out = process.communicate(_to_cha...
Queries and returns the library version tuple or None by using a subprocess.
def set_error_callback(cbfun): global _error_callback previous_callback = _error_callback if cbfun is None: cbfun = 0 c_cbfun = _GLFWerrorfun(cbfun) _error_callback = (cbfun, c_cbfun) cbfun = c_cbfun _glfw.glfwSetErrorCallback(cbfun) if previous_callback is not None and prev...
Sets the error callback. Wrapper for: GLFWerrorfun glfwSetErrorCallback(GLFWerrorfun cbfun);
def destroy_window(window): _glfw.glfwDestroyWindow(window) window_addr = ctypes.cast(ctypes.pointer(window), ctypes.POINTER(ctypes.c_ulong)).contents.value for callback_repository in _callback_repositories: if window_addr in callback_repository: del ca...
Destroys the specified window and its context. Wrapper for: void glfwDestroyWindow(GLFWwindow* window);
def unwrap(self): size = self.width, self.height bits = self.red_bits, self.green_bits, self.blue_bits return size, bits, self.refresh_rate
Returns a nested python sequence.
def wrap(self, gammaramp): red, green, blue = gammaramp size = min(len(red), len(green), len(blue)) array_type = ctypes.c_ushort*size self.size = ctypes.c_uint(size) self.red_array = array_type() self.green_array = array_type() self.blue_array = array_typ...
Wraps a nested python sequence.
def unwrap(self): red = [self.red[i]/65535.0 for i in range(self.size)] green = [self.green[i]/65535.0 for i in range(self.size)] blue = [self.blue[i]/65535.0 for i in range(self.size)] return red, green, blue
Returns a nested python sequence.
def hex_color(value): r = ((value >> (8 * 2)) & 255) / 255.0 g = ((value >> (8 * 1)) & 255) / 255.0 b = ((value >> (8 * 0)) & 255) / 255.0 return (r, g, b)
Accepts a hexadecimal color `value` in the format ``0xrrggbb`` and returns an (r, g, b) tuple where 0.0 <= r, g, b <= 1.0.
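The bit shifts above extract one byte per channel; written out with explicit shift amounts:

```python
def hex_color(value):
    # Extract each 8-bit channel and scale it to the [0.0, 1.0] range.
    r = ((value >> 16) & 255) / 255.0
    g = ((value >> 8) & 255) / 255.0
    b = (value & 255) / 255.0
    return (r, g, b)

print(hex_color(0xff0000))  # (1.0, 0.0, 0.0)
```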
def normalize(vector): d = sum(x * x for x in vector) ** 0.5 return tuple(x / d for x in vector)
Normalizes the `vector` so that its length is 1. `vector` can have any number of components.
def distance(p1, p2): return sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5
Computes and returns the distance between two points, `p1` and `p2`. The points can have any number of components.
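Both vector helpers above work for any number of components; a runnable example:

```python
def normalize(vector):
    # Scale the vector by the reciprocal of its Euclidean length.
    d = sum(x * x for x in vector) ** 0.5
    return tuple(x / d for x in vector)

def distance(p1, p2):
    # Euclidean distance between two points of matching dimension.
    return sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5

print(normalize((3, 4)))         # (0.6, 0.8)
print(distance((0, 0), (3, 4)))  # 5.0
```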
def cross(v1, v2): return ( v1[1] * v2[2] - v1[2] * v2[1], v1[2] * v2[0] - v1[0] * v2[2], v1[0] * v2[1] - v1[1] * v2[0], )
Computes the cross product of two vectors.
def dot(v1, v2): x1, y1, z1 = v1 x2, y2, z2 = v2 return x1 * x2 + y1 * y2 + z1 * z2
Computes the dot product of two vectors.
def add(v1, v2): return tuple(a + b for a, b in zip(v1, v2))
Adds two vectors.
def sub(v1, v2): return tuple(a - b for a, b in zip(v1, v2))
Subtracts two vectors.
def interpolate(v1, v2, t): return add(v1, mul(sub(v2, v1), t))
Interpolate from one vector to another.
def normal_from_points(a, b, c): x1, y1, z1 = a x2, y2, z2 = b x3, y3, z3 = c ab = (x2 - x1, y2 - y1, z2 - z1) ac = (x3 - x1, y3 - y1, z3 - z1) x, y, z = cross(ab, ac) d = (x * x + y * y + z * z) ** 0.5 return (x / d, y / d, z / d)
Computes a normal vector given three points.
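The function above crosses the two edge vectors ab and ac, then normalizes the result; with the `cross` helper from this same module inlined, it runs standalone:

```python
def cross(v1, v2):
    # Standard 3-component cross product.
    return (
        v1[1] * v2[2] - v1[2] * v2[1],
        v1[2] * v2[0] - v1[0] * v2[2],
        v1[0] * v2[1] - v1[1] * v2[0],
    )

def normal_from_points(a, b, c):
    # Cross the edge vectors ab and ac, then normalize the result.
    x1, y1, z1 = a
    x2, y2, z2 = b
    x3, y3, z3 = c
    x, y, z = cross((x2 - x1, y2 - y1, z2 - z1), (x3 - x1, y3 - y1, z3 - z1))
    d = (x * x + y * y + z * z) ** 0.5
    return (x / d, y / d, z / d)

print(normal_from_points((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```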
def smooth_normals(positions, normals): lookup = defaultdict(list) for position, normal in zip(positions, normals): lookup[position].append(normal) result = [] for position in positions: tx = ty = tz = 0 for x, y, z in lookup[position]: tx += x ty += ...
Assigns an averaged normal to each position based on all of the normals originally used for the position.
def bounding_box(positions): (x0, y0, z0) = (x1, y1, z1) = positions[0] for x, y, z in positions: x0 = min(x0, x) y0 = min(y0, y) z0 = min(z0, z) x1 = max(x1, x) y1 = max(y1, y) z1 = max(z1, z) return (x0, y0, z0), (x1, y1, z1)
Computes the bounding box for a list of 3-dimensional points.
def recenter(positions): (x0, y0, z0), (x1, y1, z1) = bounding_box(positions) dx = x1 - (x1 - x0) / 2.0 dy = y1 - (y1 - y0) / 2.0 dz = z1 - (z1 - z0) / 2.0 result = [] for x, y, z in positions: result.append((x - dx, y - dy, z - dz)) return result
Returns a list of new positions centered around the origin.
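`recenter` subtracts the bounding-box center from every point; with `bounding_box` from above, the pair runs standalone:

```python
def bounding_box(positions):
    # Track per-axis minima and maxima across all points.
    (x0, y0, z0) = (x1, y1, z1) = positions[0]
    for x, y, z in positions:
        x0, y0, z0 = min(x0, x), min(y0, y), min(z0, z)
        x1, y1, z1 = max(x1, x), max(y1, y), max(z1, z)
    return (x0, y0, z0), (x1, y1, z1)

def recenter(positions):
    # Shift every point by the bounding-box center.
    (x0, y0, z0), (x1, y1, z1) = bounding_box(positions)
    dx = x1 - (x1 - x0) / 2.0
    dy = y1 - (y1 - y0) / 2.0
    dz = z1 - (z1 - z0) / 2.0
    return [(x - dx, y - dy, z - dz) for x, y, z in positions]

print(recenter([(0, 0, 0), (2, 4, 6)]))  # [(-1.0, -2.0, -3.0), (1.0, 2.0, 3.0)]
```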
def interleave(*args): result = [] for array in zip(*args): result.append(tuple(flatten(array))) return result
Interleaves the elements of the provided arrays. >>> a = [(0, 0), (1, 0), (2, 0), (3, 0)] >>> b = [(0, 0), (0, 1), (0, 2), (0, 3)] >>> interleave(a, b) [(0, 0, 0, 0), (1, 0, 0, 1), (2, 0, 0, 2), (3, 0, 0, 3)] This is useful for combining multiple vertex attributes into a single ...
def distinct(iterable, keyfunc=None): seen = set() for item in iterable: key = item if keyfunc is None else keyfunc(item) if key not in seen: seen.add(key) yield item
Yields distinct items from `iterable` in the order that they appear.
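The generator above is complete; a usage example showing the optional key function:

```python
def distinct(iterable, keyfunc=None):
    # Remember seen keys; yield only the first item for each key.
    seen = set()
    for item in iterable:
        key = item if keyfunc is None else keyfunc(item)
        if key not in seen:
            seen.add(key)
            yield item

print(list(distinct([3, 1, 3, 2, 1])))             # [3, 1, 2]
print(list(distinct(["a", "B", "A"], str.lower)))  # ['a', 'B']
```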
def ray_triangle_intersection(v1, v2, v3, o, d): eps = 1e-6 e1 = sub(v2, v1) e2 = sub(v3, v1) p = cross(d, e2) det = dot(e1, p) if abs(det) < eps: return None inv = 1.0 / det t = sub(o, v1) u = dot(t, p) * inv if u < 0 or u > 1: return None q = cross(t, e...
Computes the distance from a point to a triangle given a ray.
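The truncated body above matches the Möller–Trumbore ray/triangle test; a complete sketch under that assumption, with the vector helpers it relies on inlined:

```python
def sub(v1, v2):
    return tuple(a - b for a, b in zip(v1, v2))

def dot(v1, v2):
    return sum(a * b for a, b in zip(v1, v2))

def cross(v1, v2):
    return (
        v1[1] * v2[2] - v1[2] * v2[1],
        v1[2] * v2[0] - v1[0] * v2[2],
        v1[0] * v2[1] - v1[1] * v2[0],
    )

def ray_triangle_intersection(v1, v2, v3, o, d):
    # Möller–Trumbore: solve o + dist*d = v1 + u*e1 + v*e2 for (dist, u, v)
    # and reject hits with barycentric coordinates outside the triangle.
    eps = 1e-6
    e1 = sub(v2, v1)
    e2 = sub(v3, v1)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None  # ray parallel to the triangle plane
    inv = 1.0 / det
    t = sub(o, v1)
    u = dot(t, p) * inv
    if u < 0 or u > 1:
        return None
    q = cross(t, e1)
    v = dot(d, q) * inv
    if v < 0 or u + v > 1:
        return None
    return dot(e2, q) * inv  # distance along the ray

print(ray_triangle_intersection(
    (0, 0, 0), (1, 0, 0), (0, 1, 0), (0.25, 0.25, 1.0), (0, 0, -1)))  # 1.0
```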
def pack_list(fmt, data): func = struct.Struct(fmt).pack return create_string_buffer(''.join([func(x) for x in data]))
Convert a Python list into a ctypes buffer. This appears to be faster than the typical method of creating a ctypes array, e.g. (c_float * len(data))(*data)
def click(self, jquery=False): if jquery: e = JQuery(self) e.click() else: super(Clickable, self).click()
Click the element via jQuery if `jquery` is True, otherwise perform a plain WebElement click.
def convert_cookie_to_dict(cookie, keys_map=WEB_DRIVER_COOKIE_KEYS_MAP): cookie_dict = dict() for k in keys_map.keys(): key = _to_unicode_if_str(keys_map[k]) value = _to_unicode_if_str(getattr(cookie, k)) cookie_dict[key] = value return cookie_dict
Converts an instance of the Cookie class from cookielib to a dict. The names of attributes can be changed according to keys_map. For example, this method can be used to create a cookie which is compatible with the WebDriver format. :param cookie: Cookie instance received from requests/sessions using urllib2 or reque...