| _id | title | partition | text | language | meta_information |
|---|---|---|---|---|---|
q269900 | Biomart.get_datasets | test | def get_datasets(self, mart='ENSEMBL_MART_ENSEMBL'):
"""Get available datasets from mart you've selected"""
datasets = self.datasets(mart, raw=True)
| python | {
"resource": ""
} |
q269901 | Biomart.get_attributes | test | def get_attributes(self, dataset):
"""Get available attritbutes from dataset you've selected"""
attributes = self.attributes(dataset)
attr_ = [ (k, v[0]) for k, v | python | {
"resource": ""
} |
q269902 | Biomart.get_filters | test | def get_filters(self, dataset):
"""Get available filters from dataset you've selected"""
filters = self.filters(dataset)
filt_ = [ (k, v[0]) for k, v | python | {
"resource": ""
} |
q269903 | Biomart.query | test | def query(self, dataset='hsapiens_gene_ensembl', attributes=[],
filters={}, filename=None):
"""mapping ids using BioMart.
:param dataset: str, default: 'hsapiens_gene_ensembl'
:param attributes: str, list, tuple
:param filters: dict, {'filter name': list(filter value)}
:param host: www.ensembl.org, asia.ensembl.org, useast.ensembl.org
:return: a dataframe containing all attributes you selected.
**Note**: it will take a couple of minutes to get the results.
A xml template for querying biomart. (see https://gist.github.com/keithshep/7776579)
exampleTaxonomy = "mmusculus_gene_ensembl"
exampleGene = "ENSMUSG00000086981,ENSMUSG00000086982,ENSMUSG00000086983"
urlTemplate = \
'''http://ensembl.org/biomart/martservice?query=''' \
'''<?xml version="1.0" encoding="UTF-8"?>''' \
'''<!DOCTYPE Query>''' \
'''<Query virtualSchemaName="default" formatter="CSV" header="0" uniqueRows="0" count="" datasetConfigVersion="0.6">''' \
'''<Dataset name="%s" interface="default"><Filter name="ensembl_gene_id" value="%s"/>''' \
'''<Attribute name="ensembl_gene_id"/><Attribute name="ensembl_transcript_id"/>''' \
'''<Attribute name="transcript_start"/><Attribute name="transcript_end"/>''' \
'''<Attribute name="exon_chrom_start"/><Attribute name="exon_chrom_end"/>''' \
'''</Dataset>''' \
'''</Query>'''
exampleURL = urlTemplate % (exampleTaxonomy, exampleGene)
req = requests.get(exampleURL, stream=True)
"""
if not attributes:
| python | {
"resource": ""
} |
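The docstring above shows the XML query template BioMart expects. A minimal sketch of filling that template, using the example dataset and gene IDs from the docstring (the attribute list here is illustrative, not exhaustive):

```python
# Fill the BioMart martservice query template from the docstring above.
# Dataset name and gene IDs are the example values given in the source;
# the attribute names passed in are illustrative.
def build_biomart_url(dataset, gene_ids, attributes):
    """Build a martservice URL with a dataset, a comma-separated
    ensembl_gene_id filter, and a list of attribute names."""
    attrs = "".join('<Attribute name="%s"/>' % a for a in attributes)
    xml = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<!DOCTYPE Query>'
        '<Query virtualSchemaName="default" formatter="CSV" header="0" '
        'uniqueRows="0" count="" datasetConfigVersion="0.6">'
        '<Dataset name="%s" interface="default">'
        '<Filter name="ensembl_gene_id" value="%s"/>'
        '%s</Dataset></Query>' % (dataset, gene_ids, attrs)
    )
    return 'http://ensembl.org/biomart/martservice?query=' + xml

url = build_biomart_url(
    "mmusculus_gene_ensembl",
    "ENSMUSG00000086981,ENSMUSG00000086982",
    ["ensembl_gene_id", "ensembl_transcript_id"],
)
```

Fetching `url` with `requests.get(url, stream=True)` then streams the CSV result, as the docstring's example does.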
q269904 | gsea | test | def gsea(data, gene_sets, cls, outdir='GSEA_', min_size=15, max_size=500, permutation_num=1000,
weighted_score_type=1,permutation_type='gene_set', method='log2_ratio_of_classes',
ascending=False, processes=1, figsize=(6.5,6), format='pdf',
graph_num=20, no_plot=False, seed=None, verbose=False):
""" Run Gene Set Enrichment Analysis.
:param data: Gene expression data table, Pandas DataFrame, gct file.
:param gene_sets: Enrichr Library name or .gmt gene sets file or dict of gene sets. Same input with GSEA.
:param cls: A list or a .cls file format required for GSEA.
:param str outdir: Results output directory.
:param int permutation_num: Number of permutations for significance computation. Default: 1000.
:param str permutation_type: Permutation type, "phenotype" for phenotypes, "gene_set" for genes.
:param int min_size: Minimum allowed number of genes from a gene set that are also present in the data set. Default: 15.
:param int max_size: Maximum allowed number of genes from a gene set that are also present in the data set. Default: 500.
:param float weighted_score_type: Refer to :func:`algorithm.enrichment_score`. Default:1.
:param method: The method used to calculate a correlation or ranking. Default: 'log2_ratio_of_classes'.
Other methods are:
1. 'signal_to_noise'
You must have at least three samples for each phenotype to use this metric.
The larger the signal-to-noise ratio, the larger the differences of the means (scaled by the standard deviations);
that is, the more distinct the gene expression is in each phenotype and the more the gene acts as a “class marker.”
2. 't_test'
Uses the difference of means scaled by the standard deviation and number of samples.
Note: You must have at least three samples for each phenotype to use this metric.
The larger the tTest ratio, the more distinct the gene expression is in each phenotype
and the more the gene acts as a “class marker.”
3. 'ratio_of_classes' (also referred to as fold change).
Uses the ratio of class means to calculate fold change for natural scale data.
4. 'diff_of_classes'
Uses the | python | {
"resource": ""
} |
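The default ranking metric named above, 'log2_ratio_of_classes', can be sketched as the per-gene log2 ratio of class means. This is a minimal illustration of the idea, not gseapy's actual `ranking_metric` implementation:

```python
import numpy as np
import pandas as pd

# Minimal sketch of the 'log2_ratio_of_classes' metric described above:
# log2(mean(positive-class samples) / mean(negative-class samples)) per
# gene, sorted from most positively to most negatively correlated.
def log2_ratio_of_classes(df, classes, pos, neg):
    mask_pos = [c == pos for c in classes]
    mask_neg = [c == neg for c in classes]
    mean_pos = df.loc[:, mask_pos].mean(axis=1)
    mean_neg = df.loc[:, mask_neg].mean(axis=1)
    return np.log2(mean_pos / mean_neg).sort_values(ascending=False)

expr = pd.DataFrame({"s1": [8.0, 1.0], "s2": [8.0, 1.0],
                     "t1": [2.0, 4.0], "t2": [2.0, 4.0]},
                    index=["geneA", "geneB"])
rank = log2_ratio_of_classes(expr, ["pos", "pos", "neg", "neg"], "pos", "neg")
```

Here geneA (up in the positive class) ranks first with a score of log2(8/2) = 2, and geneB last with log2(1/4) = -2.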
q269905 | ssgsea | test | def ssgsea(data, gene_sets, outdir="ssGSEA_", sample_norm_method='rank', min_size=15, max_size=2000,
permutation_num=0, weighted_score_type=0.25, scale=True, ascending=False, processes=1,
figsize=(7,6), format='pdf', graph_num=20, no_plot=False, seed=None, verbose=False):
"""Run Gene Set Enrichment Analysis with single sample GSEA tool
:param data: Expression table, pd.Series, pd.DataFrame, GCT file, or .rnk file format.
:param gene_sets: Enrichr Library name or .gmt gene sets file or dict of gene sets. Same input with GSEA.
:param outdir: Results output directory.
:param str sample_norm_method: Sample normalization method. Choose from {'rank', 'log', 'log_rank'}. Default: 'rank'.
1. 'rank': Rank your expression data, and transform by 10000*rank_dat/gene_numbers
2. 'log' : Do not rank, but transform data by log(data + exp(1)), where values of data < 1 are set to 1 first.
3. 'log_rank': Rank your expression data, and transform by log(10000*rank_dat/gene_numbers+ exp(1))
4. 'custom': Do nothing, and use your own rank value to calculate enrichment score.
see here: https://github.com/GSEA-MSigDB/ssGSEAProjection-gpmodule/blob/master/src/ssGSEAProjection.Library.R, line 86
:param int min_size: Minimum allowed number of genes from a gene set that are also present in the data set. Default: 15.
:param int max_size: Maximum allowed number of genes from a gene set that are also present in the data set. Default: 2000.
:param int permutation_num: Number of permutations for significance computation. Default: 0.
:param float weighted_score_type: Refer to :func:`algorithm.enrichment_score`. Default: 0.25.
:param bool scale: If True, normalize the scores by number of genes in the gene sets.
:param bool ascending: Sorting order of rankings. Default: False.
:param int processes: Number of Processes you are going to use. Default: 1.
:param list figsize: Matplotlib figsize, accept a tuple or list, e.g. [width,height]. Default: [7,6].
:param str format: Matplotlib figure format. Default: 'pdf'.
:param int graph_num: Plot graphs for top sets | python | {
"resource": ""
} |
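The 'rank' sample normalization described above (transform by 10000 * rank / gene_numbers) can be sketched directly with pandas. This is an illustration of the formula in the docstring; gseapy's implementation may differ in tie handling:

```python
import pandas as pd

# Sketch of the 'rank' sample normalization described above: rank the
# expression values within one sample, then scale by
# 10000 * rank / number_of_genes.
def rank_normalize(ser):
    n = ser.size
    rank_dat = ser.rank(method="average", ascending=True)
    return 10000.0 * rank_dat / n

s = pd.Series([5.0, 1.0, 3.0, 2.0], index=list("abcd"))
norm = rank_normalize(s)
```

The highest-expressed gene gets the maximum value 10000, and the lowest gets 10000 / n.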
q269906 | prerank | test | def prerank(rnk, gene_sets, outdir='GSEA_Prerank', pheno_pos='Pos', pheno_neg='Neg',
min_size=15, max_size=500, permutation_num=1000, weighted_score_type=1,
ascending=False, processes=1, figsize=(6.5,6), format='pdf',
graph_num=20, no_plot=False, seed=None, verbose=False):
""" Run Gene Set Enrichment Analysis with pre-ranked correlation defined by user.
:param rnk: pre-ranked correlation table or pandas DataFrame. Same input with ``GSEA`` .rnk file.
:param gene_sets: Enrichr Library name or .gmt gene sets file or dict of gene sets. Same input with GSEA.
:param outdir: results output directory.
:param int permutation_num: Number of permutations for significance computation. Default: 1000.
:param int min_size: Minimum allowed number of genes from a gene set that are also present in the data set. Default: 15.
:param int max_size: Maximum allowed number of genes from a gene set that are also present in the data set. Default: 500.
:param float weighted_score_type: Refer to :func:`algorithm.enrichment_score`. Default: 1.
| python | {
"resource": ""
} |
q269907 | replot | test | def replot(indir, outdir='GSEA_Replot', weighted_score_type=1,
min_size=3, max_size=1000, figsize=(6.5,6), graph_num=20, format='pdf', verbose=False):
"""The main function to reproduce GSEA desktop outputs.
:param indir: GSEA desktop results directory. The subfolder must contain the edb file folder.
:param outdir: Output directory.
:param float weighted_score_type: Weighted score type. Choose from {0, 1, 1.5, 2}. Default: 1.
:param list figsize: Matplotlib output figure figsize. Default: [6.5,6].
:param str format: Matplotlib output figure format. Default: 'pdf'.
:param int min_size: Min size of input genes presented in Gene Sets. Default: 3.
:param int max_size: Max size of input genes presented in Gene Sets. Default: 1000.
You are | python | {
"resource": ""
} |
q269908 | GSEAbase._set_cores | test | def _set_cores(self):
"""set cpu numbers to be used"""
cpu_num = cpu_count()-1
if self._processes > cpu_num:
cores = cpu_num
elif self._processes < 1:
cores = 1
| python | {
"resource": ""
} |
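The truncated `_set_cores` logic above clamps the requested process count. A plausible completion of that clamping, written as a standalone function for illustration:

```python
from multiprocessing import cpu_count

# A plausible completion of the core-clamping logic shown above: the
# requested process count is clamped to the range [1, cpu_count() - 1].
def set_cores(requested):
    cpu_num = cpu_count() - 1
    if requested > cpu_num:
        cores = cpu_num
    elif requested < 1:
        cores = 1
    else:
        cores = requested
    # guard against single-core machines where cpu_num == 0
    return max(cores, 1)
```

Leaving one core free keeps the host responsive while permutations run in parallel.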
q269909 | GSEAbase.load_gmt | test | def load_gmt(self, gene_list, gmt):
"""load gene set dict"""
if isinstance(gmt, dict):
genesets_dict = gmt
elif isinstance(gmt, str):
genesets_dict = self.parse_gmt(gmt)
else:
raise Exception("Error parsing gmt parameter for gene sets")
subsets = list(genesets_dict.keys())
self.n_genesets = len(subsets)
for subset in subsets:
subset_list = genesets_dict.get(subset)
if isinstance(subset_list, set):
subset_list = list(subset_list)
genesets_dict[subset] = subset_list
tag_indicator = np.in1d(gene_list, subset_list, assume_unique=True)
tag_len = tag_indicator.sum()
if self.min_size <= tag_len <= self.max_size: continue
| python | {
"resource": ""
} |
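`load_gmt` above builds a membership indicator with `np.in1d` and keeps only gene sets whose overlap with the ranked gene list falls within `[min_size, max_size]`. A self-contained sketch of that filtering step (illustrative, not the package's code):

```python
import numpy as np

# Sketch of the size filtering in load_gmt above: a gene set is kept
# only when the number of its genes present in the ranked gene list
# falls within [min_size, max_size].
def filter_gene_sets(gene_list, genesets_dict, min_size, max_size):
    kept = {}
    gene_list = np.asarray(gene_list)
    for name, members in genesets_dict.items():
        tag_indicator = np.in1d(gene_list, list(members), assume_unique=True)
        tag_len = int(tag_indicator.sum())
        if min_size <= tag_len <= max_size:
            kept[name] = list(members)
    return kept

gmt = {"small": ["g1"], "ok": ["g1", "g2", "g3"]}
kept = filter_gene_sets(["g1", "g2", "g3", "g4"], gmt, 2, 10)
```

Sets with too few (or too many) genes matched in the data never reach the statistical testing stage.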
q269910 | GSEAbase.get_libraries | test | def get_libraries(self, database=''):
"""return active enrichr library name.Offical API """
lib_url='http://amp.pharm.mssm.edu/%sEnrichr/datasetStatistics'%database
libs_json | python | {
"resource": ""
} |
q269911 | GSEAbase._download_libraries | test | def _download_libraries(self, libname):
""" download enrichr libraries."""
self._logger.info("Downloading and generating Enrichr library gene sets......")
s = retry(5)
# query string
ENRICHR_URL = 'http://amp.pharm.mssm.edu/Enrichr/geneSetLibrary'
query_string = '?mode=text&libraryName=%s'
# get
response = s.get( ENRICHR_URL + query_string % libname, timeout=None)
if not response.ok:
raise Exception('Error fetching enrichment results, check internet connection first.')
# reformat to dict and save to disk
mkdirs(DEFAULT_CACHE_PATH)
genesets_dict = {}
outname = "enrichr.%s.gmt"%libname
gmtout = open(os.path.join(DEFAULT_CACHE_PATH, outname), "w")
| python | {
"resource": ""
} |
q269912 | GSEAbase._heatmat | test | def _heatmat(self, df, classes, pheno_pos, pheno_neg):
"""only use for gsea heatmap"""
width = len(classes) if len(classes) >= 6 else 5
cls_booA =list(map(lambda x: True if x == pheno_pos else False, classes))
cls_booB =list(map(lambda x: True if x == pheno_neg else False, classes))
datA = df.loc[:, cls_booA]
| python | {
"resource": ""
} |
q269913 | GSEAbase._save_results | test | def _save_results(self, zipdata, outdir, module, gmt, rank_metric, permutation_type):
"""reformat gsea results, and save to txt"""
res = OrderedDict()
for gs, gseale, ind, RES in zipdata:
rdict = OrderedDict()
rdict['es'] = gseale[0]
rdict['nes'] = gseale[1]
rdict['pval'] = gseale[2]
rdict['fdr'] = gseale[3]
rdict['geneset_size'] = len(gmt[gs])
rdict['matched_size'] = len(ind)
#reformat gene list.
_genes = rank_metric.index.values[ind]
rdict['genes'] = ";".join([ str(g).strip() for g in _genes ])
if self.module != 'ssgsea':
# extract leading edge genes
if rdict['es'] > 0:
# RES -> ndarray, ind -> list
idx = RES.argmax()
ldg_pos = list(filter(lambda x: x<= idx, ind))
| python | {
"resource": ""
} |
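The truncated leading-edge logic in `_save_results` above takes the hits at or before the running enrichment score's peak when ES > 0. A minimal sketch of both branches (the negative-ES branch is inferred by symmetry and is an assumption here):

```python
import numpy as np

# Sketch of leading-edge extraction: for a positive enrichment score,
# leading-edge genes are the hit indices at or before the running-score
# peak; for a negative score (assumed symmetric case), at or after the
# trough.
def leading_edge(RES, hit_ind, es):
    if es > 0:
        idx = int(np.argmax(RES))
        return [i for i in hit_ind if i <= idx]
    idx = int(np.argmin(RES))
    return [i for i in hit_ind if i >= idx]

RES = np.array([0.1, 0.4, 0.9, 0.5, 0.2])
edge = leading_edge(RES, [0, 2, 4], es=0.9)
```

With the peak at index 2, only the hits at positions 0 and 2 form the leading edge.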
q269914 | GSEA.load_data | test | def load_data(self, cls_vec):
"""pre-processed the data frame.new filtering methods will be implement here.
"""
# read data in
if isinstance(self.data, pd.DataFrame) :
exprs = self.data.copy()
# handle index is gene_names
if exprs.index.dtype == 'O':
exprs = exprs.reset_index()
elif os.path.isfile(self.data) :
# GCT input format?
if self.data.endswith("gct"):
exprs = pd.read_csv(self.data, skiprows=1, comment='#',sep="\t")
else:
exprs = pd.read_csv(self.data, comment='#',sep="\t")
else:
raise Exception('Error parsing gene expression DataFrame!')
#drop duplicated gene names
if exprs.iloc[:,0].duplicated().sum() > 0:
self._logger.warning("Warning: dropping duplicated gene names, only keep the first values")
| python | {
"resource": ""
} |
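The duplicate handling that `load_data` above warns about (keep only the first occurrence of each gene name) can be sketched with pandas:

```python
import pandas as pd

# Sketch of the duplicate handling in load_data above: when the first
# column (gene names) contains duplicates, keep only the first row for
# each gene. Illustrative, not the package's exact code.
def drop_duplicated_genes(exprs):
    if exprs.iloc[:, 0].duplicated().sum() > 0:
        exprs = exprs.drop_duplicates(subset=exprs.columns[0], keep="first")
    return exprs

df = pd.DataFrame({"gene": ["a", "b", "a"], "s1": [1.0, 2.0, 3.0]})
dedup = drop_duplicated_genes(df)
```

The duplicated "a" row with value 3.0 is dropped; the first occurrence (1.0) survives.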
q269915 | GSEA.run | test | def run(self):
"""GSEA main procedure"""
assert self.permutation_type in ["phenotype", "gene_set"]
assert self.min_size <= self.max_size
# Start Analysis
self._logger.info("Parsing data files for GSEA.............................")
# phenotype labels parsing
phenoPos, phenoNeg, cls_vector = gsea_cls_parser(self.classes)
# select correct expression genes and values.
dat = self.load_data(cls_vector)
# data frame must have length > 1
assert len(dat) > 1
# ranking metrics calculation.
dat2 = ranking_metric(df=dat, method=self.method, pos=phenoPos, neg=phenoNeg,
classes=cls_vector, ascending=self.ascending)
self.ranking = dat2
# filtering out gene sets and build gene sets dictionary
gmt = self.load_gmt(gene_list=dat2.index.values, gmt=self.gene_sets)
self._logger.info("%04d gene_sets used for further statistical testing....."% len(gmt))
self._logger.info("Start to run GSEA...Might take a while..................")
# cpu numbers
self._set_cores()
# compute ES, NES, pval, FDR, RES
dataset = dat if self.permutation_type =='phenotype' else dat2
gsea_results,hit_ind,rank_ES, subsets = gsea_compute_tensor(data=dataset, gmt=gmt, n=self.permutation_num,
weighted_score_type=self.weighted_score_type,
permutation_type=self.permutation_type,
method=self.method,
pheno_pos=phenoPos, pheno_neg=phenoNeg,
| python | {
"resource": ""
} |
q269916 | Prerank.run | test | def run(self):
"""GSEA prerank workflow"""
assert self.min_size <= self.max_size
# parsing rankings
dat2 = self._load_ranking(self.rnk)
assert len(dat2) > 1
# cpu numbers
self._set_cores()
# Start Analysis
self._logger.info("Parsing data files for GSEA.............................")
# filtering out gene sets and build gene sets dictionary
gmt = self.load_gmt(gene_list=dat2.index.values, gmt=self.gene_sets)
self._logger.info("%04d gene_sets used for further statistical testing....."% len(gmt))
self._logger.info("Start to run GSEA...Might take a while..................")
# compute ES, NES, pval, FDR, RES
gsea_results, hit_ind,rank_ES, subsets = gsea_compute(data=dat2, n=self.permutation_num, gmt=gmt,
weighted_score_type=self.weighted_score_type,
permutation_type='gene_set', method=None,
pheno_pos=self.pheno_pos, pheno_neg=self.pheno_neg,
classes=None, ascending=self.ascending,
processes=self._processes, seed=self.seed)
self._logger.info("Start to generate gseapy reports, and produce figures...")
| python | {
"resource": ""
} |
q269917 | SingleSampleGSEA.runSamplesPermu | test | def runSamplesPermu(self, df, gmt=None):
"""Single Sample GSEA workflow with permutation procedure"""
assert self.min_size <= self.max_size
mkdirs(self.outdir)
self.resultsOnSamples = OrderedDict()
outdir = self.outdir
# iterate through each sample
for name, ser in df.iteritems():
self.outdir = os.path.join(outdir, str(name))
self._logger.info("Run Sample: %s " % name)
mkdirs(self.outdir)
# sort ranking values from high to low or reverse
dat2 = ser.sort_values(ascending=self.ascending)
# reset integer index, otherwise it may cause unwanted problems
# df.reset_index(drop=True, inplace=True)
# compute ES, NES, pval, FDR, RES
gsea_results, hit_ind,rank_ES, subsets = gsea_compute(data=dat2, n=self.permutation_num, gmt=gmt,
weighted_score_type=self.weighted_score_type,
permutation_type='gene_set', method=None,
pheno_pos='', pheno_neg='',
classes=None, ascending=self.ascending,
| python | {
"resource": ""
} |
q269918 | SingleSampleGSEA.runSamples | test | def runSamples(self, df, gmt=None):
"""Single Sample GSEA workflow.
multiprocessing utility on samples.
"""
# df.index.values are gene_names
# Save each sample results to odict
self.resultsOnSamples = OrderedDict()
outdir = self.outdir
# run ssgsea for gct expression matrix
#multi-threading
subsets = sorted(gmt.keys())
tempes=[]
names=[]
rankings=[]
pool = Pool(processes=self._processes)
for name, ser in df.iteritems():
#prepare input
dat = ser.sort_values(ascending=self.ascending)
rankings.append(dat)
names.append(name)
genes_sorted, cor_vec = dat.index.values, dat.values
rs = np.random.RandomState(self.seed)
# apply_async
tempes.append(pool.apply_async(enrichment_score_tensor,
args=(genes_sorted, cor_vec, gmt,
self.weighted_score_type,
self.permutation_num, rs, True,
self.scale)))
pool.close()
pool.join()
# save results and plotting
for i, temp in enumerate(tempes):
name, rnk = names[i], rankings[i]
self._logger.info("Calculate Enrichment Score for Sample: %s "%name)
es, esnull, hit_ind, RES = temp.get()
# create results subdir
self.outdir= os.path.join(outdir, str(name))
mkdirs(self.outdir)
| python | {
"resource": ""
} |
q269919 | SingleSampleGSEA._save | test | def _save(self, outdir):
"""save es and stats"""
# save raw ES to one csv file
samplesRawES = pd.DataFrame(self.resultsOnSamples)
samplesRawES.index.name = 'Term|ES'
# normalize enrichment scores by using the entire data set, as indicated
# by Barbie et al., 2009, online methods, pg. 2
samplesNES = samplesRawES / (samplesRawES.values.max() - samplesRawES.values.min())
samplesNES = samplesNES.copy()
samplesNES.index.rename('Term|NES', inplace=True)
self.res2d = samplesNES
self._logger.info("Congratulations. GSEApy runs successfully................\n")
if self._outdir is None: return
# write es
outESfile = os.path.join(outdir, "gseapy.samples.raw.es.txt")
with open(outESfile, 'a') as f:
if self.scale:
f.write('# scale the enrichment scores by number of genes in the gene sets\n')
f.write('# this normalization has no effect on the final NES, ' + \
'as indicated by Barbie et al., 2009, online methods, pg. 2\n') | python | {
"resource": ""
} |
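The NES computation in `_save` above divides every raw enrichment score by the range of all scores in the data set (Barbie et al., 2009, online methods). A small sketch of that normalization:

```python
import pandas as pd

# The normalization used in _save above: divide every raw enrichment
# score by the range (max - min) of all scores in the data set.
def normalize_es(raw_es):
    rng = raw_es.values.max() - raw_es.values.min()
    return raw_es / rng

raw = pd.DataFrame({"s1": [2.0, -1.0], "s2": [4.0, 0.0]},
                   index=["setA", "setB"])
nes = normalize_es(raw)
```

With max 4 and min -1 the range is 5, so setA in s1 becomes 2/5 = 0.4. Because the divisor is a single global constant, any earlier per-set scaling cancels out of relative comparisons, which is why scaling "has no effect on the final NES".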
q269920 | Replot.run | test | def run(self):
"""main replot function"""
assert self.min_size <= self.max_size
assert self.fignum > 0
import glob
from bs4 import BeautifulSoup
# parsing files.......
try:
results_path = glob.glob(self.indir+'*/edb/results.edb')[0]
rank_path = glob.glob(self.indir+'*/edb/*.rnk')[0]
gene_set_path = glob.glob(self.indir+'*/edb/gene_sets.gmt')[0]
except IndexError as e:
sys.stderr.write("Could not locate GSEA files in the given directory!")
sys.exit(1)
# extract sample names from .cls file
cls_path = glob.glob(self.indir+'*/edb/*.cls')
if cls_path:
pos, neg, classes = gsea_cls_parser(cls_path[0])
else:
# logic for prerank results
pos, neg = '',''
# start reploting
self.gene_sets=gene_set_path
# obtain gene sets
gene_set_dict = self.parse_gmt(gmt=gene_set_path)
# obtain rank_metrics
rank_metric = self._load_ranking(rank_path)
correl_vector = rank_metric.values
gene_list = rank_metric.index.values
# extract each enrichment term in the results.edb file and plot.
database = BeautifulSoup(open(results_path), features='xml')
length = len(database.findAll('DTG'))
fig_num = self.fignum if self.fignum <= length else length
for idx in range(fig_num):
# extract statistical results from the results.edb file
enrich_term, hit_ind, nes, pval, fdr= gsea_edb_parser(results_path, index=idx)
| python | {
"resource": ""
} |
q269921 | enrichr | test | def enrichr(gene_list, gene_sets, organism='human', description='',
outdir='Enrichr', background='hsapiens_gene_ensembl', cutoff=0.05,
format='pdf', figsize=(8,6), top_term=10, no_plot=False, verbose=False):
"""Enrichr API.
:param gene_list: Flat file with list of genes, one gene id per row, or a python list object
:param gene_sets: Enrichr Library to query. Required enrichr library name(s). Separate each name by comma.
:param organism: Enrichr supported organism. Select from (human, mouse, yeast, fly, fish, worm).
see here for details: https://amp.pharm.mssm.edu/modEnrichr
:param description: name of analysis. optional.
:param outdir: Output file directory
:param float cutoff: Adjusted P-value (benjamini-hochberg correction) cutoff. Default: 0.05
:param int background: BioMart dataset name for retrieving background gene information.
This argument only works when gene_sets input is a gmt file or python dict.
You could also specify a number by yourself, e.g. total expressed genes number.
In this case, you will skip retrieving background infos from biomart.
Use the code below to see valid background dataset names from BioMart.
Here are example | python | {
"resource": ""
} |
q269922 | Enrichr.parse_genesets | test | def parse_genesets(self):
"""parse gene_sets input file type"""
enrichr_library = self.get_libraries()
if isinstance(self.gene_sets, list):
gss = self.gene_sets
elif isinstance(self.gene_sets, str):
gss = [ g.strip() for g in self.gene_sets.strip().split(",") ]
elif isinstance(self.gene_sets, dict):
gss = [self.gene_sets]
else:
raise Exception("Error parsing enrichr libraries, please provided corrected one")
# gss: a list contain .gmt, dict, enrichr_liraries.
# now, convert .gmt to dict
gss_exist = []
for g in gss:
if isinstance(g, dict):
gss_exist.append(g)
continue
if isinstance(g, str):
if g in enrichr_library:
| python | {
"resource": ""
} |
q269923 | Enrichr.parse_genelists | test | def parse_genelists(self):
"""parse gene list"""
if isinstance(self.gene_list, list):
genes = self.gene_list
elif isinstance(self.gene_list, pd.DataFrame):
# input type is bed file
if self.gene_list.shape[1] >=3:
genes= self.gene_list.iloc[:,:3].apply(lambda x: "\t".join([str(i) for i in x]), axis=1).tolist()
# input type with weight values
elif self.gene_list.shape[1] == 2:
genes= self.gene_list.apply(lambda x: ",".join([str(i) for i in x]), axis=1).tolist()
else:
genes = self.gene_list.squeeze().tolist()
elif isinstance(self.gene_list, pd.Series):
| python | {
"resource": ""
} |
q269924 | Enrichr.send_genes | test | def send_genes(self, gene_list, url):
""" send gene list to enrichr server"""
payload = {
'list': (None, gene_list),
'description': (None, self.descriptions)
}
# response
response = requests.post(url, files=payload)
| python | {
"resource": ""
} |
q269925 | Enrichr.check_genes | test | def check_genes(self, gene_list, usr_list_id):
'''
Compare the genes sent and received to get successfully recognized genes
'''
response = requests.get('http://amp.pharm.mssm.edu/Enrichr/view?userListId=%s' % usr_list_id)
if not response.ok:
raise Exception('Error getting gene list back')
| python | {
"resource": ""
} |
q269926 | Enrichr.get_background | test | def get_background(self):
"""get background gene"""
# input is a file
if os.path.isfile(self.background):
with open(self.background) as b:
bg2 = b.readlines()
bg = [g.strip() for g in bg2]
return set(bg)
# package included data
DB_FILE = resource_filename("gseapy", "data/{}.background.genes.txt".format(self.background))
filename = os.path.join(DEFAULT_CACHE_PATH, "{}.background.genes.txt".format(self.background))
if os.path.exists(filename):
df = pd.read_csv(filename,sep="\t")
elif os.path.exists(DB_FILE):
df = pd.read_csv(DB_FILE,sep="\t")
else:
# background is a biomart database name
| python | {
"resource": ""
} |
q269927 | Enrichr.run | test | def run(self):
"""run enrichr for one sample gene list but multi-libraries"""
# set organism
self.get_organism()
# read input file
genes_list = self.parse_genelists()
gss = self.parse_genesets()
# if gmt
self._logger.info("Connecting to Enrichr Server to get latest library names")
if len(gss) < 1:
sys.stderr.write("Not validated Enrichr library name provided\n")
sys.stdout.write("Hint: use get_library_name() to view full list of supported names")
sys.exit(1)
self.results = pd.DataFrame()
for g in gss:
if isinstance(g, dict):
## local mode
res = self.enrich(g)
shortID, self._gs = str(id(g)), "CUSTOM%s"%id(g)
if res is None:
self._logger.info("No hits return, for gene set: Custom%s"%shortID)
continue
else:
## online mode
self._gs = str(g)
self._logger.debug("Start Enrichr using library: %s" % (self._gs))
self._logger.info('Analysis name: %s, Enrichr Library: %s' % (self.descriptions, self._gs))
shortID, res = self.get_results(genes_list)
# Remember gene set library used
| python | {
"resource": ""
} |
q269928 | cube | test | def cube(script, size=1.0, center=False, color=None):
"""Create a cube primitive
Note that this is made of 6 quads, not triangles
"""
"""# Convert size to list if it isn't already
if not isinstance(size, list):
size = list(size)
# If a single value was supplied use it for all 3 axes
if len(size) == 1:
size = [size[0], size[0], size[0]]"""
size = util.make_list(size, 3)
if script.ml_version == '1.3.4BETA':
filter_name = 'Box'
else:
filter_name = 'Box/Cube'
filter_xml = ''.join([
' <filter name="{}">\n'.format(filter_name),
' <Param name="size" ',
'value="1.0" ',
'description="Scale factor" ', | python | {
"resource": ""
} |
q269929 | icosphere | test | def icosphere(script, radius=1.0, diameter=None, subdivisions=3, color=None):
"""create an icosphere mesh
radius Radius of the sphere
# subdivisions = Subdivision level; Number of the recursive subdivision of the
# surface. Default is 3 (a sphere approximation composed by 1280 faces).
# Admitted values are in the range 0 (an icosahedron) to 8 (a 1.3 MegaTris
# approximation of a sphere). Formula for number of faces: F=20*4^subdiv
# color = specify a color name to apply vertex colors to the newly
| python | {
"resource": ""
} |
q269930 | torus | test | def torus(script, major_radius=3.0, minor_radius=1.0, inner_diameter=None,
outer_diameter=None, major_segments=48, minor_segments=12,
color=None):
"""Create a torus mesh
Args:
major_radius (float, (optional)): radius from the origin to the
center of the cross sections
minor_radius (float, (optional)): radius of the torus cross
section
inner_diameter (float, (optional)): inner diameter of torus. If
both inner_diameter and outer_diameter are provided then
these will override major_radius and minor_radius.,
outer_diameter (float, (optional)): outer diameter of torus. If
both inner_diameter and outer_diameter are provided then
these will override major_radius and minor_radius.
major_segments (int (optional)): number of segments for the main
ring of the torus
minor_segments (int (optional)): number of segments for the minor
ring of the torus
color (str (optional)): color name to apply vertex colors to the
newly created mesh
Returns:
None
"""
if inner_diameter is not None and outer_diameter is not None:
major_radius = (inner_diameter + outer_diameter) / 4
minor_radius = major_radius - inner_diameter / 2
# Ref: inner_diameter = 2 * (major_radius - minor_radius)
# Ref: outer_diameter = 2 | python | {
"resource": ""
} |
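The diameter-to-radius conversion in `torus` above follows from the two reference formulas in its comments: inner_diameter = 2 * (major_radius - minor_radius) and outer_diameter = 2 * (major_radius + minor_radius). Solving for the radii gives exactly the two lines in the function:

```python
# The diameter-to-radius conversion used in torus above, derived from:
#   inner_diameter = 2 * (major_radius - minor_radius)
#   outer_diameter = 2 * (major_radius + minor_radius)
def torus_radii(inner_diameter, outer_diameter):
    major_radius = (inner_diameter + outer_diameter) / 4
    minor_radius = major_radius - inner_diameter / 2
    return major_radius, minor_radius

major, minor = torus_radii(4.0, 8.0)
```

For inner diameter 4 and outer diameter 8 this gives major radius 3 and minor radius 1, and substituting back reproduces both diameters.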
q269931 | plane_hires_edges | test | def plane_hires_edges(script, size=1.0, x_segments=1, y_segments=1,
center=False, color=None):
""" Creates a plane with a specified number of vertices
on its sides, but no vertices in the interior.
Currently used to create a simpler bottom for cube_hires.
"""
size = util.make_list(size, 2)
grid(script, size=[x_segments + y_segments - 1, 1],
x_segments=(x_segments + y_segments - 1), y_segments=1)
if script.ml_version == '1.3.4BETA':
and_val = 'and'
else:
and_val = '&&'
if script.ml_version == '1.3.4BETA': # muparser version: 1.3.2
# Deform left side
transform.vert_function(
script,
x_func='if((y>0) and (x<%s),0,x)' % (y_segments),
y_func='if((y>0) and (x<%s),(x+1)*%s,y)' % (
y_segments, size[1] / y_segments))
# Deform top
transform.vert_function(
script,
x_func='if((y>0) and (x>=%s),(x-%s+1)*%s,x)' % (
y_segments, y_segments, size[0] / x_segments),
y_func='if((y>0) and (x>=%s),%s,y)' % (y_segments, size[1]))
# Deform right side
transform.vert_function(
script,
x_func='if((y<.00001) and (x>%s),%s,x)' % (
x_segments, size[0]),
y_func='if((y<.00001) and (x>%s),(x-%s)*%s,y)' % (
x_segments, x_segments, size[1] / y_segments))
# Deform bottom
transform.vert_function(
script,
x_func='if((y<.00001) and (x<=%s) and (x>0),(x)*%s,x)' % (
x_segments, size[0] / x_segments),
y_func='if((y<.00001) and (x<=%s) and (x>0),0,y)' % (x_segments))
else: # muparser version: 2.2.5
# Deform left side
transform.vert_function(
script,
x_func='((y>0) && (x<{yseg}) ? 0 : x)'.format(yseg=y_segments),
y_func='((y>0) && (x<%s) ? (x+1)*%s : y)' % (
y_segments, size[1] / y_segments))
# Deform top
transform.vert_function(
| python | {
"resource": ""
} |
q269932 | cube_hires | test | def cube_hires(script, size=1.0, x_segments=1, y_segments=1, z_segments=1,
simple_bottom=True, center=False, color=None):
"""Create a box with user defined number of segments in each direction.
Grid spacing is the same as its dimensions (spacing = 1) and its
thickness is one. Intended to be used for e.g. deforming using functions
or a height map (lithopanes) and can be resized after creation.
Warnings: function uses layers.join
top_option
0 open
1 full
2 simple
bottom_option
0 open
1 full
2 simple
"""
"""# Convert size to list if it isn't already
if not isinstance(size, list):
size = list(size)
# If a single value was supplied use it for all 3 axes
if len(size) == 1:
size = [size[0], size[0], size[0]]"""
size = util.make_list(size, 3)
# Top
grid(script,
size,
x_segments,
y_segments)
transform.translate(script, [0, 0, size[2]])
# Bottom
if simple_bottom:
plane_hires_edges(
script, size, x_segments, y_segments)
else:
layers.duplicate(script)
| python | {
"resource": ""
} |
q269933 | color_values | test | def color_values(color):
"""Read color_names.txt and find the red, green, and blue values
for a named color.
"""
# Get the directory where this script file is located:
this_dir = os.path.dirname(
os.path.realpath(
inspect.getsourcefile(
lambda: 0)))
color_name_file = os.path.join(this_dir, 'color_names.txt')
found = False
for line in open(color_name_file, 'r'):
line = line.rstrip()
if color.lower() == line.split()[0]:
#hex_color = line.split()[1]
red = line.split()[2]
| python | {
"resource": ""
} |
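`color_values` above looks up a named color in `color_names.txt` and reads the red, green, and blue fields by position. A sketch of the per-line parsing, where the assumed line layout "name hex red green blue" is inferred from the indices used in the truncated snippet:

```python
# Sketch of the line parsing in color_values above. The assumed format
# of each color_names.txt line is "name hex red green blue ..."; this
# layout is an inference from the field indices in the snippet.
def parse_color_line(line, color):
    fields = line.rstrip().split()
    if color.lower() == fields[0]:
        red, green, blue = fields[2], fields[3], fields[4]
        return red, green, blue
    return None

rgb = parse_color_line("red #ff0000 255 0 0", "RED")
```

The lookup is case-insensitive on the requested name, matching the `color.lower()` comparison in the source.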
q269934 | check_list | test | def check_list(var, num_terms):
""" Check if a variable is a list and is the correct length.
If variable is not a list it will make it a list of the correct length with
all terms identical.
"""
if not isinstance(var, list):
if isinstance(var, tuple):
var = list(var)
else:
var = [var]
for _ in range(1, num_terms):
| python | {
"resource": ""
} |
q269935 | make_list | test | def make_list(var, num_terms=1):
""" Make a variable a list if it is not already
If variable is not a list it will make it a list of the correct length with
all terms identical.
"""
if not isinstance(var, list):
if isinstance(var, tuple):
| python | {
"resource": ""
} |
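`make_list` above is truncated before the repetition step. A plausible completion under the behavior its docstring describes (single values are repeated to the requested length, tuples become lists):

```python
# A plausible completion of make_list above: tuples are converted to
# lists, scalars are wrapped in a list, and a length-1 list is repeated
# to num_terms elements.
def make_list(var, num_terms=1):
    if not isinstance(var, list):
        if isinstance(var, tuple):
            var = list(var)
        else:
            var = [var]
    if len(var) == 1:
        var = var * num_terms
    return var
```

This is how `size = util.make_list(size, 3)` in `cube` turns a single scalar into `[s, s, s]` while leaving an explicit 3-element list untouched.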
q269936 | write_filter | test | def write_filter(script, filter_xml):
""" Write filter to FilterScript object or filename
Args:
script (FilterScript object or filename str): the FilterScript object
or script filename to write the filter to.
filter_xml (str): the xml filter string
"""
if isinstance(script, mlx.FilterScript):
| python | {
"resource": ""
} |
q269937 | ls3loop | test | def ls3loop(script, iterations=1, loop_weight=0, edge_threshold=0,
selected=False):
""" Apply LS3 Subdivision Surface algorithm using Loop's weights.
This refinement method take normals into account.
See: Boye', S. Guennebaud, G. & Schlick, C.
"Least squares subdivision surfaces"
Computer Graphics Forum, 2010.
Alternative weighting schemes are based on the paper:
Barthe, L. & Kobbelt, L.
"Subdivision scheme tuning around extraordinary vertices"
Computer Aided Geometric Design, 2004, 21, 561-583.
The current implementation of these schemes doesn't handle vertices of
valence > 12
Args:
script: the FilterScript object or script filename to write
the filter to.
iterations (int): Number of times the model is subdivided.
loop_weight (int): Change the weights used. Allow to optimize some
behaviours in spite of others. Valid values are:
0 - Loop (default)
1 - Enhance regularity
2 - Enhance continuity
edge_threshold (float): All the edges longer than this threshold will
be refined. Setting this value to zero will force a uniform
refinement.
selected (bool): If selected the filter is performed only on the
selected faces.
Layer stack:
No impacts
MeshLab versions:
2016.12
| python | {
"resource": ""
} |
q269938 | merge_vert | test | def merge_vert(script, threshold=0.0):
""" Merge together all the vertices that are nearer than the specified
threshold. Like a unify duplicate vertices but with some tolerance.
Args:
script: the FilterScript object or script filename to write
the filter to.
threshold (float): Merging distance. All the vertices that are closer
than this threshold are merged together. Use very small values,
default is zero.
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA
""" | python | {
"resource": ""
} |
q269939 | close_holes | test | def close_holes(script, hole_max_edge=30, selected=False,
sel_new_face=True, self_intersection=True):
""" Close holes smaller than a given threshold
Args:
script: the FilterScript object or script filename to write
the filter to.
hole_max_edge (int): The size is expressed as number of edges composing
the hole boundary.
selected (bool): Only the holes with at least one of the boundary faces
selected are closed.
sel_new_face (bool): After closing a hole the faces that have been
created are left selected. Any previous selection is lost. Useful
for example for smoothing or subdividing the newly created holes.
self_intersection (bool): When closing a hole the filter tries to
prevent the creation of faces that intersect faces adjacent to the
boundary of the hole. This is a heuristic; non-intersecting hole
filling can be NP-complete.
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA
"""
filter_xml = ''.join([
' <filter name="Close Holes">\n',
' <Param name="maxholesize" ',
'value="{:d}" '.format(hole_max_edge),
| python | {
"resource": ""
} |
q269940 | split_vert_on_nonmanifold_face | test | def split_vert_on_nonmanifold_face(script, vert_displacement_ratio=0.0):
""" Split non-manifold vertices until it becomes two-manifold.
Args:
script: the FilterScript object or script filename to write
the filter to.
vert_displacement_ratio (float): When a vertex is split it is moved
along the average vector going from its position to the centroid
of the FF connected faces sharing it.
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA
"""
| python | {
"resource": ""
} |
q269941 | snap_mismatched_borders | test | def snap_mismatched_borders(script, edge_dist_ratio=0.01, unify_vert=True):
""" Try to snap together adjacent borders that are slightly mismatched.
This situation can happen on badly triangulated adjacent patches defined by
high order surfaces. For each border vertex the filter snaps it onto the
closest boundary edge only if it is closest of edge_legth*threshold. When
vertex is snapped the corresponding face it split and a new vertex is
created.
Args:
script: the FilterScript object or script filename to write
the filter to.
edge_dist_ratio (float): Collapse edge when the edge / distance ratio
is greater than this value. E.g. for default value 1000 two
straight border edges are collapsed if the central vertex dist from
the straight line composed by the two edges less than a 1/1000 of
the sum of the edges length. Larger values enforce that only
vertices very close to the line are removed.
unify_vert (bool): If true the snap vertices are welded together.
Layer stack:
No impacts
MeshLab versions:
| python | {
"resource": ""
} |
q269942 | translate | test | def translate(script, value=(0.0, 0.0, 0.0)):
"""An alternative translate implementation that uses a geometric function.
This is more accurate than the built-in version."""
# Convert value to list if it isn't already
if not isinstance(value, list):
value = list(value)
vert_function(script,
| python | {
"resource": ""
} |
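The truncated `vert_function` call in `translate` presumably builds one muparser expression per axis; a hedged sketch of that string construction (the exact expression format used by the library is an assumption):

```python
def translate_funcs(value):
    """Build per-axis muparser offset expressions for a translation.
    The parentheses keep negative offsets well-formed, e.g. 'x+(-2.0)'."""
    return ('x+({})'.format(value[0]),
            'y+({})'.format(value[1]),
            'z+({})'.format(value[2]))
```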
q269943 | rotate | test | def rotate(script, axis='z', angle=0.0):
"""An alternative rotate implementation that uses a geometric function.
This is more accurate than the built-in version."""
angle = math.radians(angle)
if axis.lower() == 'x':
vert_function(script,
x_func='x',
y_func='y*cos({angle})-z*sin({angle})'.format(angle=angle),
z_func='y*sin({angle})+z*cos({angle})'.format(angle=angle))
elif axis.lower() == 'y':
vert_function(script,
x_func='z*sin({angle})+x*cos({angle})'.format(angle=angle),
y_func='y',
z_func='z*cos({angle})-x*sin({angle})'.format(angle=angle))
| python | {
"resource": ""
} |
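The per-axis formulas that `rotate` writes as muparser strings can be checked numerically in plain Python. The x and y branches below mirror the strings in the source; the z branch is truncated above, so the standard z-rotation formula is assumed:

```python
import math

def rotate_point(point, axis, angle_deg):
    """Apply the same per-axis formulas rotate() emits as muparser strings."""
    x, y, z = point
    a = math.radians(angle_deg)
    if axis == 'x':
        return (x, y*math.cos(a) - z*math.sin(a), y*math.sin(a) + z*math.cos(a))
    if axis == 'y':
        return (z*math.sin(a) + x*math.cos(a), y, z*math.cos(a) - x*math.sin(a))
    # z axis: assumed standard rotation (the source's z branch is cut off)
    return (x*math.cos(a) - y*math.sin(a), x*math.sin(a) + y*math.cos(a), z)
```

For example, rotating (1, 0, 0) by 90 degrees about z should land on the +Y axis.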
q269944 | scale | test | def scale(script, value=1.0):
"""An alternative scale implementation that uses a geometric function.
This is more accurate than the built-in version."""
"""# Convert value to list if it isn't already
if not isinstance(value, list):
value = list(value)
# If a single value was supplied use it for all 3 axes
if len(value) == 1: | python | {
"resource": ""
} |
q269945 | function_cyl_co | test | def function_cyl_co(script, r_func='r', theta_func='theta', z_func='z'):
"""Geometric function using cylindrical coordinates.
Define functions in Z up cylindrical coordinates, with radius 'r',
angle 'theta', and height 'z'
See "function" docs for additional usage info and accepted parameters.
Args:
r_func (str): function to generate new coordinates for radius
theta_func (str): function to generate new coordinates for angle.
0 degrees is on the +X axis.
z_func (str): function to generate new coordinates for height
Layer stack:
| python | {
"resource": ""
} |
q269946 | wrap2cylinder | test | def wrap2cylinder(script, radius=1, pitch=0, taper=0, pitch_func=None,
taper_func=None):
"""Deform mesh around cylinder of radius and axis z
y = 0 will be on the surface of radius "radius"
pitch != 0 will create a helix, with distance "pitch" traveled in z for each rotation
taper = change in r over z. E.g. a value of 0.5 will shrink r by 0.5 for every z length of 1
"""
"""vert_function(s=s, x='(%s+y-taper)*sin(x/(%s+y))' % (radius, radius),
y='(%s+y)*cos(x/(%s+y))' % (radius, radius),
z='z-%s*x/(2*%s*(%s+y))' % (pitch, pi, radius))"""
if pitch_func is None:
pitch_func = '-(pitch)*x/(2*pi*(radius))'
pitch_func = pitch_func.replace(
'pitch', str(pitch)).replace(
'pi', str(math.pi)).replace(
'radius', str(radius))
| python | {
"resource": ""
} |
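`wrap2cylinder` bakes numeric values into its muparser template with chained `str.replace` calls. The replacement order matters: `'pitch'` must be substituted before `'pi'`, because `'pitch'` contains the substring `'pi'` (the source does this correctly). A self-contained sketch of that substitution:

```python
import math

def substitute(template, pitch, radius):
    """Bake numbers into a muparser template the way wrap2cylinder does.
    'pitch' is replaced before 'pi' to avoid corrupting the token."""
    return (template.replace('pitch', str(pitch))
                    .replace('pi', str(math.pi))
                    .replace('radius', str(radius)))
```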
q269947 | bend | test | def bend(script, radius=1, pitch=0, taper=0, angle=0, straght_start=True,
straght_end=False, radius_limit=None, outside_limit_end=True):
"""Bends mesh around cylinder of radius radius and axis z to a certain angle
straight_ends: Only apply twist (pitch) over the area that is bent
outside_limit_end (bool): should values outside of the bend radius_limit be considered part
of the end (True) or the start (False)?
"""
if radius_limit is None:
radius_limit = 2 * radius
# TODO: add limit so bend only applies over y<2*radius; add option to set
# larger limit
angle = math.radians(angle)
segment = radius * angle
"""vert_function(s=s, x='if(x<%s and x>-%s, (%s+y)*sin(x/%s), (%s+y)*sin(%s/%s)+(x-%s)*cos(%s/%s))'
% (segment, segment, radius, radius, radius, segment, radius, segment, segment, radius),
y='if(x<%s*%s/2 and x>-%s*%s/2, (%s+y)*cos(x/%s), (%s+y)*cos(%s)-(x-%s*%s)*sin(%s))'
% (radius, angle, radius, angle, radius, radius, radius, angle/2, radius, angle/2, angle/2),"""
pitch_func = '-(pitch)*x/(2*pi*(radius))'.replace(
'pitch', str(pitch)).replace(
'pi', str(math.pi)).replace(
'radius', str(radius))
taper_func = '(taper)*(pitch_func)'.replace(
'taper', str(taper)).replace(
'pitch_func', str(pitch_func)).replace(
'pi', str(math.pi))
# y<radius_limit
if outside_limit_end:
x_func = 'if(x<(segment) and y<(radius_limit), if(x>0, (y+(radius)+(taper_func))*sin(x/(radius)), x), (y+(radius)+(taper_func))*sin(angle)+(x-(segment))*cos(angle))'
else:
x_func = 'if(x<(segment), if(x>0 and y<(radius_limit), (y+(radius)+(taper_func))*sin(x/(radius)), x), if(y<(radius_limit), (y+(radius)+(taper_func))*sin(angle)+(x-(segment))*cos(angle), x))'
x_func = x_func.replace(
# x_func = 'if(x<segment, if(x>0, (y+radius)*sin(x/radius), x),
# (y+radius)*sin(angle)-segment)'.replace(
'segment', str(segment)).replace(
'radius_limit', str(radius_limit)).replace(
'radius', str(radius)).replace(
'taper_func', str(taper_func)).replace(
'angle', str(angle))
if outside_limit_end:
y_func = 'if(x<(segment) and y<(radius_limit), if(x>0, (y+(radius)+(taper_func))*cos(x/(radius))-(radius), y), (y+(radius)+(taper_func))*cos(angle)-(x-(segment))*sin(angle)-(radius))'
else:
y_func = 'if(x<(segment), if(x>0 and y<(radius_limit), (y+(radius)+(taper_func))*cos(x/(radius))-(radius), y), if(y<(radius_limit), (y+(radius)+(taper_func))*cos(angle)-(x-(segment))*sin(angle)-(radius), y))'
y_func = | python | {
"resource": ""
} |
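The `segment` value computed in `bend` is the arc length wrapped onto the cylinder, i.e. s = r * theta with theta in radians. A minimal check of that relationship:

```python
import math

def bend_segment(radius, angle_deg):
    """Length of the x interval that gets wrapped onto the bent arc: s = r * theta."""
    return radius * math.radians(angle_deg)
```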
q269948 | deform2curve | test | def deform2curve(script, curve=mp_func.torus_knot('t'), step=0.001):
""" Deform a mesh along a parametric curve function
Provide a parametric curve function with z as the parameter. This will
deform the xy cross section of the mesh along the curve as z increases.
Source: http://blackpawn.com/texts/pqtorus/
Methodology:
T = P' - P
N1 = P' + P
B = T x N1
N = B x T
newPoint = point.x*N + point.y*B
"""
| python | {
"resource": ""
} |
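The frame construction in the `deform2curve` docstring (T = P' - P, N1 = P' + P, B = T x N1, N = B x T) can be sketched with finite differences. Here `curve` is any parametric function of t, standing in for `mp_func.torus_knot`:

```python
def frame(curve, t, step=1e-3):
    """Build the (T, B, N) frame described in the deform2curve docstring."""
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))
    def add(a, b):
        return tuple(ai + bi for ai, bi in zip(a, b))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])
    p, p_next = curve(t), curve(t + step)
    tangent = sub(p_next, p)   # T = P' - P
    n1 = add(p_next, p)        # N1 = P' + P
    b = cross(tangent, n1)     # B = T x N1
    n = cross(b, tangent)      # N = B x T
    return tangent, b, n
```

By construction B is perpendicular to T, and N is perpendicular to both, which is what makes `point.x*N + point.y*B` a valid cross-section mapping.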
q269949 | vc2tex | test | def vc2tex(script, tex_name='TEMP3D_texture.png', tex_width=1024,
tex_height=1024, overwrite_tex=False, assign_tex=False,
fill_tex=True):
"""Transfer vertex colors to texture colors
Args:
script: the FilterScript object or script filename to write
the filter to.
tex_name (str): The texture file to be created
tex_width (int): The texture width
tex_height (int): The texture height
overwrite_tex (bool): If current mesh has a texture will be overwritten (with provided texture dimension)
assign_tex (bool): Assign the newly created texture
fill_tex (bool): If enabled the unmapped texture space is colored using a pull push filling algorithm, if false is set to black
"""
filter_xml = ''.join([
' <filter name="Vertex Color to Texture">\n',
' <Param name="textName" ',
'value="%s" ' % tex_name,
'description="Texture file" ',
'type="RichString" ',
'/>\n',
' <Param name="textW" ',
'value="%d" ' % tex_width,
'description="Texture width (px)" ',
'type="RichInt" ',
'/>\n',
' | python | {
"resource": ""
} |
q269950 | mesh2fc | test | def mesh2fc(script, all_visible_layers=False):
"""Transfer mesh colors to face colors
Args:
script: the FilterScript object or script filename to write
the filter to.
all_visible_layers (bool): If true the color mapping is applied to all the meshes
"""
filter_xml = ''.join([
' <filter name="Transfer Color: Mesh to Face">\n',
| python | {
"resource": ""
} |
q269951 | uniform_resampling | test | def uniform_resampling(script, voxel=1.0, offset=0.0, merge_vert=True,
discretize=False, multisample=False, thicken=False):
""" Create a new mesh that is a resampled version of the current one.
The resampling is done by building a uniform volumetric representation
where each voxel contains the signed distance from the original surface.
The resampled surface is reconstructed using the marching cube algorithm
over this volume.
Args:
script: the FilterScript object or script filename to write
the filter to.
voxel (float): voxel (cell) size for resampling. Smaller cells give
better precision at a higher computational cost. Remember that
halving the cell size means that you build a volume 8 times larger.
offset (float): offset amount of the created surface (i.e. distance of
the created surface from the original one). If offset is zero, the
created surface passes on the original mesh itself. Values greater
than zero mean an external surface (offset), and lower than zero
mean an internal surface (inset). In practice this value is the
threshold passed to the Marching Cube algorithm to extract the
isosurface from the distance field representation.
merge_vert (bool): if True the mesh generated by MC will be cleaned by
unifying vertices that are almost coincident.
discretize (bool): if True the position of the intersected edge of the
marching cube grid is not computed by linear interpolation, but it
is placed in fixed middle position. As a consequence the resampled
object will look severely aliased by a stairstep appearance. Useful
only for simulating the output of 3D printing devices.
multisample (bool): if True the distance field is more accurately
compute by multisampling the volume (7 sample for each voxel). Much
slower but less artifacts.
thicken (bool): if True, you have to choose a non zero Offset and a
double surface is built around the original surface, inside and
outside. Is useful to convert thin floating surfaces into solid,
thick meshes.
Layer stack:
Creates 1 new layer 'Offset mesh'
Current layer is changed to new layer
MeshLab versions:
2016.12
1.3.4BETA
| python | {
"resource": ""
} |
q269952 | surface_poisson_screened | test | def surface_poisson_screened(script, visible_layer=False, depth=8,
full_depth=5, cg_depth=0, scale=1.1,
samples_per_node=1.5, point_weight=4.0,
iterations=8, confidence=False, pre_clean=False):
""" This surface reconstruction algorithm creates watertight
surfaces from oriented point sets.
The filter uses the original code of Michael Kazhdan and Matthew Bolitho
implementing the algorithm in the following paper:
Michael Kazhdan, Hugues Hoppe,
"Screened Poisson surface reconstruction"
ACM Trans. Graphics, 32(3), 2013
Args:
script: the FilterScript object or script filename to write
the filter to.
visible_layer (bool): If True all the visible layers will be used for
providing the points
depth (int): This integer is the maximum depth of the tree that will
be used for surface reconstruction. Running at depth d corresponds
to solving on a voxel grid whose resolution is no larger than
2^d x 2^d x 2^d. Note that since the reconstructor adapts the
octree to the sampling density, the specified reconstruction depth
is only an upper bound. The default value for this parameter is 8.
full_depth (int): This integer specifies the depth beyond depth the
octree will be adapted. At coarser depths, the octree will be
complete, containing all 2^d x 2^d x 2^d nodes. The default value
for this parameter is 5.
cg_depth (int): This integer is the depth up to which a
conjugate-gradients solver will be used to solve the linear system.
Beyond this depth Gauss-Seidel relaxation will be used. The default
value for this parameter is 0.
scale (float): This floating point value specifies the ratio between
the diameter of the cube used for reconstruction and the diameter
of the samples' bounding cube. The default value is 1.1.
samples_per_node (float): This floating point value specifies the
minimum number of sample points that should fall within an octree
node as the octree construction is adapted to sampling density. For
noise-free samples, small values in the range [1.0 - 5.0] can be
used. For more noisy samples, larger values in the range
[15.0 - 20.0] may be needed to provide a smoother, noise-reduced,
reconstruction. The default value is 1.5.
point_weight (float): This floating point value specifies the
importance that interpolation of the point samples is given in the
formulation of the screened Poisson equation. The results of the
original (unscreened) Poisson Reconstruction can be obtained by
setting this value to 0. The default value for this parameter is 4.
| python | {
"resource": ""
} |
q269953 | voronoi | test | def voronoi(script, hole_num=50, target_layer=None, sample_layer=None, thickness=0.5, backward=True):
""" Turn a model into a surface with Voronoi style holes in it
References:
http://meshlabstuff.blogspot.com/2009/03/creating-voronoi-sphere.html
http://meshlabstuff.blogspot.com/2009/04/creating-voronoi-sphere-2.html
Requires FilterScript object
Args:
script: the FilterScript object to write the filter to. Does not
work with a script filename.
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA
"""
if target_layer is None:
target_layer = script.current_layer()
| python | {
"resource": ""
} |
q269954 | all | test | def all(script, face=True, vert=True):
""" Select all the faces of the current mesh
Args:
script: the FilterScript object or script filename to write
the filter to.
faces (bool): If True the filter will select all the faces.
verts (bool): If True the filter will select all the vertices.
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA
"""
filter_xml = ''.join([
' <filter name="Select All">\n',
' <Param name="allFaces" ',
'value="{}" '.format(str(face).lower()),
| python | {
"resource": ""
} |
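These filters assemble their XML by string concatenation; note the `str(face).lower()` idiom, which turns Python booleans into the `'true'`/`'false'` literals the filter script expects. A tiny helper in the same style (illustrative only, not part of the library):

```python
def bool_param(name, value, description):
    """Render a RichBool <Param> line the way the filters above do."""
    return ''.join([
        '    <Param name="{}" '.format(name),
        'value="{}" '.format(str(value).lower()),
        'description="{}" '.format(description),
        'type="RichBool" ',
        '/>\n'])
```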
q269955 | vert_quality | test | def vert_quality(script, min_quality=0.0, max_quality=0.05, inclusive=True):
""" Select all the faces and vertexes within the specified vertex quality
range.
Args:
script: the FilterScript object or script filename to write
the filter to.
min_quality (float): Minimum acceptable quality value.
max_quality (float): Maximum acceptable quality value.
inclusive (bool): If True only the faces with ALL the vertices within
the specified range are selected. Otherwise any face with at least
one vertex within the range is selected.
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA
"""
filter_xml = ''.join([
' <filter name="Select by Vertex Quality">\n',
' <Param name="minQ" ',
'value="{}" '.format(min_quality),
'description="Min Quality" ',
'min="0" ',
'max="{}" '.format(2 * max_quality),
'type="RichDynamicFloat" ',
'/>\n', | python | {
"resource": ""
} |
q269956 | face_function | test | def face_function(script, function='(fi == 0)'):
"""Boolean function using muparser lib to perform face selection over
current mesh.
See help(mlx.muparser_ref) for muparser reference documentation.
It's possible to use parenthesis, per-vertex variables and boolean operator:
(, ), and, or, <, >, =
It's possible to use per-face variables like attributes associated to the three
vertices of every face.
Variables (per face):
x0, y0, z0 for first vertex; x1,y1,z1 for second vertex; x2,y2,z2 for third vertex
nx0, ny0, nz0, nx1, ny1, nz1, etc. for vertex normals
r0, g0, b0, a0, etc. for vertex color
q0, q1, q2 for quality
wtu0, wtv0, wtu1, wtv1, wtu2, wtv2 (per wedge texture coordinates)
ti for face texture index (>= ML2016.12)
vsel0, vsel1, vsel2 for vertex selection (1 yes, 0 no) (>= ML2016.12)
fr, fg, fb, fa for face color (>= ML2016.12)
fq for face quality (>= ML2016.12)
fnx, fny, fnz for face normal (>= ML2016.12)
fsel face selection (1 yes, 0 no) (>= ML2016.12)
Args:
script: the FilterScript object or script filename to write
the filter to.
| python | {
"resource": ""
} |
q269957 | vert_function | test | def vert_function(script, function='(q < 0)', strict_face_select=True):
"""Boolean function using muparser lib to perform vertex selection over current mesh.
See help(mlx.muparser_ref) for muparser reference documentation.
It's possible to use parenthesis, per-vertex variables and boolean operator:
(, ), and, or, <, >, =
It's possible to use the following per-vertex variables in the expression:
Variables:
x, y, z (coordinates)
nx, ny, nz (normal)
r, g, b, a (color)
q (quality)
rad
vi (vertex index)
vtu, vtv (texture coordinates)
ti (texture index)
vsel (is the vertex selected? 1 yes, 0 no)
and all custom vertex attributes already defined by user.
Args:
script: the FilterScript object or script filename to write
the filter to.
function (str): a boolean function that will be evaluated in order
to select a subset of vertices. Example: (y > 0) and (ny > 0)
strict_face_select (bool): if True a face is selected if ALL its
vertices are selected. If False a face is selected if at least
one of its vertices is selected. ML v1.3.4BETA only; this is
ignored in 2016.12. In 2016.12 only vertices are selected.
Layer stack:
| python | {
"resource": ""
} |
q269958 | cylindrical_vert | test | def cylindrical_vert(script, radius=1.0, inside=True):
"""Select all vertices within a cylindrical radius
Args:
radius (float): radius of the sphere
center_pt (3 coordinate tuple or list): center point of the sphere | python | {
"resource": ""
} |
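The truncated body of `cylindrical_vert` presumably feeds `vert_function` a radial predicate; the per-vertex test such a selection evaluates (the z-axis orientation and the comparison are inferred from the docstring, not confirmed by the source) is:

```python
def in_cylinder(point, radius, inside=True):
    """Per-vertex membership test for a z-axis cylinder of the given radius."""
    x, y, _z = point
    hit = (x * x + y * y) <= radius * radius
    return hit if inside else not hit
```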
q269959 | spherical_vert | test | def spherical_vert(script, radius=1.0, center_pt=(0.0, 0.0, 0.0)):
"""Select all vertices within a spherical radius
Args:
radius (float): radius of the sphere
center_pt (3 coordinate tuple or list): center point of the sphere
Layer stack:
No impacts
MeshLab versions:
| python | {
"resource": ""
} |
q269960 | join | test | def join(script, merge_visible=True, merge_vert=False, delete_layer=True,
keep_unreferenced_vert=False):
""" Flatten all or only the visible layers into a single new mesh.
Transformations are preserved. Existing layers can be optionally
deleted.
Args:
script: the mlx.FilterScript object or script filename to write
the filter to.
merge_visible (bool): merge only visible layers
merge_vert (bool): merge the vertices that are duplicated among
different layers. Very useful when the layers are spliced portions
of a single big mesh.
delete_layer (bool): delete all the merged layers. If all layers are
visible only a single layer will remain after the invocation of
this filter.
keep_unreferenced_vert (bool): Do not discard unreferenced vertices
from source layers. Necessary for point-only layers.
Layer stack:
Creates a new layer "Merged Mesh"
Changes current layer to the new layer
Optionally deletes all other layers
MeshLab versions:
2016.12
1.3.4BETA
Bugs:
UV textures: not currently preserved, however will be in a future
release. https://github.com/cnr-isti-vclab/meshlab/issues/128
merge_visible: it is not currently possible to change the layer
visibility from meshlabserver, however this will be possible
in the future https://github.com/cnr-isti-vclab/meshlab/issues/123
"""
filter_xml = ''.join([
' <filter name="Flatten Visible Layers">\n',
' <Param name="MergeVisible" ',
'value="{}" '.format(str(merge_visible).lower()),
| python | {
"resource": ""
} |
q269961 | rename | test | def rename(script, label='blank', layer_num=None):
""" Rename layer label
Can be useful for outputting mlp files, as the output file names use
the labels.
Args:
script: the mlx.FilterScript object or script filename to write
the filter to.
label (str): new label for the mesh layer
layer_num (int): layer number to rename. Default is the
current layer. Not supported on the file base API.
Layer stack:
Renames a layer
MeshLab versions:
2016.12
1.3.4BETA
"""
| python | {
"resource": ""
} |
q269962 | change | test | def change(script, layer_num=None):
""" Change the current layer by specifying the new layer number.
Args:
script: the mlx.FilterScript object or script filename to write
the filter to.
layer_num (int): the number of the layer to change to. Default is the
last layer if script is a mlx.FilterScript object; if script is a
filename the default is the first layer.
Layer stack:
Modifies current layer
MeshLab versions:
| python | {
"resource": ""
} |
q269963 | duplicate | test | def duplicate(script, layer_num=None):
""" Duplicate a layer.
New layer label is '*_copy'.
Args:
script: the mlx.FilterScript object or script filename to write
the filter to.
layer_num (int): layer number to duplicate. Default is the
current layer. Not supported on the file base API.
Layer stack:
Creates a new layer
Changes current layer to the new layer
MeshLab versions:
2016.12
1.3.4BETA
"""
filter_xml = ' <filter name="Duplicate Current layer"/>\n'
if isinstance(script, mlx.FilterScript):
if (layer_num is None) or (layer_num == script.current_layer()):
| python | {
"resource": ""
} |
q269964 | delete_lower | test | def delete_lower(script, layer_num=None):
""" Delete all layers below the specified one.
Useful for MeshLab ver 2016.12, which will only output layer | python | {
"resource": ""
} |
q269965 | handle_error | test | def handle_error(program_name, cmd, log=None):
"""Subprocess program error handling
Args:
program_name (str): name of the subprocess program
Returns:
break_now (bool): indicate whether calling program should break out of loop
"""
print('\nHouston, we have a problem.',
'\n%s did not finish successfully. Review the log' % program_name,
'file and the input file(s) to see what went wrong.')
print('%s command: "%s"' % (program_name, cmd))
if log is not None:
print('log: "%s"' % log)
print('Where do we go from here?')
print(' r - retry running %s (probably after' % program_name,
'you\'ve fixed any problems with the input files)')
print(' c - continue on with the script (probably after',
'you\'ve manually re-run and generated the desired',
'output file(s)')
print(' x - exit, keeping the TEMP3D files and log')
print(' xd - exit, deleting the TEMP3D files and log')
while True:
choice = input('Select r, c, x (default), or xd: ')
if choice not in ('r', 'c', 'x', 'xd'):
#print('Please enter a valid option.')
| python | {
"resource": ""
} |
q269966 | begin | test | def begin(script='TEMP3D_default.mlx', file_in=None, mlp_in=None):
"""Create new mlx script and write opening tags.
Performs special processing on stl files.
If no input files are provided this will create a dummy
file and delete it as the first filter. This works around
the meshlab limitation that it must be provided an input
file, even if you will be creating a mesh as the first
filter.
"""
script_file = open(script, 'w')
script_file.write(''.join(['<!DOCTYPE FilterScript>\n',
'<FilterScript>\n']))
script_file.close()
current_layer = -1
last_layer = -1
stl = False
# Process project files first
if mlp_in is not None:
# make a list if it isn't already
if not isinstance(mlp_in, list):
mlp_in = [mlp_in]
for val in mlp_in:
tree = ET.parse(val)
#root = tree.getroot()
for elem in tree.iter(tag='MLMesh'):
filename = (elem.attrib['filename'])
current_layer += 1
last_layer += 1
# If the mesh file extension is stl, change to that layer and
# run clean.merge_vert
if os.path.splitext(filename)[1][1:].strip().lower() == 'stl':
layers.change(script, current_layer)
clean.merge_vert(script)
stl = True
# Process separate input files next
if file_in is not None:
# make a list if it isn't already
if not isinstance(file_in, list):
file_in = [file_in]
for val in file_in:
current_layer += 1
last_layer += 1
# If the mesh file extension is stl, change to that layer and
| python | {
"resource": ""
} |
q269967 | FilterScript.add_layer | test | def add_layer(self, label, change_layer=True):
""" Add new mesh layer to the end of the stack
Args:
label (str): new label for the mesh layer
change_layer (bool): change to the newly created layer
"""
| python | {
"resource": ""
} |
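`add_layer` and `del_layer` imply a simple list-plus-index model of the layer stack; a minimal standalone sketch of that bookkeeping (the class name and the exact current-layer adjustment rule are inferred, not the real `FilterScript` internals):

```python
class LayerStack:
    """Minimal model of the layer bookkeeping used by FilterScript."""
    def __init__(self):
        self.layers = []      # one label per mesh layer
        self.current = -1     # index of the current layer
    def add_layer(self, label, change_layer=True):
        self.layers.append(label)
        if change_layer:
            self.current = len(self.layers) - 1
    def del_layer(self, layer_num):
        del self.layers[layer_num]
        if self.current >= layer_num:  # adjust current layer if needed
            self.current -= 1
```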
q269968 | FilterScript.del_layer | test | def del_layer(self, layer_num):
""" Delete mesh layer """
del self.layer_stack[layer_num]
# Adjust current layer if needed
| python | {
"resource": ""
} |
q269969 | FilterScript.save_to_file | test | def save_to_file(self, script_file):
""" Save filter script to an mlx file """
# TODO: rasie exception here instead?
if not self.filters:
print('WARNING: no filters to save to file!')
script_file_descriptor = open(script_file, 'w')
| python | {
"resource": ""
} |
q269970 | FilterScript.run_script | test | def run_script(self, log=None, ml_log=None, mlp_out=None, overwrite=False,
file_out=None, output_mask=None, script_file=None, print_meshlabserver_output=True):
""" Run the script
"""
temp_script = False
temp_ml_log = False
if self.__no_file_in:
# If no input files are provided, create a dummy file
# with a single vertex and delete it first in the script.
# This works around the fact that meshlabserver will
# not run without an input file.
temp_file_in_file = tempfile.NamedTemporaryFile(delete=False, suffix='.xyz', dir=os.getcwd())
temp_file_in_file.write(b'0 0 0')
temp_file_in_file.close()
self.file_in = [temp_file_in_file.name]
if not self.filters:
script_file = None
elif script_file is None:
# Create temporary script file
temp_script = True
temp_script_file = tempfile.NamedTemporaryFile(delete=False, suffix='.mlx')
temp_script_file.close()
self.save_to_file(temp_script_file.name)
script_file = temp_script_file.name
if (self.parse_geometry or self.parse_topology or self.parse_hausdorff) and (ml_log is None):
# create temp ml_log
temp_ml_log = True
ml_log_file = tempfile.NamedTemporaryFile(delete=False, suffix='.txt')
ml_log_file.close()
ml_log = ml_log_file.name
if file_out is | python | {
"resource": ""
} |
q269971 | main | test | def main():
"""Run main script"""
# segments = number of segments to use for circles
segments = 50
# star_points = number of points (or sides) of the star
star_points = 5
# star_radius = radius of circle circumscribing the star
star_radius = 2
# ring_thickness = thickness of the colored rings
ring_thickness = 1
# sphere_radius = radius of sphere the shield will be deformed to
sphere_radius = 2 * (star_radius + 3 * ring_thickness)
# Star calculations:
# Visually approximate a star by using multiple diamonds (i.e. scaled
# squares) which overlap in the center. For the star calculations,
# consider a central polygon with triangles attached to the edges, all
# circumscribed by a circle.
# polygon_radius = distance from center of circle to polygon edge midpoint
polygon_radius = star_radius / \
(1 + math.tan(math.radians(180 / star_points)) /
math.tan(math.radians(90 / star_points)))
# width = 1/2 width of polygon edge/outer triangle bottom
width = polygon_radius * math.tan(math.radians(180 / star_points))
# height = height of outer triangle
height = width / math.tan(math.radians(90 / star_points))
shield = mlx.FilterScript(file_out="shield.ply")
# Create the colored front of the shield using several concentric
# annuluses; combine them together and subdivide so we have more vertices
# to give a smoother deformation later.
mlx.create.annulus(shield, radius=star_radius, cir_segments=segments, color='blue')
mlx.create.annulus(shield,
radius1=star_radius + ring_thickness,
| python | {
"resource": ""
} |
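The star construction in `main` decomposes the star into a central polygon with triangles on its edges. Packaging the same formulas makes the geometry easy to sanity-check: the tip of each triangle should land exactly on the circumscribing circle, i.e. polygon_radius + height == star_radius.

```python
import math

def star_dimensions(star_points, star_radius):
    """Polygon/triangle decomposition used by main() for the star."""
    polygon_radius = star_radius / (
        1 + math.tan(math.radians(180 / star_points)) /
        math.tan(math.radians(90 / star_points)))
    width = polygon_radius * math.tan(math.radians(180 / star_points))
    height = width / math.tan(math.radians(90 / star_points))
    return polygon_radius, width, height
```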
q269972 | hausdorff_distance | test | def hausdorff_distance(script, sampled_layer=1, target_layer=0,
save_sample=False, sample_vert=True, sample_edge=True,
sample_faux_edge=False, sample_face=True,
sample_num=1000, maxdist=10):
""" Compute the Hausdorff Distance between two meshes, sampling one of the
two and finding for each sample the closest point over the other mesh.
Args:
script: the FilterScript object or script filename to write
the filter to.
sampled_layer (int): The mesh layer whose surface is sampled. For each
sample we search the closest point on the target mesh layer.
target_layer (int): The mesh that is sampled for the comparison.
save_sample (bool): Save the position and distance of all the used
samples on both the two surfaces, creating two new layers with two
point clouds representing the used samples.
sample_vert (bool): For the search of maxima it is useful to sample
vertices and edges of the mesh with a greater care. It is quite
probable that the farthest points falls along edges or on mesh
vertexes, and with uniform montecarlo sampling approaches the
probability of taking a sample over a vertex or an edge is
theoretically null. On the other hand this kind of sampling could
make the overall sampling distribution slightly biased and slightly
affects the cumulative results.
sample_edge (bool): see sample_vert
sample_faux_edge (bool): see sample_vert
sample_face (bool): see sample_vert
sample_num (int): The desired number of samples. It can be smaller or
larger than the mesh size, and according to the chosen sampling
strategy it will try to adapt.
maxdist (int): Sample points for which we do not find anything within
this distance are rejected and not considered neither for averaging
nor for max.
Layer stack:
If save_sample is True, two new layers are created: 'Hausdorff Closest
Points' and 'Hausdorff Sample Point'; and the current layer is
changed to the last newly created layer.
If save_sample is False, no impacts
MeshLab versions:
2016.12
1.3.4BETA
"""
# MeshLab defaults:
# sample_num = number of vertices
# maxdist = 0.05 * AABB['diag'] #5% of AABB[diag]
# maxdist_max = AABB['diag']
maxdist_max = 2*maxdist
# TODO: parse output (min, max, mean, etc.)
filter_xml = ''.join([
' <filter name="Hausdorff Distance">\n',
' <Param name="SampledMesh" ',
'value="{:d}" '.format(sampled_layer),
'description="Sampled Mesh" ',
'type="RichMesh" ',
'/>\n',
| python | {
"resource": ""
} |
q269973 | poisson_disk | test | def poisson_disk(script, sample_num=1000, radius=0.0,
montecarlo_rate=20, save_montecarlo=False,
approx_geodesic_dist=False, subsample=False, refine=False,
refine_layer=0, best_sample=True, best_sample_pool=10,
exact_num=False, radius_variance=1.0):
""" Create a new layer populated with a point sampling of the current mesh.
Samples are generated according to a Poisson-disk distribution, using the
algorithm described in:
'Efficient and Flexible Sampling with Blue Noise Properties of Triangular Meshes'
Massimiliano Corsini, Paolo Cignoni, Roberto Scopigno
IEEE TVCG 2012
Args:
script: the FilterScript object or script filename to write
the filter to.
sample_num (int): The desired number of samples. The radius of the disk
is calculated according to the sampling density.
radius (float): If not zero this parameter overrides the previous
parameter to allow exact radius specification.
montecarlo_rate (int): The over-sampling rate that is used to generate
the initial Monte Carlo samples (e.g. if this parameter is 'K', then
'K * sample_num' points will be used). The generated
Poisson-disk samples are a subset of these initial Monte Carlo
samples. Larger numbers slow the process but make it a bit more
accurate.
save_montecarlo (bool): If True, it will generate an additional Layer
with the Monte Carlo sampling that was pruned to build the Poisson
distribution.
approx_geodesic_dist (bool): If True Poisson-disk distances are
computed using an approximate geodesic distance, e.g. an Euclidean
distance weighted by a function of the difference between the
normals of the two points.
subsample (bool): If True the original vertices of the base mesh are
used as base set of points. In this case sample_num should obviously
be much smaller than the original vertex number. Note that
this option is very useful in the case you want to subsample a
dense point cloud.
refine (bool): If True the vertices of the refine_layer mesh layer are
used as starting vertices, and they will be further refined by
adding more and more points until possible.
refine_layer (int): Used only if refine is True.
best_sample (bool): If True it will use a simple heuristic for choosing
the samples. At a small cost (it can slow the process a bit) it
usually improves the maximality of the generated sampling.
best_sample_pool (int): Used only if best_sample is True. It controls
the number of attempts that it makes to get the best sample. It is
reasonable that it is smaller than the Monte Carlo oversampling
factor.
exact_num (bool): If True it will try to do a dichotomic (binary)
search for the best Poisson-disk radius that will generate the
requested number of samples with a tolerance of 0.5%. Obviously
it takes much longer.
radius_variance (float): The radius of the disk is allowed to vary
between r and r*var. If this parameter is 1 the sampling is the
same as the Poisson-disk Sampling.
Layer stack:
Creates new layer 'Poisson-disk Samples'. Current layer is NOT changed
to the new layer (see Bugs).
If save_montecarlo is True, creates a new layer 'Montecarlo Samples'.
Current layer is NOT changed to the new layer (see Bugs).
MeshLab versions:
2016.12
1.3.4BETA
Bugs:
Current layer is NOT changed to the new layer, which is inconsistent
with the majority of filters that create new layers.
"""
filter_xml = ''.join([
' <filter name="Poisson-disk Sampling">\n',
' <Param name="SampleNum" ',
'value="{:d}" '.format(sample_num),
'description="Number of samples" ',
'type="RichInt" ',
'/>\n',
' <Param name="Radius" ',
'value="{}" '.format(radius),
'description="Explicit Radius" ',
'min="0" ',
'max="100" ',
'type="RichAbsPerc" ',
'/>\n',
' <Param name="MontecarloRate" ',
'value="{:d}" '.format(montecarlo_rate), | python | {
"resource": ""
} |
q269974 | mesh_element | test | def mesh_element(script, sample_num=1000, element='VERT'):
""" Create a new layer populated with a point sampling of the current mesh,
at most one sample for each element of the mesh is created.
Samples are taken in a uniform way, one for each element
(vertex/edge/face); all the elements have the same probability of
being chosen.
Args:
script: the FilterScript object or script filename to write
the filter to.
sample_num (int): The desired number of elements that must be chosen.
Being a subsampling of the original elements, this number should
not be larger than the number of elements of the original mesh.
element (enum in ['VERT', 'EDGE', 'FACE']): Choose what mesh element
will be used for the subsampling. At most one point sample will
be added for each one of the chosen elements
Layer stack:
Creates new layer 'Sampled Mesh'. Current layer is changed to the new
layer.
MeshLab versions:
2016.12
1.3.4BETA
"""
if element.lower() == 'vert':
element_num = 0
elif element.lower() == 'edge':
element_num = 1
elif element.lower() == | python | {
"resource": ""
} |
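The truncated if/elif chain above maps the element name to MeshLab's enum index ('VERT' → 0, 'EDGE' → 1, 'FACE' → 2). A compact sketch of that mapping (the helper name is made up here):

```python
# Hypothetical helper mirroring mesh_element()'s name-to-enum mapping.
ELEMENT_ENUM = {'vert': 0, 'edge': 1, 'face': 2}

def element_index(element):
    """Return MeshLab's enum index for a mesh element name (case-insensitive)."""
    try:
        return ELEMENT_ENUM[element.lower()]
    except KeyError:
        raise ValueError("element must be 'VERT', 'EDGE' or 'FACE', got %r" % element)
```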
q269975 | clustered_vert | test | def clustered_vert(script, cell_size=1.0, strategy='AVERAGE', selected=False):
""" "Create a new layer populated with a subsampling of the vertexes of the
current mesh
The subsampling is driven by a simple one-per-gridded cell strategy.
Args:
script: the FilterScript object or script filename to write
the filter to.
cell_size (float): The size of the cell of the clustering grid. The
smaller the cell, the finer the resulting mesh. For obtaining a very
coarse mesh use larger values.
strategy (enum 'AVERAGE' or 'CENTER'): 'AVERAGE': for each cell we take
the average of the samples falling into it; the resulting point is a
new point. 'CENTER': for each cell we take the sample that is closest
to the center of the cell; chosen vertices are a subset of the
original ones.
selected (bool): If True the filter is applied only on the selected
subset of the mesh.
Layer stack:
Creates new layer 'Cluster Samples'. Current layer is changed to the new
layer.
MeshLab versions:
2016.12
1.3.4BETA
"""
if strategy.lower() == 'average':
strategy_num = 0
elif strategy.lower() == 'center':
strategy_num = 1
filter_xml = ''.join([
' <filter name="Clustered Vertex Subsampling">\n', | python | {
"resource": ""
} |
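The one-sample-per-gridded-cell strategy described above can be sketched in plain Python; this is a simplified 2D model of the 'AVERAGE' strategy, not the MeshLab implementation:

```python
from collections import defaultdict

def cluster_average(points, cell_size):
    """One sample per grid cell ('AVERAGE' strategy, 2D sketch): all points
    falling in the same cell are replaced by their average, a new point."""
    cells = defaultdict(list)
    for x, y in points:
        # Integer cell coordinates index the clustering grid.
        cells[(int(x // cell_size), int(y // cell_size))].append((x, y))
    return [(sum(p[0] for p in pts) / len(pts),
             sum(p[1] for p in pts) / len(pts))
            for pts in cells.values()]
```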
q269976 | flat_plane | test | def flat_plane(script, plane=0, aspect_ratio=False):
"""Flat plane parameterization
"""
filter_xml = ''.join([
' <filter name="Parametrization: Flat Plane ">\n',
' <Param name="projectionPlane"',
'value="%d"' % plane,
'description="Projection plane"',
'enum_val0="XY"',
'enum_val1="XZ"',
'enum_val2="YZ"',
'enum_cardinality="3"',
'type="RichEnum"',
'tooltip="Choose the projection plane"',
'/>\n',
' <Param name="aspectRatio"',
'value="%s"' % str(aspect_ratio).lower(),
'description="Preserve | python | {
"resource": ""
} |
q269977 | per_triangle | test | def per_triangle(script, sidedim=0, textdim=1024, border=2, method=1):
"""Trivial Per-Triangle parameterization
"""
filter_xml = ''.join([
' <filter name="Parametrization: Trivial Per-Triangle ">\n',
' <Param name="sidedim"',
'value="%d"' % sidedim,
'description="Quads per line"',
'type="RichInt"',
'tooltip="Indicates how many triangles have to be put on each line (every quad contains two triangles). Leave 0 for automatic calculation"',
'/>\n',
' <Param name="textdim"',
'value="%d"' % textdim,
'description="Texture Dimension (px)"',
'type="RichInt"',
'tooltip="Gives an indication on how big the texture is"',
'/>\n',
' <Param name="border"',
'value="%d"' % border,
| python | {
"resource": ""
} |
q269978 | voronoi | test | def voronoi(script, region_num=10, overlap=False):
"""Voronoi Atlas parameterization
"""
filter_xml = ''.join([
' <filter name="Parametrization: Voronoi Atlas">\n',
' <Param name="regionNum"',
'value="%d"' % region_num,
'description="Approx. Region Num"',
'type="RichInt"',
'tooltip="An estimation of the number of regions that must be generated. Smaller regions could lead to parametrizations with smaller distortion."',
'/>\n',
' <Param name="overlapFlag"',
'value="%s"' % str(overlap).lower(),
'description="Overlap"',
'type="RichBool"',
'tooltip="If checked the resulting parametrization will be composed | python | {
"resource": ""
} |
q269979 | measure_topology | test | def measure_topology(script):
""" Compute a set of topological measures over a mesh
Args:
script: the mlx.FilterScript object or script filename to write
the filter to.
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA | python | {
"resource": ""
} |
q269980 | parse_topology | test | def parse_topology(ml_log, log=None, ml_version='1.3.4BETA', print_output=False):
"""Parse the ml_log file generated by the measure_topology function.
Args:
ml_log (str): MeshLab log file to parse
log (str): filename to log output
Returns:
dict: dictionary with the following keys:
vert_num (int): number of vertices
edge_num (int): number of edges
face_num (int): number of faces
unref_vert_num (int): number or unreferenced vertices
boundry_edge_num (int): number of boundary edges
part_num (int): number of parts (components) in the mesh.
manifold (bool): True if mesh is two-manifold, otherwise false.
non_manifold_edge (int): number of non_manifold edges.
non_manifold_vert (int): number of non-manifold verices
genus (int or str): genus of the mesh, either a number or
'undefined' if the mesh is non-manifold.
holes (int or str): number of holes in the mesh, either a number
or 'undefined' if the mesh is non-manifold.
"""
topology = {'manifold': True, 'non_manifold_E': 0, 'non_manifold_V': 0}
with open(ml_log) as fread:
for line in fread:
if 'V:' in line:
vert_edge_face = line.replace('V:', ' ').replace('E:', ' ').replace('F:', ' ').split()
topology['vert_num'] = int(vert_edge_face[0])
topology['edge_num'] = int(vert_edge_face[1])
topology['face_num'] = int(vert_edge_face[2])
if 'Unreferenced Vertices' in line:
topology['unref_vert_num'] = int(line.split()[2])
if 'Boundary Edges' in line:
topology['boundry_edge_num'] = int(line.split()[2])
if 'Mesh is composed by' in line:
topology['part_num'] = int(line.split()[4])
if 'non 2-manifold mesh' in line:
topology['manifold'] = False
if 'non two manifold edges' in line:
| python | {
"resource": ""
} |
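The 'V: E: F:' parsing in parse_topology can be exercised on a made-up log line (the sample text here is invented; the parsing logic is the one used above):

```python
# Hypothetical MeshLab log line in the format parse_topology handles.
line = 'V: 100 E: 297 F: 198'
vert_edge_face = line.replace('V:', ' ').replace('E:', ' ').replace('F:', ' ').split()
vert_num, edge_num, face_num = (int(v) for v in vert_edge_face)
```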
q269981 | parse_hausdorff | test | def parse_hausdorff(ml_log, log=None, print_output=False):
"""Parse the ml_log file generated by the hausdorff_distance function.
Args:
ml_log (str): MeshLab log file to parse
log (str): filename to log output
Returns:
dict: dictionary with the following keys:
number_points (int): number of points in mesh
min_distance (float): minimum hausdorff distance
max_distance (float): maximum hausdorff distance
mean_distance (float): mean hausdorff distance
rms_distance (float): root mean square distance
"""
hausdorff_distance = {"min_distance": 0.0,
"max_distance": 0.0,
"mean_distance": 0.0,
"rms_distance": 0.0,
"number_points": 0}
with open(ml_log) as fread:
result = fread.readlines()
data = ""
for idx, line in enumerate(result):
m = re.match(r"\s*Sampled (\d+) pts.*", line)
if m is not None:
hausdorff_distance["number_points"] = int(m.group(1))
| python | {
"resource": ""
} |
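The 'Sampled N pts' regex used by parse_hausdorff, applied to an invented log line:

```python
import re

# Hypothetical log line; the regex is the one parse_hausdorff uses above.
line = '  Sampled 3400 pts (rng: 0) on mesh_a searched closest on mesh_b'
m = re.match(r"\s*Sampled (\d+) pts.*", line)
number_points = int(m.group(1)) if m is not None else 0
```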
q269982 | function | test | def function(script, red=255, green=255, blue=255, alpha=255, color=None):
"""Color function using muparser lib to generate new RGBA color for every
vertex
Red, Green, Blue and Alpha channels may be defined by specifying a function
for each.
See help(mlx.muparser_ref) for muparser reference documentation.
It's possible to use the following per-vertex variables in the expression:
Variables (per vertex):
x, y, z (coordinates)
nx, ny, nz (normal)
r, g, b, a (color)
q (quality)
rad (radius)
vi (vertex index)
vtu, vtv (texture coordinates)
ti (texture index)
vsel (is the vertex selected? 1 yes, 0 no)
and all custom vertex attributes already defined by user.
Args:
script: the FilterScript object or script filename to write
the filter to.
red (str [0, 255]): function to generate red component
green (str [0, 255]): function to generate green component
blue (str [0, 255]): function to generate blue component
alpha (str [0, 255]): function to generate alpha component
color (str): name of one of the 140 HTML Color Names defined
in CSS & SVG.
Ref: https://en.wikipedia.org/wiki/Web_colors#X11_color_names
If not None this will override the per component variables.
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA
"""
# TODO: | python | {
"resource": ""
} |
q269983 | voronoi | test | def voronoi(script, target_layer=0, source_layer=1, backward=True):
""" Given a Mesh 'M' and a Pointset 'P', the filter projects each vertex of
P over M and color M according to the geodesic distance from these
projected points. Projection and coloring are done on a per vertex
basis.
Args:
script: the FilterScript object or script filename to write
the filter to.
target_layer (int): The mesh layer whose surface is colored. For each
vertex of this mesh we decide the color according to the following
arguments.
source_layer (int): The mesh layer whose vertexes are used as seed
points for the color computation. These seeds point are projected
onto the target_layer mesh.
backward (bool): If True the mesh is colored according to the distance
from the frontier of the voronoi diagram induced by the
source_layer seeds.
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA
"""
filter_xml = ''.join([
' <filter name="Voronoi Vertex Coloring">\n', | python | {
"resource": ""
} |
q269984 | cyclic_rainbow | test | def cyclic_rainbow(script, direction='sphere', start_pt=(0, 0, 0),
amplitude=255 / 2, center=255 / 2, freq=0.8,
phase=(0, 120, 240, 0), alpha=False):
""" Color mesh vertices in a repeating sinusiodal rainbow pattern
Sine wave follows the following equation for each color channel (RGBA):
channel = sin(freq*increment + phase)*amplitude + center
Args:
script: the FilterScript object or script filename to write
the filter to.
direction (str) = the direction that the sine wave will travel; this
and the start_pt determine the 'increment' of the sine function.
Valid values are:
'sphere' - radiate sine wave outward from start_pt (default)
'x' - sine wave travels along the X axis
'y' - sine wave travels along the Y axis
'z' - sine wave travels along the Z axis
or define the increment directly using a muparser function, e.g.
'2x + y'. In this case start_pt will not be used; include it in
the function directly.
start_pt (3 coordinate tuple or list): start point of the sine wave. For a
sphere this is the center of the sphere.
amplitude (float [0, 255], single value or 4 term tuple or list): amplitude
of the sine wave, with range between 0-255. If a single value is
specified it will be used for all channels, otherwise specify each
channel individually.
center (float [0, 255], single value or 4 term tuple or list): center
of the sine wave, with range between 0-255. If a single value is
specified it will be used for all channels, otherwise specify each
channel individually.
freq (float, single value or 4 term tuple or list): frequency of the sine
wave. If a single value is specified it will be used for all channels,
otherwise specifiy each channel individually.
phase (float [0, 360], single value or 4 term tuple or list): phase
of the sine wave in degrees, with range between 0-360. If a single
value is specified it will be used for all channels, otherwise specify
each channel individually.
alpha (bool): if False the alpha channel will be set to 255 (full opacity).
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA
"""
start_pt = util.make_list(start_pt, 3)
amplitude = util.make_list(amplitude, 4)
center = util.make_list(center, 4)
freq = util.make_list(freq, 4)
phase = util.make_list(phase, 4)
if direction.lower() | python | {
"resource": ""
} |
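The per-channel equation from the cyclic_rainbow docstring, channel = sin(freq*increment + phase)*amplitude + center, can be evaluated directly; this numeric sketch (with the phase in degrees, as documented) is an illustration, not the muparser string the filter builds:

```python
import math

def channel_value(increment, freq, phase_deg, amplitude=255 / 2, center=255 / 2):
    """One RGBA channel of the cyclic rainbow:
    sin(freq*increment + phase)*amplitude + center, clamped to [0, 255]."""
    value = math.sin(freq * increment + math.radians(phase_deg)) * amplitude + center
    return max(0.0, min(255.0, value))
```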
q269985 | mp_atan2 | test | def mp_atan2(y, x):
"""muparser atan2 function
Implements an atan2(y,x) function for older muparser versions (<2.1.0);
atan2 was added as a built-in function in muparser 2.1.0
Args:
y (str): y argument of the atan2(y,x) function
x (str): x argument of the atan2(y,x) function
Returns:
A muparser string that calculates atan2(y,x)
| python | {
"resource": ""
} |
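The exact expression string mp_atan2 builds is truncated above. One standard way to express atan2 with only atan and sqrt (and therefore usable in muparser < 2.1.0) is the half-angle identity atan2(y, x) = 2*atan(y / (sqrt(x^2 + y^2) + x)), valid away from the negative x axis; whether mp_atan2 uses exactly this form is an assumption. Checked numerically:

```python
import math

def atan2_identity(y, x):
    """Half-angle form of atan2 (valid away from the negative x axis),
    expressible with only atan and sqrt as older muparser requires."""
    return 2 * math.atan(y / (math.sqrt(x * x + y * y) + x))
```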
q269986 | v_cross | test | def v_cross(u, v):
"""muparser cross product function
Compute the cross product of two 3x1 vectors
Args:
u (list or tuple of 3 strings): first vector
v (list or tuple of 3 strings): second vector
Returns:
A list containing a muparser string of the cross product
"""
"""
i = u[1]*v[2] - u[2]*v[1]
j = u[2]*v[0] - u[0]*v[2]
| python | {
"resource": ""
} |
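The component formulas shown in the v_cross snippet (i = u1*v2 - u2*v1, j = u2*v0 - u0*v2, and by pattern k = u0*v1 - u1*v0) are built there as muparser strings; evaluated numerically they are the ordinary 3-vector cross product:

```python
def cross(u, v):
    """Numeric 3-vector cross product, matching the per-component
    formulas that v_cross assembles as muparser strings."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]
```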
q269987 | v_multiply | test | def v_multiply(scalar, v1):
""" Multiply vector by scalar"""
vector = []
for i, x in enumerate(v1):
| python | {
"resource": ""
} |
q269988 | vert_attr | test | def vert_attr(script, name='radius', function='x^2 + y^2'):
""" Add a new Per-Vertex scalar attribute to current mesh and fill it with
the defined function.
The specified name can be used in other filter functions.
It's possible to use parentheses, per-vertex variables and boolean
operators: (, ), and, or, <, >, =
It's possible to use the following per-vertex variables in the expression:
Variables:
x, y, z (coordinates)
nx, ny, nz (normal)
r, g, b, a (color)
q (quality)
rad (radius)
vi (vertex index)
?vtu, vtv (texture coordinates)
?ti (texture index)
?vsel (is the vertex selected? 1 yes, 0 no)
and all custom vertex attributes already defined by user.
Args:
script: the FilterScript object or script filename to write
the filter to.
name (str): the name of new attribute. You can access attribute in
other filters through this name.
function (str): function to calculate custom attribute value for each
vertex
Layer stack:
No impacts
| python | {
"resource": ""
} |
q269989 | flip | test | def flip(script, force_flip=False, selected=False):
""" Invert faces orientation, flipping the normals of the mesh.
If requested, it tries to guess the right orientation; mainly it decides to
flip all the faces if the minimum/maximum vertices do not have outward-pointing
normals for a few directions. Works well for single component watertight
objects.
Args:
script: the FilterScript object or script filename to write
the filter to.
force_flip (bool): If selected, the normals will always be flipped;
| python | {
"resource": ""
} |
q269990 | point_sets | test | def point_sets(script, neighbors=10, smooth_iteration=0, flip=False,
viewpoint_pos=(0.0, 0.0, 0.0)):
""" Compute the normals of the vertices of a mesh without exploiting the
triangle connectivity, useful for dataset with no faces.
Args:
script: the FilterScript object or script filename to write
the filter to.
neighbors (int): The number of neighbors used to estimate normals.
smooth_iteration (int): The number of smoothing iterations done on the
points used to estimate and propagate normals.
flip (bool): Flip normals w.r.t. viewpoint. If the 'viewpoint' (i.e.
scanner position) is known, it can be used to disambiguate normals
orientation, so that all the normals will be oriented in the same
direction.
viewpoint_pos (single xyz point, tuple or list): Set the x, y, z
coordinates of the viewpoint position.
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA
"""
filter_xml = ''.join([
' <filter name="Compute normals for point sets">\n',
' <Param name="K" ',
'value="{:d}" '.format(neighbors),
| python | {
"resource": ""
} |
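The viewpoint disambiguation described for point_sets amounts to flipping a normal when it points away from the viewpoint, i.e. when dot(n, viewpoint - p) < 0. A simplified sketch of that test (MeshLab also estimates the normals themselves from the K neighbors, which is omitted here):

```python
def orient_normal(point, normal, viewpoint):
    """Flip a normal so it faces the viewpoint: negate it when its dot
    product with the point-to-viewpoint vector is negative."""
    to_view = [v - p for v, p in zip(viewpoint, point)]
    dot = sum(n * t for n, t in zip(normal, to_view))
    return [-n for n in normal] if dot < 0 else list(normal)
```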
q269991 | taubin | test | def taubin(script, iterations=10, t_lambda=0.5, t_mu=-0.53, selected=False):
""" The lambda & mu Taubin smoothing, it make two steps of smoothing, forth
and back, for each iteration.
Based on:
Gabriel Taubin
"A signal processing approach to fair surface design"
Siggraph 1995
Args:
script: the FilterScript object or script filename to write
the filter to.
iterations (int): The number of times that the Taubin smoothing is
iterated. Usually it requires a larger number of iterations than the
classical Laplacian.
t_lambda (float): The lambda parameter of the Taubin Smoothing algorithm
t_mu (float): The mu parameter of the Taubin Smoothing algorithm
selected (bool): If selected the filter is performed only on the
selected faces
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA
"""
filter_xml = ''.join([
' <filter name="Taubin Smooth">\n',
' <Param name="lambda" ',
'value="{}" '.format(t_lambda),
'description="Lambda" ',
| python | {
"resource": ""
} |
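The forward/back idea of Taubin smoothing can be sketched on a 1D signal: a Laplacian step scaled by lambda (which shrinks), then one scaled by the negative mu (which re-inflates). This is a simplified model of the surface algorithm, not MeshLab's implementation:

```python
def taubin_1d(values, iterations=10, t_lambda=0.5, t_mu=-0.53):
    """Taubin lambda|mu smoothing on a 1D signal: each iteration applies a
    Laplacian step weighted by lambda, then one weighted by mu (mu < -lambda),
    smoothing without the shrinkage of plain Laplacian smoothing."""
    def step(v, weight):
        out = list(v)
        for i in range(1, len(v) - 1):           # endpoints held fixed
            laplacian = (v[i - 1] + v[i + 1]) / 2 - v[i]
            out[i] = v[i] + weight * laplacian
        return out

    vals = list(values)
    for _ in range(iterations):
        vals = step(vals, t_lambda)
        vals = step(vals, t_mu)
    return vals
```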
q269992 | depth | test | def depth(script, iterations=3, viewpoint=(0, 0, 0), selected=False):
""" A laplacian smooth that is constrained to move vertices only along the
view direction.
Args:
script: the FilterScript object or script filename to write
the filter to.
iterations (int): The number of times that the whole algorithm (normal
smoothing + vertex fitting) is iterated.
viewpoint (vector tuple or list): The position of the view point that
is used to get the constraint direction.
selected (bool): If selected the filter is performed only on the
selected faces
Layer stack:
No impacts
MeshLab versions:
2016.12
1.3.4BETA
"""
filter_xml = ''.join([
' <filter name="Depth Smooth">\n',
' <Param name="stepSmoothNum" ',
'value="{:d}" '.format(iterations),
'description="Smoothing steps" ',
'type="RichInt" ',
'/>\n',
| python | {
"resource": ""
} |
q269993 | polylinesort | test | def polylinesort(fbasename=None, log=None):
"""Sort separate line segments in obj format into a continuous polyline or polylines.
NOT FINISHED; DO NOT USE
Also measures the length of each polyline
Return polyline and polylineMeta (lengths)
"""
fext = os.path.splitext(fbasename)[1][1:].strip().lower()
if fext != 'obj':
print('Input file must be obj. Exiting ...')
sys.exit(1)
fread = open(fbasename, 'r')
first = True
polyline_vertices = []
line_segments = []
for line in fread:
element, x_co, y_co, z_co = line.split()
if element == 'v':
polyline_vertices.append(
[util.to_float(x_co), util.to_float(y_co), util.to_float(z_co)])
elif element == 'l':
p1 = x_co
p2 = y_co
line_segments.append([int(p1), int(p2)])
fread.close()
if log is not None:
log_file = open(log, 'a')
| python | {
"resource": ""
} |
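The job polylinesort sets out to finish (chaining the 'l' segments into an ordered polyline and measuring its length) can be sketched as follows; these helpers are not part of the snippet above and assume a single open, connected polyline:

```python
import math
from collections import defaultdict

def chain_segments(segments):
    """Chain OBJ 'l' segments, e.g. [(2, 3), (1, 2)], into an ordered
    1-based vertex index list. Assumes one open, connected polyline."""
    adj = defaultdict(list)
    for a, b in segments:
        adj[a].append(b)
        adj[b].append(a)
    start = next(v for v, nbrs in adj.items() if len(nbrs) == 1)
    chain, prev = [start], None
    while True:
        nxt = [n for n in adj[chain[-1]] if n != prev]
        if not nxt:
            break
        prev = chain[-1]
        chain.append(nxt[0])
    return chain

def polyline_length(vertices, chain):
    """Sum of Euclidean distances between consecutive chained vertices
    (indices are 1-based, as in OBJ files)."""
    return sum(math.dist(vertices[a - 1], vertices[b - 1])
               for a, b in zip(chain, chain[1:]))
```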
q269994 | measure_topology | test | def measure_topology(fbasename=None, log=None, ml_version=ml_version):
"""Measures mesh topology
Args:
fbasename (str): input filename.
log (str): filename to log output
Returns:
dict: dictionary with the following keys:
vert_num (int): number of vertices
edge_num (int): number of edges
face_num (int): number of faces
unref_vert_num (int): number or unreferenced vertices
boundry_edge_num (int): number of boundary edges
part_num (int): number of parts (components) in the mesh.
| python | {
"resource": ""
} |
q269995 | measure_all | test | def measure_all(fbasename=None, log=None, ml_version=ml_version):
"""Measures mesh geometry, aabb and topology."""
ml_script1_file = 'TEMP3D_measure_gAndT.mlx'
if ml_version == '1.3.4BETA':
file_out = 'TEMP3D_aabb.xyz'
else:
file_out = None
ml_script1 = mlx.FilterScript(file_in=fbasename, file_out=file_out, ml_version=ml_version)
compute.measure_geometry(ml_script1)
| python | {
"resource": ""
} |
q269996 | measure_dimension | test | def measure_dimension(fbasename=None, log=None, axis1=None, offset1=0.0,
axis2=None, offset2=0.0, ml_version=ml_version):
"""Measure a dimension of a mesh"""
axis1 = axis1.lower()
axis2 = axis2.lower()
ml_script1_file = 'TEMP3D_measure_dimension.mlx'
file_out = 'TEMP3D_measure_dimension.xyz'
ml_script1 = mlx.FilterScript(file_in=fbasename, file_out=file_out, ml_version=ml_version)
compute.section(ml_script1, axis1, offset1, surface=True)
compute.section(ml_script1, axis2, offset2, surface=False)
layers.delete_lower(ml_script1)
ml_script1.save_to_file(ml_script1_file)
ml_script1.run_script(log=log, script_file=ml_script1_file)
for val in ('x', 'y', 'z'):
if val not in (axis1, axis2):
axis = val
# ord: Get number that represents letter in ASCII
# Here we find the offset from 'x' to determine the list reference
# i.e. 0 for x, 1 for y, 2 for z
axis_num = ord(axis) - ord('x')
aabb = measure_aabb(file_out, log)
dimension = {'min': aabb['min'][axis_num], 'max': aabb['max'][axis_num],
'length': aabb['size'][axis_num], 'axis': axis}
if log is None:
print('\nFor file "%s"' % fbasename)
print('Dimension parallel to %s with %s=%s & %s=%s:' % (axis, axis1, offset1,
| python | {
"resource": ""
} |
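The ord() trick commented in measure_dimension (the ASCII offset from 'x' turns an axis letter into a list index) in isolation:

```python
def axis_index(axis):
    """Map 'x'/'y'/'z' to index 0/1/2 via the ASCII offset from 'x',
    the same trick measure_dimension uses to index aabb lists."""
    index = ord(axis.lower()) - ord('x')
    if index not in (0, 1, 2):
        raise ValueError("axis must be 'x', 'y' or 'z'")
    return index
```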
q269997 | lowercase_ext | test | def lowercase_ext(filename):
"""
This is a helper used by UploadSet.save to provide lowercase extensions for
all processed files, to compare with configured extensions in the same
case.
.. versionchanged:: 0.1.4
Filenames without extensions are no longer lowercased, only the
extension is returned in lowercase, if an extension exists.
:param filename: The filename to ensure has a lowercase extension.
| python | {
"resource": ""
} |
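The behavior lowercase_ext documents (only the extension is lowercased, and filenames without an extension are untouched) can be sketched like this; the real Flask-Uploads implementation may differ in details:

```python
import os

def lowercase_ext_sketch(filename):
    """Lowercase only the extension, per the docstring above:
    'Photo.JPG' -> 'Photo.jpg', while 'README' is returned unchanged."""
    base, ext = os.path.splitext(filename)
    return base + ext.lower() if ext else filename
```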
q269998 | patch_request_class | test | def patch_request_class(app, size=64 * 1024 * 1024):
"""
By default, Flask will accept uploads to an arbitrary size. While Werkzeug
switches uploads from memory to a temporary file when they hit 500 KiB,
it's still possible for someone to overload your disk space with a
gigantic file.
This patches the app's request class's
`~werkzeug.BaseRequest.max_content_length` attribute so that any upload
larger than the given size is rejected with an HTTP error.
.. note::
In Flask 0.6, you can do this by setting the `MAX_CONTENT_LENGTH`
setting, without patching the request class. To emulate this behavior,
you can pass `None` as the size (you must pass it explicitly). That is
the best way to call this function, as it won't break the Flask 0.6
functionality if it exists.
.. versionchanged:: 0.1.1
| python | {
"resource": ""
} |
q269999 | config_for_set | test | def config_for_set(uset, app, defaults=None):
"""
This is a helper function for `configure_uploads` that extracts the
configuration for a single set.
:param uset: The upload set.
:param app: The app to load the configuration from.
:param defaults: A dict with keys `url` and `dest` from the
`UPLOADS_DEFAULT_DEST` and `UPLOADS_DEFAULT_URL`
settings.
"""
config = app.config
prefix = 'UPLOADED_%s_' % uset.name.upper()
using_defaults = False
if defaults is None:
defaults = dict(dest=None, url=None)
allow_extns = tuple(config.get(prefix + 'ALLOW', ()))
deny_extns | python | {
"resource": ""
} |
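The per-set key scheme config_for_set uses ('UPLOADED_<NAME>_' plus ALLOW, DENY, etc.) can be exercised on a plain dict; this helper is illustrative only:

```python
def set_config_keys(set_name, config):
    """Read the per-upload-set keys the helper above looks up, e.g. for a
    set named 'photos': UPLOADED_PHOTOS_ALLOW and UPLOADED_PHOTOS_DENY."""
    prefix = 'UPLOADED_%s_' % set_name.upper()
    return (tuple(config.get(prefix + 'ALLOW', ())),
            tuple(config.get(prefix + 'DENY', ())))
```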