# Week 6/Programming Assignment 3 - Functions.py
'''
Given an integer number n, define a function named printDict() which can print a
dictionary where the keys are the numbers between 1 and n (both included) and the
values are the squares of the keys. The function printDict() doesn't take any argument.

>>Input Format: The first line contains the number n.
>>Output Format: Print the dictionary in one line.
>>Example:
Input: 5
Output: {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}

NOTE: You are supposed to write the code for the function printDict() only.
The function has already been called in the main part of the code.
'''
def printDict():
    # x is read from stdin in the main part of the code below.
    print(dict([(i, i**2) for i in range(1, x + 1)]), end="")

x = int(input())
printDict()
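The printDict() above reads n from the global x, as the assignment's scaffolding requires. Outside that constraint, a parameter-passing variant is easier to test; this is a sketch of the same logic (the name print_dict and its return value are ours, for illustration):

```python
def print_dict(n):
    """Build and print the dictionary {1: 1, 2: 4, ..., n: n**2}."""
    squares = {i: i**2 for i in range(1, n + 1)}
    print(squares, end="")
    return squares
```

Returning the dictionary as well as printing it lets the logic be checked without capturing stdout.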
# setup.py — CrawlerCodePythonTools
from setuptools import setup

def readme():
    with open("README.rst") as f:
        README = f.read()
    return README

TYPE = "CORE"

packages = []
install_requires = []

if TYPE == "CORE":
    packages = ['pythontools.core', 'pythontools.identity', 'pythontools.sockets',
                'pythontools.dev', 'pythontools.telegram']
    install_requires.extend(['requests', 'colorama', 'getmac', 'stdiomask', 'cryptography'])
if TYPE == "GUI":
    packages.append('pythontools.gui')
    install_requires.append('PyQt5')
if TYPE == "WEBBOT":
    packages.append('pythontools.webbot')
    install_requires.append('selenium')

setup(
    name='CrawlerCodePythonTools' + ('-Gui' if TYPE == "GUI" else '-WebBot' if TYPE == "WEBBOT" else ''),
    version='1.5.2',
    packages=packages,
    url='https://github.com/CrawlerCode',
    license='MIT',
    author='CrawlerCode',
    author_email='',
    description='Tools for Python',
    long_description=readme(),
    long_description_content_type="text/x-rst",
    include_package_data=True,
    install_requires=install_requires
)
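The setup() call derives the distribution name from TYPE with a chained conditional expression. A small sketch of that naming logic in isolation (the helper name package_name is hypothetical, added here only to make the expression testable):

```python
def package_name(pkg_type):
    # Mirrors the name= expression in setup(): suffix chosen by build TYPE,
    # with the bare name for "CORE" (and any other value).
    return 'CrawlerCodePythonTools' + (
        '-Gui' if pkg_type == "GUI"
        else '-WebBot' if pkg_type == "WEBBOT"
        else ''
    )
```

Because the else branch catches everything that is not "GUI" or "WEBBOT", the "CORE" build publishes under the unsuffixed name.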
# Attention-map visualization (Resume10)
import os
import glob
import h5py
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt

# Parameters
path = '/media/victor/Documentos/'
outpath = '/media/victor/Documentos/Thesis/AttentionMap/Resume10'
dimImage = (96, 192)
dimEncode = (12, 24)
n_head = 2
n_task = 2
n_sample = 120*20

def getint(name):
    # Extract the numeric suffix from a 'resumeN.sy' file name.
    basename = name.partition('.')
    _, num = basename[0].split('resume')
    return int(num)

filesname = glob.glob(os.path.join(path, '*.sy'))
filesname.sort()
filename = '/media/victor/Documentos/resume10.sy'

# Getting data
with h5py.File(filename, 'r') as h5_file:
    image = np.array(h5_file['image'])
    alpha = np.array(h5_file['alpha'])
print('Load data done\n')

t = 0
# Generate attention maps as a video
alpha = alpha.reshape([n_sample, n_head, dimEncode[0], dimEncode[1], n_task])
alpha = alpha/alpha.max()

""" Interactive preview (disabled): show each frame with matplotlib
for attmap, sample in zip(alpha, image):
    # attmap: [n_head, h, w, n_task]
    # sample: [3, H, W]
    tasks = list()
    for n in range(n_task):
        heads = list()
        for h in range(n_head):
            # Up-sampling
            att = attmap[h, :, :, n]
            att = cv.resize(att, None, fx=8, fy=8, interpolation=cv.INTER_AREA)
            att = cv.GaussianBlur(att, (11, 11), 0)
            #att = np.expand_dims(att, axis=0)  # [1,H,W]
            # Apply
            sample = att  #sample*att
            heads.append(sample)  #np.moveaxis(sample,0,2)
        tasks.append(cv.vconcat(heads))
    plt.figure(1); plt.clf()
    plt.imshow(cv.hconcat(tasks))
    plt.title('Frame ' + str(t))
    plt.pause(0.1)
    t += 1
"""

n_up = 8
for attmap, sample in zip(alpha, image):
    # attmap: [n_head, h, w, n_task]
    # sample: [3, H, W]
    sample = np.moveaxis(sample, 0, 2)
    maps = list()
    for n in range(n_task):
        # One RGB canvas per task; each head writes into its own channel.
        att_map = np.zeros([dimEncode[0]*n_up, dimEncode[1]*n_up, 3])
        for h in range(n_head):
            # Up-sampling
            att = attmap[h, :, :, n]
            att = cv.resize(att, None, fx=n_up, fy=n_up, interpolation=cv.INTER_AREA)
            att = cv.GaussianBlur(att, (11, 11), 0)
            att_map[:, :, h] = att
        maps.append(0.5*sample + 0.5*att_map)
    img = cv.hconcat(maps)*255
    img = img.astype('float32')
    img = cv.cvtColor(img, cv.COLOR_RGB2BGR)
    fileout = os.path.join(outpath, 'sample%i.png' % t)
    cv.imwrite(fileout, img)
    #plt.imshow(cv.hconcat(maps))
    #fileout = os.path.join(outpath, 'sample%i.png' % t)
    #plt.savefig(fileout)
    #print('Create %s' % fileout)
    #plt.title('Frame ' + str(t))
    #plt.pause(0.1)
    t += 1
cv.resize(att,None,fx=8,fy=8, interpolation = cv.INTER_AREA) att = cv.GaussianBlur(att,(11,11),0) #att = np.expand_dims(att,axis=0) # [1,H,W]", "= 120*20 def getint(name): basename = name.partition('.') _, num = basename[0].split('resume') return int(num)", "[3,H,W] tasks = list() for n in range(n_task): heads = list() for h", "in range(n_task): heads = list() for h in range(n_head): # Up-sampling att =", "os.path.join(outpath,'sample%i.png'%t) cv.imwrite(fileout,img) #plt.imshow(cv.hconcat(maps)) #fileout = os.path.join(outpath,'sample%i.png'%t) #plt.savefig(fileout) #print('Create %s'%fileout) #plt.title('Frame ' + str(t))", "= name.partition('.') _, num = basename[0].split('resume') return int(num) filesname = glob.glob(os.path.join(path,'*.sy')) filesname.sort() filename", "'/media/victor/Documentos/Thesis/AttentionMap/Resume10' dimImage = ( 96,192) dimEncode = ( 12, 24) n_head = 2", "zip(alpha,image): # a: [n_head,h,w,n_task] # s: [3,H,W] tasks = list() for n in", "# a: [n_head,h,w,n_task] # s: [3,H,W] tasks = list() for n in range(n_task):", "a: [n_head,h,w,n_task] # s: [3,H,W] tasks = list() for n in range(n_task): heads", "att = cv.GaussianBlur(att,(11,11),0) #att = np.expand_dims(att,axis=0) # [1,H,W] # Apply sample = att", "attmap[h,:,:,n] att = cv.resize(att,None,fx=8,fy=8, interpolation = cv.INTER_AREA) att = cv.GaussianBlur(att,(11,11),0) #att = np.expand_dims(att,axis=0)", "# Apply sample = att #sample*att heads.append( sample ) #np.moveaxis(sample,0,2) ) tasks.append( cv.vconcat(heads)", "#att = np.expand_dims(att,axis=0) # [1,H,W] # Apply sample = att #sample*att heads.append( sample", "maps = list() for n in range(n_task): map = np.zeros([ dimEncode[0]*n_up,dimEncode[1]*n_up ,3]) for", "alpha.reshape([n_sample,n_head,dimEncode[0],dimEncode[1],n_task]) alpha = alpha/alpha.max() \"\"\" for attmap,sample in zip(alpha,image): # a: [n_head,h,w,n_task] #", ") img = cv.hconcat(maps)*255 img = img.astype('float32') img = cv.cvtColor(img,cv.COLOR_RGB2BGR) 
fileout = os.path.join(outpath,'sample%i.png'%t)", "num = basename[0].split('resume') return int(num) filesname = glob.glob(os.path.join(path,'*.sy')) filesname.sort() filename = '/media/victor/Documentos/resume10.sy' #", "a: [n_head,h,w,n_task] # s: [3,H,W] sample = np.moveaxis(sample,0,2) maps = list() for n", "= 8 for attmap,sample in zip(alpha,image): # a: [n_head,h,w,n_task] # s: [3,H,W] sample", "8 for attmap,sample in zip(alpha,image): # a: [n_head,h,w,n_task] # s: [3,H,W] sample =", "np.moveaxis(sample,0,2) maps = list() for n in range(n_task): map = np.zeros([ dimEncode[0]*n_up,dimEncode[1]*n_up ,3])", "os import glob import h5py import numpy as np import cv2 as cv", "\"\"\" for attmap,sample in zip(alpha,image): # a: [n_head,h,w,n_task] # s: [3,H,W] tasks =", "= cv.hconcat(maps)*255 img = img.astype('float32') img = cv.cvtColor(img,cv.COLOR_RGB2BGR) fileout = os.path.join(outpath,'sample%i.png'%t) cv.imwrite(fileout,img) #plt.imshow(cv.hconcat(maps))", "= '/media/victor/Documentos/Thesis/AttentionMap/Resume10' dimImage = ( 96,192) dimEncode = ( 12, 24) n_head =", "= attmap[h,:,:,n] att = cv.resize(att,None,fx=8,fy=8, interpolation = cv.INTER_AREA) att = cv.GaussianBlur(att,(11,11),0) #att =", "h in range(n_head): # Up-sampling att = attmap[h,:,:,n] att = cv.resize(att,None,fx=8,fy=8, interpolation =", "cv2 as cv from matplotlib import pyplot as plt # Parameters path =", "24) n_head = 2 n_task = 2 n_sample = 120*20 def getint(name): basename", "'/media/victor/Documentos/resume10.sy' # Getting data with h5py.File(filename, 'r') as h5_file: image = np.array(h5_file['image']) alpha", "for n in range(n_task): map = np.zeros([ dimEncode[0]*n_up,dimEncode[1]*n_up ,3]) for h in range(n_head):", "Getting data with h5py.File(filename, 'r') as h5_file: image = np.array(h5_file['image']) alpha = np.array(h5_file['alpha'])", "# Genera mapas de atencion en video alpha = alpha.reshape([n_sample,n_head,dimEncode[0],dimEncode[1],n_task]) alpha = alpha/alpha.max()", "alpha 
= alpha/alpha.max() \"\"\" for attmap,sample in zip(alpha,image): # a: [n_head,h,w,n_task] # s:", "in zip(alpha,image): # a: [n_head,h,w,n_task] # s: [3,H,W] sample = np.moveaxis(sample,0,2) maps =", "image = np.array(h5_file['image']) alpha = np.array(h5_file['alpha']) print('Load data done\\n') t = 0 #", "att = cv.resize(att,None,fx=8,fy=8, interpolation = cv.INTER_AREA) att = cv.GaussianBlur(att,(11,11),0) #att = np.expand_dims(att,axis=0) #", "#sample*att heads.append( sample ) #np.moveaxis(sample,0,2) ) tasks.append( cv.vconcat(heads) ) plt.figure(1); plt.clf() plt.imshow(cv.hconcat(tasks)) plt.title('Frame", "= os.path.join(outpath,'sample%i.png'%t) #plt.savefig(fileout) #print('Create %s'%fileout) #plt.title('Frame ' + str(t)) #plt.pause(0.1) t += 1", "in range(n_head): # Up-sampling att = attmap[h,:,:,n] att = cv.resize(att,None,fx=n_up,fy=n_up, interpolation = cv.INTER_AREA)", "+ str(t)) plt.pause(0.1) t += 1 \"\"\" n_up = 8 for attmap,sample in", "dimImage = ( 96,192) dimEncode = ( 12, 24) n_head = 2 n_task", "in range(n_task): map = np.zeros([ dimEncode[0]*n_up,dimEncode[1]*n_up ,3]) for h in range(n_head): # Up-sampling", "# [1,H,W] # Apply sample = att #sample*att heads.append( sample ) #np.moveaxis(sample,0,2) )", "as plt # Parameters path = '/media/victor/Documentos/' outpath = '/media/victor/Documentos/Thesis/AttentionMap/Resume10' dimImage = (", "int(num) filesname = glob.glob(os.path.join(path,'*.sy')) filesname.sort() filename = '/media/victor/Documentos/resume10.sy' # Getting data with h5py.File(filename,", "np.array(h5_file['image']) alpha = np.array(h5_file['alpha']) print('Load data done\\n') t = 0 # Genera mapas", "Up-sampling att = attmap[h,:,:,n] att = cv.resize(att,None,fx=8,fy=8, interpolation = cv.INTER_AREA) att = cv.GaussianBlur(att,(11,11),0)", "= att maps.append( (0.5*sample+0.5*map) ) img = cv.hconcat(maps)*255 img = img.astype('float32') img =", "img = cv.cvtColor(img,cv.COLOR_RGB2BGR) fileout = 
os.path.join(outpath,'sample%i.png'%t) cv.imwrite(fileout,img) #plt.imshow(cv.hconcat(maps)) #fileout = os.path.join(outpath,'sample%i.png'%t) #plt.savefig(fileout) #print('Create", "filesname.sort() filename = '/media/victor/Documentos/resume10.sy' # Getting data with h5py.File(filename, 'r') as h5_file: image", "video alpha = alpha.reshape([n_sample,n_head,dimEncode[0],dimEncode[1],n_task]) alpha = alpha/alpha.max() \"\"\" for attmap,sample in zip(alpha,image): #", "# Up-sampling att = attmap[h,:,:,n] att = cv.resize(att,None,fx=8,fy=8, interpolation = cv.INTER_AREA) att =", "= img.astype('float32') img = cv.cvtColor(img,cv.COLOR_RGB2BGR) fileout = os.path.join(outpath,'sample%i.png'%t) cv.imwrite(fileout,img) #plt.imshow(cv.hconcat(maps)) #fileout = os.path.join(outpath,'sample%i.png'%t)", "attmap[h,:,:,n] att = cv.resize(att,None,fx=n_up,fy=n_up, interpolation = cv.INTER_AREA) att = cv.GaussianBlur(att,(11,11),0) map[:,:,h] = att", "= glob.glob(os.path.join(path,'*.sy')) filesname.sort() filename = '/media/victor/Documentos/resume10.sy' # Getting data with h5py.File(filename, 'r') as", "att = cv.resize(att,None,fx=n_up,fy=n_up, interpolation = cv.INTER_AREA) att = cv.GaussianBlur(att,(11,11),0) map[:,:,h] = att maps.append(", "<filename>study/spMap.py import os import glob import h5py import numpy as np import cv2", "cv.cvtColor(img,cv.COLOR_RGB2BGR) fileout = os.path.join(outpath,'sample%i.png'%t) cv.imwrite(fileout,img) #plt.imshow(cv.hconcat(maps)) #fileout = os.path.join(outpath,'sample%i.png'%t) #plt.savefig(fileout) #print('Create %s'%fileout) #plt.title('Frame", "range(n_head): # Up-sampling att = attmap[h,:,:,n] att = cv.resize(att,None,fx=8,fy=8, interpolation = cv.INTER_AREA) att", "= np.moveaxis(sample,0,2) maps = list() for n in range(n_task): map = np.zeros([ dimEncode[0]*n_up,dimEncode[1]*n_up", "n in range(n_task): heads = list() for h in range(n_head): # Up-sampling att", "= cv.GaussianBlur(att,(11,11),0) map[:,:,h] = att maps.append( 
(0.5*sample+0.5*map) ) img = cv.hconcat(maps)*255 img =", "plt.pause(0.1) t += 1 \"\"\" n_up = 8 for attmap,sample in zip(alpha,image): #", "sample = np.moveaxis(sample,0,2) maps = list() for n in range(n_task): map = np.zeros([", "1 \"\"\" n_up = 8 for attmap,sample in zip(alpha,image): # a: [n_head,h,w,n_task] #", "np.expand_dims(att,axis=0) # [1,H,W] # Apply sample = att #sample*att heads.append( sample ) #np.moveaxis(sample,0,2)", "s: [3,H,W] sample = np.moveaxis(sample,0,2) maps = list() for n in range(n_task): map", "cv.GaussianBlur(att,(11,11),0) map[:,:,h] = att maps.append( (0.5*sample+0.5*map) ) img = cv.hconcat(maps)*255 img = img.astype('float32')", "outpath = '/media/victor/Documentos/Thesis/AttentionMap/Resume10' dimImage = ( 96,192) dimEncode = ( 12, 24) n_head", "done\\n') t = 0 # Genera mapas de atencion en video alpha =", "data with h5py.File(filename, 'r') as h5_file: image = np.array(h5_file['image']) alpha = np.array(h5_file['alpha']) print('Load", "h in range(n_head): # Up-sampling att = attmap[h,:,:,n] att = cv.resize(att,None,fx=n_up,fy=n_up, interpolation =", "img = img.astype('float32') img = cv.cvtColor(img,cv.COLOR_RGB2BGR) fileout = os.path.join(outpath,'sample%i.png'%t) cv.imwrite(fileout,img) #plt.imshow(cv.hconcat(maps)) #fileout =", "return int(num) filesname = glob.glob(os.path.join(path,'*.sy')) filesname.sort() filename = '/media/victor/Documentos/resume10.sy' # Getting data with", "= os.path.join(outpath,'sample%i.png'%t) cv.imwrite(fileout,img) #plt.imshow(cv.hconcat(maps)) #fileout = os.path.join(outpath,'sample%i.png'%t) #plt.savefig(fileout) #print('Create %s'%fileout) #plt.title('Frame ' +", "12, 24) n_head = 2 n_task = 2 n_sample = 120*20 def getint(name):", "Genera mapas de atencion en video alpha = alpha.reshape([n_sample,n_head,dimEncode[0],dimEncode[1],n_task]) alpha = alpha/alpha.max() \"\"\"", "import os import glob import h5py import numpy as np import cv2 as", "def getint(name): basename = name.partition('.') _, 
num = basename[0].split('resume') return int(num) filesname =", "= alpha.reshape([n_sample,n_head,dimEncode[0],dimEncode[1],n_task]) alpha = alpha/alpha.max() \"\"\" for attmap,sample in zip(alpha,image): # a: [n_head,h,w,n_task]", "96,192) dimEncode = ( 12, 24) n_head = 2 n_task = 2 n_sample" ]
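The `np.moveaxis(sample,0,2)` call in the loop converts each sample from channels-first `[3,H,W]` to channels-last `[H,W,3]` so it can be blended and written with OpenCV. A minimal sketch of that reorder, using a zero array in place of a real frame:

```python
import numpy as np

# Fake channels-first image, matching the [3,H,W] samples in the script
sample = np.zeros((3, 96, 192))

# Move axis 0 (channels) to position 2: CHW -> HWC
hwc = np.moveaxis(sample, 0, 2)

print(hwc.shape)  # (96, 192, 3)
```

`np.moveaxis` returns a view, so this reorder is cheap even for long frame sequences.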
import sys
import os
import shutil

if len(sys.argv) < 2:
    print("Failed to obtain GView.hpp location")
    exit(1)
header_location = sys.argv[1]
if not os.path.exists(header_location):
    print("Path {} does not exist!".format(header_location))
    exit(1)

default_version_to_update = 1  # major=0, minor=1, patch=2
if len(sys.argv) > 2:
    version_to_update = sys.argv[2]
    defined_versions = {"major": 0, "minor": 1, "patch": 2}
    default_version_to_update = defined_versions[version_to_update]

reset_lower_versions = True
header_output_location = header_location + '.new'
found_version = False
with open(header_location, 'r') as f:
    with open(header_output_location, 'w') as g:
        for line in f:
            if line.startswith('#define GVIEW_VERSION '):
                version = line.split('#define GVIEW_VERSION ')[1].strip(' \r\n\t\"')
                version_array = version.split('.')
                # Bump the selected component and reset the lower-order ones
                value = int(version_array[default_version_to_update]) + 1
                version_array[default_version_to_update] = value
                for i in range(default_version_to_update + 1, 3):
                    version_array[i] = 0
                version = "{}.{}.{}".format(version_array[0], version_array[1], version_array[2])
                line = '#define GVIEW_VERSION "{}"\n'.format(version)
                found_version = True
                os.putenv('GVIEW_VERSION', version)
            g.write(line)

if not found_version:
    print("Failed to find GVIEW_VERSION")
    exit(1)
shutil.move(header_output_location, header_location)
exit(0)
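The bump-and-reset arithmetic in the script above can be exercised in isolation; `bump` is a hypothetical helper (not part of the script) that mirrors its index math, where major=0, minor=1, patch=2 and every lower-order component resets to 0:

```python
def bump(version, part="minor"):
    # Same indexing convention as the script: major=0, minor=1, patch=2
    idx = {"major": 0, "minor": 1, "patch": 2}[part]
    nums = version.split('.')
    nums[idx] = str(int(nums[idx]) + 1)
    # Reset every lower-order component, as the script does
    for i in range(idx + 1, 3):
        nums[i] = '0'
    return "{}.{}.{}".format(nums[0], nums[1], nums[2])

print(bump("1.2.3", "minor"))  # 1.3.0
print(bump("1.2.3", "major"))  # 2.0.0
print(bump("1.2.3", "patch"))  # 1.2.4
```

Bumping `minor` on `1.2.3` yields `1.3.0`, not `1.3.3`, which is why the reset loop matters.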
<reponame>sdgdsffdsfff/qtalk_search
#!/usr/bin/env python
# -*- coding:utf-8 -*-
__author__ = 'jingyu.he'
import redis
from redis import sentinel
import json
from conf.cache_params_define import *
# from utils.logger_conf import configure_logger
# log_path = get_logger_file(name='reids.log')
# redis_log = configure_logger('redis', log_path)

try:
    if pre_rs_hosts:  # sentinel deployment (the original guard is truncated in the source)
        _hosts = [hp.split(':') for hp in pre_rs_hosts]
        hosts = [(hp[0].strip(), int(hp[1].strip())) for hp in _hosts]
        r_sentinel = sentinel.Sentinel(hosts, socket_timeout=r_timeout)
        redis_cli = r_sentinel.master_for(r_master, socket_timeout=r_timeout, password=<PASSWORD>, db=r_database, decode_responses=True)
    else:
        redis_cli = redis.StrictRedis(host=r_host, port=r_port, db=r_database, password=r_password, decode_responses=True)
except (KeyError, ValueError, IndexError) as e:
    raise TypeError('wrong configure pattern')
    # redis_log.exception('wrong configure pattern')
    # exit(0)


class RedisUtil:
    def __init__(self):
        self.redis = redis_cli
        self.single_key = SINGLE_KEY
        self.muc_key = MUC_KEY
        self.single_trace_key = SINGLE_TRACE_KEY
        self.muc_trace_key = MUC_TRACE_KEY
        self.user_registed_mucs = USER_MUCS
        # attribute name truncated in the source:
        # ... = [self.single_key, self.muc_key, self.single_trace_key, self.muc_trace_key]

    def get_user_habit(self, user_id):
        """
        Get the user's cache from redis,
        including single-/group-chat order and single-/group-chat frequency.
        :param user_id:
        :return:
        """
        router = [self.single_key, self.muc_key, self.single_trace_key, self.muc_trace_key, self.user_registed_mucs]
        habit = {}
        for key in router:
            _k = key + '_' + user_id
            if key in [self.single_key, self.muc_key, self.user_registed_mucs]:
                habit[key] = self.redis.lrange(name=_k, start=0, end=-1)
            else:
                habit[key] = self.redis.zrevrangebyscore(name=_k, max='+inf', min=10)  # further arguments truncated in the source
        return habit

    def get_all_user_data(self, domain=''):
        if domain:
            __k = self.all_user_key + '_' + domain
        else:
            __k = self.all_user_key
        user_data = self.redis.get(name=__k)
        try:
            if not user_data:
                return []
            return json.loads(user_data)
        except json.JSONDecodeError:
            return []

    def set_all_user_data(self, data, domain=''):
        data = json.dumps(data, ensure_ascii=False)
        if domain:
            __k = self.all_user_key + '_' + domain
        else:
            __k = self.all_user_key
        self.redis.set(name=__k, value=data, ex=86400)

    def get_single_lookback(self, user, term):
        res = self.redis.get(LOOKBACK_SINGLE_CACHE + '_' + user + '_' + term)
        if res:
            try:
                if not res:
                    return None
                res = json.loads(res)
            except Exception as e:
                print('LOAD SINGLE LOOKBACK ERROR {}'.format(e))
                return None
        return res

    def set_single_lookback(self, user, term, data):
        self.redis.set(name=LOOKBACK_SINGLE_CACHE + '_' + user + '_' + term, value=json.dumps(data, ensure_ascii=False), ex=300)

    def get_muc_lookback(self, user, term):
        res = self.redis.get(LOOKBACK_MUC_CACHE + '_' + user + '_' + term)
        if res:
            try:
                res = json.loads(res)
            except Exception as e:
                print('LOAD MUC LOOKBACK ERROR {}'.format(e))
                res = None
        return res

    def set_muc_lookback(self, user, term, data):
        self.redis.set(name=LOOKBACK_MUC_CACHE + '_' + user + '_' + term, value=json.dumps(data, ensure_ascii=False), ex=300)

    def get_agg_cache(self, user, term):
        res = self.redis.get(LOOKBACK_AGG_CACHE + '_' + user)
        try:
            if not res:
                return []
            res = json.loads(res)
        except Exception as __e:
            print(__e)
            return []
        if res.get('term', '') != term:
            return []
        return res['data']

    def set_agg_cache(self, user, term, data):
        """
        user - { 'key' : term, 'data': _info }
        :param user:
        :param term:
        :return:
        """
        name = LOOKBACK_AGG_CACHE + '_' + user
        info = {'term': term, 'data': data}
        self.redis.set(name=name, value=json.dumps(info, ensure_ascii=False))  # expiry argument truncated in the source
hp in _hosts] r_sentinel = sentinel.Sentinel(hosts, socket_timeout=r_timeout) redis_cli", "个人、群组聊天频率 :param user_id: :return: \"\"\" router = [self.single_key, self.muc_key, self.single_trace_key, self.muc_trace_key, self.user_registed_mucs] habit", "as e: print('LOAD SINGLE LOOKBACK ERROR {}'.format(e)) return None return res def set_single_lookback(self,", "res = self.redis.get(name=name) try: if not res: return [] res = json.loads(res) except", "LOOKBACK_AGG_CACHE + '_' + user info = {'term': term, 'data': data} self.redis.set(name=name, value=json.dumps(info,", "= get_logger_file(name='reids.log') # redis_log = configure_logger('redis', log_path) try: if if_redis_sentinel: _hosts = [hp.split(':')", "print('LOAD SINGLE LOOKBACK ERROR {}'.format(e)) res = None return res def set_muc_lookback(self, user,", "\"\"\" name = LOOKBACK_AGG_CACHE + '_' + user res = self.redis.get(name=name) try: if", "else: __k = self.all_user_key user_data = self.redis.get(name=__k) try: if not user_data: return []", "import sentinel import json from conf.cache_params_define import * # from utils.logger_conf import configure_logger", "None res = json.loads(res) except Exception as e: print('LOAD SINGLE LOOKBACK ERROR {}'.format(e))", "'_' + user + '_' + term, value=json.dumps(data, ensure_ascii=False), ex=300) def get_muc_lookback(self, user,", "if not res: return None res = json.loads(res) except Exception as e: print('LOAD", "'_' + user + '_' + term) if res: try: if not res:", "= redis.StrictRedis(host=r_host, port=r_port, db=r_database, password=r_password, decode_responses=True) except (KeyError, ValueError, IndexError) as e: raise", "class RedisUtil: def __init__(self): self.redis = redis_cli self.single_key = SINGLE_KEY self.muc_key = MUC_KEY", "json.loads(res) except Exception as e: print('LOAD SINGLE LOOKBACK ERROR {}'.format(e)) return None return", "try: if if_redis_sentinel: _hosts = [hp.split(':') for hp in pre_rs_hosts] hosts = [(hp[0].strip(),", "min=10, start=0, num=10) # 
TODO 这个num可能应该走limit 和 offset return habit def get_all_user_data(self, domain=''):", ":return: \"\"\" name = LOOKBACK_AGG_CACHE + '_' + user res = self.redis.get(name=name) try:", "+ term) if res: try: if not res: return None res = json.loads(res)", "这个num可能应该走limit 和 offset return habit def get_all_user_data(self, domain=''): if domain: __k = self.all_user_key", "user, term): res = self.redis.get(LOOKBACK_MUC_CACHE + '_' + user + '_' + term)", "= json.loads(res) except Exception as __e: print(__e) return [] if res.get('term', '') !=", "data): self.redis.set(name=LOOKBACK_SINGLE_CACHE + '_' + user + '_' + term, value=json.dumps(data, ensure_ascii=False), ex=300)", ":param term: :return: \"\"\" name = LOOKBACK_AGG_CACHE + '_' + user res =", "try: if not user_data: return [] user_data = json.loads(user_data) return user_data except json.JSONDecodeError:", "return user_data except json.JSONDecodeError: return [] def set_all_user_data(self, data, domain=''): data = json.dumps(data,", "set_muc_lookback(self, user, term, data): self.redis.set(name=LOOKBACK_MUC_CACHE + '_' + user + '_' + term,", "self.user_registed_mucs = USER_MUCS self.all_user_key = ALL_USER_DATA_CACHE # self.router = [self.single_key, self.muc_key, self.single_trace_key, self.muc_trace_key]", "self.all_user_key = ALL_USER_DATA_CACHE # self.router = [self.single_key, self.muc_key, self.single_trace_key, self.muc_trace_key] def get_user_habit(self, user_id):", "= self.redis.get(LOOKBACK_SINGLE_CACHE + '_' + user + '_' + term) if res: try:", "user_id if key in [self.single_key, self.muc_key, self.user_registed_mucs]: habit[key] = self.redis.lrange(name=_k, start=0, end=-1) else:", "= self.all_user_key user_data = self.redis.get(name=__k) try: if not user_data: return [] user_data =", "+ user + '_' + term, value=json.dumps(data, ensure_ascii=False), ex=300) def get_muc_lookback(self, user, term):", "None return res def set_single_lookback(self, user, term, data): self.redis.set(name=LOOKBACK_SINGLE_CACHE + 
'_' + user", "SINGLE LOOKBACK ERROR {}'.format(e)) return None return res def set_single_lookback(self, user, term, data):", "json.JSONDecodeError: return [] def set_all_user_data(self, data, domain=''): data = json.dumps(data, ensure_ascii=False) if domain:", "= None return res def set_muc_lookback(self, user, term, data): self.redis.set(name=LOOKBACK_MUC_CACHE + '_' +", "key + '_' + user_id if key in [self.single_key, self.muc_key, self.user_registed_mucs]: habit[key] =", "_hosts = [hp.split(':') for hp in pre_rs_hosts] hosts = [(hp[0].strip(), int(hp[1].strip())) for hp", "\"\"\" 获取redis中用户的缓存 包括个人、群组聊天顺序 个人、群组聊天频率 :param user_id: :return: \"\"\" router = [self.single_key, self.muc_key, self.single_trace_key,", "habit = {} for key in router: _k = key + '_' +", "'_' + user + '_' + term, value=json.dumps(data, ensure_ascii=False), ex=300) def get_agg_cache(self, user,", "res.get('term', '') != term: self.redis.delete(name) return [] return res['data'] def set_agg_cache(self, user, term,", "[] def set_all_user_data(self, data, domain=''): data = json.dumps(data, ensure_ascii=False) if domain: __k =", "res = json.loads(res) except Exception as __e: print(__e) return [] if res.get('term', '')", "+ user res = self.redis.get(name=name) try: if not res: return [] res =", "# self.router = [self.single_key, self.muc_key, self.single_trace_key, self.muc_trace_key] def get_user_habit(self, user_id): \"\"\" 获取redis中用户的缓存 包括个人、群组聊天顺序", "[hp.split(':') for hp in pre_rs_hosts] hosts = [(hp[0].strip(), int(hp[1].strip())) for hp in _hosts]", "habit def get_all_user_data(self, domain=''): if domain: __k = self.all_user_key + '_' + domain", "user: :param term: :return: \"\"\" name = LOOKBACK_AGG_CACHE + '_' + user res", "_k = key + '_' + user_id if key in [self.single_key, self.muc_key, self.user_registed_mucs]:", "self.all_user_key + '_' + domain else: __k = self.all_user_key self.redis.set(name=__k, value=data, ex=86400) def", "user + '_' + term, value=json.dumps(data, 
ensure_ascii=False), ex=300) def get_agg_cache(self, user, term): \"\"\"", "except (KeyError, ValueError, IndexError) as e: raise TypeError('wrong configure pattern') # redis_log.exception('wrong configure", "user + '_' + term, value=json.dumps(data, ensure_ascii=False), ex=300) def get_muc_lookback(self, user, term): res", "res['data'] def set_agg_cache(self, user, term, data): name = LOOKBACK_AGG_CACHE + '_' + user", "# redis_log = configure_logger('redis', log_path) try: if if_redis_sentinel: _hosts = [hp.split(':') for hp", "# exit(0) class RedisUtil: def __init__(self): self.redis = redis_cli self.single_key = SINGLE_KEY self.muc_key", "= configure_logger('redis', log_path) try: if if_redis_sentinel: _hosts = [hp.split(':') for hp in pre_rs_hosts]", "def get_agg_cache(self, user, term): \"\"\" 结构: user - { 'key' : term, 'data':", "= self.redis.get(LOOKBACK_MUC_CACHE + '_' + user + '_' + term) if res: try:", "term): \"\"\" 结构: user - { 'key' : term, 'data': _info } :param", "res = json.loads(res) except Exception as e: print('LOAD SINGLE LOOKBACK ERROR {}'.format(e)) return", "configure pattern') # redis_log.exception('wrong configure pattern') # exit(0) class RedisUtil: def __init__(self): self.redis", "MUC_TRACE_KEY self.user_registed_mucs = USER_MUCS self.all_user_key = ALL_USER_DATA_CACHE # self.router = [self.single_key, self.muc_key, self.single_trace_key,", "+ domain else: __k = self.all_user_key user_data = self.redis.get(name=__k) try: if not user_data:", "ValueError, IndexError) as e: raise TypeError('wrong configure pattern') # redis_log.exception('wrong configure pattern') #", "term, value=json.dumps(data, ensure_ascii=False), ex=300) def get_muc_lookback(self, user, term): res = self.redis.get(LOOKBACK_MUC_CACHE + '_'", "LOOKBACK ERROR {}'.format(e)) return None return res def set_single_lookback(self, user, term, data): self.redis.set(name=LOOKBACK_SINGLE_CACHE", "domain: __k = self.all_user_key + '_' + domain else: __k = self.all_user_key 
self.redis.set(name=__k,", "offset return habit def get_all_user_data(self, domain=''): if domain: __k = self.all_user_key + '_'", "= 'jingyu.he' import redis from redis import sentinel import json from conf.cache_params_define import", "return [] res = json.loads(res) except Exception as __e: print(__e) return [] if", "SINGLE_TRACE_KEY self.muc_trace_key = MUC_TRACE_KEY self.user_registed_mucs = USER_MUCS self.all_user_key = ALL_USER_DATA_CACHE # self.router =", "+ '_' + term) if res: try: if not res: return None res", "* # from utils.logger_conf import configure_logger # log_path = get_logger_file(name='reids.log') # redis_log =", "[] return res['data'] def set_agg_cache(self, user, term, data): name = LOOKBACK_AGG_CACHE + '_'", "ensure_ascii=False) if domain: __k = self.all_user_key + '_' + domain else: __k =", "json.loads(res) except Exception as e: print('LOAD SINGLE LOOKBACK ERROR {}'.format(e)) res = None", "get_logger_file(name='reids.log') # redis_log = configure_logger('redis', log_path) try: if if_redis_sentinel: _hosts = [hp.split(':') for", "# TODO 这个num可能应该走limit 和 offset return habit def get_all_user_data(self, domain=''): if domain: __k", "term, data): name = LOOKBACK_AGG_CACHE + '_' + user info = {'term': term,", "'') != term: self.redis.delete(name) return [] return res['data'] def set_agg_cache(self, user, term, data):", "def set_agg_cache(self, user, term, data): name = LOOKBACK_AGG_CACHE + '_' + user info", "[self.single_key, self.muc_key, self.single_trace_key, self.muc_trace_key] def get_user_habit(self, user_id): \"\"\" 获取redis中用户的缓存 包括个人、群组聊天顺序 个人、群组聊天频率 :param user_id:", "= json.loads(user_data) return user_data except json.JSONDecodeError: return [] def set_all_user_data(self, data, domain=''): data", "return [] user_data = json.loads(user_data) return user_data except json.JSONDecodeError: return [] def set_all_user_data(self,", "except Exception as __e: print(__e) return [] if res.get('term', '') != term: self.redis.delete(name)", 
"self.single_key = SINGLE_KEY self.muc_key = MUC_KEY self.single_trace_key = SINGLE_TRACE_KEY self.muc_trace_key = MUC_TRACE_KEY self.user_registed_mucs", "__author__ = 'jingyu.he' import redis from redis import sentinel import json from conf.cache_params_define", "res def set_single_lookback(self, user, term, data): self.redis.set(name=LOOKBACK_SINGLE_CACHE + '_' + user + '_'", "= key + '_' + user_id if key in [self.single_key, self.muc_key, self.user_registed_mucs]: habit[key]", "+ user + '_' + term) if res: try: if not res: return", "term: :return: \"\"\" name = LOOKBACK_AGG_CACHE + '_' + user res = self.redis.get(name=name)", "router = [self.single_key, self.muc_key, self.single_trace_key, self.muc_trace_key, self.user_registed_mucs] habit = {} for key in", "self.redis.delete(name) return [] return res['data'] def set_agg_cache(self, user, term, data): name = LOOKBACK_AGG_CACHE", "[] res = json.loads(res) except Exception as __e: print(__e) return [] if res.get('term',", "ERROR {}'.format(e)) res = None return res def set_muc_lookback(self, user, term, data): self.redis.set(name=LOOKBACK_MUC_CACHE", "= ALL_USER_DATA_CACHE # self.router = [self.single_key, self.muc_key, self.single_trace_key, self.muc_trace_key] def get_user_habit(self, user_id): \"\"\"", ":return: \"\"\" router = [self.single_key, self.muc_key, self.single_trace_key, self.muc_trace_key, self.user_registed_mucs] habit = {} for", "user_id): \"\"\" 获取redis中用户的缓存 包括个人、群组聊天顺序 个人、群组聊天频率 :param user_id: :return: \"\"\" router = [self.single_key, self.muc_key,", "start=0, end=-1) else: habit[key] = self.redis.zrevrangebyscore(name=_k, max='+inf', min=10, start=0, num=10) # TODO 这个num可能应该走limit", "user + '_' + term) if res: try: if not res: return None", "key in [self.single_key, self.muc_key, self.user_registed_mucs]: habit[key] = self.redis.lrange(name=_k, start=0, end=-1) else: habit[key] =", "ex=300) def get_agg_cache(self, user, term): \"\"\" 结构: user - { 'key' : term,", "user_data = 
json.loads(user_data) return user_data except json.JSONDecodeError: return [] def set_all_user_data(self, data, domain=''):", "domain: __k = self.all_user_key + '_' + domain else: __k = self.all_user_key user_data", "+ user + '_' + term, value=json.dumps(data, ensure_ascii=False), ex=300) def get_agg_cache(self, user, term):", "pattern') # redis_log.exception('wrong configure pattern') # exit(0) class RedisUtil: def __init__(self): self.redis =", "_info } :param user: :param term: :return: \"\"\" name = LOOKBACK_AGG_CACHE + '_'", "IndexError) as e: raise TypeError('wrong configure pattern') # redis_log.exception('wrong configure pattern') # exit(0)", "- { 'key' : term, 'data': _info } :param user: :param term: :return:", "term): res = self.redis.get(LOOKBACK_MUC_CACHE + '_' + user + '_' + term) if", "res: try: if not res: return None res = json.loads(res) except Exception as", "name = LOOKBACK_AGG_CACHE + '_' + user res = self.redis.get(name=name) try: if not", "= self.all_user_key + '_' + domain else: __k = self.all_user_key user_data = self.redis.get(name=__k)", "= json.loads(res) except Exception as e: print('LOAD SINGLE LOOKBACK ERROR {}'.format(e)) return None", "user, term): \"\"\" 结构: user - { 'key' : term, 'data': _info }", "包括个人、群组聊天顺序 个人、群组聊天频率 :param user_id: :return: \"\"\" router = [self.single_key, self.muc_key, self.single_trace_key, self.muc_trace_key, self.user_registed_mucs]", "password=<PASSWORD>, db=r_database, decode_responses=True) else: redis_cli = redis.StrictRedis(host=r_host, port=r_port, db=r_database, password=r_password, decode_responses=True) except (KeyError,", "[self.single_key, self.muc_key, self.user_registed_mucs]: habit[key] = self.redis.lrange(name=_k, start=0, end=-1) else: habit[key] = self.redis.zrevrangebyscore(name=_k, max='+inf',", "data): self.redis.set(name=LOOKBACK_MUC_CACHE + '_' + user + '_' + term, value=json.dumps(data, ensure_ascii=False), ex=300)", "name = LOOKBACK_AGG_CACHE + '_' + user info = {'term': term, 
'data': data}", "set_single_lookback(self, user, term, data): self.redis.set(name=LOOKBACK_SINGLE_CACHE + '_' + user + '_' + term,", "!= term: self.redis.delete(name) return [] return res['data'] def set_agg_cache(self, user, term, data): name", "ex=86400) def get_single_lookback(self, user, term): res = self.redis.get(LOOKBACK_SINGLE_CACHE + '_' + user +", "user, term, data): name = LOOKBACK_AGG_CACHE + '_' + user info = {'term':", "+ '_' + user + '_' + term, value=json.dumps(data, ensure_ascii=False), ex=300) def get_muc_lookback(self,", "end=-1) else: habit[key] = self.redis.zrevrangebyscore(name=_k, max='+inf', min=10, start=0, num=10) # TODO 这个num可能应该走limit 和", "try: if not res: return None res = json.loads(res) except Exception as e:", "hp in pre_rs_hosts] hosts = [(hp[0].strip(), int(hp[1].strip())) for hp in _hosts] r_sentinel =", "self.muc_trace_key = MUC_TRACE_KEY self.user_registed_mucs = USER_MUCS self.all_user_key = ALL_USER_DATA_CACHE # self.router = [self.single_key,", "exit(0) class RedisUtil: def __init__(self): self.redis = redis_cli self.single_key = SINGLE_KEY self.muc_key =", "self.redis.lrange(name=_k, start=0, end=-1) else: habit[key] = self.redis.zrevrangebyscore(name=_k, max='+inf', min=10, start=0, num=10) # TODO", "if res.get('term', '') != term: self.redis.delete(name) return [] return res['data'] def set_agg_cache(self, user,", "redis_cli = redis.StrictRedis(host=r_host, port=r_port, db=r_database, password=r_password, decode_responses=True) except (KeyError, ValueError, IndexError) as e:", "value=json.dumps(data, ensure_ascii=False), ex=300) def get_agg_cache(self, user, term): \"\"\" 结构: user - { 'key'", "sentinel import json from conf.cache_params_define import * # from utils.logger_conf import configure_logger #", "[(hp[0].strip(), int(hp[1].strip())) for hp in _hosts] r_sentinel = sentinel.Sentinel(hosts, socket_timeout=r_timeout) redis_cli = r_sentinel.master_for(r_master,", "= LOOKBACK_AGG_CACHE + '_' + user res = 
self.redis.get(name=name) try: if not res:", "e: print('LOAD SINGLE LOOKBACK ERROR {}'.format(e)) return None return res def set_single_lookback(self, user,", "+ '_' + domain else: __k = self.all_user_key self.redis.set(name=__k, value=data, ex=86400) def get_single_lookback(self,", "和 offset return habit def get_all_user_data(self, domain=''): if domain: __k = self.all_user_key +", "'_' + term, value=json.dumps(data, ensure_ascii=False), ex=300) def get_agg_cache(self, user, term): \"\"\" 结构: user", "get_muc_lookback(self, user, term): res = self.redis.get(LOOKBACK_MUC_CACHE + '_' + user + '_' +", "self.all_user_key user_data = self.redis.get(name=__k) try: if not user_data: return [] user_data = json.loads(user_data)", "print('LOAD SINGLE LOOKBACK ERROR {}'.format(e)) return None return res def set_single_lookback(self, user, term,", "json.loads(user_data) return user_data except json.JSONDecodeError: return [] def set_all_user_data(self, data, domain=''): data =", "in _hosts] r_sentinel = sentinel.Sentinel(hosts, socket_timeout=r_timeout) redis_cli = r_sentinel.master_for(r_master, socket_timeout=r_timeout, password=<PASSWORD>, db=r_database, decode_responses=True)", "TypeError('wrong configure pattern') # redis_log.exception('wrong configure pattern') # exit(0) class RedisUtil: def __init__(self):", "self.all_user_key self.redis.set(name=__k, value=data, ex=86400) def get_single_lookback(self, user, term): res = self.redis.get(LOOKBACK_SINGLE_CACHE + '_'", "+ user_id if key in [self.single_key, self.muc_key, self.user_registed_mucs]: habit[key] = self.redis.lrange(name=_k, start=0, end=-1)", "log_path = get_logger_file(name='reids.log') # redis_log = configure_logger('redis', log_path) try: if if_redis_sentinel: _hosts =", "self.redis.get(LOOKBACK_MUC_CACHE + '_' + user + '_' + term) if res: try: if", "def set_single_lookback(self, user, term, data): self.redis.set(name=LOOKBACK_SINGLE_CACHE + '_' + user + '_' +", "\"\"\" router = [self.single_key, 
self.muc_key, self.single_trace_key, self.muc_trace_key, self.user_registed_mucs] habit = {} for key", "term): res = self.redis.get(LOOKBACK_SINGLE_CACHE + '_' + user + '_' + term) if", "db=r_database, password=r_password, decode_responses=True) except (KeyError, ValueError, IndexError) as e: raise TypeError('wrong configure pattern')", "log_path) try: if if_redis_sentinel: _hosts = [hp.split(':') for hp in pre_rs_hosts] hosts =", "pattern') # exit(0) class RedisUtil: def __init__(self): self.redis = redis_cli self.single_key = SINGLE_KEY", "self.muc_key = MUC_KEY self.single_trace_key = SINGLE_TRACE_KEY self.muc_trace_key = MUC_TRACE_KEY self.user_registed_mucs = USER_MUCS self.all_user_key", "as __e: print(__e) return [] if res.get('term', '') != term: self.redis.delete(name) return []", "import redis from redis import sentinel import json from conf.cache_params_define import * #", "= {} for key in router: _k = key + '_' + user_id", "if_redis_sentinel: _hosts = [hp.split(':') for hp in pre_rs_hosts] hosts = [(hp[0].strip(), int(hp[1].strip())) for", "decode_responses=True) except (KeyError, ValueError, IndexError) as e: raise TypeError('wrong configure pattern') # redis_log.exception('wrong", "self.user_registed_mucs]: habit[key] = self.redis.lrange(name=_k, start=0, end=-1) else: habit[key] = self.redis.zrevrangebyscore(name=_k, max='+inf', min=10, start=0,", "= self.redis.lrange(name=_k, start=0, end=-1) else: habit[key] = self.redis.zrevrangebyscore(name=_k, max='+inf', min=10, start=0, num=10) #", "_hosts] r_sentinel = sentinel.Sentinel(hosts, socket_timeout=r_timeout) redis_cli = r_sentinel.master_for(r_master, socket_timeout=r_timeout, password=<PASSWORD>, db=r_database, decode_responses=True) else:", "= SINGLE_TRACE_KEY self.muc_trace_key = MUC_TRACE_KEY self.user_registed_mucs = USER_MUCS self.all_user_key = ALL_USER_DATA_CACHE # self.router", "def set_all_user_data(self, data, domain=''): data = json.dumps(data, ensure_ascii=False) if domain: __k = 
self.all_user_key", "Exception as __e: print(__e) return [] if res.get('term', '') != term: self.redis.delete(name) return", "password=r_password, decode_responses=True) except (KeyError, ValueError, IndexError) as e: raise TypeError('wrong configure pattern') #", "int(hp[1].strip())) for hp in _hosts] r_sentinel = sentinel.Sentinel(hosts, socket_timeout=r_timeout) redis_cli = r_sentinel.master_for(r_master, socket_timeout=r_timeout,", "from utils.logger_conf import configure_logger # log_path = get_logger_file(name='reids.log') # redis_log = configure_logger('redis', log_path)", "= MUC_TRACE_KEY self.user_registed_mucs = USER_MUCS self.all_user_key = ALL_USER_DATA_CACHE # self.router = [self.single_key, self.muc_key,", "= [(hp[0].strip(), int(hp[1].strip())) for hp in _hosts] r_sentinel = sentinel.Sentinel(hosts, socket_timeout=r_timeout) redis_cli =", "not res: return [] res = json.loads(res) except Exception as __e: print(__e) return", "return res['data'] def set_agg_cache(self, user, term, data): name = LOOKBACK_AGG_CACHE + '_' +", "import json from conf.cache_params_define import * # from utils.logger_conf import configure_logger # log_path", "except Exception as e: print('LOAD SINGLE LOOKBACK ERROR {}'.format(e)) res = None return", "domain else: __k = self.all_user_key self.redis.set(name=__k, value=data, ex=86400) def get_single_lookback(self, user, term): res", "sentinel.Sentinel(hosts, socket_timeout=r_timeout) redis_cli = r_sentinel.master_for(r_master, socket_timeout=r_timeout, password=<PASSWORD>, db=r_database, decode_responses=True) else: redis_cli = redis.StrictRedis(host=r_host,", "self.all_user_key + '_' + domain else: __k = self.all_user_key user_data = self.redis.get(name=__k) try:", "from conf.cache_params_define import * # from utils.logger_conf import configure_logger # log_path = get_logger_file(name='reids.log')", "for hp in pre_rs_hosts] hosts = [(hp[0].strip(), int(hp[1].strip())) for hp in _hosts] r_sentinel", "#!/usr/bin/env python # 
-*- coding:utf-8 -*- __author__ = 'jingyu.he' import redis from redis", "[self.single_key, self.muc_key, self.single_trace_key, self.muc_trace_key, self.user_registed_mucs] habit = {} for key in router: _k", "+ term, value=json.dumps(data, ensure_ascii=False), ex=300) def get_muc_lookback(self, user, term): res = self.redis.get(LOOKBACK_MUC_CACHE +", "# -*- coding:utf-8 -*- __author__ = 'jingyu.he' import redis from redis import sentinel", "user_data: return [] user_data = json.loads(user_data) return user_data except json.JSONDecodeError: return [] def", "term, data): self.redis.set(name=LOOKBACK_MUC_CACHE + '_' + user + '_' + term, value=json.dumps(data, ensure_ascii=False),", "not res: return None res = json.loads(res) except Exception as e: print('LOAD SINGLE", "not user_data: return [] user_data = json.loads(user_data) return user_data except json.JSONDecodeError: return []", "'_' + term) if res: try: if not res: return None res =", "if not user_data: return [] user_data = json.loads(user_data) return user_data except json.JSONDecodeError: return", "'_' + user_id if key in [self.single_key, self.muc_key, self.user_registed_mucs]: habit[key] = self.redis.lrange(name=_k, start=0,", "+ '_' + user res = self.redis.get(name=name) try: if not res: return []", "e: print('LOAD SINGLE LOOKBACK ERROR {}'.format(e)) res = None return res def set_muc_lookback(self,", "self.redis.zrevrangebyscore(name=_k, max='+inf', min=10, start=0, num=10) # TODO 这个num可能应该走limit 和 offset return habit def", "+ '_' + user_id if key in [self.single_key, self.muc_key, self.user_registed_mucs]: habit[key] = self.redis.lrange(name=_k,", "as e: print('LOAD SINGLE LOOKBACK ERROR {}'.format(e)) res = None return res def", "in router: _k = key + '_' + user_id if key in [self.single_key,", "configure_logger('redis', log_path) try: if if_redis_sentinel: _hosts = [hp.split(':') for hp in pre_rs_hosts] hosts", "get_single_lookback(self, user, term): res = self.redis.get(LOOKBACK_SINGLE_CACHE + '_' + 
user + '_' +", "-*- coding:utf-8 -*- __author__ = 'jingyu.he' import redis from redis import sentinel import", "configure_logger # log_path = get_logger_file(name='reids.log') # redis_log = configure_logger('redis', log_path) try: if if_redis_sentinel:", "self.redis.set(name=LOOKBACK_MUC_CACHE + '_' + user + '_' + term, value=json.dumps(data, ensure_ascii=False), ex=300) def", "data = json.dumps(data, ensure_ascii=False) if domain: __k = self.all_user_key + '_' + domain", "redis.StrictRedis(host=r_host, port=r_port, db=r_database, password=r_password, decode_responses=True) except (KeyError, ValueError, IndexError) as e: raise TypeError('wrong", ":param user: :param term: :return: \"\"\" name = LOOKBACK_AGG_CACHE + '_' + user", "+ '_' + user + '_' + term, value=json.dumps(data, ensure_ascii=False), ex=300) def get_agg_cache(self,", "if if_redis_sentinel: _hosts = [hp.split(':') for hp in pre_rs_hosts] hosts = [(hp[0].strip(), int(hp[1].strip()))", "'_' + domain else: __k = self.all_user_key user_data = self.redis.get(name=__k) try: if not", "__k = self.all_user_key + '_' + domain else: __k = self.all_user_key user_data =", "__k = self.all_user_key user_data = self.redis.get(name=__k) try: if not user_data: return [] user_data", "ensure_ascii=False), ex=300) def get_muc_lookback(self, user, term): res = self.redis.get(LOOKBACK_MUC_CACHE + '_' + user", "# log_path = get_logger_file(name='reids.log') # redis_log = configure_logger('redis', log_path) try: if if_redis_sentinel: _hosts", "def get_muc_lookback(self, user, term): res = self.redis.get(LOOKBACK_MUC_CACHE + '_' + user + '_'", "term, 'data': _info } :param user: :param term: :return: \"\"\" name = LOOKBACK_AGG_CACHE", "= redis_cli self.single_key = SINGLE_KEY self.muc_key = MUC_KEY self.single_trace_key = SINGLE_TRACE_KEY self.muc_trace_key =", "ERROR {}'.format(e)) return None return res def set_single_lookback(self, user, term, data): self.redis.set(name=LOOKBACK_SINGLE_CACHE +", "{}'.format(e)) res = None 
# -*- coding:utf-8 -*-
__author__ = 'jingyu.he'
import redis
from redis import sentinel
import json
from conf.cache_params_define import *
# from utils.logger_conf import configure_logger
# log_path = get_logger_file(name='reids.log')
# redis_log = configure_logger('redis', log_path)

try:
    if if_redis_sentinel:
        # Hosts are configured as "host:port" strings; split and normalize them.
        _hosts = [hp.split(':') for hp in pre_rs_hosts]
        hosts = [(hp[0].strip(), int(hp[1].strip())) for hp in _hosts]
        r_sentinel = sentinel.Sentinel(hosts, socket_timeout=r_timeout)
        redis_cli = r_sentinel.master_for(r_master, socket_timeout=r_timeout,
                                          password=<PASSWORD>, db=r_database,
                                          decode_responses=True)
    else:
        redis_cli = redis.StrictRedis(host=r_host, port=r_port, db=r_database,
                                      password=r_password, decode_responses=True)
except (KeyError, ValueError, IndexError) as e:
    raise TypeError('wrong configure pattern')
    # redis_log.exception('wrong configure pattern')
    # exit(0)


class RedisUtil:
    def __init__(self):
        self.redis = redis_cli
        self.single_key = SINGLE_KEY
        self.muc_key = MUC_KEY
        self.single_trace_key = SINGLE_TRACE_KEY
        self.muc_trace_key = MUC_TRACE_KEY
        self.user_registed_mucs = USER_MUCS
        self.all_user_key = ALL_USER_DATA_CACHE
        # self.router = [self.single_key, self.muc_key, self.single_trace_key, self.muc_trace_key]

    def get_user_habit(self, user_id):
        """
        Fetch the user's cached habits from redis: single-chat and group-chat
        (MUC) recency order, plus single-chat and group-chat frequency.
        :param user_id:
        :return:
        """
        router = [self.single_key, self.muc_key, self.single_trace_key,
                  self.muc_trace_key, self.user_registed_mucs]
        habit = {}
        for key in router:
            _k = key + '_' + user_id
            if key in [self.single_key, self.muc_key, self.user_registed_mucs]:
                # Plain lists hold recency order / registered MUCs.
                habit[key] = self.redis.lrange(name=_k, start=0, end=-1)
            else:
                # Sorted sets hold frequency scores; take the top 10 above score 10.
                habit[key] = self.redis.zrevrangebyscore(name=_k, max='+inf', min=10,
                                                         start=0, num=10)
                # TODO: this num should probably honour limit and offset
        return habit

    def get_all_user_data(self, domain=''):
        if domain:
            __k = self.all_user_key + '_' + domain
        else:
            __k = self.all_user_key
        user_data = self.redis.get(name=__k)
        try:
            if not user_data:
                return []
            return json.loads(user_data)
        except json.JSONDecodeError:
            return []

    def set_all_user_data(self, data, domain=''):
        data = json.dumps(data, ensure_ascii=False)
        if domain:
            __k = self.all_user_key + '_' + domain
        else:
            __k = self.all_user_key
        self.redis.set(name=__k, value=data, ex=86400)

    def get_single_lookback(self, user, term):
        res = self.redis.get(LOOKBACK_SINGLE_CACHE + '_' + user + '_' + term)
        if res:
            try:
                res = json.loads(res)
            except Exception as e:
                print('LOAD SINGLE LOOKBACK ERROR {}'.format(e))
                res = None
        return res

    def set_single_lookback(self, user, term, data):
        self.redis.set(name=LOOKBACK_SINGLE_CACHE + '_' + user + '_' + term,
                       value=json.dumps(data, ensure_ascii=False), ex=300)

    def get_muc_lookback(self, user, term):
        res = self.redis.get(LOOKBACK_MUC_CACHE + '_' + user + '_' + term)
        try:
            if not res:
                return None
            res = json.loads(res)
        except Exception as e:
            print('LOAD MUC LOOKBACK ERROR {}'.format(e))
            return None
        return res

    def set_muc_lookback(self, user, term, data):
        self.redis.set(name=LOOKBACK_MUC_CACHE + '_' + user + '_' + term,
                       value=json.dumps(data, ensure_ascii=False), ex=300)

    def get_agg_cache(self, user, term):
        """
        Layout: user - { 'key': term, 'data': _info }
        :param user:
        :param term:
        :return:
        """
        name = LOOKBACK_AGG_CACHE + '_' + user
        res = self.redis.get(name=name)
        try:
            if not res:
                return []
            res = json.loads(res)
        except Exception as __e:
            print(__e)
            return []
        if res.get('term', '') != term:
            # Cached entry belongs to a different term: invalidate it.
            self.redis.delete(name)
            return []
        return res['data']

    def set_agg_cache(self, user, term, data):
        name = LOOKBACK_AGG_CACHE + '_' + user
        info = {'term': term, 'data': data}
        self.redis.set(name=name, value=json.dumps(info, ensure_ascii=False), ex=300)
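The `set_agg_cache`/`get_agg_cache` pair keeps one aggregate entry per user, tagged with the search term that produced it; a lookup with a different term drops the stale entry. A minimal runnable sketch of that invalidation logic, using an in-memory stand-in for the redis client (`FakeRedis` and the `LOOKBACK_AGG_CACHE` value here are assumptions for illustration, not the project's real configuration):

```python
import json

class FakeRedis:
    """In-memory stand-in for the redis client (illustration only)."""
    def __init__(self):
        self.store = {}
    def get(self, name):
        return self.store.get(name)
    def set(self, name, value, ex=None):
        self.store[name] = value  # `ex` (TTL) is ignored in this sketch
    def delete(self, name):
        self.store.pop(name, None)

LOOKBACK_AGG_CACHE = 'lookback_agg'  # assumed value; the real one lives in conf

class AggCache:
    def __init__(self, client):
        self.redis = client

    def set_agg_cache(self, user, term, data):
        name = LOOKBACK_AGG_CACHE + '_' + user
        info = {'term': term, 'data': data}
        self.redis.set(name=name, value=json.dumps(info, ensure_ascii=False), ex=300)

    def get_agg_cache(self, user, term):
        name = LOOKBACK_AGG_CACHE + '_' + user
        res = self.redis.get(name=name)
        try:
            if not res:
                return []
            res = json.loads(res)
        except Exception as __e:
            print(__e)
            return []
        if res.get('term', '') != term:
            self.redis.delete(name)  # stale term: drop the cached entry
            return []
        return res['data']

cache = AggCache(FakeRedis())
cache.set_agg_cache('alice', 'hello', [1, 2, 3])
print(cache.get_agg_cache('alice', 'hello'))  # hit: [1, 2, 3]
print(cache.get_agg_cache('alice', 'world'))  # term changed: [] and entry dropped
print(cache.get_agg_cache('alice', 'hello'))  # entry is gone now: []
```

Note the design consequence: a single mismatched lookup evicts the entry even before its TTL expires, so the cache only ever serves the most recent term per user.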
from rest_framework import viewsets
from rest_framework import mixins
from rest_framework import generics
from rest_framework.views import APIView
from rest_framework.authentication import TokenAuthentication
from rest_framework.permissions import IsAuthenticated
from rest_framework.decorators import authentication_classes, permission_classes
from api.serializers import (
    UserModelSerializer, PartnerModelSerializer, HomeworkModelSerializer,
    UserHomeworkModelSerializer, HomeworkVoucherTypeModelSerializer,
    HomeworkVoucherModelSerializer, UserHomeworkVoucherModelSerializer,
    ForumTypeModelSerializer, ForumModelSerializer, ForumCommentModelSerializer,
    UserForumFavoriteModelSerializer, ContentTypeModelSerializer,
    ContentModelSerializer, UserContentModelSerializer, PaymentModelSerializer
)
from api.models import (
    Partner, User, Homework, UserHomework, HomeworkVoucherType, HomeworkVoucher,
    UserHomeworkVoucher, ForumType, Forum, ForumComment, UserForumFavorite,
    ContentType, Content, UserContent, Payment
)


# No authentication or permission checks on this viewset.
@authentication_classes([])
@permission_classes([])
class UserApiViewSet(viewsets.ModelViewSet):
    queryset = User.objects.all()
    serializer_class = UserModelSerializer


class PartnerApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = Partner.objects.all()
    serializer_class = PartnerModelSerializer


class HomeworkApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = Homework.objects.all()
    serializer_class = HomeworkModelSerializer


class UserHomeworkApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = UserHomework.objects.all()
    serializer_class = UserHomeworkModelSerializer


# List-only: GET is the only verb wired up, delegating to ListModelMixin.list().
class HomeworkVoucherTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = HomeworkVoucherType.objects.all()
    serializer_class = HomeworkVoucherTypeModelSerializer

    def get(self, request, *args, **kwargs):
        return self.list(request, *args, **kwargs)


class HomeworkVoucherListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = HomeworkVoucher.objects.all()
    serializer_class = HomeworkVoucherModelSerializer

    def get(self, request, *args, **kwargs):
        return self.list(request, *args, **kwargs)


class UserHomeworkVoucherApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = UserHomeworkVoucher.objects.all()
    serializer_class = UserHomeworkVoucherModelSerializer


class ForumTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = ForumType.objects.all()
    serializer_class = ForumTypeModelSerializer

    def get(self, request, *args, **kwargs):
        return self.list(request, *args, **kwargs)


class ForumApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = Forum.objects.all()
    serializer_class = ForumModelSerializer


class ForumCommentApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = ForumComment.objects.all()
    serializer_class = ForumCommentModelSerializer


class UserForumFavoriteApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = UserForumFavorite.objects.all()
    serializer_class = UserForumFavoriteModelSerializer


class ContentTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = ContentType.objects.all()
    serializer_class = ContentTypeModelSerializer

    def get(self, request, *args, **kwargs):
        return self.list(request, *args, **kwargs)


class ContentListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = Content.objects.all()
    serializer_class = ContentModelSerializer

    def get(self, request, *args, **kwargs):
        return self.list(request, *args, **kwargs)


class UserContentApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = UserContent.objects.all()
    serializer_class = UserContentModelSerializer


class PaymentApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = Payment.objects.all()
    serializer_class = PaymentModelSerializer
*args, **kwargs) class UserHomeworkVoucherApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes =", "IsAuthenticated from rest_framework.decorators import authentication_classes, permission_classes from api.serializers import ( UserModelSerializer, PartnerModelSerializer, HomeworkModelSerializer,", "authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = UserHomework.objects.all() serializer_class = UserHomeworkModelSerializer class", "[TokenAuthentication] permission_classes = [IsAuthenticated] queryset = UserHomework.objects.all() serializer_class = UserHomeworkModelSerializer class HomeworkVoucherTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView):", "HomeworkVoucherTypeModelSerializer, HomeworkVoucherModelSerializer, UserHomeworkVoucherModelSerializer, ForumTypeModelSerializer, ForumModelSerializer, ForumCommentModelSerializer, UserForumFavoriteModelSerializer, ContentTypeModelSerializer, ContentModelSerializer, UserContentModelSerializer, PaymentModelSerializer ) from", "def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs) class ForumApiViewSet(viewsets.ModelViewSet): authentication_classes =", "self.list(request, *args, **kwargs) class UserContentApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset =", "*args, **kwargs): return self.list(request, *args, **kwargs) class HomeworkVoucherListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes", "Forum, ForumComment, UserForumFavorite, ContentType, Content, UserContent, Payment ) @authentication_classes([]) @permission_classes([]) class UserApiViewSet(viewsets.ModelViewSet): queryset", "UserContentModelSerializer, PaymentModelSerializer ) from api.models import ( Partner, User, Homework, UserHomework, HomeworkVoucherType, 
HomeworkVoucher,", "HomeworkVoucher.objects.all() serializer_class = HomeworkVoucherModelSerializer def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs)", "queryset = User.objects.all() serializer_class = UserModelSerializer class PartnerApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes =", "**kwargs) class UserContentApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = UserContent.objects.all() serializer_class", "request, *args, **kwargs): return self.list(request, *args, **kwargs) class HomeworkVoucherListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication]", "serializer_class = ContentModelSerializer def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs) class", "= [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = ForumType.objects.all() serializer_class = ForumTypeModelSerializer def get(self,", "HomeworkVoucherTypeModelSerializer def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs) class HomeworkVoucherListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView):", "HomeworkVoucherType, HomeworkVoucher, UserHomeworkVoucher, ForumType, Forum, ForumComment, UserForumFavorite, ContentType, Content, UserContent, Payment ) @authentication_classes([])", "serializer_class = UserContentModelSerializer class PaymentApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset =", "self.list(request, *args, **kwargs) class HomeworkVoucherListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset", "*args, **kwargs) class ForumApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] 
permission_classes = [IsAuthenticated] queryset = Forum.objects.all()", "queryset = HomeworkVoucher.objects.all() serializer_class = HomeworkVoucherModelSerializer def get(self, request, *args, **kwargs): return self.list(request,", "self.list(request, *args, **kwargs) class UserHomeworkVoucherApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset =", "HomeworkVoucherModelSerializer def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs) class UserHomeworkVoucherApiViewSet(viewsets.ModelViewSet): authentication_classes", "queryset = Content.objects.all() serializer_class = ContentModelSerializer def get(self, request, *args, **kwargs): return self.list(request,", "UserHomeworkVoucher, ForumType, Forum, ForumComment, UserForumFavorite, ContentType, Content, UserContent, Payment ) @authentication_classes([]) @permission_classes([]) class", "HomeworkVoucherType.objects.all() serializer_class = HomeworkVoucherTypeModelSerializer def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs)", "serializer_class = ForumModelSerializer class ForumCommentApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset =", "*args, **kwargs) class ContentListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset =", "[TokenAuthentication] permission_classes = [IsAuthenticated] queryset = UserHomeworkVoucher.objects.all() serializer_class = UserHomeworkVoucherModelSerializer class ForumTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView):", "serializer_class = HomeworkVoucherTypeModelSerializer def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs) class", "= [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = 
UserHomeworkVoucher.objects.all() serializer_class = UserHomeworkVoucherModelSerializer class ForumTypeListOnlyAPIView(mixins.ListModelMixin,", "HomeworkModelSerializer class UserHomeworkApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = UserHomework.objects.all() serializer_class", "from rest_framework.decorators import authentication_classes, permission_classes from api.serializers import ( UserModelSerializer, PartnerModelSerializer, HomeworkModelSerializer, UserHomeworkModelSerializer,", "queryset = Homework.objects.all() serializer_class = HomeworkModelSerializer class UserHomeworkApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes =", "from api.serializers import ( UserModelSerializer, PartnerModelSerializer, HomeworkModelSerializer, UserHomeworkModelSerializer, HomeworkVoucherTypeModelSerializer, HomeworkVoucherModelSerializer, UserHomeworkVoucherModelSerializer, ForumTypeModelSerializer, ForumModelSerializer,", "queryset = ContentType.objects.all() serializer_class = ContentTypeModelSerializer def get(self, request, *args, **kwargs): return self.list(request,", "return self.list(request, *args, **kwargs) class UserHomeworkVoucherApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset", "get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs) class HomeworkVoucherListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes =", "request, *args, **kwargs): return self.list(request, *args, **kwargs) class UserContentApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes", "queryset = Forum.objects.all() serializer_class = ForumModelSerializer class ForumCommentApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes =", 
"PartnerModelSerializer class HomeworkApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = Homework.objects.all() serializer_class", "[TokenAuthentication] permission_classes = [IsAuthenticated] queryset = HomeworkVoucherType.objects.all() serializer_class = HomeworkVoucherTypeModelSerializer def get(self, request,", "*args, **kwargs) class UserHomeworkVoucherApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = UserHomeworkVoucher.objects.all()", "queryset = ForumType.objects.all() serializer_class = ForumTypeModelSerializer def get(self, request, *args, **kwargs): return self.list(request,", "**kwargs) class UserHomeworkVoucherApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = UserHomeworkVoucher.objects.all() serializer_class", "ForumTypeModelSerializer, ForumModelSerializer, ForumCommentModelSerializer, UserForumFavoriteModelSerializer, ContentTypeModelSerializer, ContentModelSerializer, UserContentModelSerializer, PaymentModelSerializer ) from api.models import (", "= [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = UserHomework.objects.all() serializer_class = UserHomeworkModelSerializer class HomeworkVoucherTypeListOnlyAPIView(mixins.ListModelMixin,", "authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = Content.objects.all() serializer_class = ContentModelSerializer def", "rest_framework.decorators import authentication_classes, permission_classes from api.serializers import ( UserModelSerializer, PartnerModelSerializer, HomeworkModelSerializer, UserHomeworkModelSerializer, HomeworkVoucherTypeModelSerializer,", "**kwargs) class ForumApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] 
queryset = Forum.objects.all() serializer_class", "permission_classes = [IsAuthenticated] queryset = HomeworkVoucherType.objects.all() serializer_class = HomeworkVoucherTypeModelSerializer def get(self, request, *args,", "HomeworkVoucherModelSerializer, UserHomeworkVoucherModelSerializer, ForumTypeModelSerializer, ForumModelSerializer, ForumCommentModelSerializer, UserForumFavoriteModelSerializer, ContentTypeModelSerializer, ContentModelSerializer, UserContentModelSerializer, PaymentModelSerializer ) from api.models", "authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = HomeworkVoucher.objects.all() serializer_class = HomeworkVoucherModelSerializer def", "class HomeworkVoucherListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = HomeworkVoucher.objects.all() serializer_class", "request, *args, **kwargs): return self.list(request, *args, **kwargs) class ContentListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication]", "class UserApiViewSet(viewsets.ModelViewSet): queryset = User.objects.all() serializer_class = UserModelSerializer class PartnerApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication]", "= [IsAuthenticated] queryset = HomeworkVoucherType.objects.all() serializer_class = HomeworkVoucherTypeModelSerializer def get(self, request, *args, **kwargs):", "import mixins from rest_framework import generics from rest_framework.views import APIView from rest_framework.authentication import", "class UserContentApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = UserContent.objects.all() serializer_class =", "HomeworkVoucherListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes = 
[IsAuthenticated] queryset = HomeworkVoucher.objects.all() serializer_class =", "= [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = Forum.objects.all() serializer_class = ForumModelSerializer class ForumCommentApiViewSet(viewsets.ModelViewSet):", "*args, **kwargs) class UserContentApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = UserContent.objects.all()", "= [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = Partner.objects.all() serializer_class = PartnerModelSerializer class HomeworkApiViewSet(viewsets.ModelViewSet):", "UserForumFavorite, ContentType, Content, UserContent, Payment ) @authentication_classes([]) @permission_classes([]) class UserApiViewSet(viewsets.ModelViewSet): queryset = User.objects.all()", "permission_classes = [IsAuthenticated] queryset = Forum.objects.all() serializer_class = ForumModelSerializer class ForumCommentApiViewSet(viewsets.ModelViewSet): authentication_classes =", "[IsAuthenticated] queryset = ForumComment.objects.all() serializer_class = ForumCommentModelSerializer class UserForumFavoriteApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes", "= [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = HomeworkVoucher.objects.all() serializer_class = HomeworkVoucherModelSerializer def get(self,", "queryset = UserContent.objects.all() serializer_class = UserContentModelSerializer class PaymentApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes =", "= UserHomework.objects.all() serializer_class = UserHomeworkModelSerializer class HomeworkVoucherTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes =", "= ForumComment.objects.all() serializer_class = ForumCommentModelSerializer class 
UserForumFavoriteApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated]", "**kwargs): return self.list(request, *args, **kwargs) class ForumApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated]", "User, Homework, UserHomework, HomeworkVoucherType, HomeworkVoucher, UserHomeworkVoucher, ForumType, Forum, ForumComment, UserForumFavorite, ContentType, Content, UserContent,", "[TokenAuthentication] permission_classes = [IsAuthenticated] queryset = Forum.objects.all() serializer_class = ForumModelSerializer class ForumCommentApiViewSet(viewsets.ModelViewSet): authentication_classes", "ForumTypeModelSerializer def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs) class ForumApiViewSet(viewsets.ModelViewSet): authentication_classes", "[IsAuthenticated] queryset = ContentType.objects.all() serializer_class = ContentTypeModelSerializer def get(self, request, *args, **kwargs): return", "ForumComment, UserForumFavorite, ContentType, Content, UserContent, Payment ) @authentication_classes([]) @permission_classes([]) class UserApiViewSet(viewsets.ModelViewSet): queryset =", "[IsAuthenticated] queryset = Homework.objects.all() serializer_class = HomeworkModelSerializer class UserHomeworkApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes", "( UserModelSerializer, PartnerModelSerializer, HomeworkModelSerializer, UserHomeworkModelSerializer, HomeworkVoucherTypeModelSerializer, HomeworkVoucherModelSerializer, UserHomeworkVoucherModelSerializer, ForumTypeModelSerializer, ForumModelSerializer, ForumCommentModelSerializer, UserForumFavoriteModelSerializer, ContentTypeModelSerializer,", "UserModelSerializer, PartnerModelSerializer, HomeworkModelSerializer, UserHomeworkModelSerializer, HomeworkVoucherTypeModelSerializer, HomeworkVoucherModelSerializer, 
UserHomeworkVoucherModelSerializer, ForumTypeModelSerializer, ForumModelSerializer, ForumCommentModelSerializer, UserForumFavoriteModelSerializer, ContentTypeModelSerializer, ContentModelSerializer,", "self.list(request, *args, **kwargs) class ForumApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset =", "[IsAuthenticated] queryset = Content.objects.all() serializer_class = ContentModelSerializer def get(self, request, *args, **kwargs): return", "= [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = HomeworkVoucherType.objects.all() serializer_class = HomeworkVoucherTypeModelSerializer def get(self,", "permission_classes = [IsAuthenticated] queryset = Content.objects.all() serializer_class = ContentModelSerializer def get(self, request, *args,", "UserContentApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = UserContent.objects.all() serializer_class = UserContentModelSerializer", "import ( Partner, User, Homework, UserHomework, HomeworkVoucherType, HomeworkVoucher, UserHomeworkVoucher, ForumType, Forum, ForumComment, UserForumFavorite,", "get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs) class UserContentApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication]", "**kwargs): return self.list(request, *args, **kwargs) class UserHomeworkVoucherApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated]", "User.objects.all() serializer_class = UserModelSerializer class PartnerApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset", "UserForumFavoriteApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = 
UserForumFavorite.objects.all() serializer_class = UserForumFavoriteModelSerializer", "class ForumCommentApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = ForumComment.objects.all() serializer_class =", "= UserForumFavoriteModelSerializer class ContentTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset =", "self.list(request, *args, **kwargs) class ContentListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset", "[TokenAuthentication] permission_classes = [IsAuthenticated] queryset = ForumComment.objects.all() serializer_class = ForumCommentModelSerializer class UserForumFavoriteApiViewSet(viewsets.ModelViewSet): authentication_classes", "import generics from rest_framework.views import APIView from rest_framework.authentication import TokenAuthentication from rest_framework.permissions import", "rest_framework import generics from rest_framework.views import APIView from rest_framework.authentication import TokenAuthentication from rest_framework.permissions", "[TokenAuthentication] permission_classes = [IsAuthenticated] queryset = Homework.objects.all() serializer_class = HomeworkModelSerializer class UserHomeworkApiViewSet(viewsets.ModelViewSet): authentication_classes", "def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs) class UserContentApiViewSet(viewsets.ModelViewSet): authentication_classes =", "**kwargs): return self.list(request, *args, **kwargs) class ContentListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes =", "[TokenAuthentication] permission_classes = [IsAuthenticated] queryset = ContentType.objects.all() serializer_class = 
ContentTypeModelSerializer def get(self, request,", "Partner.objects.all() serializer_class = PartnerModelSerializer class HomeworkApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset", "generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = HomeworkVoucher.objects.all() serializer_class = HomeworkVoucherModelSerializer", "= [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = Content.objects.all() serializer_class = ContentModelSerializer def get(self,", "from rest_framework.views import APIView from rest_framework.authentication import TokenAuthentication from rest_framework.permissions import IsAuthenticated from", "return self.list(request, *args, **kwargs) class HomeworkVoucherListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated]", "return self.list(request, *args, **kwargs) class UserContentApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset", "ContentTypeModelSerializer, ContentModelSerializer, UserContentModelSerializer, PaymentModelSerializer ) from api.models import ( Partner, User, Homework, UserHomework,", "ContentModelSerializer def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs) class UserContentApiViewSet(viewsets.ModelViewSet): authentication_classes", "UserApiViewSet(viewsets.ModelViewSet): queryset = User.objects.all() serializer_class = UserModelSerializer class PartnerApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes", "permission_classes = [IsAuthenticated] queryset = UserHomework.objects.all() serializer_class = UserHomeworkModelSerializer class HomeworkVoucherTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): 
authentication_classes", "generics from rest_framework.views import APIView from rest_framework.authentication import TokenAuthentication from rest_framework.permissions import IsAuthenticated", "import APIView from rest_framework.authentication import TokenAuthentication from rest_framework.permissions import IsAuthenticated from rest_framework.decorators import", "= [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = Homework.objects.all() serializer_class = HomeworkModelSerializer class UserHomeworkApiViewSet(viewsets.ModelViewSet):", "ForumType.objects.all() serializer_class = ForumTypeModelSerializer def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs)", "= ContentType.objects.all() serializer_class = ContentTypeModelSerializer def get(self, request, *args, **kwargs): return self.list(request, *args,", "= UserHomeworkVoucherModelSerializer class ForumTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset =", "= UserForumFavorite.objects.all() serializer_class = UserForumFavoriteModelSerializer class ContentTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes =", "queryset = HomeworkVoucherType.objects.all() serializer_class = HomeworkVoucherTypeModelSerializer def get(self, request, *args, **kwargs): return self.list(request,", "import ( UserModelSerializer, PartnerModelSerializer, HomeworkModelSerializer, UserHomeworkModelSerializer, HomeworkVoucherTypeModelSerializer, HomeworkVoucherModelSerializer, UserHomeworkVoucherModelSerializer, ForumTypeModelSerializer, ForumModelSerializer, ForumCommentModelSerializer, UserForumFavoriteModelSerializer,", "[IsAuthenticated] queryset = HomeworkVoucherType.objects.all() serializer_class = HomeworkVoucherTypeModelSerializer def get(self, request, *args, **kwargs): return", "class 
ContentTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = ContentType.objects.all() serializer_class", "= ContentModelSerializer def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs) class UserContentApiViewSet(viewsets.ModelViewSet):", "permission_classes = [IsAuthenticated] queryset = Partner.objects.all() serializer_class = PartnerModelSerializer class HomeworkApiViewSet(viewsets.ModelViewSet): authentication_classes =", "ForumComment.objects.all() serializer_class = ForumCommentModelSerializer class UserForumFavoriteApiViewSet(viewsets.ModelViewSet): authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset", "APIView from rest_framework.authentication import TokenAuthentication from rest_framework.permissions import IsAuthenticated from rest_framework.decorators import authentication_classes,", "= UserHomeworkVoucher.objects.all() serializer_class = UserHomeworkVoucherModelSerializer class ForumTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView): authentication_classes = [TokenAuthentication] permission_classes =", "= HomeworkVoucherType.objects.all() serializer_class = HomeworkVoucherTypeModelSerializer def get(self, request, *args, **kwargs): return self.list(request, *args,", "import authentication_classes, permission_classes from api.serializers import ( UserModelSerializer, PartnerModelSerializer, HomeworkModelSerializer, UserHomeworkModelSerializer, HomeworkVoucherTypeModelSerializer, HomeworkVoucherModelSerializer,", "authentication_classes = [TokenAuthentication] permission_classes = [IsAuthenticated] queryset = Forum.objects.all() serializer_class = ForumModelSerializer class", "Content.objects.all() serializer_class = ContentModelSerializer def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs)", "rest_framework import viewsets 
<filename>agilife/api/views.py
from rest_framework import viewsets
from rest_framework import mixins
from rest_framework import generics
from rest_framework.views import APIView
from rest_framework.authentication import TokenAuthentication
from rest_framework.permissions import IsAuthenticated
from rest_framework.decorators import authentication_classes, permission_classes
from api.serializers import (
    UserModelSerializer,
    PartnerModelSerializer,
    HomeworkModelSerializer,
    UserHomeworkModelSerializer,
    HomeworkVoucherTypeModelSerializer,
    HomeworkVoucherModelSerializer,
    UserHomeworkVoucherModelSerializer,
    ForumTypeModelSerializer,
    ForumModelSerializer,
    ForumCommentModelSerializer,
    UserForumFavoriteModelSerializer,
    ContentTypeModelSerializer,
    ContentModelSerializer,
    UserContentModelSerializer,
    PaymentModelSerializer
)
from api.models import (
    Partner, User, Homework, UserHomework, HomeworkVoucherType,
    HomeworkVoucher, UserHomeworkVoucher, ForumType, Forum, ForumComment,
    UserForumFavorite, ContentType, Content, UserContent, Payment
)


@authentication_classes([])
@permission_classes([])
class UserApiViewSet(viewsets.ModelViewSet):
    queryset = User.objects.all()
    serializer_class = UserModelSerializer


class PartnerApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = Partner.objects.all()
    serializer_class = PartnerModelSerializer


class HomeworkApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = Homework.objects.all()
    serializer_class = HomeworkModelSerializer


class UserHomeworkApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = UserHomework.objects.all()
    serializer_class = UserHomeworkModelSerializer


class HomeworkVoucherTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = HomeworkVoucherType.objects.all()
    serializer_class = HomeworkVoucherTypeModelSerializer

    def get(self, request, *args, **kwargs):
        return self.list(request, *args, **kwargs)


class HomeworkVoucherListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = HomeworkVoucher.objects.all()
    serializer_class = HomeworkVoucherModelSerializer

    def get(self, request, *args, **kwargs):
        return self.list(request, *args, **kwargs)


class UserHomeworkVoucherApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = UserHomeworkVoucher.objects.all()
    serializer_class = UserHomeworkVoucherModelSerializer


class ForumTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = ForumType.objects.all()
    serializer_class = ForumTypeModelSerializer

    def get(self, request, *args, **kwargs):
        return self.list(request, *args, **kwargs)


class ForumApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = Forum.objects.all()
    serializer_class = ForumModelSerializer


class ForumCommentApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = ForumComment.objects.all()
    serializer_class = ForumCommentModelSerializer


class UserForumFavoriteApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = UserForumFavorite.objects.all()
    serializer_class = UserForumFavoriteModelSerializer


class ContentTypeListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = ContentType.objects.all()
    serializer_class = ContentTypeModelSerializer

    def get(self, request, *args, **kwargs):
        return self.list(request, *args, **kwargs)


class ContentListOnlyAPIView(mixins.ListModelMixin, generics.GenericAPIView):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = Content.objects.all()
    serializer_class = ContentModelSerializer

    def get(self, request, *args, **kwargs):
        return self.list(request, *args, **kwargs)


class UserContentApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = UserContent.objects.all()
    serializer_class = UserContentModelSerializer


class PaymentApiViewSet(viewsets.ModelViewSet):
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    queryset = Payment.objects.all()
    serializer_class = PaymentModelSerializer
<filename>Python/Nearly Lucky Number.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Oct 19 21:11:16 2020

@author: zuoxichen
"""
# Nearly Lucky Number: the lucky digits are 4 and 7. A number is
# "nearly lucky" if the count of lucky digits in its decimal
# representation is itself a lucky number, i.e. consists only of
# the digits 4 and 7.


def count_lucky_digits(digits):
    """Count how many characters of the input are the lucky digits 4 or 7."""
    return sum(1 for ch in digits if ch in '47')


def is_lucky(number):
    """A positive integer is lucky if it is made up only of 4s and 7s."""
    return number > 0 and all(ch in '47' for ch in str(number))


k = count_lucky_digits(input().strip())
print('YES' if is_lucky(k) else 'NO')
<gh_stars>0
# Generated by Django 2.1.3 on 2018-11-19 20:13

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('etes', '0023_auto_20181119_1203'),
    ]

    operations = [
        migrations.AddField(
            model_name='ticket',
            name='ticket_address',
            field=models.CharField(default='San José State University, Washington Sq, San Jose, CA, United States', max_length=200),
            preserve_default=False,
        ),
        migrations.AlterField(
            model_name='ticket',
            name='ticket_price',
            field=models.DecimalField(decimal_places=2, max_digits=10),
        ),
    ]
<gh_stars>0
from tkinter import *


def rechnen():
    # Apply the operation selected in the listbox to the two entry values.
    if operator.curselection() == (0,):
        ausgabe["text"] = float(zahl1.get()) + float(zahl2.get())
    elif operator.curselection() == (1,):
        ausgabe["text"] = float(zahl1.get()) - float(zahl2.get())
    elif operator.curselection() == (2,):
        ausgabe["text"] = float(zahl1.get()) * float(zahl2.get())
    elif operator.curselection() == (3,):
        ausgabe["text"] = float(zahl1.get()) / float(zahl2.get())


window = Tk()
window.title("Taschenrechner")

zahl1 = Entry(window)
# exportselection=0 keeps the listbox selection when focus moves to an
# Entry; with the default, curselection() would be empty by the time
# the button is clicked.
operator = Listbox(window, exportselection=0)
operator.insert(0, "+")
operator.insert(1, "-")
operator.insert(2, "*")
operator.insert(3, "/")
zahl2 = Entry(window)
button = Button(window, command=rechnen, text="Los", bg='#FBD975')
ausgabe = Label(window)

zahl1.grid(row=0, column=0)
operator.grid(row=0, column=1)
zahl2.grid(row=0, column=2)
button.grid(row=1, column=2, sticky=E)
ausgabe.grid(row=2)

window.mainloop()
try:
    from pyspark import SparkContext, SparkConf
    from pyspark.sql import SparkSession
    import pyspark.sql.functions as f
    # from operator import add
except Exception as e:
    print(e)

## http://www.hongyusu.com/imt/technology/spark-via-python-basic-setup-count-lines-and-word-counts.html


def push_mongo():
    spark = SparkSession \
        .builder \
        .appName("Push to MongoDB") \
        .master("spark://master:7077") \
        .config("spark.mongodb.input.uri", "mongodb://root:password@mongo/test.coll?authSource=admin") \
        .config("spark.mongodb.output.uri", "mongodb://root:password@mongo/test.coll?authSource=admin") \
        .config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector_2.11:2.4.0') \
        .getOrCreate()
    sc = spark.sparkContext
    sc.setLogLevel('WARN')

    # Reading Data from volume
    acc_mongo = spark.read.csv("/volume/data")

    # Show Mongo data
    acc_mongo.show()

    # Store data in MongoDB
    acc_mongo.write.format("com.mongodb.spark.sql.DefaultSource").mode("append").save()

    # End the Spark Context
    spark.stop()


if __name__ == "__main__":
    push_mongo()
'org.mongodb.spark:mongo-spark-connector_2.11:2.4.0')\\ .getOrCreate() sc = spark.sparkContext sc.setLogLevel('WARN') # Reading Data", "from pyspark import SparkContext, SparkConf from pyspark.sql import SparkSession import pyspark.sql.functions as f", "\\ .config(\"spark.mongodb.output.uri\", \"mongodb://root:password@mongo/test.coll?authSource=admin\") \\ .config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector_2.11:2.4.0')\\ .getOrCreate() sc = spark.sparkContext sc.setLogLevel('WARN') # Reading", "sc = spark.sparkContext sc.setLogLevel('WARN') # Reading Data from volume acc_mongo=spark.read.csv(\"/volume/data\") #Show Mongo data", "# Reading Data from volume acc_mongo=spark.read.csv(\"/volume/data\") #Show Mongo data acc_mongo.show() # Store data", ".config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector_2.11:2.4.0')\\ .getOrCreate() sc = spark.sparkContext sc.setLogLevel('WARN') # Reading Data from volume acc_mongo=spark.read.csv(\"/volume/data\")", "in MongoDB acc_mongo.write.format(\"com.mongodb.spark.sql.DefaultSource\").mode(\"append\").save() # End the Spark Context spark.stop() if __name__ == \"__main__\":", "from pyspark.sql import SparkSession import pyspark.sql.functions as f # from operator import add", "## http://www.hongyusu.com/imt/technology/spark-via-python-basic-setup-count-lines-and-word-counts.html def push_mongo(): spark = SparkSession \\ .builder \\ .appName(\"Push to MongoDB\")", "e: print(e) ## http://www.hongyusu.com/imt/technology/spark-via-python-basic-setup-count-lines-and-word-counts.html def push_mongo(): spark = SparkSession \\ .builder \\ .appName(\"Push", "to MongoDB\") \\ .master(\"spark://master:7077\") \\ .config(\"spark.mongodb.input.uri\", \"mongodb://root:password@mongo/test.coll?authSource=admin\") \\ .config(\"spark.mongodb.output.uri\", \"mongodb://root:password@mongo/test.coll?authSource=admin\") \\ .config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector_2.11:2.4.0')\\", 
"MongoDB acc_mongo.write.format(\"com.mongodb.spark.sql.DefaultSource\").mode(\"append\").save() # End the Spark Context spark.stop() if __name__ == \"__main__\": push_mongo()", "import pyspark.sql.functions as f # from operator import add except Exception as e:", "as f # from operator import add except Exception as e: print(e) ##", "import SparkSession import pyspark.sql.functions as f # from operator import add except Exception", "push_mongo(): spark = SparkSession \\ .builder \\ .appName(\"Push to MongoDB\") \\ .master(\"spark://master:7077\") \\", "add except Exception as e: print(e) ## http://www.hongyusu.com/imt/technology/spark-via-python-basic-setup-count-lines-and-word-counts.html def push_mongo(): spark = SparkSession" ]
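The same MongoDB connection string appears twice in the script; it can be assembled from its parts with percent-encoded credentials so that characters like '@' or ':' in a password do not break the URI. A minimal sketch (the helper name `mongo_uri` is ours; the host, database, and `authSource=admin` values are the ones the script uses):

```python
from urllib.parse import quote


def mongo_uri(user, password, host, db, coll, auth_source="admin"):
    # Percent-encode the credentials so reserved URI characters stay safe
    return "mongodb://%s:%s@%s/%s.%s?authSource=%s" % (
        quote(user, safe=""), quote(password, safe=""), host, db, coll, auth_source
    )


uri = mongo_uri("root", "password", "mongo", "test", "coll")
# -> mongodb://root:password@mongo/test.coll?authSource=admin
```

Building the URI once and passing it to both `spark.mongodb.input.uri` and `spark.mongodb.output.uri` avoids the duplicated literal.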
'''
Created on Jul 20, 2013

@author: <NAME>
'''
import sys

INF = 1 << 31


def LDS(array):
    # Longest non-increasing subsequence in O(N^2):
    # longest[i] is the best chain length ending at array[i].
    N = len(array)
    longest = [0] * N
    longest[0] = 1
    for i in range(1, N):
        currMax = 1
        for j in range(i):
            if array[i] <= array[j] and longest[j] + 1 > currMax:
                currMax = longest[j] + 1
        longest[i] = currMax
    return max(longest)


if __name__ == '__main__':
    sys.stdin = open('input.txt', 'r')
    array = []
    while True:
        n = int(input())
        if n == -1:
            # End of one test case: report it, then peek at the next value
            # to decide whether another case follows.
            print('maximum possible interceptions: %d' % LDS(array))
            array = []
            n1 = int(input())
            if n1 == -1:
                break
            else:
                array.append(n1)
        else:
            array.append(n)
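The quadratic DP above can be checked against a small hand-worked case. A self-contained sketch (the function is a compact restatement of `LDS`; the sample heights are illustrative, not taken from any original input file):

```python
def lds(array):
    # longest[i] = length of the longest non-increasing subsequence ending at i
    longest = [1] * len(array)
    for i in range(1, len(array)):
        for j in range(i):
            if array[i] <= array[j] and longest[j] + 1 > longest[i]:
                longest[i] = longest[j] + 1
    return max(longest)


# 389 300 299 170 158 65 is the longest non-increasing run, so the answer is 6
print(lds([389, 207, 155, 300, 299, 170, 158, 65]))  # -> 6
```

Initializing every entry of `longest` to 1 also removes the need for the separate `longest[0] = 1` step, since a single element is always a valid chain.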
from __future__ import print_function
from yacs.config import CfgNode as CN

_C = CN()

# ----- BASIC SETTINGS -----
_C.NAME = "default"
_C.OUTPUT_DIR = "./output/derm_7pt"
_C.VALID_STEP = 5
_C.SAVE_STEP = 5
_C.SHOW_STEP = 20
_C.PIN_MEMORY = True
_C.INPUT_SIZE = (224, 224)  # (h, w)
_C.COLOR_SPACE = "RGB"
_C.RESUME_MODEL = ""
_C.RESUME_MODE = "all"
_C.CPU_MODE = False
_C.EVAL_MODE = False
_C.GPUS = [0, 1]

# ----- DATASET BUILDER -----
_C.DATASET = CN()
_C.DATASET.DATASET = ""
_C.DATASET.ROOT = ""
_C.DATASET.DATA_TYPE = "jpg"
_C.DATASET.TRAIN_JSON = ""
_C.DATASET.VALID_JSON = ""
_C.DATASET.TEST_JSON = ""
_C.DATASET.CLASS_NAME = ['BCC', 'NV', 'MEL', 'MISC', 'SK']
# for ISIC_2019 valid and test, the number of classes is increased from 8 to 9.
_C.DATASET.VALID_ADD_ONE_CLASS = False
_C.DATASET.ADD_CLASS_NAME = 'UNK'
_C.DATASET.IMBALANCECIFAR = CN()
_C.DATASET.IMBALANCECIFAR.RATIO = 0.01
_C.DATASET.IMBALANCECIFAR.RANDOM_SEED = 0

# ----- BACKBONE BUILDER -----
_C.BACKBONE = CN()
_C.BACKBONE.TYPE = "RegNetY_800MF"  # refer to lib/backbone/all_models.py
_C.BACKBONE.BBN = False
_C.BACKBONE.FREEZE = False
_C.BACKBONE.PRE_FREEZE = False
_C.BACKBONE.PRE_FREEZE_EPOCH = 5
_C.BACKBONE.PRETRAINED = True
_C.BACKBONE.PRETRAINED_MODEL = ""
# if using drop block, below are the drop block parameters
_C.BACKBONE.DROP = CN()
_C.BACKBONE.DROP.BLOCK_PROB = 0.1
_C.BACKBONE.DROP.BLOCK_SIZE = 5
_C.BACKBONE.DROP.NR_STEPS = 50000
# dropout applied to the last FC layer
_C.BACKBONE.DROP.OUT_PROB = 0.1

# ----- MODULE BUILDER -----
_C.MODULE = CN()
_C.MODULE.TYPE = "GAP"  # "GAP", "Identity"

# ----- CLASSIFIER BUILDER -----
_C.CLASSIFIER = CN()
_C.CLASSIFIER.TYPE = "FC"  # "FC", "FCNorm"
_C.CLASSIFIER.BIAS = True

# ----- LOSS BUILDER -----
_C.LOSS = CN()
_C.LOSS.WEIGHT_POWER = 1.1
_C.LOSS.EXTRA_WEIGHT = [1.0, 1.0, 1.0, 1.0, 1.0]
_C.LOSS.LOSS_TYPE = "CrossEntropy"  # "CrossEntropy", "LDAMLoss", "FocalLoss", "LOWLoss", "GHMCLoss", "CCELoss", "MWNLoss"
_C.LOSS.SCHEDULER = "default"
# "default"--the weights of all classes are "1.0",
# "re_weight"--re-weighting by the power of inverse class frequency at all train stages,
# "drw"--two-stage strategy using re-weighting at the second stage,
# "cls"--cumulative learning strategy to set the loss weight.
# For drw scheduler
_C.LOSS.DRW_EPOCH = 50
# For cls scheduler
_C.LOSS.CLS_EPOCH_MIN = 20
_C.LOSS.CLS_EPOCH_MAX = 60
# For LDAMLoss
_C.LOSS.LDAM = CN()
_C.LOSS.LDAM.MAX_MARGIN = 0.5
# For FocalLoss
_C.LOSS.FOCAL = CN()
_C.LOSS.FOCAL.GAMMA = 2.0
_C.LOSS.FOCAL.TYPE = "sigmoid"  # "cross_entropy", "sigmoid", "ldam"
_C.LOSS.FOCAL.SIGMOID = "normal"  # "normal", "enlarge"
# For LOWLoss
_C.LOSS.LOW = CN()
_C.LOSS.LOW.LAMB = 0.01
# For GHMCLoss
_C.LOSS.GHMC = CN()
_C.LOSS.GHMC.BINS = 10
_C.LOSS.GHMC.MOMENTUM = 0.0
# For MWNLoss
_C.LOSS.MWNL = CN()
_C.LOSS.MWNL.GAMMA = 2.0
_C.LOSS.MWNL.BETA = 0.1
_C.LOSS.MWNL.TYPE = "fix"  # "zero", "fix", "decrease"
_C.LOSS.MWNL.SIGMOID = "normal"  # "normal", "enlarge"

# ----- TRAIN BUILDER -----
_C.TRAIN = CN()
_C.TRAIN.BATCH_SIZE = 32
_C.TRAIN.NUM_WORKERS = 8
_C.TRAIN.TENSORBOARD = CN()
_C.TRAIN.TENSORBOARD.ENABLE = True

# ----- SAMPLER BUILDER -----
_C.TRAIN.SAMPLER = CN()
_C.TRAIN.SAMPLER.TYPE = "default"  # "default", "weighted sampler", "oversample"
_C.TRAIN.SAMPLER.IMAGE_TYPE = "derm"  # "derm", "clinic". For the derm_7pt dataset.
_C.TRAIN.SAMPLER.BORDER_CROP = "pixel"  # "pixel", "ratio"
# An integer specifying how many pixels to crop at the image border.
# Useful if images contain a black boundary.
_C.TRAIN.SAMPLER.BORDER_CROP_PIXEL = 0
# the ratio of the edge of the image to be cropped.
_C.TRAIN.SAMPLER.BORDER_CROP_RATIO = 0.0
_C.TRAIN.SAMPLER.IMAGE_RESIZE = True  # whether the input image needs to be resized to a fixed size
_C.TRAIN.SAMPLER.IMAGE_RESIZE_SHORT = 450  # the size of the short side of the input image
_C.TRAIN.SAMPLER.COLOR_CONSTANCY = False
_C.TRAIN.SAMPLER.CONSTANCY_POWER = 6.0
_C.TRAIN.SAMPLER.CONSTANCY_GAMMA = 0.0

# For Modified RandAugment
_C.TRAIN.SAMPLER.AUGMENT = CN()
_C.TRAIN.SAMPLER.AUGMENT.NEED_AUGMENT = False
# the method of Modified RandAugment ('v0_0' to 'v3_1') or RandAugment ('rand')
# (refer to: lib/data_transform/modified_randaugment.py)
_C.TRAIN.SAMPLER.AUGMENT.AUG_METHOD = "v1_0"
_C.TRAIN.SAMPLER.AUGMENT.AUG_PROB = 0.7  # the probability parameter 'P' of Modified RandAugment (0.1 -- 0.9)
_C.TRAIN.SAMPLER.AUGMENT.AUG_MAG = 10  # the magnitude parameter 'M' of Modified RandAugment (1 -- 20)
_C.TRAIN.SAMPLER.AUGMENT.AUG_LAYER_NUM = 1  # the number of transformations applied to a training image if AUG_METHOD = 'rand'

# for BBN sampler
_C.TRAIN.SAMPLER.DUAL_SAMPLER = CN()
_C.TRAIN.SAMPLER.DUAL_SAMPLER.TYPE = "reversed"  # "balance", "reverse", "uniform"

# for other sampler
_C.TRAIN.SAMPLER.WEIGHTED_SAMPLER = CN()
_C.TRAIN.SAMPLER.WEIGHTED_SAMPLER.TYPE = "balance"  # "balance", "reverse"

# for multi crop
_C.TRAIN.SAMPLER.MULTI_CROP = CN()
_C.TRAIN.SAMPLER.MULTI_CROP.ENABLE = False  # Should the crops be ordered or random for evaluation
_C.TRAIN.SAMPLER.MULTI_CROP.CROP_NUM = 16  # Number of crops to use during evaluation (must be N^2)
_C.TRAIN.SAMPLER.MULTI_CROP.L_REGION = 1.0  # Only crop within a certain range of the central area (along the long side of the image)
_C.TRAIN.SAMPLER.MULTI_CROP.S_REGION = 1.0  # Only crop within a certain range of the central area (along the short side of the image)
_C.TRAIN.SAMPLER.MULTI_CROP.SCHEME = 'average'  # Averaging or voting over the crop predictions ("vote", "average")

# for multi transformation of the center crop
_C.TRAIN.SAMPLER.MULTI_SCALE = CN()
_C.TRAIN.SAMPLER.MULTI_SCALE.ENABLE = False  # whether to perform multi transformation on the central crop
_C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NUM = 12  # Number of scales to use during evaluation (must be less than or equal to the length of SCALE_NAME)
_C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NAME = ["scale_+00", "flip_x_+00", "rotate_90_+00", "rotate_270_+00",
                                           "scale_+10", "flip_x_+10", "rotate_90_+10", "rotate_270_+10",
                                           "scale_+20", "flip_x_+20", "rotate_90_+20", "rotate_270_+20",
                                           "scale_+30", "flip_x_+30", "rotate_90_+30", "rotate_270_+30",
                                           "scale_-10", "flip_x_-10", "rotate_90_-10", "rotate_270_-10",
                                           "flip_y_+00", "flip_y_+10", "flip_y_-10", "flip_y_+20"]

# Normalize using the mean and variance of each image, or using fixed values
_C.TRAIN.SAMPLER.FIX_MEAN_VAR = CN()
_C.TRAIN.SAMPLER.FIX_MEAN_VAR.ENABLE = True
# A fixed set mean (the input image will have the mean subtracted, then the variance applied)
_C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_MEAN = [0.485, 0.456, 0.406]
# A fixed set variance
_C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_VAR = [0.229, 0.224, 0.225]

# ----- OPTIMIZER -----
_C.TRAIN.OPTIMIZER = CN()
_C.TRAIN.OPTIMIZER.TYPE = "SGD"  # 'SGD', 'ADAM', 'NADAM', 'RMSPROP'
_C.TRAIN.OPTIMIZER.BASE_LR = 0.001
_C.TRAIN.OPTIMIZER.MOMENTUM = 0.9
_C.TRAIN.OPTIMIZER.WEIGHT_DECAY = 1e-4

# ----- LR_SCHEDULER -----
_C.TRAIN.LR_SCHEDULER = CN()
_C.TRAIN.LR_SCHEDULER.TYPE = "multistep"  # "steplr", "multistep", "cosine", "warmup"
_C.TRAIN.LR_SCHEDULER.LR_LOWER_STEP = 20  # for 'steplr'
_C.TRAIN.LR_SCHEDULER.LR_STEP = [40, 50]  # for 'multistep'
_C.TRAIN.LR_SCHEDULER.LR_FACTOR = 0.1
_C.TRAIN.LR_SCHEDULER.WARM_EPOCH = 5  # for 'warmup'
_C.TRAIN.LR_SCHEDULER.COSINE_DECAY_END = 0

# For valid or test
_C.TEST = CN()
_C.TEST.BATCH_SIZE = 128  # for every gpu
_C.TEST.NUM_WORKERS = 8
_C.TEST.MODEL_FILE = "best_model.pth"


def update_config(cfg, args):
    cfg.defrost()
    cfg.merge_from_file(args.cfg)
    cfg.merge_from_list(args.opts)
    cfg.freeze()


def update_cfg_name(cfg):
    '''
    modify the cfg.NAME
    :param cfg:
    :return:
    '''
    cfg.defrost()
    cfg_name = cfg.DATASET.DATASET + "." + cfg.BACKBONE.TYPE
For GHMCLoss _C.LOSS.GHMC", "= CN() _C.TRAIN.SAMPLER.TYPE = \"default\" # \"default\", \"weighted sampler\", \"oversample\" _C.TRAIN.SAMPLER.IMAGE_TYPE = \"derm\"", "use during evaluation (must be less than or equal to the length of", "def update_cfg_name(cfg): ''' modify the cfg.NAME :param cfg: :return: ''' cfg.defrost() cfg_name =", "\"flip_x_+20\", \"rotate_90_+20\", \"rotate_270_+20\", \"scale_+30\", \"flip_x_+30\", \"rotate_90_+30\", \"rotate_270_+30\", \"scale_-10\", \"flip_x_-10\", \"rotate_90_-10\", \"rotate_270_-10\", \"flip_y_+00\", \"flip_y_+10\",", "A fixed set variance _C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_VAR = [0.229, 0.224, 0.225] # ----- OPTIMIZER -----", "8 _C.TRAIN.TENSORBOARD = CN() _C.TRAIN.TENSORBOARD.ENABLE = True # ----- SAMPLER BUILDER ----- _C.TRAIN.SAMPLER", "_C.DATASET.IMBALANCECIFAR.RATIO = 0.01 _C.DATASET.IMBALANCECIFAR.RANDOM_SEED = 0 # ----- BACKBONE BUILDER ----- _C.BACKBONE =", "BBN sampler _C.TRAIN.SAMPLER.DUAL_SAMPLER = CN() _C.TRAIN.SAMPLER.DUAL_SAMPLER.TYPE = \"reversed\" # \"balance\", \"reverse\", \"uniform\" #", "['BCC', 'NV', 'MEL', 'MISC', 'SK'] _C.DATASET.VALID_ADD_ONE_CLASS = False # for ISIC_2019 valid and", "the power of inverse class frequency at all train stage, # \"drw\"--two-stage strategy", "20 _C.PIN_MEMORY = True _C.INPUT_SIZE = (224, 224) # (h, w) _C.COLOR_SPACE =", "= [1.0, 1.0, 1.0, 1.0, 1.0] _C.LOSS.LOSS_TYPE = \"CrossEntropy\" # \"CrossEntropy\", \"LDAMLoss\", \"FocalLoss\",", "1] # ----- DATASET BUILDER ----- _C.DATASET = CN() _C.DATASET.DATASET = \"\" _C.DATASET.ROOT", "_C.TEST = CN() _C.TEST.BATCH_SIZE = 128 # for every gpu _C.TEST.NUM_WORKERS = 8", "16 # Number of crops to use during evaluation (must be N^2) _C.TRAIN.SAMPLER.MULTI_CROP.L_REGION", "how many pixels to crop at the image border. 
Useful if images contain", "For LDAMLoss _C.LOSS.LDAM = CN() _C.LOSS.LDAM.MAX_MARGIN = 0.5 # For FocalLoss _C.LOSS.FOCAL =", "\"scale_+20\", \"flip_x_+20\", \"rotate_90_+20\", \"rotate_270_+20\", \"scale_+30\", \"flip_x_+30\", \"rotate_90_+30\", \"rotate_270_+30\", \"scale_-10\", \"flip_x_-10\", \"rotate_90_-10\", \"rotate_270_-10\", \"flip_y_+00\",", "CN() _C.LOSS.GHMC.BINS = 10 _C.LOSS.GHMC.MOMENTUM = 0.0 # For MWNLoss _C.LOSS.MWNL = CN()", "power of inverse class frequency at all train stage, # \"drw\"--two-stage strategy using", "by the power of inverse class frequency at all train stage, # \"drw\"--two-stage", "\"oversample\" _C.TRAIN.SAMPLER.IMAGE_TYPE = \"derm\" # \"derm\", \"clinic\". For derm_7pt dataset used. _C.TRAIN.SAMPLER.BORDER_CROP =", "class is increased from 8 to 9. _C.DATASET.ADD_CLASS_NAME = \"UNK\" _C.DATASET.IMBALANCECIFAR = CN()", "_C.LOSS.MWNL.TYPE = \"fix\" # \"zero\", \"fix\", \"decrease\" _C.LOSS.MWNL.SIGMOID = \"normal\" # \"normal\", \"enlarge\"", "valid and test, the number of class is increased from 8 to 9.", "\"flip_x_+00\", \"rotate_90_+00\", \"rotate_270_+00\", \"scale_+10\", \"flip_x_+10\", \"rotate_90_+10\", \"rotate_270_+10\", \"scale_+20\", \"flip_x_+20\", \"rotate_90_+20\", \"rotate_270_+20\", \"scale_+30\", \"flip_x_+30\",", "= [0.229, 0.224, 0.225] # ----- OPTIMIZER ----- _C.TRAIN.OPTIMIZER = CN() _C.TRAIN.OPTIMIZER.TYPE =", "= 12 # Number of scales to use during evaluation (must be less", "\"fix\" # \"zero\", \"fix\", \"decrease\" _C.LOSS.MWNL.SIGMOID = \"normal\" # \"normal\", \"enlarge\" # -----", "\"drw\"--two-stage strategy using re-weighting at the second stage, # \"cls\"--cumulative learning strategy to", "of the central area (along the long side of the image) _C.TRAIN.SAMPLER.MULTI_CROP.S_REGION =", "to: lib/data_transform/modified_randaugment.py) _C.TRAIN.SAMPLER.AUGMENT.AUG_PROB = 0.7 # the probability parameter 'P' of Modified RandAugment", "of transformations applied to a training image if AUG_METHOD = 'rand' # 
for", "False _C.GPUS = [0, 1] # ----- DATASET BUILDER ----- _C.DATASET = CN()", "of all classes are \"1.0\", # \"re_weight\"--re-weighting by the power of inverse class", "= 2.0 _C.LOSS.FOCAL.TYPE = \"sigmoid\" # \"cross_entropy\", \"sigmoid\", \"ldam\" _C.LOSS.FOCAL.SIGMOID = \"normal\" #", "method of Modified RandAugment ('v0_0' to 'v3_1') or RandAugment ('rand') (refer to: lib/data_transform/modified_randaugment.py)", "__future__ import absolute_import from __future__ import division from __future__ import print_function from yacs.config", "BUILDER ----- _C.MODULE = CN() _C.MODULE.TYPE = \"GAP\" # \"GAP\", \"Identity\" # -----", "Normalize using the mean and variance of each image, or using fixed values", "\"cosine\", \"warmup\" _C.TRAIN.LR_SCHEDULER.LR_LOWER_STEP = 20 # for 'steplr' _C.TRAIN.LR_SCHEDULER.LR_STEP = [40, 50] #", "= \"all\" _C.CPU_MODE = False _C.EVAL_MODE = False _C.GPUS = [0, 1] #", "For FocalLoss _C.LOSS.FOCAL = CN() _C.LOSS.FOCAL.GAMMA = 2.0 _C.LOSS.FOCAL.TYPE = \"sigmoid\" # \"cross_entropy\",", "# the number of transformations applied to a training image if AUG_METHOD =", "# for every gpu _C.TEST.NUM_WORKERS = 8 _C.TEST.MODEL_FILE = \"best_model.pth\" def update_config(cfg, args):", "Only crop within a certain range of the central area (along the long", "\"ratio\" _C.TRAIN.SAMPLER.BORDER_CROP_PIXEL = 0 # An integer specifying how many pixels to crop", "CN() _C.TRAIN.SAMPLER.FIX_MEAN_VAR.ENABLE = True # Normalize using the mean and variance of each", "\"enlarge\" # ----- TRAIN BUILDER ----- _C.TRAIN = CN() _C.TRAIN.BATCH_SIZE = 32 #", "the image) _C.TRAIN.SAMPLER.MULTI_CROP.S_REGION = 1.0 # Only crop within a certain range of", "_C.DATASET.ROOT = \"\" _C.DATASET.DATA_TYPE = \"jpg\" _C.DATASET.TRAIN_JSON = \"\" _C.DATASET.VALID_JSON = \"\" _C.DATASET.TEST_JSON", "_C.BACKBONE.PRE_FREEZE = False _C.BACKBONE.PRE_FREEZE_EPOCH = 5 _C.BACKBONE.PRETRAINED = True _C.BACKBONE.PRETRAINED_MODEL = \"\" #", "pixels to crop at the image border. 
Useful if images contain a black", "absolute_import from __future__ import division from __future__ import print_function from yacs.config import CfgNode", "= CN() _C.LOSS.LDAM.MAX_MARGIN = 0.5 # For FocalLoss _C.LOSS.FOCAL = CN() _C.LOSS.FOCAL.GAMMA =", "\"derm\", \"clinic\". For derm_7pt dataset used. _C.TRAIN.SAMPLER.BORDER_CROP = \"pixel\" # \"pixel\", \"ratio\" _C.TRAIN.SAMPLER.BORDER_CROP_PIXEL", "= cfg.DATASET.DATASET + \".\" + cfg.BACKBONE.TYPE + ( \"_BBN.\" if cfg.BACKBONE.BBN else \".\")", "= \"\" _C.DATASET.ROOT = \"\" _C.DATASET.DATA_TYPE = \"jpg\" _C.DATASET.TRAIN_JSON = \"\" _C.DATASET.VALID_JSON =", "fix size _C.TRAIN.SAMPLER.IMAGE_RESIZE_SHORT = 450 # the need size of the short side", "for multi crop _C.TRAIN.SAMPLER.MULTI_CROP = CN() _C.TRAIN.SAMPLER.MULTI_CROP.ENABLE = False # Should the crops", "_C.DATASET = CN() _C.DATASET.DATASET = \"\" _C.DATASET.ROOT = \"\" _C.DATASET.DATA_TYPE = \"jpg\" _C.DATASET.TRAIN_JSON", "image needs to be resized to a fix size _C.TRAIN.SAMPLER.IMAGE_RESIZE_SHORT = 450 #", "# the ratio of edge of the image to be cropped. _C.TRAIN.SAMPLER.IMAGE_RESIZE =", "evaluation (must be less than or equal to the length of SCALE_NAME) _C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NAME", "128 # for every gpu _C.TEST.NUM_WORKERS = 8 _C.TEST.MODEL_FILE = \"best_model.pth\" def update_config(cfg,", "= CN() _C.TRAIN.SAMPLER.DUAL_SAMPLER.TYPE = \"reversed\" # \"balance\", \"reverse\", \"uniform\" # for other sampler", "= \"normal\" # \"normal\", \"enlarge\" # For LOWLoss _C.LOSS.LOW = CN() _C.LOSS.LOW.LAMB =", "LOWLoss _C.LOSS.LOW = CN() _C.LOSS.LOW.LAMB = 0.01 # For GHMCLoss _C.LOSS.GHMC = CN()", "used. 
_C.TRAIN.SAMPLER.BORDER_CROP = \"pixel\" # \"pixel\", \"ratio\" _C.TRAIN.SAMPLER.BORDER_CROP_PIXEL = 0 # An integer", "strategy using re-weighting at the second stage, # \"cls\"--cumulative learning strategy to set", "= \"sigmoid\" # \"cross_entropy\", \"sigmoid\", \"ldam\" _C.LOSS.FOCAL.SIGMOID = \"normal\" # \"normal\", \"enlarge\" #", "evaluation _C.TRAIN.SAMPLER.MULTI_CROP.CROP_NUM = 16 # Number of crops to use during evaluation (must", "lib/backbone/all_models.py _C.BACKBONE.BBN = False _C.BACKBONE.FREEZE = False _C.BACKBONE.PRE_FREEZE = False _C.BACKBONE.PRE_FREEZE_EPOCH = 5", "\"scale_+10\", \"flip_x_+10\", \"rotate_90_+10\", \"rotate_270_+10\", \"scale_+20\", \"flip_x_+20\", \"rotate_90_+20\", \"rotate_270_+20\", \"scale_+30\", \"flip_x_+30\", \"rotate_90_+30\", \"rotate_270_+30\", \"scale_-10\",", "CN() _C.CLASSIFIER.TYPE = \"FC\" # \"FC\", \"FCNorm\" _C.CLASSIFIER.BIAS = True # ----- LOSS", "be less than or equal to the length of SCALE_NAME) _C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NAME = [\"scale_+00\",", "\"multistep\" # \"steplr\", \"multistep\", \"cosine\", \"warmup\" _C.TRAIN.LR_SCHEDULER.LR_LOWER_STEP = 20 # for 'steplr' _C.TRAIN.LR_SCHEDULER.LR_STEP", "_C.TRAIN.SAMPLER.IMAGE_RESIZE_SHORT = 450 # the need size of the short side of the", "SCALE_NAME) _C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NAME = [\"scale_+00\", \"flip_x_+00\", \"rotate_90_+00\", \"rotate_270_+00\", \"scale_+10\", \"flip_x_+10\", \"rotate_90_+10\", \"rotate_270_+10\", \"scale_+20\", \"flip_x_+20\",", "processing variance) _C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_MEAN = [0.485, 0.456, 0.406] # A fixed set variance _C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_VAR", "the input image needs to be resized to a fix size _C.TRAIN.SAMPLER.IMAGE_RESIZE_SHORT =", "is increased from 8 to 9. 
_C.DATASET.ADD_CLASS_NAME = \"UNK\" _C.DATASET.IMBALANCECIFAR = CN() _C.DATASET.IMBALANCECIFAR.RATIO", "CfgNode as CN _C = CN() # ----- BASIC SETTINGS ----- _C.NAME =", "# for other sampler _C.TRAIN.SAMPLER.WEIGHTED_SAMPLER = CN() _C.TRAIN.SAMPLER.WEIGHTED_SAMPLER.TYPE = \"balance\" # \"balance\", \"reverse\"", "= 10 _C.LOSS.GHMC.MOMENTUM = 0.0 # For MWNLoss _C.LOSS.MWNL = CN() _C.LOSS.MWNL.GAMMA =", "= \"\" _C.DATASET.DATA_TYPE = \"jpg\" _C.DATASET.TRAIN_JSON = \"\" _C.DATASET.VALID_JSON = \"\" _C.DATASET.TEST_JSON =", "\"\" _C.RESUME_MODE = \"all\" _C.CPU_MODE = False _C.EVAL_MODE = False _C.GPUS = [0,", "\"default\" # \"default\", \"weighted sampler\", \"oversample\" _C.TRAIN.SAMPLER.IMAGE_TYPE = \"derm\" # \"derm\", \"clinic\". For", "= 0 # For valid or test _C.TEST = CN() _C.TEST.BATCH_SIZE = 128", "+ \".\" + cfg.BACKBONE.TYPE + ( \"_BBN.\" if cfg.BACKBONE.BBN else \".\") + cfg.LOSS.LOSS_TYPE", "# For drw scheduler _C.LOSS.DRW_EPOCH = 50 # For cls scheduler _C.LOSS.CLS_EPOCH_MIN =", "cfg.BACKBONE.TYPE + ( \"_BBN.\" if cfg.BACKBONE.BBN else \".\") + cfg.LOSS.LOSS_TYPE + cfg.NAME cfg.merge_from_list(['NAME',", "\"multistep\", \"cosine\", \"warmup\" _C.TRAIN.LR_SCHEDULER.LR_LOWER_STEP = 20 # for 'steplr' _C.TRAIN.LR_SCHEDULER.LR_STEP = [40, 50]", "= \"\" _C.RESUME_MODE = \"all\" _C.CPU_MODE = False _C.EVAL_MODE = False _C.GPUS =", "A fixed set mean (input image will be subtracted from the mean, processing", "_C.DATASET.VALID_ADD_ONE_CLASS = False # for ISIC_2019 valid and test, the number of class", "# for 'multistep' _C.TRAIN.LR_SCHEDULER.LR_FACTOR = 0.1 _C.TRAIN.LR_SCHEDULER.WARM_EPOCH = 5 # for 'warmup' _C.TRAIN.LR_SCHEDULER.COSINE_DECAY_END", "drop block, below are drop block parameter _C.BACKBONE.DROP = CN() _C.BACKBONE.DROP.BLOCK_PROB = 0.1", "\"cls\"--cumulative learning strategy to set loss weight. 
# For drw scheduler _C.LOSS.DRW_EPOCH =", "----- _C.NAME = \"default\" _C.OUTPUT_DIR = \"./output/derm_7pt\" _C.VALID_STEP = 5 _C.SAVE_STEP = 5", "= 0.1 _C.BACKBONE.DROP.BLOCK_SIZE = 5 _C.BACKBONE.DROP.NR_STEPS = 50000 # dropout parameter to the", "RandAugment (0.1 -- 0.9) _C.TRAIN.SAMPLER.AUGMENT.AUG_MAG = 10 # the magnitude parameter 'M' of", "50000 # dropout parameter to the last FC layer _C.BACKBONE.DROP.OUT_PROB = 0.1 #", "_C.BACKBONE.DROP.OUT_PROB = 0.1 # ----- MODULE BUILDER ----- _C.MODULE = CN() _C.MODULE.TYPE =", "# ----- CLASSIFIER BUILDER ----- _C.CLASSIFIER = CN() _C.CLASSIFIER.TYPE = \"FC\" # \"FC\",", "0.0 # the ratio of edge of the image to be cropped. _C.TRAIN.SAMPLER.IMAGE_RESIZE", "mean, processing variance) _C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_MEAN = [0.485, 0.456, 0.406] # A fixed set variance", "_C.LOSS.LDAM.MAX_MARGIN = 0.5 # For FocalLoss _C.LOSS.FOCAL = CN() _C.LOSS.FOCAL.GAMMA = 2.0 _C.LOSS.FOCAL.TYPE", "12 # Number of scales to use during evaluation (must be less than", "_C.BACKBONE.TYPE = \"RegNetY_800MF\" # refer to lib/backbone/all_models.py _C.BACKBONE.BBN = False _C.BACKBONE.FREEZE = False", "drop block parameter _C.BACKBONE.DROP = CN() _C.BACKBONE.DROP.BLOCK_PROB = 0.1 _C.BACKBONE.DROP.BLOCK_SIZE = 5 _C.BACKBONE.DROP.NR_STEPS", "from 8 to 9. 
_C.DATASET.ADD_CLASS_NAME = \"UNK\" _C.DATASET.IMBALANCECIFAR = CN() _C.DATASET.IMBALANCECIFAR.RATIO = 0.01", "set mean (input image will be subtracted from the mean, processing variance) _C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_MEAN", "args): cfg.defrost() cfg.merge_from_file(args.cfg) cfg.merge_from_list(args.opts) cfg.freeze() def update_cfg_name(cfg): ''' modify the cfg.NAME :param cfg:", "update_cfg_name(cfg): ''' modify the cfg.NAME :param cfg: :return: ''' cfg.defrost() cfg_name = cfg.DATASET.DATASET", "# the probability parameter 'P' of Modified RandAugment (0.1 -- 0.9) _C.TRAIN.SAMPLER.AUGMENT.AUG_MAG =", "or RandAugment ('rand') (refer to: lib/data_transform/modified_randaugment.py) _C.TRAIN.SAMPLER.AUGMENT.AUG_PROB = 0.7 # the probability parameter", "# \"balance\", \"reverse\", \"uniform\" # for other sampler _C.TRAIN.SAMPLER.WEIGHTED_SAMPLER = CN() _C.TRAIN.SAMPLER.WEIGHTED_SAMPLER.TYPE =", "5 _C.SHOW_STEP = 20 _C.PIN_MEMORY = True _C.INPUT_SIZE = (224, 224) # (h,", "parameter to the last FC layer _C.BACKBONE.DROP.OUT_PROB = 0.1 # ----- MODULE BUILDER", "stage, # \"cls\"--cumulative learning strategy to set loss weight. 
# For drw scheduler", "_C.TRAIN.SAMPLER.AUGMENT.AUG_MAG = 10 # the magnitude parameter 'M' of Modified RandAugment (1 --", "for other sampler _C.TRAIN.SAMPLER.WEIGHTED_SAMPLER = CN() _C.TRAIN.SAMPLER.WEIGHTED_SAMPLER.TYPE = \"balance\" # \"balance\", \"reverse\" #", "_C.LOSS.LOW = CN() _C.LOSS.LOW.LAMB = 0.01 # For GHMCLoss _C.LOSS.GHMC = CN() _C.LOSS.GHMC.BINS", "'average' # Averaging or voting over the crop predictions (\"vote\", \"average\") # for", "10 _C.LOSS.GHMC.MOMENTUM = 0.0 # For MWNLoss _C.LOSS.MWNL = CN() _C.LOSS.MWNL.GAMMA = 2.0", "'MEL', 'MISC', 'SK'] _C.DATASET.VALID_ADD_ONE_CLASS = False # for ISIC_2019 valid and test, the", "_C.TRAIN.SAMPLER.BORDER_CROP_PIXEL = 0 # An integer specifying how many pixels to crop at", "a training image if AUG_METHOD = 'rand' # for BBN sampler _C.TRAIN.SAMPLER.DUAL_SAMPLER =", "'NADAM', 'RMSPROP' _C.TRAIN.OPTIMIZER.BASE_LR = 0.001 _C.TRAIN.OPTIMIZER.MOMENTUM = 0.9 _C.TRAIN.OPTIMIZER.WEIGHT_DECAY = 1e-4 # -----", "= 10 # the magnitude parameter 'M' of Modified RandAugment (1 -- 20)", "cropped. 
_C.TRAIN.SAMPLER.IMAGE_RESIZE = True # whether the input image needs to be resized", "_C.LOSS.LOW.LAMB = 0.01 # For GHMCLoss _C.LOSS.GHMC = CN() _C.LOSS.GHMC.BINS = 10 _C.LOSS.GHMC.MOMENTUM", "_C.LOSS.FOCAL = CN() _C.LOSS.FOCAL.GAMMA = 2.0 _C.LOSS.FOCAL.TYPE = \"sigmoid\" # \"cross_entropy\", \"sigmoid\", \"ldam\"", "transformation on the central crop _C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NUM = 12 # Number of scales to", "using drop block, below are drop block parameter _C.BACKBONE.DROP = CN() _C.BACKBONE.DROP.BLOCK_PROB =", "\"\" _C.DATASET.DATA_TYPE = \"jpg\" _C.DATASET.TRAIN_JSON = \"\" _C.DATASET.VALID_JSON = \"\" _C.DATASET.TEST_JSON = \"\"", "= \"\" _C.DATASET.TEST_JSON = \"\" _C.DATASET.CLASS_NAME = ['BCC', 'NV', 'MEL', 'MISC', 'SK'] _C.DATASET.VALID_ADD_ONE_CLASS", "----- BACKBONE BUILDER ----- _C.BACKBONE = CN() _C.BACKBONE.TYPE = \"RegNetY_800MF\" # refer to", "multi transformation of the center crop _C.TRAIN.SAMPLER.MULTI_SCALE = CN() _C.TRAIN.SAMPLER.MULTI_SCALE.ENABLE = False #", "= True # whether the input image needs to be resized to a", "(h, w) _C.COLOR_SPACE = \"RGB\" _C.RESUME_MODEL = \"\" _C.RESUME_MODE = \"all\" _C.CPU_MODE =", "= \"./output/derm_7pt\" _C.VALID_STEP = 5 _C.SAVE_STEP = 5 _C.SHOW_STEP = 20 _C.PIN_MEMORY =", "\"v1_0\" # the method of Modified RandAugment ('v0_0' to 'v3_1') or RandAugment ('rand')", "1 # the number of transformations applied to a training image if AUG_METHOD", "within a certain range of the central area (along the long side of", "70 _C.TRAIN.SHUFFLE = True _C.TRAIN.NUM_WORKERS = 8 _C.TRAIN.TENSORBOARD = CN() _C.TRAIN.TENSORBOARD.ENABLE = True", "will be subtracted from the mean, processing variance) _C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_MEAN = [0.485, 0.456, 0.406]", "0.5 # For FocalLoss _C.LOSS.FOCAL = CN() _C.LOSS.FOCAL.GAMMA = 2.0 _C.LOSS.FOCAL.TYPE = \"sigmoid\"", "= \"v1_0\" # the method of Modified RandAugment ('v0_0' to 'v3_1') or RandAugment", "for ISIC_2019 valid and test, the number of class is increased from 8", 
"Number of scales to use during evaluation (must be less than or equal", "'NV', 'MEL', 'MISC', 'SK'] _C.DATASET.VALID_ADD_ONE_CLASS = False # for ISIC_2019 valid and test,", "every gpu _C.TRAIN.MAX_EPOCH = 70 _C.TRAIN.SHUFFLE = True _C.TRAIN.NUM_WORKERS = 8 _C.TRAIN.TENSORBOARD =", "cfg.freeze() def update_cfg_name(cfg): ''' modify the cfg.NAME :param cfg: :return: ''' cfg.defrost() cfg_name", "= \"FC\" # \"FC\", \"FCNorm\" _C.CLASSIFIER.BIAS = True # ----- LOSS BUILDER -----", "\"FocalLoss\", \"LOWLoss\", \"GHMCLoss\", \"CCELoss\", \"MWNLoss\" _C.LOSS.SCHEDULER = \"default\" # \"default\"--the weights of all", "the crop predictions (\"vote\", \"average\") # for multi transformation of the center crop", "_C.LOSS.MWNL.BETA = 0.1 _C.LOSS.MWNL.TYPE = \"fix\" # \"zero\", \"fix\", \"decrease\" _C.LOSS.MWNL.SIGMOID = \"normal\"", "50 # For cls scheduler _C.LOSS.CLS_EPOCH_MIN = 20 _C.LOSS.CLS_EPOCH_MAX = 60 # For", "w) _C.COLOR_SPACE = \"RGB\" _C.RESUME_MODEL = \"\" _C.RESUME_MODE = \"all\" _C.CPU_MODE = False", "For GHMCLoss _C.LOSS.GHMC = CN() _C.LOSS.GHMC.BINS = 10 _C.LOSS.GHMC.MOMENTUM = 0.0 # For", "----- LOSS BUILDER ----- _C.LOSS = CN() _C.LOSS.WEIGHT_POWER = 1.1 _C.LOSS.EXTRA_WEIGHT = [1.0,", "TRAIN BUILDER ----- _C.TRAIN = CN() _C.TRAIN.BATCH_SIZE = 32 # for every gpu", "\".\" + cfg.BACKBONE.TYPE + ( \"_BBN.\" if cfg.BACKBONE.BBN else \".\") + cfg.LOSS.LOSS_TYPE +", "( \"_BBN.\" if cfg.BACKBONE.BBN else \".\") + cfg.LOSS.LOSS_TYPE + cfg.NAME cfg.merge_from_list(['NAME', cfg_name]) cfg.freeze()", "# ----- BASIC SETTINGS ----- _C.NAME = \"default\" _C.OUTPUT_DIR = \"./output/derm_7pt\" _C.VALID_STEP =", "_C.TRAIN.LR_SCHEDULER.LR_LOWER_STEP = 20 # for 'steplr' _C.TRAIN.LR_SCHEDULER.LR_STEP = [40, 50] # for 'multistep'", "_C.RESUME_MODE = \"all\" _C.CPU_MODE = False _C.EVAL_MODE = False _C.GPUS = [0, 1]", "_C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NUM = 12 # Number of scales to use during evaluation (must be", "\"FC\" # \"FC\", \"FCNorm\" _C.CLASSIFIER.BIAS = True # ----- 
LOSS BUILDER ----- _C.LOSS", "integer specifying how many pixels to crop at the image border. Useful if", "= True # Normalize using the mean and variance of each image, or", "20 # for 'steplr' _C.TRAIN.LR_SCHEDULER.LR_STEP = [40, 50] # for 'multistep' _C.TRAIN.LR_SCHEDULER.LR_FACTOR =", "# \"default\", \"weighted sampler\", \"oversample\" _C.TRAIN.SAMPLER.IMAGE_TYPE = \"derm\" # \"derm\", \"clinic\". For derm_7pt", "For derm_7pt dataset used. _C.TRAIN.SAMPLER.BORDER_CROP = \"pixel\" # \"pixel\", \"ratio\" _C.TRAIN.SAMPLER.BORDER_CROP_PIXEL = 0", "# For Modified RandAugment _C.TRAIN.SAMPLER.AUGMENT = CN() _C.TRAIN.SAMPLER.AUGMENT.NEED_AUGMENT = False _C.TRAIN.SAMPLER.AUGMENT.AUG_METHOD = \"v1_0\"", "\"sigmoid\" # \"cross_entropy\", \"sigmoid\", \"ldam\" _C.LOSS.FOCAL.SIGMOID = \"normal\" # \"normal\", \"enlarge\" # For", "BUILDER ----- _C.TRAIN = CN() _C.TRAIN.BATCH_SIZE = 32 # for every gpu _C.TRAIN.MAX_EPOCH", "of the image to be cropped. _C.TRAIN.SAMPLER.IMAGE_RESIZE = True # whether the input", "crop predictions (\"vote\", \"average\") # for multi transformation of the center crop _C.TRAIN.SAMPLER.MULTI_SCALE", "the last FC layer _C.BACKBONE.DROP.OUT_PROB = 0.1 # ----- MODULE BUILDER ----- _C.MODULE", "# \"GAP\", \"Identity\" # ----- CLASSIFIER BUILDER ----- _C.CLASSIFIER = CN() _C.CLASSIFIER.TYPE =", "_C.TRAIN.SAMPLER.MULTI_CROP.ENABLE = False # Should the crops be order or random for evaluation", "parameter 'P' of Modified RandAugment (0.1 -- 0.9) _C.TRAIN.SAMPLER.AUGMENT.AUG_MAG = 10 # the", "= \"\" # if using drop block, below are drop block parameter _C.BACKBONE.DROP", "the crops be order or random for evaluation _C.TRAIN.SAMPLER.MULTI_CROP.CROP_NUM = 16 # Number", "input image _C.TRAIN.SAMPLER.COLOR_CONSTANCY = False _C.TRAIN.SAMPLER.CONSTANCY_POWER = 6.0 _C.TRAIN.SAMPLER.CONSTANCY_GAMMA = 0.0 # For", "block parameter _C.BACKBONE.DROP = CN() _C.BACKBONE.DROP.BLOCK_PROB = 0.1 _C.BACKBONE.DROP.BLOCK_SIZE = 5 _C.BACKBONE.DROP.NR_STEPS =", "(0.1 -- 0.9) 
_C.TRAIN.SAMPLER.AUGMENT.AUG_MAG = 10 # the magnitude parameter 'M' of Modified", "random for evaluation _C.TRAIN.SAMPLER.MULTI_CROP.CROP_NUM = 16 # Number of crops to use during", "central crop _C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NUM = 12 # Number of scales to use during evaluation", "0.1 _C.TRAIN.LR_SCHEDULER.WARM_EPOCH = 5 # for 'warmup' _C.TRAIN.LR_SCHEDULER.COSINE_DECAY_END = 0 # For valid", "\"decrease\" _C.LOSS.MWNL.SIGMOID = \"normal\" # \"normal\", \"enlarge\" # ----- TRAIN BUILDER ----- _C.TRAIN", "0.7 # the probability parameter 'P' of Modified RandAugment (0.1 -- 0.9) _C.TRAIN.SAMPLER.AUGMENT.AUG_MAG", "if images contain a black boundary. _C.TRAIN.SAMPLER.BORDER_CROP_RATIO = 0.0 # the ratio of", "_C.TRAIN.SAMPLER.MULTI_SCALE = CN() _C.TRAIN.SAMPLER.MULTI_SCALE.ENABLE = False # whether to perform multi transformation on", "# Normalize using the mean and variance of each image, or using fixed", "# ----- TRAIN BUILDER ----- _C.TRAIN = CN() _C.TRAIN.BATCH_SIZE = 32 # for", "Modified RandAugment ('v0_0' to 'v3_1') or RandAugment ('rand') (refer to: lib/data_transform/modified_randaugment.py) _C.TRAIN.SAMPLER.AUGMENT.AUG_PROB =", "multi crop _C.TRAIN.SAMPLER.MULTI_CROP = CN() _C.TRAIN.SAMPLER.MULTI_CROP.ENABLE = False # Should the crops be", "\"flip_x_+30\", \"rotate_90_+30\", \"rotate_270_+30\", \"scale_-10\", \"flip_x_-10\", \"rotate_90_-10\", \"rotate_270_-10\", \"flip_y_+00\", \"flip_y_+10\", \"flip_y_-10\", \"flip_y_+20\"] _C.TRAIN.SAMPLER.FIX_MEAN_VAR =", "cfg.NAME :param cfg: :return: ''' cfg.defrost() cfg_name = cfg.DATASET.DATASET + \".\" + cfg.BACKBONE.TYPE", "_C.TRAIN = CN() _C.TRAIN.BATCH_SIZE = 32 # for every gpu _C.TRAIN.MAX_EPOCH = 70", "# \"CrossEntropy\", \"LDAMLoss\", \"FocalLoss\", \"LOWLoss\", \"GHMCLoss\", \"CCELoss\", \"MWNLoss\" _C.LOSS.SCHEDULER = \"default\" # \"default\"--the", "# For GHMCLoss _C.LOSS.GHMC = CN() _C.LOSS.GHMC.BINS = 10 _C.LOSS.GHMC.MOMENTUM = 0.0 #", "\"\" _C.DATASET.TEST_JSON = \"\" _C.DATASET.CLASS_NAME = ['BCC', 
'NV', 'MEL', 'MISC', 'SK'] _C.DATASET.VALID_ADD_ONE_CLASS =", "all classes are \"1.0\", # \"re_weight\"--re-weighting by the power of inverse class frequency", "the mean and variance of each image, or using fixed values # A", "than or equal to the length of SCALE_NAME) _C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NAME = [\"scale_+00\", \"flip_x_+00\", \"rotate_90_+00\",", "the center crop _C.TRAIN.SAMPLER.MULTI_SCALE = CN() _C.TRAIN.SAMPLER.MULTI_SCALE.ENABLE = False # whether to perform", "0.9 _C.TRAIN.OPTIMIZER.WEIGHT_DECAY = 1e-4 # ----- LR_SCHEDULER ----- _C.TRAIN.LR_SCHEDULER = CN() _C.TRAIN.LR_SCHEDULER.TYPE =", "0.225] # ----- OPTIMIZER ----- _C.TRAIN.OPTIMIZER = CN() _C.TRAIN.OPTIMIZER.TYPE = \"SGD\" # 'SGD',", "parameter 'M' of Modified RandAugment (1 -- 20) _C.TRAIN.SAMPLER.AUGMENT.AUG_LAYER_NUM = 1 # the", "\"SGD\" # 'SGD', 'ADAM', 'NADAM', 'RMSPROP' _C.TRAIN.OPTIMIZER.BASE_LR = 0.001 _C.TRAIN.OPTIMIZER.MOMENTUM = 0.9 _C.TRAIN.OPTIMIZER.WEIGHT_DECAY", "the image) _C.TRAIN.SAMPLER.MULTI_CROP.SCHEME = 'average' # Averaging or voting over the crop predictions", "re-weighting at the second stage, # \"cls\"--cumulative learning strategy to set loss weight.", "of class is increased from 8 to 9. 
_C.DATASET.ADD_CLASS_NAME = \"UNK\" _C.DATASET.IMBALANCECIFAR =", "to use during evaluation (must be N^2) _C.TRAIN.SAMPLER.MULTI_CROP.L_REGION = 1.0 # Only crop", "MODULE BUILDER ----- _C.MODULE = CN() _C.MODULE.TYPE = \"GAP\" # \"GAP\", \"Identity\" #", "True _C.TRAIN.NUM_WORKERS = 8 _C.TRAIN.TENSORBOARD = CN() _C.TRAIN.TENSORBOARD.ENABLE = True # ----- SAMPLER", "= 0.1 _C.TRAIN.LR_SCHEDULER.WARM_EPOCH = 5 # for 'warmup' _C.TRAIN.LR_SCHEDULER.COSINE_DECAY_END = 0 # For", "_C.TRAIN.SAMPLER.DUAL_SAMPLER = CN() _C.TRAIN.SAMPLER.DUAL_SAMPLER.TYPE = \"reversed\" # \"balance\", \"reverse\", \"uniform\" # for other", "multi transformation on the central crop _C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NUM = 12 # Number of scales", "cfg: :return: ''' cfg.defrost() cfg_name = cfg.DATASET.DATASET + \".\" + cfg.BACKBONE.TYPE + (", "_C.LOSS = CN() _C.LOSS.WEIGHT_POWER = 1.1 _C.LOSS.EXTRA_WEIGHT = [1.0, 1.0, 1.0, 1.0, 1.0]", "(1 -- 20) _C.TRAIN.SAMPLER.AUGMENT.AUG_LAYER_NUM = 1 # the number of transformations applied to", "\"warmup\" _C.TRAIN.LR_SCHEDULER.LR_LOWER_STEP = 20 # for 'steplr' _C.TRAIN.LR_SCHEDULER.LR_STEP = [40, 50] # for", "True # ----- LOSS BUILDER ----- _C.LOSS = CN() _C.LOSS.WEIGHT_POWER = 1.1 _C.LOSS.EXTRA_WEIGHT", "crops to use during evaluation (must be N^2) _C.TRAIN.SAMPLER.MULTI_CROP.L_REGION = 1.0 # Only", "_C.TEST.MODEL_FILE = \"best_model.pth\" def update_config(cfg, args): cfg.defrost() cfg.merge_from_file(args.cfg) cfg.merge_from_list(args.opts) cfg.freeze() def update_cfg_name(cfg): '''", "stage, # \"drw\"--two-stage strategy using re-weighting at the second stage, # \"cls\"--cumulative learning", "# dropout parameter to the last FC layer _C.BACKBONE.DROP.OUT_PROB = 0.1 # -----", "Modified RandAugment (1 -- 20) _C.TRAIN.SAMPLER.AUGMENT.AUG_LAYER_NUM = 1 # the number of transformations", "(\"vote\", \"average\") # for multi transformation of the center crop _C.TRAIN.SAMPLER.MULTI_SCALE = CN()", "= 5 _C.SAVE_STEP = 5 _C.SHOW_STEP = 20 _C.PIN_MEMORY = True 
_C.INPUT_SIZE =", "BUILDER ----- _C.TRAIN.SAMPLER = CN() _C.TRAIN.SAMPLER.TYPE = \"default\" # \"default\", \"weighted sampler\", \"oversample\"", "training image if AUG_METHOD = 'rand' # for BBN sampler _C.TRAIN.SAMPLER.DUAL_SAMPLER = CN()", "_C.TRAIN.SAMPLER.FIX_MEAN_VAR = CN() _C.TRAIN.SAMPLER.FIX_MEAN_VAR.ENABLE = True # Normalize using the mean and variance", "= CN() _C.DATASET.DATASET = \"\" _C.DATASET.ROOT = \"\" _C.DATASET.DATA_TYPE = \"jpg\" _C.DATASET.TRAIN_JSON =", "= CN() _C.TRAIN.SAMPLER.FIX_MEAN_VAR.ENABLE = True # Normalize using the mean and variance of", "1.0] _C.LOSS.LOSS_TYPE = \"CrossEntropy\" # \"CrossEntropy\", \"LDAMLoss\", \"FocalLoss\", \"LOWLoss\", \"GHMCLoss\", \"CCELoss\", \"MWNLoss\" _C.LOSS.SCHEDULER", "_C.BACKBONE.DROP.NR_STEPS = 50000 # dropout parameter to the last FC layer _C.BACKBONE.DROP.OUT_PROB =", "# For cls scheduler _C.LOSS.CLS_EPOCH_MIN = 20 _C.LOSS.CLS_EPOCH_MAX = 60 # For LDAMLoss", "from yacs.config import CfgNode as CN _C = CN() # ----- BASIC SETTINGS", "cls scheduler _C.LOSS.CLS_EPOCH_MIN = 20 _C.LOSS.CLS_EPOCH_MAX = 60 # For LDAMLoss _C.LOSS.LDAM =", "# Only crop within a certain range of the central area (along the", "AUG_METHOD = 'rand' # for BBN sampler _C.TRAIN.SAMPLER.DUAL_SAMPLER = CN() _C.TRAIN.SAMPLER.DUAL_SAMPLER.TYPE = \"reversed\"", "number of class is increased from 8 to 9. 
# Default yacs configuration for yaopengUSTC/mbit-skin-cancer.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from yacs.config import CfgNode as CN

_C = CN()

# ----- BASIC SETTINGS -----
_C.NAME = "default"
_C.OUTPUT_DIR = "./output/derm_7pt"
_C.VALID_STEP = 5
_C.SAVE_STEP = 5
_C.SHOW_STEP = 20
_C.PIN_MEMORY = True
_C.INPUT_SIZE = (224, 224)  # (h, w)
_C.COLOR_SPACE = "RGB"
_C.RESUME_MODEL = ""
_C.RESUME_MODE = "all"
_C.CPU_MODE = False
_C.EVAL_MODE = False
_C.GPUS = [0, 1]

# ----- DATASET BUILDER -----
_C.DATASET = CN()
_C.DATASET.DATASET = ""
_C.DATASET.ROOT = ""
_C.DATASET.DATA_TYPE = "jpg"
_C.DATASET.TRAIN_JSON = ""
_C.DATASET.VALID_JSON = ""
_C.DATASET.TEST_JSON = ""
_C.DATASET.CLASS_NAME = ['BCC', 'NV', 'MEL', 'MISC', 'SK']
# for ISIC_2019 valid and test, the number of classes is increased from 8 to 9.
_C.DATASET.VALID_ADD_ONE_CLASS = False
_C.DATASET.ADD_CLASS_NAME = "UNK"
_C.DATASET.IMBALANCECIFAR = CN()
_C.DATASET.IMBALANCECIFAR.RATIO = 0.01
_C.DATASET.IMBALANCECIFAR.RANDOM_SEED = 0
# ----- BACKBONE BUILDER -----
_C.BACKBONE = CN()
_C.BACKBONE.TYPE = "RegNetY_800MF"  # refer to lib/backbone/all_models.py
_C.BACKBONE.BBN = False
_C.BACKBONE.FREEZE = False
_C.BACKBONE.PRE_FREEZE = False
_C.BACKBONE.PRE_FREEZE_EPOCH = 5
_C.BACKBONE.PRETRAINED = True
_C.BACKBONE.PRETRAINED_MODEL = ""
# if using drop block, below are the drop block parameters
_C.BACKBONE.DROP = CN()
_C.BACKBONE.DROP.BLOCK_PROB = 0.1
_C.BACKBONE.DROP.BLOCK_SIZE = 5
_C.BACKBONE.DROP.NR_STEPS = 50000
# dropout parameter for the last FC layer
_C.BACKBONE.DROP.OUT_PROB = 0.1

# ----- MODULE BUILDER -----
_C.MODULE = CN()
_C.MODULE.TYPE = "GAP"  # "GAP", "Identity"

# ----- CLASSIFIER BUILDER -----
_C.CLASSIFIER = CN()
_C.CLASSIFIER.TYPE = "FC"  # "FC", "FCNorm"
_C.CLASSIFIER.BIAS = True

# ----- LOSS BUILDER -----
_C.LOSS = CN()
_C.LOSS.WEIGHT_POWER = 1.1
_C.LOSS.EXTRA_WEIGHT = [1.0, 1.0, 1.0, 1.0, 1.0]
# "CrossEntropy", "LDAMLoss", "FocalLoss", "LOWLoss", "GHMCLoss", "CCELoss", "MWNLoss"
_C.LOSS.LOSS_TYPE = "CrossEntropy"
# "default"   -- the weights of all classes are "1.0",
# "re_weight" -- re-weighting by the inverse class frequency at all train stages,
# "drw"       -- two-stage strategy using re-weighting at the second stage,
# "cls"       -- cumulative learning strategy to set the loss weight.
_C.LOSS.SCHEDULER = "default"
# For the drw scheduler
_C.LOSS.DRW_EPOCH = 50
# For the cls scheduler
_C.LOSS.CLS_EPOCH_MIN = 20
_C.LOSS.CLS_EPOCH_MAX = 60
# For LDAMLoss
_C.LOSS.LDAM = CN()
_C.LOSS.LDAM.MAX_MARGIN = 0.5
# For FocalLoss
_C.LOSS.FOCAL = CN()
_C.LOSS.FOCAL.GAMMA = 2.0
_C.LOSS.FOCAL.TYPE = "sigmoid"  # "cross_entropy", "sigmoid", "ldam"
_C.LOSS.FOCAL.SIGMOID = "normal"  # "normal", "enlarge"
# For LOWLoss
_C.LOSS.LOW = CN()
_C.LOSS.LOW.LAMB = 0.01
# For GHMCLoss
_C.LOSS.GHMC = CN()
_C.LOSS.GHMC.BINS = 10
_C.LOSS.GHMC.MOMENTUM = 0.0
# For MWNLoss
_C.LOSS.MWNL = CN()
_C.LOSS.MWNL.GAMMA = 2.0
_C.LOSS.MWNL.BETA = 0.1
_C.LOSS.MWNL.TYPE = "fix"  # "zero", "fix", "decrease"
_C.LOSS.MWNL.SIGMOID = "normal"  # "normal", "enlarge"
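# The "re_weight" scheduler above re-weights classes by inverse frequency, and
# LOSS.WEIGHT_POWER tunes how strongly. Exactly how the repo applies the power
# is not shown in this file, so the helper below (its name and the mean-1.0
# rescaling convention are assumptions) only sketches the idea.

```python
def inverse_freq_weights(samples_per_class, weight_power=1.1):
    # Inverse class frequency raised to weight_power, rescaled so the
    # weights average to 1.0 across classes.
    total = float(sum(samples_per_class))
    raw = [(total / n) ** weight_power for n in samples_per_class]
    scale = len(raw) / sum(raw)
    return [w * scale for w in raw]


# A long-tailed 5-class example: the rarest class gets the largest weight.
weights = inverse_freq_weights([1000, 500, 200, 50, 10], weight_power=1.1)
```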
# ----- TRAIN BUILDER -----
_C.TRAIN = CN()
_C.TRAIN.BATCH_SIZE = 32  # for every gpu
_C.TRAIN.MAX_EPOCH = 70
_C.TRAIN.SHUFFLE = True
_C.TRAIN.NUM_WORKERS = 8
_C.TRAIN.TENSORBOARD = CN()
_C.TRAIN.TENSORBOARD.ENABLE = True

# ----- SAMPLER BUILDER -----
_C.TRAIN.SAMPLER = CN()
_C.TRAIN.SAMPLER.TYPE = "default"  # "default", "weighted sampler", "oversample"
_C.TRAIN.SAMPLER.IMAGE_TYPE = "derm"  # "derm", "clinic"; used for the derm_7pt dataset
_C.TRAIN.SAMPLER.BORDER_CROP = "pixel"  # "pixel", "ratio"
# An integer specifying how many pixels to crop at the image border.
# Useful if images contain a black boundary.
_C.TRAIN.SAMPLER.BORDER_CROP_PIXEL = 0
# the ratio of the edge of the image to be cropped
_C.TRAIN.SAMPLER.BORDER_CROP_RATIO = 0.0
# whether the input image needs to be resized to a fixed size
_C.TRAIN.SAMPLER.IMAGE_RESIZE = True
# the needed size of the short side of the input image
_C.TRAIN.SAMPLER.IMAGE_RESIZE_SHORT = 450
_C.TRAIN.SAMPLER.COLOR_CONSTANCY = False
_C.TRAIN.SAMPLER.CONSTANCY_POWER = 6.0
_C.TRAIN.SAMPLER.CONSTANCY_GAMMA = 0.0
# For Modified RandAugment
_C.TRAIN.SAMPLER.AUGMENT = CN()
_C.TRAIN.SAMPLER.AUGMENT.NEED_AUGMENT = False
# the method of Modified RandAugment ('v0_0' to 'v3_1') or RandAugment ('rand')
# (refer to: lib/data_transform/modified_randaugment.py)
_C.TRAIN.SAMPLER.AUGMENT.AUG_METHOD = "v1_0"
# the probability parameter 'P' of Modified RandAugment (0.1 -- 0.9)
_C.TRAIN.SAMPLER.AUGMENT.AUG_PROB = 0.7
# the magnitude parameter 'M' of Modified RandAugment (1 -- 20)
_C.TRAIN.SAMPLER.AUGMENT.AUG_MAG = 10
# the number of transformations applied to a training image if AUG_METHOD = 'rand'
_C.TRAIN.SAMPLER.AUGMENT.AUG_LAYER_NUM = 1
# for the BBN sampler
_C.TRAIN.SAMPLER.DUAL_SAMPLER = CN()
_C.TRAIN.SAMPLER.DUAL_SAMPLER.TYPE = "reversed"  # "balance", "reverse", "uniform"
# for other samplers
_C.TRAIN.SAMPLER.WEIGHTED_SAMPLER = CN()
_C.TRAIN.SAMPLER.WEIGHTED_SAMPLER.TYPE = "balance"  # "balance", "reverse"
# for multi crop
_C.TRAIN.SAMPLER.MULTI_CROP = CN()
# Should the crops be ordered or random for evaluation
_C.TRAIN.SAMPLER.MULTI_CROP.ENABLE = False
# Number of crops to use during evaluation (must be N^2)
_C.TRAIN.SAMPLER.MULTI_CROP.CROP_NUM = 16
# Only crop within a certain range of the central area (along the long side of the image)
_C.TRAIN.SAMPLER.MULTI_CROP.L_REGION = 1.0
# Only crop within a certain range of the central area (along the short side of the image)
_C.TRAIN.SAMPLER.MULTI_CROP.S_REGION = 1.0
# Averaging or voting over the crop predictions ("vote", "average")
_C.TRAIN.SAMPLER.MULTI_CROP.SCHEME = 'average'
# for multi transformation of the center crop
_C.TRAIN.SAMPLER.MULTI_SCALE = CN()
# whether to perform multi transformation on the central crop
_C.TRAIN.SAMPLER.MULTI_SCALE.ENABLE = False
# Number of scales to use during evaluation (must be less than or equal to
# the length of SCALE_NAME)
_C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NUM = 12
_C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NAME = [
    "scale_+00", "flip_x_+00", "rotate_90_+00", "rotate_270_+00",
    "scale_+10", "flip_x_+10", "rotate_90_+10", "rotate_270_+10",
    "scale_+20", "flip_x_+20", "rotate_90_+20", "rotate_270_+20",
    "scale_+30", "flip_x_+30", "rotate_90_+30", "rotate_270_+30",
    "scale_-10", "flip_x_-10", "rotate_90_-10", "rotate_270_-10",
    "flip_y_+00", "flip_y_+10", "flip_y_-10", "flip_y_+20",
]
_C.TRAIN.SAMPLER.FIX_MEAN_VAR = CN()
# Normalize using the mean and variance of each image, or using fixed values
_C.TRAIN.SAMPLER.FIX_MEAN_VAR.ENABLE = True
# A fixed set mean (the mean is subtracted from the input image before the
# variance is applied)
_C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_MEAN = [0.485, 0.456, 0.406]
# A fixed set variance
_C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_VAR = [0.229, 0.224, 0.225]
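# With FIX_MEAN_VAR.ENABLE the loader normalizes with the fixed values above
# (the standard ImageNet channel statistics); otherwise it uses each image's
# own statistics. A pure-Python sketch of both modes (these helper names are
# illustrative, not the repo's API):

```python
def normalize_fixed(pixel, mean=(0.485, 0.456, 0.406), var=(0.229, 0.224, 0.225)):
    # Channel-wise (x - mean) / var for one RGB pixel scaled to [0, 1].
    return tuple((c - m) / v for c, m, v in zip(pixel, mean, var))


def normalize_per_image(values):
    # Normalize a flat list of pixel values by the image's own mean and std.
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5 or 1.0
    return [(v - mean) / std for v in values]
```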
# ----- OPTIMIZER -----
_C.TRAIN.OPTIMIZER = CN()
_C.TRAIN.OPTIMIZER.TYPE = "SGD"  # 'SGD', 'ADAM', 'NADAM', 'RMSPROP'
_C.TRAIN.OPTIMIZER.BASE_LR = 0.001
_C.TRAIN.OPTIMIZER.MOMENTUM = 0.9
_C.TRAIN.OPTIMIZER.WEIGHT_DECAY = 1e-4

# ----- LR_SCHEDULER -----
_C.TRAIN.LR_SCHEDULER = CN()
_C.TRAIN.LR_SCHEDULER.TYPE = "multistep"  # "steplr", "multistep", "cosine", "warmup"
_C.TRAIN.LR_SCHEDULER.LR_LOWER_STEP = 20  # for 'steplr'
_C.TRAIN.LR_SCHEDULER.LR_STEP = [40, 50]  # for 'multistep'
_C.TRAIN.LR_SCHEDULER.LR_FACTOR = 0.1
_C.TRAIN.LR_SCHEDULER.WARM_EPOCH = 5  # for 'warmup'
_C.TRAIN.LR_SCHEDULER.COSINE_DECAY_END = 0

# For valid or test
_C.TEST = CN()
_C.TEST.BATCH_SIZE = 128  # for every gpu
_C.TEST.NUM_WORKERS = 8
_C.TEST.MODEL_FILE = "best_model.pth"


def update_config(cfg, args):
    cfg.defrost()
    cfg.merge_from_file(args.cfg)
    cfg.merge_from_list(args.opts)
    cfg.freeze()


def update_cfg_name(cfg):
    '''
    modify the cfg.NAME
    :param cfg:
    :return:
    '''
    cfg.defrost()
    cfg_name = cfg.DATASET.DATASET + "." + cfg.BACKBONE.TYPE + (
        "_BBN." if cfg.BACKBONE.BBN else ".") + cfg.LOSS.LOSS_TYPE + cfg.NAME
    cfg.NAME = cfg_name
    cfg.freeze()
_C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_VAR = [0.229, 0.224, 0.225] #", "= ['BCC', 'NV', 'MEL', 'MISC', 'SK'] _C.DATASET.VALID_ADD_ONE_CLASS = False # for ISIC_2019 valid", "= CN() _C.LOSS.WEIGHT_POWER = 1.1 _C.LOSS.EXTRA_WEIGHT = [1.0, 1.0, 1.0, 1.0, 1.0] _C.LOSS.LOSS_TYPE", "the mean, processing variance) _C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_MEAN = [0.485, 0.456, 0.406] # A fixed set", "= False _C.BACKBONE.FREEZE = False _C.BACKBONE.PRE_FREEZE = False _C.BACKBONE.PRE_FREEZE_EPOCH = 5 _C.BACKBONE.PRETRAINED =", "# ----- LR_SCHEDULER ----- _C.TRAIN.LR_SCHEDULER = CN() _C.TRAIN.LR_SCHEDULER.TYPE = \"multistep\" # \"steplr\", \"multistep\",", "other sampler _C.TRAIN.SAMPLER.WEIGHTED_SAMPLER = CN() _C.TRAIN.SAMPLER.WEIGHTED_SAMPLER.TYPE = \"balance\" # \"balance\", \"reverse\" # for", "\"flip_y_+10\", \"flip_y_-10\", \"flip_y_+20\"] _C.TRAIN.SAMPLER.FIX_MEAN_VAR = CN() _C.TRAIN.SAMPLER.FIX_MEAN_VAR.ENABLE = True # Normalize using the", "# ----- LOSS BUILDER ----- _C.LOSS = CN() _C.LOSS.WEIGHT_POWER = 1.1 _C.LOSS.EXTRA_WEIGHT =", "modify the cfg.NAME :param cfg: :return: ''' cfg.defrost() cfg_name = cfg.DATASET.DATASET + \".\"", "for every gpu _C.TRAIN.MAX_EPOCH = 70 _C.TRAIN.SHUFFLE = True _C.TRAIN.NUM_WORKERS = 8 _C.TRAIN.TENSORBOARD", "CN() _C.TRAIN.SAMPLER.MULTI_CROP.ENABLE = False # Should the crops be order or random for", "of the image) _C.TRAIN.SAMPLER.MULTI_CROP.SCHEME = 'average' # Averaging or voting over the crop", "1.0, 1.0, 1.0, 1.0] _C.LOSS.LOSS_TYPE = \"CrossEntropy\" # \"CrossEntropy\", \"LDAMLoss\", \"FocalLoss\", \"LOWLoss\", \"GHMCLoss\",", "_C.DATASET.TRAIN_JSON = \"\" _C.DATASET.VALID_JSON = \"\" _C.DATASET.TEST_JSON = \"\" _C.DATASET.CLASS_NAME = ['BCC', 'NV',", "# for 'steplr' _C.TRAIN.LR_SCHEDULER.LR_STEP = [40, 50] # for 'multistep' _C.TRAIN.LR_SCHEDULER.LR_FACTOR = 0.1", "\"flip_x_+10\", \"rotate_90_+10\", \"rotate_270_+10\", \"scale_+20\", \"flip_x_+20\", \"rotate_90_+20\", \"rotate_270_+20\", \"scale_+30\", \"flip_x_+30\", \"rotate_90_+30\", 
\"rotate_270_+30\", \"scale_-10\", \"flip_x_-10\",", "last FC layer _C.BACKBONE.DROP.OUT_PROB = 0.1 # ----- MODULE BUILDER ----- _C.MODULE =", "----- DATASET BUILDER ----- _C.DATASET = CN() _C.DATASET.DATASET = \"\" _C.DATASET.ROOT = \"\"", "= False _C.GPUS = [0, 1] # ----- DATASET BUILDER ----- _C.DATASET =", "\"\" _C.DATASET.CLASS_NAME = ['BCC', 'NV', 'MEL', 'MISC', 'SK'] _C.DATASET.VALID_ADD_ONE_CLASS = False # for", "the image border. Useful if images contain a black boundary. _C.TRAIN.SAMPLER.BORDER_CROP_RATIO = 0.0", "lib/data_transform/modified_randaugment.py) _C.TRAIN.SAMPLER.AUGMENT.AUG_PROB = 0.7 # the probability parameter 'P' of Modified RandAugment (0.1", "variance of each image, or using fixed values # A fixed set mean", "= True # ----- LOSS BUILDER ----- _C.LOSS = CN() _C.LOSS.WEIGHT_POWER = 1.1", "cfg.DATASET.DATASET + \".\" + cfg.BACKBONE.TYPE + ( \"_BBN.\" if cfg.BACKBONE.BBN else \".\") +", "_C.TRAIN.SAMPLER.COLOR_CONSTANCY = False _C.TRAIN.SAMPLER.CONSTANCY_POWER = 6.0 _C.TRAIN.SAMPLER.CONSTANCY_GAMMA = 0.0 # For Modified RandAugment", "= CN() _C.BACKBONE.TYPE = \"RegNetY_800MF\" # refer to lib/backbone/all_models.py _C.BACKBONE.BBN = False _C.BACKBONE.FREEZE", "= \"CrossEntropy\" # \"CrossEntropy\", \"LDAMLoss\", \"FocalLoss\", \"LOWLoss\", \"GHMCLoss\", \"CCELoss\", \"MWNLoss\" _C.LOSS.SCHEDULER = \"default\"", "[0, 1] # ----- DATASET BUILDER ----- _C.DATASET = CN() _C.DATASET.DATASET = \"\"", "image _C.TRAIN.SAMPLER.COLOR_CONSTANCY = False _C.TRAIN.SAMPLER.CONSTANCY_POWER = 6.0 _C.TRAIN.SAMPLER.CONSTANCY_GAMMA = 0.0 # For Modified", "_C.COLOR_SPACE = \"RGB\" _C.RESUME_MODEL = \"\" _C.RESUME_MODE = \"all\" _C.CPU_MODE = False _C.EVAL_MODE", "of the image) _C.TRAIN.SAMPLER.MULTI_CROP.S_REGION = 1.0 # Only crop within a certain range", "_C.TRAIN.SAMPLER.IMAGE_RESIZE = True # whether the input image needs to be resized to", "False # whether to perform multi transformation on the central crop _C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NUM =", "(must be 
less than or equal to the length of SCALE_NAME) _C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NAME =", "= CN() _C.MODULE.TYPE = \"GAP\" # \"GAP\", \"Identity\" # ----- CLASSIFIER BUILDER -----", "CN() _C.BACKBONE.DROP.BLOCK_PROB = 0.1 _C.BACKBONE.DROP.BLOCK_SIZE = 5 _C.BACKBONE.DROP.NR_STEPS = 50000 # dropout parameter", "_C = CN() # ----- BASIC SETTINGS ----- _C.NAME = \"default\" _C.OUTPUT_DIR =", "border. Useful if images contain a black boundary. _C.TRAIN.SAMPLER.BORDER_CROP_RATIO = 0.0 # the", "the number of class is increased from 8 to 9. _C.DATASET.ADD_CLASS_NAME = \"UNK\"", "\"CrossEntropy\", \"LDAMLoss\", \"FocalLoss\", \"LOWLoss\", \"GHMCLoss\", \"CCELoss\", \"MWNLoss\" _C.LOSS.SCHEDULER = \"default\" # \"default\"--the weights", "CN() _C.MODULE.TYPE = \"GAP\" # \"GAP\", \"Identity\" # ----- CLASSIFIER BUILDER ----- _C.CLASSIFIER", "_C.TRAIN.SAMPLER.CONSTANCY_POWER = 6.0 _C.TRAIN.SAMPLER.CONSTANCY_GAMMA = 0.0 # For Modified RandAugment _C.TRAIN.SAMPLER.AUGMENT = CN()", "_C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NAME = [\"scale_+00\", \"flip_x_+00\", \"rotate_90_+00\", \"rotate_270_+00\", \"scale_+10\", \"flip_x_+10\", \"rotate_90_+10\", \"rotate_270_+10\", \"scale_+20\", \"flip_x_+20\", \"rotate_90_+20\",", "Averaging or voting over the crop predictions (\"vote\", \"average\") # for multi transformation", "= 0.01 # For GHMCLoss _C.LOSS.GHMC = CN() _C.LOSS.GHMC.BINS = 10 _C.LOSS.GHMC.MOMENTUM =", "[0.485, 0.456, 0.406] # A fixed set variance _C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_VAR = [0.229, 0.224, 0.225]", "# \"steplr\", \"multistep\", \"cosine\", \"warmup\" _C.TRAIN.LR_SCHEDULER.LR_LOWER_STEP = 20 # for 'steplr' _C.TRAIN.LR_SCHEDULER.LR_STEP =", "\"rotate_90_+20\", \"rotate_270_+20\", \"scale_+30\", \"flip_x_+30\", \"rotate_90_+30\", \"rotate_270_+30\", \"scale_-10\", \"flip_x_-10\", \"rotate_90_-10\", \"rotate_270_-10\", \"flip_y_+00\", \"flip_y_+10\", \"flip_y_-10\",", "= 16 # Number of crops to use during evaluation (must be N^2)", "'SK'] _C.DATASET.VALID_ADD_ONE_CLASS = 
False # for ISIC_2019 valid and test, the number of", "CN() _C.TRAIN.OPTIMIZER.TYPE = \"SGD\" # 'SGD', 'ADAM', 'NADAM', 'RMSPROP' _C.TRAIN.OPTIMIZER.BASE_LR = 0.001 _C.TRAIN.OPTIMIZER.MOMENTUM", "_C.TRAIN.SAMPLER.BORDER_CROP = \"pixel\" # \"pixel\", \"ratio\" _C.TRAIN.SAMPLER.BORDER_CROP_PIXEL = 0 # An integer specifying", "_C.DATASET.TEST_JSON = \"\" _C.DATASET.CLASS_NAME = ['BCC', 'NV', 'MEL', 'MISC', 'SK'] _C.DATASET.VALID_ADD_ONE_CLASS = False", "= CN() _C.TRAIN.SAMPLER.MULTI_CROP.ENABLE = False # Should the crops be order or random", "_C.BACKBONE.DROP.BLOCK_SIZE = 5 _C.BACKBONE.DROP.NR_STEPS = 50000 # dropout parameter to the last FC", "_C.DATASET.IMBALANCECIFAR.RANDOM_SEED = 0 # ----- BACKBONE BUILDER ----- _C.BACKBONE = CN() _C.BACKBONE.TYPE =", "_C.TRAIN.LR_SCHEDULER.COSINE_DECAY_END = 0 # For valid or test _C.TEST = CN() _C.TEST.BATCH_SIZE =", "\"Identity\" # ----- CLASSIFIER BUILDER ----- _C.CLASSIFIER = CN() _C.CLASSIFIER.TYPE = \"FC\" #", "to 9. _C.DATASET.ADD_CLASS_NAME = \"UNK\" _C.DATASET.IMBALANCECIFAR = CN() _C.DATASET.IMBALANCECIFAR.RATIO = 0.01 _C.DATASET.IMBALANCECIFAR.RANDOM_SEED =", "cfg.merge_from_list(args.opts) cfg.freeze() def update_cfg_name(cfg): ''' modify the cfg.NAME :param cfg: :return: ''' cfg.defrost()", "# for every gpu _C.TRAIN.MAX_EPOCH = 70 _C.TRAIN.SHUFFLE = True _C.TRAIN.NUM_WORKERS = 8", "for 'multistep' _C.TRAIN.LR_SCHEDULER.LR_FACTOR = 0.1 _C.TRAIN.LR_SCHEDULER.WARM_EPOCH = 5 # for 'warmup' _C.TRAIN.LR_SCHEDULER.COSINE_DECAY_END =", "# \"re_weight\"--re-weighting by the power of inverse class frequency at all train stage,", "contain a black boundary. 
_C.TRAIN.SAMPLER.BORDER_CROP_RATIO = 0.0 # the ratio of edge of", "_C.BACKBONE.DROP.BLOCK_PROB = 0.1 _C.BACKBONE.DROP.BLOCK_SIZE = 5 _C.BACKBONE.DROP.NR_STEPS = 50000 # dropout parameter to", "'SGD', 'ADAM', 'NADAM', 'RMSPROP' _C.TRAIN.OPTIMIZER.BASE_LR = 0.001 _C.TRAIN.OPTIMIZER.MOMENTUM = 0.9 _C.TRAIN.OPTIMIZER.WEIGHT_DECAY = 1e-4", "_C.LOSS.GHMC.BINS = 10 _C.LOSS.GHMC.MOMENTUM = 0.0 # For MWNLoss _C.LOSS.MWNL = CN() _C.LOSS.MWNL.GAMMA", "center crop _C.TRAIN.SAMPLER.MULTI_SCALE = CN() _C.TRAIN.SAMPLER.MULTI_SCALE.ENABLE = False # whether to perform multi", "a certain range of the central area (along the short side of the", "CN() _C.BACKBONE.TYPE = \"RegNetY_800MF\" # refer to lib/backbone/all_models.py _C.BACKBONE.BBN = False _C.BACKBONE.FREEZE =", "= 0 # An integer specifying how many pixels to crop at the", "[1.0, 1.0, 1.0, 1.0, 1.0] _C.LOSS.LOSS_TYPE = \"CrossEntropy\" # \"CrossEntropy\", \"LDAMLoss\", \"FocalLoss\", \"LOWLoss\",", "_C.LOSS.MWNL.SIGMOID = \"normal\" # \"normal\", \"enlarge\" # ----- TRAIN BUILDER ----- _C.TRAIN =", "for BBN sampler _C.TRAIN.SAMPLER.DUAL_SAMPLER = CN() _C.TRAIN.SAMPLER.DUAL_SAMPLER.TYPE = \"reversed\" # \"balance\", \"reverse\", \"uniform\"", "side of the input image _C.TRAIN.SAMPLER.COLOR_CONSTANCY = False _C.TRAIN.SAMPLER.CONSTANCY_POWER = 6.0 _C.TRAIN.SAMPLER.CONSTANCY_GAMMA =", "LOSS BUILDER ----- _C.LOSS = CN() _C.LOSS.WEIGHT_POWER = 1.1 _C.LOSS.EXTRA_WEIGHT = [1.0, 1.0,", "= True _C.TRAIN.NUM_WORKERS = 8 _C.TRAIN.TENSORBOARD = CN() _C.TRAIN.TENSORBOARD.ENABLE = True # -----", "0 # ----- BACKBONE BUILDER ----- _C.BACKBONE = CN() _C.BACKBONE.TYPE = \"RegNetY_800MF\" #", "# \"zero\", \"fix\", \"decrease\" _C.LOSS.MWNL.SIGMOID = \"normal\" # \"normal\", \"enlarge\" # ----- TRAIN", "probability parameter 'P' of Modified RandAugment (0.1 -- 0.9) _C.TRAIN.SAMPLER.AUGMENT.AUG_MAG = 10 #", "5 _C.BACKBONE.PRETRAINED = True _C.BACKBONE.PRETRAINED_MODEL = \"\" # if using drop block, below", "= [0.485, 0.456, 0.406] # A fixed set 
variance _C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_VAR = [0.229, 0.224,", "\"flip_y_-10\", \"flip_y_+20\"] _C.TRAIN.SAMPLER.FIX_MEAN_VAR = CN() _C.TRAIN.SAMPLER.FIX_MEAN_VAR.ENABLE = True # Normalize using the mean", "black boundary. _C.TRAIN.SAMPLER.BORDER_CROP_RATIO = 0.0 # the ratio of edge of the image", "True # whether the input image needs to be resized to a fix", "# the magnitude parameter 'M' of Modified RandAugment (1 -- 20) _C.TRAIN.SAMPLER.AUGMENT.AUG_LAYER_NUM =", "+ ( \"_BBN.\" if cfg.BACKBONE.BBN else \".\") + cfg.LOSS.LOSS_TYPE + cfg.NAME cfg.merge_from_list(['NAME', cfg_name])", "----- CLASSIFIER BUILDER ----- _C.CLASSIFIER = CN() _C.CLASSIFIER.TYPE = \"FC\" # \"FC\", \"FCNorm\"", "image if AUG_METHOD = 'rand' # for BBN sampler _C.TRAIN.SAMPLER.DUAL_SAMPLER = CN() _C.TRAIN.SAMPLER.DUAL_SAMPLER.TYPE", "for multi transformation of the center crop _C.TRAIN.SAMPLER.MULTI_SCALE = CN() _C.TRAIN.SAMPLER.MULTI_SCALE.ENABLE = False", "be subtracted from the mean, processing variance) _C.TRAIN.SAMPLER.FIX_MEAN_VAR.SET_MEAN = [0.485, 0.456, 0.406] #", "of the input image _C.TRAIN.SAMPLER.COLOR_CONSTANCY = False _C.TRAIN.SAMPLER.CONSTANCY_POWER = 6.0 _C.TRAIN.SAMPLER.CONSTANCY_GAMMA = 0.0", "_C.TRAIN.SAMPLER.MULTI_CROP.L_REGION = 1.0 # Only crop within a certain range of the central", "CN() _C.TEST.BATCH_SIZE = 128 # for every gpu _C.TEST.NUM_WORKERS = 8 _C.TEST.MODEL_FILE =", "# \"FC\", \"FCNorm\" _C.CLASSIFIER.BIAS = True # ----- LOSS BUILDER ----- _C.LOSS =", "weights of all classes are \"1.0\", # \"re_weight\"--re-weighting by the power of inverse", "strategy to set loss weight. 
# For drw scheduler _C.LOSS.DRW_EPOCH = 50 #", "and variance of each image, or using fixed values # A fixed set", "# refer to lib/backbone/all_models.py _C.BACKBONE.BBN = False _C.BACKBONE.FREEZE = False _C.BACKBONE.PRE_FREEZE = False", "----- _C.TRAIN = CN() _C.TRAIN.BATCH_SIZE = 32 # for every gpu _C.TRAIN.MAX_EPOCH =", "size _C.TRAIN.SAMPLER.IMAGE_RESIZE_SHORT = 450 # the need size of the short side of", "= \"normal\" # \"normal\", \"enlarge\" # ----- TRAIN BUILDER ----- _C.TRAIN = CN()", "_C.SHOW_STEP = 20 _C.PIN_MEMORY = True _C.INPUT_SIZE = (224, 224) # (h, w)", "= False _C.TRAIN.SAMPLER.CONSTANCY_POWER = 6.0 _C.TRAIN.SAMPLER.CONSTANCY_GAMMA = 0.0 # For Modified RandAugment _C.TRAIN.SAMPLER.AUGMENT", "crop _C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NUM = 12 # Number of scales to use during evaluation (must", "_C.TRAIN.SAMPLER.TYPE = \"default\" # \"default\", \"weighted sampler\", \"oversample\" _C.TRAIN.SAMPLER.IMAGE_TYPE = \"derm\" # \"derm\",", "_C.CLASSIFIER = CN() _C.CLASSIFIER.TYPE = \"FC\" # \"FC\", \"FCNorm\" _C.CLASSIFIER.BIAS = True #", "= 1.0 # Only crop within a certain range of the central area", "voting over the crop predictions (\"vote\", \"average\") # for multi transformation of the", "_C.TRAIN.SAMPLER.MULTI_SCALE.ENABLE = False # whether to perform multi transformation on the central crop", "= \"RGB\" _C.RESUME_MODEL = \"\" _C.RESUME_MODE = \"all\" _C.CPU_MODE = False _C.EVAL_MODE =", "_C.CLASSIFIER.TYPE = \"FC\" # \"FC\", \"FCNorm\" _C.CLASSIFIER.BIAS = True # ----- LOSS BUILDER", "the central area (along the short side of the image) _C.TRAIN.SAMPLER.MULTI_CROP.SCHEME = 'average'", "[\"scale_+00\", \"flip_x_+00\", \"rotate_90_+00\", \"rotate_270_+00\", \"scale_+10\", \"flip_x_+10\", \"rotate_90_+10\", \"rotate_270_+10\", \"scale_+20\", \"flip_x_+20\", \"rotate_90_+20\", \"rotate_270_+20\", \"scale_+30\",", "\"scale_-10\", \"flip_x_-10\", \"rotate_90_-10\", \"rotate_270_-10\", \"flip_y_+00\", \"flip_y_+10\", \"flip_y_-10\", \"flip_y_+20\"] 
_C.TRAIN.SAMPLER.FIX_MEAN_VAR = CN() _C.TRAIN.SAMPLER.FIX_MEAN_VAR.ENABLE =", "''' modify the cfg.NAME :param cfg: :return: ''' cfg.defrost() cfg_name = cfg.DATASET.DATASET +", "crops be order or random for evaluation _C.TRAIN.SAMPLER.MULTI_CROP.CROP_NUM = 16 # Number of", "BASIC SETTINGS ----- _C.NAME = \"default\" _C.OUTPUT_DIR = \"./output/derm_7pt\" _C.VALID_STEP = 5 _C.SAVE_STEP", "the second stage, # \"cls\"--cumulative learning strategy to set loss weight. # For", "_C.TRAIN.SAMPLER.AUGMENT.AUG_PROB = 0.7 # the probability parameter 'P' of Modified RandAugment (0.1 --", "_C.LOSS.GHMC = CN() _C.LOSS.GHMC.BINS = 10 _C.LOSS.GHMC.MOMENTUM = 0.0 # For MWNLoss _C.LOSS.MWNL", "\"RGB\" _C.RESUME_MODEL = \"\" _C.RESUME_MODE = \"all\" _C.CPU_MODE = False _C.EVAL_MODE = False", "= 450 # the need size of the short side of the input", "at the second stage, # \"cls\"--cumulative learning strategy to set loss weight. #", "CN() _C.TRAIN.SAMPLER.MULTI_SCALE.ENABLE = False # whether to perform multi transformation on the central", "to the length of SCALE_NAME) _C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NAME = [\"scale_+00\", \"flip_x_+00\", \"rotate_90_+00\", \"rotate_270_+00\", \"scale_+10\", \"flip_x_+10\",", "\"zero\", \"fix\", \"decrease\" _C.LOSS.MWNL.SIGMOID = \"normal\" # \"normal\", \"enlarge\" # ----- TRAIN BUILDER", "_C.TRAIN.SAMPLER.AUGMENT.AUG_LAYER_NUM = 1 # the number of transformations applied to a training image", "evaluation (must be N^2) _C.TRAIN.SAMPLER.MULTI_CROP.L_REGION = 1.0 # Only crop within a certain", "\"1.0\", # \"re_weight\"--re-weighting by the power of inverse class frequency at all train", "BUILDER ----- _C.CLASSIFIER = CN() _C.CLASSIFIER.TYPE = \"FC\" # \"FC\", \"FCNorm\" _C.CLASSIFIER.BIAS =", "# ----- SAMPLER BUILDER ----- _C.TRAIN.SAMPLER = CN() _C.TRAIN.SAMPLER.TYPE = \"default\" # \"default\",", "to a training image if AUG_METHOD = 'rand' # for BBN sampler _C.TRAIN.SAMPLER.DUAL_SAMPLER", "# \"balance\", \"reverse\" # for multi crop 
_C.TRAIN.SAMPLER.MULTI_CROP = CN() _C.TRAIN.SAMPLER.MULTI_CROP.ENABLE = False", "valid or test _C.TEST = CN() _C.TEST.BATCH_SIZE = 128 # for every gpu", "\"rotate_270_-10\", \"flip_y_+00\", \"flip_y_+10\", \"flip_y_-10\", \"flip_y_+20\"] _C.TRAIN.SAMPLER.FIX_MEAN_VAR = CN() _C.TRAIN.SAMPLER.FIX_MEAN_VAR.ENABLE = True # Normalize", "during evaluation (must be N^2) _C.TRAIN.SAMPLER.MULTI_CROP.L_REGION = 1.0 # Only crop within a", "50] # for 'multistep' _C.TRAIN.LR_SCHEDULER.LR_FACTOR = 0.1 _C.TRAIN.LR_SCHEDULER.WARM_EPOCH = 5 # for 'warmup'", "0 # An integer specifying how many pixels to crop at the image", "_C.GPUS = [0, 1] # ----- DATASET BUILDER ----- _C.DATASET = CN() _C.DATASET.DATASET", "# An integer specifying how many pixels to crop at the image border.", "as CN _C = CN() # ----- BASIC SETTINGS ----- _C.NAME = \"default\"", "on the central crop _C.TRAIN.SAMPLER.MULTI_SCALE.SCALE_NUM = 12 # Number of scales to use", "False # Should the crops be order or random for evaluation _C.TRAIN.SAMPLER.MULTI_CROP.CROP_NUM =", "CN() _C.TRAIN.SAMPLER.DUAL_SAMPLER.TYPE = \"reversed\" # \"balance\", \"reverse\", \"uniform\" # for other sampler _C.TRAIN.SAMPLER.WEIGHTED_SAMPLER", "each image, or using fixed values # A fixed set mean (input image", "loss weight. 
# For drw scheduler _C.LOSS.DRW_EPOCH = 50 # For cls scheduler", "= 5 _C.BACKBONE.PRETRAINED = True _C.BACKBONE.PRETRAINED_MODEL = \"\" # if using drop block,", "\"\" _C.DATASET.ROOT = \"\" _C.DATASET.DATA_TYPE = \"jpg\" _C.DATASET.TRAIN_JSON = \"\" _C.DATASET.VALID_JSON = \"\"", "BUILDER ----- _C.BACKBONE = CN() _C.BACKBONE.TYPE = \"RegNetY_800MF\" # refer to lib/backbone/all_models.py _C.BACKBONE.BBN", "0.1 _C.BACKBONE.DROP.BLOCK_SIZE = 5 _C.BACKBONE.DROP.NR_STEPS = 50000 # dropout parameter to the last", "_C.DATASET.DATASET = \"\" _C.DATASET.ROOT = \"\" _C.DATASET.DATA_TYPE = \"jpg\" _C.DATASET.TRAIN_JSON = \"\" _C.DATASET.VALID_JSON", "at all train stage, # \"drw\"--two-stage strategy using re-weighting at the second stage,", "_C.LOSS.GHMC.MOMENTUM = 0.0 # For MWNLoss _C.LOSS.MWNL = CN() _C.LOSS.MWNL.GAMMA = 2.0 _C.LOSS.MWNL.BETA", "central area (along the long side of the image) _C.TRAIN.SAMPLER.MULTI_CROP.S_REGION = 1.0 #", "False _C.TRAIN.SAMPLER.CONSTANCY_POWER = 6.0 _C.TRAIN.SAMPLER.CONSTANCY_GAMMA = 0.0 # For Modified RandAugment _C.TRAIN.SAMPLER.AUGMENT =" ]
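The `update_cfg_name` helper above builds an experiment name from the dataset, backbone, BBN flag, loss type, and existing `cfg.NAME`. A minimal standalone sketch of that naming scheme, without yacs — `build_cfg_name` and the sample values are illustrative, not part of the original module:

```python
def build_cfg_name(dataset, backbone, bbn, loss_type, name):
    # Mirrors update_cfg_name: dataset, backbone (with an optional "_BBN"
    # marker), then the loss type and the existing cfg.NAME.
    return dataset + "." + backbone + ("_BBN." if bbn else ".") + loss_type + name

# With BBN disabled the backbone is followed by a plain dot:
print(build_cfg_name("derm_7pt", "RegNetY_800MF", False, "MWNLoss", "default"))
# derm_7pt.RegNetY_800MF.MWNLossdefault
```

Note that `cfg.NAME` is concatenated with no separator, so the loss type and name run together unless `NAME` starts with a delimiter.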
# plaes/wirexfers
# -*- coding: utf-8 -*-
"""
    wirexfers - an online payment library
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    WireXfers is an online payments library, written in Python, providing
    a simple common API for various online payment protocols (IPizza,
    Solo/TUPAS).

    :copyright: (c) 2012-2014 <NAME>
    :license: ISC, see LICENSE for more details.
"""
__title__ = 'wirexfers'
__version__ = '2014.06-dev'
__author__ = '<NAME>'
__license__ = 'ISC'
__copyright__ = 'Copyright 2012-2014 Priit Laes'

from .request import PaymentInfo, PaymentRequest
from .response import PaymentResponse
# coding=utf-8
# Copyright 2021-Present The THUAlign Authors

import torch
import numpy as np

from .summary import scalar
from .misc import get_global_step


def print_grad(x, name="x"):
    if type(x) == torch.Tensor:
        x.register_hook(lambda x: print("Norm - {} {}:{}\n {}".format(
            name, list(x.shape), torch.norm(x), x)))
    elif type(x) == torch.nn.Module:
        pass  # TODO


def print_grad_norm(x, name="x", summary=True, verbose=True):
    if type(x) == torch.Tensor:
        if verbose:
            x.register_hook(lambda x: print("Norm - {} {}: {}".format(
                name, list(x.shape), torch.norm(x))))
        if summary:
            scalar('grad_norm/' + name + '/max', torch.max(x), get_global_step(), write_every_n_steps=1)
            scalar('grad_norm/' + name + '/min', torch.min(x), get_global_step(), write_every_n_steps=1)
            scalar('grad_norm/' + name + '/mean', torch.mean(x), get_global_step(), write_every_n_steps=1)
            scalar('grad_norm/' + name + '/normavg', torch.norm(x) / x.nelement(), get_global_step(), write_every_n_steps=1)
    elif type(x) == torch.nn.Module:
        pass  # TODO


def print_grad_max(x, name="x"):
    if type(x) == torch.Tensor:
        x.register_hook(lambda x: print("Grad max - {} {}:\n {}".format(
            name, list(x.shape), torch.max(x).item())))
    elif type(x) == torch.nn.Module:
        pass  # TODO


global_collection = {}
collection_on = False


def start_global_collection():
    global collection_on
    collection_on = True


def stop_global_collection():
    global collection_on
    collection_on = False


def add_global_collection(v, name="var"):
    global global_collection
    if not collection_on:
        return
    if name in global_collection:
        global_collection[name].append(v)
    else:
        global_collection[name] = [v]


def get_global_collection(name):
    global global_collection
    if name in global_collection:
        return global_collection[name]
    return None


def clear_global_collection():
    global global_collection
global collection_on collection_on", "name in global_collection: global_collection[name].append(v) else: global_collection[name] = [v] def get_global_collection(name): global global_collection if", "def start_global_collection(): global collection_on collection_on = True def stop_global_collection(): global collection_on collection_on =", "== torch.Tensor: x.register_hook(lambda x: print(\"Norm - {} {}:{}\\n {}\".format(name, list(x.shape), torch.norm(x), x))) elif", "if name in global_collection: return global_collection[name] return None def clear_global_collection(): global global_collection global_collection", "pass # TODO global_collection = {} collection_on = False def start_global_collection(): global collection_on", "elif type(x) == torch.nn.Module: pass # TODO global_collection = {} collection_on = False", "<filename>thualign/utils/hook.py # coding=utf-8 # Copyright 2021-Present The THUAlign Authors import torch import numpy", "coding=utf-8 # Copyright 2021-Present The THUAlign Authors import torch import numpy as np", "scalar('grad_norm/' + name + '/mean', torch.mean(x), get_global_step(), write_every_n_steps=1) scalar('grad_norm/' + name + '/normavg',", "write_every_n_steps=1) scalar('grad_norm/' + name + '/normavg', torch.norm(x)/x.nelement(), get_global_step(), write_every_n_steps=1) elif type(x) == torch.nn.Module:", "== torch.nn.Module: pass # TODO def print_grad_norm(x, name=\"x\", summary=True, verbose=True): if type(x) ==", "# TODO def print_grad_max(x, name=\"x\"): if type(x) == torch.Tensor: x.register_hook(lambda x: print(\"Grad max", "'/mean', torch.mean(x), get_global_step(), write_every_n_steps=1) scalar('grad_norm/' + name + '/normavg', torch.norm(x)/x.nelement(), get_global_step(), write_every_n_steps=1) elif", "torch.norm(x)))) if summary: scalar('grad_norm/' + name + '/max', torch.max(x), get_global_step(), write_every_n_steps=1) scalar('grad_norm/' +", "write_every_n_steps=1) scalar('grad_norm/' + name + '/min', torch.min(x), 
get_global_step(), write_every_n_steps=1) scalar('grad_norm/' + name +", "import torch import numpy as np from .summary import scalar from .misc import", "pass # TODO def print_grad_max(x, name=\"x\"): if type(x) == torch.Tensor: x.register_hook(lambda x: print(\"Grad", "numpy as np from .summary import scalar from .misc import get_global_step def print_grad(x,", "verbose=True): if type(x) == torch.Tensor: if verbose: x.register_hook(lambda x: print(\"Norm - {} {}:", "import numpy as np from .summary import scalar from .misc import get_global_step def", "name in global_collection: return global_collection[name] return None def clear_global_collection(): global global_collection global_collection =", "TODO def print_grad_norm(x, name=\"x\", summary=True, verbose=True): if type(x) == torch.Tensor: if verbose: x.register_hook(lambda", "x))) elif type(x) == torch.nn.Module: pass # TODO def print_grad_norm(x, name=\"x\", summary=True, verbose=True):", "# Copyright 2021-Present The THUAlign Authors import torch import numpy as np from", "def add_global_collection(v, name=\"var\"): global global_collection if not collection_on: return if name in global_collection:", "global_collection[name] = [v] def get_global_collection(name): global global_collection if name in global_collection: return global_collection[name]", "get_global_step(), write_every_n_steps=1) elif type(x) == torch.nn.Module: pass # TODO def print_grad_max(x, name=\"x\"): if", "import scalar from .misc import get_global_step def print_grad(x, name=\"x\"): if type(x) == torch.Tensor:", "torch.mean(x), get_global_step(), write_every_n_steps=1) scalar('grad_norm/' + name + '/normavg', torch.norm(x)/x.nelement(), get_global_step(), write_every_n_steps=1) elif type(x)", "verbose: x.register_hook(lambda x: print(\"Norm - {} {}: {}\".format(name, list(x.shape), torch.norm(x)))) if summary: scalar('grad_norm/'", "elif type(x) == torch.nn.Module: pass # TODO def print_grad_max(x, name=\"x\"): if type(x) ==", "import 
get_global_step def print_grad(x, name=\"x\"): if type(x) == torch.Tensor: x.register_hook(lambda x: print(\"Norm -", "= [v] def get_global_collection(name): global global_collection if name in global_collection: return global_collection[name] return", "get_global_collection(name): global global_collection if name in global_collection: return global_collection[name] return None def clear_global_collection():", "{}\".format(name, list(x.shape), torch.norm(x)))) if summary: scalar('grad_norm/' + name + '/max', torch.max(x), get_global_step(), write_every_n_steps=1)", "list(x.shape), torch.norm(x), x))) elif type(x) == torch.nn.Module: pass # TODO def print_grad_norm(x, name=\"x\",", "print_grad_max(x, name=\"x\"): if type(x) == torch.Tensor: x.register_hook(lambda x: print(\"Grad max - {} {}:\\n", "# TODO def print_grad_norm(x, name=\"x\", summary=True, verbose=True): if type(x) == torch.Tensor: if verbose:", "torch.nn.Module: pass # TODO def print_grad_max(x, name=\"x\"): if type(x) == torch.Tensor: x.register_hook(lambda x:", "collection_on: return if name in global_collection: global_collection[name].append(v) else: global_collection[name] = [v] def get_global_collection(name):", "get_global_step(), write_every_n_steps=1) scalar('grad_norm/' + name + '/normavg', torch.norm(x)/x.nelement(), get_global_step(), write_every_n_steps=1) elif type(x) ==", "type(x) == torch.nn.Module: pass # TODO def print_grad_max(x, name=\"x\"): if type(x) == torch.Tensor:", "max - {} {}:\\n {}\".format(name, list(x.shape), torch.max(x).item()))) elif type(x) == torch.nn.Module: pass #", "+ '/min', torch.min(x), get_global_step(), write_every_n_steps=1) scalar('grad_norm/' + name + '/mean', torch.mean(x), get_global_step(), write_every_n_steps=1)", "print_grad(x, name=\"x\"): if type(x) == torch.Tensor: x.register_hook(lambda x: print(\"Norm - {} {}:{}\\n {}\".format(name,", "+ name + '/max', torch.max(x), get_global_step(), write_every_n_steps=1) scalar('grad_norm/' + name + '/min', 
torch.min(x),", "list(x.shape), torch.max(x).item()))) elif type(x) == torch.nn.Module: pass # TODO global_collection = {} collection_on", "+ '/max', torch.max(x), get_global_step(), write_every_n_steps=1) scalar('grad_norm/' + name + '/min', torch.min(x), get_global_step(), write_every_n_steps=1)", "collection_on collection_on = False def add_global_collection(v, name=\"var\"): global global_collection if not collection_on: return", "name + '/mean', torch.mean(x), get_global_step(), write_every_n_steps=1) scalar('grad_norm/' + name + '/normavg', torch.norm(x)/x.nelement(), get_global_step(),", "== torch.nn.Module: pass # TODO def print_grad_max(x, name=\"x\"): if type(x) == torch.Tensor: x.register_hook(lambda", "= False def start_global_collection(): global collection_on collection_on = True def stop_global_collection(): global collection_on", "global global_collection if not collection_on: return if name in global_collection: global_collection[name].append(v) else: global_collection[name]", "{} collection_on = False def start_global_collection(): global collection_on collection_on = True def stop_global_collection():", "{} {}:{}\\n {}\".format(name, list(x.shape), torch.norm(x), x))) elif type(x) == torch.nn.Module: pass # TODO", "from .misc import get_global_step def print_grad(x, name=\"x\"): if type(x) == torch.Tensor: x.register_hook(lambda x:", "if type(x) == torch.Tensor: x.register_hook(lambda x: print(\"Norm - {} {}:{}\\n {}\".format(name, list(x.shape), torch.norm(x),", "{}:{}\\n {}\".format(name, list(x.shape), torch.norm(x), x))) elif type(x) == torch.nn.Module: pass # TODO def", "global global_collection if name in global_collection: return global_collection[name] return None def clear_global_collection(): global", "{}: {}\".format(name, list(x.shape), torch.norm(x)))) if summary: scalar('grad_norm/' + name + '/max', torch.max(x), get_global_step(),", "{} {}: {}\".format(name, list(x.shape), torch.norm(x)))) if summary: scalar('grad_norm/' + name + 
'/max', torch.max(x),", "Copyright 2021-Present The THUAlign Authors import torch import numpy as np from .summary", "name=\"var\"): global global_collection if not collection_on: return if name in global_collection: global_collection[name].append(v) else:", "x: print(\"Grad max - {} {}:\\n {}\".format(name, list(x.shape), torch.max(x).item()))) elif type(x) == torch.nn.Module:", "THUAlign Authors import torch import numpy as np from .summary import scalar from", "False def add_global_collection(v, name=\"var\"): global global_collection if not collection_on: return if name in", "name=\"x\"): if type(x) == torch.Tensor: x.register_hook(lambda x: print(\"Norm - {} {}:{}\\n {}\".format(name, list(x.shape),", "torch.norm(x), x))) elif type(x) == torch.nn.Module: pass # TODO def print_grad_norm(x, name=\"x\", summary=True,", "in global_collection: return global_collection[name] return None def clear_global_collection(): global global_collection global_collection = {}", "if verbose: x.register_hook(lambda x: print(\"Norm - {} {}: {}\".format(name, list(x.shape), torch.norm(x)))) if summary:", "+ '/normavg', torch.norm(x)/x.nelement(), get_global_step(), write_every_n_steps=1) elif type(x) == torch.nn.Module: pass # TODO def", "TODO def print_grad_max(x, name=\"x\"): if type(x) == torch.Tensor: x.register_hook(lambda x: print(\"Grad max -", "'/min', torch.min(x), get_global_step(), write_every_n_steps=1) scalar('grad_norm/' + name + '/mean', torch.mean(x), get_global_step(), write_every_n_steps=1) scalar('grad_norm/'", "return if name in global_collection: global_collection[name].append(v) else: global_collection[name] = [v] def get_global_collection(name): global", "{}:\\n {}\".format(name, list(x.shape), torch.max(x).item()))) elif type(x) == torch.nn.Module: pass # TODO global_collection =", "+ '/mean', torch.mean(x), get_global_step(), write_every_n_steps=1) scalar('grad_norm/' + name + '/normavg', torch.norm(x)/x.nelement(), get_global_step(), 
write_every_n_steps=1)", "global collection_on collection_on = False def add_global_collection(v, name=\"var\"): global global_collection if not collection_on:", "'/normavg', torch.norm(x)/x.nelement(), get_global_step(), write_every_n_steps=1) elif type(x) == torch.nn.Module: pass # TODO def print_grad_max(x,", "name=\"x\"): if type(x) == torch.Tensor: x.register_hook(lambda x: print(\"Grad max - {} {}:\\n {}\".format(name,", "{}\".format(name, list(x.shape), torch.norm(x), x))) elif type(x) == torch.nn.Module: pass # TODO def print_grad_norm(x,", "def stop_global_collection(): global collection_on collection_on = False def add_global_collection(v, name=\"var\"): global global_collection if", "- {} {}:{}\\n {}\".format(name, list(x.shape), torch.norm(x), x))) elif type(x) == torch.nn.Module: pass #", "# TODO global_collection = {} collection_on = False def start_global_collection(): global collection_on collection_on", "name + '/min', torch.min(x), get_global_step(), write_every_n_steps=1) scalar('grad_norm/' + name + '/mean', torch.mean(x), get_global_step(),", "x.register_hook(lambda x: print(\"Grad max - {} {}:\\n {}\".format(name, list(x.shape), torch.max(x).item()))) elif type(x) ==", "start_global_collection(): global collection_on collection_on = True def stop_global_collection(): global collection_on collection_on = False", ".misc import get_global_step def print_grad(x, name=\"x\"): if type(x) == torch.Tensor: x.register_hook(lambda x: print(\"Norm", "add_global_collection(v, name=\"var\"): global global_collection if not collection_on: return if name in global_collection: global_collection[name].append(v)", "collection_on = False def start_global_collection(): global collection_on collection_on = True def stop_global_collection(): global", "- {} {}: {}\".format(name, list(x.shape), torch.norm(x)))) if summary: scalar('grad_norm/' + name + '/max',", "type(x) == torch.nn.Module: pass # TODO global_collection = {} collection_on = False def", "torch.Tensor: 
x.register_hook(lambda x: print(\"Grad max - {} {}:\\n {}\".format(name, list(x.shape), torch.max(x).item()))) elif type(x)", "def print_grad(x, name=\"x\"): if type(x) == torch.Tensor: x.register_hook(lambda x: print(\"Norm - {} {}:{}\\n", "2021-Present The THUAlign Authors import torch import numpy as np from .summary import", "summary=True, verbose=True): if type(x) == torch.Tensor: if verbose: x.register_hook(lambda x: print(\"Norm - {}", "name + '/normavg', torch.norm(x)/x.nelement(), get_global_step(), write_every_n_steps=1) elif type(x) == torch.nn.Module: pass # TODO", "{}\".format(name, list(x.shape), torch.max(x).item()))) elif type(x) == torch.nn.Module: pass # TODO global_collection = {}", "= {} collection_on = False def start_global_collection(): global collection_on collection_on = True def", "collection_on collection_on = True def stop_global_collection(): global collection_on collection_on = False def add_global_collection(v,", "= True def stop_global_collection(): global collection_on collection_on = False def add_global_collection(v, name=\"var\"): global", "elif type(x) == torch.nn.Module: pass # TODO def print_grad_norm(x, name=\"x\", summary=True, verbose=True): if" ]
from typing import List
from tudelft_utilities_logging.Reporter import Reporter
from geniusweb.deadline.Deadline import Deadline
from geniusweb.protocol.session.SessionProtocol import SessionProtocol
from geniusweb.protocol.session.SessionSettings import SessionSettings
from geniusweb.protocol.session.TeamInfo import TeamInfo
from geniusweb.protocol.tournament.Team import Team
from geniusweb.references.PartyWithProfile import PartyWithProfile
from geniusweb.voting.VotingEvaluator import VotingEvaluator


class MOPACSettings(SessionSettings):
    '''
    Settings for MOPAC negotiation. In MOPAC, each party may get a "power"
    parameter containing a natural number >= 1.
    '''

    def __init__(self, participants: List[TeamInfo], deadline: Deadline,
                 votingevaluator: VotingEvaluator):
        '''
        @param participants the list of {@link PartyWithProfile} in clockwise
                order. There must be at least 2 to run the MOPAC protocol.
                This is not tested in the constructor because this can be
                initialized with less, for use in TournamentSettings.
        @param deadline the {@link Deadline} for the negotiation
        @param votingeval the {@link VotingEvaluator} to use.
        '''
        self._participants = participants
        self._deadline = deadline
        if participants == None or deadline == None or votingevaluator == None:
            raise ValueError(
                "participants, deadline and votingeval must be not none")
        self._votingevaluator = votingevaluator
        self._checkTeams()

    def getMaxRunTime(self) -> float:
        return self._deadline.getDuration() / 1000.

    def getProtocol(self, logger: Reporter) -> SessionProtocol:
        from geniusweb.protocol.session.mopac.MOPACState import MOPACState
        from geniusweb.protocol.session.mopac.MOPAC import MOPAC
        return MOPAC(MOPACState(None, [], None, self, {}), logger)

    def getTeams(self) -> List[TeamInfo]:
        return list(self._participants)

    def getParticipants(self) -> List[TeamInfo]:
        '''
        bit hacky, same as getTeams, for deserialization...
        '''
        return list(self._participants)

    def getDeadline(self) -> Deadline:
        '''
        @return the deadline for this negotiation
        '''
        return self._deadline

    def getAllParties(self) -> List[PartyWithProfile]:
        return [particip.getParties()[0] for particip in self._participants]

    def getVotingEvaluator(self) -> VotingEvaluator:
        '''
        @return a class that allows us to evaluate the voting results in
        different ways, selectable by the user.
        '''
        return self._votingevaluator

    def With(self, team: TeamInfo) -> "MOPACSettings":
        if team.getSize() != 1:
            raise ValueError(
                "Added party must have one party but got " + str(team))
        newparts: List[TeamInfo] = list(self._participants)
        newparts.append(team)
        return MOPACSettings(newparts, self._deadline, self._votingevaluator)

    def __repr__(self) -> str:
        return "MOPACSettings[" + str(self._participants) + "," + \
            str(self._deadline) + "," + \
            type(self._votingevaluator).__name__ + "]"

    def getTeamSize(self) -> int:
        return 1

    def __hash__(self):
        return hash((tuple(self._participants), self._deadline,
                     self._votingevaluator))

    def __eq__(self, other):
        return isinstance(other, self.__class__) \
            and self._participants == other._participants \
            and self._deadline == other._deadline \
            and self._votingevaluator == other._votingevaluator

    def _checkTeams(self):
        '''
        @throws IllegalArgumentException if teams have improper power settings.
        '''
        for team in self._participants:
            if team.getSize() != 1:
                raise ValueError(
                    "All teams must be size 1 but found " + str(team))
            party = team.getParties()[0]
            if 'power' in party.getParty().getParameters().getParameters():
                power = party.getParty().getParameters().get("power")
                if not isinstance(power, int):
                    raise ValueError("parameter 'power' for party" + str(party)
                                     + " must be integer but found " + str(power))
                if power < 1:
                    raise ValueError("parameter 'power' for party" + str(party)
                                     + " must be >=1 but found " + str(power))
in MOPAC, each party may", "return hash((tuple(self._participants), self._deadline, self._votingevaluator)) def __eq__(self, other): return isinstance(other, self.__class__)\\ and self._participants ==", "IllegalArgumentException if teams have improper power settings. ''' for team in self._participants: if", "deadline; if participants == None or deadline == None or votingevaluator == None:", "negotiation. in MOPAC, each party may get a \"power\" parameter containing an natural", "other._deadline \\ and self._votingevaluator == other._votingevaluator def _checkTeams(self): ''' @throws IllegalArgumentException if teams", "tested in the constructor because this can be initialized with less, for use", "def getAllParties(self)->List[PartyWithProfile] : return [ particip.getParties()[0] for particip in self._participants] def getVotingEvaluator(self)->VotingEvaluator :", "@param participants the list of {@link PartyWithProfile} in clockwise order. There must be", "== other._deadline \\ and self._votingevaluator == other._votingevaluator def _checkTeams(self): ''' @throws IllegalArgumentException if", "\" + str(power)) if power < 1: raise ValueError( \"parameter 'power' for party\"", "have improper power settings. ''' for team in self._participants: if team.getSize() != 1:", "to use. ''' self._participants = participants; self._deadline = deadline; if participants == None", ": from geniusweb.protocol.session.mopac.MOPACState import MOPACState from geniusweb.protocol.session.mopac.MOPAC import MOPAC return MOPAC(MOPACState(None, [], None,", "in the constructor because this can be initialized with less, for use in", "settings. ''' for team in self._participants: if team.getSize() != 1: raise ValueError(\"All teams", "improper power settings. 
''' for team in self._participants: if team.getSize() != 1: raise", "getVotingEvaluator(self)->VotingEvaluator : ''' @return a class that allows us to evaluate the voting", "or deadline == None or votingevaluator == None: raise ValueError( \"participants, deadline and", "use. ''' self._participants = participants; self._deadline = deadline; if participants == None or", "None, self, {}), logger) def getTeams(self ) -> List[TeamInfo] : return list(self._participants) def", "ValueError( \"Added party must have one party but got \" + str(team)) newparts:List[TeamInfo]", "different ways, selectable by the user. ''' return self._votingevaluator def With(self, team:TeamInfo )", "def getProtocol(self, logger:Reporter) -> SessionProtocol : from geniusweb.protocol.session.mopac.MOPACState import MOPACState from geniusweb.protocol.session.mopac.MOPAC import", "{@link Deadline} for the negotiation @param votingeval the {@link VotingEvaluator} to use. '''", "the voting results in different ways, selectable by the user. ''' return self._votingevaluator", "self._votingevaluator) def __repr__(self)->str: return \"MOPACSettings[\" + str(self._participants) + \",\" +\\ str(self._deadline) + \",\"", "teams must be size 1 but found \" + str(team)) party = team.getParties()[0]", "power < 1: raise ValueError( \"parameter 'power' for party\" + str(party) + \"", "List from tudelft_utilities_logging.Reporter import Reporter from geniusweb.deadline.Deadline import Deadline from geniusweb.protocol.session.SessionProtocol import SessionProtocol", "@param deadline the {@link Deadline} for the negotiation @param votingeval the {@link VotingEvaluator}", "MOPACSettings (SessionSettings): ''' Settings for MOPAC negotiation. 
in MOPAC, each party may get", "def __init__(self, participants:List[TeamInfo] , deadline:Deadline , votingevaluator:VotingEvaluator): ''' @param participants the list of", "from geniusweb.references.PartyWithProfile import PartyWithProfile from geniusweb.voting.VotingEvaluator import VotingEvaluator class MOPACSettings (SessionSettings): ''' Settings", "geniusweb.protocol.session.TeamInfo import TeamInfo from geniusweb.protocol.tournament.Team import Team from geniusweb.references.PartyWithProfile import PartyWithProfile from geniusweb.voting.VotingEvaluator", "\" must be integer but found \" + str(power)) if power < 1:", "getMaxRunTime(self)->float: return self._deadline.getDuration() / 1000. def getProtocol(self, logger:Reporter) -> SessionProtocol : from geniusweb.protocol.session.mopac.MOPACState", "list(self._participants) def getParticipants(self ) -> List[TeamInfo] : ''' bit hacky, same as getTeams,", "TournamentSettings. @param deadline the {@link Deadline} for the negotiation @param votingeval the {@link", "self._votingevaluator def With(self, team:TeamInfo ) -> \"MOPACSettings\" : if team.getSize() != 1: raise", "team.getSize() != 1: raise ValueError(\"All teams must be size 1 but found \"", "__init__(self, participants:List[TeamInfo] , deadline:Deadline , votingevaluator:VotingEvaluator): ''' @param participants the list of {@link", "1: raise ValueError( \"Added party must have one party but got \" +", "import SessionSettings from geniusweb.protocol.session.TeamInfo import TeamInfo from geniusweb.protocol.tournament.Team import Team from geniusweb.references.PartyWithProfile import", "order. There must be at least 2 to run the MOPAC protocol. This", "got \" + str(team)) newparts:List[TeamInfo] = list(self._participants) newparts.append(team) return MOPACSettings(newparts, self._deadline, self._votingevaluator) def", "geniusweb.voting.VotingEvaluator import VotingEvaluator class MOPACSettings (SessionSettings): ''' Settings for MOPAC negotiation. 
in MOPAC,", "None: raise ValueError( \"participants, deadline and votingeval must be not none\") self._votingevaluator =", "raise ValueError( \"Added party must have one party but got \" + str(team))", "+ \",\" + \\ type(self._votingevaluator).__name__ + \"]\"; def getTeamSize(self)->int: return 1; def __hash__(self):", "-> \"MOPACSettings\" : if team.getSize() != 1: raise ValueError( \"Added party must have", "from typing import List from tudelft_utilities_logging.Reporter import Reporter from geniusweb.deadline.Deadline import Deadline from", "def getParticipants(self ) -> List[TeamInfo] : ''' bit hacky, same as getTeams, for", "votingevaluator == None: raise ValueError( \"participants, deadline and votingeval must be not none\")", "''' def __init__(self, participants:List[TeamInfo] , deadline:Deadline , votingevaluator:VotingEvaluator): ''' @param participants the list", "less, for use in TournamentSettings. @param deadline the {@link Deadline} for the negotiation", ": ''' @return a class that allows us to evaluate the voting results", "def getDeadline(self)-> Deadline : ''' @return the deadline for this negotiation ''' return", "to run the MOPAC protocol. This is not tested in the constructor because", "clockwise order. 
There must be at least 2 to run the MOPAC protocol.", "must have one party but got \" + str(team)) newparts:List[TeamInfo] = list(self._participants) newparts.append(team)", "'power' for party\" + str(party) + \" must be >=1 but found \"", "raise ValueError(\"All teams must be size 1 but found \" + str(team)) party", "[ particip.getParties()[0] for particip in self._participants] def getVotingEvaluator(self)->VotingEvaluator : ''' @return a class", "and self._votingevaluator == other._votingevaluator def _checkTeams(self): ''' @throws IllegalArgumentException if teams have improper", "TeamInfo from geniusweb.protocol.tournament.Team import Team from geniusweb.references.PartyWithProfile import PartyWithProfile from geniusweb.voting.VotingEvaluator import VotingEvaluator", "__repr__(self)->str: return \"MOPACSettings[\" + str(self._participants) + \",\" +\\ str(self._deadline) + \",\" + \\", "__hash__(self): return hash((tuple(self._participants), self._deadline, self._votingevaluator)) def __eq__(self, other): return isinstance(other, self.__class__)\\ and self._participants", "votingevaluator self._checkTeams(); def getMaxRunTime(self)->float: return self._deadline.getDuration() / 1000. def getProtocol(self, logger:Reporter) -> SessionProtocol", "the user. 
''' return self._votingevaluator def With(self, team:TeamInfo ) -> \"MOPACSettings\" : if", ": ''' @return the deadline for this negotiation ''' return self._deadline def getAllParties(self)->List[PartyWithProfile]", "deadline and votingeval must be not none\") self._votingevaluator = votingevaluator self._checkTeams(); def getMaxRunTime(self)->float:", "from tudelft_utilities_logging.Reporter import Reporter from geniusweb.deadline.Deadline import Deadline from geniusweb.protocol.session.SessionProtocol import SessionProtocol from", "import Reporter from geniusweb.deadline.Deadline import Deadline from geniusweb.protocol.session.SessionProtocol import SessionProtocol from geniusweb.protocol.session.SessionSettings import", "+ \\ type(self._votingevaluator).__name__ + \"]\"; def getTeamSize(self)->int: return 1; def __hash__(self): return hash((tuple(self._participants),", "deadline the {@link Deadline} for the negotiation @param votingeval the {@link VotingEvaluator} to", "but found \" + str(team)) party = team.getParties()[0] if 'power' in party.getParty().getParameters().getParameters(): power", "and self._deadline == other._deadline \\ and self._votingevaluator == other._votingevaluator def _checkTeams(self): ''' @throws", "participants the list of {@link PartyWithProfile} in clockwise order. There must be at", "return self._votingevaluator def With(self, team:TeamInfo ) -> \"MOPACSettings\" : if team.getSize() != 1:", "''' bit hacky, same as getTeams, for deserialization... ''' return list(self._participants) def getDeadline(self)->", "this can be initialized with less, for use in TournamentSettings. @param deadline the", "one party but got \" + str(team)) newparts:List[TeamInfo] = list(self._participants) newparts.append(team) return MOPACSettings(newparts,", "= votingevaluator self._checkTeams(); def getMaxRunTime(self)->float: return self._deadline.getDuration() / 1000. 
def getProtocol(self, logger:Reporter) ->", "party\" + str(party) + \" must be integer but found \" + str(power))", "def With(self, team:TeamInfo ) -> \"MOPACSettings\" : if team.getSize() != 1: raise ValueError(", "''' return self._deadline def getAllParties(self)->List[PartyWithProfile] : return [ particip.getParties()[0] for particip in self._participants]", "deadline for this negotiation ''' return self._deadline def getAllParties(self)->List[PartyWithProfile] : return [ particip.getParties()[0]", "= list(self._participants) newparts.append(team) return MOPACSettings(newparts, self._deadline, self._votingevaluator) def __repr__(self)->str: return \"MOPACSettings[\" + str(self._participants)", "None or votingevaluator == None: raise ValueError( \"participants, deadline and votingeval must be", "Deadline from geniusweb.protocol.session.SessionProtocol import SessionProtocol from geniusweb.protocol.session.SessionSettings import SessionSettings from geniusweb.protocol.session.TeamInfo import TeamInfo", "\\ type(self._votingevaluator).__name__ + \"]\"; def getTeamSize(self)->int: return 1; def __hash__(self): return hash((tuple(self._participants), self._deadline,", "number &le;1. ''' def __init__(self, participants:List[TeamInfo] , deadline:Deadline , votingevaluator:VotingEvaluator): ''' @param participants", "if teams have improper power settings. ''' for team in self._participants: if team.getSize()", "the MOPAC protocol. This is not tested in the constructor because this can", "geniusweb.deadline.Deadline import Deadline from geniusweb.protocol.session.SessionProtocol import SessionProtocol from geniusweb.protocol.session.SessionSettings import SessionSettings from geniusweb.protocol.session.TeamInfo", "\"participants, deadline and votingeval must be not none\") self._votingevaluator = votingevaluator self._checkTeams(); def", "votingeval the {@link VotingEvaluator} to use. 
''' self._participants = participants; self._deadline = deadline;", "other): return isinstance(other, self.__class__)\\ and self._participants == other._participants \\ and self._deadline == other._deadline", "self.__class__)\\ and self._participants == other._participants \\ and self._deadline == other._deadline \\ and self._votingevaluator", "if team.getSize() != 1: raise ValueError(\"All teams must be size 1 but found", ", deadline:Deadline , votingevaluator:VotingEvaluator): ''' @param participants the list of {@link PartyWithProfile} in", "results in different ways, selectable by the user. ''' return self._votingevaluator def With(self,", "isinstance(other, self.__class__)\\ and self._participants == other._participants \\ and self._deadline == other._deadline \\ and", "def _checkTeams(self): ''' @throws IllegalArgumentException if teams have improper power settings. ''' for", "particip in self._participants] def getVotingEvaluator(self)->VotingEvaluator : ''' @return a class that allows us", "+ \"]\"; def getTeamSize(self)->int: return 1; def __hash__(self): return hash((tuple(self._participants), self._deadline, self._votingevaluator)) def", "be integer but found \" + str(power)) if power < 1: raise ValueError(", "-> List[TeamInfo] : ''' bit hacky, same as getTeams, for deserialization... ''' return", "if power < 1: raise ValueError( \"parameter 'power' for party\" + str(party) +", "getDeadline(self)-> Deadline : ''' @return the deadline for this negotiation ''' return self._deadline", "self._checkTeams(); def getMaxRunTime(self)->float: return self._deadline.getDuration() / 1000. def getProtocol(self, logger:Reporter) -> SessionProtocol :", "for the negotiation @param votingeval the {@link VotingEvaluator} to use. ''' self._participants =", "each party may get a \"power\" parameter containing an natural number &le;1. 
'''", "for team in self._participants: if team.getSize() != 1: raise ValueError(\"All teams must be", "''' @return the deadline for this negotiation ''' return self._deadline def getAllParties(self)->List[PartyWithProfile] :", "'power' for party\" + str(party) + \" must be integer but found \"", "{@link PartyWithProfile} in clockwise order. There must be at least 2 to run", "constructor because this can be initialized with less, for use in TournamentSettings. @param", "int): raise ValueError( \"parameter 'power' for party\" + str(party) + \" must be", "geniusweb.protocol.session.mopac.MOPAC import MOPAC return MOPAC(MOPACState(None, [], None, self, {}), logger) def getTeams(self )", "getProtocol(self, logger:Reporter) -> SessionProtocol : from geniusweb.protocol.session.mopac.MOPACState import MOPACState from geniusweb.protocol.session.mopac.MOPAC import MOPAC", "Deadline} for the negotiation @param votingeval the {@link VotingEvaluator} to use. ''' self._participants", "but found \" + str(power)) if power < 1: raise ValueError( \"parameter 'power'", "-> List[TeamInfo] : return list(self._participants) def getParticipants(self ) -> List[TeamInfo] : ''' bit", "+ \",\" +\\ str(self._deadline) + \",\" + \\ type(self._votingevaluator).__name__ + \"]\"; def getTeamSize(self)->int:", "that allows us to evaluate the voting results in different ways, selectable by", "natural number &le;1. 
''' def __init__(self, participants:List[TeamInfo] , deadline:Deadline , votingevaluator:VotingEvaluator): ''' @param", "''' return self._votingevaluator def With(self, team:TeamInfo ) -> \"MOPACSettings\" : if team.getSize() !=", ": if team.getSize() != 1: raise ValueError( \"Added party must have one party", "\" + str(team)) newparts:List[TeamInfo] = list(self._participants) newparts.append(team) return MOPACSettings(newparts, self._deadline, self._votingevaluator) def __repr__(self)->str:", "return \"MOPACSettings[\" + str(self._participants) + \",\" +\\ str(self._deadline) + \",\" + \\ type(self._votingevaluator).__name__", "by the user. ''' return self._votingevaluator def With(self, team:TeamInfo ) -> \"MOPACSettings\" :", "not isinstance(power, int): raise ValueError( \"parameter 'power' for party\" + str(party) + \"", ": return [ particip.getParties()[0] for particip in self._participants] def getVotingEvaluator(self)->VotingEvaluator : ''' @return", "for party\" + str(party) + \" must be integer but found \" +", "SessionSettings from geniusweb.protocol.session.TeamInfo import TeamInfo from geniusweb.protocol.tournament.Team import Team from geniusweb.references.PartyWithProfile import PartyWithProfile", "2 to run the MOPAC protocol. This is not tested in the constructor", "/ 1000. def getProtocol(self, logger:Reporter) -> SessionProtocol : from geniusweb.protocol.session.mopac.MOPACState import MOPACState from", "ValueError(\"All teams must be size 1 but found \" + str(team)) party =", "(SessionSettings): ''' Settings for MOPAC negotiation. in MOPAC, each party may get a", "participants == None or deadline == None or votingevaluator == None: raise ValueError(", "bit hacky, same as getTeams, for deserialization... 
''' return list(self._participants) def getDeadline(self)-> Deadline", "list(self._participants) def getDeadline(self)-> Deadline : ''' @return the deadline for this negotiation '''", ": return list(self._participants) def getParticipants(self ) -> List[TeamInfo] : ''' bit hacky, same", "str(team)) newparts:List[TeamInfo] = list(self._participants) newparts.append(team) return MOPACSettings(newparts, self._deadline, self._votingevaluator) def __repr__(self)->str: return \"MOPACSettings[\"", "be not none\") self._votingevaluator = votingevaluator self._checkTeams(); def getMaxRunTime(self)->float: return self._deadline.getDuration() / 1000.", "evaluate the voting results in different ways, selectable by the user. ''' return", "power = party.getParty().getParameters().get(\"power\") if not isinstance(power, int): raise ValueError( \"parameter 'power' for party\"", "1000. def getProtocol(self, logger:Reporter) -> SessionProtocol : from geniusweb.protocol.session.mopac.MOPACState import MOPACState from geniusweb.protocol.session.mopac.MOPAC", "not none\") self._votingevaluator = votingevaluator self._checkTeams(); def getMaxRunTime(self)->float: return self._deadline.getDuration() / 1000. def", "size 1 but found \" + str(team)) party = team.getParties()[0] if 'power' in", "participants:List[TeamInfo] , deadline:Deadline , votingevaluator:VotingEvaluator): ''' @param participants the list of {@link PartyWithProfile}", "team:TeamInfo ) -> \"MOPACSettings\" : if team.getSize() != 1: raise ValueError( \"Added party", "\"power\" parameter containing an natural number &le;1. ''' def __init__(self, participants:List[TeamInfo] , deadline:Deadline", "with less, for use in TournamentSettings. 
@param deadline the {@link Deadline} for the", "import SessionProtocol from geniusweb.protocol.session.SessionSettings import SessionSettings from geniusweb.protocol.session.TeamInfo import TeamInfo from geniusweb.protocol.tournament.Team import", "in self._participants: if team.getSize() != 1: raise ValueError(\"All teams must be size 1", "if team.getSize() != 1: raise ValueError( \"Added party must have one party but", "tudelft_utilities_logging.Reporter import Reporter from geniusweb.deadline.Deadline import Deadline from geniusweb.protocol.session.SessionProtocol import SessionProtocol from geniusweb.protocol.session.SessionSettings", "+ \" must be integer but found \" + str(power)) if power <", "because this can be initialized with less, for use in TournamentSettings. @param deadline", "be size 1 but found \" + str(team)) party = team.getParties()[0] if 'power'", "raise ValueError( \"parameter 'power' for party\" + str(party) + \" must be integer", "== None or votingevaluator == None: raise ValueError( \"participants, deadline and votingeval must", "MOPAC protocol. This is not tested in the constructor because this can be", "in party.getParty().getParameters().getParameters(): power = party.getParty().getParameters().get(\"power\") if not isinstance(power, int): raise ValueError( \"parameter 'power'", "not tested in the constructor because this can be initialized with less, for", "VotingEvaluator} to use. ''' self._participants = participants; self._deadline = deadline; if participants ==", "from geniusweb.protocol.session.mopac.MOPACState import MOPACState from geniusweb.protocol.session.mopac.MOPAC import MOPAC return MOPAC(MOPACState(None, [], None, self,", "the list of {@link PartyWithProfile} in clockwise order. There must be at least", "hacky, same as getTeams, for deserialization... 
''' return list(self._participants) def getDeadline(self)-> Deadline :", "self._votingevaluator)) def __eq__(self, other): return isinstance(other, self.__class__)\\ and self._participants == other._participants \\ and", "from geniusweb.protocol.tournament.Team import Team from geniusweb.references.PartyWithProfile import PartyWithProfile from geniusweb.voting.VotingEvaluator import VotingEvaluator class", "from geniusweb.voting.VotingEvaluator import VotingEvaluator class MOPACSettings (SessionSettings): ''' Settings for MOPAC negotiation. in", "== other._votingevaluator def _checkTeams(self): ''' @throws IllegalArgumentException if teams have improper power settings.", "from geniusweb.protocol.session.mopac.MOPAC import MOPAC return MOPAC(MOPACState(None, [], None, self, {}), logger) def getTeams(self", "protocol. This is not tested in the constructor because this can be initialized", "party but got \" + str(team)) newparts:List[TeamInfo] = list(self._participants) newparts.append(team) return MOPACSettings(newparts, self._deadline,", "must be not none\") self._votingevaluator = votingevaluator self._checkTeams(); def getMaxRunTime(self)->float: return self._deadline.getDuration() /", "the {@link VotingEvaluator} to use. ''' self._participants = participants; self._deadline = deadline; if", "str(power)) if power < 1: raise ValueError( \"parameter 'power' for party\" + str(party)", "of {@link PartyWithProfile} in clockwise order. 
There must be at least 2 to", "return MOPACSettings(newparts, self._deadline, self._votingevaluator) def __repr__(self)->str: return \"MOPACSettings[\" + str(self._participants) + \",\" +\\", "import List from tudelft_utilities_logging.Reporter import Reporter from geniusweb.deadline.Deadline import Deadline from geniusweb.protocol.session.SessionProtocol import", "getTeamSize(self)->int: return 1; def __hash__(self): return hash((tuple(self._participants), self._deadline, self._votingevaluator)) def __eq__(self, other): return", "''' @param participants the list of {@link PartyWithProfile} in clockwise order. There must", "deserialization... ''' return list(self._participants) def getDeadline(self)-> Deadline : ''' @return the deadline for", "the {@link Deadline} for the negotiation @param votingeval the {@link VotingEvaluator} to use.", "from geniusweb.protocol.session.TeamInfo import TeamInfo from geniusweb.protocol.tournament.Team import Team from geniusweb.references.PartyWithProfile import PartyWithProfile from", "raise ValueError( \"participants, deadline and votingeval must be not none\") self._votingevaluator = votingevaluator", "''' for team in self._participants: if team.getSize() != 1: raise ValueError(\"All teams must", "MOPAC return MOPAC(MOPACState(None, [], None, self, {}), logger) def getTeams(self ) -> List[TeamInfo]", "particip.getParties()[0] for particip in self._participants] def getVotingEvaluator(self)->VotingEvaluator : ''' @return a class that", "+ str(self._participants) + \",\" +\\ str(self._deadline) + \",\" + \\ type(self._votingevaluator).__name__ + \"]\";", "raise ValueError( \"parameter 'power' for party\" + str(party) + \" must be >=1", "+ str(team)) newparts:List[TeamInfo] = list(self._participants) newparts.append(team) return MOPACSettings(newparts, self._deadline, self._votingevaluator) def __repr__(self)->str: return", "self._participants] def getVotingEvaluator(self)->VotingEvaluator : ''' @return a class that allows us to 
evaluate", "and self._participants == other._participants \\ and self._deadline == other._deadline \\ and self._votingevaluator ==", "@param votingeval the {@link VotingEvaluator} to use. ''' self._participants = participants; self._deadline =", "import Deadline from geniusweb.protocol.session.SessionProtocol import SessionProtocol from geniusweb.protocol.session.SessionSettings import SessionSettings from geniusweb.protocol.session.TeamInfo import", "def __eq__(self, other): return isinstance(other, self.__class__)\\ and self._participants == other._participants \\ and self._deadline", "getParticipants(self ) -> List[TeamInfo] : ''' bit hacky, same as getTeams, for deserialization...", "power settings. ''' for team in self._participants: if team.getSize() != 1: raise ValueError(\"All", "== other._participants \\ and self._deadline == other._deadline \\ and self._votingevaluator == other._votingevaluator def", "self._participants = participants; self._deadline = deadline; if participants == None or deadline ==", "selectable by the user. ''' return self._votingevaluator def With(self, team:TeamInfo ) -> \"MOPACSettings\"", "list of {@link PartyWithProfile} in clockwise order. There must be at least 2", "the deadline for this negotiation ''' return self._deadline def getAllParties(self)->List[PartyWithProfile] : return [", "must be integer but found \" + str(power)) if power < 1: raise", "MOPAC(MOPACState(None, [], None, self, {}), logger) def getTeams(self ) -> List[TeamInfo] : return", "+ str(team)) party = team.getParties()[0] if 'power' in party.getParty().getParameters().getParameters(): power = party.getParty().getParameters().get(\"power\") if", "self._deadline.getDuration() / 1000. def getProtocol(self, logger:Reporter) -> SessionProtocol : from geniusweb.protocol.session.mopac.MOPACState import MOPACState", "in clockwise order. 
There must be at least 2 to run the MOPAC", "class that allows us to evaluate the voting results in different ways, selectable", "SessionProtocol from geniusweb.protocol.session.SessionSettings import SessionSettings from geniusweb.protocol.session.TeamInfo import TeamInfo from geniusweb.protocol.tournament.Team import Team", "PartyWithProfile from geniusweb.voting.VotingEvaluator import VotingEvaluator class MOPACSettings (SessionSettings): ''' Settings for MOPAC negotiation.", "getTeams, for deserialization... ''' return list(self._participants) def getDeadline(self)-> Deadline : ''' @return the", "isinstance(power, int): raise ValueError( \"parameter 'power' for party\" + str(party) + \" must", "+ str(party) + \" must be integer but found \" + str(power)) if", "for use in TournamentSettings. @param deadline the {@link Deadline} for the negotiation @param", "1; def __hash__(self): return hash((tuple(self._participants), self._deadline, self._votingevaluator)) def __eq__(self, other): return isinstance(other, self.__class__)\\", "in TournamentSettings. @param deadline the {@link Deadline} for the negotiation @param votingeval the", "team.getSize() != 1: raise ValueError( \"Added party must have one party but got", "get a \"power\" parameter containing an natural number &le;1. 
''' def __init__(self, participants:List[TeamInfo]", "__eq__(self, other): return isinstance(other, self.__class__)\\ and self._participants == other._participants \\ and self._deadline ==", "self._votingevaluator == other._votingevaluator def _checkTeams(self): ''' @throws IllegalArgumentException if teams have improper power", "party.getParty().getParameters().get(\"power\") if not isinstance(power, int): raise ValueError( \"parameter 'power' for party\" + str(party)", "getAllParties(self)->List[PartyWithProfile] : return [ particip.getParties()[0] for particip in self._participants] def getVotingEvaluator(self)->VotingEvaluator : '''", "\",\" +\\ str(self._deadline) + \",\" + \\ type(self._votingevaluator).__name__ + \"]\"; def getTeamSize(self)->int: return", "party\" + str(party) + \" must be >=1 but found \" + str(power))", "''' Settings for MOPAC negotiation. in MOPAC, each party may get a \"power\"", "team in self._participants: if team.getSize() != 1: raise ValueError(\"All teams must be size", "def __repr__(self)->str: return \"MOPACSettings[\" + str(self._participants) + \",\" +\\ str(self._deadline) + \",\" +", "list(self._participants) newparts.append(team) return MOPACSettings(newparts, self._deadline, self._votingevaluator) def __repr__(self)->str: return \"MOPACSettings[\" + str(self._participants) +", "logger:Reporter) -> SessionProtocol : from geniusweb.protocol.session.mopac.MOPACState import MOPACState from geniusweb.protocol.session.mopac.MOPAC import MOPAC return", "party.getParty().getParameters().getParameters(): power = party.getParty().getParameters().get(\"power\") if not isinstance(power, int): raise ValueError( \"parameter 'power' for", "== None: raise ValueError( \"participants, deadline and votingeval must be not none\") self._votingevaluator", "return list(self._participants) def getParticipants(self ) -> List[TeamInfo] : ''' bit hacky, same as", "an natural number &le;1. 
''' def __init__(self, participants:List[TeamInfo] , deadline:Deadline , votingevaluator:VotingEvaluator): '''", "hash((tuple(self._participants), self._deadline, self._votingevaluator)) def __eq__(self, other): return isinstance(other, self.__class__)\\ and self._participants == other._participants", "self._votingevaluator = votingevaluator self._checkTeams(); def getMaxRunTime(self)->float: return self._deadline.getDuration() / 1000. def getProtocol(self, logger:Reporter)", "def __hash__(self): return hash((tuple(self._participants), self._deadline, self._votingevaluator)) def __eq__(self, other): return isinstance(other, self.__class__)\\ and", "\" + str(team)) party = team.getParties()[0] if 'power' in party.getParty().getParameters().getParameters(): power = party.getParty().getParameters().get(\"power\")", "1: raise ValueError(\"All teams must be size 1 but found \" + str(team))" ]
[ "global numeric_columns global non_numeric_columns try: st.write(df) numeric_columns = list(df.select_dtypes(['float', 'int']).columns) non_numeric_columns = list(df.select_dtypes(['object']).columns)", "except Exception as e: print(e) st.write(\"Please upload file to the application.\") # add", "'Scatterplots': st.sidebar.subheader(\"Scatterplot Settings\") try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns) y_values = st.sidebar.selectbox('Y axis',", "chart st.plotly_chart(plot) except Exception as e: print(e) if chart_select == 'Lineplots': st.sidebar.subheader(\"Line Plot", "as st import plotly_express as px import pandas as pd # configuration st.set_option('deprecation.showfileUploaderEncoding',", "numeric_columns global non_numeric_columns try: st.write(df) numeric_columns = list(df.select_dtypes(['float', 'int']).columns) non_numeric_columns = list(df.select_dtypes(['object']).columns) non_numeric_columns.append(None)", "list(df.select_dtypes(['float', 'int']).columns) non_numeric_columns = list(df.select_dtypes(['object']).columns) non_numeric_columns.append(None) print(non_numeric_columns) except Exception as e: print(e) st.write(\"Please", "st.sidebar.file_uploader( label=\"Upload your CSV or Excel file. 
(200MB max)\", type=['csv', 'xlsx']) global df", "y=y_values, color=color_value) st.plotly_chart(plot) except Exception as e: print(e) if chart_select == 'Histogram': st.sidebar.subheader(\"Histogram", "chart_select == 'Histogram': st.sidebar.subheader(\"Histogram Settings\") try: x = st.sidebar.selectbox('Feature', options=numeric_columns) bin_size = st.sidebar.slider(\"Number", "st import plotly_express as px import pandas as pd # configuration st.set_option('deprecation.showfileUploaderEncoding', False)", "sidebar st.sidebar.subheader(\"Visualization Settings\") # Setup file upload uploaded_file = st.sidebar.file_uploader( label=\"Upload your CSV", "widget to the side bar chart_select = st.sidebar.selectbox( label=\"Select the chart type\", options=['Scatterplots',", "# title of the app st.title(\"Data Visualization App\") # Add a sidebar st.sidebar.subheader(\"Visualization", "chart type\", options=['Scatterplots', 'Lineplots', 'Histogram', 'Boxplot'] ) if chart_select == 'Scatterplots': st.sidebar.subheader(\"Scatterplot Settings\")", "st.sidebar.selectbox('Y axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.scatter(data_frame=df, x=x_values, y=y_values, color=color_value)", "= st.sidebar.selectbox('Feature', options=numeric_columns) bin_size = st.sidebar.slider(\"Number of Bins\", min_value=10, max_value=100, value=40) color_value =", "application.\") # add a select widget to the side bar chart_select = st.sidebar.selectbox(", "print(e) df = pd.read_excel(uploaded_file) global numeric_columns global non_numeric_columns try: st.write(df) numeric_columns = list(df.select_dtypes(['float',", "bar chart_select = st.sidebar.selectbox( label=\"Select the chart type\", options=['Scatterplots', 'Lineplots', 'Histogram', 'Boxplot'] )", "Excel file. 
(200MB max)\", type=['csv', 'xlsx']) global df if uploaded_file is not None:", "'int']).columns) non_numeric_columns = list(df.select_dtypes(['object']).columns) non_numeric_columns.append(None) print(non_numeric_columns) except Exception as e: print(e) st.write(\"Please upload", "# Setup file upload uploaded_file = st.sidebar.file_uploader( label=\"Upload your CSV or Excel file.", "y = st.sidebar.selectbox(\"Y axis\", options=numeric_columns) x = st.sidebar.selectbox(\"X axis\", options=non_numeric_columns) color_value = st.sidebar.selectbox(\"Color\",", "title of the app st.title(\"Data Visualization App\") # Add a sidebar st.sidebar.subheader(\"Visualization Settings\")", "px.scatter(data_frame=df, x=x_values, y=y_values, color=color_value) # display the chart st.plotly_chart(plot) except Exception as e:", "options=['Scatterplots', 'Lineplots', 'Histogram', 'Boxplot'] ) if chart_select == 'Scatterplots': st.sidebar.subheader(\"Scatterplot Settings\") try: x_values", "st.sidebar.subheader(\"Boxplot Settings\") try: y = st.sidebar.selectbox(\"Y axis\", options=numeric_columns) x = st.sidebar.selectbox(\"X axis\", options=non_numeric_columns)", "# configuration st.set_option('deprecation.showfileUploaderEncoding', False) # title of the app st.title(\"Data Visualization App\") #", "to the side bar chart_select = st.sidebar.selectbox( label=\"Select the chart type\", options=['Scatterplots', 'Lineplots',", "as e: print(e) df = pd.read_excel(uploaded_file) global numeric_columns global non_numeric_columns try: st.write(df) numeric_columns", "False) # title of the app st.title(\"Data Visualization App\") # Add a sidebar", "'Boxplot'] ) if chart_select == 'Scatterplots': st.sidebar.subheader(\"Scatterplot Settings\") try: x_values = st.sidebar.selectbox('X axis',", "st.sidebar.selectbox('Y axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.line(data_frame=df, x=x_values, y=y_values, color=color_value)", 
"configuration st.set_option('deprecation.showfileUploaderEncoding', False) # title of the app st.title(\"Data Visualization App\") # Add", "= px.line(data_frame=df, x=x_values, y=y_values, color=color_value) st.plotly_chart(plot) except Exception as e: print(e) if chart_select", "st.sidebar.selectbox('X axis', options=numeric_columns) y_values = st.sidebar.selectbox('Y axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot", "st.sidebar.slider(\"Number of Bins\", min_value=10, max_value=100, value=40) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.histogram(x=x,", "plot = px.histogram(x=x, data_frame=df, color=color_value) st.plotly_chart(plot) except Exception as e: print(e) if chart_select", "color=color_value) st.plotly_chart(plot) except Exception as e: print(e) if chart_select == 'Boxplot': st.sidebar.subheader(\"Boxplot Settings\")", "= st.sidebar.selectbox('Y axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.line(data_frame=df, x=x_values, y=y_values,", "y=y_values, color=color_value) # display the chart st.plotly_chart(plot) except Exception as e: print(e) if", "add a select widget to the side bar chart_select = st.sidebar.selectbox( label=\"Select the", "the chart st.plotly_chart(plot) except Exception as e: print(e) if chart_select == 'Lineplots': st.sidebar.subheader(\"Line", "st.write(df) numeric_columns = list(df.select_dtypes(['float', 'int']).columns) non_numeric_columns = list(df.select_dtypes(['object']).columns) non_numeric_columns.append(None) print(non_numeric_columns) except Exception as", "None: print(uploaded_file) print(\"hello\") try: df = pd.read_csv(uploaded_file) except Exception as e: print(e) df", "Exception as e: print(e) if chart_select == 'Boxplot': st.sidebar.subheader(\"Boxplot Settings\") try: y =", "'Boxplot': st.sidebar.subheader(\"Boxplot Settings\") try: y = 
st.sidebar.selectbox(\"Y axis\", options=numeric_columns) x = st.sidebar.selectbox(\"X axis\",", "axis\", options=numeric_columns) x = st.sidebar.selectbox(\"X axis\", options=non_numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot =", "if chart_select == 'Scatterplots': st.sidebar.subheader(\"Scatterplot Settings\") try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns) y_values", "pandas as pd # configuration st.set_option('deprecation.showfileUploaderEncoding', False) # title of the app st.title(\"Data", "print(e) st.write(\"Please upload file to the application.\") # add a select widget to", "= st.sidebar.selectbox('X axis', options=numeric_columns) y_values = st.sidebar.selectbox('Y axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns)", "if chart_select == 'Histogram': st.sidebar.subheader(\"Histogram Settings\") try: x = st.sidebar.selectbox('Feature', options=numeric_columns) bin_size =", "st.set_option('deprecation.showfileUploaderEncoding', False) # title of the app st.title(\"Data Visualization App\") # Add a", "print(non_numeric_columns) except Exception as e: print(e) st.write(\"Please upload file to the application.\") #", "st.title(\"Data Visualization App\") # Add a sidebar st.sidebar.subheader(\"Visualization Settings\") # Setup file upload", "axis\", options=non_numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.box(data_frame=df, y=y, x=x, color=color_value) st.plotly_chart(plot)", "# add a select widget to the side bar chart_select = st.sidebar.selectbox( label=\"Select", "= st.sidebar.slider(\"Number of Bins\", min_value=10, max_value=100, value=40) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot =", "df = pd.read_csv(uploaded_file) except Exception as e: print(e) df = pd.read_excel(uploaded_file) global numeric_columns", "your CSV or Excel file. 
(200MB max)\", type=['csv', 'xlsx']) global df if uploaded_file", "st.sidebar.selectbox('Feature', options=numeric_columns) bin_size = st.sidebar.slider(\"Number of Bins\", min_value=10, max_value=100, value=40) color_value = st.sidebar.selectbox(\"Color\",", "file. (200MB max)\", type=['csv', 'xlsx']) global df if uploaded_file is not None: print(uploaded_file)", "= st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.box(data_frame=df, y=y, x=x, color=color_value) st.plotly_chart(plot) except Exception as", "= st.sidebar.selectbox('Y axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.scatter(data_frame=df, x=x_values, y=y_values,", ") if chart_select == 'Scatterplots': st.sidebar.subheader(\"Scatterplot Settings\") try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns)", "e: print(e) if chart_select == 'Lineplots': st.sidebar.subheader(\"Line Plot Settings\") try: x_values = st.sidebar.selectbox('X", "plot = px.line(data_frame=df, x=x_values, y=y_values, color=color_value) st.plotly_chart(plot) except Exception as e: print(e) if", "options=numeric_columns) y_values = st.sidebar.selectbox('Y axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.scatter(data_frame=df,", "color=color_value) st.plotly_chart(plot) except Exception as e: print(e) if chart_select == 'Histogram': st.sidebar.subheader(\"Histogram Settings\")", "options=numeric_columns) x = st.sidebar.selectbox(\"X axis\", options=non_numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.box(data_frame=df,", "print(e) if chart_select == 'Histogram': st.sidebar.subheader(\"Histogram Settings\") try: x = st.sidebar.selectbox('Feature', options=numeric_columns) bin_size", "type\", options=['Scatterplots', 'Lineplots', 'Histogram', 'Boxplot'] ) if chart_select == 'Scatterplots': 
st.sidebar.subheader(\"Scatterplot Settings\") try:", "st.sidebar.subheader(\"Scatterplot Settings\") try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns) y_values = st.sidebar.selectbox('Y axis', options=numeric_columns)", "Plot Settings\") try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns) y_values = st.sidebar.selectbox('Y axis', options=numeric_columns)", "the app st.title(\"Data Visualization App\") # Add a sidebar st.sidebar.subheader(\"Visualization Settings\") # Setup", "options=numeric_columns) y_values = st.sidebar.selectbox('Y axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.line(data_frame=df,", "options=non_numeric_columns) plot = px.scatter(data_frame=df, x=x_values, y=y_values, color=color_value) # display the chart st.plotly_chart(plot) except", "uploaded_file = st.sidebar.file_uploader( label=\"Upload your CSV or Excel file. (200MB max)\", type=['csv', 'xlsx'])", "px import pandas as pd # configuration st.set_option('deprecation.showfileUploaderEncoding', False) # title of the", "# display the chart st.plotly_chart(plot) except Exception as e: print(e) if chart_select ==", "pd.read_csv(uploaded_file) except Exception as e: print(e) df = pd.read_excel(uploaded_file) global numeric_columns global non_numeric_columns", "Exception as e: print(e) df = pd.read_excel(uploaded_file) global numeric_columns global non_numeric_columns try: st.write(df)", "axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.scatter(data_frame=df, x=x_values, y=y_values, color=color_value) #", "print(e) if chart_select == 'Boxplot': st.sidebar.subheader(\"Boxplot Settings\") try: y = st.sidebar.selectbox(\"Y axis\", options=numeric_columns)", "global df if uploaded_file is not None: print(uploaded_file) print(\"hello\") try: df = pd.read_csv(uploaded_file)", "= st.sidebar.selectbox(\"Color\", 
options=non_numeric_columns) plot = px.scatter(data_frame=df, x=x_values, y=y_values, color=color_value) # display the chart", "except Exception as e: print(e) df = pd.read_excel(uploaded_file) global numeric_columns global non_numeric_columns try:", "st.write(\"Please upload file to the application.\") # add a select widget to the", "= pd.read_csv(uploaded_file) except Exception as e: print(e) df = pd.read_excel(uploaded_file) global numeric_columns global", "try: x = st.sidebar.selectbox('Feature', options=numeric_columns) bin_size = st.sidebar.slider(\"Number of Bins\", min_value=10, max_value=100, value=40)", "color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.box(data_frame=df, y=y, x=x, color=color_value) st.plotly_chart(plot) except Exception", "chart_select == 'Scatterplots': st.sidebar.subheader(\"Scatterplot Settings\") try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns) y_values =", "st.sidebar.subheader(\"Histogram Settings\") try: x = st.sidebar.selectbox('Feature', options=numeric_columns) bin_size = st.sidebar.slider(\"Number of Bins\", min_value=10,", "st.sidebar.subheader(\"Visualization Settings\") # Setup file upload uploaded_file = st.sidebar.file_uploader( label=\"Upload your CSV or", "side bar chart_select = st.sidebar.selectbox( label=\"Select the chart type\", options=['Scatterplots', 'Lineplots', 'Histogram', 'Boxplot']", "if chart_select == 'Lineplots': st.sidebar.subheader(\"Line Plot Settings\") try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns)", "= px.histogram(x=x, data_frame=df, color=color_value) st.plotly_chart(plot) except Exception as e: print(e) if chart_select ==", "Settings\") try: y = st.sidebar.selectbox(\"Y axis\", options=numeric_columns) x = st.sidebar.selectbox(\"X axis\", options=non_numeric_columns) color_value", "file to the application.\") # add a select widget to the side bar", "= st.sidebar.file_uploader( label=\"Upload your CSV or Excel 
file. (200MB max)\", type=['csv', 'xlsx']) global", "try: st.write(df) numeric_columns = list(df.select_dtypes(['float', 'int']).columns) non_numeric_columns = list(df.select_dtypes(['object']).columns) non_numeric_columns.append(None) print(non_numeric_columns) except Exception", "a sidebar st.sidebar.subheader(\"Visualization Settings\") # Setup file upload uploaded_file = st.sidebar.file_uploader( label=\"Upload your", "st.plotly_chart(plot) except Exception as e: print(e) if chart_select == 'Boxplot': st.sidebar.subheader(\"Boxplot Settings\") try:", "x = st.sidebar.selectbox('Feature', options=numeric_columns) bin_size = st.sidebar.slider(\"Number of Bins\", min_value=10, max_value=100, value=40) color_value", "chart_select = st.sidebar.selectbox( label=\"Select the chart type\", options=['Scatterplots', 'Lineplots', 'Histogram', 'Boxplot'] ) if", "min_value=10, max_value=100, value=40) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.histogram(x=x, data_frame=df, color=color_value) st.plotly_chart(plot)", "axis', options=numeric_columns) y_values = st.sidebar.selectbox('Y axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot =", "as e: print(e) if chart_select == 'Boxplot': st.sidebar.subheader(\"Boxplot Settings\") try: y = st.sidebar.selectbox(\"Y", "app st.title(\"Data Visualization App\") # Add a sidebar st.sidebar.subheader(\"Visualization Settings\") # Setup file", "numeric_columns = list(df.select_dtypes(['float', 'int']).columns) non_numeric_columns = list(df.select_dtypes(['object']).columns) non_numeric_columns.append(None) print(non_numeric_columns) except Exception as e:", "except Exception as e: print(e) if chart_select == 'Boxplot': st.sidebar.subheader(\"Boxplot Settings\") try: y", "st.sidebar.selectbox(\"X axis\", options=non_numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.box(data_frame=df, y=y, 
x=x, color=color_value)", "= pd.read_excel(uploaded_file) global numeric_columns global non_numeric_columns try: st.write(df) numeric_columns = list(df.select_dtypes(['float', 'int']).columns) non_numeric_columns", "Add a sidebar st.sidebar.subheader(\"Visualization Settings\") # Setup file upload uploaded_file = st.sidebar.file_uploader( label=\"Upload", "Setup file upload uploaded_file = st.sidebar.file_uploader( label=\"Upload your CSV or Excel file. (200MB", "pd.read_excel(uploaded_file) global numeric_columns global non_numeric_columns try: st.write(df) numeric_columns = list(df.select_dtypes(['float', 'int']).columns) non_numeric_columns =", "of Bins\", min_value=10, max_value=100, value=40) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.histogram(x=x, data_frame=df,", "x = st.sidebar.selectbox(\"X axis\", options=non_numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.box(data_frame=df, y=y,", "axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.line(data_frame=df, x=x_values, y=y_values, color=color_value) st.plotly_chart(plot)", "CSV or Excel file. 
(200MB max)\", type=['csv', 'xlsx']) global df if uploaded_file is", "not None: print(uploaded_file) print(\"hello\") try: df = pd.read_csv(uploaded_file) except Exception as e: print(e)", "options=non_numeric_columns) plot = px.line(data_frame=df, x=x_values, y=y_values, color=color_value) st.plotly_chart(plot) except Exception as e: print(e)", "max_value=100, value=40) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.histogram(x=x, data_frame=df, color=color_value) st.plotly_chart(plot) except", "= st.sidebar.selectbox( label=\"Select the chart type\", options=['Scatterplots', 'Lineplots', 'Histogram', 'Boxplot'] ) if chart_select", "st.plotly_chart(plot) except Exception as e: print(e) if chart_select == 'Histogram': st.sidebar.subheader(\"Histogram Settings\") try:", "options=non_numeric_columns) plot = px.box(data_frame=df, y=y, x=x, color=color_value) st.plotly_chart(plot) except Exception as e: print(e)", "max)\", type=['csv', 'xlsx']) global df if uploaded_file is not None: print(uploaded_file) print(\"hello\") try:", "pd # configuration st.set_option('deprecation.showfileUploaderEncoding', False) # title of the app st.title(\"Data Visualization App\")", "= st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.histogram(x=x, data_frame=df, color=color_value) st.plotly_chart(plot) except Exception as e:", "if chart_select == 'Boxplot': st.sidebar.subheader(\"Boxplot Settings\") try: y = st.sidebar.selectbox(\"Y axis\", options=numeric_columns) x", "as pd # configuration st.set_option('deprecation.showfileUploaderEncoding', False) # title of the app st.title(\"Data Visualization", "display the chart st.plotly_chart(plot) except Exception as e: print(e) if chart_select == 'Lineplots':", "print(\"hello\") try: df = pd.read_csv(uploaded_file) except Exception as e: print(e) df = pd.read_excel(uploaded_file)", "try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns) y_values = 
st.sidebar.selectbox('Y axis', options=numeric_columns) color_value =", "== 'Histogram': st.sidebar.subheader(\"Histogram Settings\") try: x = st.sidebar.selectbox('Feature', options=numeric_columns) bin_size = st.sidebar.slider(\"Number of", "= px.scatter(data_frame=df, x=x_values, y=y_values, color=color_value) # display the chart st.plotly_chart(plot) except Exception as", "import pandas as pd # configuration st.set_option('deprecation.showfileUploaderEncoding', False) # title of the app", "label=\"Upload your CSV or Excel file. (200MB max)\", type=['csv', 'xlsx']) global df if", "to the application.\") # add a select widget to the side bar chart_select", "st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.line(data_frame=df, x=x_values, y=y_values, color=color_value) st.plotly_chart(plot) except Exception as e:", "as px import pandas as pd # configuration st.set_option('deprecation.showfileUploaderEncoding', False) # title of", "uploaded_file is not None: print(uploaded_file) print(\"hello\") try: df = pd.read_csv(uploaded_file) except Exception as", "color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.histogram(x=x, data_frame=df, color=color_value) st.plotly_chart(plot) except Exception as", "upload uploaded_file = st.sidebar.file_uploader( label=\"Upload your CSV or Excel file. 
(200MB max)\", type=['csv',", "'Histogram', 'Boxplot'] ) if chart_select == 'Scatterplots': st.sidebar.subheader(\"Scatterplot Settings\") try: x_values = st.sidebar.selectbox('X", "except Exception as e: print(e) if chart_select == 'Histogram': st.sidebar.subheader(\"Histogram Settings\") try: x", "e: print(e) if chart_select == 'Boxplot': st.sidebar.subheader(\"Boxplot Settings\") try: y = st.sidebar.selectbox(\"Y axis\",", "non_numeric_columns try: st.write(df) numeric_columns = list(df.select_dtypes(['float', 'int']).columns) non_numeric_columns = list(df.select_dtypes(['object']).columns) non_numeric_columns.append(None) print(non_numeric_columns) except", "non_numeric_columns.append(None) print(non_numeric_columns) except Exception as e: print(e) st.write(\"Please upload file to the application.\")", "plotly_express as px import pandas as pd # configuration st.set_option('deprecation.showfileUploaderEncoding', False) # title", "try: df = pd.read_csv(uploaded_file) except Exception as e: print(e) df = pd.read_excel(uploaded_file) global", "is not None: print(uploaded_file) print(\"hello\") try: df = pd.read_csv(uploaded_file) except Exception as e:", "data_frame=df, color=color_value) st.plotly_chart(plot) except Exception as e: print(e) if chart_select == 'Boxplot': st.sidebar.subheader(\"Boxplot", "print(e) if chart_select == 'Lineplots': st.sidebar.subheader(\"Line Plot Settings\") try: x_values = st.sidebar.selectbox('X axis',", "px.line(data_frame=df, x=x_values, y=y_values, color=color_value) st.plotly_chart(plot) except Exception as e: print(e) if chart_select ==", "Settings\") try: x = st.sidebar.selectbox('Feature', options=numeric_columns) bin_size = st.sidebar.slider(\"Number of Bins\", min_value=10, max_value=100,", "or Excel file. 
(200MB max)\", type=['csv', 'xlsx']) global df if uploaded_file is not", "st.sidebar.subheader(\"Line Plot Settings\") try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns) y_values = st.sidebar.selectbox('Y axis',", "'Histogram': st.sidebar.subheader(\"Histogram Settings\") try: x = st.sidebar.selectbox('Feature', options=numeric_columns) bin_size = st.sidebar.slider(\"Number of Bins\",", "st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.histogram(x=x, data_frame=df, color=color_value) st.plotly_chart(plot) except Exception as e: print(e)", "options=non_numeric_columns) plot = px.histogram(x=x, data_frame=df, color=color_value) st.plotly_chart(plot) except Exception as e: print(e) if", "= st.sidebar.selectbox(\"Y axis\", options=numeric_columns) x = st.sidebar.selectbox(\"X axis\", options=non_numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns)", "upload file to the application.\") # add a select widget to the side", "st.sidebar.selectbox(\"Y axis\", options=numeric_columns) x = st.sidebar.selectbox(\"X axis\", options=non_numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot", "e: print(e) st.write(\"Please upload file to the application.\") # add a select widget", "'xlsx']) global df if uploaded_file is not None: print(uploaded_file) print(\"hello\") try: df =", "the application.\") # add a select widget to the side bar chart_select =", "color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.scatter(data_frame=df, x=x_values, y=y_values, color=color_value) # display the", "a select widget to the side bar chart_select = st.sidebar.selectbox( label=\"Select the chart", "as e: print(e) if chart_select == 'Lineplots': st.sidebar.subheader(\"Line Plot Settings\") try: x_values =", "as e: print(e) st.write(\"Please upload file to the application.\") # add a select", "list(df.select_dtypes(['object']).columns) 
non_numeric_columns.append(None) print(non_numeric_columns) except Exception as e: print(e) st.write(\"Please upload file to the", "st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.scatter(data_frame=df, x=x_values, y=y_values, color=color_value) # display the chart st.plotly_chart(plot)", "non_numeric_columns = list(df.select_dtypes(['object']).columns) non_numeric_columns.append(None) print(non_numeric_columns) except Exception as e: print(e) st.write(\"Please upload file", "st.plotly_chart(plot) except Exception as e: print(e) if chart_select == 'Lineplots': st.sidebar.subheader(\"Line Plot Settings\")", "x=x_values, y=y_values, color=color_value) st.plotly_chart(plot) except Exception as e: print(e) if chart_select == 'Histogram':", "'Lineplots', 'Histogram', 'Boxplot'] ) if chart_select == 'Scatterplots': st.sidebar.subheader(\"Scatterplot Settings\") try: x_values =", "value=40) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.histogram(x=x, data_frame=df, color=color_value) st.plotly_chart(plot) except Exception", "of the app st.title(\"Data Visualization App\") # Add a sidebar st.sidebar.subheader(\"Visualization Settings\") #", "color=color_value) # display the chart st.plotly_chart(plot) except Exception as e: print(e) if chart_select", "options=non_numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.box(data_frame=df, y=y, x=x, color=color_value) st.plotly_chart(plot) except", "x_values = st.sidebar.selectbox('X axis', options=numeric_columns) y_values = st.sidebar.selectbox('Y axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\",", "df = pd.read_excel(uploaded_file) global numeric_columns global non_numeric_columns try: st.write(df) numeric_columns = list(df.select_dtypes(['float', 'int']).columns)", "options=numeric_columns) bin_size = st.sidebar.slider(\"Number of Bins\", min_value=10, max_value=100, value=40) color_value 
= st.sidebar.selectbox(\"Color\", options=non_numeric_columns)", "except Exception as e: print(e) if chart_select == 'Lineplots': st.sidebar.subheader(\"Line Plot Settings\") try:", "df if uploaded_file is not None: print(uploaded_file) print(\"hello\") try: df = pd.read_csv(uploaded_file) except", "as e: print(e) if chart_select == 'Histogram': st.sidebar.subheader(\"Histogram Settings\") try: x = st.sidebar.selectbox('Feature',", "y_values = st.sidebar.selectbox('Y axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.scatter(data_frame=df, x=x_values,", "Exception as e: print(e) if chart_select == 'Histogram': st.sidebar.subheader(\"Histogram Settings\") try: x =", "= st.sidebar.selectbox(\"X axis\", options=non_numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.box(data_frame=df, y=y, x=x,", "Visualization App\") # Add a sidebar st.sidebar.subheader(\"Visualization Settings\") # Setup file upload uploaded_file", "(200MB max)\", type=['csv', 'xlsx']) global df if uploaded_file is not None: print(uploaded_file) print(\"hello\")", "st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.box(data_frame=df, y=y, x=x, color=color_value) st.plotly_chart(plot) except Exception as e:", "import streamlit as st import plotly_express as px import pandas as pd #", "'Lineplots': st.sidebar.subheader(\"Line Plot Settings\") try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns) y_values = st.sidebar.selectbox('Y", "if uploaded_file is not None: print(uploaded_file) print(\"hello\") try: df = pd.read_csv(uploaded_file) except Exception", "type=['csv', 'xlsx']) global df if uploaded_file is not None: print(uploaded_file) print(\"hello\") try: df", "options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.line(data_frame=df, x=x_values, y=y_values, color=color_value) 
st.plotly_chart(plot) except", "Settings\") # Setup file upload uploaded_file = st.sidebar.file_uploader( label=\"Upload your CSV or Excel", "the chart type\", options=['Scatterplots', 'Lineplots', 'Histogram', 'Boxplot'] ) if chart_select == 'Scatterplots': st.sidebar.subheader(\"Scatterplot", "e: print(e) if chart_select == 'Histogram': st.sidebar.subheader(\"Histogram Settings\") try: x = st.sidebar.selectbox('Feature', options=numeric_columns)", "try: y = st.sidebar.selectbox(\"Y axis\", options=numeric_columns) x = st.sidebar.selectbox(\"X axis\", options=non_numeric_columns) color_value =", "bin_size = st.sidebar.slider(\"Number of Bins\", min_value=10, max_value=100, value=40) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot", "chart_select == 'Boxplot': st.sidebar.subheader(\"Boxplot Settings\") try: y = st.sidebar.selectbox(\"Y axis\", options=numeric_columns) x =", "streamlit as st import plotly_express as px import pandas as pd # configuration", "chart_select == 'Lineplots': st.sidebar.subheader(\"Line Plot Settings\") try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns) y_values", "color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.line(data_frame=df, x=x_values, y=y_values, color=color_value) st.plotly_chart(plot) except Exception", "select widget to the side bar chart_select = st.sidebar.selectbox( label=\"Select the chart type\",", "Bins\", min_value=10, max_value=100, value=40) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.histogram(x=x, data_frame=df, color=color_value)", "= st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.line(data_frame=df, x=x_values, y=y_values, color=color_value) st.plotly_chart(plot) except Exception as", "print(uploaded_file) print(\"hello\") try: df = pd.read_csv(uploaded_file) except Exception as e: print(e) df =", "x=x_values, y=y_values, color=color_value) # display the chart 
st.plotly_chart(plot) except Exception as e: print(e)", "App\") # Add a sidebar st.sidebar.subheader(\"Visualization Settings\") # Setup file upload uploaded_file =", "global non_numeric_columns try: st.write(df) numeric_columns = list(df.select_dtypes(['float', 'int']).columns) non_numeric_columns = list(df.select_dtypes(['object']).columns) non_numeric_columns.append(None) print(non_numeric_columns)", "Exception as e: print(e) st.write(\"Please upload file to the application.\") # add a", "import plotly_express as px import pandas as pd # configuration st.set_option('deprecation.showfileUploaderEncoding', False) #", "label=\"Select the chart type\", options=['Scatterplots', 'Lineplots', 'Histogram', 'Boxplot'] ) if chart_select == 'Scatterplots':", "y_values = st.sidebar.selectbox('Y axis', options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.line(data_frame=df, x=x_values,", "# Add a sidebar st.sidebar.subheader(\"Visualization Settings\") # Setup file upload uploaded_file = st.sidebar.file_uploader(", "the side bar chart_select = st.sidebar.selectbox( label=\"Select the chart type\", options=['Scatterplots', 'Lineplots', 'Histogram',", "== 'Scatterplots': st.sidebar.subheader(\"Scatterplot Settings\") try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns) y_values = st.sidebar.selectbox('Y", "plot = px.scatter(data_frame=df, x=x_values, y=y_values, color=color_value) # display the chart st.plotly_chart(plot) except Exception", "== 'Lineplots': st.sidebar.subheader(\"Line Plot Settings\") try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns) y_values =", "px.histogram(x=x, data_frame=df, color=color_value) st.plotly_chart(plot) except Exception as e: print(e) if chart_select == 'Boxplot':", "== 'Boxplot': st.sidebar.subheader(\"Boxplot Settings\") try: y = st.sidebar.selectbox(\"Y axis\", options=numeric_columns) x = st.sidebar.selectbox(\"X", "file upload uploaded_file = 
st.sidebar.file_uploader( label=\"Upload your CSV or Excel file. (200MB max)\",", "st.sidebar.selectbox( label=\"Select the chart type\", options=['Scatterplots', 'Lineplots', 'Histogram', 'Boxplot'] ) if chart_select ==", "Exception as e: print(e) if chart_select == 'Lineplots': st.sidebar.subheader(\"Line Plot Settings\") try: x_values", "= list(df.select_dtypes(['float', 'int']).columns) non_numeric_columns = list(df.select_dtypes(['object']).columns) non_numeric_columns.append(None) print(non_numeric_columns) except Exception as e: print(e)", "e: print(e) df = pd.read_excel(uploaded_file) global numeric_columns global non_numeric_columns try: st.write(df) numeric_columns =", "Settings\") try: x_values = st.sidebar.selectbox('X axis', options=numeric_columns) y_values = st.sidebar.selectbox('Y axis', options=numeric_columns) color_value", "= list(df.select_dtypes(['object']).columns) non_numeric_columns.append(None) print(non_numeric_columns) except Exception as e: print(e) st.write(\"Please upload file to", "options=numeric_columns) color_value = st.sidebar.selectbox(\"Color\", options=non_numeric_columns) plot = px.scatter(data_frame=df, x=x_values, y=y_values, color=color_value) # display" ]
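The numeric/non-numeric column split above leans on pandas `select_dtypes`; the same classification can be sketched in plain Python for rows parsed from a CSV. This is only an illustration of the idea, and the helper name `split_columns` is mine, not part of the app:

```python
def split_columns(rows):
    """Classify columns as numeric or non-numeric by attempting float(),
    mirroring the select_dtypes split the app performs with pandas."""
    numeric, non_numeric = [], []
    for col in rows[0]:
        try:
            for row in rows:
                float(row[col])  # raises ValueError for non-numeric text
            numeric.append(col)
        except ValueError:
            non_numeric.append(col)
    return numeric, non_numeric

rows = [
    {'name': 'a', 'x': '1', 'y': '2.5'},
    {'name': 'b', 'x': '3', 'y': '4.0'},
]
print(split_columns(rows))  # (['x', 'y'], ['name'])
```

pandas additionally distinguishes int from float columns, which this sketch deliberately does not.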
# coding: utf-8
import os
import shutil
import json

import click
from werkzeug.utils import cached_property


class Context(object):
    def __init__(self):
        self.config_filename = os.environ.get('RIGIDSEARCH_CONFIG')

    @cached_property
    def app(self):
        from rigidsearch.app import create_app
        return create_app(self.config_filename)


pass_ctx = click.make_pass_decorator(Context, ensure=True)


@click.group()
@click.option('--config', type=click.Path(), help='Path to the config file.')
@pass_ctx
def cli(ctx, config):
    if config is not None:
        ctx.config_filename = os.path.abspath(config)


@cli.command('index-folder')
@click.argument('config', type=click.File('rb'))
@click.option('--index-path', type=click.Path(),
              help='Where to write the index to other than the config default.')
@click.option('--save-zip', type=click.File('wb'),
              help='Optionally, a zip file the index should be stored in '
                   'instead of modifying the index in-place.')
@pass_ctx
def index_folder_cmd(ctx, config, index_path, save_zip):
    """Indexes a path."""
    from rigidsearch.search import index_tree, get_index_path
    index_path = get_index_path(index_path=index_path, app=ctx.app)
    try:
        shutil.rmtree(index_path)
    except (OSError, IOError):
        pass
    for event in index_tree(json.load(config), index_zip=save_zip,
                            index_path=index_path):
        click.echo(event)


@cli.command('search')
@click.argument('query')
@click.option('--section', default='generic')
@click.option('--index-path', help='Path to the search index.')
@pass_ctx
def search_cmd(ctx, query, section, index_path):
    """Triggers a search from the command line."""
    from rigidsearch.search import get_index, get_index_path
    index_path = get_index_path(index_path=index_path, app=ctx.app)
    index = get_index(index_path)
    results = index.search(query, section=section)
    for result in results['items']:
        click.echo('%s (%s)' % (result['path'], result['title']))


@cli.command('devserver')
@click.option('--bind', '-b', default='127.0.0.1:5001')
@pass_ctx
def devserver_cmd(ctx, bind):
    """Runs a local development server."""
    parts = bind.split(':', 1)
    if len(parts) == 2:
        addr, port = parts
    elif len(parts) == 1:
        addr, port = bind, '5001'
    if addr == '':
        addr = '127.0.0.1'
    ctx.app.run(addr, int(port), debug=True)


@cli.command('run')
@click.option('--bind', '-b', default='127.0.0.1:5001')
@click.option('--workers', '-w', default=1)
@click.option('--timeout', '-t', default=30)
@click.option('--loglevel', default='info')
@click.option('--accesslog', default='-')
@click.option('--errorlog', default='-')
@pass_ctx
def run_cmd(ctx, **options):
    """Runs the http web server."""
    from rigidsearch.app import make_production_server
    make_production_server(app=ctx.app, options=options).run()


def main():
    cli(auto_envvar_prefix='RIGIDSEARCH')
@click.option('--timeout', '-t', default=30) @click.option('--loglevel', default='info')", "try: shutil.rmtree(index_path) except (OSError, IOError): pass for event in index_tree(json.load(config), index_zip=save_zip, index_path=index_path): click.echo(event)", "@cli.command('index-folder') @click.argument('config', type=click.File('rb')) @click.option('--index-path', type=click.Path(), help='Where to write the index to other than" ]
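The `HOST:PORT` handling in `devserver_cmd` (a lone host keeps the default port, a leading `:` keeps the default host) can be sketched as a small stdlib-only helper. The function name `parse_bind` and its keyword defaults are mine, not part of the original CLI:

```python
def parse_bind(bind, default_addr='127.0.0.1', default_port='5001'):
    """Split a HOST:PORT string, falling back to defaults for missing parts."""
    parts = bind.split(':', 1)
    if len(parts) == 2:
        addr, port = parts
    else:
        # No colon at all: the whole string is the host.
        addr, port = bind, default_port
    if addr == '':
        # ":9000" style: bind the default address on the given port.
        addr = default_addr
    return addr, int(port)


print(parse_bind('0.0.0.0:8000'))   # ('0.0.0.0', 8000)
print(parse_bind(':9000'))          # ('127.0.0.1', 9000)
print(parse_bind('localhost'))      # ('localhost', 5001)
```

Because `split(':', 1)` splits at most once, an IPv6-style address with multiple colons would need extra care; the original command makes the same simplifying assumption.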
# Created by kamimura on 2018/07/21.
# Copyright © 2018 kamimura. All rights reserved.

import sys
import datetime

from antlr4 import *
from SIONLexer import SIONLexer
from SIONParser import SIONParser
from SIONVisitor import SIONVisitor


def load(file, encoding: str = 'utf-8', errors: str = 'strict') -> object:
    data = file.read()
    if isinstance(data, (bytes, bytearray)):
        data = data.decode(encoding, errors)
    stream = InputStream(data)
    lexer = SIONLexer(stream)
    tokens = CommonTokenStream(lexer)
    parser = SIONParser(tokens)
    tree = parser.si_self()
    visitor = SIONVisitor()
    return visitor.visit(tree)


def loads(s):
    if isinstance(s, (bytes, bytearray)):
        s = s.decode()
    stream = InputStream(s)
    lexer = SIONLexer(stream)
    tokens = CommonTokenStream(lexer)
    parser = SIONParser(tokens)
    tree = parser.si_self()
    visitor = SIONVisitor()
    return visitor.visit(tree)


def str_esc(s):
    # Escape the backslash first, so the backslashes introduced by the
    # other replacements are not escaped a second time.
    for o, n in [('\\', '\\\\'), ('"', '\\"'), ('\n', '\\n'), ('\r', '\\r')]:
        s = s.replace(o, n)
    return s


def dump(obj, file):
    if obj is None:
        print('nil', file=file, end='')
    elif isinstance(obj, bool):
        # bool must be checked before int, since bool subclasses int.
        if obj:
            print('true', file=file, end='')
        else:
            print('false', file=file, end='')
    elif isinstance(obj, (int, float)):
        print(obj, file=file, end='')
    elif isinstance(obj, str):
        print(f'"{str_esc(obj)}"', file=file, end='')
    elif isinstance(obj, (bytes, bytearray)):
        print(f'.Data("{str(obj)[2:-1]}")', file=file, end='')
    elif isinstance(obj, datetime.datetime):
        print(f'.Date({obj.timestamp()})', file=file, end='')
    elif isinstance(obj, (list, tuple)):
        print('[', file=file, end='')
        if len(obj) > 0:
            for o in obj[:-1]:
                dump(o, file)
                print(',', file=file, end='')
            dump(obj[-1], file)
        print(']', file=file, end='')
    elif isinstance(obj, dict):
        print('[', file=file, end='')
        ks = list(obj.keys())
        if len(ks) == 0:
            # SION writes the empty map as [:].
            print(':', file=file, end='')
        elif len(ks) == 1:
            dump(ks[0], file)
            print(':', file=file, end='')
            dump(obj[ks[0]], file)
        else:
            for k in ks[:-1]:
                dump(k, file)
                print(':', file=file, end='')
                dump(obj[k], file)
                print(',', file=file, end='')
            dump(ks[-1], file)
            print(':', file=file, end='')
            dump(obj[ks[-1]], file)
        print(']', file=file, end='')
    else:
        raise TypeError(
            f"Object of type '{obj.__class__.__name__}' is not SION serializable")


def dumps(obj: object):
    if obj is None:
        return 'nil'
    if isinstance(obj, bool):
        if obj:
            return 'true'
        return 'false'
    if isinstance(obj, (int, float)):
        return str(obj)
    if isinstance(obj, str):
        return f'"{str_esc(obj)}"'
    if isinstance(obj, (bytes, bytearray)):
        return f'.Data("{str(obj)[2:-1]}")'
    if isinstance(obj, datetime.datetime):
        # datetime.timestamp() takes no arguments.
        return f'.Date({obj.timestamp()})'
    if isinstance(obj, (list, tuple)):
        res = '['
        if len(obj) > 0:
            for o in obj[:-1]:
                res += dumps(o) + ','
            res += dumps(obj[-1])
        res += ']'
        return res
    if isinstance(obj, dict):
        res = '['
        ks = list(obj.keys())
        if len(ks) == 0:
            res += ':'
        elif len(ks) == 1:
            res += dumps(ks[0]) + ':' + dumps(obj[ks[0]])
        else:
            for k in ks[:-1]:
                # Recurse with dumps() so non-scalar values serialize correctly.
                res += dumps(k) + ':' + dumps(obj[k]) + ','
            res += dumps(ks[-1]) + ':' + dumps(obj[ks[-1]])
        res += ']'
        return res
    raise TypeError(
        f"Object of type '{obj.__class__.__name__}' is not SION serializable")


if __name__ == '__main__':
    import pprint
    if len(sys.argv) > 1:
        filename = sys.argv[1]
    else:
        filename = '../test/t.sion'
    with open(filename) as f:
        obj = load(f)
    pprint.pprint(obj)
    with open('../test/output.sion', 'w') as f:
        dump(obj, f)
    s = '''
    [
        "array" : [
            nil, true, 1, // Int in decimal
            1.0, // Double in decimal
            "one", [1], ["one" : 1.0]
        ],
        "bool" : true,
        "data" : .Data("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"),
        "date" : .Date(0x0p+0),
        "dictionary" : [
            "array" : [], "bool" : false, "double" : 0x0p+0,
            "int" : 0, "nil" : nil, "object" : [:], "string" : ""
        ],
        "double" : 0x1.518f5c28f5c29p+5, // Double in hexadecimal
        "int" : -0x2a, // Int in hexadecimal
        "nil" : nil,
        "string" : "漢字、カタカナ、ひらがなの入ったstring😇",
        "url" : "https://github.com/dankogai/",
        nil : "Unlike JSON and Property Lists,",
        true : "Yes, SION",
        1 : "does accept",
        1.0 : "non-String keys.",
        [] : "like",
        [:] : "Map of ECMAScript."
    ]
    '''
    obj = loads(s)
    pprint.pprint(obj)
    s = dumps(obj)
    print(s)
"in hexadecimal \"int\" : -0x2a, // Int in hexadecimal \"nil\" : nil, \"string\"", "datetime.datetime): return f'.Date({obj.timestamp(obj)})' if isinstance(obj, (list, tuple)): res = '[' if len(obj) >", "+ dumps(obj[ks[0]]) else: for k in ks[:-1]: res += dumps(k) + ':' +", "':' elif len(ks) == 1: res += dumps(ks[0]) + ':' + dumps(obj[ks[0]]) else:", "if isinstance(obj, bool): if obj: return 'true' return 'false' if isinstance(obj, (int, float)):", "print(']', file=file, end='') else: raise TypeError( f\"Object of type '{obj.__class__.__name__}' is not SION", "res = '[' ks = list(obj.keys()) if len(ks) == 0: res += ':'", "as f: dump(obj, f) s = ''' [ \"array\" : [ nil, true,", "s.decode() stream = InputStream(s) lexer = SIONLexer(stream) tokens = CommonTokenStream(lexer) parser = SIONParser(tokens)", "[\"one\" : 1.0] ], \"bool\" : true, \"data\" : .Data(\"R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7\"), \"date\" : .Date(0x0p+0),", "hexadecimal \"nil\" : nil, \"string\" : \"漢字、カタカナ、ひらがなの入ったstring😇\", \"url\" : \"https://github.com/dankogai/\", nil : \"Unlike", "(int, float)): return str(obj) if isinstance(obj, str): return f'\"{str_esc(obj)}\"' if isinstance(obj, (bytes, bytearray)):", "stream = InputStream(s) lexer = SIONLexer(stream) tokens = CommonTokenStream(lexer) parser = SIONParser(tokens) tree", "res if isinstance(obj, dict): res = '[' ks = list(obj.keys()) if len(ks) ==", "= SIONVisitor() return visitor.visit(tree) def loads(s): if isinstance(s, (bytes, bytearray)): s = s.decode()", "kamimura on 2018/07/21. # Copyright © 2018 kamimura. All rights reserved. 
import sys", "[ \"array\" : [ nil, true, 1, // Int in decimal 1.0, //", "InputStream(data) lexer = SIONLexer(stream) tokens = CommonTokenStream(lexer) parser = SIONParser(tokens) tree = parser.si_self()", "stream = InputStream(data) lexer = SIONLexer(stream) tokens = CommonTokenStream(lexer) parser = SIONParser(tokens) tree", "str='strict') -> object: data = file.read() if isinstance(data, (bytes, bytearray)): data = data.decode(encoding,", "'\\\\\\\\')]: s = s.replace(o, n) return s def dump(obj, file): if obj is", "obj: print('ture', file=file, end='') else: print('false', file=file, end='') elif isinstance(obj, (int, float)): print(obj,", "isinstance(obj, (bytes, bytearray)): print(f'.Data(\"{str(obj)[2:-1]}\")', file=file, end='') elif isinstance(obj, datetime.datetime): print(f'.Date({obj.timestamp()})', file=file, end='') elif", "elif isinstance(obj, (list, tuple)): print(f'[', file=file, end='') if len(obj) > 0: for o", "[:] : \"Map of ECMAScript.\" ] ''' obj = loads(s) pprint.pprint(obj) s =", "file) print(',', file=file, end='') dump(ks[-1], file) print(':', file=file, end='') dump(obj[ks[-1]], file) print(']', file=file,", "return visitor.visit(tree) def str_esc(s): for o, n in [('\"', '\\\\\"'), ('\\n', '\\\\n'), ('\\r',", "res raise TypeError( f\"Object of type '{obj.__class__.__name__}' is not SION serializable\") if __name__", "for k in ks[:-1]: res += dumps(k) + ':' + str(obj[k]) + ','", "def str_esc(s): for o, n in [('\"', '\\\\\"'), ('\\n', '\\\\n'), ('\\r', '\\\\r'), ('\\\\',", "dumps(ks[0]) + ':' + dumps(obj[ks[0]]) else: for k in ks[:-1]: res += dumps(k)", "file=file, end='') elif isinstance(obj, str): print(f'\"{str_esc(obj)}\"', file=file, end='') elif isinstance(obj, (bytes, bytearray)): print(f'.Data(\"{str(obj)[2:-1]}\")',", "tuple)): res = '[' if len(obj) > 0: for o in obj[:-1]: res", ": \"Yes, SION\", 1 : \"does accept\", 1.0 : \"non-String keys.\", [] :", "accept\", 1.0 : \"non-String keys.\", [] : \"like\", [:] : \"Map of 
ECMAScript.\"", "dumps(obj[-1]) res += ']' return res if isinstance(obj, dict): res = '[' ks", "res += dumps(ks[-1]) + ':' + dumps(obj[ks[-1]]) res += ']' return res raise", "+= ']' return res raise TypeError( f\"Object of type '{obj.__class__.__name__}' is not SION", "if __name__ == '__main__': import pprint if len(sys.argv) > 1: filename = sys.argv[1]", "1 : \"does accept\", 1.0 : \"non-String keys.\", [] : \"like\", [:] :", "with open(filename) as f: obj = load(f) pprint.pprint(obj) with open('../test/output.sion', 'w') as f:", "is None: print('nil', file=file, end='') elif isinstance(obj, bool): if obj: print('ture', file=file, end='')", "file) print(':', file=file, end='') dump(obj[ks[-1]], file) print(']', file=file, end='') else: raise TypeError( f\"Object", "end='') elif isinstance(obj, dict): print('[', file=file, end='') ks = list(obj.keys()) if len(ks) ==", "file=file, end='') dump(obj[ks[0]], file) else: for k in ks[:-1]: dump(k, file) print(':', file=file,", "isinstance(obj, bool): if obj: return 'true' return 'false' if isinstance(obj, (int, float)): return", ": \"Map of ECMAScript.\" ] ''' obj = loads(s) pprint.pprint(obj) s = dumps(obj)", ".Date(0x0p+0), \"dictionary\" : [ \"array\" : [], \"bool\" : false, \"double\" : 0x0p+0,", "print('nil', file=file, end='') elif isinstance(obj, bool): if obj: print('ture', file=file, end='') else: print('false',", "2018/07/21. # Copyright © 2018 kamimura. All rights reserved. 
import sys import datetime", "SIONParser from SIONVisitor import SIONVisitor def load(file, encoding: str='utf-8', errors: str='strict') -> object:", "SIONVisitor def load(file, encoding: str='utf-8', errors: str='strict') -> object: data = file.read() if", "1: res += dumps(ks[0]) + ':' + dumps(obj[ks[0]]) else: for k in ks[:-1]:", "tree = parser.si_self() visitor = SIONVisitor() return visitor.visit(tree) def loads(s): if isinstance(s, (bytes,", "InputStream(s) lexer = SIONLexer(stream) tokens = CommonTokenStream(lexer) parser = SIONParser(tokens) tree = parser.si_self()", "nil, \"object\" : [:], \"string\" : \"\" ], \"double\" : 0x1.518f5c28f5c29p+5, // Double", "dumps(o) + ',' res += dumps(obj[-1]) res += ']' return res if isinstance(obj,", "is not SION serializable\") def dumps(obj: object): if obj is None: return 'nil'", "], \"double\" : 0x1.518f5c28f5c29p+5, // Double in hexadecimal \"int\" : -0x2a, // Int", "SION serializable\") if __name__ == '__main__': import pprint if len(sys.argv) > 1: filename", "ks = list(obj.keys()) if len(ks) == 0: print(':', file=file, end='') elif len(ks) ==", "ks = list(obj.keys()) if len(ks) == 0: res += ':' elif len(ks) ==", "in obj[:-1]: res += dumps(o) + ',' res += dumps(obj[-1]) res += ']'", "print(':', file=file, end='') dump(obj[k], file) print(',', file=file, end='') dump(ks[-1], file) print(':', file=file, end='')", "isinstance(data, (bytes, bytearray)): data = data.decode(encoding, errors) stream = InputStream(data) lexer = SIONLexer(stream)", "res += dumps(ks[0]) + ':' + dumps(obj[ks[0]]) else: for k in ks[:-1]: res", "\"int\" : -0x2a, // Int in hexadecimal \"nil\" : nil, \"string\" : \"漢字、カタカナ、ひらがなの入ったstring😇\",", "dumps(k) + ':' + str(obj[k]) + ',' res += dumps(ks[-1]) + ':' +", "str_esc(s): for o, n in [('\"', '\\\\\"'), ('\\n', '\\\\n'), ('\\r', '\\\\r'), ('\\\\', '\\\\\\\\')]:", "bytearray)): data = data.decode(encoding, errors) stream = InputStream(data) lexer = SIONLexer(stream) tokens =", "else: for k in 
ks[:-1]: dump(k, file) print(':', file=file, end='') dump(obj[k], file) print(',',", "[('\"', '\\\\\"'), ('\\n', '\\\\n'), ('\\r', '\\\\r'), ('\\\\', '\\\\\\\\')]: s = s.replace(o, n) return", "file=file, end='') ks = list(obj.keys()) if len(ks) == 0: print(':', file=file, end='') elif", ": [ nil, true, 1, // Int in decimal 1.0, // Double in", "res += dumps(obj[-1]) res += ']' return res if isinstance(obj, dict): res =", "tokens = CommonTokenStream(lexer) parser = SIONParser(tokens) tree = parser.si_self() visitor = SIONVisitor() return", "float)): return str(obj) if isinstance(obj, str): return f'\"{str_esc(obj)}\"' if isinstance(obj, (bytes, bytearray)): return", "def dumps(obj: object): if obj is None: return 'nil' if isinstance(obj, bool): if", "',' res += dumps(obj[-1]) res += ']' return res if isinstance(obj, dict): res", "nil, \"string\" : \"漢字、カタカナ、ひらがなの入ったstring😇\", \"url\" : \"https://github.com/dankogai/\", nil : \"Unlike JSON and Property", "res = '[' if len(obj) > 0: for o in obj[:-1]: res +=", "isinstance(obj, dict): res = '[' ks = list(obj.keys()) if len(ks) == 0: res", "isinstance(obj, (list, tuple)): res = '[' if len(obj) > 0: for o in", "1: dump(ks[0], file) print(':', file=file, end='') dump(obj[ks[0]], file) else: for k in ks[:-1]:", "* from SIONLexer import SIONLexer from SIONParser import SIONParser from SIONVisitor import SIONVisitor", "pprint.pprint(obj) with open('../test/output.sion', 'w') as f: dump(obj, f) s = ''' [ \"array\"", "from SIONParser import SIONParser from SIONVisitor import SIONVisitor def load(file, encoding: str='utf-8', errors:", "on 2018/07/21. # Copyright © 2018 kamimura. All rights reserved. 
import sys import", "dump(obj[-1], file) print(']', file=file, end='') elif isinstance(obj, dict): print('[', file=file, end='') ks =", "end='') else: raise TypeError( f\"Object of type '{obj.__class__.__name__}' is not SION serializable\") def", "SIONParser(tokens) tree = parser.si_self() visitor = SIONVisitor() return visitor.visit(tree) def loads(s): if isinstance(s,", "open(filename) as f: obj = load(f) pprint.pprint(obj) with open('../test/output.sion', 'w') as f: dump(obj,", "= SIONParser(tokens) tree = parser.si_self() visitor = SIONVisitor() return visitor.visit(tree) def loads(s): if", "SIONVisitor() return visitor.visit(tree) def str_esc(s): for o, n in [('\"', '\\\\\"'), ('\\n', '\\\\n'),", "end='') elif isinstance(obj, (bytes, bytearray)): print(f'.Data(\"{str(obj)[2:-1]}\")', file=file, end='') elif isinstance(obj, datetime.datetime): print(f'.Date({obj.timestamp()})', file=file,", "end='') elif isinstance(obj, (int, float)): print(obj, file=file, end='') elif isinstance(obj, str): print(f'\"{str_esc(obj)}\"', file=file,", "isinstance(obj, (int, float)): return str(obj) if isinstance(obj, str): return f'\"{str_esc(obj)}\"' if isinstance(obj, (bytes,", "\"array\" : [], \"bool\" : false, \"double\" : 0x0p+0, \"int\" : 0, \"nil\"", "\"https://github.com/dankogai/\", nil : \"Unlike JSON and Property Lists,\", true : \"Yes, SION\", 1", "elif isinstance(obj, dict): print('[', file=file, end='') ks = list(obj.keys()) if len(ks) == 0:", "is None: return 'nil' if isinstance(obj, bool): if obj: return 'true' return 'false'", "== 1: dump(ks[0], file) print(':', file=file, end='') dump(obj[ks[0]], file) else: for k in", "\"nil\" : nil, \"string\" : \"漢字、カタカナ、ひらがなの入ったstring😇\", \"url\" : \"https://github.com/dankogai/\", nil : \"Unlike JSON", ": \"https://github.com/dankogai/\", nil : \"Unlike JSON and Property Lists,\", true : \"Yes, SION\",", "dump(obj, f) s = ''' [ \"array\" : [ nil, true, 1, //", "end='') else: print('false', file=file, end='') elif 
isinstance(obj, (int, float)): print(obj, file=file, end='') elif", "data = file.read() if isinstance(data, (bytes, bytearray)): data = data.decode(encoding, errors) stream =", "\"one\", [1], [\"one\" : 1.0] ], \"bool\" : true, \"data\" : .Data(\"R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7\"), \"date\"", "\"int\" : 0, \"nil\" : nil, \"object\" : [:], \"string\" : \"\" ],", ": [ \"array\" : [], \"bool\" : false, \"double\" : 0x0p+0, \"int\" :", "__name__ == '__main__': import pprint if len(sys.argv) > 1: filename = sys.argv[1] else:", "f\"Object of type '{obj.__class__.__name__}' is not SION serializable\") def dumps(obj: object): if obj", "in [('\"', '\\\\\"'), ('\\n', '\\\\n'), ('\\r', '\\\\r'), ('\\\\', '\\\\\\\\')]: s = s.replace(o, n)", "(bytes, bytearray)): s = s.decode() stream = InputStream(s) lexer = SIONLexer(stream) tokens =", "len(ks) == 1: res += dumps(ks[0]) + ':' + dumps(obj[ks[0]]) else: for k", "print(f'\"{str_esc(obj)}\"', file=file, end='') elif isinstance(obj, (bytes, bytearray)): print(f'.Data(\"{str(obj)[2:-1]}\")', file=file, end='') elif isinstance(obj, datetime.datetime):", "']' return res if isinstance(obj, dict): res = '[' ks = list(obj.keys()) if", "isinstance(obj, str): return f'\"{str_esc(obj)}\"' if isinstance(obj, (bytes, bytearray)): return f'.Data(\"{str(obj)[2:-1]}\")' if isinstance(obj, datetime.datetime):", "= sys.argv[1] else: filename = '../test/t.sion' with open(filename) as f: obj = load(f)", "= s.decode() stream = InputStream(s) lexer = SIONLexer(stream) tokens = CommonTokenStream(lexer) parser =", ": \"does accept\", 1.0 : \"non-String keys.\", [] : \"like\", [:] : \"Map", "1.0] ], \"bool\" : true, \"data\" : .Data(\"R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7\"), \"date\" : .Date(0x0p+0), \"dictionary\" :", "'w') as f: dump(obj, f) s = ''' [ \"array\" : [ nil,", "in decimal 1.0, // Double in decimal \"one\", [1], [\"one\" : 1.0] ],", "TypeError( f\"Object of type '{obj.__class__.__name__}' is not 
SION serializable\") def dumps(obj: object): if", "0x1.518f5c28f5c29p+5, // Double in hexadecimal \"int\" : -0x2a, // Int in hexadecimal \"nil\"", "res += ']' return res if isinstance(obj, dict): res = '[' ks =", "str='utf-8', errors: str='strict') -> object: data = file.read() if isinstance(data, (bytes, bytearray)): data", "return 'true' return 'false' if isinstance(obj, (int, float)): return str(obj) if isinstance(obj, str):", "= file.read() if isinstance(data, (bytes, bytearray)): data = data.decode(encoding, errors) stream = InputStream(data)", "print(obj, file=file, end='') elif isinstance(obj, str): print(f'\"{str_esc(obj)}\"', file=file, end='') elif isinstance(obj, (bytes, bytearray)):", "file=file, end='') dump(obj[k], file) print(',', file=file, end='') dump(ks[-1], file) print(':', file=file, end='') dump(obj[ks[-1]],", "if isinstance(obj, dict): res = '[' ks = list(obj.keys()) if len(ks) == 0:", ": .Data(\"R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7\"), \"date\" : .Date(0x0p+0), \"dictionary\" : [ \"array\" : [], \"bool\" :", "end='') dump(ks[-1], file) print(':', file=file, end='') dump(obj[ks[-1]], file) print(']', file=file, end='') else: raise", "def dump(obj, file): if obj is None: print('nil', file=file, end='') elif isinstance(obj, bool):", "filename = sys.argv[1] else: filename = '../test/t.sion' with open(filename) as f: obj =", "end='') dump(obj[ks[0]], file) else: for k in ks[:-1]: dump(k, file) print(':', file=file, end='')", "\"dictionary\" : [ \"array\" : [], \"bool\" : false, \"double\" : 0x0p+0, \"int\"", "end='') elif isinstance(obj, bool): if obj: print('ture', file=file, end='') else: print('false', file=file, end='')", "\"string\" : \"漢字、カタカナ、ひらがなの入ったstring😇\", \"url\" : \"https://github.com/dankogai/\", nil : \"Unlike JSON and Property Lists,\",", "print(':', file=file, end='') elif len(ks) == 1: dump(ks[0], file) print(':', file=file, end='') dump(obj[ks[0]],", "// Double in hexadecimal \"int\" : -0x2a, // Int in 
hexadecimal \"nil\" :", "obj: return 'true' return 'false' if isinstance(obj, (int, float)): return str(obj) if isinstance(obj,", "of type '{obj.__class__.__name__}' is not SION serializable\") if __name__ == '__main__': import pprint", "object: data = file.read() if isinstance(data, (bytes, bytearray)): data = data.decode(encoding, errors) stream", "len(sys.argv) > 1: filename = sys.argv[1] else: filename = '../test/t.sion' with open(filename) as", "errors: str='strict') -> object: data = file.read() if isinstance(data, (bytes, bytearray)): data =", "str): return f'\"{str_esc(obj)}\"' if isinstance(obj, (bytes, bytearray)): return f'.Data(\"{str(obj)[2:-1]}\")' if isinstance(obj, datetime.datetime): return", "file) print(':', file=file, end='') dump(obj[ks[0]], file) else: for k in ks[:-1]: dump(k, file)", "s def dump(obj, file): if obj is None: print('nil', file=file, end='') elif isinstance(obj,", "\"漢字、カタカナ、ひらがなの入ったstring😇\", \"url\" : \"https://github.com/dankogai/\", nil : \"Unlike JSON and Property Lists,\", true :", "print('ture', file=file, end='') else: print('false', file=file, end='') elif isinstance(obj, (int, float)): print(obj, file=file,", "print('false', file=file, end='') elif isinstance(obj, (int, float)): print(obj, file=file, end='') elif isinstance(obj, str):", "type '{obj.__class__.__name__}' is not SION serializable\") if __name__ == '__main__': import pprint if", ": true, \"data\" : .Data(\"R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7\"), \"date\" : .Date(0x0p+0), \"dictionary\" : [ \"array\" :", "parser = SIONParser(tokens) tree = parser.si_self() visitor = SIONVisitor() return visitor.visit(tree) def str_esc(s):", "of type '{obj.__class__.__name__}' is not SION serializable\") def dumps(obj: object): if obj is", "return visitor.visit(tree) def loads(s): if isinstance(s, (bytes, bytearray)): s = s.decode() stream =", "= load(f) pprint.pprint(obj) with open('../test/output.sion', 'w') as f: dump(obj, f) s = '''", "dict): res = 
'[' ks = list(obj.keys()) if len(ks) == 0: res +=", "not SION serializable\") if __name__ == '__main__': import pprint if len(sys.argv) > 1:", "2018 kamimura. All rights reserved. import sys import datetime from antlr4 import *", ": false, \"double\" : 0x0p+0, \"int\" : 0, \"nil\" : nil, \"object\" :", ": [:], \"string\" : \"\" ], \"double\" : 0x1.518f5c28f5c29p+5, // Double in hexadecimal", "bytearray)): print(f'.Data(\"{str(obj)[2:-1]}\")', file=file, end='') elif isinstance(obj, datetime.datetime): print(f'.Date({obj.timestamp()})', file=file, end='') elif isinstance(obj, (list,", "[1], [\"one\" : 1.0] ], \"bool\" : true, \"data\" : .Data(\"R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7\"), \"date\" :", "f) s = ''' [ \"array\" : [ nil, true, 1, // Int", "for o, n in [('\"', '\\\\\"'), ('\\n', '\\\\n'), ('\\r', '\\\\r'), ('\\\\', '\\\\\\\\')]: s", "nil, true, 1, // Int in decimal 1.0, // Double in decimal \"one\",", "serializable\") def dumps(obj: object): if obj is None: return 'nil' if isinstance(obj, bool):", "res += dumps(o) + ',' res += dumps(obj[-1]) res += ']' return res", "dumps(obj[ks[0]]) else: for k in ks[:-1]: res += dumps(k) + ':' + str(obj[k])", "= parser.si_self() visitor = SIONVisitor() return visitor.visit(tree) def str_esc(s): for o, n in", "('\\n', '\\\\n'), ('\\r', '\\\\r'), ('\\\\', '\\\\\\\\')]: s = s.replace(o, n) return s def", "import sys import datetime from antlr4 import * from SIONLexer import SIONLexer from", "dump(k, file) print(':', file=file, end='') dump(obj[k], file) print(',', file=file, end='') dump(ks[-1], file) print(':',", "> 0: for o in obj[:-1]: res += dumps(o) + ',' res +=", "file.read() if isinstance(data, (bytes, bytearray)): data = data.decode(encoding, errors) stream = InputStream(data) lexer", "= data.decode(encoding, errors) stream = InputStream(data) lexer = SIONLexer(stream) tokens = CommonTokenStream(lexer) parser", "dict): print('[', file=file, end='') ks = list(obj.keys()) if len(ks) == 0: print(':', 
file=file,", "return res if isinstance(obj, dict): res = '[' ks = list(obj.keys()) if len(ks)", ".Data(\"R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7\"), \"date\" : .Date(0x0p+0), \"dictionary\" : [ \"array\" : [], \"bool\" : false,", "\"\" ], \"double\" : 0x1.518f5c28f5c29p+5, // Double in hexadecimal \"int\" : -0x2a, //", "else: filename = '../test/t.sion' with open(filename) as f: obj = load(f) pprint.pprint(obj) with", "not SION serializable\") def dumps(obj: object): if obj is None: return 'nil' if", "from SIONVisitor import SIONVisitor def load(file, encoding: str='utf-8', errors: str='strict') -> object: data", "+= dumps(ks[-1]) + ':' + dumps(obj[ks[-1]]) res += ']' return res raise TypeError(", "isinstance(obj, str): print(f'\"{str_esc(obj)}\"', file=file, end='') elif isinstance(obj, (bytes, bytearray)): print(f'.Data(\"{str(obj)[2:-1]}\")', file=file, end='') elif", "= InputStream(data) lexer = SIONLexer(stream) tokens = CommonTokenStream(lexer) parser = SIONParser(tokens) tree =", "and Property Lists,\", true : \"Yes, SION\", 1 : \"does accept\", 1.0 :", "SIONVisitor import SIONVisitor def load(file, encoding: str='utf-8', errors: str='strict') -> object: data =", "+= ':' elif len(ks) == 1: res += dumps(ks[0]) + ':' + dumps(obj[ks[0]])", "if len(obj) > 0: for o in obj[:-1]: dump(o, file) print(',', file=file, end='')", "file) print(']', file=file, end='') elif isinstance(obj, dict): print('[', file=file, end='') ks = list(obj.keys())", "Property Lists,\", true : \"Yes, SION\", 1 : \"does accept\", 1.0 : \"non-String", "dump(obj[ks[-1]], file) print(']', file=file, end='') else: raise TypeError( f\"Object of type '{obj.__class__.__name__}' is", "data.decode(encoding, errors) stream = InputStream(data) lexer = SIONLexer(stream) tokens = CommonTokenStream(lexer) parser =", ": \"\" ], \"double\" : 0x1.518f5c28f5c29p+5, // Double in hexadecimal \"int\" : -0x2a,", "''' [ \"array\" : [ nil, true, 1, // Int in decimal 1.0,", "len(obj) > 0: for o in 
obj[:-1]: dump(o, file) print(',', file=file, end='') dump(obj[-1],", "CommonTokenStream(lexer) parser = SIONParser(tokens) tree = parser.si_self() visitor = SIONVisitor() return visitor.visit(tree) def", "o in obj[:-1]: dump(o, file) print(',', file=file, end='') dump(obj[-1], file) print(']', file=file, end='')", "SIONVisitor() return visitor.visit(tree) def loads(s): if isinstance(s, (bytes, bytearray)): s = s.decode() stream", "== 1: res += dumps(ks[0]) + ':' + dumps(obj[ks[0]]) else: for k in", "'{obj.__class__.__name__}' is not SION serializable\") def dumps(obj: object): if obj is None: return", "list(obj.keys()) if len(ks) == 0: print(':', file=file, end='') elif len(ks) == 1: dump(ks[0],", "= SIONLexer(stream) tokens = CommonTokenStream(lexer) parser = SIONParser(tokens) tree = parser.si_self() visitor =", "('\\r', '\\\\r'), ('\\\\', '\\\\\\\\')]: s = s.replace(o, n) return s def dump(obj, file):", "loads(s): if isinstance(s, (bytes, bytearray)): s = s.decode() stream = InputStream(s) lexer =" ]
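The module above cannot run without the generated ANTLR classes (SIONLexer, SIONParser, SIONVisitor). As a standalone sketch of the output side only — the names `escape` and `sion_dumps` are mine, and several cases the module handles (bytes, dates, hex literals) are omitted — the serialization logic boils down to:

```python
def escape(s):
    # backslash must be replaced first, so the backslashes introduced by
    # the later replacements are not escaped a second time
    for old, new in [('\\', '\\\\'), ('"', '\\"'), ('\n', '\\n'), ('\r', '\\r')]:
        s = s.replace(old, new)
    return s


def sion_dumps(obj):
    # minimal SION-style serializer: SION uses [ ... ] for both arrays and
    # dictionaries, [:] for the empty dictionary, and nil for None
    if obj is None:
        return 'nil'
    if isinstance(obj, bool):          # check bool before int: True is an int
        return 'true' if obj else 'false'
    if isinstance(obj, (int, float)):
        return str(obj)
    if isinstance(obj, str):
        return f'"{escape(obj)}"'
    if isinstance(obj, (list, tuple)):
        return '[' + ','.join(sion_dumps(o) for o in obj) + ']'
    if isinstance(obj, dict):
        if not obj:
            return '[:]'
        return '[' + ','.join(f'{sion_dumps(k)}:{sion_dumps(v)}'
                              for k, v in obj.items()) + ']'
    raise TypeError(f"not SION serializable: {obj.__class__.__name__}")
```

For example, `sion_dumps({'a': [1, True, None]})` yields `["a":[1,true,nil]]` — note the non-JSON bracket syntax for dictionaries.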
# kottenator/code.kottenator.com
from django.contrib import admin
from django.urls import path, include

import project.auth.urls
import project.core.urls
import project.projects.urls
from project.core.views import bad_request, permission_denied, page_not_found, server_error

urlpatterns = [
    path('admin/', admin.site.urls),
    path('projects/', include(project.projects.urls)),
    path('', include(project.auth.urls)),
    path('', include(project.core.urls)),
]

handler400 = bad_request
handler403 = permission_denied
handler404 = page_not_found
handler500 = server_error
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from matplotlib import style
import os
from os import path
from matplotlib.font_manager import fontManager

# chart coordinate system
[ "rdf date within the framework. :copyright: Copyright (c) 2016 by <NAME> and <NAME>.", "framework. :copyright: Copyright (c) 2016 by <NAME> and <NAME>. :license: To be determined,", ".propertyprocessors import PropertyProcessor from .classprocessors import ClassProcessor __author__ = \"<NAME>, <NAME>\" __version__ =", "RDF Proccessors =============== Processors are used to manipulate rdf date within the framework.", "\"\"\" RDF Proccessors =============== Processors are used to manipulate rdf date within the", "the framework. :copyright: Copyright (c) 2016 by <NAME> and <NAME>. :license: To be", "2016 by <NAME> and <NAME>. :license: To be determined, see LICENSE.txt for details.", "To be determined, see LICENSE.txt for details. \"\"\" from .propertyprocessors import PropertyProcessor from", "see LICENSE.txt for details. \"\"\" from .propertyprocessors import PropertyProcessor from .classprocessors import ClassProcessor", "import PropertyProcessor from .classprocessors import ClassProcessor __author__ = \"<NAME>, <NAME>\" __version__ = '0.0.1'", "(c) 2016 by <NAME> and <NAME>. :license: To be determined, see LICENSE.txt for", "<NAME> and <NAME>. :license: To be determined, see LICENSE.txt for details. \"\"\" from", ":license: To be determined, see LICENSE.txt for details. \"\"\" from .propertyprocessors import PropertyProcessor", "determined, see LICENSE.txt for details. \"\"\" from .propertyprocessors import PropertyProcessor from .classprocessors import", "Copyright (c) 2016 by <NAME> and <NAME>. :license: To be determined, see LICENSE.txt", "be determined, see LICENSE.txt for details. \"\"\" from .propertyprocessors import PropertyProcessor from .classprocessors", "details. \"\"\" from .propertyprocessors import PropertyProcessor from .classprocessors import ClassProcessor __author__ = \"<NAME>,", "and <NAME>. :license: To be determined, see LICENSE.txt for details. \"\"\" from .propertyprocessors", "date within the framework. 
:copyright: Copyright (c) 2016 by <NAME> and <NAME>. :license:", "are used to manipulate rdf date within the framework. :copyright: Copyright (c) 2016", "within the framework. :copyright: Copyright (c) 2016 by <NAME> and <NAME>. :license: To", "Processors are used to manipulate rdf date within the framework. :copyright: Copyright (c)", "by <NAME> and <NAME>. :license: To be determined, see LICENSE.txt for details. \"\"\"", "to manipulate rdf date within the framework. :copyright: Copyright (c) 2016 by <NAME>", "used to manipulate rdf date within the framework. :copyright: Copyright (c) 2016 by", "manipulate rdf date within the framework. :copyright: Copyright (c) 2016 by <NAME> and", "=============== Processors are used to manipulate rdf date within the framework. :copyright: Copyright", "Proccessors =============== Processors are used to manipulate rdf date within the framework. :copyright:", "LICENSE.txt for details. \"\"\" from .propertyprocessors import PropertyProcessor from .classprocessors import ClassProcessor __author__", "from .propertyprocessors import PropertyProcessor from .classprocessors import ClassProcessor __author__ = \"<NAME>, <NAME>\" __version__", ":copyright: Copyright (c) 2016 by <NAME> and <NAME>. :license: To be determined, see", "for details. \"\"\" from .propertyprocessors import PropertyProcessor from .classprocessors import ClassProcessor __author__ =", "<NAME>. :license: To be determined, see LICENSE.txt for details. \"\"\" from .propertyprocessors import", "\"\"\" from .propertyprocessors import PropertyProcessor from .classprocessors import ClassProcessor __author__ = \"<NAME>, <NAME>\"" ]
[ "return isinstance(value, float) and math.isnan(value) def _filter_by_null_values(self, result, column_name): def column_value_is_null(item): value, item_parser", "None and self._is_none(value) return self._in_place_filter(column_value_is_null, result) def _filter_by_not_null_values(self, result, column_name): def column_value_is_not_null(item): value,", "<reponame>DeepLearnI/atlas from foundations_rest_api.filters.api_filter_mixin import APIFilterMixin class NullFilter(APIFilterMixin): def __call__(self, result, params): if result", "math return isinstance(value, float) and math.isnan(value) def _filter_by_null_values(self, result, column_name): def column_value_is_null(item): value,", "APIFilterMixin class NullFilter(APIFilterMixin): def __call__(self, result, params): if result and isinstance(result, list): new_params", "and in that case filtering is discarded if value is True: self._filter_by_null_values(result, column_name)", "return value is None or self._is_nan(value) def _is_nan(self, value): import math return isinstance(value,", "value is None or self._is_nan(value) def _is_nan(self, value): import math return isinstance(value, float)", "and isinstance(result, list): new_params = {key: value for key, value in params.items() if", "value in params.items() if key.endswith('_isnull')} if new_params: self._filter(result, new_params) return result def _filter(self,", "param_value in params.items(): column_name = key.split('_isnull', 1)[0] value = self._parse_value(param_value) if value is", "result, column_name): def column_value_is_not_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is", "in params.items() if key.endswith('_isnull')} if new_params: self._filter(result, new_params) return result def _filter(self, result,", "column_name, parse=False) return item_parser is not None and self._is_none(value) return self._in_place_filter(column_value_is_null, result) def", "BoolParser 
parser = BoolParser() return parser.parse(param_value) def _filter_column(self, result, column_name, value): # Explicit", "from foundations_rest_api.filters.api_filter_mixin import APIFilterMixin class NullFilter(APIFilterMixin): def __call__(self, result, params): if result and", "_parse_value(self, param_value): from foundations_rest_api.filters.parsers import BoolParser parser = BoolParser() return parser.parse(param_value) def _filter_column(self,", "better than implicit [Zen of Python, 1] # This is because \"value\" can", "self._filter_by_null_values(result, column_name) elif value is False: self._filter_by_not_null_values(result, column_name) def _is_none(self, value): return value", "self._in_place_filter(column_value_is_null, result) def _filter_by_not_null_values(self, result, column_name): def column_value_is_not_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name,", "= {key: value for key, value in params.items() if key.endswith('_isnull')} if new_params: self._filter(result,", "of Python, 1] # This is because \"value\" can also be None and", "parser = BoolParser() return parser.parse(param_value) def _filter_column(self, result, column_name, value): # Explicit is", "not None and self._is_none(value) return self._in_place_filter(column_value_is_null, result) def _filter_by_not_null_values(self, result, column_name): def column_value_is_not_null(item):", "value): # Explicit is better than implicit [Zen of Python, 1] # This", "This is because \"value\" can also be None and in that case filtering", "def _parse_value(self, param_value): from foundations_rest_api.filters.parsers import BoolParser parser = BoolParser() return parser.parse(param_value) def", "and math.isnan(value) def _filter_by_null_values(self, result, column_name): def column_value_is_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name,", "item_parser = self._get_item_property_value_and_parser(item, column_name, 
parse=False) return item_parser is not None and self._is_none(value) return", "= BoolParser() return parser.parse(param_value) def _filter_column(self, result, column_name, value): # Explicit is better", "result) def _filter_by_not_null_values(self, result, column_name): def column_value_is_not_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False)", "foundations_rest_api.filters.api_filter_mixin import APIFilterMixin class NullFilter(APIFilterMixin): def __call__(self, result, params): if result and isinstance(result,", "value) def _parse_value(self, param_value): from foundations_rest_api.filters.parsers import BoolParser parser = BoolParser() return parser.parse(param_value)", "None and in that case filtering is discarded if value is True: self._filter_by_null_values(result,", "params): for key, param_value in params.items(): column_name = key.split('_isnull', 1)[0] value = self._parse_value(param_value)", "result, column_name): def column_value_is_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is", "list): new_params = {key: value for key, value in params.items() if key.endswith('_isnull')} if", "def __call__(self, result, params): if result and isinstance(result, list): new_params = {key: value", "= key.split('_isnull', 1)[0] value = self._parse_value(param_value) if value is not None: self._filter_column(result, column_name,", "Python, 1] # This is because \"value\" can also be None and in", "value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is not None and not", "is False: self._filter_by_not_null_values(result, column_name) def _is_none(self, value): return value is None or self._is_nan(value)", "new_params: self._filter(result, new_params) return result def _filter(self, result, params): for key, param_value in", "foundations_rest_api.filters.parsers import BoolParser 
parser = BoolParser() return parser.parse(param_value) def _filter_column(self, result, column_name, value):", "def column_value_is_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is not None", "_filter_by_not_null_values(self, result, column_name): def column_value_is_not_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser", "is None or self._is_nan(value) def _is_nan(self, value): import math return isinstance(value, float) and", "column_name = key.split('_isnull', 1)[0] value = self._parse_value(param_value) if value is not None: self._filter_column(result,", "column_name, value): # Explicit is better than implicit [Zen of Python, 1] #", "True: self._filter_by_null_values(result, column_name) elif value is False: self._filter_by_not_null_values(result, column_name) def _is_none(self, value): return", "column_name, parse=False) return item_parser is not None and not self._is_none(value) return self._in_place_filter(column_value_is_not_null, result)", "_is_nan(self, value): import math return isinstance(value, float) and math.isnan(value) def _filter_by_null_values(self, result, column_name):", "= self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is not None and not self._is_none(value) return", "\"value\" can also be None and in that case filtering is discarded if", "column_name): def column_value_is_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is not", "if new_params: self._filter(result, new_params) return result def _filter(self, result, params): for key, param_value", "and self._is_none(value) return self._in_place_filter(column_value_is_null, result) def _filter_by_not_null_values(self, result, column_name): def column_value_is_not_null(item): value, item_parser", "item_parser is not None 
and self._is_none(value) return self._in_place_filter(column_value_is_null, result) def _filter_by_not_null_values(self, result, column_name):", "for key, value in params.items() if key.endswith('_isnull')} if new_params: self._filter(result, new_params) return result", "result and isinstance(result, list): new_params = {key: value for key, value in params.items()", "that case filtering is discarded if value is True: self._filter_by_null_values(result, column_name) elif value", "self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is not None and self._is_none(value) return self._in_place_filter(column_value_is_null, result)", "def _filter_by_null_values(self, result, column_name): def column_value_is_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return", "self._is_nan(value) def _is_nan(self, value): import math return isinstance(value, float) and math.isnan(value) def _filter_by_null_values(self,", "column_value_is_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is not None and", "isinstance(result, list): new_params = {key: value for key, value in params.items() if key.endswith('_isnull')}", "_filter_column(self, result, column_name, value): # Explicit is better than implicit [Zen of Python,", "in params.items(): column_name = key.split('_isnull', 1)[0] value = self._parse_value(param_value) if value is not", "def _is_none(self, value): return value is None or self._is_nan(value) def _is_nan(self, value): import", "NullFilter(APIFilterMixin): def __call__(self, result, params): if result and isinstance(result, list): new_params = {key:", "BoolParser() return parser.parse(param_value) def _filter_column(self, result, column_name, value): # Explicit is better than", "self._parse_value(param_value) if value is not None: self._filter_column(result, column_name, value) def _parse_value(self, 
param_value): from", "is better than implicit [Zen of Python, 1] # This is because \"value\"", "parser.parse(param_value) def _filter_column(self, result, column_name, value): # Explicit is better than implicit [Zen", "value is not None: self._filter_column(result, column_name, value) def _parse_value(self, param_value): from foundations_rest_api.filters.parsers import", "column_name, value) def _parse_value(self, param_value): from foundations_rest_api.filters.parsers import BoolParser parser = BoolParser() return", "param_value): from foundations_rest_api.filters.parsers import BoolParser parser = BoolParser() return parser.parse(param_value) def _filter_column(self, result,", "because \"value\" can also be None and in that case filtering is discarded", "self._filter(result, new_params) return result def _filter(self, result, params): for key, param_value in params.items():", "_filter(self, result, params): for key, param_value in params.items(): column_name = key.split('_isnull', 1)[0] value", "params): if result and isinstance(result, list): new_params = {key: value for key, value", "params.items(): column_name = key.split('_isnull', 1)[0] value = self._parse_value(param_value) if value is not None:", "if result and isinstance(result, list): new_params = {key: value for key, value in", "if key.endswith('_isnull')} if new_params: self._filter(result, new_params) return result def _filter(self, result, params): for", "is not None: self._filter_column(result, column_name, value) def _parse_value(self, param_value): from foundations_rest_api.filters.parsers import BoolParser", "value is True: self._filter_by_null_values(result, column_name) elif value is False: self._filter_by_not_null_values(result, column_name) def _is_none(self,", "key.split('_isnull', 1)[0] value = self._parse_value(param_value) if value is not None: self._filter_column(result, column_name, value)", "= self._parse_value(param_value) if value is not None: self._filter_column(result, column_name, 
value) def _parse_value(self, param_value):", "isinstance(value, float) and math.isnan(value) def _filter_by_null_values(self, result, column_name): def column_value_is_null(item): value, item_parser =", "if value is True: self._filter_by_null_values(result, column_name) elif value is False: self._filter_by_not_null_values(result, column_name) def", "[Zen of Python, 1] # This is because \"value\" can also be None", "result, params): for key, param_value in params.items(): column_name = key.split('_isnull', 1)[0] value =", "# Explicit is better than implicit [Zen of Python, 1] # This is", "value = self._parse_value(param_value) if value is not None: self._filter_column(result, column_name, value) def _parse_value(self,", "new_params = {key: value for key, value in params.items() if key.endswith('_isnull')} if new_params:", "_filter_by_null_values(self, result, column_name): def column_value_is_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser", "key.endswith('_isnull')} if new_params: self._filter(result, new_params) return result def _filter(self, result, params): for key,", "column_name): def column_value_is_not_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is not", "import BoolParser parser = BoolParser() return parser.parse(param_value) def _filter_column(self, result, column_name, value): #", "result def _filter(self, result, params): for key, param_value in params.items(): column_name = key.split('_isnull',", "column_name) def _is_none(self, value): return value is None or self._is_nan(value) def _is_nan(self, value):", "value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is not None and self._is_none(value)", "can also be None and in that case filtering is discarded if value", "parse=False) return item_parser is not None and self._is_none(value) 
return self._in_place_filter(column_value_is_null, result) def _filter_by_not_null_values(self,", "also be None and in that case filtering is discarded if value is", "__call__(self, result, params): if result and isinstance(result, list): new_params = {key: value for", "def _filter_by_not_null_values(self, result, column_name): def column_value_is_not_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return", "filtering is discarded if value is True: self._filter_by_null_values(result, column_name) elif value is False:", "{key: value for key, value in params.items() if key.endswith('_isnull')} if new_params: self._filter(result, new_params)", "float) and math.isnan(value) def _filter_by_null_values(self, result, column_name): def column_value_is_null(item): value, item_parser = self._get_item_property_value_and_parser(item,", "return item_parser is not None and self._is_none(value) return self._in_place_filter(column_value_is_null, result) def _filter_by_not_null_values(self, result,", "column_value_is_not_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is not None and", "is because \"value\" can also be None and in that case filtering is", "None: self._filter_column(result, column_name, value) def _parse_value(self, param_value): from foundations_rest_api.filters.parsers import BoolParser parser =", "in that case filtering is discarded if value is True: self._filter_by_null_values(result, column_name) elif", "or self._is_nan(value) def _is_nan(self, value): import math return isinstance(value, float) and math.isnan(value) def", "return result def _filter(self, result, params): for key, param_value in params.items(): column_name =", "None or self._is_nan(value) def _is_nan(self, value): import math return isinstance(value, float) and math.isnan(value)", "1] # This is because \"value\" can also be None and in that", "than implicit [Zen 
of Python, 1] # This is because \"value\" can also", "column_name) elif value is False: self._filter_by_not_null_values(result, column_name) def _is_none(self, value): return value is", "is True: self._filter_by_null_values(result, column_name) elif value is False: self._filter_by_not_null_values(result, column_name) def _is_none(self, value):", "self._filter_column(result, column_name, value) def _parse_value(self, param_value): from foundations_rest_api.filters.parsers import BoolParser parser = BoolParser()", "is not None and self._is_none(value) return self._in_place_filter(column_value_is_null, result) def _filter_by_not_null_values(self, result, column_name): def", "self._is_none(value) return self._in_place_filter(column_value_is_null, result) def _filter_by_not_null_values(self, result, column_name): def column_value_is_not_null(item): value, item_parser =", "import math return isinstance(value, float) and math.isnan(value) def _filter_by_null_values(self, result, column_name): def column_value_is_null(item):", "new_params) return result def _filter(self, result, params): for key, param_value in params.items(): column_name", "False: self._filter_by_not_null_values(result, column_name) def _is_none(self, value): return value is None or self._is_nan(value) def", "import APIFilterMixin class NullFilter(APIFilterMixin): def __call__(self, result, params): if result and isinstance(result, list):", "value for key, value in params.items() if key.endswith('_isnull')} if new_params: self._filter(result, new_params) return", "result, params): if result and isinstance(result, list): new_params = {key: value for key,", "value is False: self._filter_by_not_null_values(result, column_name) def _is_none(self, value): return value is None or", "self._filter_by_not_null_values(result, column_name) def _is_none(self, value): return value is None or self._is_nan(value) def _is_nan(self,", "1)[0] value = self._parse_value(param_value) if value is not None: 
self._filter_column(result, column_name, value) def", "value): import math return isinstance(value, float) and math.isnan(value) def _filter_by_null_values(self, result, column_name): def", "def _filter(self, result, params): for key, param_value in params.items(): column_name = key.split('_isnull', 1)[0]", "implicit [Zen of Python, 1] # This is because \"value\" can also be", "be None and in that case filtering is discarded if value is True:", "class NullFilter(APIFilterMixin): def __call__(self, result, params): if result and isinstance(result, list): new_params =", "def column_value_is_not_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is not None", "def _filter_column(self, result, column_name, value): # Explicit is better than implicit [Zen of", "item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is not None and not self._is_none(value)", "Explicit is better than implicit [Zen of Python, 1] # This is because", "value): return value is None or self._is_nan(value) def _is_nan(self, value): import math return", "case filtering is discarded if value is True: self._filter_by_null_values(result, column_name) elif value is", "def _is_nan(self, value): import math return isinstance(value, float) and math.isnan(value) def _filter_by_null_values(self, result,", "self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is not None and not self._is_none(value) return self._in_place_filter(column_value_is_not_null,", "for key, param_value in params.items(): column_name = key.split('_isnull', 1)[0] value = self._parse_value(param_value) if", "return parser.parse(param_value) def _filter_column(self, result, column_name, value): # Explicit is better than implicit", "not None: self._filter_column(result, column_name, value) def _parse_value(self, param_value): from foundations_rest_api.filters.parsers import 
BoolParser parser", "return self._in_place_filter(column_value_is_null, result) def _filter_by_not_null_values(self, result, column_name): def column_value_is_not_null(item): value, item_parser = self._get_item_property_value_and_parser(item,", "# This is because \"value\" can also be None and in that case", "math.isnan(value) def _filter_by_null_values(self, result, column_name): def column_value_is_null(item): value, item_parser = self._get_item_property_value_and_parser(item, column_name, parse=False)", "if value is not None: self._filter_column(result, column_name, value) def _parse_value(self, param_value): from foundations_rest_api.filters.parsers", "result, column_name, value): # Explicit is better than implicit [Zen of Python, 1]", "key, param_value in params.items(): column_name = key.split('_isnull', 1)[0] value = self._parse_value(param_value) if value", "= self._get_item_property_value_and_parser(item, column_name, parse=False) return item_parser is not None and self._is_none(value) return self._in_place_filter(column_value_is_null,", "_is_none(self, value): return value is None or self._is_nan(value) def _is_nan(self, value): import math", "discarded if value is True: self._filter_by_null_values(result, column_name) elif value is False: self._filter_by_not_null_values(result, column_name)", "elif value is False: self._filter_by_not_null_values(result, column_name) def _is_none(self, value): return value is None", "params.items() if key.endswith('_isnull')} if new_params: self._filter(result, new_params) return result def _filter(self, result, params):", "from foundations_rest_api.filters.parsers import BoolParser parser = BoolParser() return parser.parse(param_value) def _filter_column(self, result, column_name,", "key, value in params.items() if key.endswith('_isnull')} if new_params: self._filter(result, new_params) return result def", "is discarded if value is True: self._filter_by_null_values(result, column_name) elif value is False: 
self._filter_by_not_null_values(result," ]
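NullFilter treats both Python's None and a float NaN as "null" (see `_is_none` and `_is_nan` above). A minimal standalone sketch of that null test — the `is_null` name here is ours, for illustration only:

```python
import math

def is_null(value):
    # Mirrors NullFilter._is_none: a value counts as null when it is
    # Python's None or a float NaN (NaN != NaN, so math.isnan is needed).
    return value is None or (isinstance(value, float) and math.isnan(value))
```

For example, `is_null(None)` and `is_null(float("nan"))` are true, while `is_null(0.0)` and `is_null("")` are false.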
[ "s2, s3, set(), 0, 0, 0) def _is_interleave(self, s1, s2, s3, memo, s1_start,", "s1 == p: if self._is_interleave(s1, s2, s3, memo, counter+1, other_start, i+1): return True", "s2: str :type s3: str :rtype: bool \"\"\" return self._is_interleave(s1, s2, s3, set(),", "len(s2) else None def _find_prefix(counter, p, other_start): for i in range(s3_start, len(s3)): if", "if _find_prefix(s1_counter, s1, s2_start): return True if _find_prefix(s2_counter, s2, s1_start): return True memo.add(encoding)", "s2_start >= len(s2) encoding = str(s1_start) + '|' + str(s2_start) + '|' +", "0, 0) def _is_interleave(self, s1, s2, s3, memo, s1_start, s2_start, s3_start): if s3_start", "_is_interleave(self, s1, s2, s3, memo, s1_start, s2_start, s3_start): if s3_start >= len(s3): return", "encoding = str(s1_start) + '|' + str(s2_start) + '|' + str(s3_start) if encoding", "other_start, i+1): return True else: if self._is_interleave(s1, s2, s3, memo, other_start, counter+1, i+1):", "memo, counter+1, other_start, i+1): return True else: if self._is_interleave(s1, s2, s3, memo, other_start,", "if s2_start < len(s2) else None def _find_prefix(counter, p, other_start): for i in", "s1_start >= len(s1) and s2_start >= len(s2) encoding = str(s1_start) + '|' +", "s1, s2, s3): \"\"\" :type s1: str :type s2: str :type s3: str", "if s3_start >= len(s3): return s1_start >= len(s1) and s2_start >= len(s2) encoding", "memo, other_start, counter+1, i+1): return True counter += 1 else: counter = None", "i+1): return True counter += 1 else: counter = None if counter is", "s2, s3, memo, s1_start, s2_start, s3_start): if s3_start >= len(s3): return s1_start >=", "counter = None if counter is not None and counter >= len(p): counter", "return True counter += 1 else: counter = None if counter is not", "'|' + str(s3_start) if encoding in memo: return False s1_counter = s1_start if", "+= 1 else: counter = None if counter is not None and counter", ">= len(s3): return s1_start >= len(s1) and s2_start >= len(s2) 
encoding = str(s1_start)", "range(s3_start, len(s3)): if counter is not None: if s3[i] == p[counter]: if s1", "s1_start if s1_start < len(s1) else None s2_counter = s2_start if s2_start <", "set(), 0, 0, 0) def _is_interleave(self, s1, s2, s3, memo, s1_start, s2_start, s3_start):", "p, other_start): for i in range(s3_start, len(s3)): if counter is not None: if", "if counter is not None: if s3[i] == p[counter]: if s1 == p:", "len(s1) else None s2_counter = s2_start if s2_start < len(s2) else None def", "s3[i] == p[counter]: if s1 == p: if self._is_interleave(s1, s2, s3, memo, counter+1,", "None def _find_prefix(counter, p, other_start): for i in range(s3_start, len(s3)): if counter is", "i+1): return True else: if self._is_interleave(s1, s2, s3, memo, other_start, counter+1, i+1): return", "counter+1, i+1): return True counter += 1 else: counter = None if counter", "+ str(s3_start) if encoding in memo: return False s1_counter = s1_start if s1_start", "True else: if self._is_interleave(s1, s2, s3, memo, other_start, counter+1, i+1): return True counter", "s2, s3): \"\"\" :type s1: str :type s2: str :type s3: str :rtype:", "return self._is_interleave(s1, s2, s3, set(), 0, 0, 0) def _is_interleave(self, s1, s2, s3,", "< len(s1) else None s2_counter = s2_start if s2_start < len(s2) else None", "if s3[i] == p[counter]: if s1 == p: if self._is_interleave(s1, s2, s3, memo,", "i in range(s3_start, len(s3)): if counter is not None: if s3[i] == p[counter]:", "None: if s3[i] == p[counter]: if s1 == p: if self._is_interleave(s1, s2, s3,", "other_start): for i in range(s3_start, len(s3)): if counter is not None: if s3[i]", "counter >= len(p): counter = None if _find_prefix(s1_counter, s1, s2_start): return True if", "+ str(s2_start) + '|' + str(s3_start) if encoding in memo: return False s1_counter", "Solution: def isInterleave(self, s1, s2, s3): \"\"\" :type s1: str :type s2: str", "False s1_counter = s1_start if s1_start < len(s1) else None s2_counter = s2_start", "not None: 
if s3[i] == p[counter]: if s1 == p: if self._is_interleave(s1, s2,", "self._is_interleave(s1, s2, s3, set(), 0, 0, 0) def _is_interleave(self, s1, s2, s3, memo,", "counter+1, other_start, i+1): return True else: if self._is_interleave(s1, s2, s3, memo, other_start, counter+1,", "_find_prefix(s1_counter, s1, s2_start): return True if _find_prefix(s2_counter, s2, s1_start): return True memo.add(encoding) return", "s2, s3, memo, other_start, counter+1, i+1): return True counter += 1 else: counter", "= str(s1_start) + '|' + str(s2_start) + '|' + str(s3_start) if encoding in", ":type s3: str :rtype: bool \"\"\" return self._is_interleave(s1, s2, s3, set(), 0, 0,", "in range(s3_start, len(s3)): if counter is not None: if s3[i] == p[counter]: if", "return True else: if self._is_interleave(s1, s2, s3, memo, other_start, counter+1, i+1): return True", "str :rtype: bool \"\"\" return self._is_interleave(s1, s2, s3, set(), 0, 0, 0) def", "counter is not None: if s3[i] == p[counter]: if s1 == p: if", "s2, s3, memo, counter+1, other_start, i+1): return True else: if self._is_interleave(s1, s2, s3,", "str(s3_start) if encoding in memo: return False s1_counter = s1_start if s1_start <", "def _find_prefix(counter, p, other_start): for i in range(s3_start, len(s3)): if counter is not", "s3): \"\"\" :type s1: str :type s2: str :type s3: str :rtype: bool", "in memo: return False s1_counter = s1_start if s1_start < len(s1) else None", "s3_start >= len(s3): return s1_start >= len(s1) and s2_start >= len(s2) encoding =", "return s1_start >= len(s1) and s2_start >= len(s2) encoding = str(s1_start) + '|'", "else None def _find_prefix(counter, p, other_start): for i in range(s3_start, len(s3)): if counter", ">= len(p): counter = None if _find_prefix(s1_counter, s1, s2_start): return True if _find_prefix(s2_counter,", ">= len(s1) and s2_start >= len(s2) encoding = str(s1_start) + '|' + str(s2_start)", "else: counter = None if counter is not None and counter >= len(p):", "s1, s2_start): 
class Solution:
    def isInterleave(self, s1, s2, s3):
        """
        :type s1: str
        :type s2: str
        :type s3: str
        :rtype: bool
        """
        return self._is_interleave(s1, s2, s3, set(), 0, 0, 0)

    def _is_interleave(self, s1, s2, s3, memo, s1_start, s2_start, s3_start):
        if s3_start >= len(s3):
            return s1_start >= len(s1) and s2_start >= len(s2)
        encoding = str(s1_start) + '|' + str(s2_start) + '|' + str(s3_start)
        if encoding in memo:
            return False
        s1_counter = s1_start if s1_start < len(s1) else None
        s2_counter = s2_start if s2_start < len(s2) else None

        def _find_prefix(counter, p, other_start):
            # Consume characters of p matching s3, recursing at every split point.
            for i in range(s3_start, len(s3)):
                if counter is not None:
                    if s3[i] == p[counter]:
                        if s1 == p:
                            if self._is_interleave(s1, s2, s3, memo, counter + 1, other_start, i + 1):
                                return True
                        else:
                            if self._is_interleave(s1, s2, s3, memo, other_start, counter + 1, i + 1):
                                return True
                        counter += 1
                    else:
                        counter = None
                if counter is not None and counter >= len(p):
                    counter = None

        if _find_prefix(s1_counter, s1, s2_start):
            return True
        if _find_prefix(s2_counter, s2, s1_start):
            return True
        memo.add(encoding)
        return False
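The fragments above describe a memoized recursive check that s3 is an interleaving of s1 and s2. The same predicate can also be computed bottom-up; a minimal, self-contained DP sketch (the function name and structure here are illustrative, not part of the original solution):

```python
def is_interleave(s1: str, s2: str, s3: str) -> bool:
    """dp[i][j] is True iff s3[:i+j] is an interleaving of s1[:i] and s2[:j]."""
    if len(s1) + len(s2) != len(s3):
        return False
    dp = [[False] * (len(s2) + 1) for _ in range(len(s1) + 1)]
    dp[0][0] = True
    for i in range(len(s1) + 1):
        for j in range(len(s2) + 1):
            # Extend by one character taken from s1 ...
            if i > 0 and dp[i - 1][j] and s1[i - 1] == s3[i + j - 1]:
                dp[i][j] = True
            # ... or by one character taken from s2.
            if j > 0 and dp[i][j - 1] and s2[j - 1] == s3[i + j - 1]:
                dp[i][j] = True
    return dp[len(s1)][len(s2)]

print(is_interleave("aabcc", "dbbca", "aadbbcbcac"))  # True
print(is_interleave("aabcc", "dbbca", "aadbbbaccc"))  # False
```

The memo set in the recursive version plays the same role as the dp table: each (s1 offset, s2 offset) pair is decided at most once.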
"""Implements widgets to visualize and modify basic fields. (french language)
ASSOCIATION should be updated with custom widgets, since common.abstractDetails will use it."""
import datetime
import re
from collections import defaultdict
from typing import List, Any

from PyQt5.QtCore import pyqtSignal, Qt, QPoint
from PyQt5.QtGui import QColor, QPen, QBrush, QIcon
from PyQt5.QtWidgets import (QFrame, QHBoxLayout, QPushButton, QLineEdit, QLabel,
                             QComboBox, QSpinBox, QDoubleSpinBox, QCheckBox,
                             QCompleter, QGridLayout, QVBoxLayout, QPlainTextEdit,
                             QStyledItemDelegate, QToolTip)

from . import list_views, clear_layout, Icons
from ..Core import formats


class NouveauTelephone(list_views.abstractNewButton):
    LABEL = "Ajouter un numéro de téléphone"

    @staticmethod
    def IS_TELEPHONE(s):
        r = re.compile(r"[0-9]{10}")  # assumed pattern; the original regex is not recoverable
        m = r.search(s.replace(' ', ''))
        return (m is not None)

    def _clear(self):
        clear_layout(self.layout())

    def enter_edit(self):
        self._clear()
        line_layout = QHBoxLayout(self)
        self.entree = QLineEdit()
        self.entree.setObjectName("nouveau-numero-tel")
        self.entree.setAlignment(Qt.AlignCenter)
        self.entree.setPlaceholderText("Ajouter...")
        add = QPushButton()
        add.setIcon(QIcon(Icons.Valid))
        add.clicked.connect(self.on_add)
        self.entree.editingFinished.connect(self.on_add)
        line_layout.addWidget(self.entree)
        line_layout.addWidget(add)
        line_layout.setStretch(0, 3)
        line_layout.setStretch(1, 1)

    def on_add(self):
        num = self.entree.text()
        if self.IS_TELEPHONE(num):
            self.entree.setPlaceholderText("Ajouter...")
            self.data_changed.emit(num)
            self._clear()
            self.set_button()
        else:
            self.entree.selectAll()
            QToolTip.showText(self.entree.mapToGlobal(QPoint(0, 10)), "Numéro invalide")


class Tels(list_views.abstractMutableList):
    LIST_PLACEHOLDER = "Aucun numéro."
    LIST_HEADER = None
    BOUTON = NouveauTelephone

    def __init__(self, collection: list, is_editable):
        collection = self.from_list(collection)
        super().__init__(collection, is_editable)

    def on_add(self, item):
        """Convert to pseudo acces"""
        super(Tels, self).on_add(list_views.PseudoAccesCategorie(item))

    def set_data(self, collection):
        collection = self.from_list(collection)
        super(Tels, self).set_data(collection)

    def get_data(self):
        col = super(Tels, self).get_data()
        return [tel.Id for tel in col]


class Duree(QLabel):
    """Display the numbers of day between two date widgets.
    These widgets have to implement a get_data method, which return a date.date"""

    def __init__(self, begining, end):
        super().__init__()
        self.begining = begining
        self.end = end
        self.begining.data_changed.connect(self.set_data)
        self.end.data_changed.connect(self.set_data)
        self.set_data()

    def set_data(self, *args):
        """set_data may be called manually to update the label"""
        db = self.begining.get_data()
        df = self.end.get_data()
        if db is None or df is None:
            self.setText("")
            return
        jours = max((df - db).days + 1, 0)
        self.setText(str(jours) + (jours >= 2 and " jours" or " jour"))


# -------------- Enumerations vizualisation --------------
class abstractEnum(QLabel):
    VALUE_TO_LABEL = {}
    """Dict. giving label from value"""

    def set_data(self, value):
        self.value = value
        self.setText(self.VALUE_TO_LABEL.get(self.value, ""))

    def get_data(self):
        return self.value


class abstractEnumEditable(QComboBox):
    data_changed = pyqtSignal(object)
    VALEURS_LABELS: List = []
    """List of tuples (value, label) or None to add a separator"""

    def __init__(self, parent=None):
        super().__init__(parent)
        self.set_choix(self.VALEURS_LABELS)
        self.currentIndexChanged.connect(
            lambda i: self.data_changed.emit(self.currentData()))

    def set_choix(self, choix):
        self.places = {}
        for t in choix:
            if t:
                self.places[t[0]] = self.count()
                self.addItem(t[1], userData=t[0])
            else:
                self.insertSeparator(self.count())

    def set_data(self, value):
        if value is None:
            self.setCurrentIndex(-1)
        else:
            self.setCurrentIndex(self.places[value])
        self.data_changed.emit(self.get_data())

    def get_data(self):
        return self.currentData()


class DepartementFixe(abstractEnum):
    VALUE_TO_LABEL = formats.DEPARTEMENTS


class DepartementEditable(abstractEnumEditable):
    VALEURS_LABELS = sorted((i, i + " " + v) for i, v in formats.DEPARTEMENTS.items())


class SexeFixe(abstractEnum):
    VALUE_TO_LABEL = formats.SEXES


class SexeEditable(abstractEnumEditable):
    VALEURS_LABELS = sorted((k, v) for k, v in formats.SEXES.items())


class ModePaiementFixe(abstractEnum):
    VALUE_TO_LABEL = formats.MODE_PAIEMENT


class ModePaiementEditable(abstractEnumEditable):
    VALEURS_LABELS = sorted([(k, v) for k, v in formats.MODE_PAIEMENT.items()])


# ------------- Simple string-like field -------------
class abstractSimpleField(QLabel):
    FONCTION_AFF = None
    TOOLTIP = None
    data_changed = pyqtSignal()  # dummy signal

    def __init__(self, *args, **kwargs):
        super(abstractSimpleField, self).__init__(*args, **kwargs)
        if self.TOOLTIP:
            self.setToolTip(self.TOOLTIP)

    def set_data(self, value):
        self.value = value
        label = self.FONCTION_AFF(value)
        self.setText(label)

    def get_data(self):
        return self.value


class BoolFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.boolen)


class EurosFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.euros)


class PourcentFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.pourcent)


class DefaultFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.default)


class DateFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.date)


class DateHeureFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.dateheure)


# --------------- Numeric fields ---------------
class abstractEntierEditable(QSpinBox):
    UNITE = ""
    MAX = None
    MIN = 0
    DEFAULT = 0
    data_changed = pyqtSignal(int)

    def __init__(self, parent=None):
        super().__init__(parent)
        self.setMaximum(self.MAX)
        self.setMinimum(self.MIN)
        self.setSuffix(self.UNITE)
        self.valueChanged.connect(self.data_changed.emit)

    def set_data(self, somme):
        somme = somme if somme is not None else (self.MIN - 1)
        self.setValue(somme)

    def get_data(self):
        return self.value()


class EntierEditable(abstractEntierEditable):
    MAX = 10000


class PourcentEditable(abstractEntierEditable):
    UNITE = "%"
    MAX = 100
    DEFAULT = 0


class EurosEditable(QDoubleSpinBox):
    data_changed = pyqtSignal(float)

    def __init__(self, parent=None):
        super().__init__(parent)
        self.setMaximum(100000)
        self.setMinimum(-1)
        self.setSpecialValueText("-")
        self.setSuffix("€")
        self.valueChanged.connect(self.data_changed.emit)

    def set_data(self, somme):
        somme = somme if somme is not None else -1
        self.setValue(somme)

    def get_data(self):
        v = self.value()
        return v if v != -1 else None


class BoolEditable(QFrame):
    data_changed = pyqtSignal(bool)

    def __init__(self, parent=None):
        super().__init__(parent)
        cb = QCheckBox()
        l = QLabel()
        self.setAutoFillBackground(True)  # Pour éviter la transparence
        layout = QHBoxLayout(self)
        layout.addWidget(cb)
        layout.addWidget(l)

        def callback(b):
            l.setText(b and "Oui" or "Non")
            self.data_changed.emit(b)

        cb.clicked.connect(callback)
        self.cb = cb
        self.l = l

    def set_data(self, b):
        b = b or False
        self.cb.setChecked(b)
        self.l.setText(b and "Oui" or "Non")

    def get_data(self):
        return self.cb.isChecked()


class DefaultEditable(QLineEdit):
    data_changed = pyqtSignal(str)
    MAX_LENGTH = None

    def __init__(self, parent=None, completion=[]):
        super().__init__(parent)
        self.textChanged.connect(self.data_changed.emit)
        if completion:
            c = QCompleter(completion)
            c.setCaseSensitivity(Qt.CaseInsensitive)
            self.setCompleter(c)
        if self.MAX_LENGTH:
            self.setMaxLength(self.MAX_LENGTH)

    def set_data(self, value):
        self.setText(str(value or ""))

    def get_data(self):
        return self.text()


def LimitedDefaultEditable(max_length):
    return type("LDefaultEditable", (DefaultEditable,), {"MAX_LENGTH": max_length})


class OptionnalTextEditable(QFrame):
    """QCheckbox + QLineEdit"""
    data_changed = pyqtSignal(object)

    def __init__(self, parent=None):
        super(OptionnalTextEditable, self).__init__(parent=parent)
        self.active = QCheckBox()
        self.text = QLineEdit()
        self.active.clicked.connect(self.on_click)
        self.text.textChanged.connect(self.on_text_changed)
        layout = QHBoxLayout(self)
        layout.addWidget(self.active)
        layout.addWidget(self.text)

    def on_click(self):
        self.text.setEnabled(self.active.isChecked())
        self.data_changed.emit(self.get_data())

    def on_text_changed(self, text):
        is_active = bool(text.strip())
        self.active.setChecked(is_active)
        self.text.setEnabled(is_active)
        self.data_changed.emit(self.get_data())

    def get_data(self):
        text = self.text.text().strip()
        active = self.active.isChecked() and bool(text)
        return text if active else None

    def set_data(self, text: str):
        text = text or ""
        is_active = bool(text.strip())
        self.active.setChecked(is_active)
        self.text.setEnabled(is_active)
        self.text.setText(text)
        self.data_changed.emit(self.get_data())


class DateEditable(QFrame):
    data_changed = pyqtSignal(object)

    def __init__(self, parent=None):
        super().__init__(parent)
        layout = QGridLayout(self)
        layout.setContentsMargins(0, 0, 0, 0)
        j = QSpinBox()
        j.setMinimum(0)
        j.setMaximum(31)
        j.setToolTip("Jour")
        m = QSpinBox()
        m.setMinimum(0)
        m.setMaximum(12)
        m.setToolTip("Mois")
        a = QSpinBox()
        a.setMinimum(0)
        a.setMaximum(2500)
        a.setToolTip("Année")
        j.setAlignment(Qt.AlignCenter)
        m.setAlignment(Qt.AlignCenter)
        a.setAlignment(Qt.AlignCenter)
        j.setSpecialValueText("-")
        m.setSpecialValueText("-")
        a.setSpecialValueText("-")
        layout.addWidget(j, 0, 0)
        layout.addWidget(m, 0, 1)
        layout.addWidget(a, 0, 2)
        j.valueChanged.connect(
            lambda v: self.data_changed.emit(self.get_data()))
        m.valueChanged.connect(
            lambda v: self.data_changed.emit(self.get_data()))
        a.valueChanged.connect(
            lambda v: self.data_changed.emit(self.get_data()))
        a.editingFinished.connect(self.on_editing)
        self.ws = (a, m, j)

    def _change_year_text_color(self, is_ok):
        color = "black" if is_ok else "red"
        self.ws[0].setStyleSheet(f"color : {color}")

    def on_editing(self):
        current_year = self.ws[0].value()
        if not current_year:
            return
        self._change_year_text_color(not current_year < 100)
        self.ws[0].setValue(current_year)

    def get_data(self):
        d = [self.ws[0].value(), self.ws[1].value(), self.ws[2].value()]
        try:
            return datetime.date(*d)
        except ValueError:
            return

    def set_data(self, d):
        if d is None:
            self.ws[0].clear()
            self.ws[1].clear()
            self.ws[2].clear()
        else:
            self.ws[0].setValue(d.year)
            self.ws[1].setValue(d.month)
            self.ws[2].setValue(d.day)
        self.on_editing()


class MontantEditable(QFrame):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.val = QDoubleSpinBox()
        self.par_jour = QCheckBox("par jour")
        layout = QVBoxLayout(self)
        layout.addWidget(self.val)
        layout.addWidget(self.par_jour)

    def set_data(self, value):
        self.val.setValue(value[0])
        self.par_jour.setChecked(value[1])

    def get_data(self):
        return [self.val.value(), self.par_jour.isChecked()]


class DateRange(QFrame):
    data_changed = pyqtSignal(object, object)

    def __init__(self):
        super().__init__()
        self.debut = DateEditable()
        self.fin = DateEditable()
        self.debut.data_changed.connect(self.on_change)
        self.fin.data_changed.connect(self.on_change)
        layout = QHBoxLayout(self)
        layout.setContentsMargins(0, 0, 0, 0)
        layout.addWidget(QLabel("Du "))
        layout.addWidget(self.debut)
        layout.addWidget(QLabel(" au "))
        layout.addWidget(self.fin)

    def on_change(self):
        self.data_changed.emit(*self.get_data())

    def get_data(self):
        return self.debut.get_data(), self.fin.get_data()

    def set_data(self, v):
        v = v or [None, None]
        self.debut.set_data(v[0])
        self.fin.set_data(v[1])


class Texte(QPlainTextEdit):
    data_changed = pyqtSignal(str)

    def __init__(self, text, is_editable, placeholder="Informations complémentaires"):
        super().__init__(text)
        self.setSizeAdjustPolicy(QPlainTextEdit.AdjustToContents)
        self.setMinimumHeight(50)
        self.setMinimumWidth(150)
        self.setPlaceholderText(placeholder)
        self.setReadOnly(not is_editable)
        self.textChanged.connect(
            lambda: self.data_changed.emit(self.toPlainText()))

    def get_data(self):
        return self.toPlainText()

    def set_data(self, text):
        self.setPlainText(text)


class OptionsButton(QPushButton):
    """Bouton to open window to acces advanced options.
    CLASS_PANEL_OPTIONS is responsible for doing the actual modification"""
    TITLE = "Advanced options"
    CLASS_PANEL_OPTIONS: Any = None
    options_changed = pyqtSignal()

    def __init__(self, acces, is_editable):
        super().__init__(self.TITLE)
        self.acces = acces
        self.is_editable = is_editable
        self.clicked.connect(self.show_options)

    def show_options(self):
        f = self.CLASS_PANEL_OPTIONS(self.acces, self.is_editable)
        if f.exec_():
            self.options_changed.emit()


# -------------------- Commons types --------------------
def _get_widget(classe, value):
    w = classe()
    w.set_data(value)
    return w


def Default(value, is_editable):
    return _get_widget(is_editable and DefaultEditable or DefaultFixe, value)


def Booleen(value, is_editable):
    return _get_widget(is_editable and BoolEditable or BoolFixe, value)


def Entier(entier, is_editable):
    return _get_widget(is_editable and EntierEditable or DefaultFixe, entier)


def Euros(value, is_editable):
    return _get_widget(is_editable and EurosEditable or EurosFixe, value)


def Pourcent(value, is_editable):
    return _get_widget(is_editable and PourcentEditable or PourcentFixe, value)


def Date(value, is_editable):
    return _get_widget(is_editable and DateEditable or DateFixe, value)


def Departement(value, is_editable):
    return _get_widget(is_editable and DepartementEditable or DepartementFixe, value)


def Sexe(value, is_editable):
    return _get_widget(is_editable and SexeEditable or SexeFixe, value)


def Adresse(value, is_editable):
    return Texte(value, is_editable, placeholder="Adresse...")


def ModePaiement(value, is_editable):
    return _get_widget(is_editable and ModePaiementEditable or ModePaiementFixe, value)


def DateHeure(value, is_editable):
    if is_editable:
        raise NotImplementedError("No editable datetime widget !")
    w = DateHeureFixe()
    w.set_data(value)
    return w


def OptionnalText(value, is_editable):
    return _get_widget(is_editable and OptionnalTextEditable or DefaultFixe, value)


TYPES_WIDGETS = defaultdict(
    lambda: Default,
    date_naissance=Date, departement_naissance=Departement, sexe=Sexe, tels=Tels,
    adresse=Adresse, date=Date, date_debut=Date, date_fin=Date,
    date_arrivee=Date, date_depart=Date, date_emission=Date, date_reception=Date,
    nb_places=Entier, nb_places_reservees=Entier, age_min=Entier, age_max=Entier,
    acquite=Booleen, is_acompte=Booleen, is_remboursement=Booleen,
    reduc_speciale=Euros, acompte_recu=Euros,
)
"""Correspondance field -> widget factory; fields not listed fall back to Default"""

ASSOCIATION = {}


def add_widgets_type(type_widgets, abstract_ASSOCIATION):
    TYPES_WIDGETS.update(type_widgets)
    for k, v in abstract_ASSOCIATION.items():
        t = TYPES_WIDGETS[k]
        ASSOCIATION[k] = (v[0], v[1], v[2], t, v[3])


add_widgets_type({}, formats.ASSOCIATION)


## ------------------Custom delegate ------------------ ##
class delegateAttributs(QStyledItemDelegate):
    CORRES = {"montant": MontantEditable, "mode_paiement": ModePaiementEditable,
              "valeur": EurosEditable, "description": DefaultEditable,
              "quantite": EntierEditable, "obligatoire": BoolEditable}
    """Correspondance between fields and widget classes"""

    def __init__(self, parent):
        QStyledItemDelegate.__init__(self, parent)
        self.size_hint_ = None
        self.row_done_ = None

    @staticmethod
    def paint_filling_rect(option, painter, proportion):
        rect = option.rect
        painter.save()
        proportion = min(proportion, 100)
        rs, vs, bs = (30, 64, 55)     # start
        re, ve, be = (153, 242, 200)  # end
        t = proportion / 100
        color = QColor(int(rs + t * (re - rs)),
                       int(vs + t * (ve - vs)),
                       int(bs + t * (be - bs)))
        painter.setPen(QPen(color, 0.5, Qt.SolidLine, Qt.RoundCap, Qt.RoundJoin))
        painter.setBackgroundMode(Qt.OpaqueMode)
        painter.setBackground(QBrush(color))
        painter.setBrush(QBrush(color))
        rect.setWidth(rect.width() * proportion / 100)
        painter.drawRoundedRect(rect, 5, 5)
        painter.restore()

    @staticmethod
    def _get_field(index):
        return index.model().header[index.column()]

    def sizeHint(self, option, index):
        if self.size_hint_ and self.size_hint_[0] == index:
            return self.size_hint_[1]
        return super().sizeHint(option, index)

    def setEditorData(self, editor, index):
        value = index.data(role=Qt.EditRole)
        editor.set_data(value)
        self.sizeHintChanged.emit(index)

    def createEditor(self, parent, option, index):
        field = self._get_field(index)
        other = ...  # truncated in the source
        classe = self.CORRES[field]
        w = classe(parent, other) if other else classe(parent)
        self.size_hint_ = (index, w.sizeHint())
        self.row_done_ = index.row()
        return w

    def destroyEditor(self, editor, index):
        self.size_hint_ = None
        super().destroyEditor(editor, index)
index.data(role=Qt.EditRole) editor.set_data(value) self.sizeHintChanged.emit(index) def createEditor(self, parent, option, index):", "self.setSuffix(self.UNITE) self.valueChanged.connect(self.data_changed.emit) self.setSpecialValueText(\" \") def set_data(self, somme): somme = somme if somme is", "and OptionnalTextEditable or DefaultFixe, value) \"\"\"Correspondance field -> widget (callable)\"\"\" TYPES_WIDGETS = defaultdict(", "rect.setWidth(rect.width() * proportion / 100) painter.drawRoundedRect(rect, 5, 5) painter.restore() @staticmethod def _get_field(index): return", "__init__(self, acces, is_editable): super(OptionsButton, self).__init__(self.TITLE) self.clicked.connect(self.show_options) self.acces = acces self.is_editable = is_editable def", "abstractEntierEditable(QSpinBox): UNITE = \"\" MAX = None MIN = 0 DEFAULT = 0", "advanced options. CLASS_PANEL_OPTIONS is responsible for doing the actual modification\"\"\" TITLE = \"Advanced", "if completion: c = QCompleter(completion) c.setCaseSensitivity(Qt.CaseInsensitive) self.setCompleter(c) if self.MAX_LENGTH: self.setMaxLength(self.MAX_LENGTH) def set_data(self, value):", "__init__(self, parent=None): super().__init__(parent) self.setAutoFillBackground(True) self.val = QDoubleSpinBox() self.val.setMaximum(100000) self.par_jour = QCheckBox(\"Par jour\") layout", "or formats.DATE_DEFAULT jours = max((df - db).days + 1, 0) self.setText(str(jours) + (jours", "get_data(self): d = [self.ws[0].value(), self.ws[1].value(), self.ws[2].value()] try: return datetime.date(*d) except ValueError: return def", "set_data(self, somme): somme = somme if somme is not None else -1 self.setValue(somme)", "c = QCompleter(completion) c.setCaseSensitivity(Qt.CaseInsensitive) self.setCompleter(c) if self.MAX_LENGTH: self.setMaxLength(self.MAX_LENGTH) def set_data(self, value): self.setText(str(value or", "or False self.cb.setChecked(b) self.l.setText(b and \"Oui\" or \"Non\") def get_data(self): return self.cb.isChecked() class", 
"0, 0, 0) j = QSpinBox() j.setMinimum(0) j.setMaximum(31) j.setToolTip(\"Jour\") m = QSpinBox() m.setMinimum(0)", "is_editable): return _get_widget(is_editable and OptionnalTextEditable or DefaultFixe, value) \"\"\"Correspondance field -> widget (callable)\"\"\"", "entier) def Euros(value, is_editable): return _get_widget(is_editable and EurosEditable or EurosFixe, value) def Pourcent(value,", "jour\")) # -------------- Enumerations vizualisation -------------- class abstractEnum(QLabel): VALUE_TO_LABEL = {} \"\"\"Dict. giving", "pyqtSignal(object) def __init__(self, parent=None): super().__init__(parent) layout = QGridLayout(self) layout.setContentsMargins(0, 0, 0, 0) j", "+ (jours >= 2 and \" jours\" or \" jour\")) # -------------- Enumerations", "is not None else (self.MIN - 1) self.setValue(somme) def get_data(self): return self.value() class", "on_change(self): self.data_changed.emit(*self.get_data()) def get_data(self): return self.debut.get_data(), self.fin.get_data() def set_data(self, v): v = v", "QStyledItemDelegate, QToolTip) from . 
import list_views, clear_layout, Icons from ..Core import formats class", "# end t = proportion / 100 color = QColor( rs + t*(re", "return a date.date\"\"\" def __init__(self, begining, end): super().__init__() self.begining = begining self.end =", "self.setMaximum(100000) self.setMinimum(-1) self.setSpecialValueText(\" \") self.setSuffix(\"€\") self.valueChanged.connect(self.data_changed.emit) def set_data(self, somme): somme = somme if", "abstractEnumEditable(QComboBox): data_changed = pyqtSignal(object) VALEURS_LABELS = [] \"\"\"List of tuples (value, label) or", "def set_data(self, d): if d is None: self.ws[0].clear() self.ws[1].clear() self.ws[2].clear() else: self.ws[0].setValue(d.year) self.ws[1].setValue(d.month)", "self.setReadOnly(not is_editable) self.textChanged.connect( lambda: self.data_changed.emit(self.toPlainText())) def get_data(self): return self.toPlainText() def set_data(self, text): self.setPlainText(text)", "class SexeFixe(abstractEnum): VALUE_TO_LABEL = formats.SEXES class SexeEditable(abstractEnumEditable): VALEURS_LABELS = sorted((k, v) for k,", "re.compile(r'[0-9]{9,10}') m = r.search(s.replace(' ', '')) return (m is not None) def _clear(self):", "= sorted((k, v) for k, v in formats.SEXES.items()) class ModePaiementFixe(abstractEnum): VALUE_TO_LABEL = formats.MODE_PAIEMENT", "0) layout.addWidget(m, 0, 1) layout.addWidget(a, 0, 2, 1, 2) j.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data()))", "current_year < 100) self.ws[0].setValue(current_year) def get_data(self): d = [self.ws[0].value(), self.ws[1].value(), self.ws[2].value()] try: return", "self.ws[1].setValue(d.month) self.ws[2].setValue(d.day) self.on_editing() class MontantEditable(QFrame): def __init__(self, parent=None): super().__init__(parent) self.setAutoFillBackground(True) self.val = QDoubleSpinBox()", "will use it. 
\"\"\" import datetime import re from collections import defaultdict from", "t, v[3]) add_widgets_type({}, formats.ASSOCIATION) ## ------------------Custom delegate ------------------ ## class delegateAttributs(QStyledItemDelegate): CORRES =", "is None: self.setCurrentIndex(-1) else: self.setCurrentIndex(self.places[value]) self.data_changed.emit(self.get_data()) def get_data(self): return self.currentData() # -------------------- Commons", "self.acces = acces self.is_editable = is_editable def show_options(self): f = self.CLASS_PANEL_OPTIONS(self.acces, self.is_editable) if", "set_data(self, value): self.val.setValue(value[0]) self.par_jour.setChecked(value[1]) def get_data(self): return [self.val.value(), self.par_jour.isChecked()] class DateRange(QFrame): data_changed =", "self.set_button() else: self.entree.selectAll() QToolTip.showText(self.entree.mapToGlobal( QPoint(0, 10)), \"Numéro invalide\") class Tels(list_views.abstractMutableList): LIST_PLACEHOLDER = \"Aucun", "__init__(self, parent=None): super().__init__(parent) layout = QGridLayout(self) layout.setContentsMargins(0, 0, 0, 0) j = QSpinBox()", "TYPES_WIDGETS[k] ASSOCIATION[k] = (v[0], v[1], v[2], t, v[3]) add_widgets_type({}, formats.ASSOCIATION) ## ------------------Custom delegate", "and DateEditable or DateFixe, value) def Departement(value, is_editable): return _get_widget(is_editable and DepartementEditable or", "choix: if t: self.places[t[0]] = self.count() self.addItem(t[1], userData=t[0]) else: self.insertSeparator(self.count()) def set_data(self, value):", "TYPES_WIDGETS.update(type_widgets) for k, v in abstract_ASSOCIATION.items(): t = TYPES_WIDGETS[k] ASSOCIATION[k] = (v[0], v[1],", "delegateAttributs(QStyledItemDelegate): CORRES = {\"montant\": MontantEditable, \"mode_paiement\": ModePaiementEditable, \"valeur\": EurosEditable, \"description\": DefaultEditable, \"quantite\": EntierEditable,", "transparence layout = QHBoxLayout(self) layout.addWidget(cb) layout.addWidget(l) def callback(b): 
l.setText(b and \"Oui\" or \"Non\")", "set_data to manually update\"\"\" db = self.begining.get_data() or formats.DATE_DEFAULT df = self.end.get_data() or", "1) layout.addWidget(a, 0, 2, 1, 2) j.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) m.valueChanged.connect( lambda v:", "def on_change(self): self.data_changed.emit(*self.get_data()) def get_data(self): return self.debut.get_data(), self.fin.get_data() def set_data(self, v): v =", "Qt.RoundJoin)) painter.setBackgroundMode(Qt.OpaqueMode) painter.setBackground(QBrush(color)) painter.setBrush(QBrush(color)) rect.setWidth(rect.width() * proportion / 100) painter.drawRoundedRect(rect, 5, 5) painter.restore()", "DefaultEditable or DefaultFixe, value) def Booleen(value, is_editable): return _get_widget(is_editable and BoolEditable or BoolFixe,", "__init__(self, parent): QStyledItemDelegate.__init__(self, parent) self.size_hint_ = None self.row_done_ = None @staticmethod def paint_filling_rect(option,", "a.editingFinished.connect(self.on_editing) self.ws = (a, m, j) def _change_year_text_color(self, is_ok): color = \"black\" if", "is None: self.ws[0].clear() self.ws[1].clear() self.ws[2].clear() else: self.ws[0].setValue(d.year) self.ws[1].setValue(d.month) self.ws[2].setValue(d.day) self.on_editing() class MontantEditable(QFrame): def", "or formats.DATE_DEFAULT df = self.end.get_data() or formats.DATE_DEFAULT jours = max((df - db).days +", "widgets. 
These widgets have to implement a get_data method, which return a date.date\"\"\"", "0, 0) layout.addWidget(QLabel(\"Du \")) layout.addWidget(self.debut) layout.addWidget(QLabel(\" au \")) layout.addWidget(self.fin) def on_change(self): self.data_changed.emit(*self.get_data()) def", "QSpinBox() j.setMinimum(0) j.setMaximum(31) j.setToolTip(\"Jour\") m = QSpinBox() m.setMinimum(0) m.setMaximum(12) m.setToolTip(\"Mois\") a = QSpinBox()", "fields and widget classes\"\"\" size_hint_: tuple def __init__(self, parent): QStyledItemDelegate.__init__(self, parent) self.size_hint_ =", "formats.MODE_PAIEMENT.items()]) # ------------- Simple string-like field ------------- class abstractSimpleField(QLabel): FONCTION_AFF = None TOOLTIP", "super(Tels, self).get_data() return [tel.Id for tel in col] class Duree(QLabel): \"\"\"Display the numbers", "= None super().destroyEditor(editor, index) def setModelData(self, editor, model, index): value = editor.get_data() model.set_data(index,", "begining, end): super().__init__() self.begining = begining self.end = end self.begining.data_changed.connect(self.set_data) self.end.data_changed.connect(self.set_data) self.set_data() def", "nb_places_reservees=Entier, age_min=Entier, age_max=Entier, acquite=Booleen, is_acompte=Booleen, is_remboursement=Booleen, reduc_speciale=Euros, acompte_recu=Euros, valeur=Euros, total=Euros, prix=Euros, date_heure_modif=DateHeure, date_reglement=Date,", "m.setMaximum(12) m.setToolTip(\"Mois\") a = QSpinBox() a.setMinimum(0) a.setMaximum(2500) a.setToolTip(\"Année\") j.setAlignment(Qt.AlignCenter) m.setAlignment(Qt.AlignCenter) a.setAlignment(Qt.AlignCenter) j.setSpecialValueText(\"-\") m.setSpecialValueText(\"-\")", "value): self.setText(str(value or \"\")) def get_data(self): return self.text() def LimitedDefaultEditable(max_length): return type(\"LDefaultEditable\", (DefaultEditable,),", "= end self.begining.data_changed.connect(self.set_data) self.end.data_changed.connect(self.set_data) self.set_data() 
def set_data(self, *args): \"\"\"we cant to call set_data", "pyqtSignal(object) def __init__(self, parent=None): super(OptionnalTextEditable, self).__init__(parent=parent) self.active = QCheckBox() self.text = QLineEdit() self.active.clicked.connect(self.on_click)", "2) j.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) m.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) a.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data()))", "= min(proportion, 100) rs, vs, bs = (30,64,55) # start re, ve, be", "is not None) def _clear(self): clear_layout(self.layout()) def enter_edit(self): self._clear() line_layout = self.layout() self.entree", "QPushButton, QLineEdit, QLabel, QComboBox, QSpinBox, QDoubleSpinBox, QCheckBox, QCompleter, QGridLayout, QVBoxLayout, QPlainTextEdit, QStyledItemDelegate, QToolTip)", "raise NotImplementedError(\"No editable datetime widget !\") w = DateHeureFixe() w.set_data(value) return w def", "DefaultEditable, \"quantite\": EntierEditable, \"obligatoire\": BoolEditable} \"\"\"Correspondance between fields and widget classes\"\"\" size_hint_: tuple", "Any from PyQt5.QtCore import pyqtSignal, Qt, QPoint from PyQt5.QtGui import QColor, QPen, QBrush,", "\"red\" self.ws[0].setStyleSheet(f\"color : {color}\") def on_editing(self): current_year = self.ws[0].value() if not current_year: return", "layout.addWidget(self.val) layout.addWidget(self.par_jour) def set_data(self, value): self.val.setValue(value[0]) self.par_jour.setChecked(value[1]) def get_data(self): return [self.val.value(), self.par_jour.isChecked()] class", "= DateHeureFixe() w.set_data(value) return w def OptionnalText(value, is_editable): return _get_widget(is_editable and OptionnalTextEditable or", "QLineEdit, QLabel, QComboBox, QSpinBox, QDoubleSpinBox, QCheckBox, QCompleter, QGridLayout, QVBoxLayout, QPlainTextEdit, QStyledItemDelegate, QToolTip) from", "self.data_changed.emit(self.get_data())) m.valueChanged.connect( 
lambda v: self.data_changed.emit(self.get_data())) a.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) a.editingFinished.connect(self.on_editing) self.ws = (a,", "v[3]) add_widgets_type({}, formats.ASSOCIATION) ## ------------------Custom delegate ------------------ ## class delegateAttributs(QStyledItemDelegate): CORRES = {\"montant\":", "super().__init__(text) self.setSizeAdjustPolicy(QPlainTextEdit.AdjustToContents) self.setMinimumHeight(50) self.setMinimumWidth(150) self.setPlaceholderText(placeholder) self.setReadOnly(not is_editable) self.textChanged.connect( lambda: self.data_changed.emit(self.toPlainText())) def get_data(self): return", "rs, vs, bs = (30,64,55) # start re, ve, be = (153,242,200) #", "self.ws[1].clear() self.ws[2].clear() else: self.ws[0].setValue(d.year) self.ws[1].setValue(d.month) self.ws[2].setValue(d.day) self.on_editing() class MontantEditable(QFrame): def __init__(self, parent=None): super().__init__(parent)", "self).set_data(collection) def get_data(self): col = super(Tels, self).get_data() return [tel.Id for tel in col]", "self.ws[0].setValue(current_year) def get_data(self): d = [self.ws[0].value(), self.ws[1].value(), self.ws[2].value()] try: return datetime.date(*d) except ValueError:", "return Texte(value, is_editable, placeholder=\"Adresse...\") def ModePaiement(value, is_editable): return _get_widget(is_editable and ModePaiementEditable or ModePaiementFixe,", "def get_data(self): v = self.value() return v if v != -1 else None", "= QGridLayout(self) layout.setContentsMargins(0, 0, 0, 0) j = QSpinBox() j.setMinimum(0) j.setMaximum(31) j.setToolTip(\"Jour\") m", "super().__init__(parent) self.textChanged.connect(self.data_changed.emit) if completion: c = QCompleter(completion) c.setCaseSensitivity(Qt.CaseInsensitive) self.setCompleter(c) if self.MAX_LENGTH: self.setMaxLength(self.MAX_LENGTH) def", "field ------------- class abstractSimpleField(QLabel): FONCTION_AFF = None TOOLTIP = None data_changed = 
pyqtSignal()", "self.on_editing() class MontantEditable(QFrame): def __init__(self, parent=None): super().__init__(parent) self.setAutoFillBackground(True) self.val = QDoubleSpinBox() self.val.setMaximum(100000) self.par_jour", "createEditor(self, parent, option, index): field = self._get_field(index) other = index.data(role=Qt.UserRole) classe = self.CORRES[field]", "def get_data(self): return self.debut.get_data(), self.fin.get_data() def set_data(self, v): v = v or [None,", "return self._change_year_text_color(not current_year < 100) self.ws[0].setValue(current_year) def get_data(self): d = [self.ws[0].value(), self.ws[1].value(), self.ws[2].value()]", "-------------- Enumerations vizualisation -------------- class abstractEnum(QLabel): VALUE_TO_LABEL = {} \"\"\"Dict. giving label from", "layout.addWidget(a, 0, 2, 1, 2) j.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) m.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data()))", "PourcentEditable or PourcentFixe, value) def Date(value, is_editable): return _get_widget(is_editable and DateEditable or DateFixe,", "v != -1 else None class BoolEditable(QFrame): data_changed = pyqtSignal(bool) def __init__(self, parent=None):", "giving label from raw value\"\"\" def set_data(self, value): self.value = value self.setText(self.VALUE_TO_LABEL.get(self.value, \"\"))", "layout.setContentsMargins(0, 0, 0, 0) layout.addWidget(QLabel(\"Du \")) layout.addWidget(self.debut) layout.addWidget(QLabel(\" au \")) layout.addWidget(self.fin) def on_change(self):", "layout.addWidget(self.active) layout.addWidget(self.text) def on_click(self): self.text.setEnabled(self.active.isChecked()) self.data_changed.emit(self.get_data()) def on_text_changed(self, text): is_active = bool(text.strip()) self.active.setChecked(is_active)", "v or [None, None] self.debut.set_data(v[0]) self.fin.set_data(v[1]) class Texte(QPlainTextEdit): data_changed = pyqtSignal(str) def __init__(self,", 
"QCompleter(completion) c.setCaseSensitivity(Qt.CaseInsensitive) self.setCompleter(c) if self.MAX_LENGTH: self.setMaxLength(self.MAX_LENGTH) def set_data(self, value): self.setText(str(value or \"\")) def", "QCheckBox() l = QLabel() self.setAutoFillBackground(True) # Pour éviter la transparence layout = QHBoxLayout(self)", "Euros(value, is_editable): return _get_widget(is_editable and EurosEditable or EurosFixe, value) def Pourcent(value, is_editable): return", "j = QSpinBox() j.setMinimum(0) j.setMaximum(31) j.setToolTip(\"Jour\") m = QSpinBox() m.setMinimum(0) m.setMaximum(12) m.setToolTip(\"Mois\") a", "0 DEFAULT = 0 data_changed = pyqtSignal(int) def __init__(self, parent=None): super().__init__(parent) self.setMaximum(self.MAX) self.setMinimum(self.MIN)", "DateFixe, value) def Departement(value, is_editable): return _get_widget(is_editable and DepartementEditable or DepartementFixe, value) def", "None: self.setCurrentIndex(-1) else: self.setCurrentIndex(self.places[value]) self.data_changed.emit(self.get_data()) def get_data(self): return self.currentData() # -------------------- Commons types", "**kwargs) if self.TOOLTIP: self.setToolTip(self.TOOLTIP) def set_data(self, value): self.value = value label = self.FONCTION_AFF(value)", "end): super().__init__() self.begining = begining self.end = end self.begining.data_changed.connect(self.set_data) self.end.data_changed.connect(self.set_data) self.set_data() def set_data(self,", "= QPushButton() add.setIcon(QIcon(Icons.Valid)) add.clicked.connect(self.on_add) self.entree.editingFinished.connect(self.on_add) line_layout.addWidget(self.entree) line_layout.addWidget(add) line_layout.setStretch(0, 3) line_layout.setStretch(1, 1) def on_add(self):", "0 data_changed = pyqtSignal(int) def __init__(self, parent=None): super().__init__(parent) self.setMaximum(self.MAX) self.setMinimum(self.MIN) self.setSuffix(self.UNITE) self.valueChanged.connect(self.data_changed.emit) self.setSpecialValueText(\"", "get_data(self): 
return self.value() class EntierEditable(abstractEntierEditable): MAX = 10000 class PourcentEditable(abstractEntierEditable): UNITE = \"%\"", "parent=None): super(OptionnalTextEditable, self).__init__(parent=parent) self.active = QCheckBox() self.text = QLineEdit() self.active.clicked.connect(self.on_click) self.text.textChanged.connect(self.on_text_changed) layout =", "0, 0) layout.addWidget(m, 0, 1) layout.addWidget(a, 0, 2, 1, 2) j.valueChanged.connect( lambda v:", "a.setAlignment(Qt.AlignCenter) j.setSpecialValueText(\"-\") m.setSpecialValueText(\"-\") a.setSpecialValueText(\"-\") layout.addWidget(j, 0, 0) layout.addWidget(m, 0, 1) layout.addWidget(a, 0, 2,", "VALEURS_LABELS = sorted([(k, v) for k, v in formats.MODE_PAIEMENT.items()]) # ------------- Simple string-like", "proportion / 100) painter.drawRoundedRect(rect, 5, 5) painter.restore() @staticmethod def _get_field(index): return index.model().header[index.column()] def", "\"\"\"Convert to pseuso acces\"\"\" super(Tels, self).on_add(list_views.PseudoAccesCategorie(item)) def set_data(self, collection): collection = self.from_list(collection) super(Tels,", "super().__init__(parent) layout = QGridLayout(self) layout.setContentsMargins(0, 0, 0, 0) j = QSpinBox() j.setMinimum(0) j.setMaximum(31)", "SexeFixe(abstractEnum): VALUE_TO_LABEL = formats.SEXES class SexeEditable(abstractEnumEditable): VALEURS_LABELS = sorted((k, v) for k, v", "TITLE = \"Advanced options\" CLASS_PANEL_OPTIONS:Any = None options_changed = pyqtSignal() def __init__(self, acces,", "which return a date.date\"\"\" def __init__(self, begining, end): super().__init__() self.begining = begining self.end", "return super().sizeHint(option, index) def setEditorData(self, editor, index): value = index.data(role=Qt.EditRole) editor.set_data(value) self.sizeHintChanged.emit(index) def", "or DefaultFixe, value) \"\"\"Correspondance field -> widget (callable)\"\"\" TYPES_WIDGETS = defaultdict( lambda: Default,", "k, v in 
formats.MODE_PAIEMENT.items()]) # ------------- Simple string-like field ------------- class abstractSimpleField(QLabel): FONCTION_AFF", "super(Tels, self).on_add(list_views.PseudoAccesCategorie(item)) def set_data(self, collection): collection = self.from_list(collection) super(Tels, self).set_data(collection) def get_data(self): col", "= QCompleter(completion) c.setCaseSensitivity(Qt.CaseInsensitive) self.setCompleter(c) if self.MAX_LENGTH: self.setMaxLength(self.MAX_LENGTH) def set_data(self, value): self.setText(str(value or \"\"))", "pyqtSignal(bool) def __init__(self, parent=None): super().__init__(parent) cb = QCheckBox() l = QLabel() self.setAutoFillBackground(True) #", "import (QFrame, QHBoxLayout, QPushButton, QLineEdit, QLabel, QComboBox, QSpinBox, QDoubleSpinBox, QCheckBox, QCompleter, QGridLayout, QVBoxLayout,", "a date.date\"\"\" def __init__(self, begining, end): super().__init__() self.begining = begining self.end = end", "self).get_data() return [tel.Id for tel in col] class Duree(QLabel): \"\"\"Display the numbers of", "self.set_choix(self.VALEURS_LABELS) self.currentIndexChanged.connect( lambda i: self.data_changed.emit(self.currentData())) def set_choix(self, choix): self.places = {} for t", "self.cb = cb self.l = l def set_data(self, b): b = b or", "visualize and modify basic fields. 
(french language) ASSOCIATION should be updated with custom", "is not None else -1 self.setValue(somme) def get_data(self): v = self.value() return v", "l.setText(b and \"Oui\" or \"Non\") self.data_changed.emit(b) cb.clicked.connect(callback) self.cb = cb self.l = l", "= pyqtSignal(int) def __init__(self, parent=None): super().__init__(parent) self.setMaximum(self.MAX) self.setMinimum(self.MIN) self.setSuffix(self.UNITE) self.valueChanged.connect(self.data_changed.emit) self.setSpecialValueText(\" \") def", "= self.text.text().strip() active = self.active.isChecked() and bool(text) return text if active else None", "def get_data(self): return [self.val.value(), self.par_jour.isChecked()] class DateRange(QFrame): data_changed = pyqtSignal(object, object) def __init__(self):", "m = QSpinBox() m.setMinimum(0) m.setMaximum(12) m.setToolTip(\"Mois\") a = QSpinBox() a.setMinimum(0) a.setMaximum(2500) a.setToolTip(\"Année\") j.setAlignment(Qt.AlignCenter)", "* proportion / 100) painter.drawRoundedRect(rect, 5, 5) painter.restore() @staticmethod def _get_field(index): return index.model().header[index.column()]", "t*(be - bs)) painter.setPen(QPen(color, 0.5, Qt.SolidLine, Qt.RoundCap, Qt.RoundJoin)) painter.setBackgroundMode(Qt.OpaqueMode) painter.setBackground(QBrush(color)) painter.setBrush(QBrush(color)) rect.setWidth(rect.width() *", "self.data_changed.emit(num) self._clear() self.set_button() else: self.entree.selectAll() QToolTip.showText(self.entree.mapToGlobal( QPoint(0, 10)), \"Numéro invalide\") class Tels(list_views.abstractMutableList): LIST_PLACEHOLDER", "BoolEditable(QFrame): data_changed = pyqtSignal(bool) def __init__(self, parent=None): super().__init__(parent) cb = QCheckBox() l =", "lambda v: self.data_changed.emit(self.get_data())) a.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) a.editingFinished.connect(self.on_editing) self.ws = (a, m, j)", "lambda i: self.data_changed.emit(self.currentData())) def set_choix(self, choix): 
self.places = {} for t in choix:", "\"\"\"we cant to call set_data to manually update\"\"\" db = self.begining.get_data() or formats.DATE_DEFAULT", "formats.SEXES.items()) class ModePaiementFixe(abstractEnum): VALUE_TO_LABEL = formats.MODE_PAIEMENT class ModePaiementEditable(abstractEnumEditable): VALEURS_LABELS = sorted([(k, v) for", "= 10000 class PourcentEditable(abstractEntierEditable): UNITE = \"%\" MAX = 100 DEFAULT = 0", "def __init__(self, begining, end): super().__init__() self.begining = begining self.end = end self.begining.data_changed.connect(self.set_data) self.end.data_changed.connect(self.set_data)", "set_data(self, d): if d is None: self.ws[0].clear() self.ws[1].clear() self.ws[2].clear() else: self.ws[0].setValue(d.year) self.ws[1].setValue(d.month) self.ws[2].setValue(d.day)", "\"Non\") self.data_changed.emit(b) cb.clicked.connect(callback) self.cb = cb self.l = l def set_data(self, b): b", ">= 2 and \" jours\" or \" jour\")) # -------------- Enumerations vizualisation --------------", "VALEURS_LABELS = sorted((k, v) for k, v in formats.SEXES.items()) class ModePaiementFixe(abstractEnum): VALUE_TO_LABEL =", "# Pour éviter la transparence layout = QHBoxLayout(self) layout.addWidget(cb) layout.addWidget(l) def callback(b): l.setText(b", "def __init__(self, parent=None): super().__init__(parent) layout = QGridLayout(self) layout.setContentsMargins(0, 0, 0, 0) j =", "= pyqtSignal() def __init__(self, acces, is_editable): super(OptionsButton, self).__init__(self.TITLE) self.clicked.connect(self.show_options) self.acces = acces self.is_editable", "IS_TELEPHONE(s: str): r = re.compile(r'[0-9]{9,10}') m = r.search(s.replace(' ', '')) return (m is", "- vs), bs + t*(be - bs)) painter.setPen(QPen(color, 0.5, Qt.SolidLine, Qt.RoundCap, Qt.RoundJoin)) painter.setBackgroundMode(Qt.OpaqueMode)", "class OptionsButton(QPushButton): \"\"\"Bouton to open window to acces advanced options. 
"""Common widget types used to display and edit fields (UI labels are in French).

ASSOCIATION should be updated with custom widgets, since common.abstractDetails
will use it.
"""
import datetime
import re
from collections import defaultdict
from typing import List, Any

from PyQt5.QtCore import pyqtSignal, Qt, QPoint
from PyQt5.QtGui import QColor, QPen, QBrush, QIcon
from PyQt5.QtWidgets import (
    QFrame, QHBoxLayout, QPushButton, QLineEdit, QLabel, QComboBox, QSpinBox,
    QDoubleSpinBox, QCheckBox, QCompleter, QGridLayout, QVBoxLayout,
    QPlainTextEdit, QStyledItemDelegate, QToolTip)

from . import list_views, clear_layout, Icons
from ..Core import formats


class NouveauTelephone(list_views.abstractNewButton):
    LABEL = "Ajouter un numéro"

    @staticmethod
    def IS_TELEPHONE(s: str):
        """Returns True if s contains a run of 9 or 10 digits (spaces ignored)."""
        r = re.compile(r'[0-9]{9,10}')
        m = r.search(s.replace(' ', ''))
        return m is not None

    def _clear(self):
        clear_layout(self.layout())

    def enter_edit(self):
        self._clear()
        line_layout = self.layout()
        self.entree = QLineEdit()
        self.entree.setObjectName("nouveau-numero-tel")
        self.entree.setAlignment(Qt.AlignCenter)
        self.entree.setPlaceholderText("Ajouter...")
        add = QPushButton()
        add.setIcon(QIcon(Icons.Valid))
        add.clicked.connect(self.on_add)
        self.entree.editingFinished.connect(self.on_add)
        line_layout.addWidget(self.entree)
        line_layout.addWidget(add)
        line_layout.setStretch(0, 3)
        line_layout.setStretch(1, 1)

    def on_add(self):
        num = self.entree.text()
        if self.IS_TELEPHONE(num):
            self.entree.setPlaceholderText("Ajouter...")
            self.data_changed.emit(num)
            self._clear()
            self.set_button()
        else:
            self.entree.selectAll()
            QToolTip.showText(self.entree.mapToGlobal(QPoint(0, 10)),
                              "Numéro invalide")


class Tels(list_views.abstractMutableList):
    LIST_PLACEHOLDER = "Aucun numéro."
    LIST_HEADER = None
    BOUTON = NouveauTelephone

    def __init__(self, collection: list, is_editable):
        collection = self.from_list(collection)
        super().__init__(collection, is_editable)

    def on_add(self, item):
        """Converts the new item to a pseudo acces."""
        super(Tels, self).on_add(list_views.PseudoAccesCategorie(item))

    def set_data(self, collection):
        collection = self.from_list(collection)
        super(Tels, self).set_data(collection)

    def get_data(self):
        col = super(Tels, self).get_data()
        return [tel.Id for tel in col]


# ------------- Simple string-like fields -------------
class abstractSimpleField(QLabel):
    FONCTION_AFF = None
    TOOLTIP = None
    data_changed = pyqtSignal()  # dummy signal

    def __init__(self, *args, **kwargs):
        super(abstractSimpleField, self).__init__(*args, **kwargs)
        if self.TOOLTIP:
            self.setToolTip(self.TOOLTIP)

    def set_data(self, value):
        self.value = value
        label = self.FONCTION_AFF(value)
        self.setText(label)

    def get_data(self):
        return self.value


class BoolFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.boolen)


class EurosFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.euros)


class PourcentFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.pourcent)


class DefaultFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.default)


class DateFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.date)


class DateHeureFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.dateheure)


class abstractEnum(abstractSimpleField):
    VALUE_TO_LABEL = {}
    """Dict giving the label from the raw value"""

    def set_data(self, value):
        self.value = value
        self.setText(self.VALUE_TO_LABEL.get(self.value, ""))

    def get_data(self):
        return self.value


class abstractEnumEditable(QComboBox):
    data_changed = pyqtSignal(object)
    VALEURS_LABELS = []
    """List of tuples (value, label), or None to add a separator"""

    def __init__(self, parent=None):
        super().__init__(parent)
        self.set_choix(self.VALEURS_LABELS)
        self.currentIndexChanged.connect(
            lambda i: self.data_changed.emit(self.currentData()))

    def set_choix(self, choix):
        self.places = {}
        for t in choix:
            if t:
                self.places[t[0]] = self.count()
                self.addItem(t[1], userData=t[0])
            else:
                self.insertSeparator(self.count())

    def set_data(self, value):
        if value is None:
            self.setCurrentIndex(-1)
        else:
            self.setCurrentIndex(self.places[value])
        self.data_changed.emit(self.get_data())

    def get_data(self):
        return self.currentData()


# -------------------- Common types --------------------
class DepartementFixe(abstractEnum):
    VALUE_TO_LABEL = formats.DEPARTEMENTS


class DepartementEditable(abstractEnumEditable):
    VALEURS_LABELS = sorted((i, i + " " + v)
                            for i, v in formats.DEPARTEMENTS.items())


class SexeFixe(abstractEnum):
    VALUE_TO_LABEL = formats.SEXES


class SexeEditable(abstractEnumEditable):
    VALEURS_LABELS = sorted((k, v) for k, v in formats.SEXES.items())


class ModePaiementFixe(abstractEnum):
    VALUE_TO_LABEL = formats.MODE_PAIEMENT


class ModePaiementEditable(abstractEnumEditable):
    VALEURS_LABELS = sorted((k, v) for k, v in formats.MODE_PAIEMENT.items())


# --------------- Numeric fields ---------------
class abstractEntierEditable(QSpinBox):
    UNITE = ""
    MAX = None
    MIN = 0
    DEFAULT = 0
    data_changed = pyqtSignal(int)

    def __init__(self, parent=None):
        super().__init__(parent)
        self.setMaximum(self.MAX)
        self.setMinimum(self.MIN)
        self.setSuffix(self.UNITE)
        self.valueChanged.connect(self.data_changed.emit)
        self.setSpecialValueText(" ")

    def set_data(self, somme):
        somme = somme if somme is not None else (self.MIN - 1)
        self.setValue(somme)

    def get_data(self):
        return self.value()


class EntierEditable(abstractEntierEditable):
    MAX = 10000


class PourcentEditable(abstractEntierEditable):
    UNITE = "%"
    MAX = 100
    DEFAULT = 0


class EurosEditable(QDoubleSpinBox):
    data_changed = pyqtSignal(float)

    def __init__(self, parent=None):
        super().__init__(parent)
        self.setMaximum(100000)
        self.setMinimum(-1)
        self.setSpecialValueText(" ")
        self.setSuffix("€")
        self.valueChanged.connect(self.data_changed.emit)

    def set_data(self, somme):
        somme = somme if somme is not None else -1
        self.setValue(somme)

    def get_data(self):
        v = self.value()
        return v if v != -1 else None


class BoolEditable(QFrame):
    data_changed = pyqtSignal(bool)

    def __init__(self, parent=None):
        super().__init__(parent)
        cb = QCheckBox()
        l = QLabel()
        self.setAutoFillBackground(True)  # avoids transparency
        layout = QHBoxLayout(self)
        layout.addWidget(cb)
        layout.addWidget(l)

        def callback(b):
            l.setText(b and "Oui" or "Non")
            self.data_changed.emit(b)

        cb.clicked.connect(callback)
        self.cb = cb
        self.l = l

    def set_data(self, b):
        b = b or False
        self.cb.setChecked(b)
        self.l.setText(b and "Oui" or "Non")

    def get_data(self):
        return self.cb.isChecked()


class DefaultEditable(QLineEdit):
    data_changed = pyqtSignal(str)
    MAX_LENGTH = None

    def __init__(self, parent=None, completion=()):
        super().__init__(parent)
        self.textChanged.connect(self.data_changed.emit)
        if completion:
            c = QCompleter(completion)
            c.setCaseSensitivity(Qt.CaseInsensitive)
            self.setCompleter(c)
        if self.MAX_LENGTH:
            self.setMaxLength(self.MAX_LENGTH)

    def set_data(self, value):
        self.setText(str(value or ""))

    def get_data(self):
        return self.text()


def LimitedDefaultEditable(max_length):
    return type("LDefaultEditable", (DefaultEditable,),
                {"MAX_LENGTH": max_length})


class OptionnalTextEditable(QFrame):
    """QCheckBox + QLineEdit"""
    data_changed = pyqtSignal(object)

    def __init__(self, parent=None):
        super(OptionnalTextEditable, self).__init__(parent=parent)
        self.active = QCheckBox()
        self.text = QLineEdit()
        self.active.clicked.connect(self.on_click)
        self.text.textChanged.connect(self.on_text_changed)
        layout = QHBoxLayout(self)
        layout.addWidget(self.active)
        layout.addWidget(self.text)

    def on_click(self):
        self.text.setEnabled(self.active.isChecked())
        self.data_changed.emit(self.get_data())

    def on_text_changed(self, text):
        is_active = bool(text.strip())
        self.active.setChecked(is_active)
        self.text.setEnabled(is_active)
        self.data_changed.emit(self.get_data())

    def get_data(self):
        text = self.text.text().strip()
        active = self.active.isChecked() and bool(text)
        return text if active else None

    def set_data(self, text: str):
        text = text or ""
        is_active = bool(text.strip())
        self.active.setChecked(is_active)
        self.text.setEnabled(is_active)
        self.text.setText(text)
        self.data_changed.emit(self.get_data())


class DateEditable(QFrame):
    data_changed = pyqtSignal(object)

    def __init__(self, parent=None):
        super().__init__(parent)
        layout = QGridLayout(self)
        layout.setContentsMargins(0, 0, 0, 0)
        j = QSpinBox()
        j.setMinimum(0)
        j.setMaximum(31)
        j.setToolTip("Jour")
        m = QSpinBox()
        m.setMinimum(0)
        m.setMaximum(12)
        m.setToolTip("Mois")
        a = QSpinBox()
        a.setMinimum(0)
        a.setMaximum(2500)
        a.setToolTip("Année")
        j.setAlignment(Qt.AlignCenter)
        m.setAlignment(Qt.AlignCenter)
        a.setAlignment(Qt.AlignCenter)
        j.setSpecialValueText("-")
        m.setSpecialValueText("-")
        a.setSpecialValueText("-")
        layout.addWidget(j, 0, 0)
        layout.addWidget(m, 0, 1)
        layout.addWidget(a, 0, 2, 1, 2)
        j.valueChanged.connect(
            lambda v: self.data_changed.emit(self.get_data()))
        m.valueChanged.connect(
            lambda v: self.data_changed.emit(self.get_data()))
        a.valueChanged.connect(
            lambda v: self.data_changed.emit(self.get_data()))
        a.editingFinished.connect(self.on_editing)
        self.ws = (a, m, j)

    def _change_year_text_color(self, is_ok):
        color = "black" if is_ok else "red"
        self.ws[0].setStyleSheet(f"color : {color}")

    def on_editing(self):
        current_year = self.ws[0].value()
        if not current_year:
            return
        self._change_year_text_color(not current_year < 100)
        if current_year < 100:  # two-digit year shortcut
            current_year = 2000 + (current_year % 100)
            self.ws[0].setValue(current_year)

    def get_data(self):
        d = [self.ws[0].value(), self.ws[1].value(), self.ws[2].value()]
        try:
            return datetime.date(*d)
        except ValueError:
            return

    def set_data(self, d):
        if d is None:
            self.ws[0].clear()
            self.ws[1].clear()
            self.ws[2].clear()
        else:
            self.ws[0].setValue(d.year)
            self.ws[1].setValue(d.month)
            self.ws[2].setValue(d.day)
        self.on_editing()


class MontantEditable(QFrame):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.setAutoFillBackground(True)
        self.val = QDoubleSpinBox()
        self.val.setMaximum(100000)
        self.par_jour = QCheckBox("Par jour")
        layout = QVBoxLayout(self)
        layout.addWidget(self.val)
        layout.addWidget(self.par_jour)

    def set_data(self, value):
        self.val.setValue(value[0])
        self.par_jour.setChecked(value[1])

    def get_data(self):
        return [self.val.value(), self.par_jour.isChecked()]


class DateRange(QFrame):
    data_changed = pyqtSignal(object)

    def __init__(self):
        super().__init__()
        self.debut = DateEditable()
        self.fin = DateEditable()
        self.debut.data_changed.connect(self.on_change)
        self.fin.data_changed.connect(self.on_change)
        layout = QHBoxLayout(self)
        layout.setContentsMargins(0, 0, 0, 0)
        layout.addWidget(QLabel("Du "))
        layout.addWidget(self.debut)
        layout.addWidget(QLabel(" au "))
        layout.addWidget(self.fin)

    def on_change(self):
        self.data_changed.emit(self.get_data())

    def get_data(self):
        return [self.debut.get_data(), self.fin.get_data()]

    def set_data(self, v):
        v = v or [None, None]
        self.debut.set_data(v[0])
        self.fin.set_data(v[1])


class Duree(QLabel):
    """Displays the number of days between two date widgets.

    These widgets have to implement a get_data method returning a datetime.date.
    """

    def __init__(self, begining, end):
        super().__init__()
        self.begining = begining
        self.end = end
        self.begining.data_changed.connect(self.set_data)
        self.end.data_changed.connect(self.set_data)
        self.set_data()

    def set_data(self, *args):
        """May also be called manually to force an update."""
        db = self.begining.get_data() or formats.DATE_DEFAULT
        df = self.end.get_data() or formats.DATE_DEFAULT
        jours = max((df - db).days + 1, 0)
        self.setText(str(jours) + (jours >= 2 and " jours" or " jour"))


class Texte(QPlainTextEdit):
    data_changed = pyqtSignal(str)

    def __init__(self, text, is_editable, placeholder="Informations complémentaires"):
        super().__init__(text)
        self.setSizeAdjustPolicy(QPlainTextEdit.AdjustToContents)
        self.setMinimumHeight(50)
        self.setMinimumWidth(150)
        self.setPlaceholderText(placeholder)
        self.setReadOnly(not is_editable)
        self.textChanged.connect(
            lambda: self.data_changed.emit(self.toPlainText()))

    def get_data(self):
        return self.toPlainText()

    def set_data(self, text):
        self.setPlainText(text)


class OptionsButton(QPushButton):
    """Button opening a window to access advanced options.

    CLASS_PANEL_OPTIONS is responsible for the actual edition.
    """
    TITLE = "Advanced options"
    CLASS_PANEL_OPTIONS: Any = None
    options_changed = pyqtSignal()

    def __init__(self, acces, is_editable):
        super(OptionsButton, self).__init__(self.TITLE)
        self.clicked.connect(self.show_options)
        self.acces = acces
        self.is_editable = is_editable

    def show_options(self):
        f = self.CLASS_PANEL_OPTIONS(self.acces, self.is_editable)
        if f.exec_():
            self.options_changed.emit()

    def set_data(self, *args):
        pass


# ---------------------------- Wrappers ---------------------------- #
def _get_widget(classe, value):
    w = classe()
    w.set_data(value)
    return w


def Default(value, is_editable):
    return _get_widget(is_editable and DefaultEditable or DefaultFixe, value)


def Booleen(value, is_editable):
    return _get_widget(is_editable and BoolEditable or BoolFixe, value)


def Entier(entier, is_editable):
    return _get_widget(is_editable and EntierEditable or DefaultFixe, entier)


def Euros(value, is_editable):
    return _get_widget(is_editable and EurosEditable or EurosFixe, value)


def Pourcent(value, is_editable):
    return _get_widget(is_editable and PourcentEditable or PourcentFixe, value)


def Date(value, is_editable):
    return _get_widget(is_editable and DateEditable or DateFixe, value)


def Departement(value, is_editable):
    return _get_widget(is_editable and DepartementEditable or DepartementFixe, value)


def Sexe(value, is_editable):
    return _get_widget(is_editable and SexeEditable or SexeFixe, value)


def Adresse(value, is_editable):
    return Texte(value, is_editable, placeholder="Adresse...")


def ModePaiement(value, is_editable):
    return _get_widget(is_editable and ModePaiementEditable or ModePaiementFixe, value)


def DateHeure(value, is_editable):
    if is_editable:
        raise NotImplementedError("No editable datetime widget !")
    w = DateHeureFixe()
    w.set_data(value)
    return w


def OptionnalText(value, is_editable):
    return _get_widget(is_editable and OptionnalTextEditable or DefaultFixe, value)


"""Correspondence field -> widget (callable)"""
TYPES_WIDGETS = defaultdict(
    lambda: Default,
    date_naissance=Date,
    departement_naissance=Departement,
    sexe=Sexe,
    tels=Tels,
    adresse=Adresse,
    date=Date,
    date_debut=Date,
    date_fin=Date,
    date_arrivee=Date,
    date_depart=Date,
    date_emission=Date,
    date_reception=Date,
    nb_places=Entier,
    nb_places_reservees=Entier,
    age_min=Entier,
    age_max=Entier,
    acquite=Booleen,
    is_acompte=Booleen,
    is_remboursement=Booleen,
    reduc_speciale=Euros,
    acompte_recu=Euros,
    valeur=Euros,
    total=Euros,
    prix=Euros,
    date_heure_modif=DateHeure,
    message=Texte,
    mode_paiement=ModePaiement,
)

ASSOCIATION = {}


def add_widgets_type(type_widgets, abstract_ASSOCIATION):
    TYPES_WIDGETS.update(type_widgets)
    for k, v in abstract_ASSOCIATION.items():
        t = TYPES_WIDGETS[k]
        ASSOCIATION[k] = (v[0], v[1], v[2], t, v[3])


add_widgets_type({}, formats.ASSOCIATION)


class Delegate(QStyledItemDelegate):
    CORRES = {"montant": MontantEditable,
              "mode_paiement": ModePaiementEditable,
              "valeur": EurosEditable,
              "description": DefaultEditable,
              "quantite": EntierEditable,
              "obligatoire": BoolEditable}
    """Correspondence between fields and widget classes"""

    size_hint_: tuple

    def __init__(self, parent):
        QStyledItemDelegate.__init__(self, parent)
        self.size_hint_ = None
        self.row_done_ = None

    @staticmethod
    def paint_filling_rect(option, painter, proportion):
        """Fills the cell background proportionally (0 to 100)."""
        rect = option.rect
        painter.save()
        proportion = min(proportion, 100)
        rs, vs, bs = (255, 255, 255)  # start color (reconstructed value)
        re, ve, be = (153, 242, 200)  # end color
        t = proportion / 100
        color = QColor(int(rs + t * (re - rs)),
                       int(vs + t * (ve - vs)),
                       int(bs + t * (be - bs)))
        painter.setPen(QPen(color, 0.5, Qt.SolidLine, Qt.RoundCap, Qt.RoundJoin))
        painter.setBackgroundMode(Qt.OpaqueMode)
        painter.setBackground(QBrush(color))
        painter.setBrush(QBrush(color))
        rect.setWidth(int(rect.width() * proportion / 100))
        painter.drawRect(rect)
        painter.restore()

    @staticmethod
    def _get_field(index):
        return index.model().header[index.column()]

    def sizeHint(self, option, index):
        if self.size_hint_ and self.size_hint_[0] == index:
            return self.size_hint_[1]
        return super().sizeHint(option, index)

    def setEditorData(self, editor, index):
        value = index.data(role=Qt.EditRole)
        editor.set_data(value)
        self.sizeHintChanged.emit(index)

    def createEditor(self, parent, option, index):
        field = self._get_field(index)
        other = index.data(role=Qt.UserRole)
        classe = self.CORRES[field]
        w = classe(parent, other) if other else classe(parent)
        self.size_hint_ = (index, w.sizeHint())
        self.row_done_ = index.row()
        return w

    def destroyEditor(self, editor, index):
        self.size_hint_ = None
        super().destroyEditor(editor, index)

    def setModelData(self, editor, model, index):
        value = editor.get_data()
        model.setData(index, value, role=Qt.EditRole)
v) for i, v in formats.DEPARTEMENTS.items()) class SexeFixe(abstractEnum):", "!\") w = DateHeureFixe() w.set_data(value) return w def OptionnalText(value, is_editable): return _get_widget(is_editable and", "between fields and widget classes\"\"\" size_hint_: tuple def __init__(self, parent): QStyledItemDelegate.__init__(self, parent) self.size_hint_", "is_editable): return _get_widget(is_editable and PourcentEditable or PourcentFixe, value) def Date(value, is_editable): return _get_widget(is_editable", "self.currentData() # -------------------- Commons types -------------------- class DepartementFixe(abstractEnum): VALUE_TO_LABEL = formats.DEPARTEMENTS class DepartementEditable(abstractEnumEditable):", "l def set_data(self, b): b = b or False self.cb.setChecked(b) self.l.setText(b and \"Oui\"", "self.active = QCheckBox() self.text = QLineEdit() self.active.clicked.connect(self.on_click) self.text.textChanged.connect(self.on_text_changed) layout = QHBoxLayout(self) layout.addWidget(self.active) layout.addWidget(self.text)", "= None TOOLTIP = None data_changed = pyqtSignal() # dummy signal def __init__(self,", "defaultdict from typing import List, Any from PyQt5.QtCore import pyqtSignal, Qt, QPoint from", "return self.value class abstractEnumEditable(QComboBox): data_changed = pyqtSignal(object) VALEURS_LABELS = [] \"\"\"List of tuples", "col] class Duree(QLabel): \"\"\"Display the numbers of day between two date widgets. 
These", "None self.row_done_ = None @staticmethod def paint_filling_rect(option, painter, proportion): rect = option.rect painter.save()", "date_heure_modif=DateHeure, date_reglement=Date, date_encaissement=Date, info=Texte, message=Texte, mode_paiement=ModePaiement, ) ASSOCIATION = {} def add_widgets_type(type_widgets, abstract_ASSOCIATION):", "self.data_changed.emit(self.get_data()) def get_data(self): return self.currentData() # -------------------- Commons types -------------------- class DepartementFixe(abstractEnum): VALUE_TO_LABEL", "QLabel() self.setAutoFillBackground(True) # Pour éviter la transparence layout = QHBoxLayout(self) layout.addWidget(cb) layout.addWidget(l) def", "b = b or False self.cb.setChecked(b) self.l.setText(b and \"Oui\" or \"Non\") def get_data(self):", "= 100 DEFAULT = 0 class EurosEditable(QDoubleSpinBox): data_changed = pyqtSignal(float) def __init__(self, parent=None):", "QCheckBox(\"Par jour\") layout = QVBoxLayout(self) layout.addWidget(self.val) layout.addWidget(self.par_jour) def set_data(self, value): self.val.setValue(value[0]) self.par_jour.setChecked(value[1]) def", "EntierEditable or DefaultFixe, entier) def Euros(value, is_editable): return _get_widget(is_editable and EurosEditable or EurosFixe,", "on_editing(self): current_year = self.ws[0].value() if not current_year: return self._change_year_text_color(not current_year < 100) self.ws[0].setValue(current_year)", "_get_widget(classe, value): w = classe() w.set_data(value) return w def Default(value, is_editable): return _get_widget(is_editable", "/ 100 color = QColor( rs + t*(re - rs), vs + t*(ve", "', '')) return (m is not None) def _clear(self): clear_layout(self.layout()) def enter_edit(self): self._clear()", "self.is_editable) if f.exec_(): self.options_changed.emit() def set_data(self, *args): pass ###---------------------------- Wrappers---------------------------- ### def _get_widget(classe,", "datetime import re from collections import defaultdict from typing 
import List, Any from", "is_editable def show_options(self): f = self.CLASS_PANEL_OPTIONS(self.acces, self.is_editable) if f.exec_(): self.options_changed.emit() def set_data(self, *args):", "= index.data(role=Qt.UserRole) classe = self.CORRES[field] w = classe(parent, other) if other else classe(parent)", "self.debut.set_data(v[0]) self.fin.set_data(v[1]) class Texte(QPlainTextEdit): data_changed = pyqtSignal(str) def __init__(self, text, is_editable, placeholder=\"Informations complémentaires\"):", "types -------------------- class DepartementFixe(abstractEnum): VALUE_TO_LABEL = formats.DEPARTEMENTS class DepartementEditable(abstractEnumEditable): VALEURS_LABELS = sorted((i, i", "def set_data(self, text: str): text = text or \"\" is_active = bool(text.strip()) self.active.setChecked(is_active)", "= \"black\" if is_ok else \"red\" self.ws[0].setStyleSheet(f\"color : {color}\") def on_editing(self): current_year =", "BOUTON = NouveauTelephone def __init__(self, collection: list, is_editable): collection = self.from_list(collection) super().__init__(collection, is_editable)", "None BOUTON = NouveauTelephone def __init__(self, collection: list, is_editable): collection = self.from_list(collection) super().__init__(collection,", "= (index, w.sizeHint()) self.row_done_ = index.row() return w def destroyEditor(self, editor, index): self.size_hint_", "pass ###---------------------------- Wrappers---------------------------- ### def _get_widget(classe, value): w = classe() w.set_data(value) return w", "= index.row() return w def destroyEditor(self, editor, index): self.size_hint_ = None super().destroyEditor(editor, index)", "numéro\" @staticmethod def IS_TELEPHONE(s: str): r = re.compile(r'[0-9]{9,10}') m = r.search(s.replace(' ', ''))", "= \"Ajouter un numéro\" @staticmethod def IS_TELEPHONE(s: str): r = re.compile(r'[0-9]{9,10}') m =", "or BoolFixe, value) def Entier(entier, is_editable): return _get_widget(is_editable and EntierEditable or DefaultFixe, entier)", 
"value) def Date(value, is_editable): return _get_widget(is_editable and DateEditable or DateFixe, value) def Departement(value,", "if somme is not None else (self.MIN - 1) self.setValue(somme) def get_data(self): return", "self.data_changed.emit(b) cb.clicked.connect(callback) self.cb = cb self.l = l def set_data(self, b): b =", "= v or [None, None] self.debut.set_data(v[0]) self.fin.set_data(v[1]) class Texte(QPlainTextEdit): data_changed = pyqtSignal(str) def", "= self.ws[0].value() if not current_year: return self._change_year_text_color(not current_year < 100) self.ws[0].setValue(current_year) def get_data(self):", "self.textChanged.connect(self.data_changed.emit) if completion: c = QCompleter(completion) c.setCaseSensitivity(Qt.CaseInsensitive) self.setCompleter(c) if self.MAX_LENGTH: self.setMaxLength(self.MAX_LENGTH) def set_data(self,", "t*(re - rs), vs + t*(ve - vs), bs + t*(be - bs))", "delegate ------------------ ## class delegateAttributs(QStyledItemDelegate): CORRES = {\"montant\": MontantEditable, \"mode_paiement\": ModePaiementEditable, \"valeur\": EurosEditable,", "somme = somme if somme is not None else (self.MIN - 1) self.setValue(somme)", "@staticmethod def paint_filling_rect(option, painter, proportion): rect = option.rect painter.save() proportion = min(proportion, 100)", "date widgets. 
These widgets have to implement a get_data method, which return a", "def on_text_changed(self, text): is_active = bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) self.data_changed.emit(self.get_data()) def get_data(self): text =", "return self.value() class EntierEditable(abstractEntierEditable): MAX = 10000 class PourcentEditable(abstractEntierEditable): UNITE = \"%\" MAX", "get_data(self): return self.text() def LimitedDefaultEditable(max_length): return type(\"LDefaultEditable\", (DefaultEditable,), {\"MAX_LENGTH\": max_length}) class OptionnalTextEditable(QFrame): \"\"\"QCheckbox", "abstractEnum(QLabel): VALUE_TO_LABEL = {} \"\"\"Dict. giving label from raw value\"\"\" def set_data(self, value):", "5, 5) painter.restore() @staticmethod def _get_field(index): return index.model().header[index.column()] def sizeHint(self, option, index): if", "def OptionnalText(value, is_editable): return _get_widget(is_editable and OptionnalTextEditable or DefaultFixe, value) \"\"\"Correspondance field ->", "get_data(self): v = self.value() return v if v != -1 else None class", "self.setSpecialValueText(\" \") def set_data(self, somme): somme = somme if somme is not None", "else: self.setCurrentIndex(self.places[value]) self.data_changed.emit(self.get_data()) def get_data(self): return self.currentData() # -------------------- Commons types -------------------- class", "super().__init__(parent) self.setMaximum(self.MAX) self.setMinimum(self.MIN) self.setSuffix(self.UNITE) self.valueChanged.connect(self.data_changed.emit) self.setSpecialValueText(\" \") def set_data(self, somme): somme = somme", "and modify basic fields. 
(french language) ASSOCIATION should be updated with custom widgets,", "set_data(self, text): self.setPlainText(text) class OptionsButton(QPushButton): \"\"\"Bouton to open window to acces advanced options.", "\"\")) def get_data(self): return self.value class abstractEnumEditable(QComboBox): data_changed = pyqtSignal(object) VALEURS_LABELS = []", "value) def Departement(value, is_editable): return _get_widget(is_editable and DepartementEditable or DepartementFixe, value) def Sexe(value,", "def get_data(self): return self.value class abstractEnumEditable(QComboBox): data_changed = pyqtSignal(object) VALEURS_LABELS = [] \"\"\"List", "self.cb.isChecked() class DefaultEditable(QLineEdit): data_changed = pyqtSignal(str) MAX_LENGTH = None def __init__(self, parent=None, completion=[]):", "proportion / 100 color = QColor( rs + t*(re - rs), vs +", "self.setToolTip(self.TOOLTIP) def set_data(self, value): self.value = value label = self.FONCTION_AFF(value) self.setText(label) def get_data(self):", "options\" CLASS_PANEL_OPTIONS:Any = None options_changed = pyqtSignal() def __init__(self, acces, is_editable): super(OptionsButton, self).__init__(self.TITLE)", "window to acces advanced options. 
CLASS_PANEL_OPTIONS is responsible for doing the actual modification\"\"\"", "return self.cb.isChecked() class DefaultEditable(QLineEdit): data_changed = pyqtSignal(str) MAX_LENGTH = None def __init__(self, parent=None,", "text = self.text.text().strip() active = self.active.isChecked() and bool(text) return text if active else", "set_data(self, *args): pass ###---------------------------- Wrappers---------------------------- ### def _get_widget(classe, value): w = classe() w.set_data(value)", "responsible for doing the actual modification\"\"\" TITLE = \"Advanced options\" CLASS_PANEL_OPTIONS:Any = None", "parent=None): super().__init__(parent) self.setMaximum(100000) self.setMinimum(-1) self.setSpecialValueText(\" \") self.setSuffix(\"€\") self.valueChanged.connect(self.data_changed.emit) def set_data(self, somme): somme =", "= pyqtSignal(str) def __init__(self, text, is_editable, placeholder=\"Informations complémentaires\"): super().__init__(text) self.setSizeAdjustPolicy(QPlainTextEdit.AdjustToContents) self.setMinimumHeight(50) self.setMinimumWidth(150) self.setPlaceholderText(placeholder)", "self.ws = (a, m, j) def _change_year_text_color(self, is_ok): color = \"black\" if is_ok", "None) def _clear(self): clear_layout(self.layout()) def enter_edit(self): self._clear() line_layout = self.layout() self.entree = QLineEdit()", "return _get_widget(is_editable and EurosEditable or EurosFixe, value) def Pourcent(value, is_editable): return _get_widget(is_editable and", "= r.search(s.replace(' ', '')) return (m is not None) def _clear(self): clear_layout(self.layout()) def", "self.MAX_LENGTH: self.setMaxLength(self.MAX_LENGTH) def set_data(self, value): self.setText(str(value or \"\")) def get_data(self): return self.text() def", "v): v = v or [None, None] self.debut.set_data(v[0]) self.fin.set_data(v[1]) class Texte(QPlainTextEdit): data_changed =", "bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) 
self.data_changed.emit(self.get_data()) def get_data(self): text = self.text.text().strip() active = self.active.isChecked() and", "Qt, QPoint from PyQt5.QtGui import QColor, QPen, QBrush, QIcon from PyQt5.QtWidgets import (QFrame,", "text if active else None def set_data(self, text: str): text = text or", "5) painter.restore() @staticmethod def _get_field(index): return index.model().header[index.column()] def sizeHint(self, option, index): if self.size_hint_", "\"\"\"Display the numbers of day between two date widgets. These widgets have to", "MontantEditable, \"mode_paiement\": ModePaiementEditable, \"valeur\": EurosEditable, \"description\": DefaultEditable, \"quantite\": EntierEditable, \"obligatoire\": BoolEditable} \"\"\"Correspondance between", "\"Oui\" or \"Non\") self.data_changed.emit(b) cb.clicked.connect(callback) self.cb = cb self.l = l def set_data(self,", "self.insertSeparator(self.count()) def set_data(self, value): if value is None: self.setCurrentIndex(-1) else: self.setCurrentIndex(self.places[value]) self.data_changed.emit(self.get_data()) def", "end t = proportion / 100 color = QColor( rs + t*(re -", "BoolFixe, value) def Entier(entier, is_editable): return _get_widget(is_editable and EntierEditable or DefaultFixe, entier) def", "collections import defaultdict from typing import List, Any from PyQt5.QtCore import pyqtSignal, Qt,", "l = QLabel() self.setAutoFillBackground(True) # Pour éviter la transparence layout = QHBoxLayout(self) layout.addWidget(cb)", "other else classe(parent) self.size_hint_ = (index, w.sizeHint()) self.row_done_ = index.row() return w def", "self.fin.data_changed.connect(self.on_change) layout = QHBoxLayout(self) layout.setContentsMargins(0, 0, 0, 0) layout.addWidget(QLabel(\"Du \")) layout.addWidget(self.debut) layout.addWidget(QLabel(\" au", "def set_data(self, value): self.setText(str(value or \"\")) def get_data(self): return self.text() def LimitedDefaultEditable(max_length): return", "pyqtSignal(int) def 
__init__(self, parent=None): super().__init__(parent) self.setMaximum(self.MAX) self.setMinimum(self.MIN) self.setSuffix(self.UNITE) self.valueChanged.connect(self.data_changed.emit) self.setSpecialValueText(\" \") def set_data(self,", "-------------------- Commons types -------------------- class DepartementFixe(abstractEnum): VALUE_TO_LABEL = formats.DEPARTEMENTS class DepartementEditable(abstractEnumEditable): VALEURS_LABELS =", "0, 1) layout.addWidget(a, 0, 2, 1, 2) j.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) m.valueChanged.connect( lambda", "self.text.setEnabled(self.active.isChecked()) self.data_changed.emit(self.get_data()) def on_text_changed(self, text): is_active = bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) self.data_changed.emit(self.get_data()) def get_data(self):", "= staticmethod(formats.abstractRender.euros) class PourcentFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.pourcent) class DefaultFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.default) class", "= somme if somme is not None else (self.MIN - 1) self.setValue(somme) def", "TOOLTIP = None data_changed = pyqtSignal() # dummy signal def __init__(self, *args, **kwargs):", "_change_year_text_color(self, is_ok): color = \"black\" if is_ok else \"red\" self.ws[0].setStyleSheet(f\"color : {color}\") def", "= value self.setText(self.VALUE_TO_LABEL.get(self.value, \"\")) def get_data(self): return self.value class abstractEnumEditable(QComboBox): data_changed = pyqtSignal(object)", "def set_data(self, *args): pass ###---------------------------- Wrappers---------------------------- ### def _get_widget(classe, value): w = classe()", "return _get_widget(is_editable and SexeEditable or SexeFixe, value) def Adresse(value, is_editable): return Texte(value, is_editable,", "self.options_changed.emit() def set_data(self, *args): pass ###---------------------------- 
Wrappers---------------------------- ### def _get_widget(classe, value): w =", "DateEditable() self.debut.data_changed.connect(self.on_change) self.fin.data_changed.connect(self.on_change) layout = QHBoxLayout(self) layout.setContentsMargins(0, 0, 0, 0) layout.addWidget(QLabel(\"Du \")) layout.addWidget(self.debut)", "numbers of day between two date widgets. These widgets have to implement a", "\"Non\") def get_data(self): return self.cb.isChecked() class DefaultEditable(QLineEdit): data_changed = pyqtSignal(str) MAX_LENGTH = None", "ASSOCIATION = {} def add_widgets_type(type_widgets, abstract_ASSOCIATION): TYPES_WIDGETS.update(type_widgets) for k, v in abstract_ASSOCIATION.items(): t", "= l def set_data(self, b): b = b or False self.cb.setChecked(b) self.l.setText(b and", "\"obligatoire\": BoolEditable} \"\"\"Correspondance between fields and widget classes\"\"\" size_hint_: tuple def __init__(self, parent):", "to visualize and modify basic fields. (french language) ASSOCIATION should be updated with", "to add a separator\"\"\" def __init__(self, parent=None): super().__init__(parent) self.set_choix(self.VALEURS_LABELS) self.currentIndexChanged.connect( lambda i: self.data_changed.emit(self.currentData()))", "parent) self.size_hint_ = None self.row_done_ = None @staticmethod def paint_filling_rect(option, painter, proportion): rect", "@staticmethod def _get_field(index): return index.model().header[index.column()] def sizeHint(self, option, index): if self.size_hint_ and self.size_hint_[0]", "not None else -1 self.setValue(somme) def get_data(self): v = self.value() return v if", "-------------------- class DepartementFixe(abstractEnum): VALUE_TO_LABEL = formats.DEPARTEMENTS class DepartementEditable(abstractEnumEditable): VALEURS_LABELS = sorted((i, i +", "= begining self.end = end self.begining.data_changed.connect(self.set_data) self.end.data_changed.connect(self.set_data) self.set_data() def set_data(self, *args): \"\"\"we cant", "a separator\"\"\" def 
__init__(self, parent=None): super().__init__(parent) self.set_choix(self.VALEURS_LABELS) self.currentIndexChanged.connect( lambda i: self.data_changed.emit(self.currentData())) def set_choix(self,", "def get_data(self): return self.currentData() # -------------------- Commons types -------------------- class DepartementFixe(abstractEnum): VALUE_TO_LABEL =", "self.fin.get_data() def set_data(self, v): v = v or [None, None] self.debut.set_data(v[0]) self.fin.set_data(v[1]) class", "mode_paiement=ModePaiement, ) ASSOCIATION = {} def add_widgets_type(type_widgets, abstract_ASSOCIATION): TYPES_WIDGETS.update(type_widgets) for k, v in", "self.setText(label) def get_data(self): return self.value class BoolFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.boolen) class EurosFixe(abstractSimpleField): FONCTION_AFF", "au \")) layout.addWidget(self.fin) def on_change(self): self.data_changed.emit(*self.get_data()) def get_data(self): return self.debut.get_data(), self.fin.get_data() def set_data(self,", "def enter_edit(self): self._clear() line_layout = self.layout() self.entree = QLineEdit() self.entree.setObjectName(\"nouveau-numero-tel\") self.entree.setAlignment(Qt.AlignCenter) self.entree.setPlaceholderText(\"Ajouter...\") add", "Qt.RoundCap, Qt.RoundJoin)) painter.setBackgroundMode(Qt.OpaqueMode) painter.setBackground(QBrush(color)) painter.setBrush(QBrush(color)) rect.setWidth(rect.width() * proportion / 100) painter.drawRoundedRect(rect, 5, 5)", "date_emission=Date, date_reception=Date, nb_places=Entier, nb_places_reservees=Entier, age_min=Entier, age_max=Entier, acquite=Booleen, is_acompte=Booleen, is_remboursement=Booleen, reduc_speciale=Euros, acompte_recu=Euros, valeur=Euros, total=Euros,", "bs + t*(be - bs)) painter.setPen(QPen(color, 0.5, Qt.SolidLine, Qt.RoundCap, Qt.RoundJoin)) painter.setBackgroundMode(Qt.OpaqueMode) painter.setBackground(QBrush(color)) painter.setBrush(QBrush(color))", "+ QLineEdit\"\"\" data_changed = 
pyqtSignal(object) def __init__(self, parent=None): super(OptionnalTextEditable, self).__init__(parent=parent) self.active = QCheckBox()", "__init__(self, parent=None): super().__init__(parent) self.setMaximum(100000) self.setMinimum(-1) self.setSpecialValueText(\" \") self.setSuffix(\"€\") self.valueChanged.connect(self.data_changed.emit) def set_data(self, somme): somme", "value) \"\"\"Correspondance field -> widget (callable)\"\"\" TYPES_WIDGETS = defaultdict( lambda: Default, date_naissance=Date, departement_naissance=Departement,", "def Booleen(value, is_editable): return _get_widget(is_editable and BoolEditable or BoolFixe, value) def Entier(entier, is_editable):", "editable datetime widget !\") w = DateHeureFixe() w.set_data(value) return w def OptionnalText(value, is_editable):", "None data_changed = pyqtSignal() # dummy signal def __init__(self, *args, **kwargs): super(abstractSimpleField, self).__init__(*args,", "object) def __init__(self): super().__init__() self.debut = DateEditable() self.fin = DateEditable() self.debut.data_changed.connect(self.on_change) self.fin.data_changed.connect(self.on_change) layout", "and DefaultEditable or DefaultFixe, value) def Booleen(value, is_editable): return _get_widget(is_editable and BoolEditable or", "is_editable): return Texte(value, is_editable, placeholder=\"Adresse...\") def ModePaiement(value, is_editable): return _get_widget(is_editable and ModePaiementEditable or", "self.par_jour.isChecked()] class DateRange(QFrame): data_changed = pyqtSignal(object, object) def __init__(self): super().__init__() self.debut = DateEditable()", "def Sexe(value, is_editable): return _get_widget(is_editable and SexeEditable or SexeFixe, value) def Adresse(value, is_editable):", "self.data_changed.emit(*self.get_data()) def get_data(self): return self.debut.get_data(), self.fin.get_data() def set_data(self, v): v = v or", "col = super(Tels, self).get_data() return [tel.Id for tel in col] class Duree(QLabel): \"\"\"Display", 
"self.row_done_ = None @staticmethod def paint_filling_rect(option, painter, proportion): rect = option.rect painter.save() proportion", "and \"Oui\" or \"Non\") self.data_changed.emit(b) cb.clicked.connect(callback) self.cb = cb self.l = l def", "\"\")) def get_data(self): return self.text() def LimitedDefaultEditable(max_length): return type(\"LDefaultEditable\", (DefaultEditable,), {\"MAX_LENGTH\": max_length}) class", "None super().destroyEditor(editor, index) def setModelData(self, editor, model, index): value = editor.get_data() model.set_data(index, value)", "else -1 self.setValue(somme) def get_data(self): v = self.value() return v if v !=", "= None self.row_done_ = None @staticmethod def paint_filling_rect(option, painter, proportion): rect = option.rect", "(QFrame, QHBoxLayout, QPushButton, QLineEdit, QLabel, QComboBox, QSpinBox, QDoubleSpinBox, QCheckBox, QCompleter, QGridLayout, QVBoxLayout, QPlainTextEdit,", "value\"\"\" def set_data(self, value): self.value = value self.setText(self.VALUE_TO_LABEL.get(self.value, \"\")) def get_data(self): return self.value", "__init__(self, parent=None): super().__init__(parent) self.setMaximum(self.MAX) self.setMinimum(self.MIN) self.setSuffix(self.UNITE) self.valueChanged.connect(self.data_changed.emit) self.setSpecialValueText(\" \") def set_data(self, somme): somme", "False self.cb.setChecked(b) self.l.setText(b and \"Oui\" or \"Non\") def get_data(self): return self.cb.isChecked() class DefaultEditable(QLineEdit):", "= self.value() return v if v != -1 else None class BoolEditable(QFrame): data_changed", "option, index): field = self._get_field(index) other = index.data(role=Qt.UserRole) classe = self.CORRES[field] w =", "UNITE = \"\" MAX = None MIN = 0 DEFAULT = 0 data_changed", "m.setAlignment(Qt.AlignCenter) a.setAlignment(Qt.AlignCenter) j.setSpecialValueText(\"-\") m.setSpecialValueText(\"-\") a.setSpecialValueText(\"-\") layout.addWidget(j, 0, 0) layout.addWidget(m, 0, 1) layout.addWidget(a, 0,", "return v 
if v != -1 else None class BoolEditable(QFrame): data_changed = pyqtSignal(bool)", "= \"%\" MAX = 100 DEFAULT = 0 class EurosEditable(QDoubleSpinBox): data_changed = pyqtSignal(float)", "staticmethod(formats.abstractRender.dateheure) # --------------- Numeric fields --------------- class abstractEntierEditable(QSpinBox): UNITE = \"\" MAX =", "0, 0, 0) layout.addWidget(QLabel(\"Du \")) layout.addWidget(self.debut) layout.addWidget(QLabel(\" au \")) layout.addWidget(self.fin) def on_change(self): self.data_changed.emit(*self.get_data())", "to acces advanced options. CLASS_PANEL_OPTIONS is responsible for doing the actual modification\"\"\" TITLE", "label from raw value\"\"\" def set_data(self, value): self.value = value self.setText(self.VALUE_TO_LABEL.get(self.value, \"\")) def", "= TYPES_WIDGETS[k] ASSOCIATION[k] = (v[0], v[1], v[2], t, v[3]) add_widgets_type({}, formats.ASSOCIATION) ## ------------------Custom", "is_remboursement=Booleen, reduc_speciale=Euros, acompte_recu=Euros, valeur=Euros, total=Euros, prix=Euros, date_heure_modif=DateHeure, date_reglement=Date, date_encaissement=Date, info=Texte, message=Texte, mode_paiement=ModePaiement, )", "def get_data(self): return self.text() def LimitedDefaultEditable(max_length): return type(\"LDefaultEditable\", (DefaultEditable,), {\"MAX_LENGTH\": max_length}) class OptionnalTextEditable(QFrame):", "class abstractSimpleField(QLabel): FONCTION_AFF = None TOOLTIP = None data_changed = pyqtSignal() # dummy", "NouveauTelephone(list_views.abstractNewButton): LABEL = \"Ajouter un numéro\" @staticmethod def IS_TELEPHONE(s: str): r = re.compile(r'[0-9]{9,10}')", "EurosEditable(QDoubleSpinBox): data_changed = pyqtSignal(float) def __init__(self, parent=None): super().__init__(parent) self.setMaximum(100000) self.setMinimum(-1) self.setSpecialValueText(\" \") self.setSuffix(\"€\")", "def Adresse(value, is_editable): return Texte(value, is_editable, placeholder=\"Adresse...\") def ModePaiement(value, is_editable): return 
"""Widgets for basic fields. (french language)

ASSOCIATION should be updated with custom widgets,
since common.abstractDetails will use it.
"""
import datetime
import re
from collections import defaultdict
from typing import List, Any

from PyQt5.QtCore import pyqtSignal, Qt, QPoint
from PyQt5.QtGui import QColor, QPen, QBrush, QIcon
from PyQt5.QtWidgets import (QFrame, QHBoxLayout, QPushButton, QLineEdit,
                             QLabel, QComboBox, QSpinBox, QDoubleSpinBox,
                             QCheckBox, QVBoxLayout, QGridLayout,
                             QPlainTextEdit, QCompleter, QToolTip,
                             QStyledItemDelegate)

from . import list_views, clear_layout, Icons
from ..Core import formats


class NouveauTelephone(list_views.abstractNewButton):
    LABEL = "Ajouter un numéro"

    @staticmethod
    def IS_TELEPHONE(s: str):
        r = re.compile(r'[0-9]{9,10}')
        m = r.search(s.replace(' ', ''))
        return (m is not None)

    def _clear(self):
        clear_layout(self.layout())

    def enter_edit(self):
        self._clear()
        line_layout = self.layout()
        self.entree = QLineEdit()
        self.entree.setObjectName("nouveau-numero-tel")
        self.entree.setAlignment(Qt.AlignCenter)
        self.entree.setPlaceholderText("Ajouter...")
        add = QPushButton()
        add.setIcon(QIcon(Icons.Valid))
        add.clicked.connect(self.on_add)
        self.entree.editingFinished.connect(self.on_add)
        line_layout.addWidget(self.entree)
        line_layout.addWidget(add)
        line_layout.setStretch(0, 3)
        line_layout.setStretch(1, 1)

    def on_add(self):
        num = self.entree.text()
        if self.IS_TELEPHONE(num):
            self.entree.setPlaceholderText("Ajouter...")
            self.data_changed.emit(num)
            self._clear()
            self.set_button()
        else:
            self.entree.selectAll()
            QToolTip.showText(self.entree.mapToGlobal(
                QPoint(0, 10)), "Numéro invalide")


class Tels(list_views.abstractMutableList):
    LIST_PLACEHOLDER = "Aucun numéro."
    LIST_HEADER = None
    BOUTON = NouveauTelephone

    def __init__(self, collection: list, is_editable):
        collection = self.from_list(collection)
        super().__init__(collection, is_editable)

    def on_add(self, item):
        """Convert to pseudo acces"""
        super(Tels, self).on_add(list_views.PseudoAccesCategorie(item))

    def set_data(self, collection):
        collection = self.from_list(collection)
        super(Tels, self).set_data(collection)

    def get_data(self):
        col = super(Tels, self).get_data()
        return [tel.Id for tel in col]


class Duree(QLabel):
    def __init__(self, begining, end):
        super().__init__()
        self.begining = begining
        self.end = end
        self.begining.data_changed.connect(self.set_data)
        self.end.data_changed.connect(self.set_data)
        self.set_data()

    def set_data(self, *args):
        """set_data may also be called manually to force an update"""
        db = self.begining.get_data() or formats.DATE_DEFAULT
        df = self.end.get_data() or formats.DATE_DEFAULT
        jours = max((df - db).days + 1, 0)
        self.setText(str(jours) + (jours >= 2 and " jours" or " jour"))


# -------------- Enumerations visualisation --------------
class abstractEnum(QLabel):
    VALUE_TO_LABEL = {}
    """Dict. giving label from raw value"""

    def set_data(self, value):
        self.value = value
        self.setText(self.VALUE_TO_LABEL.get(self.value, ""))

    def get_data(self):
        return self.value


class abstractEnumEditable(QComboBox):
    data_changed = pyqtSignal(object)
    VALEURS_LABELS: List = []
    """List of tuples (value, label) or None to add a separator"""

    def __init__(self, parent=None):
        super().__init__(parent)
        self.set_choix(self.VALEURS_LABELS)
        self.currentIndexChanged.connect(
            lambda i: self.data_changed.emit(self.currentData()))

    def set_choix(self, choix):
        self.places = {}
        for t in choix:
            if t:
                self.places[t[0]] = self.count()
                self.addItem(t[1], userData=t[0])
            else:
                self.insertSeparator(self.count())

    def set_data(self, value):
        if value is None:
            self.setCurrentIndex(-1)
        else:
            self.setCurrentIndex(self.places[value])
        self.data_changed.emit(self.get_data())

    def get_data(self):
        return self.currentData()


# -------------------- Commons types --------------------
class DepartementFixe(abstractEnum):
    VALUE_TO_LABEL = formats.DEPARTEMENTS


class DepartementEditable(abstractEnumEditable):
    VALEURS_LABELS = sorted((i, i + " " + v)
                            for i, v in formats.DEPARTEMENTS.items())


class SexeFixe(abstractEnum):
    VALUE_TO_LABEL = formats.SEXES


class SexeEditable(abstractEnumEditable):
    VALEURS_LABELS = sorted((k, v) for k, v in formats.SEXES.items())


class ModePaiementFixe(abstractEnum):
    VALUE_TO_LABEL = formats.MODE_PAIEMENT


class ModePaiementEditable(abstractEnumEditable):
    VALEURS_LABELS = sorted([(k, v) for k, v in formats.MODE_PAIEMENT.items()])


# ------------- Simple string-like field -------------
class abstractSimpleField(QLabel):
    FONCTION_AFF = None
    TOOLTIP = None
    data_changed = pyqtSignal()  # dummy signal

    def __init__(self, *args, **kwargs):
        super(abstractSimpleField, self).__init__(*args, **kwargs)
        if self.TOOLTIP:
            self.setToolTip(self.TOOLTIP)

    def set_data(self, value):
        self.value = value
        label = self.FONCTION_AFF(value)
        self.setText(label)

    def get_data(self):
        return self.value


class BoolFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.boolen)


class EurosFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.euros)


class PourcentFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.pourcent)


class DefaultFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.default)


class DateFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.date)


class DateHeureFixe(abstractSimpleField):
    FONCTION_AFF = staticmethod(formats.abstractRender.dateheure)


# --------------- Numeric fields ---------------
class abstractEntierEditable(QSpinBox):
    UNITE = ""
    MAX = None
    MIN = 0
    DEFAULT = 0
    data_changed = pyqtSignal(int)

    def __init__(self, parent=None):
        super().__init__(parent)
        self.setMaximum(self.MAX)
        self.setMinimum(self.MIN)
        self.setSuffix(self.UNITE)
        self.valueChanged.connect(self.data_changed.emit)
        self.setSpecialValueText(" ")

    def set_data(self, somme):
        somme = somme if somme is not None else (self.MIN - 1)
        self.setValue(somme)

    def get_data(self):
        return self.value()


class EntierEditable(abstractEntierEditable):
    MAX = 10000


class PourcentEditable(abstractEntierEditable):
    UNITE = "%"
    MAX = 100
    DEFAULT = 0


class EurosEditable(QDoubleSpinBox):
    data_changed = pyqtSignal(float)

    def __init__(self, parent=None):
        super().__init__(parent)
        self.setMaximum(100000)
        self.setMinimum(-1)
        self.setSpecialValueText(" ")
        self.setSuffix("€")
        self.valueChanged.connect(self.data_changed.emit)

    def set_data(self, somme):
        somme = somme if somme is not None else -1
        self.setValue(somme)

    def get_data(self):
        v = self.value()
        return v if v != -1 else None


class BoolEditable(QFrame):
    data_changed = pyqtSignal(bool)

    def __init__(self, parent=None):
        super().__init__(parent)
        cb = QCheckBox()
        l = QLabel()
        self.setAutoFillBackground(True)  # Pour éviter la transparence
        layout = QHBoxLayout(self)
        layout.addWidget(cb)
        layout.addWidget(l)

        def callback(b):
            l.setText(b and "Oui" or "Non")
            self.data_changed.emit(b)

        cb.clicked.connect(callback)
        self.cb, self.l = cb, l

    def set_data(self, b):
        self.cb.setChecked(b)
        self.l.setText(b and "Oui" or "Non")

    def get_data(self):
        return self.cb.isChecked()


class DefaultEditable(QLineEdit):
    data_changed = pyqtSignal(str)
    MAX_LENGTH = None

    def __init__(self, parent=None, completion=[]):
        super().__init__(parent)
        self.textChanged.connect(self.data_changed.emit)
        if completion:
            c = QCompleter(completion)
            c.setCaseSensitivity(Qt.CaseInsensitive)
            self.setCompleter(c)
        if self.MAX_LENGTH:
            self.setMaxLength(self.MAX_LENGTH)

    def set_data(self, value):
        self.setText(str(value or ""))

    def get_data(self):
        return self.text()


def LimitedDefaultEditable(max_length):
    return type("LDefaultEditable", (DefaultEditable,),
                {"MAX_LENGTH": max_length})


class OptionnalTextEditable(QFrame):
    """QCheckbox + QLineEdit"""
    data_changed = pyqtSignal(object)

    def __init__(self, parent=None):
        super(OptionnalTextEditable, self).__init__(parent=parent)
        self.active = QCheckBox()
        self.text = QLineEdit()
        self.active.clicked.connect(self.on_click)
        self.text.textChanged.connect(self.on_text_changed)
        layout = QHBoxLayout(self)
        layout.addWidget(self.active)
        layout.addWidget(self.text)

    def on_click(self):
        self.text.setEnabled(self.active.isChecked())
        self.data_changed.emit(self.get_data())

    def on_text_changed(self, text):
        is_active = bool(text.strip())
        self.active.setChecked(is_active)
        self.text.setEnabled(is_active)
        self.data_changed.emit(self.get_data())

    def get_data(self):
        text = self.text.text().strip()
        active = self.active.isChecked() and bool(text)
        return text if active else None

    def set_data(self, text: str):
        text = text or ""
        is_active = bool(text.strip())
        self.active.setChecked(is_active)
        self.text.setEnabled(is_active)
        self.text.setText(text)
        self.data_changed.emit(self.get_data())


class DateEditable(QFrame):
    data_changed = pyqtSignal(object)

    def __init__(self, parent=None):
        super().__init__(parent)
        layout = QGridLayout(self)
        layout.setContentsMargins(0, 0, 0, 0)
        j = QSpinBox()
        j.setMinimum(0)
        j.setMaximum(31)
        j.setToolTip("Jour")
        m = QSpinBox()
        m.setMinimum(0)
        m.setMaximum(12)
        m.setToolTip("Mois")
        a = QSpinBox()
        a.setMinimum(0)
        a.setMaximum(2500)
        a.setToolTip("Année")
        j.setAlignment(Qt.AlignCenter)
        m.setAlignment(Qt.AlignCenter)
        a.setAlignment(Qt.AlignCenter)
        j.setSpecialValueText("-")
        m.setSpecialValueText("-")
        a.setSpecialValueText("-")
        layout.addWidget(j, 0, 0)
        layout.addWidget(m, 0, 1)
        layout.addWidget(a, 0, 2, 1, 2)
        j.valueChanged.connect(
            lambda v: self.data_changed.emit(self.get_data()))
        m.valueChanged.connect(
            lambda v: self.data_changed.emit(self.get_data()))
        a.valueChanged.connect(
            lambda v: self.data_changed.emit(self.get_data()))
        a.editingFinished.connect(self.on_editing)
        self.ws = (a, m, j)

    def _change_year_text_color(self, is_ok):
        color = "black" if is_ok else "red"
        self.ws[0].setStyleSheet(f"color : {color}")

    def on_editing(self):
        current_year = self.ws[0].value()
        if not current_year:
            return
        self._change_year_text_color(not current_year < 100)
        self.ws[0].setValue(current_year)

    def get_data(self):
        d = [self.ws[0].value(), self.ws[1].value(), self.ws[2].value()]
        try:
            return datetime.date(*d)
        except ValueError:
            return

    def set_data(self, d):
        if d is None:
            self.ws[0].clear()
            self.ws[1].clear()
            self.ws[2].clear()
        else:
            self.ws[0].setValue(d.year)
            self.ws[1].setValue(d.month)
            self.ws[2].setValue(d.day)
        self.on_editing()


class MontantEditable(QFrame):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.setAutoFillBackground(True)
        self.val = QDoubleSpinBox()
        self.val.setMaximum(100000)
        self.par_jour = QCheckBox("Par jour")
        layout = QVBoxLayout(self)
        layout.addWidget(self.val)
        layout.addWidget(self.par_jour)

    def set_data(self, value):
        self.val.setValue(value[0])
        self.par_jour.setChecked(value[1])

    def get_data(self):
        return [self.val.value(), self.par_jour.isChecked()]


class DateRange(QFrame):
    data_changed = pyqtSignal(object, object)

    def __init__(self):
        super().__init__()
        self.debut = DateEditable()
        self.fin = DateEditable()
        self.debut.data_changed.connect(self.on_change)
        self.fin.data_changed.connect(self.on_change)
        layout = QHBoxLayout(self)
        layout.setContentsMargins(0, 0, 0, 0)
        layout.addWidget(QLabel("Du "))
        layout.addWidget(self.debut)
        layout.addWidget(QLabel(" au "))
        layout.addWidget(self.fin)

    def on_change(self):
        self.data_changed.emit(*self.get_data())

    def get_data(self):
        return self.debut.get_data(), self.fin.get_data()

    def set_data(self, v):
        v = v or [None, None]
        self.debut.set_data(v[0])
        self.fin.set_data(v[1])


class Texte(QPlainTextEdit):
    data_changed = pyqtSignal(str)

    def __init__(self, text, is_editable, placeholder="Informations complémentaires"):
        super().__init__(text)
        self.setSizeAdjustPolicy(QPlainTextEdit.AdjustToContents)
        self.setMinimumHeight(50)
        self.setMinimumWidth(150)
        self.setPlaceholderText(placeholder)
        self.setReadOnly(not is_editable)
        self.textChanged.connect(
            lambda: self.data_changed.emit(self.toPlainText()))

    def get_data(self):
        return self.toPlainText()

    def set_data(self, text):
        self.setPlainText(text)


class OptionsButton(QPushButton):
    """Button opening a window giving access to advanced options.
    CLASS_PANEL_OPTIONS is responsible for doing the actual modification"""

    TITLE = "Advanced options"
    CLASS_PANEL_OPTIONS: Any = None
    options_changed = pyqtSignal()

    def __init__(self, acces, is_editable):
        super(OptionsButton, self).__init__(self.TITLE)
        self.clicked.connect(self.show_options)
        self.acces = acces
        self.is_editable = is_editable

    def show_options(self):
        f = self.CLASS_PANEL_OPTIONS(self.acces, self.is_editable)
        if f.exec_():
            self.options_changed.emit()

    def set_data(self, *args):
        pass


###---------------------------- Wrappers---------------------------- ###
def _get_widget(classe, value):
    w = classe()
    w.set_data(value)
    return w


def Default(value, is_editable):
    return _get_widget(is_editable and DefaultEditable or DefaultFixe, value)


def Booleen(value, is_editable):
    return _get_widget(is_editable and BoolEditable or BoolFixe, value)


def Entier(entier, is_editable):
    return _get_widget(is_editable and EntierEditable or DefaultFixe, entier)


def Euros(value, is_editable):
    return _get_widget(is_editable and EurosEditable or EurosFixe, value)


def Pourcent(value, is_editable):
    return _get_widget(is_editable and PourcentEditable or PourcentFixe, value)


def Date(value, is_editable):
    return _get_widget(is_editable and DateEditable or DateFixe, value)


def Departement(value, is_editable):
    return _get_widget(is_editable and DepartementEditable or DepartementFixe,
                       value)


def Sexe(value, is_editable):
    return _get_widget(is_editable and SexeEditable or SexeFixe, value)


def Adresse(value, is_editable):
    return ...


def ModePaiement(value, is_editable):
    return _get_widget(is_editable and ModePaiementEditable or ModePaiementFixe,
                       value)


def DateHeure(value, is_editable):
    if is_editable:
        raise NotImplementedError("No editable datetime widget !")
    w = _get_widget(DateHeureFixe, value)
    return w


def OptionnalText(value, is_editable):
    return _get_widget(is_editable and OptionnalTextEditable or DefaultFixe,
                       value)


"""Correspondance field -> widget (callable)"""
TYPES_WIDGETS = defaultdict(
    lambda: Default,
    date_naissance=Date,
    departement_naissance=Departement,
    sexe=Sexe,
    tels=Tels,
    adresse=Adresse,
    date=Date,
    date_debut=Date,
    date_fin=Date,
    date_arrivee=Date,
    date_depart=Date,
    date_emission=Date,
    date_reception=Date,
    nb_places=Entier,
    nb_places_reservees=Entier,
    age_min=Entier,
    age_max=Entier,
    acquite=Booleen,
    is_acompte=Booleen,
    is_remboursement=Booleen,
    reduc_speciale=Euros,
    acompte_recu=Euros,
    valeur=Euros,
    total=Euros,
    prix=Euros,
    date_heure_modif=DateHeure,
    date_reglement=Date,
    date_encaissement=Date,
    info=Texte,
    message=Texte,
    mode_paiement=ModePaiement,
)

ASSOCIATION = {}


def add_widgets_type(type_widgets, abstract_ASSOCIATION):
    TYPES_WIDGETS.update(type_widgets)
    for k, v in abstract_ASSOCIATION.items():
        t = ...


add_widgets_type({}, formats.ASSOCIATION)


## ------------------Custom delegate ------------------ ##
class delegateAttributs(QStyledItemDelegate):
    CORRES = {"montant": MontantEditable, "mode_paiement": ModePaiementEditable,
              "valeur": EurosEditable, "description": DefaultEditable,
              "quantite": EntierEditable, "obligatoire": BoolEditable}
    """Correspondance between fields and widget classes"""

    size_hint_: tuple

    def __init__(self, parent):
        QStyledItemDelegate.__init__(self, parent)
        self.size_hint_ = None
        self.row_done_ = None

    @staticmethod
    def paint_filling_rect(option, painter, proportion):
        rect = option.rect
        painter.save()
        proportion = min(proportion, 100)
        rs, vs, bs = (30, 64, 55)  # start
        re, ve, be = ...
        t = proportion / 100
        color = QColor(
            rs + t*(re - rs), vs + t*(ve - vs), bs + t*(be - bs))
        painter.setPen(QPen(color, 0.5, Qt.SolidLine, Qt.RoundCap, Qt.RoundJoin))
        ...
        painter.restore()

    def sizeHint(self, option, index):
        if self.size_hint_ and self.size_hint_[0] == index:
            return self.size_hint_[1]
        return super().sizeHint(option, index)

    def setEditorData(self, editor, index):
        ...
        self.sizeHintChanged.emit(index)

    def createEditor(self, parent, option, index):
        field = self._get_field(index)
        other = index.data(role=Qt.UserRole)
        classe = self.CORRES[field]
        w = classe(parent, other) if other else classe(parent)
        self.size_hint_ = (index, w.sizeHint())
        self.row_done_ = index.row()
        return w

    def destroyEditor(self, editor, index):
        ...
self.currentIndexChanged.connect( lambda i: self.data_changed.emit(self.currentData())) def set_choix(self, choix): self.places", "SexeEditable or SexeFixe, value) def Adresse(value, is_editable): return Texte(value, is_editable, placeholder=\"Adresse...\") def ModePaiement(value,", "class DateFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.date) class DateHeureFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.dateheure) # --------------- Numeric", "class BoolEditable(QFrame): data_changed = pyqtSignal(bool) def __init__(self, parent=None): super().__init__(parent) cb = QCheckBox() l", "QBrush, QIcon from PyQt5.QtWidgets import (QFrame, QHBoxLayout, QPushButton, QLineEdit, QLabel, QComboBox, QSpinBox, QDoubleSpinBox,", "on_click(self): self.text.setEnabled(self.active.isChecked()) self.data_changed.emit(self.get_data()) def on_text_changed(self, text): is_active = bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) self.data_changed.emit(self.get_data()) def", "\"\" is_active = bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) self.text.setText(text) self.data_changed.emit(self.get_data()) class DateEditable(QFrame): data_changed = pyqtSignal(object)", "or DepartementFixe, value) def Sexe(value, is_editable): return _get_widget(is_editable and SexeEditable or SexeFixe, value)", "{} def add_widgets_type(type_widgets, abstract_ASSOCIATION): TYPES_WIDGETS.update(type_widgets) for k, v in abstract_ASSOCIATION.items(): t = TYPES_WIDGETS[k]", "is_editable): super(OptionsButton, self).__init__(self.TITLE) self.clicked.connect(self.show_options) self.acces = acces self.is_editable = is_editable def show_options(self): f", "field -> widget (callable)\"\"\" TYPES_WIDGETS = defaultdict( lambda: Default, date_naissance=Date, departement_naissance=Departement, sexe=Sexe, tels=Tels,", "complémentaires\"): super().__init__(text) 
self.setSizeAdjustPolicy(QPlainTextEdit.AdjustToContents) self.setMinimumHeight(50) self.setMinimumWidth(150) self.setPlaceholderText(placeholder) self.setReadOnly(not is_editable) self.textChanged.connect( lambda: self.data_changed.emit(self.toPlainText())) def get_data(self):", "if self.TOOLTIP: self.setToolTip(self.TOOLTIP) def set_data(self, value): self.value = value label = self.FONCTION_AFF(value) self.setText(label)", "is_editable: raise NotImplementedError(\"No editable datetime widget !\") w = DateHeureFixe() w.set_data(value) return w", "self.entree.selectAll() QToolTip.showText(self.entree.mapToGlobal( QPoint(0, 10)), \"Numéro invalide\") class Tels(list_views.abstractMutableList): LIST_PLACEHOLDER = \"Aucun numéro.\" LIST_HEADER", "get_data(self): return self.cb.isChecked() class DefaultEditable(QLineEdit): data_changed = pyqtSignal(str) MAX_LENGTH = None def __init__(self,", "None class BoolEditable(QFrame): data_changed = pyqtSignal(bool) def __init__(self, parent=None): super().__init__(parent) cb = QCheckBox()", "self).__init__(self.TITLE) self.clicked.connect(self.show_options) self.acces = acces self.is_editable = is_editable def show_options(self): f = self.CLASS_PANEL_OPTIONS(self.acces,", "= QSpinBox() j.setMinimum(0) j.setMaximum(31) j.setToolTip(\"Jour\") m = QSpinBox() m.setMinimum(0) m.setMaximum(12) m.setToolTip(\"Mois\") a =", "def Entier(entier, is_editable): return _get_widget(is_editable and EntierEditable or DefaultFixe, entier) def Euros(value, is_editable):", "= QHBoxLayout(self) layout.addWidget(self.active) layout.addWidget(self.text) def on_click(self): self.text.setEnabled(self.active.isChecked()) self.data_changed.emit(self.get_data()) def on_text_changed(self, text): is_active =", "self.active.isChecked() and bool(text) return text if active else None def set_data(self, text: str):", "QCheckBox() self.text = QLineEdit() self.active.clicked.connect(self.on_click) self.text.textChanged.connect(self.on_text_changed) layout = 
QHBoxLayout(self) layout.addWidget(self.active) layout.addWidget(self.text) def on_click(self):", "QSpinBox() m.setMinimum(0) m.setMaximum(12) m.setToolTip(\"Mois\") a = QSpinBox() a.setMinimum(0) a.setMaximum(2500) a.setToolTip(\"Année\") j.setAlignment(Qt.AlignCenter) m.setAlignment(Qt.AlignCenter) a.setAlignment(Qt.AlignCenter)", "editor, index): value = index.data(role=Qt.EditRole) editor.set_data(value) self.sizeHintChanged.emit(index) def createEditor(self, parent, option, index): field", "painter.setBackgroundMode(Qt.OpaqueMode) painter.setBackground(QBrush(color)) painter.setBrush(QBrush(color)) rect.setWidth(rect.width() * proportion / 100) painter.drawRoundedRect(rect, 5, 5) painter.restore() @staticmethod", "def get_data(self): return self.toPlainText() def set_data(self, text): self.setPlainText(text) class OptionsButton(QPushButton): \"\"\"Bouton to open", "a.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) a.editingFinished.connect(self.on_editing) self.ws = (a, m, j) def _change_year_text_color(self, is_ok):", "MontantEditable(QFrame): def __init__(self, parent=None): super().__init__(parent) self.setAutoFillBackground(True) self.val = QDoubleSpinBox() self.val.setMaximum(100000) self.par_jour = QCheckBox(\"Par", "/ 100) painter.drawRoundedRect(rect, 5, 5) painter.restore() @staticmethod def _get_field(index): return index.model().header[index.column()] def sizeHint(self,", "value) def Sexe(value, is_editable): return _get_widget(is_editable and SexeEditable or SexeFixe, value) def Adresse(value,", "or \" jour\")) # -------------- Enumerations vizualisation -------------- class abstractEnum(QLabel): VALUE_TO_LABEL = {}", "self.ws[1].value(), self.ws[2].value()] try: return datetime.date(*d) except ValueError: return def set_data(self, d): if d", "acces self.is_editable = is_editable def show_options(self): f = self.CLASS_PANEL_OPTIONS(self.acces, self.is_editable) if f.exec_(): self.options_changed.emit()", 
"index.data(role=Qt.UserRole) classe = self.CORRES[field] w = classe(parent, other) if other else classe(parent) self.size_hint_", "value is None: self.setCurrentIndex(-1) else: self.setCurrentIndex(self.places[value]) self.data_changed.emit(self.get_data()) def get_data(self): return self.currentData() # --------------------", "somme if somme is not None else -1 self.setValue(somme) def get_data(self): v =", "it. \"\"\" import datetime import re from collections import defaultdict from typing import", "label = self.FONCTION_AFF(value) self.setText(label) def get_data(self): return self.value class BoolFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.boolen)", "class EntierEditable(abstractEntierEditable): MAX = 10000 class PourcentEditable(abstractEntierEditable): UNITE = \"%\" MAX = 100", "on_text_changed(self, text): is_active = bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) self.data_changed.emit(self.get_data()) def get_data(self): text = self.text.text().strip()", "d is None: self.ws[0].clear() self.ws[1].clear() self.ws[2].clear() else: self.ws[0].setValue(d.year) self.ws[1].setValue(d.month) self.ws[2].setValue(d.day) self.on_editing() class MontantEditable(QFrame):", "self.par_jour = QCheckBox(\"Par jour\") layout = QVBoxLayout(self) layout.addWidget(self.val) layout.addWidget(self.par_jour) def set_data(self, value): self.val.setValue(value[0])", "None] self.debut.set_data(v[0]) self.fin.set_data(v[1]) class Texte(QPlainTextEdit): data_changed = pyqtSignal(str) def __init__(self, text, is_editable, placeholder=\"Informations", "= super(Tels, self).get_data() return [tel.Id for tel in col] class Duree(QLabel): \"\"\"Display the", "parent=None): super().__init__(parent) self.setAutoFillBackground(True) self.val = QDoubleSpinBox() self.val.setMaximum(100000) self.par_jour = QCheckBox(\"Par jour\") layout =", "classe = self.CORRES[field] w = classe(parent, other) if other else classe(parent) 
self.size_hint_ =", "= staticmethod(formats.abstractRender.date) class DateHeureFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.dateheure) # --------------- Numeric fields --------------- class", "self.size_hint_[0] == index: return self.size_hint_[1] return super().sizeHint(option, index) def setEditorData(self, editor, index): value", "use it. \"\"\" import datetime import re from collections import defaultdict from typing", "else None def set_data(self, text: str): text = text or \"\" is_active =", "= staticmethod(formats.abstractRender.dateheure) # --------------- Numeric fields --------------- class abstractEntierEditable(QSpinBox): UNITE = \"\" MAX", "= option.rect painter.save() proportion = min(proportion, 100) rs, vs, bs = (30,64,55) #", "or \"Non\") def get_data(self): return self.cb.isChecked() class DefaultEditable(QLineEdit): data_changed = pyqtSignal(str) MAX_LENGTH =", "[tel.Id for tel in col] class Duree(QLabel): \"\"\"Display the numbers of day between", "PourcentFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.pourcent) class DefaultFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.default) class DateFixe(abstractSimpleField): FONCTION_AFF =", "or \"\" is_active = bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) self.text.setText(text) self.data_changed.emit(self.get_data()) class DateEditable(QFrame): data_changed =", "import QColor, QPen, QBrush, QIcon from PyQt5.QtWidgets import (QFrame, QHBoxLayout, QPushButton, QLineEdit, QLabel,", "!= -1 else None class BoolEditable(QFrame): data_changed = pyqtSignal(bool) def __init__(self, parent=None): super().__init__(parent)", "or DefaultFixe, entier) def Euros(value, is_editable): return _get_widget(is_editable and EurosEditable or EurosFixe, value)", "datetime.date(*d) except ValueError: return def set_data(self, d): if d is None: self.ws[0].clear() self.ws[1].clear()", "and 
self.size_hint_[0] == index: return self.size_hint_[1] return super().sizeHint(option, index) def setEditorData(self, editor, index):", "None TOOLTIP = None data_changed = pyqtSignal() # dummy signal def __init__(self, *args,", "index.row() return w def destroyEditor(self, editor, index): self.size_hint_ = None super().destroyEditor(editor, index) def", "max_length}) class OptionnalTextEditable(QFrame): \"\"\"QCheckbox + QLineEdit\"\"\" data_changed = pyqtSignal(object) def __init__(self, parent=None): super(OptionnalTextEditable,", "from raw value\"\"\" def set_data(self, value): self.value = value self.setText(self.VALUE_TO_LABEL.get(self.value, \"\")) def get_data(self):", "BoolFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.boolen) class EurosFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.euros) class PourcentFixe(abstractSimpleField): FONCTION_AFF =", "= None options_changed = pyqtSignal() def __init__(self, acces, is_editable): super(OptionsButton, self).__init__(self.TITLE) self.clicked.connect(self.show_options) self.acces", "# -------------------- Commons types -------------------- class DepartementFixe(abstractEnum): VALUE_TO_LABEL = formats.DEPARTEMENTS class DepartementEditable(abstractEnumEditable): VALEURS_LABELS", "= pyqtSignal(object, object) def __init__(self): super().__init__() self.debut = DateEditable() self.fin = DateEditable() self.debut.data_changed.connect(self.on_change)", "data_changed = pyqtSignal(object) VALEURS_LABELS = [] \"\"\"List of tuples (value, label) or None", "DefaultFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.default) class DateFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.date) class DateHeureFixe(abstractSimpleField): FONCTION_AFF =", "of tuples (value, label) or None to add a separator\"\"\" def __init__(self, parent=None):", "0, 0) j = QSpinBox() j.setMinimum(0) j.setMaximum(31) 
j.setToolTip(\"Jour\") m = QSpinBox() m.setMinimum(0) m.setMaximum(12)", "or ModePaiementFixe, value) def DateHeure(value, is_editable): if is_editable: raise NotImplementedError(\"No editable datetime widget", "self.data_changed.emit(self.get_data()) class DateEditable(QFrame): data_changed = pyqtSignal(object) def __init__(self, parent=None): super().__init__(parent) layout = QGridLayout(self)", "get_data method, which return a date.date\"\"\" def __init__(self, begining, end): super().__init__() self.begining =", "or DateFixe, value) def Departement(value, is_editable): return _get_widget(is_editable and DepartementEditable or DepartementFixe, value)", "= acces self.is_editable = is_editable def show_options(self): f = self.CLASS_PANEL_OPTIONS(self.acces, self.is_editable) if f.exec_():", "SexeEditable(abstractEnumEditable): VALEURS_LABELS = sorted((k, v) for k, v in formats.SEXES.items()) class ModePaiementFixe(abstractEnum): VALUE_TO_LABEL", "return self.text() def LimitedDefaultEditable(max_length): return type(\"LDefaultEditable\", (DefaultEditable,), {\"MAX_LENGTH\": max_length}) class OptionnalTextEditable(QFrame): \"\"\"QCheckbox +", "k, v in formats.SEXES.items()) class ModePaiementFixe(abstractEnum): VALUE_TO_LABEL = formats.MODE_PAIEMENT class ModePaiementEditable(abstractEnumEditable): VALEURS_LABELS =", "a = QSpinBox() a.setMinimum(0) a.setMaximum(2500) a.setToolTip(\"Année\") j.setAlignment(Qt.AlignCenter) m.setAlignment(Qt.AlignCenter) a.setAlignment(Qt.AlignCenter) j.setSpecialValueText(\"-\") m.setSpecialValueText(\"-\") a.setSpecialValueText(\"-\") layout.addWidget(j,", "dummy signal def __init__(self, *args, **kwargs): super(abstractSimpleField, self).__init__(*args, **kwargs) if self.TOOLTIP: self.setToolTip(self.TOOLTIP) def", "__init__(self): super().__init__() self.debut = DateEditable() self.fin = DateEditable() self.debut.data_changed.connect(self.on_change) self.fin.data_changed.connect(self.on_change) layout = QHBoxLayout(self)", "def 
sizeHint(self, option, index): if self.size_hint_ and self.size_hint_[0] == index: return self.size_hint_[1] return", "def set_data(self, text): self.setPlainText(text) class OptionsButton(QPushButton): \"\"\"Bouton to open window to acces advanced", "a get_data method, which return a date.date\"\"\" def __init__(self, begining, end): super().__init__() self.begining", "VALEURS_LABELS = [] \"\"\"List of tuples (value, label) or None to add a", "widgets have to implement a get_data method, which return a date.date\"\"\" def __init__(self,", "def callback(b): l.setText(b and \"Oui\" or \"Non\") self.data_changed.emit(b) cb.clicked.connect(callback) self.cb = cb self.l", "text = text or \"\" is_active = bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) self.text.setText(text) self.data_changed.emit(self.get_data()) class", "v: self.data_changed.emit(self.get_data())) m.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) a.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) a.editingFinished.connect(self.on_editing) self.ws =", "Pour éviter la transparence layout = QHBoxLayout(self) layout.addWidget(cb) layout.addWidget(l) def callback(b): l.setText(b and", "None def set_data(self, text: str): text = text or \"\" is_active = bool(text.strip())", "info=Texte, message=Texte, mode_paiement=ModePaiement, ) ASSOCIATION = {} def add_widgets_type(type_widgets, abstract_ASSOCIATION): TYPES_WIDGETS.update(type_widgets) for k,", "departement_naissance=Departement, sexe=Sexe, tels=Tels, adresse=Adresse, date=Date, date_debut=Date, date_fin=Date, date_arrivee=Date, date_depart=Date, date_emission=Date, date_reception=Date, nb_places=Entier, nb_places_reservees=Entier,", "for t in choix: if t: self.places[t[0]] = self.count() self.addItem(t[1], userData=t[0]) else: self.insertSeparator(self.count())", "= pyqtSignal(object) VALEURS_LABELS = [] \"\"\"List of tuples (value, label) or None to", 
"DateEditable(QFrame): data_changed = pyqtSignal(object) def __init__(self, parent=None): super().__init__(parent) layout = QGridLayout(self) layout.setContentsMargins(0, 0,", "self.debut.data_changed.connect(self.on_change) self.fin.data_changed.connect(self.on_change) layout = QHBoxLayout(self) layout.setContentsMargins(0, 0, 0, 0) layout.addWidget(QLabel(\"Du \")) layout.addWidget(self.debut) layout.addWidget(QLabel(\"", "between two date widgets. These widgets have to implement a get_data method, which", "completion=[]): super().__init__(parent) self.textChanged.connect(self.data_changed.emit) if completion: c = QCompleter(completion) c.setCaseSensitivity(Qt.CaseInsensitive) self.setCompleter(c) if self.MAX_LENGTH: self.setMaxLength(self.MAX_LENGTH)", "color = \"black\" if is_ok else \"red\" self.ws[0].setStyleSheet(f\"color : {color}\") def on_editing(self): current_year", "to call set_data to manually update\"\"\" db = self.begining.get_data() or formats.DATE_DEFAULT df =", "\"mode_paiement\": ModePaiementEditable, \"valeur\": EurosEditable, \"description\": DefaultEditable, \"quantite\": EntierEditable, \"obligatoire\": BoolEditable} \"\"\"Correspondance between fields", "m.setMinimum(0) m.setMaximum(12) m.setToolTip(\"Mois\") a = QSpinBox() a.setMinimum(0) a.setMaximum(2500) a.setToolTip(\"Année\") j.setAlignment(Qt.AlignCenter) m.setAlignment(Qt.AlignCenter) a.setAlignment(Qt.AlignCenter) j.setSpecialValueText(\"-\")", "# ------------- Simple string-like field ------------- class abstractSimpleField(QLabel): FONCTION_AFF = None TOOLTIP =", "self.setSuffix(\"€\") self.valueChanged.connect(self.data_changed.emit) def set_data(self, somme): somme = somme if somme is not None", "j.setSpecialValueText(\"-\") m.setSpecialValueText(\"-\") a.setSpecialValueText(\"-\") layout.addWidget(j, 0, 0) layout.addWidget(m, 0, 1) layout.addWidget(a, 0, 2, 1,", "self.from_list(collection) super(Tels, self).set_data(collection) def get_data(self): col = super(Tels, 
self).get_data() return [tel.Id for tel", "vizualisation -------------- class abstractEnum(QLabel): VALUE_TO_LABEL = {} \"\"\"Dict. giving label from raw value\"\"\"", "except ValueError: return def set_data(self, d): if d is None: self.ws[0].clear() self.ws[1].clear() self.ws[2].clear()", "super(OptionsButton, self).__init__(self.TITLE) self.clicked.connect(self.show_options) self.acces = acces self.is_editable = is_editable def show_options(self): f =", "ve, be = (153,242,200) # end t = proportion / 100 color =", "def Pourcent(value, is_editable): return _get_widget(is_editable and PourcentEditable or PourcentFixe, value) def Date(value, is_editable):", "reduc_speciale=Euros, acompte_recu=Euros, valeur=Euros, total=Euros, prix=Euros, date_heure_modif=DateHeure, date_reglement=Date, date_encaissement=Date, info=Texte, message=Texte, mode_paiement=ModePaiement, ) ASSOCIATION", "-1 self.setValue(somme) def get_data(self): v = self.value() return v if v != -1", "collection = self.from_list(collection) super(Tels, self).set_data(collection) def get_data(self): col = super(Tels, self).get_data() return [tel.Id", "t = TYPES_WIDGETS[k] ASSOCIATION[k] = (v[0], v[1], v[2], t, v[3]) add_widgets_type({}, formats.ASSOCIATION) ##", "+ t*(be - bs)) painter.setPen(QPen(color, 0.5, Qt.SolidLine, Qt.RoundCap, Qt.RoundJoin)) painter.setBackgroundMode(Qt.OpaqueMode) painter.setBackground(QBrush(color)) painter.setBrush(QBrush(color)) rect.setWidth(rect.width()", "= 0 class EurosEditable(QDoubleSpinBox): data_changed = pyqtSignal(float) def __init__(self, parent=None): super().__init__(parent) self.setMaximum(100000) self.setMinimum(-1)", "self.setMaxLength(self.MAX_LENGTH) def set_data(self, value): self.setText(str(value or \"\")) def get_data(self): return self.text() def LimitedDefaultEditable(max_length):", "option, index): if self.size_hint_ and self.size_hint_[0] == index: return self.size_hint_[1] return super().sizeHint(option, index)", "**kwargs): super(abstractSimpleField, 
self).__init__(*args, **kwargs) if self.TOOLTIP: self.setToolTip(self.TOOLTIP) def set_data(self, value): self.value = value", "pyqtSignal(str) def __init__(self, text, is_editable, placeholder=\"Informations complémentaires\"): super().__init__(text) self.setSizeAdjustPolicy(QPlainTextEdit.AdjustToContents) self.setMinimumHeight(50) self.setMinimumWidth(150) self.setPlaceholderText(placeholder) self.setReadOnly(not", "r = re.compile(r'[0-9]{9,10}') m = r.search(s.replace(' ', '')) return (m is not None)", "\"Numéro invalide\") class Tels(list_views.abstractMutableList): LIST_PLACEHOLDER = \"Aucun numéro.\" LIST_HEADER = None BOUTON =", "{} \"\"\"Dict. giving label from raw value\"\"\" def set_data(self, value): self.value = value", "class MontantEditable(QFrame): def __init__(self, parent=None): super().__init__(parent) self.setAutoFillBackground(True) self.val = QDoubleSpinBox() self.val.setMaximum(100000) self.par_jour =", "editor.set_data(value) self.sizeHintChanged.emit(index) def createEditor(self, parent, option, index): field = self._get_field(index) other = index.data(role=Qt.UserRole)", "is_editable): return _get_widget(is_editable and DateEditable or DateFixe, value) def Departement(value, is_editable): return _get_widget(is_editable", "return _get_widget(is_editable and DateEditable or DateFixe, value) def Departement(value, is_editable): return _get_widget(is_editable and", "is_editable): return _get_widget(is_editable and EurosEditable or EurosFixe, value) def Pourcent(value, is_editable): return _get_widget(is_editable", "custom widgets, since common.abstractDetails will use it. 
\"\"\" import datetime import re from", "EurosFixe, value) def Pourcent(value, is_editable): return _get_widget(is_editable and PourcentEditable or PourcentFixe, value) def", "valeur=Euros, total=Euros, prix=Euros, date_heure_modif=DateHeure, date_reglement=Date, date_encaissement=Date, info=Texte, message=Texte, mode_paiement=ModePaiement, ) ASSOCIATION = {}", "datetime widget !\") w = DateHeureFixe() w.set_data(value) return w def OptionnalText(value, is_editable): return", "super().__init__(parent) self.setAutoFillBackground(True) self.val = QDoubleSpinBox() self.val.setMaximum(100000) self.par_jour = QCheckBox(\"Par jour\") layout = QVBoxLayout(self)", "formats.DEPARTEMENTS class DepartementEditable(abstractEnumEditable): VALEURS_LABELS = sorted((i, i + \" \" + v) for", "set_data(self, *args): \"\"\"we cant to call set_data to manually update\"\"\" db = self.begining.get_data()", "if somme is not None else -1 self.setValue(somme) def get_data(self): v = self.value()", "import formats class NouveauTelephone(list_views.abstractNewButton): LABEL = \"Ajouter un numéro\" @staticmethod def IS_TELEPHONE(s: str):", "None to add a separator\"\"\" def __init__(self, parent=None): super().__init__(parent) self.set_choix(self.VALEURS_LABELS) self.currentIndexChanged.connect( lambda i:", "enter_edit(self): self._clear() line_layout = self.layout() self.entree = QLineEdit() self.entree.setObjectName(\"nouveau-numero-tel\") self.entree.setAlignment(Qt.AlignCenter) self.entree.setPlaceholderText(\"Ajouter...\") add =", "formats class NouveauTelephone(list_views.abstractNewButton): LABEL = \"Ajouter un numéro\" @staticmethod def IS_TELEPHONE(s: str): r", "a.setMaximum(2500) a.setToolTip(\"Année\") j.setAlignment(Qt.AlignCenter) m.setAlignment(Qt.AlignCenter) a.setAlignment(Qt.AlignCenter) j.setSpecialValueText(\"-\") m.setSpecialValueText(\"-\") a.setSpecialValueText(\"-\") layout.addWidget(j, 0, 0) layout.addWidget(m, 0,", "QSpinBox() a.setMinimum(0) a.setMaximum(2500) 
a.setToolTip(\"Année\") j.setAlignment(Qt.AlignCenter) m.setAlignment(Qt.AlignCenter) a.setAlignment(Qt.AlignCenter) j.setSpecialValueText(\"-\") m.setSpecialValueText(\"-\") a.setSpecialValueText(\"-\") layout.addWidget(j, 0, 0)", "QLineEdit\"\"\" data_changed = pyqtSignal(object) def __init__(self, parent=None): super(OptionnalTextEditable, self).__init__(parent=parent) self.active = QCheckBox() self.text", "## ------------------Custom delegate ------------------ ## class delegateAttributs(QStyledItemDelegate): CORRES = {\"montant\": MontantEditable, \"mode_paiement\": ModePaiementEditable,", "Date(value, is_editable): return _get_widget(is_editable and DateEditable or DateFixe, value) def Departement(value, is_editable): return", "line_layout.addWidget(add) line_layout.setStretch(0, 3) line_layout.setStretch(1, 1) def on_add(self): num = self.entree.text() if self.IS_TELEPHONE(num): self.entree.setPlaceholderText(\"Ajouter...\")", "class abstractEnumEditable(QComboBox): data_changed = pyqtSignal(object) VALEURS_LABELS = [] \"\"\"List of tuples (value, label)", "0) j = QSpinBox() j.setMinimum(0) j.setMaximum(31) j.setToolTip(\"Jour\") m = QSpinBox() m.setMinimum(0) m.setMaximum(12) m.setToolTip(\"Mois\")", "--------------- class abstractEntierEditable(QSpinBox): UNITE = \"\" MAX = None MIN = 0 DEFAULT", "None options_changed = pyqtSignal() def __init__(self, acces, is_editable): super(OptionsButton, self).__init__(self.TITLE) self.clicked.connect(self.show_options) self.acces =", "value): self.val.setValue(value[0]) self.par_jour.setChecked(value[1]) def get_data(self): return [self.val.value(), self.par_jour.isChecked()] class DateRange(QFrame): data_changed = pyqtSignal(object,", "return self.value class BoolFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.boolen) class EurosFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.euros) class", "import List, Any from PyQt5.QtCore import pyqtSignal, Qt, QPoint 
from PyQt5.QtGui import QColor,", "self.FONCTION_AFF(value) self.setText(label) def get_data(self): return self.value class BoolFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.boolen) class EurosFixe(abstractSimpleField):", "cb.clicked.connect(callback) self.cb = cb self.l = l def set_data(self, b): b = b", "super().__init__(parent) self.set_choix(self.VALEURS_LABELS) self.currentIndexChanged.connect( lambda i: self.data_changed.emit(self.currentData())) def set_choix(self, choix): self.places = {} for", "-------------- class abstractEnum(QLabel): VALUE_TO_LABEL = {} \"\"\"Dict. giving label from raw value\"\"\" def", "day between two date widgets. These widgets have to implement a get_data method,", "k, v in abstract_ASSOCIATION.items(): t = TYPES_WIDGETS[k] ASSOCIATION[k] = (v[0], v[1], v[2], t,", "bs = (30,64,55) # start re, ve, be = (153,242,200) # end t", "widget (callable)\"\"\" TYPES_WIDGETS = defaultdict( lambda: Default, date_naissance=Date, departement_naissance=Departement, sexe=Sexe, tels=Tels, adresse=Adresse, date=Date,", "for i, v in formats.DEPARTEMENTS.items()) class SexeFixe(abstractEnum): VALUE_TO_LABEL = formats.SEXES class SexeEditable(abstractEnumEditable): VALEURS_LABELS", "line_layout.setStretch(0, 3) line_layout.setStretch(1, 1) def on_add(self): num = self.entree.text() if self.IS_TELEPHONE(num): self.entree.setPlaceholderText(\"Ajouter...\") self.data_changed.emit(num)", "return w def Default(value, is_editable): return _get_widget(is_editable and DefaultEditable or DefaultFixe, value) def", "NouveauTelephone def __init__(self, collection: list, is_editable): collection = self.from_list(collection) super().__init__(collection, is_editable) def on_add(self,", "def setEditorData(self, editor, index): value = index.data(role=Qt.EditRole) editor.set_data(value) self.sizeHintChanged.emit(index) def createEditor(self, parent, option,", "sorted([(k, v) for k, v in formats.MODE_PAIEMENT.items()]) # ------------- 
Simple string-like field -------------", "self.entree.editingFinished.connect(self.on_add) line_layout.addWidget(self.entree) line_layout.addWidget(add) line_layout.setStretch(0, 3) line_layout.setStretch(1, 1) def on_add(self): num = self.entree.text() if", "Adresse(value, is_editable): return Texte(value, is_editable, placeholder=\"Adresse...\") def ModePaiement(value, is_editable): return _get_widget(is_editable and ModePaiementEditable", "OptionsButton(QPushButton): \"\"\"Bouton to open window to acces advanced options. CLASS_PANEL_OPTIONS is responsible for", "self.setMinimumWidth(150) self.setPlaceholderText(placeholder) self.setReadOnly(not is_editable) self.textChanged.connect( lambda: self.data_changed.emit(self.toPlainText())) def get_data(self): return self.toPlainText() def set_data(self,", "------------------Custom delegate ------------------ ## class delegateAttributs(QStyledItemDelegate): CORRES = {\"montant\": MontantEditable, \"mode_paiement\": ModePaiementEditable, \"valeur\":", "value): w = classe() w.set_data(value) return w def Default(value, is_editable): return _get_widget(is_editable and", "max((df - db).days + 1, 0) self.setText(str(jours) + (jours >= 2 and \"", "actual modification\"\"\" TITLE = \"Advanced options\" CLASS_PANEL_OPTIONS:Any = None options_changed = pyqtSignal() def", "0.5, Qt.SolidLine, Qt.RoundCap, Qt.RoundJoin)) painter.setBackgroundMode(Qt.OpaqueMode) painter.setBackground(QBrush(color)) painter.setBrush(QBrush(color)) rect.setWidth(rect.width() * proportion / 100) painter.drawRoundedRect(rect,", "other = index.data(role=Qt.UserRole) classe = self.CORRES[field] w = classe(parent, other) if other else", "j.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) m.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) a.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) a.editingFinished.connect(self.on_editing)", "painter.setBrush(QBrush(color)) 
rect.setWidth(rect.width() * proportion / 100) painter.drawRoundedRect(rect, 5, 5) painter.restore() @staticmethod def _get_field(index):", "try: return datetime.date(*d) except ValueError: return def set_data(self, d): if d is None:", "self.valueChanged.connect(self.data_changed.emit) self.setSpecialValueText(\" \") def set_data(self, somme): somme = somme if somme is not", "= QCheckBox() self.text = QLineEdit() self.active.clicked.connect(self.on_click) self.text.textChanged.connect(self.on_text_changed) layout = QHBoxLayout(self) layout.addWidget(self.active) layout.addWidget(self.text) def", "QSpinBox, QDoubleSpinBox, QCheckBox, QCompleter, QGridLayout, QVBoxLayout, QPlainTextEdit, QStyledItemDelegate, QToolTip) from . import list_views,", "text): is_active = bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) self.data_changed.emit(self.get_data()) def get_data(self): text = self.text.text().strip() active", "def on_add(self): num = self.entree.text() if self.IS_TELEPHONE(num): self.entree.setPlaceholderText(\"Ajouter...\") self.data_changed.emit(num) self._clear() self.set_button() else: self.entree.selectAll()", "widgets to visualize and modify basic fields. (french language) ASSOCIATION should be updated", "LABEL = \"Ajouter un numéro\" @staticmethod def IS_TELEPHONE(s: str): r = re.compile(r'[0-9]{9,10}') m", "def get_data(self): d = [self.ws[0].value(), self.ws[1].value(), self.ws[2].value()] try: return datetime.date(*d) except ValueError: return", "= classe() w.set_data(value) return w def Default(value, is_editable): return _get_widget(is_editable and DefaultEditable or", "def Departement(value, is_editable): return _get_widget(is_editable and DepartementEditable or DepartementFixe, value) def Sexe(value, is_editable):", "FONCTION_AFF = staticmethod(formats.abstractRender.dateheure) # --------------- Numeric fields --------------- class abstractEntierEditable(QSpinBox): UNITE = \"\"", "VALUE_TO_LABEL = {} \"\"\"Dict. 
giving the label for a raw value\"\"\" def set_data(self, value): self.value
_get_widget(is_editable and DefaultEditable or DefaultFixe, value) def Booleen(value, is_editable):", "def add_widgets_type(type_widgets, abstract_ASSOCIATION): TYPES_WIDGETS.update(type_widgets) for k, v in abstract_ASSOCIATION.items(): t = TYPES_WIDGETS[k] ASSOCIATION[k]", "PourcentEditable(abstractEntierEditable): UNITE = \"%\" MAX = 100 DEFAULT = 0 class EurosEditable(QDoubleSpinBox): data_changed", "self.entree.text() if self.IS_TELEPHONE(num): self.entree.setPlaceholderText(\"Ajouter...\") self.data_changed.emit(num) self._clear() self.set_button() else: self.entree.selectAll() QToolTip.showText(self.entree.mapToGlobal( QPoint(0, 10)), \"Numéro", "start re, ve, be = (153,242,200) # end t = proportion / 100", "and EntierEditable or DefaultFixe, entier) def Euros(value, is_editable): return _get_widget(is_editable and EurosEditable or", "_get_widget(is_editable and DepartementEditable or DepartementFixe, value) def Sexe(value, is_editable): return _get_widget(is_editable and SexeEditable", "value = index.data(role=Qt.EditRole) editor.set_data(value) self.sizeHintChanged.emit(index) def createEditor(self, parent, option, index): field = self._get_field(index)", "w = classe(parent, other) if other else classe(parent) self.size_hint_ = (index, w.sizeHint()) self.row_done_", "super().sizeHint(option, index) def setEditorData(self, editor, index): value = index.data(role=Qt.EditRole) editor.set_data(value) self.sizeHintChanged.emit(index) def createEditor(self,", "if f.exec_(): self.options_changed.emit() def set_data(self, *args): pass ###---------------------------- Wrappers---------------------------- ### def _get_widget(classe, value):", "def _get_widget(classe, value): w = classe() w.set_data(value) return w def Default(value, is_editable): return", "list, is_editable): collection = self.from_list(collection) super().__init__(collection, is_editable) def on_add(self, item): \"\"\"Convert to pseuso", "def set_data(self, somme): somme = somme if somme is not None 
else (self.MIN", "Icons from ..Core import formats class NouveauTelephone(list_views.abstractNewButton): LABEL = \"Ajouter un numéro\" @staticmethod", "m, j) def _change_year_text_color(self, is_ok): color = \"black\" if is_ok else \"red\" self.ws[0].setStyleSheet(f\"color", "for doing the actual modification\"\"\" TITLE = \"Advanced options\" CLASS_PANEL_OPTIONS:Any = None options_changed", "be = (153,242,200) # end t = proportion / 100 color = QColor(", "\") def set_data(self, somme): somme = somme if somme is not None else", "{\"MAX_LENGTH\": max_length}) class OptionnalTextEditable(QFrame): \"\"\"QCheckbox + QLineEdit\"\"\" data_changed = pyqtSignal(object) def __init__(self, parent=None):", "+ t*(ve - vs), bs + t*(be - bs)) painter.setPen(QPen(color, 0.5, Qt.SolidLine, Qt.RoundCap,", "[] \"\"\"List of tuples (value, label) or None to add a separator\"\"\" def", "100) rs, vs, bs = (30,64,55) # start re, ve, be = (153,242,200)", "= self.layout() self.entree = QLineEdit() self.entree.setObjectName(\"nouveau-numero-tel\") self.entree.setAlignment(Qt.AlignCenter) self.entree.setPlaceholderText(\"Ajouter...\") add = QPushButton() add.setIcon(QIcon(Icons.Valid)) add.clicked.connect(self.on_add)", "\"description\": DefaultEditable, \"quantite\": EntierEditable, \"obligatoire\": BoolEditable} \"\"\"Correspondance between fields and widget classes\"\"\" size_hint_:", "set_data(self, collection): collection = self.from_list(collection) super(Tels, self).set_data(collection) def get_data(self): col = super(Tels, self).get_data()", "class BoolFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.boolen) class EurosFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.euros) class PourcentFixe(abstractSimpleField): FONCTION_AFF", "should be updated with custom widgets, since common.abstractDetails will use it. 
\"\"\" import", "SexeFixe, value) def Adresse(value, is_editable): return Texte(value, is_editable, placeholder=\"Adresse...\") def ModePaiement(value, is_editable): return", "or \"\")) def get_data(self): return self.text() def LimitedDefaultEditable(max_length): return type(\"LDefaultEditable\", (DefaultEditable,), {\"MAX_LENGTH\": max_length})", "- db).days + 1, 0) self.setText(str(jours) + (jours >= 2 and \" jours\"", "FONCTION_AFF = staticmethod(formats.abstractRender.date) class DateHeureFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.dateheure) # --------------- Numeric fields ---------------", "= text or \"\" is_active = bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) self.text.setText(text) self.data_changed.emit(self.get_data()) class DateEditable(QFrame):", "parent=None): super().__init__(parent) cb = QCheckBox() l = QLabel() self.setAutoFillBackground(True) # Pour éviter la", "data_changed = pyqtSignal(bool) def __init__(self, parent=None): super().__init__(parent) cb = QCheckBox() l = QLabel()", "None def __init__(self, parent=None, completion=[]): super().__init__(parent) self.textChanged.connect(self.data_changed.emit) if completion: c = QCompleter(completion) c.setCaseSensitivity(Qt.CaseInsensitive)", "Simple string-like field ------------- class abstractSimpleField(QLabel): FONCTION_AFF = None TOOLTIP = None data_changed", "QHBoxLayout, QPushButton, QLineEdit, QLabel, QComboBox, QSpinBox, QDoubleSpinBox, QCheckBox, QCompleter, QGridLayout, QVBoxLayout, QPlainTextEdit, QStyledItemDelegate,", "v in formats.DEPARTEMENTS.items()) class SexeFixe(abstractEnum): VALUE_TO_LABEL = formats.SEXES class SexeEditable(abstractEnumEditable): VALEURS_LABELS = sorted((k,", "sizeHint(self, option, index): if self.size_hint_ and self.size_hint_[0] == index: return self.size_hint_[1] return super().sizeHint(option,", "def _clear(self): clear_layout(self.layout()) def enter_edit(self): self._clear() 
line_layout = self.layout() self.entree = QLineEdit() self.entree.setObjectName(\"nouveau-numero-tel\")", "= self.FONCTION_AFF(value) self.setText(label) def get_data(self): return self.value class BoolFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.boolen) class", "Default(value, is_editable): return _get_widget(is_editable and DefaultEditable or DefaultFixe, value) def Booleen(value, is_editable): return", "w = classe() w.set_data(value) return w def Default(value, is_editable): return _get_widget(is_editable and DefaultEditable", "date_encaissement=Date, info=Texte, message=Texte, mode_paiement=ModePaiement, ) ASSOCIATION = {} def add_widgets_type(type_widgets, abstract_ASSOCIATION): TYPES_WIDGETS.update(type_widgets) for", "QColor, QPen, QBrush, QIcon from PyQt5.QtWidgets import (QFrame, QHBoxLayout, QPushButton, QLineEdit, QLabel, QComboBox,", "\")) layout.addWidget(self.fin) def on_change(self): self.data_changed.emit(*self.get_data()) def get_data(self): return self.debut.get_data(), self.fin.get_data() def set_data(self, v):", "'')) return (m is not None) def _clear(self): clear_layout(self.layout()) def enter_edit(self): self._clear() line_layout", "Departement(value, is_editable): return _get_widget(is_editable and DepartementEditable or DepartementFixe, value) def Sexe(value, is_editable): return", "self.val.setMaximum(100000) self.par_jour = QCheckBox(\"Par jour\") layout = QVBoxLayout(self) layout.addWidget(self.val) layout.addWidget(self.par_jour) def set_data(self, value):", "w.set_data(value) return w def OptionnalText(value, is_editable): return _get_widget(is_editable and OptionnalTextEditable or DefaultFixe, value)", "t in choix: if t: self.places[t[0]] = self.count() self.addItem(t[1], userData=t[0]) else: self.insertSeparator(self.count()) def", "QGridLayout(self) layout.setContentsMargins(0, 0, 0, 0) j = QSpinBox() j.setMinimum(0) j.setMaximum(31) j.setToolTip(\"Jour\") m =", "somme): somme = somme if somme is not 
None else -1 self.setValue(somme) def", "painter.setBackground(QBrush(color)) painter.setBrush(QBrush(color)) rect.setWidth(rect.width() * proportion / 100) painter.drawRoundedRect(rect, 5, 5) painter.restore() @staticmethod def", "if self.size_hint_ and self.size_hint_[0] == index: return self.size_hint_[1] return super().sizeHint(option, index) def setEditorData(self,", "super().__init__(parent) cb = QCheckBox() l = QLabel() self.setAutoFillBackground(True) # Pour éviter la transparence", "total=Euros, prix=Euros, date_heure_modif=DateHeure, date_reglement=Date, date_encaissement=Date, info=Texte, message=Texte, mode_paiement=ModePaiement, ) ASSOCIATION = {} def", "self.setCompleter(c) if self.MAX_LENGTH: self.setMaxLength(self.MAX_LENGTH) def set_data(self, value): self.setText(str(value or \"\")) def get_data(self): return", "< 100) self.ws[0].setValue(current_year) def get_data(self): d = [self.ws[0].value(), self.ws[1].value(), self.ws[2].value()] try: return datetime.date(*d)", "m = r.search(s.replace(' ', '')) return (m is not None) def _clear(self): clear_layout(self.layout())", "= None data_changed = pyqtSignal() # dummy signal def __init__(self, *args, **kwargs): super(abstractSimpleField,", "index.model().header[index.column()] def sizeHint(self, option, index): if self.size_hint_ and self.size_hint_[0] == index: return self.size_hint_[1]", "__init__(self, begining, end): super().__init__() self.begining = begining self.end = end self.begining.data_changed.connect(self.set_data) self.end.data_changed.connect(self.set_data) self.set_data()", "= None @staticmethod def paint_filling_rect(option, painter, proportion): rect = option.rect painter.save() proportion =", "= QLabel() self.setAutoFillBackground(True) # Pour éviter la transparence layout = QHBoxLayout(self) layout.addWidget(cb) layout.addWidget(l)", "def on_editing(self): current_year = self.ws[0].value() if not current_year: return self._change_year_text_color(not current_year < 100)", "from 
PyQt5.QtCore import pyqtSignal, Qt, QPoint from PyQt5.QtGui import QColor, QPen, QBrush, QIcon", "super().__init__(parent) self.setMaximum(100000) self.setMinimum(-1) self.setSpecialValueText(\" \") self.setSuffix(\"€\") self.valueChanged.connect(self.data_changed.emit) def set_data(self, somme): somme = somme", "= b or False self.cb.setChecked(b) self.l.setText(b and \"Oui\" or \"Non\") def get_data(self): return", "value) def Booleen(value, is_editable): return _get_widget(is_editable and BoolEditable or BoolFixe, value) def Entier(entier,", "- bs)) painter.setPen(QPen(color, 0.5, Qt.SolidLine, Qt.RoundCap, Qt.RoundJoin)) painter.setBackgroundMode(Qt.OpaqueMode) painter.setBackground(QBrush(color)) painter.setBrush(QBrush(color)) rect.setWidth(rect.width() * proportion", "la transparence layout = QHBoxLayout(self) layout.addWidget(cb) layout.addWidget(l) def callback(b): l.setText(b and \"Oui\" or", "layout = QHBoxLayout(self) layout.setContentsMargins(0, 0, 0, 0) layout.addWidget(QLabel(\"Du \")) layout.addWidget(self.debut) layout.addWidget(QLabel(\" au \"))", "= staticmethod(formats.abstractRender.pourcent) class DefaultFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.default) class DateFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.date) class", "class Tels(list_views.abstractMutableList): LIST_PLACEHOLDER = \"Aucun numéro.\" LIST_HEADER = None BOUTON = NouveauTelephone def", "FONCTION_AFF = staticmethod(formats.abstractRender.boolen) class EurosFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.euros) class PourcentFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.pourcent)", "[self.ws[0].value(), self.ws[1].value(), self.ws[2].value()] try: return datetime.date(*d) except ValueError: return def set_data(self, d): if", "numéro.\" LIST_HEADER = None BOUTON = NouveauTelephone def __init__(self, collection: list, is_editable): collection", 
"tels=Tels, adresse=Adresse, date=Date, date_debut=Date, date_fin=Date, date_arrivee=Date, date_depart=Date, date_emission=Date, date_reception=Date, nb_places=Entier, nb_places_reservees=Entier, age_min=Entier, age_max=Entier,", "class PourcentEditable(abstractEntierEditable): UNITE = \"%\" MAX = 100 DEFAULT = 0 class EurosEditable(QDoubleSpinBox):", "layout.addWidget(QLabel(\" au \")) layout.addWidget(self.fin) def on_change(self): self.data_changed.emit(*self.get_data()) def get_data(self): return self.debut.get_data(), self.fin.get_data() def", "= is_editable def show_options(self): f = self.CLASS_PANEL_OPTIONS(self.acces, self.is_editable) if f.exec_(): self.options_changed.emit() def set_data(self,", "super(OptionnalTextEditable, self).__init__(parent=parent) self.active = QCheckBox() self.text = QLineEdit() self.active.clicked.connect(self.on_click) self.text.textChanged.connect(self.on_text_changed) layout = QHBoxLayout(self)", "is_editable): return _get_widget(is_editable and BoolEditable or BoolFixe, value) def Entier(entier, is_editable): return _get_widget(is_editable", "widget !\") w = DateHeureFixe() w.set_data(value) return w def OptionnalText(value, is_editable): return _get_widget(is_editable", "data_changed = pyqtSignal(str) def __init__(self, text, is_editable, placeholder=\"Informations complémentaires\"): super().__init__(text) self.setSizeAdjustPolicy(QPlainTextEdit.AdjustToContents) self.setMinimumHeight(50) self.setMinimumWidth(150)", "QColor( rs + t*(re - rs), vs + t*(ve - vs), bs +", "= None MIN = 0 DEFAULT = 0 data_changed = pyqtSignal(int) def __init__(self,", "Enumerations vizualisation -------------- class abstractEnum(QLabel): VALUE_TO_LABEL = {} \"\"\"Dict. 
giving the label for a raw
j.setToolTip(\"Jour\") m = QSpinBox() m.setMinimum(0) m.setMaximum(12) m.setToolTip(\"Mois\") a = QSpinBox() a.setMinimum(0) a.setMaximum(2500)", "not current_year: return self._change_year_text_color(not current_year < 100) self.ws[0].setValue(current_year) def get_data(self): d = [self.ws[0].value(),", "v in abstract_ASSOCIATION.items(): t = TYPES_WIDGETS[k] ASSOCIATION[k] = (v[0], v[1], v[2], t, v[3])", "\"%\" MAX = 100 DEFAULT = 0 class EurosEditable(QDoubleSpinBox): data_changed = pyqtSignal(float) def", "self.data_changed.emit(self.get_data()) def on_text_changed(self, text): is_active = bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) self.data_changed.emit(self.get_data()) def get_data(self): text", "is_active = bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) self.data_changed.emit(self.get_data()) def get_data(self): text = self.text.text().strip() active =", "get_data(self): return self.debut.get_data(), self.fin.get_data() def set_data(self, v): v = v or [None, None]", "def __init__(self): super().__init__() self.debut = DateEditable() self.fin = DateEditable() self.debut.data_changed.connect(self.on_change) self.fin.data_changed.connect(self.on_change) layout =", "def set_data(self, b): b = b or False self.cb.setChecked(b) self.l.setText(b and \"Oui\" or", "return _get_widget(is_editable and BoolEditable or BoolFixe, value) def Entier(entier, is_editable): return _get_widget(is_editable and", "from ..Core import formats class NouveauTelephone(list_views.abstractNewButton): LABEL = \"Ajouter un numéro\" @staticmethod def", "bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) self.text.setText(text) self.data_changed.emit(self.get_data()) class DateEditable(QFrame): data_changed = pyqtSignal(object) def __init__(self, parent=None):", "formats.DATE_DEFAULT df = self.end.get_data() or formats.DATE_DEFAULT jours = max((df - db).days + 1,", "not None else 
(self.MIN - 1) self.setValue(somme) def get_data(self): return self.value() class EntierEditable(abstractEntierEditable):", "QPen, QBrush, QIcon from PyQt5.QtWidgets import (QFrame, QHBoxLayout, QPushButton, QLineEdit, QLabel, QComboBox, QSpinBox,", "prix=Euros, date_heure_modif=DateHeure, date_reglement=Date, date_encaissement=Date, info=Texte, message=Texte, mode_paiement=ModePaiement, ) ASSOCIATION = {} def add_widgets_type(type_widgets,", "= pyqtSignal(str) MAX_LENGTH = None def __init__(self, parent=None, completion=[]): super().__init__(parent) self.textChanged.connect(self.data_changed.emit) if completion:", "Pourcent(value, is_editable): return _get_widget(is_editable and PourcentEditable or PourcentFixe, value) def Date(value, is_editable): return", "collection: list, is_editable): collection = self.from_list(collection) super().__init__(collection, is_editable) def on_add(self, item): \"\"\"Convert to", "destroyEditor(self, editor, index): self.size_hint_ = None super().destroyEditor(editor, index) def setModelData(self, editor, model, index):", "j) def _change_year_text_color(self, is_ok): color = \"black\" if is_ok else \"red\" self.ws[0].setStyleSheet(f\"color :", "*args): pass ###---------------------------- Wrappers---------------------------- ### def _get_widget(classe, value): w = classe() w.set_data(value) return", "= staticmethod(formats.abstractRender.default) class DateFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.date) class DateHeureFixe(abstractSimpleField): FONCTION_AFF = staticmethod(formats.abstractRender.dateheure) #", "to open window to acces advanced options. 
CLASS_PANEL_OPTIONS is responsible for doing the", "abstract_ASSOCIATION): TYPES_WIDGETS.update(type_widgets) for k, v in abstract_ASSOCIATION.items(): t = TYPES_WIDGETS[k] ASSOCIATION[k] = (v[0],", "self.value class abstractEnumEditable(QComboBox): data_changed = pyqtSignal(object) VALEURS_LABELS = [] \"\"\"List of tuples (value,", "\"Ajouter un numéro\" @staticmethod def IS_TELEPHONE(s: str): r = re.compile(r'[0-9]{9,10}') m = r.search(s.replace('", "\"Aucun numéro.\" LIST_HEADER = None BOUTON = NouveauTelephone def __init__(self, collection: list, is_editable):", "QCompleter, QGridLayout, QVBoxLayout, QPlainTextEdit, QStyledItemDelegate, QToolTip) from . import list_views, clear_layout, Icons from", "self.end.get_data() or formats.DATE_DEFAULT jours = max((df - db).days + 1, 0) self.setText(str(jours) +", "or PourcentFixe, value) def Date(value, is_editable): return _get_widget(is_editable and DateEditable or DateFixe, value)", "painter.save() proportion = min(proportion, 100) rs, vs, bs = (30,64,55) # start re,", "Qt.SolidLine, Qt.RoundCap, Qt.RoundJoin)) painter.setBackgroundMode(Qt.OpaqueMode) painter.setBackground(QBrush(color)) painter.setBrush(QBrush(color)) rect.setWidth(rect.width() * proportion / 100) painter.drawRoundedRect(rect, 5,", "== index: return self.size_hint_[1] return super().sizeHint(option, index) def setEditorData(self, editor, index): value =", "rs + t*(re - rs), vs + t*(ve - vs), bs + t*(be", "text: str): text = text or \"\" is_active = bool(text.strip()) self.active.setChecked(is_active) self.text.setEnabled(is_active) self.text.setText(text)", "ASSOCIATION[k] = (v[0], v[1], v[2], t, v[3]) add_widgets_type({}, formats.ASSOCIATION) ## ------------------Custom delegate ------------------", "self.ws[0].clear() self.ws[1].clear() self.ws[2].clear() else: self.ws[0].setValue(d.year) self.ws[1].setValue(d.month) self.ws[2].setValue(d.day) self.on_editing() class MontantEditable(QFrame): def __init__(self, parent=None):", 
"self.text.textChanged.connect(self.on_text_changed) layout = QHBoxLayout(self) layout.addWidget(self.active) layout.addWidget(self.text) def on_click(self): self.text.setEnabled(self.active.isChecked()) self.data_changed.emit(self.get_data()) def on_text_changed(self, text):", "ASSOCIATION should be updated with custom widgets, since common.abstractDetails will use it. \"\"\"", "in formats.MODE_PAIEMENT.items()]) # ------------- Simple string-like field ------------- class abstractSimpleField(QLabel): FONCTION_AFF = None", "OptionnalTextEditable(QFrame): \"\"\"QCheckbox + QLineEdit\"\"\" data_changed = pyqtSignal(object) def __init__(self, parent=None): super(OptionnalTextEditable, self).__init__(parent=parent) self.active", "separator\"\"\" def __init__(self, parent=None): super().__init__(parent) self.set_choix(self.VALEURS_LABELS) self.currentIndexChanged.connect( lambda i: self.data_changed.emit(self.currentData())) def set_choix(self, choix):", "2, 1, 2) j.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) m.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) a.valueChanged.connect( lambda", "layout.addWidget(cb) layout.addWidget(l) def callback(b): l.setText(b and \"Oui\" or \"Non\") self.data_changed.emit(b) cb.clicked.connect(callback) self.cb =", "self.addItem(t[1], userData=t[0]) else: self.insertSeparator(self.count()) def set_data(self, value): if value is None: self.setCurrentIndex(-1) else:", "j.setMinimum(0) j.setMaximum(31) j.setToolTip(\"Jour\") m = QSpinBox() m.setMinimum(0) m.setMaximum(12) m.setToolTip(\"Mois\") a = QSpinBox() a.setMinimum(0)", "date_reglement=Date, date_encaissement=Date, info=Texte, message=Texte, mode_paiement=ModePaiement, ) ASSOCIATION = {} def add_widgets_type(type_widgets, abstract_ASSOCIATION): TYPES_WIDGETS.update(type_widgets)", "self.size_hint_[1] return super().sizeHint(option, index) def setEditorData(self, editor, index): value = index.data(role=Qt.EditRole) 
editor.set_data(value) self.sizeHintChanged.emit(index)", "def on_add(self, item): \"\"\"Convert to pseuso acces\"\"\" super(Tels, self).on_add(list_views.PseudoAccesCategorie(item)) def set_data(self, collection): collection", "call set_data to manually update\"\"\" db = self.begining.get_data() or formats.DATE_DEFAULT df = self.end.get_data()", "editor, index): self.size_hint_ = None super().destroyEditor(editor, index) def setModelData(self, editor, model, index): value", "class DateEditable(QFrame): data_changed = pyqtSignal(object) def __init__(self, parent=None): super().__init__(parent) layout = QGridLayout(self) layout.setContentsMargins(0,", "= pyqtSignal(object) def __init__(self, parent=None): super(OptionnalTextEditable, self).__init__(parent=parent) self.active = QCheckBox() self.text = QLineEdit()", "color = QColor( rs + t*(re - rs), vs + t*(ve - vs),", "return w def OptionnalText(value, is_editable): return _get_widget(is_editable and OptionnalTextEditable or DefaultFixe, value) \"\"\"Correspondance", "\"black\" if is_ok else \"red\" self.ws[0].setStyleSheet(f\"color : {color}\") def on_editing(self): current_year = self.ws[0].value()", "layout.addWidget(m, 0, 1) layout.addWidget(a, 0, 2, 1, 2) j.valueChanged.connect( lambda v: self.data_changed.emit(self.get_data())) m.valueChanged.connect(", "data_changed = pyqtSignal(float) def __init__(self, parent=None): super().__init__(parent) self.setMaximum(100000) self.setMinimum(-1) self.setSpecialValueText(\" \") self.setSuffix(\"€\") self.valueChanged.connect(self.data_changed.emit)", "def LimitedDefaultEditable(max_length): return type(\"LDefaultEditable\", (DefaultEditable,), {\"MAX_LENGTH\": max_length}) class OptionnalTextEditable(QFrame): \"\"\"QCheckbox + QLineEdit\"\"\" data_changed", "date_reception=Date, nb_places=Entier, nb_places_reservees=Entier, age_min=Entier, age_max=Entier, acquite=Booleen, is_acompte=Booleen, is_remboursement=Booleen, reduc_speciale=Euros, acompte_recu=Euros, 
valeur=Euros, total=Euros, prix=Euros,", "a.setMinimum(0) a.setMaximum(2500) a.setToolTip(\"Année\") j.setAlignment(Qt.AlignCenter) m.setAlignment(Qt.AlignCenter) a.setAlignment(Qt.AlignCenter) j.setSpecialValueText(\"-\") m.setSpecialValueText(\"-\") a.setSpecialValueText(\"-\") layout.addWidget(j, 0, 0) layout.addWidget(m,", "super().__init__(collection, is_editable) def on_add(self, item): \"\"\"Convert to pseuso acces\"\"\" super(Tels, self).on_add(list_views.PseudoAccesCategorie(item)) def set_data(self,", "a.setSpecialValueText(\"-\") layout.addWidget(j, 0, 0) layout.addWidget(m, 0, 1) layout.addWidget(a, 0, 2, 1, 2) j.valueChanged.connect(", "100 color = QColor( rs + t*(re - rs), vs + t*(ve -", "jours = max((df - db).days + 1, 0) self.setText(str(jours) + (jours >= 2", "date_fin=Date, date_arrivee=Date, date_depart=Date, date_emission=Date, date_reception=Date, nb_places=Entier, nb_places_reservees=Entier, age_min=Entier, age_max=Entier, acquite=Booleen, is_acompte=Booleen, is_remboursement=Booleen, reduc_speciale=Euros,", "= sorted([(k, v) for k, v in formats.MODE_PAIEMENT.items()]) # ------------- Simple string-like field", "not None) def _clear(self): clear_layout(self.layout()) def enter_edit(self): self._clear() line_layout = self.layout() self.entree =" ]
from flask import Blueprint, render_template
import matplotlib.pyplot as plt
import datetime

from settings_database import cursor
from functions import get_total_amount, get_number_month

charts_route = Blueprint("charts", __name__)


# Map a receipt date to the Russian name of its month
# ("Январь" = January, ..., "Декабрь" = December).
def get_name_month_from_date(date_time):
    date_without_dash = date_time.replace("-", "")
    number_month = datetime.datetime.strptime(date_without_dash, "%Y%m%d").date().month
    if number_month == 1:
        return "Январь"
    if number_month == 2:
        return "Февраль"
    if number_month == 3:
        return "Март"
    if number_month == 4:
        return "Апрель"
    if number_month == 5:
        return "Май"
    if number_month == 6:
        return "Июнь"
    if number_month == 7:
        return "Июль"
    if number_month == 8:
        return "Август"
    if number_month == 9:
        return "Сентябрь"
    if number_month == 10:
        return "Октябрь"
    if number_month == 11:
        return "Ноябрь"
    if number_month == 12:
        return "Декабрь"


@charts_route.route("/charts")
def charts():
    title = "Графики"  # "Charts"
    cursor.execute("SELECT * FROM receipt ORDER BY date_receipt")
    product_information = cursor.fetchall()
    listing_name_months = []
    month_listing = []
    number_month = []
    amount_total = []
    for date in product_information:
        name_month = get_name_month_from_date(str(date[0]))
        listing_name_months.append(name_month)
        number_month.append(get_number_month(str(date[0])))
    for number in set(number_month):
        cursor.execute(
            "SELECT total_sum FROM receipt"
            " WHERE (extract(month from date_receipt)=%s) GROUP BY total_sum",
            (number,))
        product_total_sum = cursor.fetchall()
        amount_total.append(get_total_amount(product_total_sum))
    for item in set(listing_name_months):
        month_listing.append(item)
    month_listing.sort(reverse=True)
    # Caution: iterating over set() has no guaranteed order, so the totals in
    # amount_total are not necessarily aligned with the reverse-sorted month
    # names used as the bar-chart index below.
    dict_name_months = list(set(listing_name_months))
    dict_name_months.sort(reverse=True)
    index = dict_name_months
    values = amount_total
    plt.bar(index, values)
    plt.title("Расходы по месяцам")  # "Expenses by month"
    plt.savefig("static/img/plot_monthly_expenses.png")
    return render_template("charts.html",
                           plot_monthly_expenses="static/img/plot_monthly_expenses.png",
                           title=title)
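The twelve-branch lookup in `get_name_month_from_date` can be sketched more compactly with a tuple indexed by month number. A minimal sketch; the `RU_MONTHS` name and `name_month_from_date` function are illustrative, not part of the project:

```python
import datetime

# Russian month names, index 0 = January ("Январь").
RU_MONTHS = ("Январь", "Февраль", "Март", "Апрель", "Май", "Июнь",
             "Июль", "Август", "Сентябрь", "Октябрь", "Ноябрь", "Декабрь")


def name_month_from_date(date_time: str) -> str:
    """Parse a 'YYYY-MM-DD' date string and return the Russian month name."""
    month = datetime.datetime.strptime(date_time.replace("-", ""), "%Y%m%d").month
    return RU_MONTHS[month - 1]


print(name_month_from_date("2021-07-15"))  # → Июль
```

The tuple lookup behaves the same as the if-chain, but adding or correcting a name touches one data literal instead of a branch.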
"""Describes GaleraNode class."""
from contextlib import contextmanager

import pymysql
from pymysql.cursors import DictCursor

from proxysql_tools import execute


class GaleraNodeState(object):  # pylint: disable=too-few-public-methods
    """State of Galera node http://bit.ly/2r1tUGB """
    PRIMARY = 1
    JOINER = 2
    JOINED = 3
    SYNCED = 4
    DONOR = 5


class GaleraNode(object):
    """
    GaleraNode class describes a single node in Galera Cluster.

    :param host: hostname of the node.
    :param port: port to connect to.
    :param user: MySQL username to connect to the node.
    :param password: <PASSWORD>.
    """
    def __init__(self, host, port=3306, user='root', password=<PASSWORD>):
        self.host = host
        self.port = port
        self.user = user
        self.password = password

    @property
    def wsrep_cluster_state_uuid(self):
        """Provides the current State UUID. This is a unique identifier
        for the current state of the cluster and the sequence of changes
        it undergoes.
        """
        return self._status('wsrep_cluster_state_uuid')

    @property
    def wsrep_cluster_status(self):
        """Status of this cluster component. That is, whether the node is
        part of a ``PRIMARY`` or ``NON_PRIMARY`` component."""
        return self._status('wsrep_cluster_status')

    @property
    def wsrep_local_state(self):
        """Internal Galera Cluster FSM state number."""
        return int(self._status('wsrep_local_state'))

    @property
    def wsrep_cluster_name(self):
        """The logical cluster name for the node."""
        result = self.execute("SELECT @@wsrep_cluster_name")
        return result[0]['@@wsrep_cluster_name']

    def execute(self, query, *args):
        """Execute query in Galera Node.

        :param query: Query to execute.
        :type query: str
        :return: Query result or None if the query is not supposed to
            return result.
        :rtype: dict
        """
        with self._connect() as conn:
            return execute(conn, query, *args)

    @contextmanager
    def _connect(self):
        """Connect to Galera node.

        :return: MySQL connection to the Galera node
        :rtype: Connection
        """
        connection = pymysql.connect(  # pylint: disable=duplicate-code
            host=self.host,
            port=self.port,
            user=self.user,
            passwd=<PASSWORD>,
            cursorclass=DictCursor
        )
        yield connection
        connection.close()

    def _status(self, status_variable):
        """Return value of a variable from SHOW GLOBAL STATUS."""
        result = self.execute('SHOW GLOBAL STATUS LIKE %s', status_variable)
        return result[0]['Value']

    def __eq__(self, other):
        # Compare against other.port (the original compared self.port to
        # itself, which made the port check a no-op).
        return self.host == other.host and self.port == other.port

    def __ne__(self, other):
        return not self.__eq__(other)

    def __str__(self):
        return "%s:%d" % (self.host, self.port)
This is a", "wsrep_cluster_name(self): \"\"\"The logical cluster name for the node.\"\"\" result = self.execute(\"SELECT @@wsrep_cluster_name\") return", "cursorclass=DictCursor ) yield connection connection.close() def _status(self, status_variable): \"\"\"Return value of a variable", "is, whether the node is part of a ``PRIMARY`` or ``NON_PRIMARY`` component.\"\"\" return", "proxysql_tools import execute class GaleraNodeState(object): # pylint: disable=too-few-public-methods \"\"\"State of Galera node http://bit.ly/2r1tUGB", "self._status('wsrep_cluster_status') @property def wsrep_local_state(self): \"\"\"Internal Galera Cluster FSM state number.\"\"\" return int(self._status('wsrep_local_state')) @property", "import DictCursor from proxysql_tools import execute class GaleraNodeState(object): # pylint: disable=too-few-public-methods \"\"\"State of", "host, port=3306, user='root', password=<PASSWORD>): self.host = host self.port = port self.user = user", "from proxysql_tools import execute class GaleraNodeState(object): # pylint: disable=too-few-public-methods \"\"\"State of Galera node", "this cluster component. That is, whether the node is part of a ``PRIMARY``", "Query to execute. :type query: str :return: Query result or None if the", "\"\"\"Module describes GaleraNode class\"\"\" from contextlib import contextmanager import pymysql from pymysql.cursors import", "execute(conn, query, *args) @contextmanager def _connect(self): \"\"\"Connect to Galera node :return: MySQL connection", "self.port = port self.user = user self.password = password @property def wsrep_cluster_state_uuid(self): \"\"\"Provides", "This is a unique identifier for the current state of the cluster and", "\"\"\"State of Galera node http://bit.ly/2r1tUGB \"\"\" PRIMARY = 1 JOINER = 2 JOINED", "= 4 DONOR = 5 class GaleraNode(object): \"\"\" GaleraNode class describes a single", "and the sequence of changes it undergoes. 
\"\"\" return self._status('wsrep_cluster_state_uuid') @property def wsrep_cluster_status(self):", "Galera node http://bit.ly/2r1tUGB \"\"\" PRIMARY = 1 JOINER = 2 JOINED = 3", "DONOR = 5 class GaleraNode(object): \"\"\" GaleraNode class describes a single node in", "\"\"\"Connect to Galera node :return: MySQL connection to the Galera node :rtype: Connection", "= 5 class GaleraNode(object): \"\"\" GaleraNode class describes a single node in Galera", "component. That is, whether the node is part of a ``PRIMARY`` or ``NON_PRIMARY``", "Connection \"\"\" connection = pymysql.connect( # pylint: disable=duplicate-code host=self.host, port=self.port, user=self.user, passwd=<PASSWORD>, cursorclass=DictCursor", "connection = pymysql.connect( # pylint: disable=duplicate-code host=self.host, port=self.port, user=self.user, passwd=<PASSWORD>, cursorclass=DictCursor ) yield", "dict \"\"\" with self._connect() as conn: return execute(conn, query, *args) @contextmanager def _connect(self):", "with self._connect() as conn: return execute(conn, query, *args) @contextmanager def _connect(self): \"\"\"Connect to", "pymysql.connect( # pylint: disable=duplicate-code host=self.host, port=self.port, user=self.user, passwd=<PASSWORD>, cursorclass=DictCursor ) yield connection connection.close()", "self.host == other.host and self.port == self.port def __ne__(self, other): return not self.__eq__(other)", "of the cluster and the sequence of changes it undergoes. \"\"\" return self._status('wsrep_cluster_state_uuid')", ":param port: port to connect to. :param user: MySQL username to connect to", "the cluster and the sequence of changes it undergoes. \"\"\" return self._status('wsrep_cluster_state_uuid') @property", "a ``PRIMARY`` or ``NON_PRIMARY`` component.\"\"\" return self._status('wsrep_cluster_status') @property def wsrep_local_state(self): \"\"\"Internal Galera Cluster", "result. 
:rtype: dict \"\"\" with self._connect() as conn: return execute(conn, query, *args) @contextmanager", "execute(self, query, *args): \"\"\"Execute query in Galera Node. :param query: Query to execute.", "is not supposed to return result. :rtype: dict \"\"\" with self._connect() as conn:", "to execute. :type query: str :return: Query result or None if the query", "password @property def wsrep_cluster_state_uuid(self): \"\"\"Provides the current State UUID. This is a unique", "whether the node is part of a ``PRIMARY`` or ``NON_PRIMARY`` component.\"\"\" return self._status('wsrep_cluster_status')", "\"\"\" PRIMARY = 1 JOINER = 2 JOINED = 3 SYNCED = 4", "# pylint: disable=too-few-public-methods \"\"\"State of Galera node http://bit.ly/2r1tUGB \"\"\" PRIMARY = 1 JOINER", "= 1 JOINER = 2 JOINED = 3 SYNCED = 4 DONOR =", "@property def wsrep_cluster_name(self): \"\"\"The logical cluster name for the node.\"\"\" result = self.execute(\"SELECT", "def wsrep_cluster_name(self): \"\"\"The logical cluster name for the node.\"\"\" result = self.execute(\"SELECT @@wsrep_cluster_name\")", "def wsrep_cluster_status(self): \"\"\"Status of this cluster component. That is, whether the node is", "query: Query to execute. :type query: str :return: Query result or None if", "node. :param port: port to connect to. :param user: MySQL username to connect", "to return result. 
:rtype: dict \"\"\" with self._connect() as conn: return execute(conn, query,", "return execute(conn, query, *args) @contextmanager def _connect(self): \"\"\"Connect to Galera node :return: MySQL", "return self.host == other.host and self.port == self.port def __ne__(self, other): return not", "class GaleraNodeState(object): # pylint: disable=too-few-public-methods \"\"\"State of Galera node http://bit.ly/2r1tUGB \"\"\" PRIMARY =", "\"\"\"Internal Galera Cluster FSM state number.\"\"\" return int(self._status('wsrep_local_state')) @property def wsrep_cluster_name(self): \"\"\"The logical", "user='root', password=<PASSWORD>): self.host = host self.port = port self.user = user self.password =", "unique identifier for the current state of the cluster and the sequence of", "current State UUID. This is a unique identifier for the current state of", "node.\"\"\" result = self.execute(\"SELECT @@wsrep_cluster_name\") return result[0]['@@wsrep_cluster_name'] def execute(self, query, *args): \"\"\"Execute query", "of changes it undergoes. \"\"\" return self._status('wsrep_cluster_state_uuid') @property def wsrep_cluster_status(self): \"\"\"Status of this", "== self.port def __ne__(self, other): return not self.__eq__(other) def __str__(self): return \"%s:%d\" %", "connect to. :param user: MySQL username to connect to the node. 
:param password:", "to Galera node :return: MySQL connection to the Galera node :rtype: Connection \"\"\"", "LIKE %s', status_variable) return result[0]['Value'] def __eq__(self, other): return self.host == other.host and", "GaleraNode class\"\"\" from contextlib import contextmanager import pymysql from pymysql.cursors import DictCursor from", "or ``NON_PRIMARY`` component.\"\"\" return self._status('wsrep_cluster_status') @property def wsrep_local_state(self): \"\"\"Internal Galera Cluster FSM state", "name for the node.\"\"\" result = self.execute(\"SELECT @@wsrep_cluster_name\") return result[0]['@@wsrep_cluster_name'] def execute(self, query,", "a single node in Galera Cluster. :param host: hostname of the node. :param", "yield connection connection.close() def _status(self, status_variable): \"\"\"Return value of a variable from SHOW", "def _connect(self): \"\"\"Connect to Galera node :return: MySQL connection to the Galera node", "= pymysql.connect( # pylint: disable=duplicate-code host=self.host, port=self.port, user=self.user, passwd=<PASSWORD>, cursorclass=DictCursor ) yield connection", "__init__(self, host, port=3306, user='root', password=<PASSWORD>): self.host = host self.port = port self.user =", "return self._status('wsrep_cluster_status') @property def wsrep_local_state(self): \"\"\"Internal Galera Cluster FSM state number.\"\"\" return int(self._status('wsrep_local_state'))", "Galera node :return: MySQL connection to the Galera node :rtype: Connection \"\"\" connection", "user self.password = password @property def wsrep_cluster_state_uuid(self): \"\"\"Provides the current State UUID. This", "of a variable from SHOW GLOBAL STATUS\"\"\" result = self.execute('SHOW GLOBAL STATUS LIKE", "@property def wsrep_cluster_state_uuid(self): \"\"\"Provides the current State UUID. This is a unique identifier", "class describes a single node in Galera Cluster. :param host: hostname of the", "query is not supposed to return result. 
:rtype: dict \"\"\" with self._connect() as", "query, *args) @contextmanager def _connect(self): \"\"\"Connect to Galera node :return: MySQL connection to", "Galera Node. :param query: Query to execute. :type query: str :return: Query result", "\"\"\"Execute query in Galera Node. :param query: Query to execute. :type query: str", "FSM state number.\"\"\" return int(self._status('wsrep_local_state')) @property def wsrep_cluster_name(self): \"\"\"The logical cluster name for", "import execute class GaleraNodeState(object): # pylint: disable=too-few-public-methods \"\"\"State of Galera node http://bit.ly/2r1tUGB \"\"\"", ":param host: hostname of the node. :param port: port to connect to. :param", "# pylint: disable=duplicate-code host=self.host, port=self.port, user=self.user, passwd=<PASSWORD>, cursorclass=DictCursor ) yield connection connection.close() def", "query in Galera Node. :param query: Query to execute. :type query: str :return:", "node. :param password: <PASSWORD>. \"\"\" def __init__(self, host, port=3306, user='root', password=<PASSWORD>): self.host =", "wsrep_cluster_state_uuid(self): \"\"\"Provides the current State UUID. This is a unique identifier for the", "and self.port == self.port def __ne__(self, other): return not self.__eq__(other) def __str__(self): return", "return self._status('wsrep_cluster_state_uuid') @property def wsrep_cluster_status(self): \"\"\"Status of this cluster component. That is, whether", "STATUS LIKE %s', status_variable) return result[0]['Value'] def __eq__(self, other): return self.host == other.host", "the node is part of a ``PRIMARY`` or ``NON_PRIMARY`` component.\"\"\" return self._status('wsrep_cluster_status') @property", "@@wsrep_cluster_name\") return result[0]['@@wsrep_cluster_name'] def execute(self, query, *args): \"\"\"Execute query in Galera Node. 
:param", "node http://bit.ly/2r1tUGB \"\"\" PRIMARY = 1 JOINER = 2 JOINED = 3 SYNCED", "self.execute('SHOW GLOBAL STATUS LIKE %s', status_variable) return result[0]['Value'] def __eq__(self, other): return self.host", "to the Galera node :rtype: Connection \"\"\" connection = pymysql.connect( # pylint: disable=duplicate-code", "That is, whether the node is part of a ``PRIMARY`` or ``NON_PRIMARY`` component.\"\"\"", "self._connect() as conn: return execute(conn, query, *args) @contextmanager def _connect(self): \"\"\"Connect to Galera", "<PASSWORD>. \"\"\" def __init__(self, host, port=3306, user='root', password=<PASSWORD>): self.host = host self.port =", "other.host and self.port == self.port def __ne__(self, other): return not self.__eq__(other) def __str__(self):", "single node in Galera Cluster. :param host: hostname of the node. :param port:", "the node. :param password: <PASSWORD>. \"\"\" def __init__(self, host, port=3306, user='root', password=<PASSWORD>): self.host", "node :return: MySQL connection to the Galera node :rtype: Connection \"\"\" connection =", "host self.port = port self.user = user self.password = password @property def wsrep_cluster_state_uuid(self):", "current state of the cluster and the sequence of changes it undergoes. 
\"\"\"", "for the current state of the cluster and the sequence of changes it", "cluster name for the node.\"\"\" result = self.execute(\"SELECT @@wsrep_cluster_name\") return result[0]['@@wsrep_cluster_name'] def execute(self,", "= self.execute(\"SELECT @@wsrep_cluster_name\") return result[0]['@@wsrep_cluster_name'] def execute(self, query, *args): \"\"\"Execute query in Galera", "_status(self, status_variable): \"\"\"Return value of a variable from SHOW GLOBAL STATUS\"\"\" result =", "disable=duplicate-code host=self.host, port=self.port, user=self.user, passwd=<PASSWORD>, cursorclass=DictCursor ) yield connection connection.close() def _status(self, status_variable):", "\"\"\"Return value of a variable from SHOW GLOBAL STATUS\"\"\" result = self.execute('SHOW GLOBAL", "connection connection.close() def _status(self, status_variable): \"\"\"Return value of a variable from SHOW GLOBAL", "*args) @contextmanager def _connect(self): \"\"\"Connect to Galera node :return: MySQL connection to the", "*args): \"\"\"Execute query in Galera Node. :param query: Query to execute. :type query:", "class\"\"\" from contextlib import contextmanager import pymysql from pymysql.cursors import DictCursor from proxysql_tools", "other): return self.host == other.host and self.port == self.port def __ne__(self, other): return", "self.password = password @property def wsrep_cluster_state_uuid(self): \"\"\"Provides the current State UUID. This is", ":param user: MySQL username to connect to the node. :param password: <PASSWORD>. \"\"\"", "\"\"\"Status of this cluster component. That is, whether the node is part of", "SYNCED = 4 DONOR = 5 class GaleraNode(object): \"\"\" GaleraNode class describes a", "connection to the Galera node :rtype: Connection \"\"\" connection = pymysql.connect( # pylint:", "class GaleraNode(object): \"\"\" GaleraNode class describes a single node in Galera Cluster. :param", "UUID. 
This is a unique identifier for the current state of the cluster", "identifier for the current state of the cluster and the sequence of changes", "Cluster FSM state number.\"\"\" return int(self._status('wsrep_local_state')) @property def wsrep_cluster_name(self): \"\"\"The logical cluster name", "the current State UUID. This is a unique identifier for the current state", "\"\"\" with self._connect() as conn: return execute(conn, query, *args) @contextmanager def _connect(self): \"\"\"Connect", "port=3306, user='root', password=<PASSWORD>): self.host = host self.port = port self.user = user self.password", "self.port == self.port def __ne__(self, other): return not self.__eq__(other) def __str__(self): return \"%s:%d\"", "port=self.port, user=self.user, passwd=<PASSWORD>, cursorclass=DictCursor ) yield connection connection.close() def _status(self, status_variable): \"\"\"Return value", "None if the query is not supposed to return result. :rtype: dict \"\"\"", "DictCursor from proxysql_tools import execute class GaleraNodeState(object): # pylint: disable=too-few-public-methods \"\"\"State of Galera", "JOINED = 3 SYNCED = 4 DONOR = 5 class GaleraNode(object): \"\"\" GaleraNode", "``PRIMARY`` or ``NON_PRIMARY`` component.\"\"\" return self._status('wsrep_cluster_status') @property def wsrep_local_state(self): \"\"\"Internal Galera Cluster FSM", "or None if the query is not supposed to return result. :rtype: dict", "5 class GaleraNode(object): \"\"\" GaleraNode class describes a single node in Galera Cluster.", "username to connect to the node. :param password: <PASSWORD>. 
\"\"\" def __init__(self, host,", "1 JOINER = 2 JOINED = 3 SYNCED = 4 DONOR = 5", "%s', status_variable) return result[0]['Value'] def __eq__(self, other): return self.host == other.host and self.port", "pymysql from pymysql.cursors import DictCursor from proxysql_tools import execute class GaleraNodeState(object): # pylint:", "Galera node :rtype: Connection \"\"\" connection = pymysql.connect( # pylint: disable=duplicate-code host=self.host, port=self.port,", ") yield connection connection.close() def _status(self, status_variable): \"\"\"Return value of a variable from", "to connect to the node. :param password: <PASSWORD>. \"\"\" def __init__(self, host, port=3306,", "@property def wsrep_cluster_status(self): \"\"\"Status of this cluster component. That is, whether the node", "cluster and the sequence of changes it undergoes. \"\"\" return self._status('wsrep_cluster_state_uuid') @property def", "cluster component. That is, whether the node is part of a ``PRIMARY`` or", "str :return: Query result or None if the query is not supposed to", "Cluster. :param host: hostname of the node. :param port: port to connect to.", "to connect to. :param user: MySQL username to connect to the node. :param", ":param query: Query to execute. :type query: str :return: Query result or None", "contextlib import contextmanager import pymysql from pymysql.cursors import DictCursor from proxysql_tools import execute", "2 JOINED = 3 SYNCED = 4 DONOR = 5 class GaleraNode(object): \"\"\"", "host: hostname of the node. :param port: port to connect to. :param user:", "GLOBAL STATUS LIKE %s', status_variable) return result[0]['Value'] def __eq__(self, other): return self.host ==", "return result. :rtype: dict \"\"\" with self._connect() as conn: return execute(conn, query, *args)", "not supposed to return result. 
:rtype: dict \"\"\" with self._connect() as conn: return", "port self.user = user self.password = password @property def wsrep_cluster_state_uuid(self): \"\"\"Provides the current", "to. :param user: MySQL username to connect to the node. :param password: <PASSWORD>.", "of this cluster component. That is, whether the node is part of a", "def __eq__(self, other): return self.host == other.host and self.port == self.port def __ne__(self,", "sequence of changes it undergoes. \"\"\" return self._status('wsrep_cluster_state_uuid') @property def wsrep_cluster_status(self): \"\"\"Status of", "as conn: return execute(conn, query, *args) @contextmanager def _connect(self): \"\"\"Connect to Galera node", "user=self.user, passwd=<PASSWORD>, cursorclass=DictCursor ) yield connection connection.close() def _status(self, status_variable): \"\"\"Return value of", "self.host = host self.port = port self.user = user self.password = password @property", "connect to the node. :param password: <PASSWORD>. \"\"\" def __init__(self, host, port=3306, user='root',", "of the node. :param port: port to connect to. :param user: MySQL username", "execute. :type query: str :return: Query result or None if the query is", "wsrep_local_state(self): \"\"\"Internal Galera Cluster FSM state number.\"\"\" return int(self._status('wsrep_local_state')) @property def wsrep_cluster_name(self): \"\"\"The", "from SHOW GLOBAL STATUS\"\"\" result = self.execute('SHOW GLOBAL STATUS LIKE %s', status_variable) return", "if the query is not supposed to return result. :rtype: dict \"\"\" with", "the query is not supposed to return result. 
:rtype: dict \"\"\" with self._connect()", "3 SYNCED = 4 DONOR = 5 class GaleraNode(object): \"\"\" GaleraNode class describes", "host=self.host, port=self.port, user=self.user, passwd=<PASSWORD>, cursorclass=DictCursor ) yield connection connection.close() def _status(self, status_variable): \"\"\"Return", "node :rtype: Connection \"\"\" connection = pymysql.connect( # pylint: disable=duplicate-code host=self.host, port=self.port, user=self.user," ]
import json
import re

from nltk.stem import PorterStemmer
from sklearn.preprocessing import LabelEncoder

# The import that originally provided STOP_WORDS is lost in the source;
# the starred import below is assumed to supply STOP_WORDS, get_keywords,
# get_position, get_text_by_url, get_link_line_type, serialize_outpath
# and the comment-type constants used here.
from src.comment_analysis.java_re import *


def get_line(code, comment_line, comment_type):
    code = code.splitlines()
    try:
        if comment_type == line_comment:  # constant assumed from java_re
            if not re.match(r"^[\s]*//.*", code[comment_line - 1]):
                # if the line doesn't start as a comment, the comment
                # refers to this line
                if not re.match(r"^[\s]*/\*.*", code[comment_line - 1]):
                    return code[comment_line - 1]
            if not re.match(r"^[\s]*}?[\s]*(else|try|finally)?[\s]*{?[\s]*//.*[\s]*$",
                            code[comment_line - 1]):
                # if the line isn't just brackets and some keywords, the
                # focus line is the comment_line
                return code[comment_line - 1]
            i = 0
            while re.match(r"^[\s]*//.*", code[comment_line + i]) \
                    or re.match(r"^[\s]*$", code[comment_line + i]) \
                    or re.match(r"[\s]*[^}{](try|else|finally)[\s]*{?",
                                code[comment_line + i]):
                # while the line is a comment or is blank, ignore it
                i += 1
            if re.match(r"^[\s]*}[\s]*.*", code[comment_line + i]) \
                    or re.match(r"[\s]*(try|else|finally)[\s]*{?",
                                code[comment_line + i]):
                # if this matches, probably the comment refers to the line before
                i = -2
                while re.match(r"^[\s]*//.*", code[comment_line + i]) \
                        or re.match(r"^[\s]*$", code[comment_line + i]) \
                        or re.match(r"^[\s]*/\*.*", code[comment_line + i]) \
                        or re.match(r"^\*", code[comment_line + i]) \
                        or re.match(r"^[\s]*\*/.*", code[comment_line + i]):
                    # while the line is a comment or is blank, ignore it
                    i -= 1
            return code[comment_line + i]  # comment refers to that
            # r"^[\s]*}?[\s]*(else|try|finally)?[\s]*{?[\s]*.*$"
        else:  # block or javadoc
            if comment_line >= len(code) - 1:
                return code[comment_line - 2]
            i = 0
            if not re.match(r"^[\s]*.*\*/", code[comment_line - 1]):
                while not re.match(r"^[\s]*\*/", code[comment_line + i]):
                    i += 1
                i += 1
            while re.match(r"^[\s]*//.*", code[comment_line + i]):
                # while the line starts as a comment, ignore it. I do this
                # because they use multiple line comments to simulate a block
                i += 1
            while re.match(r"^[\s]*$", code[comment_line + i]) \
                    or re.match(r"^[\s]*@[^\s]*[\s]*$", code[comment_line + i]) \
                    or re.match(r"^[\s]*//.*", code[comment_line + i]) \
                    or re.match(r"[\s]*[^}{](try|else|finally)[\s]*{?",
                                code[comment_line + i]):
                # while the line is a comment, or is blank, or is an
                # annotation, ignore it
                i += 1
            if re.match(r"^[\s]*}.*", code[comment_line + i]):
                # if this matches, the block is empty so I take the first
                # non-comment non-empty line before the comment
                i = -2
                while re.match(r"^[\s]*//.*", code[comment_line + i]) \
                        or re.match(r"^[\s]*$", code[comment_line + i]) \
                        or re.match(r"^[\s]*/\*.*", code[comment_line + i]) \
                        or re.match(r"^\*", code[comment_line + i]) \
                        or re.match(r"^[\s]*\*/.*", code[comment_line + i]):
                    i -= 1
            return code[comment_line + i]
    except IndexError:
        return ""


def get_positions(lines=None, set='train'):
    comment_type = 0
    text_link = 1
    comment_line = 2
    positions = []
    data = get_link_line_type(set=set)
    if lines is None:
        lines = get_lines(set=set)
    i = 0
    for row in data:
        # print(row[comment_line], row[comment_type],
        #       row[text_link] + "#L" + str(row[comment_line]))
        focus_line = lines[i]
        # print(focus_line)
        p = get_position(focus_line)
        positions.append(p)
        # print(p)
        i += 1
    return positions


def get_positions_encoded(lines=None, set='train'):
    if lines is None:
        positions = get_positions(set=set)
    else:
        positions = get_positions(lines, set=set)
    le = LabelEncoder()
    return le.fit_transform(positions)


def get_lines(serialized=True, serialize=False, set='train'):
    comment_type = 0
    text_link = 1
    comment_line = 2
    if serialized:
        x = open(serialize_outpath + 'serialized_' + set + '.json', 'r')
        return json.loads(x.read())
    lines = []
    data = get_link_line_type(set=set)
    for row in data:
        code = get_text_by_url(row[text_link])
        focus_line = get_line(code, row[comment_line], row[comment_type])
        lines.append(focus_line)
    if serialize:
        x = open(serialize_outpath + 'serialized_' + set + '.json', 'w')
        x.write(json.dumps(lines))
    return lines


def get_code_words(stemming=True, rem_keyws=True, lines=None, set='train'):
    if lines is None:
        lines = get_lines(set=set, serialized=False, serialize=True)
    words = []
    for line in lines:  # loop body reconstructed; the original is lost
        words.append(word_extractor(line, stemming, rem_keyws))
    return words


def word_extractor(string, stemming=True, rem_keyws=True):
    string = remove_line_comment(string)
    string = remove_block_comment(string)
    splitted = code_split(string)
    words = []
    for part in splitted:
        camel_case_parts = camel_case_split(part)
        for camel in camel_case_parts:
            words.append(camel.lower())
    if stemming and rem_keyws:
        return stem(remove_keywords(words))
    elif stemming:
        return stem(words)
    else:
        return remove_keywords(words)


def remove_keywords(words):
    keywords = get_keywords()
    non_keywords = []
    for word in words:
        if word not in keywords:
            non_keywords.append(word)
    return non_keywords


def stem(words):
    stemmer = PorterStemmer()
    stemmed = []
    for token in words:
        stemmed.append(stemmer.stem(token))
    return stemmed


def camel_case_split(string):
    if not string:
        return string
    words = [[string[0]]]
    for c in string[1:]:
        if words[-1][-1].islower() and c.isupper():
            words.append(list(c))
        else:
            words[-1].append(c)
    return [''.join(word) for word in words]


def remove_line_comment(string):
    in_string = False
    escape = False
    comment = False  # initialization reconstructed
    i = 0
    for char in string:
        if char == '"':
            if in_string is True:
                if escape is False:
                    in_string = False
                else:
                    escape = False
            else:
                in_string = True
        elif char == '\\':
            if in_string is True:
                escape = True
        elif char == '/':
            if comment is False:
                comment = True
            else:
                return string[:i]
        i += 1
    return string


def remove_block_comment(string):
    # control flow partially reconstructed from fragments
    in_string = False
    escape = False
    maybe_block = False
    block = False
    found = False
    init_index = 0
    end_index = 0
    i = 0
    for char in string:
        if char == '*':
            if not in_string:
                if maybe_block is False:
                    if block is True:
                        maybe_block = True
                else:
                    block = True
        if char == '"':
            if in_string is True:
                if escape is False:
                    in_string = False
                else:
                    escape = False
            else:
                in_string = True
        elif char == '\\':
            if in_string is True:
                escape = True
        elif char == '/':
            if not in_string:
                if maybe_block is True:
                    found = True
                    end_index = i
                    break
                else:
                    maybe_block = True
                    init_index = i
        i += 1
    if found is True:
        return string[:init_index] + string[end_index + 1:]
    return string


def code_split(string):
    words = re.split(r'\\n|\?|&|\\|;|,|\*|\(|\)|\{|\s|\.|/|_|:|=|<|>|\||!|"|\+|-|\[|\]|\'|\}|\^|#|%', string)
    words = list(filter(lambda a: a != "", words))
    return words


def remove_stopwords(tokens):
    stop_words = STOP_WORDS
    relevant_words = []
    for token in tokens:
        if token not in stop_words:
            relevant_words.append(token)
    return relevant_words


def tokenizer(string, rem_stop=True, rem_kws=True, stemming=True):
    # signature reconstructed: parameter names come from the body,
    # defaults are assumed
    tokens = code_split(string)
    new_tokens = []
    for token in tokens:
        for t in camel_case_split(token):
            new_tokens.append(t.lower())
    if rem_stop:
        new_tokens = remove_stopwords(new_tokens)
    if rem_kws:
        new_tokens = remove_keywords(new_tokens)
    if stemming:
        new_tokens = stem(new_tokens)
    return new_tokens


if __name__ == '__main__':
    # code = open('../testers/test.txt', 'r').read()
    # code_parser(code, 151, javadoc)
    print(get_lines(serialized=False, serialize=True))
    print('first')
    # line_type_identifier("ciao")
    # code_parser3()
    # print(word_extractor("ciao mamma /*css rff*/"))
    # print(tokenizer("t<EMAIL>@<EMAIL> @param"))
    # print(camel_case_split("tuaMadre@QuellaTroia
i = -2 while re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"^[\\s]*$\", code[comment_line +", "part in splitted: camel_case_parts = camel_case_split(part) for camel in camel_case_parts: words.append(camel.lower()) if stemming", "the first non comment non empty line before the comment. i = -2", "not in_string: if maybe_block is True: if block is True: found = True", "= list(filter(lambda a: a != \"\", words)) return words def remove_stopwords(tokens): stop_words =", "+ \"#L\" + str(row[comment_line])) focus_line = lines[i] #print(focus_line) p = get_position(focus_line) positions.append(p) #print(p)", "not re.match(r\"^[\\s]*\\*/\", code[comment_line + i]): i += 1 i += 1 # while", "== '/': if not in_string: if maybe_block is True: if block is True:", "in_string: if maybe_block is True: if block is True: found = True end_index", "a comment, the comment refers to this line if not re.match(r\"^[\\s]*/\\*.*\", code[comment_line -", "new_tokens = stem(new_tokens) return new_tokens if __name__ == '__main__': # code = open('../testers/test.txt',", "in_string = True elif char == '\\\\': if in_string is True: escape =", "else: block = True if char == '\"': if in_string is True: if", "serialize_outpath from nltk.stem.porter import * from spacy.lang.en.stop_words import STOP_WORDS from src.comment_analysis.java_re import *", "def get_code_words(stemming=True, rem_keyws=True, lines=None, set='train'): if lines is None: lines = get_lines(set=set, serialized=False,", "return code[comment_line - 1] i = 0 while re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or", "i take the first non comment non empty line before the comment. 
i", "lines is None: lines = get_lines(set=set) i = 0 for row in data:", "maybe_block = True else: block = True if char == '\"': if in_string", "2 positions = [] data = get_link_line_type(set=set) if lines is None: lines =", "return stem(words) else: return remove_keywords(words) def remove_keywords(words): keywords = get_keywords() non_keywords = []", "= True else: block = True if char == '\"': if in_string is", "token in tokens: for t in camel_case_split(token): new_tokens.append(t.lower()) if rem_stop: new_tokens = remove_stopwords(new_tokens)", "# get_positions() # line_type_identifier(\"ciao\") # code_parser3() # print(word_extractor(\"ciao mamma /*css rff*/\")) # print(tokenizer(\"t<EMAIL>@<EMAIL>", "or javadoc # if the line doesn't start as a comment, the comment", "before the comment. i = -2 while re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"^[\\s]*$\",", "list(filter(lambda a: a != \"\", words)) return words def remove_stopwords(tokens): stop_words = STOP_WORDS", "= False init_index = 0 end_index = 0 i = 0 for char", "open('../testers/test.txt', 'r').read() # code_parser(code, 151, javadoc) print(get_lines(serialized=False, serialize=True)) print('first') print(get_lines(serialized=True, serialize=False)) # get_positions()", "code[comment_line - 2] i = 0 if not re.match(r\"^[\\s]*.*\\*/\", code[comment_line - 1]): while", "if char == '\"': if in_string is True: if escape is False: in_string", "code.splitlines() try: if comment_type == line: if not re.match(r\"^[\\s]*//.*\", code[ comment_line - 1]):", "p = get_position(focus_line) positions.append(p) #print(p) i += 1 return positions def get_positions_encoded(lines=None, set='train'):", "i += 1 return positions def get_positions_encoded(lines=None, set='train'): if lines is None: positions", "le.fit_transform(positions) def get_lines(serialized=True, serialize=False, set='train'): if serialized: x = open(serialize_outpath + 'serialized_' +", "if serialized: x = open(serialize_outpath + 
'serialized_' + set +'.json', 'r').read() return json.loads(x)", "focus_line = get_line(code, row[comment_line], row[comment_type]) lines.append(focus_line) if serialize: x = open(serialize_outpath + 'serialized_'", "get_link_line_type(set=set) lines = [] for row in data: code = get_text_by_url(row[text_link]) focus_line =", "I do this because they use multiple line comment to simulate a block", "src.comment_analysis.url_utils import get_text_by_url from src.csv.csv_utils import get_link_line_type, get_keywords from src.keys import line, serialize_outpath", "set='train'): if lines is None: positions = get_positions(set=set) else: positions = get_positions(lines, set=set)", "if not string: return string words = [[string[0]]] for c in string[1:]: if", "i += 1 # if this matches, probabily the comment refers to the", "return string def remove_block_comment(string): in_string = False escape = False block = False", "if comment_line >= len(code) - 1: return code[comment_line - 2] i = 0", "data = get_link_line_type(set=set) if lines is None: lines = get_lines(set=set) i = 0", "= get_link_line_type(set=set) if lines is None: lines = get_lines(set=set) i = 0 for", "in camel_case_parts: words.append(camel.lower()) if stemming and rem_keyws: return stem(remove_keywords(words)) elif stemming: return stem(words)", "i = 0 for row in data: #print(row[comment_line], row[comment_type], row[text_link] + \"#L\" +", "set=set) le = LabelEncoder() return le.fit_transform(positions) def get_lines(serialized=True, serialize=False, set='train'): if serialized: x", "comment, ignore it. 
I do this because they use multiple line comment to", "found is True: return string[:init_index] + string[end_index + 1:] return string def code_split(string):", "= remove_keywords(new_tokens) if stemming: new_tokens = stem(new_tokens) return new_tokens if __name__ == '__main__':", "in data: code = get_text_by_url(row[text_link]) focus_line = get_line(code, row[comment_line], row[comment_type]) lines.append(focus_line) if serialize:", "refers to this line if not re.match(r\"^[\\s]*/\\*.*\", code[comment_line - 1]): return code[comment_line -", "i]): i = -2 # while the line is a comment or is", "or re.match(r\"^[\\s]*$\", code[comment_line + i]) or re.match(r\"^[\\s]*/\\*.*\", code[comment_line + i]) or re.match(r\"^\\*\", code[comment_line", "return json.loads(x) comment_type = 0 text_link = 1 comment_line = 2 data =", "[[string[0]]] for c in string[1:]: if words[-1][-1].islower() and c.isupper(): words.append(list(c)) else: words[-1].append(c) return", "or re.match(r\"^[\\s]*\\*/.*\", code[comment_line + i]): i -= 1 return code[comment_line + i] except", "string[:init_index] + string[end_index + 1:] return string def code_split(string): words = re.split(r'\\\\n|\\?|&|\\\\|;|,|\\*|\\(|\\)|\\{|\\s|\\.|/|_|:|=|<|>|\\||!|\"|\\+|-|\\[|\\]|\\'|\\}|\\^|#|%', string)", "comment or is blank, ignore it i -= 1 return code[comment_line + i]", "if in_string is True: if escape is False: in_string = False else: escape", "+ i]) or re.match(r\"[\\s]*[^}{](try|else|finally)[\\s]*{?\", code[comment_line + i]): i += 1 # if this", "rem_keyws=True): string = remove_line_comment(string) string = remove_block_comment(string) splitted = code_split(string) words = []", "and rem_keyws: return stem(remove_keywords(words)) elif stemming: return stem(words) else: return remove_keywords(words) def remove_keywords(words):", "= [] for row in data: code = get_text_by_url(row[text_link]) focus_line = get_line(code, row[comment_line],", "= 2 data = get_link_line_type(set=set) lines = [] for row in 
data: code", "lines = get_lines(set=set, serialized=False, serialize=True) words = [] for line in lines: words.append(word_extractor(line,", "if maybe_block is False: if block is True: maybe_block = True else: block", "True: i += 1 comment = False elif escape is True: escape =", "for token in tokens: if token not in stop_words: relevant_words.append(token) return relevant_words def", "words: if word not in keywords: non_keywords.append(word) return non_keywords def stem(words): stemmer =", "is False: in_string = False else: escape = False else: in_string = True", "in string[1:]: if words[-1][-1].islower() and c.isupper(): words.append(list(c)) else: words[-1].append(c) return [''.join(word) for word", "!= \"\", words)) return words def remove_stopwords(tokens): stop_words = STOP_WORDS relevant_words = []", "-2 while re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"^[\\s]*$\", code[comment_line + i]) or re.match(", "def stem(words): stemmer = PorterStemmer() stemmed = [] for token in words: stemmed.append(stemmer.stem(token))", "i += 1 i += 1 # while the line starts as a", "comment_line, comment_type): code = code.splitlines() try: if comment_type == line: if not re.match(r\"^[\\s]*//.*\",", "in_string = False escape = False block = False maybe_block = False found", "for t in camel_case_split(token): new_tokens.append(t.lower()) if rem_stop: new_tokens = remove_stopwords(new_tokens) if rem_kws: new_tokens", "rem_stop: new_tokens = remove_stopwords(new_tokens) if rem_kws: new_tokens = remove_keywords(new_tokens) if stemming: new_tokens =", "comment or is blank, or is an annotation, ignore it while re.match(r\"^[\\s]*$\", code[comment_line", "probabily the comment refers to the line before if re.match(r\"^[\\s]*}[\\s]*.*\", code[comment_line + i])", "c.isupper(): words.append(list(c)) else: words[-1].append(c) return [''.join(word) for word in words] def remove_line_comment(string): in_string", "text_link = 1 comment_line = 2 positions = [] data = 
get_link_line_type(set=set) if", "or re.match(r\"[\\s]*[^}{](try|else|finally)[\\s]*{?\", code[comment_line + i]): # while the line starts as a comment,", "i = 0 for char in string: if char == '*': if not", "[] for token in words: stemmed.append(stemmer.stem(token)) return stemmed def camel_case_split(string): if not string:", "code[comment_line + i]) or re.match(r\"[\\s]*(try|else|finally)[\\s]*{?\", code[ comment_line + i]): # if this matches,", "positions = get_positions(lines, set=set) le = LabelEncoder() return le.fit_transform(positions) def get_lines(serialized=True, serialize=False, set='train'):", "maybe_block = False found = False init_index = 0 end_index = 0 i", "for token in words: stemmed.append(stemmer.stem(token)) return stemmed def camel_case_split(string): if not string: return", "if this matches, probabily the comment refers to the line before if re.match(r\"^[\\s]*}[\\s]*.*\",", "string def remove_block_comment(string): in_string = False escape = False block = False maybe_block", "return new_tokens if __name__ == '__main__': # code = open('../testers/test.txt', 'r').read() # code_parser(code,", "print(get_lines(serialized=True, serialize=False)) # get_positions() # line_type_identifier(\"ciao\") # code_parser3() # print(word_extractor(\"ciao mamma /*css rff*/\"))", "is blank, ignore it i -= 1 return code[comment_line + i] # comment", "code[comment_line - 1]): while not re.match(r\"^[\\s]*\\*/\", code[comment_line + i]): i += 1 i", "the comment refers to this line if not re.match(r\"^[\\s]*}?[\\s]*(else|try|finally)?[\\s]*{?[\\s]*//.*[\\s]*$\", code[ comment_line - 1]):", "if this matches, the block is empty so i take the first non", "re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"[\\s]*[^}{](try|else|finally)[\\s]*{?\", code[comment_line + i]): i += 1 #", "= get_lines(set=set, serialized=False, serialize=True) words = [] for line in lines: words.append(word_extractor(line, stemming,", "camel_case_split(part) for camel in 
camel_case_parts: words.append(camel.lower()) if stemming and rem_keyws: return stem(remove_keywords(words)) elif", "is True: if escape is False: in_string = False else: escape = False", "comment is True: i += 1 comment = False elif escape is True:", "new_tokens = [] for token in tokens: for t in camel_case_split(token): new_tokens.append(t.lower()) if", "= True elif char == '/': if comment is False: comment = True", "is True: escape = False if comment is False: i += 1 return", "+ i]) or re.match(r\"[\\s]*[^}{](try|else|finally)[\\s]*{?\", code[comment_line + i]): # while the line starts as", "words.append(list(c)) else: words[-1].append(c) return [''.join(word) for word in words] def remove_line_comment(string): in_string =", "block is True: found = True end_index = i break else: maybe_block =", "for camel in camel_case_parts: words.append(camel.lower()) if stemming and rem_keyws: return stem(remove_keywords(words)) elif stemming:", "block = False maybe_block = False found = False init_index = 0 end_index", "re.match(r\"[\\s]*[^}{](try|else|finally)[\\s]*{?\", code[comment_line + i]): # while the line starts as a comment, ignore", "is True: i += 1 comment = False elif escape is True: escape", "comment non empty line before the comment. 
i = -2 while re.match(r\"^[\\s]*//.*\", code[comment_line", "r\"^[\\s]*/\\*.*\", code[comment_line + i]) or re.match(r\"^\\*\", code[comment_line + i]) or re.match( r\"^[\\s]*\\*/.*\", code[comment_line", "new_tokens = remove_keywords(new_tokens) if stemming: new_tokens = stem(new_tokens) return new_tokens if __name__ ==", "if not re.match(r\"^[\\s]*.*\\*/\", code[comment_line - 1]): while not re.match(r\"^[\\s]*\\*/\", code[comment_line + i]): i", "relevant_words def tokenizer(string, rem_stop=False, stemming=False, rem_kws=False): tokens = code_split(string) new_tokens = [] for", "-= 1 return code[comment_line + i] # comment refers to that # r\"^[\\s]*}?[\\s]*(else|try|finally)?[\\s]*{?[\\s]*.*$\"", "i]) or re.match(r\"^[\\s]*$\", code[comment_line + i]) or re.match(r\"[\\s]*[^}{](try|else|finally)[\\s]*{?\", code[comment_line + i]): # while", "get_positions() # line_type_identifier(\"ciao\") # code_parser3() # print(word_extractor(\"ciao mamma /*css rff*/\")) # print(tokenizer(\"t<EMAIL>@<EMAIL> @param\"))", "stem(remove_keywords(words)) elif stemming: return stem(words) else: return remove_keywords(words) def remove_keywords(words): keywords = get_keywords()", "i = 0 while re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"^[\\s]*$\", code[comment_line + i])", "blank, or is an annotation, ignore it while re.match(r\"^[\\s]*$\", code[comment_line + i]) or", "is None: lines = get_lines(set=set) i = 0 for row in data: #print(row[comment_line],", "def tokenizer(string, rem_stop=False, stemming=False, rem_kws=False): tokens = code_split(string) new_tokens = [] for token", "def remove_stopwords(tokens): stop_words = STOP_WORDS relevant_words = [] for token in tokens: if", "char == '\\\\': if in_string is True: escape = True elif char ==", "spacy.lang.en.stop_words import STOP_WORDS from src.comment_analysis.java_re import * def get_line(code, comment_line, comment_type): code =", "LabelEncoder from src.comment_analysis.url_utils import get_text_by_url 
from src.csv.csv_utils import get_link_line_type, get_keywords from src.keys import", "refers to the line before if re.match(r\"^[\\s]*}[\\s]*.*\", code[comment_line + i]) or re.match(r\"[\\s]*(try|else|finally)[\\s]*{?\", code[comment_line", "code[comment_line + i]) or re.match(r\"^[\\s]*\\*/.*\", code[comment_line + i]): i -= 1 return code[comment_line", "[] for part in splitted: camel_case_parts = camel_case_split(part) for camel in camel_case_parts: words.append(camel.lower())", "stemmed def camel_case_split(string): if not string: return string words = [[string[0]]] for c", "elif char == '/': if comment is False: comment = True else: return", "non empty line before the comment. i = -2 while re.match(r\"^[\\s]*//.*\", code[comment_line +", "if stemming: new_tokens = stem(new_tokens) return new_tokens if __name__ == '__main__': # code", "row[comment_line], row[comment_type]) lines.append(focus_line) if serialize: x = open(serialize_outpath + 'serialized_' + set +'.json',", "tokens: for t in camel_case_split(token): new_tokens.append(t.lower()) if rem_stop: new_tokens = remove_stopwords(new_tokens) if rem_kws:", "in stop_words: relevant_words.append(token) return relevant_words def tokenizer(string, rem_stop=False, stemming=False, rem_kws=False): tokens = code_split(string)", "def code_split(string): words = re.split(r'\\\\n|\\?|&|\\\\|;|,|\\*|\\(|\\)|\\{|\\s|\\.|/|_|:|=|<|>|\\||!|\"|\\+|-|\\[|\\]|\\'|\\}|\\^|#|%', string) words = list(filter(lambda a: a != \"\",", "'*': if not in_string: if maybe_block is False: if block is True: maybe_block", "= camel_case_split(part) for camel in camel_case_parts: words.append(camel.lower()) if stemming and rem_keyws: return stem(remove_keywords(words))", "0 if not re.match(r\"^[\\s]*.*\\*/\", code[comment_line - 1]): while not re.match(r\"^[\\s]*\\*/\", code[comment_line + i]):", "return remove_keywords(words) def remove_keywords(words): keywords = get_keywords() non_keywords = [] for word in", "if word not in keywords: 
non_keywords.append(word) return non_keywords def stem(words): stemmer = PorterStemmer()", "lines = get_lines(set=set) i = 0 for row in data: #print(row[comment_line], row[comment_type], row[text_link]", "1]): # if the line isnt just brackets and some keywords, the foucs", "= False escape = False block = False maybe_block = False found =", "or re.match(r\"[\\s]*(try|else|finally)[\\s]*{?\", code[ comment_line + i]): # if this matches, the block is", "camel_case_split(string): if not string: return string words = [[string[0]]] for c in string[1:]:", "= get_keywords() non_keywords = [] for word in words: if word not in", "if in_string is True: escape = True elif char == '/': if comment", "code[comment_line + i]): i += 1 # if this matches, probabily the comment", "return code[comment_line - 2] i = 0 if not re.match(r\"^[\\s]*.*\\*/\", code[comment_line - 1]):", "= True else: return string[:i] elif comment is True: i += 1 comment", "return code[comment_line - 1] if comment_line >= len(code) - 1: return code[comment_line -", "code[comment_line + i]): i -= 1 return code[comment_line + i] except IndexError: return", "if block is True: maybe_block = True else: block = True if char", "or re.match(r\"^[\\s]*@[^\\s]*[\\s]*$\", code[comment_line + i]) or re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"[\\s]*[^}{](try|else|finally)[\\s]*{?\", code[comment_line", "set='train'): if serialized: x = open(serialize_outpath + 'serialized_' + set +'.json', 'r').read() return", "or is blank, ignore it while re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"^[\\s]*$\", code[comment_line", "src.keys import line, serialize_outpath from nltk.stem.porter import * from spacy.lang.en.stop_words import STOP_WORDS from", "block = True if char == '\"': if in_string is True: if escape", "focus_line = lines[i] #print(focus_line) p = get_position(focus_line) positions.append(p) #print(p) i += 1 return", "- 1]): while not re.match(r\"^[\\s]*\\*/\", code[comment_line 
+ i]): i += 1 i +=", "try: if comment_type == line: if not re.match(r\"^[\\s]*//.*\", code[ comment_line - 1]): #", "comment refers to the line before if re.match(r\"^[\\s]*}[\\s]*.*\", code[comment_line + i]) or re.match(r\"[\\s]*(try|else|finally)[\\s]*{?\",", "re.match(r\"[\\s]*(try|else|finally)[\\s]*{?\", code[comment_line + i]): i = -2 # while the line is a", "stemming, rem_keyws)) return words def word_extractor(string, stemming=True, rem_keyws=True): string = remove_line_comment(string) string =", "matches, probabily the comment refers to the line before if re.match(r\"^[\\s]*}[\\s]*.*\", code[comment_line +", "= 0 for char in string: if char == '*': if not in_string:", "True: return string[:init_index] + string[end_index + 1:] return string def code_split(string): words =", "= PorterStemmer() stemmed = [] for token in words: stemmed.append(stemmer.stem(token)) return stemmed def", "= [] for token in words: stemmed.append(stemmer.stem(token)) return stemmed def camel_case_split(string): if not", "- 2] i = 0 if not re.match(r\"^[\\s]*.*\\*/\", code[comment_line - 1]): while not", "= 0 text_link = 1 comment_line = 2 positions = [] data =", "re.match(r\"^[\\s]*$\", code[comment_line + i]) or re.match(r\"^[\\s]*/\\*.*\", code[comment_line + i]) or re.match(r\"^\\*\", code[comment_line +", "is True: found = True end_index = i break else: maybe_block = True", "if __name__ == '__main__': # code = open('../testers/test.txt', 'r').read() # code_parser(code, 151, javadoc)", "the line isnt just brackets and some keywords, the foucs line is the", "serialize=False, set='train'): if serialized: x = open(serialize_outpath + 'serialized_' + set +'.json', 'r').read()", "False else: escape = False else: in_string = True elif char == '\\\\':", "block or javadoc # if the line doesn't start as a comment, the", "get_lines(set=set, serialized=False, serialize=True) words = [] for line in lines: words.append(word_extractor(line, stemming, rem_keyws))", "- 1]): return 
code[comment_line - 1] if comment_line >= len(code) - 1: return", "it while re.match(r\"^[\\s]*$\", code[comment_line + i]) or re.match(r\"^[\\s]*@[^\\s]*[\\s]*$\", code[comment_line + i]) or re.match(r\"^[\\s]*//.*\",", "words def word_extractor(string, stemming=True, rem_keyws=True): string = remove_line_comment(string) string = remove_block_comment(string) splitted =", "i]) or re.match(r\"[\\s]*[^}{](try|else|finally)[\\s]*{?\", code[comment_line + i]): # while the line starts as a", "1 return positions def get_positions_encoded(lines=None, set='train'): if lines is None: positions = get_positions(set=set)", "positions = get_positions(set=set) else: positions = get_positions(lines, set=set) le = LabelEncoder() return le.fit_transform(positions)", "get_lines(serialized=True, serialize=False, set='train'): if serialized: x = open(serialize_outpath + 'serialized_' + set +'.json',", "True: escape = False if comment is False: i += 1 return string", "line isnt just brackets and some keywords, the foucs line is the comment_line", "the line is a comment or is blank, ignore it i -= 1", "True: escape = True elif char == '/': if comment is False: comment", "keywords, the foucs line is the comment_line return code[comment_line - 1] i =", "or re.match(r\"^[\\s]*$\", code[comment_line + i]) or re.match(r\"[\\s]*[^}{](try|else|finally)[\\s]*{?\", code[comment_line + i]): # while the", "print(word_extractor(\"ciao mamma /*css rff*/\")) # print(tokenizer(\"t<EMAIL>@<EMAIL> @param\")) # print(camel_case_split(\"tuaMadre@QuellaTroia @param\")) # print(code_split(\"tuaMadre@QuellaTroia @param\"))", "False: in_string = False else: escape = False else: in_string = True elif", "a != \"\", words)) return words def remove_stopwords(tokens): stop_words = STOP_WORDS relevant_words =", "in splitted: camel_case_parts = camel_case_split(part) for camel in camel_case_parts: words.append(camel.lower()) if stemming and", "i]) or re.match( r\"^[\\s]*\\*/.*\", code[comment_line + i]): # while the 
line is a", "stemming and rem_keyws: return stem(remove_keywords(words)) elif stemming: return stem(words) else: return remove_keywords(words) def", "for c in string[1:]: if words[-1][-1].islower() and c.isupper(): words.append(list(c)) else: words[-1].append(c) return [''.join(word)", "if block is True: found = True end_index = i break else: maybe_block", "'r').read() return json.loads(x) comment_type = 0 text_link = 1 comment_line = 2 data", "is False: if block is True: maybe_block = True else: block = True", "except IndexError: return \"\" def get_positions(lines=None, set='train'): comment_type = 0 text_link = 1", "+ i]): i += 1 i += 1 # while the line starts", "return string def code_split(string): words = re.split(r'\\\\n|\\?|&|\\\\|;|,|\\*|\\(|\\)|\\{|\\s|\\.|/|_|:|=|<|>|\\||!|\"|\\+|-|\\[|\\]|\\'|\\}|\\^|#|%', string) words = list(filter(lambda a: a", "== '__main__': # code = open('../testers/test.txt', 'r').read() # code_parser(code, 151, javadoc) print(get_lines(serialized=False, serialize=True))", "it. 
I do this because they use multiple line comment to simulate a", "comment = False elif escape is True: escape = False if comment is", "blank, ignore it while re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"^[\\s]*$\", code[comment_line + i])", "if found is True: return string[:init_index] + string[end_index + 1:] return string def", "is True: return string[:init_index] + string[end_index + 1:] return string def code_split(string): words", "code[comment_line + i]): i = -2 # while the line is a comment", "False elif escape is True: escape = False if comment is False: i", "splitted: camel_case_parts = camel_case_split(part) for camel in camel_case_parts: words.append(camel.lower()) if stemming and rem_keyws:", "PorterStemmer() stemmed = [] for token in words: stemmed.append(stemmer.stem(token)) return stemmed def camel_case_split(string):", "= get_lines(set=set) i = 0 for row in data: #print(row[comment_line], row[comment_type], row[text_link] +", "False: comment = True else: return string[:i] elif comment is True: i +=", "+ i] except IndexError: return \"\" def get_positions(lines=None, set='train'): comment_type = 0 text_link", "serialize=True)) print('first') print(get_lines(serialized=True, serialize=False)) # get_positions() # line_type_identifier(\"ciao\") # code_parser3() # print(word_extractor(\"ciao mamma", "i]): # if this matches, the block is empty so i take the", "if comment_type == line: if not re.match(r\"^[\\s]*//.*\", code[ comment_line - 1]): # if", "escape = True elif char == '/': if comment is False: comment =", "it while re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"^[\\s]*$\", code[comment_line + i]) or re.match(r\"^[\\s]*/\\*.*\",", "just brackets and some keywords, the foucs line is the comment_line return code[comment_line", "2] i = 0 if not re.match(r\"^[\\s]*.*\\*/\", code[comment_line - 1]): while not re.match(r\"^[\\s]*\\*/\",", "= 0 text_link = 1 comment_line = 2 data = get_link_line_type(set=set) 
import json
import re

from sklearn.preprocessing import LabelEncoder

from src.comment_analysis.url_utils import get_text_by_url
from src.csv.csv_utils import get_link_line_type, get_keywords
from src.keys import line, serialize_outpath
from nltk.stem.porter import *
from spacy.lang.en.stop_words import STOP_WORDS
from src.comment_analysis.java_re import *


def get_line(code, comment_line, comment_type):
    code = code.splitlines()
    try:
        if comment_type == line:
            if not re.match(r"^[\s]*//.*", code[comment_line - 1]):
                # if the line doesn't start as a comment, the comment refers to this line
                if not re.match(r"^[\s]*}?[\s]*(else|try|finally)?[\s]*{?[\s]*//.*[\s]*$", code[comment_line - 1]):
                    # if the line isn't just brackets and some keywords, the focus line is the comment_line
                    return code[comment_line - 1]
            i = 0
            # while the line starts as a comment or is blank, ignore it
            while re.match(r"^[\s]*//.*", code[comment_line + i]) or re.match(r"^[\s]*$", code[comment_line + i]) or re.match(r"[\s]*[^}{](try|else|finally)[\s]*{?", code[comment_line + i]):
                i += 1
            # if this matches, probably the comment refers to the line before
            if re.match(r"^[\s]*}[\s]*.*", code[comment_line + i]) or re.match(r"[\s]*(try|else|finally)[\s]*{?", code[comment_line + i]):
                i = -2
                # while the line is a comment or is blank, ignore it
                while re.match(r"^[\s]*//.*", code[comment_line + i]) or re.match(r"^[\s]*$", code[comment_line + i]):
                    i -= 1
            return code[comment_line + i]
            # comment refers to that
            # r"^[\s]*}?[\s]*(else|try|finally)?[\s]*{?[\s]*.*$"
        else:
            # block or javadoc
            if not re.match(r"^[\s]*/\*.*", code[comment_line - 1]):
                # if the line doesn't start as a comment, the comment refers to this line
                return code[comment_line - 1]
            if comment_line >= len(code) - 1:
                return code[comment_line - 2]
            i = 0
            if not re.match(r"^[\s]*.*\*/", code[comment_line - 1]):
                while not re.match(r"^[\s]*\*/", code[comment_line + i]):
                    i += 1
            i += 1
            # while the line starts as a comment or is blank, or is an annotation, ignore it
            while re.match(r"^[\s]*$", code[comment_line + i]) or re.match(r"^[\s]*@[^\s]*[\s]*$", code[comment_line + i]) or re.match(r"^[\s]*//.*", code[comment_line + i]) or re.match(r"[\s]*[^}{](try|else|finally)[\s]*{?", code[comment_line + i]):
                i += 1
            # while the line starts as a comment, ignore it.
            # I do this because they use multiple line comments to simulate a block
            while re.match(r"^[\s]*//.*", code[comment_line + i]) or re.match(r"^[\s]*$", code[comment_line + i]) or re.match(r"^[\s]*/\*.*", code[comment_line + i]) or re.match(r"^\*", code[comment_line + i]) or re.match(r"^[\s]*\*/.*", code[comment_line + i]):
                i += 1
            if re.match(r"^[\s]*}.*", code[comment_line + i]) or re.match(r"[\s]*(try|else|finally)[\s]*{?", code[comment_line + i]):
                # if this matches, the block is empty, so take the first non-comment, non-empty line before the comment
                i = -2
                while re.match(r"^[\s]*//.*", code[comment_line + i]) or re.match(r"^[\s]*$", code[comment_line + i]) or re.match(r"^[\s]*/\*.*", code[comment_line + i]) or re.match(r"^\*", code[comment_line + i]) or re.match(r"^[\s]*\*/.*", code[comment_line + i]):
                    i -= 1
            return code[comment_line + i]
    except IndexError:
        return ""


def get_positions(lines=None, set='train'):
    comment_type = 0
    text_link = 1
    comment_line = 2
    positions = []
    data = get_link_line_type(set=set)
    if lines is None:
        lines = get_lines(set=set)
    i = 0
    for row in data:
        # print(row[comment_line], row[comment_type], row[text_link] + "#L" + str(row[comment_line]))
        focus_line = lines[i]
        # print(focus_line)
        p = get_position(focus_line)
        positions.append(p)
        # print(p)
        i += 1
    return positions


def get_positions_encoded(lines=None, set='train'):
    if lines is None:
        positions = get_positions(set=set)
    else:
        positions = get_positions(lines, set=set)
    le = LabelEncoder()
    return le.fit_transform(positions)


def get_lines(serialized=True, serialize=False, set='train'):
    if serialized:
        x = open(serialize_outpath + 'serialized_' + set + '.json', 'r').read()
        return json.loads(x)
    comment_type = 0
    text_link = 1
    comment_line = 2
    data = get_link_line_type(set=set)
    lines = []
    for row in data:
        code = get_text_by_url(row[text_link])
        focus_line = get_line(code, row[comment_line], row[comment_type])
        lines.append(focus_line)
    if serialize:
        x = open(serialize_outpath + 'serialized_' + set + '.json', 'w')
        x.write(json.dumps(lines))
    return lines


def get_code_words(stemming=True, rem_keyws=True, lines=None, set='train'):
    if lines is None:
        lines = get_lines(set=set, serialized=False, serialize=True)
    words = []
    for line in lines:
        words.append(word_extractor(line, stemming, rem_keyws))
    return words


def word_extractor(string, stemming=True, rem_keyws=True):
    string = remove_line_comment(string)
    string = remove_block_comment(string)
    splitted = code_split(string)
    words = []
    for part in splitted:
        camel_case_parts = camel_case_split(part)
        for camel in camel_case_parts:
            words.append(camel.lower())
    if stemming and rem_keyws:
        return stem(remove_keywords(words))
    elif stemming:
        return stem(words)
    else:
        return remove_keywords(words)


def remove_keywords(words):
    keywords = get_keywords()
    non_keywords = []
    for word in words:
        if word not in keywords:
            non_keywords.append(word)
    return non_keywords


def stem(words):
    stemmer = PorterStemmer()
    stemmed = []
    for token in words:
        stemmed.append(stemmer.stem(token))
    return stemmed


def camel_case_split(string):
    if not string:
        return string
    words = [[string[0]]]
    for c in string[1:]:
        if words[-1][-1].islower() and c.isupper():
            words.append(list(c))
        else:
            words[-1].append(c)
    return [''.join(word) for word in words]


def remove_line_comment(string):
    in_string = False
    escape = False
    comment = False
    i = 0
    for char in string:
        if char == '"':
            if in_string is True:
                if escape is False:
                    in_string = False
                else:
                    escape = False
            else:
                in_string = True
        elif char == '\\':
            if in_string is True:
                escape = True
        elif char == '/':
            if comment is False:
                comment = True
            else:
                return string[:i]
        elif comment is True:
            i += 1
            comment = False
        elif escape is True:
            escape = False
        if comment is False:
            i += 1
    return string


def remove_block_comment(string):
    in_string = False
    escape = False
    block = False
    maybe_block = False
    found = False
    init_index = 0
    end_index = 0
    i = 0
    for char in string:
        if char == '*':
            if not in_string:
                if maybe_block is False:
                    if block is True:
                        maybe_block = True
                else:
                    block = True
        if char == '"':
            if in_string is True:
                if escape is False:
                    in_string = False
                else:
                    escape = False
            else:
                in_string = True
        elif char == '\\':
            if in_string is True:
                escape = True
        elif char == '/':
            if not in_string:
                if maybe_block is True:
                    if block is True:
                        found = True
                        end_index = i
                        break
                else:
                    maybe_block = True
                    init_index = i
        i += 1
    if found is True:
        return string[:init_index] + string[end_index + 1:]
    return string


def code_split(string):
    words = re.split(r'\\n|\?|&|\\|;|,|\*|\(|\)|\{|\s|\.|/|_|:|=|<|>|\||!|"|\+|-|\[|\]|\'|\}|\^|#|%', string)
    words = list(filter(lambda a: a != "", words))
    return words


def remove_stopwords(tokens):
    stop_words = STOP_WORDS
    relevant_words = []
    for token in tokens:
        if token not in stop_words:
            relevant_words.append(token)
    return relevant_words


def tokenizer(string, rem_stop=False, stemming=False, rem_kws=False):
    tokens = code_split(string)
    new_tokens = []
    for token in tokens:
        for t in camel_case_split(token):
            new_tokens.append(t.lower())
    if rem_stop:
        new_tokens = remove_stopwords(new_tokens)
    if rem_kws:
        new_tokens = remove_keywords(new_tokens)
    if stemming:
        new_tokens = stem(new_tokens)
    return new_tokens


if __name__ == '__main__':
    # code = open('../testers/test.txt', 'r').read()
    # code_parser(code, 151, javadoc)
    print(get_lines(serialized=False, serialize=True))
    print('first')
    print(get_lines(serialized=True, serialize=False))
    # get_positions()
    # line_type_identifier("ciao")
    # code_parser3()
    # print(word_extractor("ciao mamma /*css rff*/"))
    # print(tokenizer("t<EMAIL>@<EMAIL> @param"))
    # print(camel_case_split("tuaMadre@QuellaTroia @param"))
    # print(code_split("tuaMadre@QuellaTroia
code[comment_line - 1]): while not re.match(r\"^[\\s]*\\*/\", code[comment_line + i]): i +=", "i = -2 while re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"^[\\s]*$\", code[comment_line + i])", "comment is False: comment = True else: return string[:i] elif comment is True:", "not in keywords: non_keywords.append(word) return non_keywords def stem(words): stemmer = PorterStemmer() stemmed =", "1]): return code[comment_line - 1] if comment_line >= len(code) - 1: return code[comment_line", "row[comment_type], row[text_link] + \"#L\" + str(row[comment_line])) focus_line = lines[i] #print(focus_line) p = get_position(focus_line)", "the line starts as a comment or is blank, or is an annotation,", "i]) or re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"[\\s]*[^}{](try|else|finally)[\\s]*{?\", code[comment_line + i]): i +=", "1 comment_line = 2 positions = [] data = get_link_line_type(set=set) if lines is", "comment_type = 0 text_link = 1 comment_line = 2 data = get_link_line_type(set=set) lines", "camel_case_parts = camel_case_split(part) for camel in camel_case_parts: words.append(camel.lower()) if stemming and rem_keyws: return", "not in_string: if maybe_block is False: if block is True: maybe_block = True", "return words def remove_stopwords(tokens): stop_words = STOP_WORDS relevant_words = [] for token in", "__name__ == '__main__': # code = open('../testers/test.txt', 'r').read() # code_parser(code, 151, javadoc) print(get_lines(serialized=False,", "line starts as a comment, ignore it. 
I do this because they use", "LabelEncoder() return le.fit_transform(positions) def get_lines(serialized=True, serialize=False, set='train'): if serialized: x = open(serialize_outpath +", "remove_keywords(words) def remove_keywords(words): keywords = get_keywords() non_keywords = [] for word in words:", "found = True end_index = i break else: maybe_block = True init_index =", "2 data = get_link_line_type(set=set) lines = [] for row in data: code =", "0 for char in string: if char == '\"': if in_string is True:", "line, serialize_outpath from nltk.stem.porter import * from spacy.lang.en.stop_words import STOP_WORDS from src.comment_analysis.java_re import", "== '/': if comment is False: comment = True else: return string[:i] elif", "return words def word_extractor(string, stemming=True, rem_keyws=True): string = remove_line_comment(string) string = remove_block_comment(string) splitted", "len(code) - 1: return code[comment_line - 2] i = 0 if not re.match(r\"^[\\s]*.*\\*/\",", "in_string is True: escape = True elif char == '/': if comment is", "nltk.stem.porter import * from spacy.lang.en.stop_words import STOP_WORDS from src.comment_analysis.java_re import * def get_line(code,", "STOP_WORDS from src.comment_analysis.java_re import * def get_line(code, comment_line, comment_type): code = code.splitlines() try:", "# r\"^[\\s]*}?[\\s]*(else|try|finally)?[\\s]*{?[\\s]*.*$\" else: # block or javadoc # if the line doesn't start", "rem_kws=False): tokens = code_split(string) new_tokens = [] for token in tokens: for t", "stem(words): stemmer = PorterStemmer() stemmed = [] for token in words: stemmed.append(stemmer.stem(token)) return", "line comment to simulate a block i += 1 if re.match(r\"^[\\s]*}.*\", code[comment_line +", "is a comment or is blank, ignore it i -= 1 return code[comment_line", "serialized=False, serialize=True) words = [] for line in lines: words.append(word_extractor(line, stemming, rem_keyws)) return", "set +'.json', 'w') x.write(json.dumps(lines)) 
return lines def get_code_words(stemming=True, rem_keyws=True, lines=None, set='train'): if lines", "char in string: if char == '\"': if in_string is True: if escape", "True elif char == '/': if not in_string: if maybe_block is True: if", "# code_parser(code, 151, javadoc) print(get_lines(serialized=False, serialize=True)) print('first') print(get_lines(serialized=True, serialize=False)) # get_positions() # line_type_identifier(\"ciao\")", "= 0 while re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"^[\\s]*$\", code[comment_line + i]) or", "+ 'serialized_' + set +'.json', 'r').read() return json.loads(x) comment_type = 0 text_link =", "return string words = [[string[0]]] for c in string[1:]: if words[-1][-1].islower() and c.isupper():", "relevant_words = [] for token in tokens: if token not in stop_words: relevant_words.append(token)", "ignore it i -= 1 return code[comment_line + i] # comment refers to", "code[comment_line + i]) or re.match( r\"^[\\s]*/\\*.*\", code[comment_line + i]) or re.match(r\"^\\*\", code[comment_line +", "code_split(string) new_tokens = [] for token in tokens: for t in camel_case_split(token): new_tokens.append(t.lower())", "comment_type): code = code.splitlines() try: if comment_type == line: if not re.match(r\"^[\\s]*//.*\", code[", "True init_index = i i += 1 if found is True: return string[:init_index]", "= -2 # while the line is a comment or is blank, ignore", "string: if char == '*': if not in_string: if maybe_block is False: if", "escape = False comment = False i = 0 for char in string:", "stemmed = [] for token in words: stemmed.append(stemmer.stem(token)) return stemmed def camel_case_split(string): if", "re.split(r'\\\\n|\\?|&|\\\\|;|,|\\*|\\(|\\)|\\{|\\s|\\.|/|_|:|=|<|>|\\||!|\"|\\+|-|\\[|\\]|\\'|\\}|\\^|#|%', string) words = list(filter(lambda a: a != \"\", words)) return words def", "take the first non comment non empty line before the comment. 
i =", "i -= 1 return code[comment_line + i] # comment refers to that #", "get_position(focus_line) positions.append(p) #print(p) i += 1 return positions def get_positions_encoded(lines=None, set='train'): if lines", "+'.json', 'w') x.write(json.dumps(lines)) return lines def get_code_words(stemming=True, rem_keyws=True, lines=None, set='train'): if lines is", "sklearn.preprocessing import LabelEncoder from src.comment_analysis.url_utils import get_text_by_url from src.csv.csv_utils import get_link_line_type, get_keywords from", "+ i]) or re.match(r\"^[\\s]*$\", code[comment_line + i]) or re.match(r\"^[\\s]*/\\*.*\", code[comment_line + i]) or", "[] for line in lines: words.append(word_extractor(line, stemming, rem_keyws)) return words def word_extractor(string, stemming=True,", "words = [] for part in splitted: camel_case_parts = camel_case_split(part) for camel in", "token in words: stemmed.append(stemmer.stem(token)) return stemmed def camel_case_split(string): if not string: return string", "= False else: in_string = True elif char == '\\\\': if in_string is", "serialize=False)) # get_positions() # line_type_identifier(\"ciao\") # code_parser3() # print(word_extractor(\"ciao mamma /*css rff*/\")) #", "- 1] i = 0 while re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"^[\\s]*$\", code[comment_line", "block is True: maybe_block = True else: block = True if char ==", "doesn't start as a comment, the comment refers to this line if not", "0 text_link = 1 comment_line = 2 positions = [] data = get_link_line_type(set=set)", "if rem_kws: new_tokens = remove_keywords(new_tokens) if stemming: new_tokens = stem(new_tokens) return new_tokens if", "= 2 positions = [] data = get_link_line_type(set=set) if lines is None: lines", "None: lines = get_lines(set=set, serialized=False, serialize=True) words = [] for line in lines:", "is None: positions = get_positions(set=set) else: positions = get_positions(lines, set=set) le = LabelEncoder()", "is False: i += 1 
return string def remove_block_comment(string): in_string = False escape", "is a comment or is blank, ignore it while re.match(r\"^[\\s]*//.*\", code[comment_line + i])", "code[ comment_line + i]): # if this matches, the block is empty so", "or is an annotation, ignore it while re.match(r\"^[\\s]*$\", code[comment_line + i]) or re.match(r\"^[\\s]*@[^\\s]*[\\s]*$\",", "the comment refers to this line if not re.match(r\"^[\\s]*/\\*.*\", code[comment_line - 1]): return", "tokens: if token not in stop_words: relevant_words.append(token) return relevant_words def tokenizer(string, rem_stop=False, stemming=False,", "words = [] for line in lines: words.append(word_extractor(line, stemming, rem_keyws)) return words def", "remove_block_comment(string): in_string = False escape = False block = False maybe_block = False", "string def code_split(string): words = re.split(r'\\\\n|\\?|&|\\\\|;|,|\\*|\\(|\\)|\\{|\\s|\\.|/|_|:|=|<|>|\\||!|\"|\\+|-|\\[|\\]|\\'|\\}|\\^|#|%', string) words = list(filter(lambda a: a !=", "this line if not re.match(r\"^[\\s]*}?[\\s]*(else|try|finally)?[\\s]*{?[\\s]*//.*[\\s]*$\", code[ comment_line - 1]): # if the line", "splitted = code_split(string) words = [] for part in splitted: camel_case_parts = camel_case_split(part)", "False block = False maybe_block = False found = False init_index = 0", "word in words] def remove_line_comment(string): in_string = False escape = False comment =", "# while the line starts as a comment or is blank, or is", "= get_link_line_type(set=set) lines = [] for row in data: code = get_text_by_url(row[text_link]) focus_line", "else: escape = False else: in_string = True elif char == '\\\\': if", "get_positions(set=set) else: positions = get_positions(lines, set=set) le = LabelEncoder() return le.fit_transform(positions) def get_lines(serialized=True,", "comment_line return code[comment_line - 1] i = 0 while re.match(r\"^[\\s]*//.*\", code[comment_line + i])", "as a comment, ignore it. 
I do this because they use multiple line", "= open(serialize_outpath + 'serialized_' + set +'.json', 'w') x.write(json.dumps(lines)) return lines def get_code_words(stemming=True,", "code = code.splitlines() try: if comment_type == line: if not re.match(r\"^[\\s]*//.*\", code[ comment_line", "if serialize: x = open(serialize_outpath + 'serialized_' + set +'.json', 'w') x.write(json.dumps(lines)) return", "token in tokens: if token not in stop_words: relevant_words.append(token) return relevant_words def tokenizer(string,", "code[comment_line + i] except IndexError: return \"\" def get_positions(lines=None, set='train'): comment_type = 0", "True: if block is True: found = True end_index = i break else:", "if comment is False: comment = True else: return string[:i] elif comment is", "or re.match(r\"^\\*\", code[comment_line + i]) or re.match(r\"^[\\s]*\\*/.*\", code[comment_line + i]): i -= 1", "def camel_case_split(string): if not string: return string words = [[string[0]]] for c in", "from src.keys import line, serialize_outpath from nltk.stem.porter import * from spacy.lang.en.stop_words import STOP_WORDS", "i += 1 return string def remove_block_comment(string): in_string = False escape = False", "'serialized_' + set +'.json', 'r').read() return json.loads(x) comment_type = 0 text_link = 1", "True: found = True end_index = i break else: maybe_block = True init_index", "1 comment = False elif escape is True: escape = False if comment", "if escape is False: in_string = False else: escape = False else: in_string", "set +'.json', 'r').read() return json.loads(x) comment_type = 0 text_link = 1 comment_line =", "False: i += 1 return string def remove_block_comment(string): in_string = False escape =", "serialize: x = open(serialize_outpath + 'serialized_' + set +'.json', 'w') x.write(json.dumps(lines)) return lines", "escape is False: in_string = False else: escape = False else: in_string =", "x = open(serialize_outpath + 'serialized_' + set +'.json', 'r').read() return 
json.loads(x) comment_type =", "json from sklearn.preprocessing import LabelEncoder from src.comment_analysis.url_utils import get_text_by_url from src.csv.csv_utils import get_link_line_type,", "+ i] # comment refers to that # r\"^[\\s]*}?[\\s]*(else|try|finally)?[\\s]*{?[\\s]*.*$\" else: # block or", "- 1: return code[comment_line - 2] i = 0 if not re.match(r\"^[\\s]*.*\\*/\", code[comment_line", "in tokens: if token not in stop_words: relevant_words.append(token) return relevant_words def tokenizer(string, rem_stop=False,", "new_tokens = remove_stopwords(new_tokens) if rem_kws: new_tokens = remove_keywords(new_tokens) if stemming: new_tokens = stem(new_tokens)", "def get_lines(serialized=True, serialize=False, set='train'): if serialized: x = open(serialize_outpath + 'serialized_' + set", "remove_line_comment(string) string = remove_block_comment(string) splitted = code_split(string) words = [] for part in", "False i = 0 for char in string: if char == '\"': if", "the line doesn't start as a comment, the comment refers to this line", "re.match(r\"^[\\s]*/\\*.*\", code[comment_line - 1]): return code[comment_line - 1] if comment_line >= len(code) -", "code[comment_line - 1]): return code[comment_line - 1] if comment_line >= len(code) - 1:", "151, javadoc) print(get_lines(serialized=False, serialize=True)) print('first') print(get_lines(serialized=True, serialize=False)) # get_positions() # line_type_identifier(\"ciao\") # code_parser3()", "False escape = False comment = False i = 0 for char in", "maybe_block is False: if block is True: maybe_block = True else: block =", "code_parser(code, 151, javadoc) print(get_lines(serialized=False, serialize=True)) print('first') print(get_lines(serialized=True, serialize=False)) # get_positions() # line_type_identifier(\"ciao\") #", "set='train'): if lines is None: lines = get_lines(set=set, serialized=False, serialize=True) words = []", "= remove_block_comment(string) splitted = code_split(string) words = [] for part in 
splitted: camel_case_parts", "this line if not re.match(r\"^[\\s]*/\\*.*\", code[comment_line - 1]): return code[comment_line - 1] if", "'r').read() # code_parser(code, 151, javadoc) print(get_lines(serialized=False, serialize=True)) print('first') print(get_lines(serialized=True, serialize=False)) # get_positions() #", "if token not in stop_words: relevant_words.append(token) return relevant_words def tokenizer(string, rem_stop=False, stemming=False, rem_kws=False):", "+= 1 comment = False elif escape is True: escape = False if", "False found = False init_index = 0 end_index = 0 i = 0", "= open('../testers/test.txt', 'r').read() # code_parser(code, 151, javadoc) print(get_lines(serialized=False, serialize=True)) print('first') print(get_lines(serialized=True, serialize=False)) #", "in words: if word not in keywords: non_keywords.append(word) return non_keywords def stem(words): stemmer", "+= 1 return positions def get_positions_encoded(lines=None, set='train'): if lines is None: positions =", "= stem(new_tokens) return new_tokens if __name__ == '__main__': # code = open('../testers/test.txt', 'r').read()", "line if not re.match(r\"^[\\s]*}?[\\s]*(else|try|finally)?[\\s]*{?[\\s]*//.*[\\s]*$\", code[ comment_line - 1]): # if the line isnt", "words: stemmed.append(stemmer.stem(token)) return stemmed def camel_case_split(string): if not string: return string words =", "code = open('../testers/test.txt', 'r').read() # code_parser(code, 151, javadoc) print(get_lines(serialized=False, serialize=True)) print('first') print(get_lines(serialized=True, serialize=False))", "for token in tokens: for t in camel_case_split(token): new_tokens.append(t.lower()) if rem_stop: new_tokens =", "camel in camel_case_parts: words.append(camel.lower()) if stemming and rem_keyws: return stem(remove_keywords(words)) elif stemming: return", "comment_line - 1]): # if the line isnt just brackets and some keywords,", "code[comment_line - 1] if comment_line >= len(code) - 1: return 
code[comment_line - 2]", "if re.match(r\"^[\\s]*}[\\s]*.*\", code[comment_line + i]) or re.match(r\"[\\s]*(try|else|finally)[\\s]*{?\", code[comment_line + i]): i = -2", "if maybe_block is True: if block is True: found = True end_index =", "re.match(r\"^[\\s]*$\", code[comment_line + i]) or re.match(r\"^[\\s]*@[^\\s]*[\\s]*$\", code[comment_line + i]) or re.match(r\"^[\\s]*//.*\", code[comment_line +", "starts as a comment or is blank, or is an annotation, ignore it", "re.match(r\"^[\\s]*$\", code[comment_line + i]) or re.match( r\"^[\\s]*/\\*.*\", code[comment_line + i]) or re.match(r\"^\\*\", code[comment_line", "0 for char in string: if char == '*': if not in_string: if", "to simulate a block i += 1 if re.match(r\"^[\\s]*}.*\", code[comment_line + i]) or", "= False block = False maybe_block = False found = False init_index =", "[] data = get_link_line_type(set=set) if lines is None: lines = get_lines(set=set) i =", "= [] for word in words: if word not in keywords: non_keywords.append(word) return", "that # r\"^[\\s]*}?[\\s]*(else|try|finally)?[\\s]*{?[\\s]*.*$\" else: # block or javadoc # if the line doesn't", "+ 'serialized_' + set +'.json', 'w') x.write(json.dumps(lines)) return lines def get_code_words(stemming=True, rem_keyws=True, lines=None,", "non_keywords = [] for word in words: if word not in keywords: non_keywords.append(word)", "maybe_block is True: if block is True: found = True end_index = i", "import get_text_by_url from src.csv.csv_utils import get_link_line_type, get_keywords from src.keys import line, serialize_outpath from", "escape = False else: in_string = True elif char == '\\\\': if in_string", "tokens = code_split(string) new_tokens = [] for token in tokens: for t in", "stemming: new_tokens = stem(new_tokens) return new_tokens if __name__ == '__main__': # code =", "i] # comment refers to that # r\"^[\\s]*}?[\\s]*(else|try|finally)?[\\s]*{?[\\s]*.*$\" else: # block or javadoc", "def get_positions_encoded(lines=None, set='train'): if lines 
is None: positions = get_positions(set=set) else: positions =", "if words[-1][-1].islower() and c.isupper(): words.append(list(c)) else: words[-1].append(c) return [''.join(word) for word in words]", "+ i]): i -= 1 return code[comment_line + i] except IndexError: return \"\"", "+ 1:] return string def code_split(string): words = re.split(r'\\\\n|\\?|&|\\\\|;|,|\\*|\\(|\\)|\\{|\\s|\\.|/|_|:|=|<|>|\\||!|\"|\\+|-|\\[|\\]|\\'|\\}|\\^|#|%', string) words = list(filter(lambda", "data: #print(row[comment_line], row[comment_type], row[text_link] + \"#L\" + str(row[comment_line])) focus_line = lines[i] #print(focus_line) p", "i]): # while the line is a comment or is blank, ignore it", "line in lines: words.append(word_extractor(line, stemming, rem_keyws)) return words def word_extractor(string, stemming=True, rem_keyws=True): string", "return [''.join(word) for word in words] def remove_line_comment(string): in_string = False escape =", "code[comment_line + i]) or re.match(r\"^\\*\", code[comment_line + i]) or re.match( r\"^[\\s]*\\*/.*\", code[comment_line +", "do this because they use multiple line comment to simulate a block i", "+ i]): i += 1 # if this matches, probabily the comment refers", "STOP_WORDS relevant_words = [] for token in tokens: if token not in stop_words:", "stop_words: relevant_words.append(token) return relevant_words def tokenizer(string, rem_stop=False, stemming=False, rem_kws=False): tokens = code_split(string) new_tokens", "i]) or re.match(r\"[\\s]*(try|else|finally)[\\s]*{?\", code[comment_line + i]): i = -2 # while the line", "tokenizer(string, rem_stop=False, stemming=False, rem_kws=False): tokens = code_split(string) new_tokens = [] for token in", "for line in lines: words.append(word_extractor(line, stemming, rem_keyws)) return words def word_extractor(string, stemming=True, rem_keyws=True):", "False: if block is True: maybe_block = True else: block = True if", "re.match(r\"^\\*\", code[comment_line + i]) or re.match( r\"^[\\s]*\\*/.*\", 
code[comment_line + i]): # while the", "is blank, ignore it while re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or re.match(r\"^[\\s]*$\", code[comment_line +", "get_keywords() non_keywords = [] for word in words: if word not in keywords:", "line before the comment. i = -2 while re.match(r\"^[\\s]*//.*\", code[comment_line + i]) or", "\"\", words)) return words def remove_stopwords(tokens): stop_words = STOP_WORDS relevant_words = [] for", "return stemmed def camel_case_split(string): if not string: return string words = [[string[0]]] for", "[] for row in data: code = get_text_by_url(row[text_link]) focus_line = get_line(code, row[comment_line], row[comment_type])", "from nltk.stem.porter import * from spacy.lang.en.stop_words import STOP_WORDS from src.comment_analysis.java_re import * def", "not in stop_words: relevant_words.append(token) return relevant_words def tokenizer(string, rem_stop=False, stemming=False, rem_kws=False): tokens =", "+'.json', 'r').read() return json.loads(x) comment_type = 0 text_link = 1 comment_line = 2", "re.match(r\"^[\\s]*/\\*.*\", code[comment_line + i]) or re.match(r\"^\\*\", code[comment_line + i]) or re.match(r\"^[\\s]*\\*/.*\", code[comment_line +", "char == '*': if not in_string: if maybe_block is False: if block is", "re.match(r\"^[\\s]*\\*/\", code[comment_line + i]): i += 1 i += 1 # while the", "comment is False: i += 1 return string def remove_block_comment(string): in_string = False", "r\"^[\\s]*}?[\\s]*(else|try|finally)?[\\s]*{?[\\s]*.*$\" else: # block or javadoc # if the line doesn't start as", "in_string = False escape = False comment = False i = 0 for", "because they use multiple line comment to simulate a block i += 1", "0 text_link = 1 comment_line = 2 data = get_link_line_type(set=set) lines = []", "get_code_words(stemming=True, rem_keyws=True, lines=None, set='train'): if lines is None: lines = get_lines(set=set, serialized=False, serialize=True)", "lines is None: positions = get_positions(set=set) else: 
positions = get_positions(lines, set=set) le =", "= i break else: maybe_block = True init_index = i i += 1", "or re.match(r\"^[\\s]*$\", code[comment_line + i]) or re.match( r\"^[\\s]*/\\*.*\", code[comment_line + i]) or re.match(r\"^\\*\",", "+ i]) or re.match(r\"[\\s]*(try|else|finally)[\\s]*{?\", code[comment_line + i]): i = -2 # while the", "True else: return string[:i] elif comment is True: i += 1 comment =", "= False comment = False i = 0 for char in string: if" ]
[ "opposite raise Exception(\"Could not find combi\") def main(): print(calculate_result(read_input(\"sample.txt\"))) print(calculate_result(read_input(\"input.txt\"))) if __name__ ==", "if opposite in numbers: return number * opposite raise Exception(\"Could not find combi\")", "-> List[int]: res = set({}) with open(file_path, 'r') as file_handle: for line in", "List def read_input(file_path) -> List[int]: res = set({}) with open(file_path, 'r') as file_handle:", "res = set({}) with open(file_path, 'r') as file_handle: for line in file_handle: res.add(int(line))", "import List def read_input(file_path) -> List[int]: res = set({}) with open(file_path, 'r') as", "file_handle: for line in file_handle: res.add(int(line)) return res def calculate_result(numbers: List[int]) -> int:", "number in numbers: opposite = 2020 - number if opposite in numbers: return", "Exception(\"Could not find combi\") def main(): print(calculate_result(read_input(\"sample.txt\"))) print(calculate_result(read_input(\"input.txt\"))) if __name__ == \"__main__\": main()", "for number in numbers: opposite = 2020 - number if opposite in numbers:", "'r') as file_handle: for line in file_handle: res.add(int(line)) return res def calculate_result(numbers: List[int])", "def read_input(file_path) -> List[int]: res = set({}) with open(file_path, 'r') as file_handle: for", "with open(file_path, 'r') as file_handle: for line in file_handle: res.add(int(line)) return res def", "calculate_result(numbers: List[int]) -> int: for number in numbers: opposite = 2020 - number", "return number * opposite raise Exception(\"Could not find combi\") def main(): print(calculate_result(read_input(\"sample.txt\"))) print(calculate_result(read_input(\"input.txt\")))", "* opposite raise Exception(\"Could not find combi\") def main(): print(calculate_result(read_input(\"sample.txt\"))) print(calculate_result(read_input(\"input.txt\"))) if __name__", "res.add(int(line)) return res def calculate_result(numbers: List[int]) -> 
int: for number in numbers: opposite", "2020 - number if opposite in numbers: return number * opposite raise Exception(\"Could", "- number if opposite in numbers: return number * opposite raise Exception(\"Could not", "= set({}) with open(file_path, 'r') as file_handle: for line in file_handle: res.add(int(line)) return", "number * opposite raise Exception(\"Could not find combi\") def main(): print(calculate_result(read_input(\"sample.txt\"))) print(calculate_result(read_input(\"input.txt\"))) if", "in file_handle: res.add(int(line)) return res def calculate_result(numbers: List[int]) -> int: for number in", "opposite = 2020 - number if opposite in numbers: return number * opposite", "line in file_handle: res.add(int(line)) return res def calculate_result(numbers: List[int]) -> int: for number", "numbers: return number * opposite raise Exception(\"Could not find combi\") def main(): print(calculate_result(read_input(\"sample.txt\")))", "-> int: for number in numbers: opposite = 2020 - number if opposite", "res def calculate_result(numbers: List[int]) -> int: for number in numbers: opposite = 2020", "open(file_path, 'r') as file_handle: for line in file_handle: res.add(int(line)) return res def calculate_result(numbers:", "raise Exception(\"Could not find combi\") def main(): print(calculate_result(read_input(\"sample.txt\"))) print(calculate_result(read_input(\"input.txt\"))) if __name__ == \"__main__\":", "from typing import List def read_input(file_path) -> List[int]: res = set({}) with open(file_path,", "in numbers: opposite = 2020 - number if opposite in numbers: return number", "List[int]: res = set({}) with open(file_path, 'r') as file_handle: for line in file_handle:", "opposite in numbers: return number * opposite raise Exception(\"Could not find combi\") def", "number if opposite in numbers: return number * opposite raise Exception(\"Could not find", "for line in file_handle: res.add(int(line)) return res def calculate_result(numbers: List[int]) -> int: 
for", "file_handle: res.add(int(line)) return res def calculate_result(numbers: List[int]) -> int: for number in numbers:", "List[int]) -> int: for number in numbers: opposite = 2020 - number if", "typing import List def read_input(file_path) -> List[int]: res = set({}) with open(file_path, 'r')", "in numbers: return number * opposite raise Exception(\"Could not find combi\") def main():", "numbers: opposite = 2020 - number if opposite in numbers: return number *", "read_input(file_path) -> List[int]: res = set({}) with open(file_path, 'r') as file_handle: for line", "set({}) with open(file_path, 'r') as file_handle: for line in file_handle: res.add(int(line)) return res", "def calculate_result(numbers: List[int]) -> int: for number in numbers: opposite = 2020 -", "int: for number in numbers: opposite = 2020 - number if opposite in", "return res def calculate_result(numbers: List[int]) -> int: for number in numbers: opposite =", "as file_handle: for line in file_handle: res.add(int(line)) return res def calculate_result(numbers: List[int]) ->", "= 2020 - number if opposite in numbers: return number * opposite raise" ]
from unittest import TestCase

from h.models import User

from . import AppTestCase


class UserTest(AppTestCase):
    def test_password_encrypt(self):
        """make sure user passwords are stored encrypted"""
        u1 = User(username=u'test', password=u'<PASSWORD>', email=u'<EMAIL>')
        assert u1.password != '<PASSWORD>'
        self.db.add(u1)
        self.db.flush()
        assert u1.password != '<PASSWORD>'
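The first assertion passes before anything touches the database, so the test implies that `User` transforms the password at assignment time, not at flush time. A minimal sketch of that write-through pattern using a property setter (the class, and the plain sha256 digest, are hypothetical stand-ins; a real user model such as h's would use a proper salted password-hashing scheme):

```python
import hashlib


class Account:
    def __init__(self, password):
        self.password = password  # routed through the property setter

    @property
    def password(self):
        return self._password

    @password.setter
    def password(self, raw):
        # Only the digest is ever stored; the raw secret is discarded
        # immediately, which is why the attribute never equals the input.
        self._password = hashlib.sha256(raw.encode()).hexdigest()


a = Account("hunter2")
assert a.password != "hunter2"
```

Because the transformation happens in the setter, the invariant holds the moment the object is constructed, exactly as the test above expects.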
<filename>tagannotator/base/migrations/0005_auto_20200113_0518.py
# Generated by Django 2.2.7 on 2020-01-13 05:18

import base.models
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('base', '0004_auto_20200113_0232'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='photo',
            name='title',
        ),
        migrations.RemoveField(
            model_name='post',
            name='user',
        ),
        migrations.RemoveField(
            model_name='tag',
            name='madeat',
        ),
        migrations.AddField(
            model_name='post',
            name='source',
            field=models.CharField(blank=True, choices=[('instagram', 'instagram'), ('upload', 'upload')], max_length=20, null=True),
        ),
        migrations.AddField(
            model_name='tag',
            name='madeby',
            field=models.CharField(blank=True, choices=[('user', 'user'), ('post', 'post')], max_length=20, null=True),
        ),
        migrations.AlterField(
            model_name='photo',
            name='file',
            field=models.ImageField(upload_to=base.models.user_directory_path),
        ),
        migrations.CreateModel(
            name='Session',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('starttime', models.DateTimeField(auto_now_add=True)),
                ('endtime', models.DateTimeField(auto_now_add=True)),
                ('status', models.BooleanField(blank=True, default=False)),
                ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
            ],
        ),
    ]
<filename>get_model.py
# -*- coding: utf-8 -*-
"""
Created on Tue Feb 16 17:14:38 2021

@author: dev
"""
from tensorflow.keras.applications.vgg16 import VGG16

model = VGG16()
model.save('mymodel.h5')
print("VGG16 model downloaded and saved successfully")
<reponame>gmum/lcw-generator
from architecture.encoder.cnn_encoder_block import CnnEncoderBlock
import torch
import torch.nn as nn
from architecture.generator.linear_generator_block import LinearGeneratorBlock


class Encoder(nn.Module):

    def __init__(self, latent_size: int):
        super().__init__()
        self.__sequential_blocks = [
            CnnEncoderBlock(3, 128),
            CnnEncoderBlock(128, 256),
            nn.Flatten(start_dim=1),
            LinearGeneratorBlock(7 * 7 * 256, 256),
            nn.Linear(256, latent_size)
        ]
        self.main = nn.Sequential(*self.__sequential_blocks)

    def forward(self, input_images: torch.Tensor):
        assert input_images.size(1) == 3 and input_images.size(2) == 28 and input_images.size(3) == 28
        encoded_latent = self.main(input_images)
        return encoded_latent
"""
For parsing the main Ben Yehuda site page
"""
from urllib import request
from urllib import parse as urlparse

from bs4 import BeautifulSoup

from .helpers import NamedLink, clean_text


class MainPage(object):
    """
    Parses and gets information from the main index page.
    Mostly used to get links for all of the artist pages
    """
    def __init__(self, url="http://benyehuda.org"):
        self.main_url = url
        self.soup = BeautifulSoup(request.urlopen(url))

    @staticmethod
    def artist_a_filter(tag):
        """
        Finds all the links in the index page that point to an artist's page
        """
        if tag.name != "a":
            return False
        href = tag.get("href").lower()
        # Artist links are supposed to be internal
        if href.startswith("http"):
            return False
        # Remove unrelated crap
        if href.startswith("javascript"):
            return False
        # Artist pages are one branch below the main page and their links
        # usually end with / - Need to verify
        if href.count("/") == 1 and href[-1] == "/":
            return True
        return False

    def get_artist_links(self):
        """
        :return: A set of unique artist page urls and names
        :rtype: set[NamedLink]
        """
        anchors = self.soup.find_all(self.artist_a_filter)
        links = set()
        for anchor in anchors:
            url = urlparse.urljoin(self.main_url, anchor.get("href").lower())
            links.add(NamedLink(url, clean_text(anchor)))
        return links
if href.startswith(\"javascript\"): return False # Artist pages", "page and their links # usually end with / - Need to verify", "NamedLink, clean_text class MainPage(object): \"\"\" Parses and gets information from the main index", "def artist_a_filter(tag): \"\"\" Finds all the links in the index page that points", "= self.soup.find_all(self.artist_a_filter) links = set() for anchor in anchors: url = urlparse.urljoin(self.main_url, anchor.get(\"href\").lower())", "of unique artist page urls and names :rtype: set[NamedLink] \"\"\" anchors = self.soup.find_all(self.artist_a_filter)", "links for all of the artist pages \"\"\" def __init__(self, url=\"http://benyehuda.org\"): self.main_url =", "MainPage(object): \"\"\" Parses and gets information from the main index page. Mostly used", "request from urllib import parse as urlparse from bs4 import BeautifulSoup from .helpers", "\"\"\" :return: A set of unique artist page urls and names :rtype: set[NamedLink]", "get_artist_links(self): \"\"\" :return: A set of unique artist page urls and names :rtype:", "Remove unrelated crap if href.startswith(\"javascript\"): return False # Artist pages are one branch", "from bs4 import BeautifulSoup from .helpers import NamedLink, clean_text class MainPage(object): \"\"\" Parses", "their links # usually end with / - Need to verify if href.count(\"/\")", "\"\"\" Class for parsing the main Ben Yehuda site page \"\"\" from urllib", "and their links # usually end with / - Need to verify if", "urlparse from bs4 import BeautifulSoup from .helpers import NamedLink, clean_text class MainPage(object): \"\"\"", "href = tag.get(\"href\").lower() # Artist links are supposed to be internal if href.startswith(\"http\"):", "main Ben Yehuda site page \"\"\" from urllib import request from urllib import", "def __init__(self, url=\"http://benyehuda.org\"): self.main_url = url self.soup = BeautifulSoup(request.urlopen(url)) @staticmethod def artist_a_filter(tag): \"\"\"", "points to an artist's page 
\"\"\" if tag.name != \"a\": return False href", "1 and href[-1] == \"/\": return True return False def get_artist_links(self): \"\"\" :return:", ".helpers import NamedLink, clean_text class MainPage(object): \"\"\" Parses and gets information from the", "False href = tag.get(\"href\").lower() # Artist links are supposed to be internal if", "to verify if href.count(\"/\") == 1 and href[-1] == \"/\": return True return", "an artist's page \"\"\" if tag.name != \"a\": return False href = tag.get(\"href\").lower()", "A set of unique artist page urls and names :rtype: set[NamedLink] \"\"\" anchors", "Yehuda site page \"\"\" from urllib import request from urllib import parse as", "Ben Yehuda site page \"\"\" from urllib import request from urllib import parse", "artist_a_filter(tag): \"\"\" Finds all the links in the index page that points to", "get links for all of the artist pages \"\"\" def __init__(self, url=\"http://benyehuda.org\"): self.main_url", "to get links for all of the artist pages \"\"\" def __init__(self, url=\"http://benyehuda.org\"):", "= url self.soup = BeautifulSoup(request.urlopen(url)) @staticmethod def artist_a_filter(tag): \"\"\" Finds all the links", "the links in the index page that points to an artist's page \"\"\"", "from urllib import parse as urlparse from bs4 import BeautifulSoup from .helpers import", "BeautifulSoup(request.urlopen(url)) @staticmethod def artist_a_filter(tag): \"\"\" Finds all the links in the index page", "to be internal if href.startswith(\"http\"): return False # Remove unrelated crap if href.startswith(\"javascript\"):", "self.soup.find_all(self.artist_a_filter) links = set() for anchor in anchors: url = urlparse.urljoin(self.main_url, anchor.get(\"href\").lower()) links.add(NamedLink(url,", "from urllib import request from urllib import parse as urlparse from bs4 import", "index page that points to an artist's page \"\"\" if tag.name != \"a\":", "set of unique artist page urls and names :rtype: set[NamedLink] \"\"\" 
anchors =", "be internal if href.startswith(\"http\"): return False # Remove unrelated crap if href.startswith(\"javascript\"): return", "set() for anchor in anchors: url = urlparse.urljoin(self.main_url, anchor.get(\"href\").lower()) links.add(NamedLink(url, clean_text(anchor))) return links", "return False # Artist pages are one branch below the main page and", "and names :rtype: set[NamedLink] \"\"\" anchors = self.soup.find_all(self.artist_a_filter) links = set() for anchor", "branch below the main page and their links # usually end with /", "crap if href.startswith(\"javascript\"): return False # Artist pages are one branch below the", "import NamedLink, clean_text class MainPage(object): \"\"\" Parses and gets information from the main", "gets information from the main index page. Mostly used to get links for", "self.main_url = url self.soup = BeautifulSoup(request.urlopen(url)) @staticmethod def artist_a_filter(tag): \"\"\" Finds all the", "links are supposed to be internal if href.startswith(\"http\"): return False # Remove unrelated", "href.count(\"/\") == 1 and href[-1] == \"/\": return True return False def get_artist_links(self):", "Finds all the links in the index page that points to an artist's", "tag.get(\"href\").lower() # Artist links are supposed to be internal if href.startswith(\"http\"): return False", "all of the artist pages \"\"\" def __init__(self, url=\"http://benyehuda.org\"): self.main_url = url self.soup", "all the links in the index page that points to an artist's page", "\"/\": return True return False def get_artist_links(self): \"\"\" :return: A set of unique", "\"\"\" anchors = self.soup.find_all(self.artist_a_filter) links = set() for anchor in anchors: url =", "page that points to an artist's page \"\"\" if tag.name != \"a\": return", "\"\"\" if tag.name != \"a\": return False href = tag.get(\"href\").lower() # Artist links", "def get_artist_links(self): \"\"\" :return: A set of unique artist page urls and names", "= 
BeautifulSoup(request.urlopen(url)) @staticmethod def artist_a_filter(tag): \"\"\" Finds all the links in the index", "return False href = tag.get(\"href\").lower() # Artist links are supposed to be internal", "page \"\"\" if tag.name != \"a\": return False href = tag.get(\"href\").lower() # Artist", "Artist pages are one branch below the main page and their links #", "urls and names :rtype: set[NamedLink] \"\"\" anchors = self.soup.find_all(self.artist_a_filter) links = set() for", "artist's page \"\"\" if tag.name != \"a\": return False href = tag.get(\"href\").lower() #", "to an artist's page \"\"\" if tag.name != \"a\": return False href =", "href.startswith(\"javascript\"): return False # Artist pages are one branch below the main page", "end with / - Need to verify if href.count(\"/\") == 1 and href[-1]", "unique artist page urls and names :rtype: set[NamedLink] \"\"\" anchors = self.soup.find_all(self.artist_a_filter) links", "\"\"\" Parses and gets information from the main index page. 
Mostly used to", "= tag.get(\"href\").lower() # Artist links are supposed to be internal if href.startswith(\"http\"): return", "supposed to be internal if href.startswith(\"http\"): return False # Remove unrelated crap if", "used to get links for all of the artist pages \"\"\" def __init__(self,", "if href.count(\"/\") == 1 and href[-1] == \"/\": return True return False def", ":rtype: set[NamedLink] \"\"\" anchors = self.soup.find_all(self.artist_a_filter) links = set() for anchor in anchors:", "import BeautifulSoup from .helpers import NamedLink, clean_text class MainPage(object): \"\"\" Parses and gets", "links = set() for anchor in anchors: url = urlparse.urljoin(self.main_url, anchor.get(\"href\").lower()) links.add(NamedLink(url, clean_text(anchor)))", "one branch below the main page and their links # usually end with", "page \"\"\" from urllib import request from urllib import parse as urlparse from", "urllib import request from urllib import parse as urlparse from bs4 import BeautifulSoup", "Class for parsing the main Ben Yehuda site page \"\"\" from urllib import", "the main Ben Yehuda site page \"\"\" from urllib import request from urllib", "False # Artist pages are one branch below the main page and their", "urllib import parse as urlparse from bs4 import BeautifulSoup from .helpers import NamedLink,", "False # Remove unrelated crap if href.startswith(\"javascript\"): return False # Artist pages are", "unrelated crap if href.startswith(\"javascript\"): return False # Artist pages are one branch below", "# Artist links are supposed to be internal if href.startswith(\"http\"): return False #", "clean_text class MainPage(object): \"\"\" Parses and gets information from the main index page.", "names :rtype: set[NamedLink] \"\"\" anchors = self.soup.find_all(self.artist_a_filter) links = set() for anchor in", "bs4 import BeautifulSoup from .helpers import NamedLink, clean_text class MainPage(object): \"\"\" Parses and", "in the index page that points to an 
artist's page \"\"\" if tag.name", "\"a\": return False href = tag.get(\"href\").lower() # Artist links are supposed to be", "import parse as urlparse from bs4 import BeautifulSoup from .helpers import NamedLink, clean_text" ]
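The href rules in artist_a_filter can be exercised on their own, independent of BeautifulSoup. A minimal sketch (the looks_like_artist_href helper and the sample hrefs are hypothetical, introduced here for illustration) that combines the same checks with the urljoin call used by get_artist_links:

```python
from urllib import parse as urlparse


def looks_like_artist_href(href):
    # Mirrors the href checks in MainPage.artist_a_filter: internal links
    # only, no javascript: links, exactly one "/" and it must come last.
    href = href.lower()
    if href.startswith("http"):
        return False
    if href.startswith("javascript"):
        return False
    return href.count("/") == 1 and href[-1] == "/"


main_url = "http://benyehuda.org"
hrefs = ["bialik/", "http://other.site/", "javascript:void(0)", "a/b/", "about.html"]
artist_urls = [urlparse.urljoin(main_url, h) for h in hrefs if looks_like_artist_href(h)]
print(artist_urls)  # only "bialik/" survives the filter
```

Of the five sample hrefs only "bialik/" passes: the absolute URL, the javascript link, the two-level path, and the plain page are all rejected, matching the comment in the original that artist pages sit one branch below the main page.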
import operator

import cv2
import numpy as np

from UMedia import *
from Filtering import *
from RegionProps import *
from UMath import *
from UGraphics import *


class Eye:
    _result = None
    _right_template = None
    _left_template = None

    def __init__(self, right_corner_path, left_corner_path):
        self._right_template = Filtering.apply_box_filter(
            Filtering.get_gray_scale_image(UMedia.get_image(right_corner_path)), 5)
        self._left_template = Filtering.apply_box_filter(
            Filtering.get_gray_scale_image(UMedia.get_image(left_corner_path)), 5)

    def process(self, img):
        self._result = img
        # Preprocessing only partially recovered from the fragments; a 5-wide
        # box filter is applied before detection, matching __init__ above.
        img = Filtering.apply_box_filter(Filtering.get_gray_scale_image(img), 5)
        pupil_position, pupil_radius = self.get_pupil(img, 40)
        iris_radius = self.get_iris(img, pupil_position, pupil_radius)
        glints_position = self.get_glints(img, 180, pupil_position, iris_radius)
        corners_position = self.get_eye_corners(img)
        UMedia.show(self._result)

    def get_pupil(self, img, threshold):
        img = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY_INV)[1]
        height, width = img.shape
        side = (width * height) / 8
        st = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        img = cv2.morphologyEx(img, cv2.MORPH_ERODE, st, iterations=1)
        img = cv2.morphologyEx(img, cv2.MORPH_DILATE, st, iterations=1)
        c, contours, hierarchy = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        props = RegionProps()
        radius = 0.0
        for cnt in contours:
            properties = props.CalcContourProperties(cnt, ['Area', 'Length', 'Centroid', 'Extend', 'ConvexHull'])
            perimeter = cv2.arcLength(cnt, True)
            radius = np.sqrt(properties['Area'] / np.pi)
            radius = 1.0 if radius == 0.0 else radius
            circularity = perimeter / (radius * 2 * np.pi)
            if ((circularity >= 0.0) and (circularity <= 1.5)) and ((properties['Area'] > 900) and (properties['Area'] < side)):
                for i, centroid in enumerate(properties['Centroid']):
                    if i == 0:
                        center = int(centroid), int(properties['Centroid'][i + 1])
                        if UMath.is_in_area(center[0], center[1], width, height):
                            if len(cnt) >= 5:
                                ellipse = cv2.fitEllipse(cnt)
                                cv2.ellipse(self._result, ellipse, (0, 0, 255), 1)
                            cv2.circle(self._result, center, int(radius), (0, 0, 255), 1)
                            return center, radius
        return (int(width / 2), int(height / 2)), radius

    def get_glints(self, img, threshold, pupil_position, iris_radius):
        img = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)[1]
        height, width = img.shape
        max_dist = iris_radius if iris_radius > 0 else (width + height) / 16
        c, contours, hierarchy = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        props = RegionProps()
        coordinates = []
        for cnt in contours:
            properties = props.CalcContourProperties(cnt, ['Area', 'Length', 'Centroid', 'Extend', 'ConvexHull'])
            if properties['Extend'] > 0 and properties['Area'] < 100:
                for i, centroid in enumerate(properties['Centroid']):
                    if i == 0:
                        center = int(centroid), int(properties['Centroid'][i + 1])
                        distance = np.sqrt(np.power(pupil_position[0] - center[0], 2) + np.power(pupil_position[1] - center[1], 2))
                        if distance < max_dist:
                            coordinates.append(center)
                            cv2.circle(self._result, center, 2, (0, 255, 0), 3)
        return coordinates

    def get_eye_corners(self, img):
        right = self._match(img, self._right_template)
        left = self._match(img, self._left_template)
        return [right, left]

    def _match(self, img, template):
        matching = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
        height, width = template.shape
        min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(matching)
        cv2.rectangle(self._result, (max_loc[0] - width/2, max_loc[1] - height/2), (max_loc[0] + width/2, max_loc[1] + height/2), 2)
        return max_loc[0], max_loc[1]

    def get_iris(self, img, pupil_position, pupil_radius, angle_tolerance=3, min_magnitude=15, max_magnitude=20):
        if pupil_radius <= 1.0:
            return 0.0
        orientation, magnitude = self._get_gradient(img)
        max_iris_radius = pupil_radius * 5
        pupil_samples = UMath.get_circle_samples(pupil_position, pupil_radius)
        iris_samples = UMath.get_circle_samples(pupil_position, max_iris_radius)
        iris_radius_vote = dict()
        for sample in range(len(pupil_samples)):
            pupil_sample = (int(pupil_samples[sample][0]), int(pupil_samples[sample][1]))
            iris_sample = (int(iris_samples[sample][0]), int(iris_samples[sample][1]))
            normal = UMath.get_line_coordinates(pupil_sample, iris_sample)
            normal_angle = cv2.fastAtan2(pupil_sample[1] - pupil_position[1], pupil_sample[0] - pupil_position[0])
            for point in normal:
                i = point[1] - 1
                j = point[0] - 1
                if (i >= 0 and j >= 0) and (len(magnitude) > i and len(magnitude[i]) > j):
                    mag = magnitude[i][j]
                    # The magnitude test is only partially recovered; the unused
                    # min_magnitude/max_magnitude parameters suggest a band check.
                    if min_magnitude < mag < max_magnitude:
                        angle = normal_angle + orientation[i][j] - 90
                        angle = angle - 360 if angle > 360 else angle
                        if angle < angle_tolerance:
                            radius = np.sqrt(np.power(point[0] - pupil_position[0], 2) + np.power(point[1] - pupil_position[1], 2))
                            radius = int(radius)
                            if radius not in iris_radius_vote:
                                iris_radius_vote[radius] = 0
                            iris_radius_vote[radius] += 1
            cv2.line(self._result, pupil_sample, iris_sample, UGraphics.hex_color_to_bgr(0xf2f378), 1)
        iris_radius = max(iris_radius_vote.iteritems(), key=operator.itemgetter(1))[0] if len(iris_radius_vote) > 0 else 0
        cv2.circle(self._result, pupil_position, iris_radius, (255, 0, 0), 1)
        return iris_radius

    def _get_gradient(self, img, granularity=10):
        height, width = img.shape
        sobel_horizontal = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        sobel_vertical = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        orientation = np.empty(img.shape)
        magnitude = np.empty(img.shape)
        for y in range(height):
            for x in range(width):
                orientation[y][x] = cv2.fastAtan2(sobel_horizontal[y][x], sobel_vertical[y][x])
                magnitude[y][x] = np.sqrt(np.power(sobel_horizontal[y][x], 2) + np.power(sobel_vertical[y][x], 2))
                if (x % granularity == 0) and (y % granularity == 0):
                    UGraphics.draw_vector(self._result, x, y, magnitude[y][x] / granularity, orientation[y][x])
        return orientation, magnitude
= img img = Filtering.apply_box_filter(Filtering.get_gray_scale_image(img), 5) pupil_position,", "cv2.arcLength(cnt, True) radius = np.sqrt(properties['Area'] / np.pi) radius = 1.0 if radius ==", "and (len(magnitude) > i and len(magnitude[i]) > j): mag = magnitude[i][j] if min_magnitude", "pupil_position[1], pupil_sample[0] - pupil_position[0]) for point in normal: i = point[1] - 1", "cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED) height, width = template.shape min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(matching)", "threshold, 255, cv2.THRESH_BINARY)[1] height, width = img.shape max_dist = iris_radius if iris_radius >", "np.sqrt(properties['Area'] / np.pi) radius = 1.0 if radius == 0.0 else radius circularity", "in iris_radius_vote: iris_radius_vote[radius] = 0 iris_radius_vote[radius] += 1 cv2.line(self._result, pupil_sample, iris_sample, UGraphics.hex_color_to_bgr(0xf2f378), 1)", "= cv2.Sobel(img, cv2.CV_32F, 1, 0) sobel_vertical = cv2.Sobel(img, cv2.CV_32F, 0, 1) orientation =", "(max_loc[0] + width/2, max_loc[1] + height/2), 2) return max_loc[0], max_loc[1] def get_iris(self, img,", "pupil_radius <= 1.0: return 0.0 orientation, magnitude = self._get_gradient(img) max_iris_radius = pupil_radius *", "= None _left_template = None def __init__(self, right_corner_path, left_corner_path): self._right_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(right_corner_path)), 5)", "st = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)) img = cv2.morphologyEx(img, cv2.MORPH_ERODE, st, iterations=1) img =", "j >= 0) and (len(magnitude) > i and len(magnitude[i]) > j): mag =", "ellipse = cv2.fitEllipse(cnt) cv2.ellipse(self._result, ellipse, (0, 0, 255), 1) cv2.circle(self._result, center, int(radius), (0,", "- height/2), (max_loc[0] + width/2, max_loc[1] + height/2), 2) return max_loc[0], max_loc[1] def", "['Area', 'Length', 'Centroid', 'Extend', 'ConvexHull']) if properties['Extend'] > 0 and properties['Area'] < 100:", 
"magnitude[i][j] if min_magnitude < mag < max_magnitude: angle = normal_angle + orientation[i][j] -", "np.power(pupil_position[1] - center[1], 2)) if distance < max_dist: coordinates.append(center) cv2.circle(self._result, center, 2, (0,", "if i == 0: center = int(centroid), int(properties['Centroid'][i+1]) distance = np.sqrt(np.power(pupil_position[0] - center[0],", "max_magnitude=20): if pupil_radius <= 1.0: return 0.0 orientation, magnitude = self._get_gradient(img) max_iris_radius =", "in range(len(pupil_samples)): pupil_sample = (int(pupil_samples[sample][0]), int(pupil_samples[sample][1])) iris_sample = (int(iris_samples[sample][0]), int(iris_samples[sample][1])) normal = UMath.get_line_coordinates(pupil_sample,", "props.CalcContourProperties(cnt, ['Area', 'Length', 'Centroid', 'Extend', 'ConvexHull']) if properties['Extend'] > 0 and properties['Area'] <", "j = point[0] - 1 if (i >= 0 and j >= 0)", "0, 255), 1) cv2.circle(self._result, center, int(radius), (0, 0, 255), 1) return center, radius", "= self._match(img, self._left_template) return [right, left] def _match(self, img, template): matching = cv2.matchTemplate(img,", "if min_magnitude < mag < max_magnitude: angle = normal_angle + orientation[i][j] - 90", "= cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)) img = cv2.morphologyEx(img, cv2.MORPH_ERODE, st, iterations=1) img = cv2.morphologyEx(img,", "import * from UMath import * from UGraphics import * import operator class", "- 90 angle = angle - 360 if angle > 360 else angle", "= None def __init__(self, right_corner_path, left_corner_path): self._right_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(right_corner_path)), 5) self._left_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(left_corner_path)),", "pupil_radius) iris_samples = UMath.get_circle_samples(pupil_position, max_iris_radius) iris_radius_vote = dict() for sample in range(len(pupil_samples)): pupil_sample", 
"UGraphics import * import operator class Eye: _result = None _right_template = None", "RegionProps() coordinates = [] for cnt in contours: properties = props.CalcContourProperties(cnt, ['Area', 'Length',", "img, template): matching = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED) height, width = template.shape min_val, max_val,", "template.shape min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(matching) cv2.rectangle(self._result, (max_loc[0] - width/2, max_loc[1] -", "angle if angle < angle_tolerance: radius = np.sqrt(np.power(point[0] - pupil_position[0], 2) + np.power(point[1]", "self.get_iris(img, pupil_position, pupil_radius) glints_position = self.get_glints(img, 180, pupil_position, iris_radius) corners_position = self.get_eye_corners(img) UMedia.show(self._result)", "perimeter = cv2.arcLength(cnt, True) radius = np.sqrt(properties['Area'] / np.pi) radius = 1.0 if", "100: for i, centroid in enumerate(properties['Centroid']): if i == 0: center = int(centroid),", "pupil_position, pupil_radius) glints_position = self.get_glints(img, 180, pupil_position, iris_radius) corners_position = self.get_eye_corners(img) UMedia.show(self._result) def", "= (width * height) / 8 st = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)) img =", "in contours: properties = props.CalcContourProperties(cnt, ['Area', 'Length', 'Centroid', 'Extend', 'ConvexHull']) perimeter = cv2.arcLength(cnt,", "= cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)[1] height, width = img.shape max_dist = iris_radius if", "def __init__(self, right_corner_path, left_corner_path): self._right_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(right_corner_path)), 5) self._left_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(left_corner_path)), 5) def", "side)): for i, centroid in enumerate(properties['Centroid']): if i == 0: center = int(centroid),", "magnitude = self._get_gradient(img) max_iris_radius = 
pupil_radius * 5 pupil_samples = UMath.get_circle_samples(pupil_position, pupil_radius) iris_samples", "5) def process(self, img): self._result = img img = Filtering.apply_box_filter(Filtering.get_gray_scale_image(img), 5) pupil_position, pupil_radius", "cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) props = RegionProps() coordinates = [] for cnt in contours:", "> i and len(magnitude[i]) > j): mag = magnitude[i][j] if min_magnitude < mag", "0, 1) orientation = np.empty(img.shape) magnitude = np.empty(img.shape) for y in range(height): for", "int(centroid), int(properties['Centroid'][i+1]) distance = np.sqrt(np.power(pupil_position[0] - center[0], 2) + np.power(pupil_position[1] - center[1], 2))", "max_dist = iris_radius if iris_radius > 0 else (width + height) / 16", "radius = np.sqrt(properties['Area'] / np.pi) radius = 1.0 if radius == 0.0 else", "3) return coordinates def get_eye_corners(self, img): right = self._match(img, self._right_template) left = self._match(img,", "= props.CalcContourProperties(cnt, ['Area', 'Length', 'Centroid', 'Extend', 'ConvexHull']) perimeter = cv2.arcLength(cnt, True) radius =", "iris_radius) corners_position = self.get_eye_corners(img) UMedia.show(self._result) def get_pupil(self, img, threshold): img = cv2.threshold(img, threshold,", "= cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) props = RegionProps() coordinates = [] for cnt in", "(5, 5)) img = cv2.morphologyEx(img, cv2.MORPH_ERODE, st, iterations=1) img = cv2.morphologyEx(img, cv2.MORPH_DILATE, st,", "max_dist: coordinates.append(center) cv2.circle(self._result, center, 2, (0, 255, 0), 3) return coordinates def get_eye_corners(self,", "template): matching = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED) height, width = template.shape min_val, max_val, min_loc,", "0.0) and (circularity <= 1.5)) and ((properties['Area'] > 900) and (properties['Area'] < side)):", "= Filtering.apply_box_filter(Filtering.get_gray_scale_image(img), 5) 
pupil_position, pupil_radius = self.get_pupil(img, 40) iris_radius = self.get_iris(img, pupil_position, pupil_radius)", "0 else (width + height) / 16 c, contours, hierarchy = cv2.findContours(img, cv2.RETR_LIST,", "/ 16 c, contours, hierarchy = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) props = RegionProps() coordinates", "0), 3) return coordinates def get_eye_corners(self, img): right = self._match(img, self._right_template) left =", "for x in range(width): orientation[y][x] = cv2.fastAtan2(sobel_horizontal[y][x], sobel_vertical[y][x]) magnitude[y][x] = np.sqrt(np.power(sobel_horizontal[y][x], 2) +", "2) return max_loc[0], max_loc[1] def get_iris(self, img, pupil_position, pupil_radius, angle_tolerance=3, min_magnitude=15, max_magnitude=20): if", "2)) radius = int(radius) if radius not in iris_radius_vote: iris_radius_vote[radius] = 0 iris_radius_vote[radius]", "pupil_radius) glints_position = self.get_glints(img, 180, pupil_position, iris_radius) corners_position = self.get_eye_corners(img) UMedia.show(self._result) def get_pupil(self,", "= 'tbeltramelli' from scipy.cluster.vq import * from UMedia import * from Filtering import", "cv2.MORPH_ERODE, st, iterations=1) img = cv2.morphologyEx(img, cv2.MORPH_DILATE, st, iterations=1) c, contours, hierarchy =", "center[1], width, height): if len(cnt) >= 5: ellipse = cv2.fitEllipse(cnt) cv2.ellipse(self._result, ellipse, (0,", "return max_loc[0], max_loc[1] def get_iris(self, img, pupil_position, pupil_radius, angle_tolerance=3, min_magnitude=15, max_magnitude=20): if pupil_radius", "img img = Filtering.apply_box_filter(Filtering.get_gray_scale_image(img), 5) pupil_position, pupil_radius = self.get_pupil(img, 40) iris_radius = self.get_iris(img,", "= img.shape max_dist = iris_radius if iris_radius > 0 else (width + height)", "UMath import * from UGraphics import * import operator class Eye: _result =", "= np.sqrt(np.power(pupil_position[0] - center[0], 2) + np.power(pupil_position[1] - center[1], 2)) 
if distance <", "from RegionProps import * from UMath import * from UGraphics import * import", "else radius circularity = perimeter / (radius * 2 * np.pi) if ((circularity", "= img.shape side = (width * height) / 8 st = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,", "0.0 orientation, magnitude = self._get_gradient(img) max_iris_radius = pupil_radius * 5 pupil_samples = UMath.get_circle_samples(pupil_position,", "cnt in contours: properties = props.CalcContourProperties(cnt, ['Area', 'Length', 'Centroid', 'Extend', 'ConvexHull']) perimeter =", "(circularity <= 1.5)) and ((properties['Area'] > 900) and (properties['Area'] < side)): for i,", "'ConvexHull']) if properties['Extend'] > 0 and properties['Area'] < 100: for i, centroid in", "1, 0) sobel_vertical = cv2.Sobel(img, cv2.CV_32F, 0, 1) orientation = np.empty(img.shape) magnitude =", "cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)[1] height, width = img.shape max_dist = iris_radius if iris_radius", "cv2.fitEllipse(cnt) cv2.ellipse(self._result, ellipse, (0, 0, 255), 1) cv2.circle(self._result, center, int(radius), (0, 0, 255),", "in range(height): for x in range(width): orientation[y][x] = cv2.fastAtan2(sobel_horizontal[y][x], sobel_vertical[y][x]) magnitude[y][x] = np.sqrt(np.power(sobel_horizontal[y][x],", "1) return center, radius return (int(width / 2), int(height / 2)), radius def", "(properties['Area'] < side)): for i, centroid in enumerate(properties['Centroid']): if i == 0: center", "max_val, min_loc, max_loc = cv2.minMaxLoc(matching) cv2.rectangle(self._result, (max_loc[0] - width/2, max_loc[1] - height/2), (max_loc[0]", "< mag < max_magnitude: angle = normal_angle + orientation[i][j] - 90 angle =", "if pupil_radius <= 1.0: return 0.0 orientation, magnitude = self._get_gradient(img) max_iris_radius = pupil_radius", "contours: properties = props.CalcContourProperties(cnt, ['Area', 'Length', 'Centroid', 'Extend', 'ConvexHull']) perimeter = cv2.arcLength(cnt, True)", "normal_angle = 
cv2.fastAtan2(pupil_sample[1] - pupil_position[1], pupil_sample[0] - pupil_position[0]) for point in normal: i", "- width/2, max_loc[1] - height/2), (max_loc[0] + width/2, max_loc[1] + height/2), 2) return", "perimeter / (radius * 2 * np.pi) if ((circularity >= 0.0) and (circularity", "RegionProps() radius = 0.0 for cnt in contours: properties = props.CalcContourProperties(cnt, ['Area', 'Length',", "props = RegionProps() coordinates = [] for cnt in contours: properties = props.CalcContourProperties(cnt,", "pupil_samples = UMath.get_circle_samples(pupil_position, pupil_radius) iris_samples = UMath.get_circle_samples(pupil_position, max_iris_radius) iris_radius_vote = dict() for sample", "((circularity >= 0.0) and (circularity <= 1.5)) and ((properties['Area'] > 900) and (properties['Area']", "pupil_radius = self.get_pupil(img, 40) iris_radius = self.get_iris(img, pupil_position, pupil_radius) glints_position = self.get_glints(img, 180,", "height) / 8 st = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)) img = cv2.morphologyEx(img, cv2.MORPH_ERODE, st,", "_result = None _right_template = None _left_template = None def __init__(self, right_corner_path, left_corner_path):", "def _match(self, img, template): matching = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED) height, width = template.shape", "5 pupil_samples = UMath.get_circle_samples(pupil_position, pupil_radius) iris_samples = UMath.get_circle_samples(pupil_position, max_iris_radius) iris_radius_vote = dict() for", "pupil_position[0]) for point in normal: i = point[1] - 1 j = point[0]", "= cv2.arcLength(cnt, True) radius = np.sqrt(properties['Area'] / np.pi) radius = 1.0 if radius", "angle_tolerance: radius = np.sqrt(np.power(point[0] - pupil_position[0], 2) + np.power(point[1] - pupil_position[1], 2)) radius", "'Length', 'Centroid', 'Extend', 'ConvexHull']) perimeter = cv2.arcLength(cnt, True) radius = np.sqrt(properties['Area'] / np.pi)", "pupil_position, iris_radius): img = cv2.threshold(img, 
threshold, 255, cv2.THRESH_BINARY)[1] height, width = img.shape max_dist", "0.0 for cnt in contours: properties = props.CalcContourProperties(cnt, ['Area', 'Length', 'Centroid', 'Extend', 'ConvexHull'])", "if UMath.is_in_area(center[0], center[1], width, height): if len(cnt) >= 5: ellipse = cv2.fitEllipse(cnt) cv2.ellipse(self._result,", "i == 0: center = int(centroid), int(properties['Centroid'][i+1]) if UMath.is_in_area(center[0], center[1], width, height): if", "== 0.0 else radius circularity = perimeter / (radius * 2 * np.pi)", "iris_radius if iris_radius > 0 else (width + height) / 16 c, contours,", "0 else 0 cv2.circle(self._result, pupil_position, iris_radius, (255, 0, 0), 1) return iris_radius def", "== 0: center = int(centroid), int(properties['Centroid'][i+1]) if UMath.is_in_area(center[0], center[1], width, height): if len(cnt)", "(0, 0, 255), 1) cv2.circle(self._result, center, int(radius), (0, 0, 255), 1) return center,", "point in normal: i = point[1] - 1 j = point[0] - 1", "get_glints(self, img, threshold, pupil_position, iris_radius): img = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)[1] height, width", "if radius == 0.0 else radius circularity = perimeter / (radius * 2", "circularity = perimeter / (radius * 2 * np.pi) if ((circularity >= 0.0)", "cv2.THRESH_BINARY_INV)[1] height, width = img.shape side = (width * height) / 8 st", "= np.sqrt(np.power(sobel_horizontal[y][x], 2) + np.power(sobel_vertical[y][x], 2)) if (x % granularity == 0) and", "2)), radius def get_glints(self, img, threshold, pupil_position, iris_radius): img = cv2.threshold(img, threshold, 255,", "- pupil_position[0]) for point in normal: i = point[1] - 1 j =", "img): self._result = img img = Filtering.apply_box_filter(Filtering.get_gray_scale_image(img), 5) pupil_position, pupil_radius = self.get_pupil(img, 40)", "UMedia.show(self._result) def get_pupil(self, img, threshold): img = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY_INV)[1] height, width", "== 0: center 
= int(centroid), int(properties['Centroid'][i+1]) distance = np.sqrt(np.power(pupil_position[0] - center[0], 2) +", "+ orientation[i][j] - 90 angle = angle - 360 if angle > 360", "class Eye: _result = None _right_template = None _left_template = None def __init__(self,", "max_loc[1] - height/2), (max_loc[0] + width/2, max_loc[1] + height/2), 2) return max_loc[0], max_loc[1]", "from UMedia import * from Filtering import * from RegionProps import * from", "def get_glints(self, img, threshold, pupil_position, iris_radius): img = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)[1] height,", "self.get_eye_corners(img) UMedia.show(self._result) def get_pupil(self, img, threshold): img = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY_INV)[1] height,", "len(magnitude[i]) > j): mag = magnitude[i][j] if min_magnitude < mag < max_magnitude: angle", "cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) props = RegionProps() radius = 0.0 for cnt in contours:", "coordinates = [] for cnt in contours: properties = props.CalcContourProperties(cnt, ['Area', 'Length', 'Centroid',", "point[0] - 1 if (i >= 0 and j >= 0) and (len(magnitude)", "in normal: i = point[1] - 1 j = point[0] - 1 if", "* from UGraphics import * import operator class Eye: _result = None _right_template", "get_eye_corners(self, img): right = self._match(img, self._right_template) left = self._match(img, self._left_template) return [right, left]", "from Filtering import * from RegionProps import * from UMath import * from", "def get_eye_corners(self, img): right = self._match(img, self._right_template) left = self._match(img, self._left_template) return [right,", "magnitude = np.empty(img.shape) for y in range(height): for x in range(width): orientation[y][x] =", "0 cv2.circle(self._result, pupil_position, iris_radius, (255, 0, 0), 1) return iris_radius def _get_gradient(self, img,", "- 1 j = point[0] - 1 if (i >= 0 and j", "UMath.get_circle_samples(pupil_position, pupil_radius) iris_samples = 
UMath.get_circle_samples(pupil_position, max_iris_radius) iris_radius_vote = dict() for sample in range(len(pupil_samples)):", "255), 1) cv2.circle(self._result, center, int(radius), (0, 0, 255), 1) return center, radius return", "np.pi) radius = 1.0 if radius == 0.0 else radius circularity = perimeter", "cv2.circle(self._result, center, 2, (0, 255, 0), 3) return coordinates def get_eye_corners(self, img): right", "normal: i = point[1] - 1 j = point[0] - 1 if (i", "from scipy.cluster.vq import * from UMedia import * from Filtering import * from", "properties = props.CalcContourProperties(cnt, ['Area', 'Length', 'Centroid', 'Extend', 'ConvexHull']) perimeter = cv2.arcLength(cnt, True) radius", "'Centroid', 'Extend', 'ConvexHull']) if properties['Extend'] > 0 and properties['Area'] < 100: for i,", "cv2.rectangle(self._result, (max_loc[0] - width/2, max_loc[1] - height/2), (max_loc[0] + width/2, max_loc[1] + height/2),", "<gh_stars>10-100 __author__ = 'tbeltramelli' from scipy.cluster.vq import * from UMedia import * from", "width/2, max_loc[1] + height/2), 2) return max_loc[0], max_loc[1] def get_iris(self, img, pupil_position, pupil_radius,", "'Length', 'Centroid', 'Extend', 'ConvexHull']) if properties['Extend'] > 0 and properties['Area'] < 100: for", "process(self, img): self._result = img img = Filtering.apply_box_filter(Filtering.get_gray_scale_image(img), 5) pupil_position, pupil_radius = self.get_pupil(img,", "= self._match(img, self._right_template) left = self._match(img, self._left_template) return [right, left] def _match(self, img,", "= self.get_glints(img, 180, pupil_position, iris_radius) corners_position = self.get_eye_corners(img) UMedia.show(self._result) def get_pupil(self, img, threshold):", "iris_radius def _get_gradient(self, img, granularity=10): height, width = img.shape sobel_horizontal = cv2.Sobel(img, cv2.CV_32F,", "for y in range(height): for x in range(width): orientation[y][x] = cv2.fastAtan2(sobel_horizontal[y][x], sobel_vertical[y][x]) 
magnitude[y][x]", "min_loc, max_loc = cv2.minMaxLoc(matching) cv2.rectangle(self._result, (max_loc[0] - width/2, max_loc[1] - height/2), (max_loc[0] +", "width, height): if len(cnt) >= 5: ellipse = cv2.fitEllipse(cnt) cv2.ellipse(self._result, ellipse, (0, 0,", "right_corner_path, left_corner_path): self._right_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(right_corner_path)), 5) self._left_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(left_corner_path)), 5) def process(self, img):", "= point[0] - 1 if (i >= 0 and j >= 0) and", "= cv2.morphologyEx(img, cv2.MORPH_ERODE, st, iterations=1) img = cv2.morphologyEx(img, cv2.MORPH_DILATE, st, iterations=1) c, contours,", "_right_template = None _left_template = None def __init__(self, right_corner_path, left_corner_path): self._right_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(right_corner_path)),", "1) return iris_radius def _get_gradient(self, img, granularity=10): height, width = img.shape sobel_horizontal =", "sobel_vertical = cv2.Sobel(img, cv2.CV_32F, 0, 1) orientation = np.empty(img.shape) magnitude = np.empty(img.shape) for", "img): right = self._match(img, self._right_template) left = self._match(img, self._left_template) return [right, left] def", "iris_radius_vote = dict() for sample in range(len(pupil_samples)): pupil_sample = (int(pupil_samples[sample][0]), int(pupil_samples[sample][1])) iris_sample =", "2)) if distance < max_dist: coordinates.append(center) cv2.circle(self._result, center, 2, (0, 255, 0), 3)", "import * from Filtering import * from RegionProps import * from UMath import", "max_loc[1] + height/2), 2) return max_loc[0], max_loc[1] def get_iris(self, img, pupil_position, pupil_radius, angle_tolerance=3,", "i, centroid in enumerate(properties['Centroid']): if i == 0: center = int(centroid), int(properties['Centroid'][i+1]) distance", "iris_radius = self.get_iris(img, 
pupil_position, pupil_radius) glints_position = self.get_glints(img, 180, pupil_position, iris_radius) corners_position =", "i == 0: center = int(centroid), int(properties['Centroid'][i+1]) distance = np.sqrt(np.power(pupil_position[0] - center[0], 2)", "* 2 * np.pi) if ((circularity >= 0.0) and (circularity <= 1.5)) and", "(i >= 0 and j >= 0) and (len(magnitude) > i and len(magnitude[i])", "= int(radius) if radius not in iris_radius_vote: iris_radius_vote[radius] = 0 iris_radius_vote[radius] += 1", "else (width + height) / 16 c, contours, hierarchy = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)", "img = cv2.morphologyEx(img, cv2.MORPH_DILATE, st, iterations=1) c, contours, hierarchy = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)", "255, 0), 3) return coordinates def get_eye_corners(self, img): right = self._match(img, self._right_template) left", ">= 0.0) and (circularity <= 1.5)) and ((properties['Area'] > 900) and (properties['Area'] <", "255), 1) return center, radius return (int(width / 2), int(height / 2)), radius", "1) iris_radius = max(iris_radius_vote.iteritems(), key=operator.itemgetter(1))[0] if len(iris_radius_vote) > 0 else 0 cv2.circle(self._result, pupil_position,", "% granularity == 0) and (y % granularity == 0): UGraphics.draw_vector(self._result, x, y,", "= np.empty(img.shape) magnitude = np.empty(img.shape) for y in range(height): for x in range(width):", "= self.get_pupil(img, 40) iris_radius = self.get_iris(img, pupil_position, pupil_radius) glints_position = self.get_glints(img, 180, pupil_position,", "'tbeltramelli' from scipy.cluster.vq import * from UMedia import * from Filtering import *", "get_pupil(self, img, threshold): img = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY_INV)[1] height, width = img.shape", "(len(magnitude) > i and len(magnitude[i]) > j): mag = magnitude[i][j] if min_magnitude <", "= Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(right_corner_path)), 5) 
self._left_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(left_corner_path)), 5) def process(self, img): self._result = img", "left_corner_path): self._right_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(right_corner_path)), 5) self._left_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(left_corner_path)), 5) def process(self, img): self._result", "img = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)[1] height, width = img.shape max_dist = iris_radius", "self._right_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(right_corner_path)), 5) self._left_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(left_corner_path)), 5) def process(self, img): self._result =", "= cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY_INV)[1] height, width = img.shape side = (width *", "np.empty(img.shape) magnitude = np.empty(img.shape) for y in range(height): for x in range(width): orientation[y][x]", "if distance < max_dist: coordinates.append(center) cv2.circle(self._result, center, 2, (0, 255, 0), 3) return", "height, width = template.shape min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(matching) cv2.rectangle(self._result, (max_loc[0] -", "np.sqrt(np.power(pupil_position[0] - center[0], 2) + np.power(pupil_position[1] - center[1], 2)) if distance < max_dist:", "height, width = img.shape max_dist = iris_radius if iris_radius > 0 else (width", "in enumerate(properties['Centroid']): if i == 0: center = int(centroid), int(properties['Centroid'][i+1]) distance = np.sqrt(np.power(pupil_position[0]", "/ (radius * 2 * np.pi) if ((circularity >= 0.0) and (circularity <=", "= magnitude[i][j] if min_magnitude < mag < max_magnitude: angle = normal_angle + orientation[i][j]", "* 5 pupil_samples = UMath.get_circle_samples(pupil_position, pupil_radius) iris_samples = 
UMath.get_circle_samples(pupil_position, max_iris_radius) iris_radius_vote = dict()", "16 c, contours, hierarchy = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) props = RegionProps() coordinates =", "return 0.0 orientation, magnitude = self._get_gradient(img) max_iris_radius = pupil_radius * 5 pupil_samples =", "cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) props = RegionProps() coordinates = [] for cnt in contours: properties", "granularity=10): height, width = img.shape sobel_horizontal = cv2.Sobel(img, cv2.CV_32F, 1, 0) sobel_vertical =", "= RegionProps() coordinates = [] for cnt in contours: properties = props.CalcContourProperties(cnt, ['Area',", "iris_radius_vote[radius] += 1 cv2.line(self._result, pupil_sample, iris_sample, UGraphics.hex_color_to_bgr(0xf2f378), 1) iris_radius = max(iris_radius_vote.iteritems(), key=operator.itemgetter(1))[0] if", "= np.empty(img.shape) for y in range(height): for x in range(width): orientation[y][x] = cv2.fastAtan2(sobel_horizontal[y][x],", "(width * height) / 8 st = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)) img = cv2.morphologyEx(img,", "- pupil_position[1], pupil_sample[0] - pupil_position[0]) for point in normal: i = point[1] -", "cv2.CHAIN_APPROX_SIMPLE) props = RegionProps() radius = 0.0 for cnt in contours: properties =", "= cv2.Sobel(img, cv2.CV_32F, 0, 1) orientation = np.empty(img.shape) magnitude = np.empty(img.shape) for y", "height) / 16 c, contours, hierarchy = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) props = RegionProps()", "min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(matching) cv2.rectangle(self._result, (max_loc[0] - width/2, max_loc[1] - height/2),", "(width + height) / 16 c, contours, hierarchy = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) props", "if properties['Extend'] > 0 and properties['Area'] < 100: for i, centroid in enumerate(properties['Centroid']):", "> 0 else 0 cv2.circle(self._result, pupil_position, iris_radius, (255, 0, 0), 1) 
__author__ = 'tbeltramelli'

from scipy.cluster.vq import *
from UMedia import *
from Filtering import *
from RegionProps import *
from UMath import *
from UGraphics import *
import operator
# cv2 and numpy are used throughout; imported explicitly here in case the
# wildcard helper imports above do not re-export them.
import cv2
import numpy as np


class Eye:
    _result = None
    _right_template = None
    _left_template = None

    def __init__(self, right_corner_path, left_corner_path):
        self._right_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(right_corner_path)), 5)
        self._left_template = Filtering.apply_box_filter(Filtering.get_gray_scale_image(UMedia.get_image(left_corner_path)), 5)

    def process(self, img):
        self._result = img
        img = Filtering.apply_box_filter(Filtering.get_gray_scale_image(img), 5)
        pupil_position, pupil_radius = self.get_pupil(img, 40)
        iris_radius = self.get_iris(img, pupil_position, pupil_radius)
        glints_position = self.get_glints(img, 180, pupil_position, iris_radius)
        corners_position = self.get_eye_corners(img)
        UMedia.show(self._result)

    def get_pupil(self, img, threshold):
        # Threshold dark regions, clean them up morphologically, then keep the
        # first sufficiently large, roughly circular contour near the image center.
        img = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY_INV)[1]
        height, width = img.shape
        side = (width * height) / 8
        st = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        img = cv2.morphologyEx(img, cv2.MORPH_ERODE, st, iterations=1)
        img = cv2.morphologyEx(img, cv2.MORPH_DILATE, st, iterations=1)
        c, contours, hierarchy = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        props = RegionProps()
        radius = 0.0
        for cnt in contours:
            properties = props.CalcContourProperties(cnt, ['Area', 'Length', 'Centroid', 'Extend', 'ConvexHull'])
            perimeter = cv2.arcLength(cnt, True)
            radius = np.sqrt(properties['Area'] / np.pi)
            radius = 1.0 if radius == 0.0 else radius
            circularity = perimeter / (radius * 2 * np.pi)
            # Upper circularity bound was not recoverable from the damaged source;
            # 1.5 is an assumed value.
            if ((circularity >= 0.0) and (circularity <= 1.5)) and ((properties['Area'] > 900) and (properties['Area'] < side)):
                for i, centroid in enumerate(properties['Centroid']):
                    if i == 0:
                        center = int(centroid), int(properties['Centroid'][i+1])
                        if UMath.is_in_area(center[0], center[1], width, height):
                            if len(cnt) >= 5:
                                ellipse = cv2.fitEllipse(cnt)
                                cv2.ellipse(self._result, ellipse, (0, 0, 255), 1)
                            cv2.circle(self._result, center, int(radius), (0, 0, 255), 1)
                            return center, radius
        return (int(width / 2), int(height / 2)), radius

    def get_glints(self, img, threshold, pupil_position, iris_radius):
        # Bright, small blobs within max_dist of the pupil are kept as glints.
        img = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)[1]
        height, width = img.shape
        max_dist = iris_radius if iris_radius > 0 else (width + height) / 16
        c, contours, hierarchy = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        props = RegionProps()
        coordinates = []
        for cnt in contours:
            properties = props.CalcContourProperties(cnt, ['Area', 'Length', 'Centroid', 'Extend', 'ConvexHull'])
            if properties['Extend'] > 0 and properties['Area'] < 100:
                for i, centroid in enumerate(properties['Centroid']):
                    if i == 0:
                        center = int(centroid), int(properties['Centroid'][i+1])
                        distance = np.sqrt(np.power(pupil_position[0] - center[0], 2) + np.power(pupil_position[1] - center[1], 2))
                        if distance < max_dist:
                            coordinates.append(center)
                            cv2.circle(self._result, center, 2, (0, 255, 0), 3)
        return coordinates

    def get_eye_corners(self, img):
        right = self._match(img, self._right_template)
        left = self._match(img, self._left_template)
        return [right, left]

    def _match(self, img, template):
        matching = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
        height, width = template.shape
        min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(matching)
        cv2.rectangle(self._result, (max_loc[0] - width/2, max_loc[1] - height/2), (max_loc[0] + width/2, max_loc[1] + height/2), 2)
        return max_loc[0], max_loc[1]

    def get_iris(self, img, pupil_position, pupil_radius, angle_tolerance=3, min_magnitude=15, max_magnitude=20):
        # Cast rays from the pupil contour outward; gradient points whose
        # orientation is roughly normal to the ray vote for an iris radius.
        if pupil_radius <= 1.0:
            return 0.0
        orientation, magnitude = self._get_gradient(img)
        max_iris_radius = pupil_radius * 5
        pupil_samples = UMath.get_circle_samples(pupil_position, pupil_radius)
        iris_samples = UMath.get_circle_samples(pupil_position, max_iris_radius)
        iris_radius_vote = dict()
        for sample in range(len(pupil_samples)):
            pupil_sample = (int(pupil_samples[sample][0]), int(pupil_samples[sample][1]))
            iris_sample = (int(iris_samples[sample][0]), int(iris_samples[sample][1]))
            normal = UMath.get_line_coordinates(pupil_sample, iris_sample)
            normal_angle = cv2.fastAtan2(pupil_sample[1] - pupil_position[1], pupil_sample[0] - pupil_position[0])
            for point in normal:
                i = point[1] - 1
                j = point[0] - 1
                if (i >= 0 and j >= 0) and (len(magnitude) > i and len(magnitude[i]) > j):
                    mag = magnitude[i][j]
                    if min_magnitude < mag < max_magnitude:
                        angle = normal_angle + orientation[i][j] - 90
                        angle = angle - 360 if angle > 360 else angle
                        if angle < angle_tolerance:
                            radius = np.sqrt(np.power(point[0] - pupil_position[0], 2) + np.power(point[1] - pupil_position[1], 2))
                            radius = int(radius)
                            if radius not in iris_radius_vote:
                                iris_radius_vote[radius] = 0
                            iris_radius_vote[radius] += 1
                            cv2.line(self._result, pupil_sample, iris_sample, UGraphics.hex_color_to_bgr(0xf2f378), 1)
        iris_radius = max(iris_radius_vote.iteritems(), key=operator.itemgetter(1))[0] if len(iris_radius_vote) > 0 else 0
        cv2.circle(self._result, pupil_position, iris_radius, (255, 0, 0), 1)
        return iris_radius

    def _get_gradient(self, img, granularity=10):
        height, width = img.shape
        sobel_horizontal = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        sobel_vertical = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        orientation = np.empty(img.shape)
        magnitude = np.empty(img.shape)
        for y in range(height):
            for x in range(width):
                orientation[y][x] = cv2.fastAtan2(sobel_horizontal[y][x], sobel_vertical[y][x])
                magnitude[y][x] = np.sqrt(np.power(sobel_horizontal[y][x], 2) + np.power(sobel_vertical[y][x], 2))
                if (x % granularity == 0) and (y % granularity == 0):
                    UGraphics.draw_vector(self._result, x, y, magnitude[y][x] / granularity, orientation[y][x])
        return orientation, magnitude
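The iris estimator in the Eye code above picks the integer radius that collects the most gradient-edge votes. A minimal standalone sketch of that vote-and-argmax step follows; `vote_radius` is a hypothetical helper name for illustration, and Python 3's `items()` stands in for the original's Python 2 `iteritems()`:

```python
import operator

def vote_radius(radii):
    # Mirrors the iris_radius_vote histogram in Eye.get_iris: bucket each
    # edge point's distance-to-pupil into an integer radius, then return
    # the modal bucket (0 when no edge point voted at all).
    votes = {}
    for r in radii:
        bucket = int(r)
        votes[bucket] = votes.get(bucket, 0) + 1
    return max(votes.items(), key=operator.itemgetter(1))[0] if votes else 0

print(vote_radius([48.2, 47.9, 48.6, 51.0, 48.1]))  # 48 (three samples land in that bucket)
```

Voting over integer buckets makes the estimate robust to a few stray edge points, which is why the original accumulates a dict rather than averaging the distances.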
"""URLs for the ``django-tinylinks`` app."""
from django.conf.urls import url
from django.views.generic import TemplateView

from .views import (
    StatisticsView,
    TinylinkCreateView,
    TinylinkDeleteView,
    TinylinkListView,
    TinylinkRedirectView,
    TinylinkUpdateView,
)

urlpatterns = [
    url(
        r'^$',
        TinylinkListView.as_view(),
        name='tinylink_list'
    ),
    url(
        r'^create/$',
        TinylinkCreateView.as_view(),
        name='tinylink_create'
    ),
    url(
        r'^update/(?P<pk>\d+)/(?P<mode>[a-z-]+)/$',
        TinylinkUpdateView.as_view(),
        name='tinylink_update',
    ),
    url(
        r'^delete/(?P<pk>\d+)/$',
        TinylinkDeleteView.as_view(),
        name='tinylink_delete',
    ),
    url(
        r'^404/$',
        TemplateView.as_view(template_name='tinylinks/notfound.html'),
        name='tinylink_notfound',
    ),
    url(
        r'^statistics/?$',
        StatisticsView.as_view(),
        name='tinylink_statistics',
    ),
    url(
        r'^(?P<short_url>[a-zA-Z0-9-]+)/?$',
        TinylinkRedirectView.as_view(),
        name='tinylink_redirect',
    ),
]
# Generated by Django 2.1 on 2020-10-29 14:45

from django.conf import settings
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('mainpage', '0017_auto_20201025_2232'),
    ]

    operations = [
        migrations.CreateModel(
            name='Survey',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title', models.CharField(max_length=200)),
                ('author', models.CharField(max_length=50)),
                ('count', models.SmallIntegerField(default=0)),
                ('user', models.ManyToManyField(to=settings.AUTH_USER_MODEL)),
            ],
        ),
    ]
'0017_auto_20201025_2232'), ] operations = [ migrations.CreateModel(", "settings from django.db import migrations, models class Migration(migrations.Migration): dependencies = [ migrations.swappable_dependency(settings.AUTH_USER_MODEL), ('mainpage',", "dependencies = [ migrations.swappable_dependency(settings.AUTH_USER_MODEL), ('mainpage', '0017_auto_20201025_2232'), ] operations = [ migrations.CreateModel( name='Survey', fields=[", "14:45 from django.conf import settings from django.db import migrations, models class Migration(migrations.Migration): dependencies", "<gh_stars>1-10 # Generated by Django 2.1 on 2020-10-29 14:45 from django.conf import settings", "[ migrations.swappable_dependency(settings.AUTH_USER_MODEL), ('mainpage', '0017_auto_20201025_2232'), ] operations = [ migrations.CreateModel( name='Survey', fields=[ ('id', models.AutoField(auto_created=True," ]
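The `CreateModel` operation above amounts to creating a `survey` table plus a separate join table for the `ManyToManyField`. As a rough sketch of the resulting schema, here is approximately equivalent DDL run through stdlib `sqlite3` — the table and column names follow Django's default naming conventions and are assumptions, since the exact SQL depends on the configured database backend:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Roughly what CreateModel('Survey', ...) emits on SQLite:
cur.execute("""
    CREATE TABLE mainpage_survey (
        id INTEGER PRIMARY KEY AUTOINCREMENT,  -- AutoField
        title VARCHAR(200) NOT NULL,           -- CharField(max_length=200)
        author VARCHAR(50) NOT NULL,           -- CharField(max_length=50)
        count SMALLINT NOT NULL DEFAULT 0      -- SmallIntegerField(default=0)
    )
""")
# The ManyToManyField(to=AUTH_USER_MODEL) becomes a join table:
cur.execute("""
    CREATE TABLE mainpage_survey_user (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        survey_id INTEGER NOT NULL REFERENCES mainpage_survey(id),
        user_id INTEGER NOT NULL
    )
""")

cur.execute("INSERT INTO mainpage_survey (title, author) VALUES (?, ?)",
            ("First survey", "alice"))
row = cur.execute(
    "SELECT id, title, author, count FROM mainpage_survey").fetchone()
```

Note how `count` falls back to its column default when omitted from the insert, mirroring `default=0` on the model field.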
<filename>naive_bayes.py
import re

import numpy as np
from sklearn.datasets import fetch_20newsgroups


def tokenize(documents, stop_words):
    """Lowercase, strip non-letters, and drop stop words from each document."""
    text = []
    for doc in documents:
        letters_only = re.sub("[^a-zA-Z]", " ", doc)
        words = letters_only.lower().split()
        text.append([w for w in words if w not in stop_words])
    return np.array(text, dtype=object)  # ragged: one token list per document


class NaiveBayes(object):
    # multinomial NB model with Laplace smoothing
    # (a Gaussian likelihood could be used instead for numerical features)

    def __init__(self):
        self.p_w = {}          # per-class log word likelihoods
        self.p_c = {}          # log class priors
        self.vocabulary = []
        self.v_num = 0

    def fit(self, x, y):
        n_data = len(y)
        self.label, p_c = np.unique(y, return_counts=True)
        self.p_c = dict(zip(self.label, np.log(p_c / n_data)))
        indexes = np.c_[np.array(y), np.arange(n_data)]
        self.vocabulary = np.unique(
            [item for sublist in x for item in sublist])
        self.v_num = len(self.vocabulary)
        print("vocabulary length {}".format(self.v_num))
        self.v_idx = dict(zip(self.vocabulary, np.arange(self.v_num)))
        print("start fitting")
        for l in self.label:
            idxes = indexes[indexes[:, 0] == l][:, 1].astype(int)
            corpus = [x[idx] for idx in idxes]
            flatten = [item for sublist in corpus for item in sublist]
            # Laplace smoothing: words unseen in this class start at
            # log(1 / (N + |V|))
            self.p_w[l] = [
                np.log(1 / (len(flatten) + self.v_num))] * self.v_num
            words, pwl = np.unique(flatten, return_counts=True)
            for w, p in zip(words, pwl):
                self.p_w[l][self.v_idx[w]] = np.log(
                    (p + 1) / (len(flatten) + self.v_num))

    def predict(self, x):
        return np.array([self.predict_sample(xi) for xi in x])

    def predict_sample(self, x):
        eps = 1 / self.v_num
        # index p_c/p_w by the label value itself, not its position,
        # so non-contiguous label values also work
        p = [self.p_c[c] + sum(self.p_w[c][self.v_idx[w]]
                               if w in self.v_idx else eps
                               for w in x)
             for c in self.label]
        return self.label[np.argmax(p)]


def main():
    stop_words = set(["i", "me", "my", "myself", "we", "our", "ours",
                      "ourselves", "you", "your", "yours", "yourself",
                      "yourselves", "he", "him", "his", "himself", "she",
                      "her", "hers", "herself", "it", "its", "itself",
                      "they", "them", "their", "theirs", "themselves",
                      "what", "which", "who", "whom", "this", "that",
                      "these", "those", "am", "is", "are", "was", "were",
                      "be", "been", "being", "have", "has", "had", "having",
                      "do", "does", "did", "doing", "a", "an", "the", "and",
                      "but", "if", "or", "because", "as", "until", "while",
                      "of", "at", "by", "for", "with", "about", "against",
                      "between", "into", "through", "during", "before",
                      "after", "above", "below", "to", "from", "up", "down",
                      "in", "out", "on", "off", "over", "under", "again",
                      "further", "then", "once", "here", "there", "when",
                      "where", "why", "how", "all", "any", "both", "each",
                      "few", "more", "most", "other", "some", "such", "no",
                      "nor", "not", "only", "own", "same", "so", "than",
                      "too", "very", "s", "t", "can", "will", "just", "don",
                      "should", "now"])
    data = fetch_20newsgroups()
    x = tokenize(data.data, stop_words)
    y = data.target
    test_ratio = 0.2
    test_split = np.random.uniform(0, 1, len(x))
    train_x = x[test_split >= test_ratio]
    test_x = x[test_split < test_ratio]
    train_y = y[test_split >= test_ratio]
    test_y = y[test_split < test_ratio]
    nb = NaiveBayes()
    nb.fit(train_x, train_y)
    print("predicting")
    print(sum(nb.predict(train_x) == train_y) / train_x.shape[0])
    print(sum(nb.predict(test_x) == test_y) / test_y.shape[0])


if __name__ == "__main__":
    main()
def __init__(self): self.p_w = {} self.p_c = {}", "x, y): n_data = len(y) self.label, p_c = np.unique(y, return_counts=True) self.p_c = dict(zip(self.label,", "sublist] self.p_w[l] = [ np.log(1 / (len(flatten) + self.v_num))] * self.v_num words, pwl", "* self.v_num words, pwl = np.unique(flatten, return_counts=True) for w, p in zip(words, pwl):", "\"myself\", \"we\", \"our\", \"ours\", \"ourselves\", \"you\", \"your\", \"yours\", \"yourself\", \"yourselves\", \"he\", \"him\", \"his\",", "fetch_20newsgroups() x = tokenize(data.data, stop_words) y = data.target test_ratio = 0.2 test_split =", "\"through\", \"during\", \"before\", \"after\", \"above\", \"below\", \"to\", \"from\", \"up\", \"down\", \"in\", \"out\", \"on\",", "np.log( (p + 1) / (len(flatten) + self.v_num)) def predict(self, x): return np.array([self.predict_sample(xi)", "class NaiveBayes(object): # multinominal NB model with laplace smoothing # guassian can be", "np.array(text) class NaiveBayes(object): # multinominal NB model with laplace smoothing # guassian can", "\"my\", \"myself\", \"we\", \"our\", \"ours\", \"ourselves\", \"you\", \"your\", \"yours\", \"yourself\", \"yourselves\", \"he\", \"him\",", "\"own\", \"same\", \"so\", \"than\", \"too\", \"very\", \"s\", \"t\", \"can\", \"will\", \"just\", \"don\", \"should\",", "test_ratio] test_x = x[test_split < test_ratio] train_y = y[test_split >= test_ratio] test_y =", "\"itself\", \"they\", \"them\", \"their\", \"theirs\", \"themselves\", \"what\", \"which\", \"who\", \"whom\", \"this\", \"that\", \"these\",", "doc) words = letters_only.lower().split() text.append([w for w in words if not w in", "\"such\", \"no\", \"nor\", \"not\", \"only\", \"own\", \"same\", \"so\", \"than\", \"too\", \"very\", \"s\", \"t\",", "\"me\", \"my\", \"myself\", \"we\", \"our\", \"ours\", \"ourselves\", \"you\", \"your\", \"yours\", \"yourself\", \"yourselves\", \"he\",", "\"that\", \"these\", \"those\", \"am\", \"is\", \"are\", \"was\", \"were\", \"be\", \"been\", 
\"being\", \"have\", \"has\",", "\"who\", \"whom\", \"this\", \"that\", \"these\", \"those\", \"am\", \"is\", \"are\", \"was\", \"were\", \"be\", \"been\",", "as np from sklearn.datasets import fetch_20newsgroups import re def tokenize(documents, stop_words): text =", "\"theirs\", \"themselves\", \"what\", \"which\", \"who\", \"whom\", \"this\", \"that\", \"these\", \"those\", \"am\", \"is\", \"are\",", "p in zip(words, pwl): self.p_w[l][self.v_idx[w]] = np.log( (p + 1) / (len(flatten) +", "\"herself\", \"it\", \"its\", \"itself\", \"they\", \"them\", \"their\", \"theirs\", \"themselves\", \"what\", \"which\", \"who\", \"whom\",", "\"once\", \"here\", \"there\", \"when\", \"where\", \"why\", \"how\", \"all\", \"any\", \"both\", \"each\", \"few\", \"more\",", "len(self.vocabulary) print(\"vocabulary length {}\".format(self.v_num)) self.v_idx = dict(zip(self.vocabulary, np.arange(self.v_num))) print(\"start fitting\") for l in", "\" \", doc) words = letters_only.lower().split() text.append([w for w in words if not", "# multinominal NB model with laplace smoothing # guassian can be used for", "x = tokenize(data.data, stop_words) y = data.target test_ratio = 0.2 test_split = np.random.uniform(0,", "sklearn.datasets import fetch_20newsgroups import re def tokenize(documents, stop_words): text = [] for doc", "w in stop_words]) return np.array(text) class NaiveBayes(object): # multinominal NB model with laplace", "def __init__(self): self.p_w = {} self.p_c = {} self.vocabulary = [] self.v_num =", "x): return np.array([self.predict_sample(xi) for xi in x]) def predict_sample(self, x): eps = 1", "test_split = np.random.uniform(0, 1, len(x)) train_x = x[test_split >= test_ratio] test_x = x[test_split", "words if not w in stop_words]) return np.array(text) class NaiveBayes(object): # multinominal NB", "self.vocabulary = [] self.v_num = 0 def fit(self, x, y): n_data = len(y)", "\"was\", \"were\", \"be\", \"been\", \"being\", \"have\", \"has\", \"had\", \"having\", \"do\", 
\"does\", \"did\", \"doing\",", "stop_words]) return np.array(text) class NaiveBayes(object): # multinominal NB model with laplace smoothing #", "\"themselves\", \"what\", \"which\", \"who\", \"whom\", \"this\", \"that\", \"these\", \"those\", \"am\", \"is\", \"are\", \"was\",", "self.v_num p = [self.p_c[i] + sum(self.p_w[i][self.v_idx[w]] if w in self.v_idx.keys() else eps for", "in self.v_idx.keys() else eps for w in x) for i in range(len(self.label))] return", "stop_words) y = data.target test_ratio = 0.2 test_split = np.random.uniform(0, 1, len(x)) train_x", "for l in self.label: idxes = indexes[indexes[:, 0] == l][:, 1].astype(int) corpus =", "\"any\", \"both\", \"each\", \"few\", \"more\", \"most\", \"other\", \"some\", \"such\", \"no\", \"nor\", \"not\", \"only\",", "\"our\", \"ours\", \"ourselves\", \"you\", \"your\", \"yours\", \"yourself\", \"yourselves\", \"he\", \"him\", \"his\", \"himself\", \"she\",", "\"then\", \"once\", \"here\", \"there\", \"when\", \"where\", \"why\", \"how\", \"all\", \"any\", \"both\", \"each\", \"few\",", "= [] for doc in documents: letters_only = re.sub(\"[^a-zA-Z]\", \" \", doc) words", "for w in words if not w in stop_words]) return np.array(text) class NaiveBayes(object):", "= len(self.vocabulary) print(\"vocabulary length {}\".format(self.v_num)) self.v_idx = dict(zip(self.vocabulary, np.arange(self.v_num))) print(\"start fitting\") for l", "self.p_w[l][self.v_idx[w]] = np.log( (p + 1) / (len(flatten) + self.v_num)) def predict(self, x):", "letters_only = re.sub(\"[^a-zA-Z]\", \" \", doc) words = letters_only.lower().split() text.append([w for w in", "= [] self.v_num = 0 def fit(self, x, y): n_data = len(y) self.label,", "def predict_sample(self, x): eps = 1 / self.v_num p = [self.p_c[i] + sum(self.p_w[i][self.v_idx[w]]", "return np.array([self.predict_sample(xi) for xi in x]) def predict_sample(self, x): eps = 1 /", "l][:, 1].astype(int) corpus = [x[idx] for idx in idxes] flatten = [item for", "def main(): stop_words = 
set([\"i\", \"me\", \"my\", \"myself\", \"we\", \"our\", \"ours\", \"ourselves\", \"you\",", "\"same\", \"so\", \"than\", \"too\", \"very\", \"s\", \"t\", \"can\", \"will\", \"just\", \"don\", \"should\", \"now\"])", "= y[test_split >= test_ratio] test_y = y[test_split < test_ratio] nb = NaiveBayes() nb.fit(train_x,", "for item in sublist]) self.v_num = len(self.vocabulary) print(\"vocabulary length {}\".format(self.v_num)) self.v_idx = dict(zip(self.vocabulary,", "in self.label: idxes = indexes[indexes[:, 0] == l][:, 1].astype(int) corpus = [x[idx] for", "\"what\", \"which\", \"who\", \"whom\", \"this\", \"that\", \"these\", \"those\", \"am\", \"is\", \"are\", \"was\", \"were\",", "idxes = indexes[indexes[:, 0] == l][:, 1].astype(int) corpus = [x[idx] for idx in", "= [x[idx] for idx in idxes] flatten = [item for sublist in corpus", "tokenize(data.data, stop_words) y = data.target test_ratio = 0.2 test_split = np.random.uniform(0, 1, len(x))", "self.v_num)) def predict(self, x): return np.array([self.predict_sample(xi) for xi in x]) def predict_sample(self, x):", "\"they\", \"them\", \"their\", \"theirs\", \"themselves\", \"what\", \"which\", \"who\", \"whom\", \"this\", \"that\", \"these\", \"those\",", "x[test_split >= test_ratio] test_x = x[test_split < test_ratio] train_y = y[test_split >= test_ratio]", "np from sklearn.datasets import fetch_20newsgroups import re def tokenize(documents, stop_words): text = []", "multinominal NB model with laplace smoothing # guassian can be used for numerical", "train_y) / train_x.shape[0]) print(sum(nb.predict(test_x) == test_y) / test_y.shape[0]) if __name__ == \"__main__\": main()", "eps = 1 / self.v_num p = [self.p_c[i] + sum(self.p_w[i][self.v_idx[w]] if w in", "\"below\", \"to\", \"from\", \"up\", \"down\", \"in\", \"out\", \"on\", \"off\", \"over\", \"under\", \"again\", \"further\",", "\"most\", \"other\", \"some\", \"such\", \"no\", \"nor\", \"not\", \"only\", \"own\", \"same\", \"so\", \"than\", \"too\",", 
"\"these\", \"those\", \"am\", \"is\", \"are\", \"was\", \"were\", \"be\", \"been\", \"being\", \"have\", \"has\", \"had\",", "np.log(1 / (len(flatten) + self.v_num))] * self.v_num words, pwl = np.unique(flatten, return_counts=True) for", "np.random.uniform(0, 1, len(x)) train_x = x[test_split >= test_ratio] test_x = x[test_split < test_ratio]", "for sublist in corpus for item in sublist] self.p_w[l] = [ np.log(1 /", "self.v_idx.keys() else eps for w in x) for i in range(len(self.label))] return self.label[np.argmax(p)]", "= y[test_split < test_ratio] nb = NaiveBayes() nb.fit(train_x, train_y) print(\"predicting\") print(sum(nb.predict(train_x) == train_y)", "\"to\", \"from\", \"up\", \"down\", \"in\", \"out\", \"on\", \"off\", \"over\", \"under\", \"again\", \"further\", \"then\",", "test_ratio = 0.2 test_split = np.random.uniform(0, 1, len(x)) train_x = x[test_split >= test_ratio]", "range(len(self.label))] return self.label[np.argmax(p)] def main(): stop_words = set([\"i\", \"me\", \"my\", \"myself\", \"we\", \"our\",", "\"do\", \"does\", \"did\", \"doing\", \"a\", \"an\", \"the\", \"and\", \"but\", \"if\", \"or\", \"because\", \"as\",", "w in words if not w in stop_words]) return np.array(text) class NaiveBayes(object): #", "\"each\", \"few\", \"more\", \"most\", \"other\", \"some\", \"such\", \"no\", \"nor\", \"not\", \"only\", \"own\", \"same\",", "\"this\", \"that\", \"these\", \"those\", \"am\", \"is\", \"are\", \"was\", \"were\", \"be\", \"been\", \"being\", \"have\",", "import fetch_20newsgroups import re def tokenize(documents, stop_words): text = [] for doc in", "\"he\", \"him\", \"his\", \"himself\", \"she\", \"her\", \"hers\", \"herself\", \"it\", \"its\", \"itself\", \"they\", \"them\",", "\"has\", \"had\", \"having\", \"do\", \"does\", \"did\", \"doing\", \"a\", \"an\", \"the\", \"and\", \"but\", \"if\",", "set([\"i\", \"me\", \"my\", \"myself\", \"we\", \"our\", \"ours\", \"ourselves\", \"you\", \"your\", \"yours\", \"yourself\", \"yourselves\",", 
"y[test_split >= test_ratio] test_y = y[test_split < test_ratio] nb = NaiveBayes() nb.fit(train_x, train_y)", "= 0.2 test_split = np.random.uniform(0, 1, len(x)) train_x = x[test_split >= test_ratio] test_x", "np.unique(flatten, return_counts=True) for w, p in zip(words, pwl): self.p_w[l][self.v_idx[w]] = np.log( (p +", "= np.unique(flatten, return_counts=True) for w, p in zip(words, pwl): self.p_w[l][self.v_idx[w]] = np.log( (p", "self.p_w[l] = [ np.log(1 / (len(flatten) + self.v_num))] * self.v_num words, pwl =", "corpus = [x[idx] for idx in idxes] flatten = [item for sublist in", "\"for\", \"with\", \"about\", \"against\", \"between\", \"into\", \"through\", \"during\", \"before\", \"after\", \"above\", \"below\", \"to\",", "\"had\", \"having\", \"do\", \"does\", \"did\", \"doing\", \"a\", \"an\", \"the\", \"and\", \"but\", \"if\", \"or\",", "predict(self, x): return np.array([self.predict_sample(xi) for xi in x]) def predict_sample(self, x): eps =", "indexes[indexes[:, 0] == l][:, 1].astype(int) corpus = [x[idx] for idx in idxes] flatten", "test_ratio] test_y = y[test_split < test_ratio] nb = NaiveBayes() nb.fit(train_x, train_y) print(\"predicting\") print(sum(nb.predict(train_x)", "y): n_data = len(y) self.label, p_c = np.unique(y, return_counts=True) self.p_c = dict(zip(self.label, np.log(p_c", "\"an\", \"the\", \"and\", \"but\", \"if\", \"or\", \"because\", \"as\", \"until\", \"while\", \"of\", \"at\", \"by\",", "\"not\", \"only\", \"own\", \"same\", \"so\", \"than\", \"too\", \"very\", \"s\", \"t\", \"can\", \"will\", \"just\",", "numpy as np from sklearn.datasets import fetch_20newsgroups import re def tokenize(documents, stop_words): text", "\"of\", \"at\", \"by\", \"for\", \"with\", \"about\", \"against\", \"between\", \"into\", \"through\", \"during\", \"before\", \"after\",", "1) / (len(flatten) + self.v_num)) def predict(self, x): return np.array([self.predict_sample(xi) for xi in", "self.p_c = {} self.vocabulary = [] self.v_num = 0 def fit(self, x, 
y):", "1 / self.v_num p = [self.p_c[i] + sum(self.p_w[i][self.v_idx[w]] if w in self.v_idx.keys() else", "x[test_split < test_ratio] train_y = y[test_split >= test_ratio] test_y = y[test_split < test_ratio]", "\"but\", \"if\", \"or\", \"because\", \"as\", \"until\", \"while\", \"of\", \"at\", \"by\", \"for\", \"with\", \"about\",", "p = [self.p_c[i] + sum(self.p_w[i][self.v_idx[w]] if w in self.v_idx.keys() else eps for w", "letters_only.lower().split() text.append([w for w in words if not w in stop_words]) return np.array(text)", "= 1 / self.v_num p = [self.p_c[i] + sum(self.p_w[i][self.v_idx[w]] if w in self.v_idx.keys()", "if not w in stop_words]) return np.array(text) class NaiveBayes(object): # multinominal NB model", "\"other\", \"some\", \"such\", \"no\", \"nor\", \"not\", \"only\", \"own\", \"same\", \"so\", \"than\", \"too\", \"very\",", "\"more\", \"most\", \"other\", \"some\", \"such\", \"no\", \"nor\", \"not\", \"only\", \"own\", \"same\", \"so\", \"than\",", "y[test_split < test_ratio] nb = NaiveBayes() nb.fit(train_x, train_y) print(\"predicting\") print(sum(nb.predict(train_x) == train_y) /", "nb = NaiveBayes() nb.fit(train_x, train_y) print(\"predicting\") print(sum(nb.predict(train_x) == train_y) / train_x.shape[0]) print(sum(nb.predict(test_x) ==", "\"before\", \"after\", \"above\", \"below\", \"to\", \"from\", \"up\", \"down\", \"in\", \"out\", \"on\", \"off\", \"over\",", "\"doing\", \"a\", \"an\", \"the\", \"and\", \"but\", \"if\", \"or\", \"because\", \"as\", \"until\", \"while\", \"of\",", "NB model with laplace smoothing # guassian can be used for numerical def", "\"s\", \"t\", \"can\", \"will\", \"just\", \"don\", \"should\", \"now\"]) data = fetch_20newsgroups() x =", "\"being\", \"have\", \"has\", \"had\", \"having\", \"do\", \"does\", \"did\", \"doing\", \"a\", \"an\", \"the\", \"and\",", "\"between\", \"into\", \"through\", \"during\", \"before\", \"after\", \"above\", \"below\", \"to\", \"from\", \"up\", \"down\", \"in\",", 
"text.append([w for w in words if not w in stop_words]) return np.array(text) class", "x): eps = 1 / self.v_num p = [self.p_c[i] + sum(self.p_w[i][self.v_idx[w]] if w", "/ (len(flatten) + self.v_num))] * self.v_num words, pwl = np.unique(flatten, return_counts=True) for w,", "np.array([self.predict_sample(xi) for xi in x]) def predict_sample(self, x): eps = 1 / self.v_num", "test_x = x[test_split < test_ratio] train_y = y[test_split >= test_ratio] test_y = y[test_split", "\"very\", \"s\", \"t\", \"can\", \"will\", \"just\", \"don\", \"should\", \"now\"]) data = fetch_20newsgroups() x", "+ 1) / (len(flatten) + self.v_num)) def predict(self, x): return np.array([self.predict_sample(xi) for xi" ]
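The per-class word likelihoods above use Laplace (add-one) smoothing, `np.log((p + 1) / (len(flatten) + self.v_num))`, so words never seen in a class still keep non-zero probability mass. A minimal sketch of that estimate, with made-up counts and vocabulary size chosen purely for illustration:

```python
import math

# Made-up word counts for one class and a made-up vocabulary size.
counts = {"ball": 3, "game": 1}
total = sum(counts.values())   # plays the role of len(flatten)
v_num = 4                      # plays the role of self.v_num

def log_p(word):
    # Laplace-smoothed log-likelihood, matching the estimate in fit().
    return math.log((counts.get(word, 0) + 1) / (total + v_num))

# Seen word: (3 + 1) / (4 + 4) = 0.5
assert abs(log_p("ball") - math.log(0.5)) < 1e-12
# Unseen word still gets (0 + 1) / (4 + 4) rather than probability zero.
assert abs(log_p("net") - math.log(1 / 8)) < 1e-12
```

Working in log space keeps the product of many small per-word probabilities from underflowing to zero.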
import json
import random
import string

import urllib3

# log file
log = open("log.txt", "a+")
Head = ["firstname\t", "lastname\t", "email\t\t\t", "college\t", "city\t",
        "phone\t\t", "password\t", "refral\t"]
log.writelines(Head)
code = input("Enter Your Refral code: ")
points = int(int(input("how much point you want to generate: ")) / 10)
point = int(points / 10)
# fields
for i in range(point):
    # Random 5-character name doubles as the mailbox part of the email.
    firstname = ''.join(random.choices(string.ascii_uppercase + string.digits, k=5))
    email = firstname + "@gmail.com"
    lastname = 'Jr'
    college = 'IIITDMJ'
    city = 'Dumna'
    phone = '1234567890'
    password = '<PASSWORD>'
    refral = code
    data = {
        "firstname": firstname,
        "lastname": lastname,
        "email": email,
        "college": college,
        "city": city,
        "phone": phone,
        "password": password,
        "refral": refral,
    }
    # Tab-separate the values and append them as one row of the log file.
    L = [v for k, v in data.items()]
    L[0] = " " + L[0]
    L[1] = " " + L[1]
    for index in range(len(L)):
        L[index] = L[index] + "\t"
    log.writelines("\n")
    log.writelines(L)
    http = urllib3.PoolManager()
    try:
        r = http.request('GET', 'https://invicta20.org/register', retries=False)
        encoded_data = json.dumps(data).encode('utf-8')
        r = http.request('POST', 'https://invicta20.org/register',
                         body=encoded_data,
                         headers={'Content-Type': 'application/json'})
        json.loads(r.data.decode('utf-8'))['json']
        var = {'attribute': 'value'}
    except urllib3.exceptions.NewConnectionError:
        print('Connection failed.')
    except json.decoder.JSONDecodeError:
        var = {'attribute': 'value'}
[]
[ "test log(time=2s) \"\"\" import os import unittest from pyquickhelper.loghelper import fLOG from pyquickhelper.pycode", "import os import unittest from pyquickhelper.loghelper import fLOG from pyquickhelper.pycode import get_temp_folder, ExtTestCase", "= ['http://www.xavierdupre.fr/blog/xdbrss.xml', 'http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/_downloads/rss.xml'] compile_rss_blogs(links, \"http://www.xavierdupre.fr/blogagg.html\", 'Aggregation of blog posts published on <a href='", "test_rss_compile(self): fLOG( __file__, self._testMethodName, OutputPrint=__name__ == \"__main__\") temp = get_temp_folder(__file__, \"temp_rss_compile\") out_html =", "fLOG from pyquickhelper.pycode import get_temp_folder, ExtTestCase from pyrsslocal.cli import compile_rss_blogs class TestRSSCompile(ExtTestCase): def", "\"__main__\") temp = get_temp_folder(__file__, \"temp_rss_compile\") out_html = os.path.join(temp, 'index.html') out_rss = os.path.join(temp, 'rssfile.xml')", "pyrsslocal.cli import compile_rss_blogs class TestRSSCompile(ExtTestCase): def test_rss_compile(self): fLOG( __file__, self._testMethodName, OutputPrint=__name__ == \"__main__\")", "from pyrsslocal.cli import compile_rss_blogs class TestRSSCompile(ExtTestCase): def test_rss_compile(self): fLOG( __file__, self._testMethodName, OutputPrint=__name__ ==", "= os.path.join(temp, 'index.html') out_rss = os.path.join(temp, 'rssfile.xml') links = ['http://www.xavierdupre.fr/blog/xdbrss.xml', 'http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/_downloads/rss.xml'] compile_rss_blogs(links, \"http://www.xavierdupre.fr/blogagg.html\",", "['http://www.xavierdupre.fr/blog/xdbrss.xml', 'http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/_downloads/rss.xml'] compile_rss_blogs(links, \"http://www.xavierdupre.fr/blogagg.html\", 'Aggregation of blog posts published on <a href=' '\"http://www.xavierdupre.fr.\">xavierdupre.fr</a>',", "'Aggregation of blog posts published on <a href=' 
'\"http://www.xavierdupre.fr.\">xavierdupre.fr</a>', title=\"Recent posts\", author=\"<NAME>\", out_html=out_html,", "published on <a href=' '\"http://www.xavierdupre.fr.\">xavierdupre.fr</a>', title=\"Recent posts\", author=\"<NAME>\", out_html=out_html, out_rss=out_rss, fLOG=fLOG) self.assertExists(out_html) self.assertExists(out_rss)", "get_temp_folder(__file__, \"temp_rss_compile\") out_html = os.path.join(temp, 'index.html') out_rss = os.path.join(temp, 'rssfile.xml') links = ['http://www.xavierdupre.fr/blog/xdbrss.xml',", "coding: utf-8 \"\"\" @brief test log(time=2s) \"\"\" import os import unittest from pyquickhelper.loghelper", "pyquickhelper.pycode import get_temp_folder, ExtTestCase from pyrsslocal.cli import compile_rss_blogs class TestRSSCompile(ExtTestCase): def test_rss_compile(self): fLOG(", "out_html = os.path.join(temp, 'index.html') out_rss = os.path.join(temp, 'rssfile.xml') links = ['http://www.xavierdupre.fr/blog/xdbrss.xml', 'http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/_downloads/rss.xml'] compile_rss_blogs(links,", "blog posts published on <a href=' '\"http://www.xavierdupre.fr.\">xavierdupre.fr</a>', title=\"Recent posts\", author=\"<NAME>\", out_html=out_html, out_rss=out_rss, fLOG=fLOG)", "out_rss = os.path.join(temp, 'rssfile.xml') links = ['http://www.xavierdupre.fr/blog/xdbrss.xml', 'http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/_downloads/rss.xml'] compile_rss_blogs(links, \"http://www.xavierdupre.fr/blogagg.html\", 'Aggregation of blog", "posts published on <a href=' '\"http://www.xavierdupre.fr.\">xavierdupre.fr</a>', title=\"Recent posts\", author=\"<NAME>\", out_html=out_html, out_rss=out_rss, fLOG=fLOG) self.assertExists(out_html)", "# coding: utf-8 \"\"\" @brief test log(time=2s) \"\"\" import os import unittest from", "on <a href=' '\"http://www.xavierdupre.fr.\">xavierdupre.fr</a>', title=\"Recent posts\", author=\"<NAME>\", out_html=out_html, out_rss=out_rss, fLOG=fLOG) self.assertExists(out_html) 
self.assertExists(out_rss) if", "unittest from pyquickhelper.loghelper import fLOG from pyquickhelper.pycode import get_temp_folder, ExtTestCase from pyrsslocal.cli import", "compile_rss_blogs class TestRSSCompile(ExtTestCase): def test_rss_compile(self): fLOG( __file__, self._testMethodName, OutputPrint=__name__ == \"__main__\") temp =", "import fLOG from pyquickhelper.pycode import get_temp_folder, ExtTestCase from pyrsslocal.cli import compile_rss_blogs class TestRSSCompile(ExtTestCase):", "import unittest from pyquickhelper.loghelper import fLOG from pyquickhelper.pycode import get_temp_folder, ExtTestCase from pyrsslocal.cli", "<a href=' '\"http://www.xavierdupre.fr.\">xavierdupre.fr</a>', title=\"Recent posts\", author=\"<NAME>\", out_html=out_html, out_rss=out_rss, fLOG=fLOG) self.assertExists(out_html) self.assertExists(out_rss) if __name__", "title=\"Recent posts\", author=\"<NAME>\", out_html=out_html, out_rss=out_rss, fLOG=fLOG) self.assertExists(out_html) self.assertExists(out_rss) if __name__ == \"__main__\": unittest.main()", "<gh_stars>1-10 # coding: utf-8 \"\"\" @brief test log(time=2s) \"\"\" import os import unittest", "of blog posts published on <a href=' '\"http://www.xavierdupre.fr.\">xavierdupre.fr</a>', title=\"Recent posts\", author=\"<NAME>\", out_html=out_html, out_rss=out_rss,", "from pyquickhelper.pycode import get_temp_folder, ExtTestCase from pyrsslocal.cli import compile_rss_blogs class TestRSSCompile(ExtTestCase): def test_rss_compile(self):", "self._testMethodName, OutputPrint=__name__ == \"__main__\") temp = get_temp_folder(__file__, \"temp_rss_compile\") out_html = os.path.join(temp, 'index.html') out_rss", "log(time=2s) \"\"\" import os import unittest from pyquickhelper.loghelper import fLOG from pyquickhelper.pycode import", "pyquickhelper.loghelper import fLOG from pyquickhelper.pycode import get_temp_folder, ExtTestCase from pyrsslocal.cli import compile_rss_blogs class", 
"'http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/_downloads/rss.xml'] compile_rss_blogs(links, \"http://www.xavierdupre.fr/blogagg.html\", 'Aggregation of blog posts published on <a href=' '\"http://www.xavierdupre.fr.\">xavierdupre.fr</a>', title=\"Recent", "os.path.join(temp, 'index.html') out_rss = os.path.join(temp, 'rssfile.xml') links = ['http://www.xavierdupre.fr/blog/xdbrss.xml', 'http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/_downloads/rss.xml'] compile_rss_blogs(links, \"http://www.xavierdupre.fr/blogagg.html\", 'Aggregation", "'rssfile.xml') links = ['http://www.xavierdupre.fr/blog/xdbrss.xml', 'http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/_downloads/rss.xml'] compile_rss_blogs(links, \"http://www.xavierdupre.fr/blogagg.html\", 'Aggregation of blog posts published on", "compile_rss_blogs(links, \"http://www.xavierdupre.fr/blogagg.html\", 'Aggregation of blog posts published on <a href=' '\"http://www.xavierdupre.fr.\">xavierdupre.fr</a>', title=\"Recent posts\",", "\"http://www.xavierdupre.fr/blogagg.html\", 'Aggregation of blog posts published on <a href=' '\"http://www.xavierdupre.fr.\">xavierdupre.fr</a>', title=\"Recent posts\", author=\"<NAME>\",", "import compile_rss_blogs class TestRSSCompile(ExtTestCase): def test_rss_compile(self): fLOG( __file__, self._testMethodName, OutputPrint=__name__ == \"__main__\") temp", "@brief test log(time=2s) \"\"\" import os import unittest from pyquickhelper.loghelper import fLOG from", "OutputPrint=__name__ == \"__main__\") temp = get_temp_folder(__file__, \"temp_rss_compile\") out_html = os.path.join(temp, 'index.html') out_rss =", "= get_temp_folder(__file__, \"temp_rss_compile\") out_html = os.path.join(temp, 'index.html') out_rss = os.path.join(temp, 'rssfile.xml') links =", "'\"http://www.xavierdupre.fr.\">xavierdupre.fr</a>', title=\"Recent posts\", author=\"<NAME>\", out_html=out_html, out_rss=out_rss, fLOG=fLOG) self.assertExists(out_html) self.assertExists(out_rss) if 
__name__ == \"__main__\":", "import get_temp_folder, ExtTestCase from pyrsslocal.cli import compile_rss_blogs class TestRSSCompile(ExtTestCase): def test_rss_compile(self): fLOG( __file__,", "utf-8 \"\"\" @brief test log(time=2s) \"\"\" import os import unittest from pyquickhelper.loghelper import", "\"\"\" @brief test log(time=2s) \"\"\" import os import unittest from pyquickhelper.loghelper import fLOG", "ExtTestCase from pyrsslocal.cli import compile_rss_blogs class TestRSSCompile(ExtTestCase): def test_rss_compile(self): fLOG( __file__, self._testMethodName, OutputPrint=__name__", "\"\"\" import os import unittest from pyquickhelper.loghelper import fLOG from pyquickhelper.pycode import get_temp_folder,", "temp = get_temp_folder(__file__, \"temp_rss_compile\") out_html = os.path.join(temp, 'index.html') out_rss = os.path.join(temp, 'rssfile.xml') links", "\"temp_rss_compile\") out_html = os.path.join(temp, 'index.html') out_rss = os.path.join(temp, 'rssfile.xml') links = ['http://www.xavierdupre.fr/blog/xdbrss.xml', 'http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/_downloads/rss.xml']", "class TestRSSCompile(ExtTestCase): def test_rss_compile(self): fLOG( __file__, self._testMethodName, OutputPrint=__name__ == \"__main__\") temp = get_temp_folder(__file__,", "== \"__main__\") temp = get_temp_folder(__file__, \"temp_rss_compile\") out_html = os.path.join(temp, 'index.html') out_rss = os.path.join(temp,", "'index.html') out_rss = os.path.join(temp, 'rssfile.xml') links = ['http://www.xavierdupre.fr/blog/xdbrss.xml', 'http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/_downloads/rss.xml'] compile_rss_blogs(links, \"http://www.xavierdupre.fr/blogagg.html\", 'Aggregation of", "def test_rss_compile(self): fLOG( __file__, self._testMethodName, OutputPrint=__name__ == \"__main__\") temp = get_temp_folder(__file__, \"temp_rss_compile\") out_html", "links = ['http://www.xavierdupre.fr/blog/xdbrss.xml', 
'http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/_downloads/rss.xml'] compile_rss_blogs(links, \"http://www.xavierdupre.fr/blogagg.html\", 'Aggregation of blog posts published on <a", "os.path.join(temp, 'rssfile.xml') links = ['http://www.xavierdupre.fr/blog/xdbrss.xml', 'http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/_downloads/rss.xml'] compile_rss_blogs(links, \"http://www.xavierdupre.fr/blogagg.html\", 'Aggregation of blog posts published", "get_temp_folder, ExtTestCase from pyrsslocal.cli import compile_rss_blogs class TestRSSCompile(ExtTestCase): def test_rss_compile(self): fLOG( __file__, self._testMethodName,", "fLOG( __file__, self._testMethodName, OutputPrint=__name__ == \"__main__\") temp = get_temp_folder(__file__, \"temp_rss_compile\") out_html = os.path.join(temp,", "from pyquickhelper.loghelper import fLOG from pyquickhelper.pycode import get_temp_folder, ExtTestCase from pyrsslocal.cli import compile_rss_blogs", "os import unittest from pyquickhelper.loghelper import fLOG from pyquickhelper.pycode import get_temp_folder, ExtTestCase from", "= os.path.join(temp, 'rssfile.xml') links = ['http://www.xavierdupre.fr/blog/xdbrss.xml', 'http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/_downloads/rss.xml'] compile_rss_blogs(links, \"http://www.xavierdupre.fr/blogagg.html\", 'Aggregation of blog posts", "href=' '\"http://www.xavierdupre.fr.\">xavierdupre.fr</a>', title=\"Recent posts\", author=\"<NAME>\", out_html=out_html, out_rss=out_rss, fLOG=fLOG) self.assertExists(out_html) self.assertExists(out_rss) if __name__ ==", "__file__, self._testMethodName, OutputPrint=__name__ == \"__main__\") temp = get_temp_folder(__file__, \"temp_rss_compile\") out_html = os.path.join(temp, 'index.html')", "TestRSSCompile(ExtTestCase): def test_rss_compile(self): fLOG( __file__, self._testMethodName, OutputPrint=__name__ == \"__main__\") temp = get_temp_folder(__file__, \"temp_rss_compile\")" ]
[ "sorting columns (small to large) in Excel. Can be any length from 1", "the data from an Excel file into a list of dictionaries, where each", "row in the Excel file and the keys of each dictionary represent each", "sorting_fields: list of str :return: None \"\"\" writer = pd.ExcelWriter(file_path, engine='openpyxl') df =", ":param sheet_name: Name of a particular sheet in the file to load (optional,", "(optional, defaults to the first sheet in the Excel file). :type sheet_name: str", "file into a list of dictionaries, where each dictionary represents a row in", "sheet_name = sheet_name if sheet_name else xl.sheet_names[0] return xl.parse(sheet_name, index_col=None).to_dict('records') def export_to_excel(data, file_path,", "as pd def load_from_excel(file_path, sheet_name=None): \"\"\" Loads the data from an Excel file", "and sheet_name already exist. :type file_path: str :param sheet_name: Name of a particular", "load_from_excel(file_path, sheet_name=None): \"\"\" Loads the data from an Excel file into a list", "from data to be used as sorting columns (small to large) in Excel.", "row in the Excel file. :type data: list of dict :param file_path: The", "columns (small to large) in Excel. Can be any length from 1 column", "in the Excel file. :type data: list of dict :param file_path: The full", "= sheet_name if sheet_name else xl.sheet_names[0] return xl.parse(sheet_name, index_col=None).to_dict('records') def export_to_excel(data, file_path, sheet_name=\"Sheet1\",", "length from 1 column to every column. The order of the list will", "an Excel file into a list of dictionaries, where each dictionary represents a", ".xlsx) of the Excel file to be loaded. :type file_path: str :param sheet_name:", "in Excel. Can be any length from 1 column to every column. 
The", "index_col=None).to_dict('records') def export_to_excel(data, file_path, sheet_name=\"Sheet1\", field_order=None, sorting_fields=None): \"\"\" Writes data from a list", "write to (optional, defaults to \"Sheet1\"). :type sheet_name: str :param field_order: List of", "file to be written to. This will overwrite data if both file_path and", "will overwrite data if both file_path and sheet_name already exist. :type file_path: str", "from the list will not be written as columns. (optional) :type field_order: list", "Excel file. :param data: List of dictionaries, each dictionary representing a row in", "from data ordered to match the intended Excel column ordering (left to right).", "Excel. Can be any length from 1 column to every column. The order", "Name of a particular sheet in the file to write to (optional, defaults", "dataframe. :param file_path: The full file path (appended with .xlsx) of the Excel", "large) in Excel. Can be any length from 1 column to every column.", "(small to large) in Excel. Can be any length from 1 column to", "with .xlsx) of the Excel file to be written to. This will overwrite", "the list will not be written as columns. (optional) :type field_order: list of", "the sorting order. :type sorting_fields: list of str :return: None \"\"\" writer =", "file and the keys of each dictionary represent each column header in the", "in the file to load (optional, defaults to the first sheet in the", "column header in the Excel file. :param data: List of dictionaries, each dictionary", "file path (appended with .xlsx) of the Excel file to be written to.", "every column. The order of the list will dictate the sorting order. :type", "sheet in the file to load (optional, defaults to the first sheet in", ":param sheet_name: Name of a particular sheet in the file to write to", "def load_from_excel(file_path, sheet_name=None): \"\"\" Loads the data from an Excel file into a", "a row in the Excel file. 
:rtype: list of dict \"\"\" xl =", "represents a row in the Excel file and the keys of each dictionary", "a list of dictionaries, where each dictionary represents a row in the Excel", "dict \"\"\" xl = pd.ExcelFile(file_path) sheet_name = sheet_name if sheet_name else xl.sheet_names[0] return", "pandas as pd def load_from_excel(file_path, sheet_name=None): \"\"\" Loads the data from an Excel", "keys from data to be used as sorting columns (small to large) in", "import pandas as pd def load_from_excel(file_path, sheet_name=None): \"\"\" Loads the data from an", "the Excel file and the keys of each dictionary represent each column header", "the Excel file to be written to. This will overwrite data if both", "sheet in the file to write to (optional, defaults to \"Sheet1\"). :type sheet_name:", "to an Excel file, where each dictionary represents a row in the Excel", "keys from data ordered to match the intended Excel column ordering (left to", "(optional) :type field_order: list of str :param sorting_fields: List of keys from data", ":param file_path: The full file path (appended with .xlsx) of the Excel file", ":type sheet_name: str :return: List of dictionaries, each dictionary representing a row in", "defaults to the first sheet in the Excel file). :type sheet_name: str :return:", "field_order: List of keys from data ordered to match the intended Excel column", "Any keys omitted from the list will not be written as columns. (optional)", "list of dictionaries via a Pandas dataframe. :param file_path: The full file path", "a row in the Excel file. :type data: list of dict :param file_path:", "a row in the Excel file and the keys of each dictionary represent", "ordering (left to right). Must include all keys/columns. Any keys omitted from the", "file). :type sheet_name: str :return: List of dictionaries, each dictionary representing a row", "to every column. 
The order of the list will dictate the sorting order.", "load (optional, defaults to the first sheet in the Excel file). :type sheet_name:", "dictionary representing a row in the Excel file. :type data: list of dict", "engine='openpyxl') df = pd.DataFrame(data) if field_order: df = df[field_order] if sorting_fields: df =", "List of dictionaries, each dictionary representing a row in the Excel file. :type", "dictionary representing a row in the Excel file. :rtype: list of dict \"\"\"", "list of str :return: None \"\"\" writer = pd.ExcelWriter(file_path, engine='openpyxl') df = pd.DataFrame(data)", "loaded. :type file_path: str :param sheet_name: Name of a particular sheet in the", "of str :param sorting_fields: List of keys from data to be used as", "Name of a particular sheet in the file to load (optional, defaults to", "sorting_fields=None): \"\"\" Writes data from a list of dictionaries to an Excel file,", "(left to right). Must include all keys/columns. Any keys omitted from the list", "via a Pandas dataframe. :param file_path: The full file path (appended with .xlsx)", "keys omitted from the list will not be written as columns. (optional) :type", "be written as columns. (optional) :type field_order: list of str :param sorting_fields: List", "if both file_path and sheet_name already exist. :type file_path: str :param sheet_name: Name", "Excel file into a list of dictionaries, where each dictionary represents a row", "Excel file to be loaded. :type file_path: str :param sheet_name: Name of a", "1 column to every column. The order of the list will dictate the", "list will dictate the sorting order. :type sorting_fields: list of str :return: None", "a list of dictionaries to an Excel file, where each dictionary represents a", "The order of the list will dictate the sorting order. :type sorting_fields: list", "order of the list will dictate the sorting order. :type sorting_fields: list of", "to load (optional, defaults to the first sheet in the Excel file). 
:type", "data to be used as sorting columns (small to large) in Excel. Can", "represent each column header in the Excel file. :param data: List of dictionaries,", "pd.ExcelWriter(file_path, engine='openpyxl') df = pd.DataFrame(data) if field_order: df = df[field_order] if sorting_fields: df", "of dict :param file_path: The full file path (appended with .xlsx) of the", "(optional, defaults to \"Sheet1\"). :type sheet_name: str :param field_order: List of keys from", "sheet_name else xl.sheet_names[0] return xl.parse(sheet_name, index_col=None).to_dict('records') def export_to_excel(data, file_path, sheet_name=\"Sheet1\", field_order=None, sorting_fields=None): \"\"\"", "file. The method creates this list of dictionaries via a Pandas dataframe. :param", "\"\"\" Writes data from a list of dictionaries to an Excel file, where", "the Excel file. :rtype: list of dict \"\"\" xl = pd.ExcelFile(file_path) sheet_name =", "exist. :type file_path: str :param sheet_name: Name of a particular sheet in the", "of dictionaries, each dictionary representing a row in the Excel file. :rtype: list", "right). Must include all keys/columns. Any keys omitted from the list will not", "path (appended with .xlsx) of the Excel file to be written to. This", "keys/columns. Any keys omitted from the list will not be written as columns.", "sheet_name: str :param field_order: List of keys from data ordered to match the", "Must include all keys/columns. Any keys omitted from the list will not be", "defaults to \"Sheet1\"). :type sheet_name: str :param field_order: List of keys from data", "will dictate the sorting order. :type sorting_fields: list of str :return: None \"\"\"", "Excel file). :type sheet_name: str :return: List of dictionaries, each dictionary representing a", "full file path (appended with .xlsx) of the Excel file to be loaded.", "file to load (optional, defaults to the first sheet in the Excel file).", "Excel column ordering (left to right). Must include all keys/columns. 
Any keys omitted", "a Pandas dataframe. :param file_path: The full file path (appended with .xlsx) of", "to be written to. This will overwrite data if both file_path and sheet_name", "\"\"\" xl = pd.ExcelFile(file_path) sheet_name = sheet_name if sheet_name else xl.sheet_names[0] return xl.parse(sheet_name,", "an Excel file, where each dictionary represents a row in the Excel file", "each column header in the Excel file. The method creates this list of", "dictionaries, each dictionary representing a row in the Excel file. :type data: list", "any length from 1 column to every column. The order of the list", "sheet_name: str :return: List of dictionaries, each dictionary representing a row in the", "else xl.sheet_names[0] return xl.parse(sheet_name, index_col=None).to_dict('records') def export_to_excel(data, file_path, sheet_name=\"Sheet1\", field_order=None, sorting_fields=None): \"\"\" Writes", "written to. This will overwrite data if both file_path and sheet_name already exist.", "Pandas dataframe. :param file_path: The full file path (appended with .xlsx) of the", "of keys from data ordered to match the intended Excel column ordering (left", ":type sheet_name: str :param field_order: List of keys from data ordered to match", "in the Excel file. :param data: List of dictionaries, each dictionary representing a", "pd.ExcelFile(file_path) sheet_name = sheet_name if sheet_name else xl.sheet_names[0] return xl.parse(sheet_name, index_col=None).to_dict('records') def export_to_excel(data,", "= pd.ExcelWriter(file_path, engine='openpyxl') df = pd.DataFrame(data) if field_order: df = df[field_order] if sorting_fields:", "str :return: List of dictionaries, each dictionary representing a row in the Excel", "str :param sheet_name: Name of a particular sheet in the file to write", "representing a row in the Excel file. :rtype: list of dict \"\"\" xl", "in the Excel file and the keys of each dictionary represent each column", "the Excel file. 
:param data: List of dictionaries, each dictionary representing a row", "data: List of dictionaries, each dictionary representing a row in the Excel file.", "each dictionary representing a row in the Excel file. :type data: list of", "path (appended with .xlsx) of the Excel file to be loaded. :type file_path:", "of a particular sheet in the file to load (optional, defaults to the", "column header in the Excel file. The method creates this list of dictionaries", "into a list of dictionaries, where each dictionary represents a row in the", "data from an Excel file into a list of dictionaries, where each dictionary", "List of keys from data to be used as sorting columns (small to", "and the keys of each dictionary represent each column header in the Excel", "already exist. :type file_path: str :param sheet_name: Name of a particular sheet in", "match the intended Excel column ordering (left to right). Must include all keys/columns.", "export_to_excel(data, file_path, sheet_name=\"Sheet1\", field_order=None, sorting_fields=None): \"\"\" Writes data from a list of dictionaries", "def export_to_excel(data, file_path, sheet_name=\"Sheet1\", field_order=None, sorting_fields=None): \"\"\" Writes data from a list of", "order. :type sorting_fields: list of str :return: None \"\"\" writer = pd.ExcelWriter(file_path, engine='openpyxl')", "file to write to (optional, defaults to \"Sheet1\"). :type sheet_name: str :param field_order:", "a particular sheet in the file to load (optional, defaults to the first", "file_path: The full file path (appended with .xlsx) of the Excel file to", "list of str :param sorting_fields: List of keys from data to be used", "str :param sorting_fields: List of keys from data to be used as sorting", "of dict \"\"\" xl = pd.ExcelFile(file_path) sheet_name = sheet_name if sheet_name else xl.sheet_names[0]", "ordered to match the intended Excel column ordering (left to right). 
Must include", "where each dictionary represents a row in the Excel file and the keys", "representing a row in the Excel file. :type data: list of dict :param", "of each dictionary represent each column header in the Excel file. :param data:", "dictate the sorting order. :type sorting_fields: list of str :return: None \"\"\" writer", "include all keys/columns. Any keys omitted from the list will not be written", "the file to load (optional, defaults to the first sheet in the Excel", ":type data: list of dict :param file_path: The full file path (appended with", "file_path, sheet_name=\"Sheet1\", field_order=None, sorting_fields=None): \"\"\" Writes data from a list of dictionaries to", "a particular sheet in the file to write to (optional, defaults to \"Sheet1\").", "list of dictionaries, where each dictionary represents a row in the Excel file", "if field_order: df = df[field_order] if sorting_fields: df = df.sort_values(sorting_fields) df.to_excel(writer, sheet_name=sheet_name, index=False)", "full file path (appended with .xlsx) of the Excel file to be written", "the Excel file. The method creates this list of dictionaries via a Pandas", "each dictionary represents a row in the Excel file and the keys of", "sheet_name already exist. :type file_path: str :param sheet_name: Name of a particular sheet", "(appended with .xlsx) of the Excel file to be written to. This will", "overwrite data if both file_path and sheet_name already exist. :type file_path: str :param", "= pd.ExcelFile(file_path) sheet_name = sheet_name if sheet_name else xl.sheet_names[0] return xl.parse(sheet_name, index_col=None).to_dict('records') def", "of keys from data to be used as sorting columns (small to large)", "both file_path and sheet_name already exist. :type file_path: str :param sheet_name: Name of", "file_path: str :param sheet_name: Name of a particular sheet in the file to", "data ordered to match the intended Excel column ordering (left to right). 
Must", "the keys of each dictionary represent each column header in the Excel file.", "the Excel file. :type data: list of dict :param file_path: The full file", "This will overwrite data if both file_path and sheet_name already exist. :type file_path:", "None \"\"\" writer = pd.ExcelWriter(file_path, engine='openpyxl') df = pd.DataFrame(data) if field_order: df =", "Can be any length from 1 column to every column. The order of", ":type sorting_fields: list of str :return: None \"\"\" writer = pd.ExcelWriter(file_path, engine='openpyxl') df", "column. The order of the list will dictate the sorting order. :type sorting_fields:", "data if both file_path and sheet_name already exist. :type file_path: str :param sheet_name:", "the first sheet in the Excel file). :type sheet_name: str :return: List of", ":return: None \"\"\" writer = pd.ExcelWriter(file_path, engine='openpyxl') df = pd.DataFrame(data) if field_order: df", ":param data: List of dictionaries, each dictionary representing a row in the Excel", "The method creates this list of dictionaries via a Pandas dataframe. :param file_path:", "of the Excel file to be written to. This will overwrite data if", "\"\"\" writer = pd.ExcelWriter(file_path, engine='openpyxl') df = pd.DataFrame(data) if field_order: df = df[field_order]", "Excel file, where each dictionary represents a row in the Excel file and", "str :param field_order: List of keys from data ordered to match the intended", "to \"Sheet1\"). :type sheet_name: str :param field_order: List of keys from data ordered", "file_path and sheet_name already exist. :type file_path: str :param sheet_name: Name of a", "column to every column. The order of the list will dictate the sorting", "dictionaries via a Pandas dataframe. 
:param file_path: The full file path (appended with", "dict :param file_path: The full file path (appended with .xlsx) of the Excel", "dictionaries, where each dictionary represents a row in the Excel file and the", "str :param sheet_name: Name of a particular sheet in the file to load", "creates this list of dictionaries via a Pandas dataframe. :param file_path: The full", "keys of each dictionary represent each column header in the Excel file. The", "field_order: df = df[field_order] if sorting_fields: df = df.sort_values(sorting_fields) df.to_excel(writer, sheet_name=sheet_name, index=False) writer.save()", "Excel file and the keys of each dictionary represent each column header in", "The full file path (appended with .xlsx) of the Excel file to be", "sheet_name: Name of a particular sheet in the file to load (optional, defaults", "of the Excel file to be loaded. :type file_path: str :param sheet_name: Name", "xl.sheet_names[0] return xl.parse(sheet_name, index_col=None).to_dict('records') def export_to_excel(data, file_path, sheet_name=\"Sheet1\", field_order=None, sorting_fields=None): \"\"\" Writes data", "header in the Excel file. :param data: List of dictionaries, each dictionary representing", "sheet_name: Name of a particular sheet in the file to write to (optional,", "dictionary represent each column header in the Excel file. :param data: List of", "from an Excel file into a list of dictionaries, where each dictionary represents", "to (optional, defaults to \"Sheet1\"). :type sheet_name: str :param field_order: List of keys", "written as columns. (optional) :type field_order: list of str :param sorting_fields: List of", "pd def load_from_excel(file_path, sheet_name=None): \"\"\" Loads the data from an Excel file into", "list of dict :param file_path: The full file path (appended with .xlsx) of", "the list will dictate the sorting order. :type sorting_fields: list of str :return:", "with .xlsx) of the Excel file to be loaded. 
:type file_path: str :param", "of dictionaries to an Excel file, where each dictionary represents a row in", "columns. (optional) :type field_order: list of str :param sorting_fields: List of keys from", "sheet_name=\"Sheet1\", field_order=None, sorting_fields=None): \"\"\" Writes data from a list of dictionaries to an", "this list of dictionaries via a Pandas dataframe. :param file_path: The full file", "xl = pd.ExcelFile(file_path) sheet_name = sheet_name if sheet_name else xl.sheet_names[0] return xl.parse(sheet_name, index_col=None).to_dict('records')", "to write to (optional, defaults to \"Sheet1\"). :type sheet_name: str :param field_order: List", "each dictionary representing a row in the Excel file. :rtype: list of dict", "sheet_name=None): \"\"\" Loads the data from an Excel file into a list of", "str :return: None \"\"\" writer = pd.ExcelWriter(file_path, engine='openpyxl') df = pd.DataFrame(data) if field_order:", "the intended Excel column ordering (left to right). Must include all keys/columns. Any", "file to be loaded. :type file_path: str :param sheet_name: Name of a particular", "of a particular sheet in the file to write to (optional, defaults to", "file, where each dictionary represents a row in the Excel file and the", "keys of each dictionary represent each column header in the Excel file. :param", "\"Sheet1\"). :type sheet_name: str :param field_order: List of keys from data ordered to", "will not be written as columns. (optional) :type field_order: list of str :param", "the Excel file to be loaded. :type file_path: str :param sheet_name: Name of", ":return: List of dictionaries, each dictionary representing a row in the Excel file.", "the file to write to (optional, defaults to \"Sheet1\"). :type sheet_name: str :param", "to be loaded. :type file_path: str :param sheet_name: Name of a particular sheet", "dictionaries to an Excel file, where each dictionary represents a row in the", "to right). Must include all keys/columns. 
Any keys omitted from the list will", "\"\"\" Loads the data from an Excel file into a list of dictionaries,", "of dictionaries via a Pandas dataframe. :param file_path: The full file path (appended", "list of dict \"\"\" xl = pd.ExcelFile(file_path) sheet_name = sheet_name if sheet_name else", "data: list of dict :param file_path: The full file path (appended with .xlsx)", "omitted from the list will not be written as columns. (optional) :type field_order:", ".xlsx) of the Excel file to be written to. This will overwrite data", "file. :param data: List of dictionaries, each dictionary representing a row in the", "all keys/columns. Any keys omitted from the list will not be written as", "sorting_fields: List of keys from data to be used as sorting columns (small", "of str :return: None \"\"\" writer = pd.ExcelWriter(file_path, engine='openpyxl') df = pd.DataFrame(data) if", "pd.DataFrame(data) if field_order: df = df[field_order] if sorting_fields: df = df.sort_values(sorting_fields) df.to_excel(writer, sheet_name=sheet_name,", "particular sheet in the file to write to (optional, defaults to \"Sheet1\"). :type", "as columns. (optional) :type field_order: list of str :param sorting_fields: List of keys", "be any length from 1 column to every column. The order of the", "each column header in the Excel file. :param data: List of dictionaries, each", "of dictionaries, where each dictionary represents a row in the Excel file and", "sheet in the Excel file). 
:type sheet_name: str :return: List of dictionaries, each", "xl.parse(sheet_name, index_col=None).to_dict('records') def export_to_excel(data, file_path, sheet_name=\"Sheet1\", field_order=None, sorting_fields=None): \"\"\" Writes data from a", "= pd.DataFrame(data) if field_order: df = df[field_order] if sorting_fields: df = df.sort_values(sorting_fields) df.to_excel(writer,", "return xl.parse(sheet_name, index_col=None).to_dict('records') def export_to_excel(data, file_path, sheet_name=\"Sheet1\", field_order=None, sorting_fields=None): \"\"\" Writes data from", "row in the Excel file. :rtype: list of dict \"\"\" xl = pd.ExcelFile(file_path)", "file. :rtype: list of dict \"\"\" xl = pd.ExcelFile(file_path) sheet_name = sheet_name if", "Excel file. :rtype: list of dict \"\"\" xl = pd.ExcelFile(file_path) sheet_name = sheet_name", "to the first sheet in the Excel file). :type sheet_name: str :return: List", "list of dictionaries to an Excel file, where each dictionary represents a row", ":type field_order: list of str :param sorting_fields: List of keys from data to", "List of keys from data ordered to match the intended Excel column ordering", "Excel file. The method creates this list of dictionaries via a Pandas dataframe.", "field_order=None, sorting_fields=None): \"\"\" Writes data from a list of dictionaries to an Excel", "particular sheet in the file to load (optional, defaults to the first sheet", "of each dictionary represent each column header in the Excel file. The method", "Excel file to be written to. This will overwrite data if both file_path", ":rtype: list of dict \"\"\" xl = pd.ExcelFile(file_path) sheet_name = sheet_name if sheet_name", "in the file to write to (optional, defaults to \"Sheet1\"). :type sheet_name: str", "(appended with .xlsx) of the Excel file to be loaded. :type file_path: str", "in the Excel file. :rtype: list of dict \"\"\" xl = pd.ExcelFile(file_path) sheet_name", "be used as sorting columns (small to large) in Excel. 
Can be any", "file path (appended with .xlsx) of the Excel file to be loaded. :type", "List of dictionaries, each dictionary representing a row in the Excel file. :rtype:", "not be written as columns. (optional) :type field_order: list of str :param sorting_fields:", "if sheet_name else xl.sheet_names[0] return xl.parse(sheet_name, index_col=None).to_dict('records') def export_to_excel(data, file_path, sheet_name=\"Sheet1\", field_order=None, sorting_fields=None):", "to be used as sorting columns (small to large) in Excel. Can be", "intended Excel column ordering (left to right). Must include all keys/columns. Any keys", ":param field_order: List of keys from data ordered to match the intended Excel", "from a list of dictionaries to an Excel file, where each dictionary represents", "sorting order. :type sorting_fields: list of str :return: None \"\"\" writer = pd.ExcelWriter(file_path,", "writer = pd.ExcelWriter(file_path, engine='openpyxl') df = pd.DataFrame(data) if field_order: df = df[field_order] if", "list will not be written as columns. (optional) :type field_order: list of str", "to large) in Excel. Can be any length from 1 column to every", "method creates this list of dictionaries via a Pandas dataframe. :param file_path: The", ":param sorting_fields: List of keys from data to be used as sorting columns", "data from a list of dictionaries to an Excel file, where each dictionary", "from 1 column to every column. The order of the list will dictate", "used as sorting columns (small to large) in Excel. Can be any length", "df = pd.DataFrame(data) if field_order: df = df[field_order] if sorting_fields: df = df.sort_values(sorting_fields)", "be loaded. :type file_path: str :param sheet_name: Name of a particular sheet in", "field_order: list of str :param sorting_fields: List of keys from data to be", "the Excel file). 
:type sheet_name: str :return: List of dictionaries, each dictionary representing", "to match the intended Excel column ordering (left to right). Must include all", "represent each column header in the Excel file. The method creates this list", "in the Excel file). :type sheet_name: str :return: List of dictionaries, each dictionary", "dictionary represent each column header in the Excel file. The method creates this", "of the list will dictate the sorting order. :type sorting_fields: list of str", "column ordering (left to right). Must include all keys/columns. Any keys omitted from", "header in the Excel file. The method creates this list of dictionaries via", "as sorting columns (small to large) in Excel. Can be any length from", "sheet_name if sheet_name else xl.sheet_names[0] return xl.parse(sheet_name, index_col=None).to_dict('records') def export_to_excel(data, file_path, sheet_name=\"Sheet1\", field_order=None,", "Writes data from a list of dictionaries to an Excel file, where each", "Excel file. :type data: list of dict :param file_path: The full file path", ":type file_path: str :param sheet_name: Name of a particular sheet in the file", "dictionary represents a row in the Excel file and the keys of each", "be written to. This will overwrite data if both file_path and sheet_name already", "of dictionaries, each dictionary representing a row in the Excel file. :type data:", "each dictionary represent each column header in the Excel file. The method creates", "first sheet in the Excel file). :type sheet_name: str :return: List of dictionaries,", "to. This will overwrite data if both file_path and sheet_name already exist. :type", "each dictionary represent each column header in the Excel file. :param data: List", "dictionaries, each dictionary representing a row in the Excel file. :rtype: list of", "in the Excel file. The method creates this list of dictionaries via a", "file. 
:type data: list of dict :param file_path: The full file path (appended", "Loads the data from an Excel file into a list of dictionaries, where" ]
[ "# Make this unique, and don't share it with anybody. SECRET_KEY = '<KEY>'", "'<EMAIL>'), ) MANAGERS = ADMINS # Make this unique, and don't share it", "= ( ('<NAME>', '<EMAIL>'), ) MANAGERS = ADMINS # Make this unique, and", "import * # noqa ADMINS = ( ('<NAME>', '<EMAIL>'), ) MANAGERS = ADMINS", ".base import * # noqa ADMINS = ( ('<NAME>', '<EMAIL>'), ) MANAGERS =", "MANAGERS = ADMINS # Make this unique, and don't share it with anybody.", "* # noqa ADMINS = ( ('<NAME>', '<EMAIL>'), ) MANAGERS = ADMINS #", "# pylint: disable=W0401,W0614,C0111 from .base import * # noqa ADMINS = ( ('<NAME>',", "<reponame>dupuy/ulm<gh_stars>1-10 # pylint: disable=W0401,W0614,C0111 from .base import * # noqa ADMINS = (", "from .base import * # noqa ADMINS = ( ('<NAME>', '<EMAIL>'), ) MANAGERS", ") MANAGERS = ADMINS # Make this unique, and don't share it with", "= ADMINS # Make this unique, and don't share it with anybody. SECRET_KEY", "( ('<NAME>', '<EMAIL>'), ) MANAGERS = ADMINS # Make this unique, and don't", "('<NAME>', '<EMAIL>'), ) MANAGERS = ADMINS # Make this unique, and don't share", "noqa ADMINS = ( ('<NAME>', '<EMAIL>'), ) MANAGERS = ADMINS # Make this", "ADMINS # Make this unique, and don't share it with anybody. SECRET_KEY =", "pylint: disable=W0401,W0614,C0111 from .base import * # noqa ADMINS = ( ('<NAME>', '<EMAIL>'),", "ADMINS = ( ('<NAME>', '<EMAIL>'), ) MANAGERS = ADMINS # Make this unique,", "disable=W0401,W0614,C0111 from .base import * # noqa ADMINS = ( ('<NAME>', '<EMAIL>'), )", "# noqa ADMINS = ( ('<NAME>', '<EMAIL>'), ) MANAGERS = ADMINS # Make" ]
[ "return(n) def func3(lst): for i in lst[:]: if int(i)!=i: return None elif i<0", "list(reversed(m))==h: n+=1 else: break return(n) def func3(lst): for i in lst[:]: if int(i)!=i:", "or int(a)!=a or int(b)!=b : return None else: d=1 for i in range(a,b):", "int(i)!=i: return None elif i<0 or i%3==0: lst.remove(i) lst.sort(reverse=True) return (lst) if __name__==\"__main__\":", "上传时间:2018/11/12 15:22:55 import math def func1(a,b): if a>=b or int(a)!=a or int(b)!=b :", "if int(a)!=a or int(b)!=b: return None else: n=0 for i in range(a,b+1): m=str(i)", "else: break return(n) def func3(lst): for i in lst[:]: if int(i)!=i: return None", "return(x) def func2(a,b): if int(a)!=a or int(b)!=b: return None else: n=0 for i", "in lst[:]: if int(i)!=i: return None elif i<0 or i%3==0: lst.remove(i) lst.sort(reverse=True) return", "i in range(a,b+1): m=str(i) h=list(m) if list(reversed(m))==h: n+=1 else: break return(n) def func3(lst):", "def func1(a,b): if a>=b or int(a)!=a or int(b)!=b : return None else: d=1", "学号:1827402009 # 姓名:肖鹏 # IP:192.168.157.232 # 上传时间:2018/11/12 15:22:55 import math def func1(a,b): if", "break return(x) def func2(a,b): if int(a)!=a or int(b)!=b: return None else: n=0 for", "15:22:55 import math def func1(a,b): if a>=b or int(a)!=a or int(b)!=b : return", "int(a)!=a or int(b)!=b : return None else: d=1 for i in range(a,b): d=d*i", "else: break return(x) def func2(a,b): if int(a)!=a or int(b)!=b: return None else: n=0", "# 上传时间:2018/11/12 15:22:55 import math def func1(a,b): if a>=b or int(a)!=a or int(b)!=b", "None else: n=0 for i in range(a,b+1): m=str(i) h=list(m) if list(reversed(m))==h: n+=1 else:", "def func2(a,b): if int(a)!=a or int(b)!=b: return None else: n=0 for i in", "for i in range(a,b+1): m=str(i) h=list(m) if list(reversed(m))==h: n+=1 else: break return(n) def", "for i in range(a,b): d=d*i c=len(str(d)) x=1 for j in range(1,x+1): if c%10**j==0:", "in range(a,b): d=d*i c=len(str(d)) x=1 for j in range(1,x+1): if c%10**j==0: x=x+1 else:", 
"i in range(a,b): d=d*i c=len(str(d)) x=1 for j in range(1,x+1): if c%10**j==0: x=x+1", "break return(n) def func3(lst): for i in lst[:]: if int(i)!=i: return None elif", "else: n=0 for i in range(a,b+1): m=str(i) h=list(m) if list(reversed(m))==h: n+=1 else: break", "n=0 for i in range(a,b+1): m=str(i) h=list(m) if list(reversed(m))==h: n+=1 else: break return(n)", "return None elif i<0 or i%3==0: lst.remove(i) lst.sort(reverse=True) return (lst) if __name__==\"__main__\": pass", "d=1 for i in range(a,b): d=d*i c=len(str(d)) x=1 for j in range(1,x+1): if", "import math def func1(a,b): if a>=b or int(a)!=a or int(b)!=b : return None", "# 学号:1827402009 # 姓名:肖鹏 # IP:192.168.157.232 # 上传时间:2018/11/12 15:22:55 import math def func1(a,b):", "def func3(lst): for i in lst[:]: if int(i)!=i: return None elif i<0 or", "c=len(str(d)) x=1 for j in range(1,x+1): if c%10**j==0: x=x+1 else: break return(x) def", "lst[:]: if int(i)!=i: return None elif i<0 or i%3==0: lst.remove(i) lst.sort(reverse=True) return (lst)", "int(b)!=b : return None else: d=1 for i in range(a,b): d=d*i c=len(str(d)) x=1", "in range(1,x+1): if c%10**j==0: x=x+1 else: break return(x) def func2(a,b): if int(a)!=a or", "int(a)!=a or int(b)!=b: return None else: n=0 for i in range(a,b+1): m=str(i) h=list(m)", "n+=1 else: break return(n) def func3(lst): for i in lst[:]: if int(i)!=i: return", "# 姓名:肖鹏 # IP:192.168.157.232 # 上传时间:2018/11/12 15:22:55 import math def func1(a,b): if a>=b", "range(1,x+1): if c%10**j==0: x=x+1 else: break return(x) def func2(a,b): if int(a)!=a or int(b)!=b:", "a>=b or int(a)!=a or int(b)!=b : return None else: d=1 for i in", "if int(i)!=i: return None elif i<0 or i%3==0: lst.remove(i) lst.sort(reverse=True) return (lst) if", "x=1 for j in range(1,x+1): if c%10**j==0: x=x+1 else: break return(x) def func2(a,b):", "in range(a,b+1): m=str(i) h=list(m) if list(reversed(m))==h: n+=1 else: break return(n) def func3(lst): for", "func3(lst): for i in lst[:]: if int(i)!=i: return None elif 
i<0 or i%3==0:", "m=str(i) h=list(m) if list(reversed(m))==h: n+=1 else: break return(n) def func3(lst): for i in", "姓名:肖鹏 # IP:192.168.157.232 # 上传时间:2018/11/12 15:22:55 import math def func1(a,b): if a>=b or", "math def func1(a,b): if a>=b or int(a)!=a or int(b)!=b : return None else:", "or int(b)!=b: return None else: n=0 for i in range(a,b+1): m=str(i) h=list(m) if", "c%10**j==0: x=x+1 else: break return(x) def func2(a,b): if int(a)!=a or int(b)!=b: return None", "if c%10**j==0: x=x+1 else: break return(x) def func2(a,b): if int(a)!=a or int(b)!=b: return", "None else: d=1 for i in range(a,b): d=d*i c=len(str(d)) x=1 for j in", "x=x+1 else: break return(x) def func2(a,b): if int(a)!=a or int(b)!=b: return None else:", "int(b)!=b: return None else: n=0 for i in range(a,b+1): m=str(i) h=list(m) if list(reversed(m))==h:", "range(a,b): d=d*i c=len(str(d)) x=1 for j in range(1,x+1): if c%10**j==0: x=x+1 else: break", "or int(b)!=b : return None else: d=1 for i in range(a,b): d=d*i c=len(str(d))", "return None else: d=1 for i in range(a,b): d=d*i c=len(str(d)) x=1 for j", "return None else: n=0 for i in range(a,b+1): m=str(i) h=list(m) if list(reversed(m))==h: n+=1", "range(a,b+1): m=str(i) h=list(m) if list(reversed(m))==h: n+=1 else: break return(n) def func3(lst): for i", "if list(reversed(m))==h: n+=1 else: break return(n) def func3(lst): for i in lst[:]: if", "i in lst[:]: if int(i)!=i: return None elif i<0 or i%3==0: lst.remove(i) lst.sort(reverse=True)", "IP:192.168.157.232 # 上传时间:2018/11/12 15:22:55 import math def func1(a,b): if a>=b or int(a)!=a or", "j in range(1,x+1): if c%10**j==0: x=x+1 else: break return(x) def func2(a,b): if int(a)!=a", "func2(a,b): if int(a)!=a or int(b)!=b: return None else: n=0 for i in range(a,b+1):", "<filename>examples/1827402009.py # 学号:1827402009 # 姓名:肖鹏 # IP:192.168.157.232 # 上传时间:2018/11/12 15:22:55 import math def", "if a>=b or int(a)!=a or int(b)!=b : return None else: d=1 for i", "d=d*i c=len(str(d)) x=1 for j in 
range(1,x+1): if c%10**j==0: x=x+1 else: break return(x)", ": return None else: d=1 for i in range(a,b): d=d*i c=len(str(d)) x=1 for", "# IP:192.168.157.232 # 上传时间:2018/11/12 15:22:55 import math def func1(a,b): if a>=b or int(a)!=a", "else: d=1 for i in range(a,b): d=d*i c=len(str(d)) x=1 for j in range(1,x+1):", "h=list(m) if list(reversed(m))==h: n+=1 else: break return(n) def func3(lst): for i in lst[:]:", "for i in lst[:]: if int(i)!=i: return None elif i<0 or i%3==0: lst.remove(i)", "for j in range(1,x+1): if c%10**j==0: x=x+1 else: break return(x) def func2(a,b): if", "func1(a,b): if a>=b or int(a)!=a or int(b)!=b : return None else: d=1 for" ]
[ "import json import requests def main(): DT_URL = sys.argv[1] DT_TOKEN = sys.argv[2] endpoint", "def main(): DT_URL = sys.argv[1] DT_TOKEN = sys.argv[2] endpoint = DT_URL + \"api/v1/problem/status\"", "DT_URL = sys.argv[1] DT_TOKEN = sys.argv[2] endpoint = DT_URL + \"api/v1/problem/status\" get_param =", "sys.argv[2] endpoint = DT_URL + \"api/v1/problem/status\" get_param = {'Accept':'application/json; charset=utf-8', 'Authorization':'Api-Token {}'.format(DT_TOKEN)} config_post", "<filename>dynatrace-scripts/checkforproblems.py import sys import json import requests def main(): DT_URL = sys.argv[1] DT_TOKEN", "endpoint = DT_URL + \"api/v1/problem/status\" get_param = {'Accept':'application/json; charset=utf-8', 'Authorization':'Api-Token {}'.format(DT_TOKEN)} config_post =", "DT_URL + \"api/v1/problem/status\" get_param = {'Accept':'application/json; charset=utf-8', 'Authorization':'Api-Token {}'.format(DT_TOKEN)} config_post = requests.get(endpoint, headers", "get_param = {'Accept':'application/json; charset=utf-8', 'Authorization':'Api-Token {}'.format(DT_TOKEN)} config_post = requests.get(endpoint, headers = get_param) jsonObj", "{'Accept':'application/json; charset=utf-8', 'Authorization':'Api-Token {}'.format(DT_TOKEN)} config_post = requests.get(endpoint, headers = get_param) jsonObj = applications", "jsonObj = applications = json.loads(config_post.text) problem = jsonObj[\"result\"][\"totalOpenProblemsCount\"] return problem if __name__==\"__main__\": val", "\"api/v1/problem/status\" get_param = {'Accept':'application/json; charset=utf-8', 'Authorization':'Api-Token {}'.format(DT_TOKEN)} config_post = requests.get(endpoint, headers = get_param)", "requests.get(endpoint, headers = get_param) jsonObj = applications = json.loads(config_post.text) problem = jsonObj[\"result\"][\"totalOpenProblemsCount\"] return", "= sys.argv[1] DT_TOKEN = sys.argv[2] endpoint = DT_URL + \"api/v1/problem/status\" get_param = {'Accept':'application/json;", "applications = 
json.loads(config_post.text) problem = jsonObj[\"result\"][\"totalOpenProblemsCount\"] return problem if __name__==\"__main__\": val = main()", "import requests def main(): DT_URL = sys.argv[1] DT_TOKEN = sys.argv[2] endpoint = DT_URL", "import sys import json import requests def main(): DT_URL = sys.argv[1] DT_TOKEN =", "= requests.get(endpoint, headers = get_param) jsonObj = applications = json.loads(config_post.text) problem = jsonObj[\"result\"][\"totalOpenProblemsCount\"]", "+ \"api/v1/problem/status\" get_param = {'Accept':'application/json; charset=utf-8', 'Authorization':'Api-Token {}'.format(DT_TOKEN)} config_post = requests.get(endpoint, headers =", "= DT_URL + \"api/v1/problem/status\" get_param = {'Accept':'application/json; charset=utf-8', 'Authorization':'Api-Token {}'.format(DT_TOKEN)} config_post = requests.get(endpoint,", "config_post = requests.get(endpoint, headers = get_param) jsonObj = applications = json.loads(config_post.text) problem =", "charset=utf-8', 'Authorization':'Api-Token {}'.format(DT_TOKEN)} config_post = requests.get(endpoint, headers = get_param) jsonObj = applications =", "= json.loads(config_post.text) problem = jsonObj[\"result\"][\"totalOpenProblemsCount\"] return problem if __name__==\"__main__\": val = main() exit(val)", "= get_param) jsonObj = applications = json.loads(config_post.text) problem = jsonObj[\"result\"][\"totalOpenProblemsCount\"] return problem if", "json import requests def main(): DT_URL = sys.argv[1] DT_TOKEN = sys.argv[2] endpoint =", "requests def main(): DT_URL = sys.argv[1] DT_TOKEN = sys.argv[2] endpoint = DT_URL +", "headers = get_param) jsonObj = applications = json.loads(config_post.text) problem = jsonObj[\"result\"][\"totalOpenProblemsCount\"] return problem", "= applications = json.loads(config_post.text) problem = jsonObj[\"result\"][\"totalOpenProblemsCount\"] return problem if __name__==\"__main__\": val =", "= {'Accept':'application/json; charset=utf-8', 'Authorization':'Api-Token 
{}'.format(DT_TOKEN)} config_post = requests.get(endpoint, headers = get_param) jsonObj =", "sys import json import requests def main(): DT_URL = sys.argv[1] DT_TOKEN = sys.argv[2]", "DT_TOKEN = sys.argv[2] endpoint = DT_URL + \"api/v1/problem/status\" get_param = {'Accept':'application/json; charset=utf-8', 'Authorization':'Api-Token", "get_param) jsonObj = applications = json.loads(config_post.text) problem = jsonObj[\"result\"][\"totalOpenProblemsCount\"] return problem if __name__==\"__main__\":", "main(): DT_URL = sys.argv[1] DT_TOKEN = sys.argv[2] endpoint = DT_URL + \"api/v1/problem/status\" get_param", "= sys.argv[2] endpoint = DT_URL + \"api/v1/problem/status\" get_param = {'Accept':'application/json; charset=utf-8', 'Authorization':'Api-Token {}'.format(DT_TOKEN)}", "'Authorization':'Api-Token {}'.format(DT_TOKEN)} config_post = requests.get(endpoint, headers = get_param) jsonObj = applications = json.loads(config_post.text)", "{}'.format(DT_TOKEN)} config_post = requests.get(endpoint, headers = get_param) jsonObj = applications = json.loads(config_post.text) problem", "sys.argv[1] DT_TOKEN = sys.argv[2] endpoint = DT_URL + \"api/v1/problem/status\" get_param = {'Accept':'application/json; charset=utf-8'," ]
[ "list routes URLs to views. For more information please see: https://docs.djangoproject.com/en/1.10/topics/http/urls/ Examples: Function", "urlpatterns: url(r'^$', views.home, name='home') Class-based views 1. Add an import: from other_app.views import", "to urlpatterns: url(r'^blog/', include('blog.urls')) \"\"\" from django.conf.urls import url from django.contrib import admin", "url(r'^$', Home.as_view(), name='home') Including another URLconf 1. Import the include() function: from django.conf.urls", "complaint.views import resolved from complaint.views import detail urlpatterns = [ url(r'^admin/', admin.site.urls), url(r'^show/$',show_complaints),", "a URL to urlpatterns: url(r'^$', Home.as_view(), name='home') Including another URLconf 1. Import the", "import url from django.contrib import admin from complaint.views import show_complaints from complaint.views import", "django.conf.urls import url from django.contrib import admin from complaint.views import show_complaints from complaint.views", "Examples: Function views 1. Add an import: from my_app import views 2. Add", "import views as auth_views from complaint.views import reject from complaint.views import index from", "1. Add an import: from other_app.views import Home 2. Add a URL to", "django.contrib import admin from complaint.views import show_complaints from complaint.views import reject, signup from", "other_app.views import Home 2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home') Including", "1. Import the include() function: from django.conf.urls import url, include 2. Add a", "https://docs.djangoproject.com/en/1.10/topics/http/urls/ Examples: Function views 1. Add an import: from my_app import views 2.", "import Home 2. 
Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home') Including another", "signup from django.contrib.auth import views as auth_views from complaint.views import reject from complaint.views", "index from complaint.views import resolved from complaint.views import detail urlpatterns = [ url(r'^admin/',", "from django.conf.urls import url from django.contrib import admin from complaint.views import show_complaints from", "views. For more information please see: https://docs.djangoproject.com/en/1.10/topics/http/urls/ Examples: Function views 1. Add an", "URLs to views. For more information please see: https://docs.djangoproject.com/en/1.10/topics/http/urls/ Examples: Function views 1.", "= [ url(r'^admin/', admin.site.urls), url(r'^show/$',show_complaints), url(r'^reject/complaint/(\\d{1,2})/$',reject), url(r'^register/$',signup), url(r'^$', auth_views.login, name='login'), url(r'^logout/$', auth_views.logout, name='logout'),", "import: from other_app.views import Home 2. Add a URL to urlpatterns: url(r'^$', Home.as_view(),", "\"\"\"WIP URL Configuration The `urlpatterns` list routes URLs to views. For more information", "Import the include() function: from django.conf.urls import url, include 2. Add a URL", "import reject from complaint.views import index from complaint.views import resolved from complaint.views import", "include('blog.urls')) \"\"\" from django.conf.urls import url from django.contrib import admin from complaint.views import", "from complaint.views import reject from complaint.views import index from complaint.views import resolved from", "a URL to urlpatterns: url(r'^$', views.home, name='home') Class-based views 1. Add an import:", "Configuration The `urlpatterns` list routes URLs to views. 
For more information please see:", "import resolved from complaint.views import detail urlpatterns = [ url(r'^admin/', admin.site.urls), url(r'^show/$',show_complaints), url(r'^reject/complaint/(\\d{1,2})/$',reject),", "Add an import: from other_app.views import Home 2. Add a URL to urlpatterns:", "reject, signup from django.contrib.auth import views as auth_views from complaint.views import reject from", "from complaint.views import reject, signup from django.contrib.auth import views as auth_views from complaint.views", "url(r'^$', views.home, name='home') Class-based views 1. Add an import: from other_app.views import Home", "Add a URL to urlpatterns: url(r'^$', views.home, name='home') Class-based views 1. Add an", "please see: https://docs.djangoproject.com/en/1.10/topics/http/urls/ Examples: Function views 1. Add an import: from my_app import", "URL to urlpatterns: url(r'^$', views.home, name='home') Class-based views 1. Add an import: from", "more information please see: https://docs.djangoproject.com/en/1.10/topics/http/urls/ Examples: Function views 1. Add an import: from", "\"\"\" from django.conf.urls import url from django.contrib import admin from complaint.views import show_complaints", "show_complaints from complaint.views import reject, signup from django.contrib.auth import views as auth_views from", "from complaint.views import resolved from complaint.views import detail urlpatterns = [ url(r'^admin/', admin.site.urls),", "complaint.views import show_complaints from complaint.views import reject, signup from django.contrib.auth import views as", "import: from my_app import views 2. Add a URL to urlpatterns: url(r'^$', views.home,", "a URL to urlpatterns: url(r'^blog/', include('blog.urls')) \"\"\" from django.conf.urls import url from django.contrib", "Add an import: from my_app import views 2. 
Add a URL to urlpatterns:", "import show_complaints from complaint.views import reject, signup from django.contrib.auth import views as auth_views", "my_app import views 2. Add a URL to urlpatterns: url(r'^$', views.home, name='home') Class-based", "url from django.contrib import admin from complaint.views import show_complaints from complaint.views import reject,", "url(r'^admin/', admin.site.urls), url(r'^show/$',show_complaints), url(r'^reject/complaint/(\\d{1,2})/$',reject), url(r'^register/$',signup), url(r'^$', auth_views.login, name='login'), url(r'^logout/$', auth_views.logout, name='logout'), url(r'^complaint/$',index), url(r'^resolved/complaint/(\\d{1,2})/$',resolved),", "2. Add a URL to urlpatterns: url(r'^$', views.home, name='home') Class-based views 1. Add", "to urlpatterns: url(r'^$', Home.as_view(), name='home') Including another URLconf 1. Import the include() function:", "URL to urlpatterns: url(r'^$', Home.as_view(), name='home') Including another URLconf 1. Import the include()", "URL Configuration The `urlpatterns` list routes URLs to views. For more information please", "views as auth_views from complaint.views import reject from complaint.views import index from complaint.views", "auth_views from complaint.views import reject from complaint.views import index from complaint.views import resolved", "URLconf 1. Import the include() function: from django.conf.urls import url, include 2. Add", "2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home') Including another URLconf 1.", "url(r'^blog/', include('blog.urls')) \"\"\" from django.conf.urls import url from django.contrib import admin from complaint.views", "reject from complaint.views import index from complaint.views import resolved from complaint.views import detail", "an import: from my_app import views 2. 
"""URL Configuration

The `urlpatterns` list routes URLs to views. For more information please see:
    https://docs.djangoproject.com/en/1.10/topics/http/urls/
Examples:
Function views
    1. Add an import:  from my_app import views
    2. Add a URL to urlpatterns:  url(r'^$', views.home, name='home')
Class-based views
    1. Add an import:  from other_app.views import Home
    2. Add a URL to urlpatterns:  url(r'^$', Home.as_view(), name='home')
Including another URLconf
    1. Import the include() function: from django.conf.urls import url, include
    2. Add a URL to urlpatterns:  url(r'^blog/', include('blog.urls'))
"""
from django.conf.urls import url
from django.contrib import admin
from django.contrib.auth import views as auth_views
from complaint.views import show_complaints, reject, signup, index, resolved, detail

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^show/$', show_complaints),
    url(r'^reject/complaint/(\d{1,2})/$', reject),
    url(r'^register/$', signup),
    url(r'^$', auth_views.login, name='login'),
    url(r'^logout/$', auth_views.logout, name='logout'),
    url(r'^complaint/$', index),
    url(r'^resolved/complaint/(\d{1,2})/$', resolved),
    url(r'^complaint/(\d{1,2})/', detail),
]
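The reject, resolved, and detail routes above capture a one- or two-digit complaint id with `(\d{1,2})`; Django passes each capture group to the view as a positional string argument. A minimal stdlib sketch of that matching behavior (the pattern is taken from the urlpatterns; the test paths are illustrative):

```python
import re

# The pattern Django compiles for url(r'^reject/complaint/(\d{1,2})/$', reject).
# The single capture group becomes the positional argument passed to the view,
# always as a string (the view must int() it itself).
pattern = re.compile(r'^reject/complaint/(\d{1,2})/$')

match = pattern.match('reject/complaint/42/')
complaint_id = match.group(1)   # the view receives '42'

# IDs longer than two digits do not match and fall through to later
# urlpatterns (or a 404 if nothing else matches).
assert pattern.match('reject/complaint/123/') is None
```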
[ "for param in divisionParams: divisionPoints.append(pointCurve.PointAt(param)) planePoints = divisionPoints try: for point in planePoints:", "0: value = None if len(valueList) == 1: value = valueList[0] elif len(valueList)", "False #Check the inputs and make sure that we have everything that we", "that the blinds are always drawn\" else: schedule= blindsSchedule_.upper() if schedule!=None and not", "\"The surface must not be curved. With the way that we mesh curved", "checkSameType = False warning = \"This component currently only supports inputs that are", "= rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient = True maxXYPt = rc.Geometry.Point3d.Subtract(maxXYPt, tolVec) minXYPt = rc.Geometry.Point3d.Subtract(minXYPt,", "be assigned to this window.\" windowNames.append(winNames) windowSrfs.append(winBreps) elif object.objectType == \"HBSurface\": isZoneList.append(0) warning", "warning = \"Not all of the connected zoneData has a Ladybug/Honeybee header on", "else: try: blindsMaterial = deconstructBlindMaterial(blindsMaterial_) except: checkData5 = False warning = 'Blinds material", "Energy | Energy\" #compatibleHBVersion = VER 0.0.55\\nAUG_25_2014 #compatibleLBVersion = VER 0.0.58\\nAUG_20_2014 try: ghenv.Component.AdditionalHelpFromDocStrings", "if _depth == []: checkData2 = False print \"You must provide a depth", "a blinds material connected and, if not, set a default. 
checkData5 = True", "angle if shdAngle != None: if horOrVertical == True or horOrVertical == None:", "+ ', !- Back Side Slat Diffuse Solar Reflectance\\n' + \\ '\\t' +", "depth, numShds, distBtwn): rotationAngle_ = 0 # import the classes lb_preparation = sc.sticky[\"ladybug_Preparation\"]()", "= rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z) maxXYPt = bbox.Corner(True, False, True) maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y,", "\"key:location/dataType/units/frequency/startsAt/endsAt\": checkHeader.append(1) dataHeaders.append(list[:7]) dataNumbers.append(list[7:]) allHeaders.append(dataHeaders) allNumbers.append(dataNumbers) if sum(checkHeader) == len(branch):pass else: checkData3 =", "degrees. If applied to windows facing East or West, tilting the shades like", "| Energy | Energy\" #compatibleHBVersion = VER 0.0.55\\nAUG_25_2014 #compatibleLBVersion = VER 0.0.58\\nAUG_20_2014 try:", "tolVec) norOrient = False if tolVec.X < 0 and tolVec.Y < 0: tolVec", "+ \\ '\\t' + schedCntrlType + ', !- Shading Control Type\\n' + \\", "'Yes' EPBlindControl = 'WindowProperty:ShadingControl,\\n' + \\ '\\t' + 'BlindCntrlFor_' + name +', !-", "else: print \"You should first let both Honeybee and Ladybug fly...\" ghenv.Component.AddRuntimeMessage(w, \"You", "North direction is set to the Y-axis (0 degrees). _depth: A number representing", "= [] planes = [] pointCurve = rc.Geometry.Curve.CreateControlPointCurve([maxXYPt, minXYPt]) divisionParams = pointCurve.DivideByLength(shadingHeight, True)", "Import idf component will not align correctly with the EP Result data. 
blindsMaterial_:", "createEPBlindMat(blindsMaterial, EPSlatOrientList[count], depthList[count], shadingHeightList[count], EPshdAngleList[count], distToGlassList[count], windowObj.name) windowObj.shadingControl = createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExterList[count], windowObj.name)", "This header is necessary for data input to this component.\" print warning ghenv.Component.AddRuntimeMessage(w,", "to generate shades for Honeybee zone windows. The component has two main uses:", "depths greater than 1. HBObjWShades will not be generated. shadeBreps will still be", "values for depths will assign each value of the list as follows: item", "= bbox.Corner(False, False, True) maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z) centerPt = bbox.Center #Test", "name + ', !- Name\\n' + \\ '\\t' + EPSlatOrient + ', !-", "_depth == []: checkData2 = False print \"You must provide a depth for", "tolVec.X < 0 and tolVec.Y < 0: tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient =", "0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) elif horOrVertical == False: planeVec = rc.Geometry.Vector3d.ZAxis if", "- Provided by Honeybee 0.0.55 Args: _HBObjects: The HBZones out of any of", "ghenv.Component.AddRuntimeMessage(w, warning) #Check the depth and the shadingHeight to see if E+ will", "be plugged into a shade benefit evaulation as each window is its own", "the shade to be generated on each window. 
You can also input lists", "= 'InteriorBlind' else: EPinteriorOrExter = 'ExteriorBlind' #Generate the shade curves based on the", "to be used to generate the planes planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796,", "warning) return shadingSurfaces, EPSlatOrient, depth, shadingHeight, EPshdAngle, EPDistToGlass, EPinteriorOrExter, assignEPCheckInit def deconstructBlindMaterial(material): matLines", "planes planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) #Define a bounding box for", "object.BC != 'Outdoors': assignEPCheck = False warning = \"The boundary condition of the", "both Honeybee and Ladybug fly...\" ghenv.Component.AddRuntimeMessage(w, \"You should first let both Honeybee and", "Shading Device Material Name\\n' + \\ '\\t' + 'FixedSlatAngle, !- Type of Slat", "tolVec.X > 0 and tolVec.Y > 0: tolVec = rc.Geometry.Vector3d.Multiply(1, tolVec) norOrient =", "planes planeOrigins = [] planes = [] X, Y, z = minZPt.X, minZPt.Y,", "90 that represents an angle in degrees to rotate the shades. The default", "make North. The default North direction is set to the Y-axis (0 degrees).", "as each window is its own branch of a grasshopper data tree. shadeBreps:", "\\ '\\t' + '180; !- Maximum Slat Angle {deg}\\n' return EPBlindMat def createEPBlindControl(blindsMaterial,", "a true North direction or a number between 0 and 360 that represents", "= 90 + (EPshdAngleInint)*-1 if EPshdAngle > 180 or EPshdAngle < 0: warning", "EPDistToGlass = 0.01 elif EPDistToGlass > 1: warning = \"The input distToGlass_ value", "Rhino as rc import rhinoscriptsyntax as rs import scriptcontext as sc import uuid", "EnergyPlus blinds material and assign it to the windows with shades. if assignEPCheck", "3 = east depth. Lists of vectors to be shaded can also be", "beam gain for a shade benefit simulation with the generated shades. 
zoneData3_: Optional", "gain for a shade benefit simulation with the generated shades. zoneData3_: Optional zone", "testVec.IsParallelTo(planeVec) == 0: minXYPt = bbox.Corner(False, True, True) minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z)", "\"No value is connected for number of shades. The component will be run", "rc.Geometry.Vector3d.ZAxis)) except: # single shading planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(maxZPt), rc.Geometry.Vector3d.ZAxis)) # sort the planes sortedPlanes =", "to be simulated). ---------------: ... windowBreps: Breps representing each window of the zone.", "input object must be outdoors. E+ cannot create shades for intdoor windows.\" print", "False #Create a Python list from the input data trees. def makePyTree(zoneData): dataPyList", "\\ '\\t' + '0.5, !- Blind Top Opening Multiplier\\n' + \\ '\\t' +", "enumerate(shadings): for brep in brepList: shadeBreps.Add(brep, GH_Path(count)) for treeCount, finalTree in enumerate(alignedDataTree): if", "Multiplier\\n' + \\ '\\t' + '0.5, !- Blind Left Side Opening Multiplier\\n' +", "def getAngle2North(normalVector): if north_ != None and north_.IsValid(): northVector = north_ else:northVector =", "window above. zoneData3Tree: Data trees of the izoneData3_, which align with the branches", "'OUTDOORS' if object.hasChild: if object.BC != 'OUTDOORS' and object.BC != 'Outdoors': assignEPCheck =", "material of 0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness, 221", "checkData3 == True and checkData4 == True and checkData5 == True: checkData =", "= schedule schedCntrlType = 'OnIfScheduleAllows' schedCntrl = 'Yes' EPBlindControl = 'WindowProperty:ShadingControl,\\n' + \\", "EPinteriorOrExterList[count], windowObj.name) windowObj.shadingControlName = 'BlindCntrlFor_' + windowObj.name windowObj.shadingSchName = schedule ModifiedHBZones = hb_hive.addToHoneybeeHive(HBZoneObjects,", "shades. 
The component will be run with one shade per window.\" else: numOfShd", "360 that represents the degrees off from the y-axis to make North. The", "reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK conductivity.\" blindsMaterial =", "\"09 | Energy | Energy\" #compatibleHBVersion = VER 0.0.55\\nAUG_25_2014 #compatibleLBVersion = VER 0.0.58\\nAUG_20_2014", "the shades, move them along the normal vector. if distToGlass != None: transVec", "again!\" return -1 shadingSurfaces =[] #Define a function that can get the angle", "let in more winter sun than summer sun. If you have horizontal shades,", "rc.Geometry.Vector3d.ZAxis) normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) elif horOrVertical == False: planeVec = rc.Geometry.Vector3d.ZAxis if getAngle2North(normalVectorPerp) <", "if not hasattr(object, 'angle2North'): # find the type based on object.getAngle2North() if not", "orientations based on cardinal direction. shdAngle_: A number between -90 and 90 that", "downward. You can also put in lists of angles to assign different shade", "print \"Couldn't find the normal of the shading surface.\" + \\ \"\\nRebuild the", "west depth, item 2 = south depth, item 3 = east depth. Lists", "Slat Beam Solar Reflectance\\n' + \\ '\\t' + ', !- Slat Diffuse Solar", "and lower the blinds. If no value is connected here, the blinds will", "shades. 
if assignEPCheck == True: for count, windowObj in enumerate(windowObjects): windowObj.blindsMaterial = createEPBlindMat(blindsMaterial,", "= rc.Geometry.Point3d.Subtract(maxXYPt, tolVec) minXYPt = rc.Geometry.Point3d.Subtract(minXYPt, tolVec) #glazing distance glzHeight = minXYPt.DistanceTo(maxXYPt) #", "Beam Visible Reflectance\\n' + \\ '\\t' + ', !- Back Side Slat Beam", "tolVec = rc.Geometry.Vector3d.Multiply(sc.doc.ModelAbsoluteTolerance*2, tolVec) if tolVec.X > 0 and tolVec.Y > 0: tolVec", "tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient = True else: tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient", "or all HBSrfs but not both. For now, just grab another component for", "zHeights: planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(X, Y, Z), rc.Geometry.Vector3d.ZAxis)) except: # single shading planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(maxZPt), rc.Geometry.Vector3d.ZAxis)) # sort", "in HBZoneObjects: if object.objectType == \"HBZone\": isZoneList.append(1) zoneNames.append(object.name) winBreps = [] winNames =", "intCrvs: try: shdSrf = rc.Geometry.Surface.CreateExtrusion(c, float(depth) * normalVectorPerp).ToBrep() shadingSurfaces.append(shdSrf) except: pass #If the", "a material of 0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness,", "is facing the outdoors in order to be sure that your shades are", "== True: for childSrf in srf.childSrfs: windowObjects.append(childSrf) winNames.append(childSrf.name) winBreps.append(childSrf.geometry) else: print \"One surface", "will assign different orientations based on cardinal direction. shdAngle_: A number between -90", "def makeShade(_glzSrf, depth, numShds, distBtwn): rotationAngle_ = 0 # import the classes lb_preparation", "window of the zone. These can be plugged into a shade benefit evaulation", "window. 
If there's no header, the data cannot be coordinated with this component.", "Name\\n' + \\ '\\t' + ', !- Setpoint {W/m2, W or deg C}\\n'", "HBZones_ that will be aligned with the generated windows. Use this to align", "allData = [] allData.append(makePyTree(zoneData1_)) allData.append(makePyTree(zoneData2_)) allData.append(makePyTree(zoneData3_)) #Test to see if the data lists", "based on the planes and extrusion vectors if intCrvs !=[]: for c in", "a Python list from the input data trees. def makePyTree(zoneData): dataPyList = []", "are of the same type. checkSameType = True if sum(isZoneList) == len(_HBObjects): isZone", "blinds schedule connected and, if not, set a default. checkData4 = True if", "windows with shades. if assignEPCheck == True: for count, windowObj in enumerate(windowObjects): windowObj.blindsMaterial", "lists of horOrVertical_ input, which will assign different orientations based on cardinal direction.", "1 if numOfShds == 0 or distBetween == 0: sortedPlanes = [] elif", "\\ '\\t' + 'No, !- Glare Control Is Active\\n' + \\ '\\t' +", "depthList[count], shadingHeightList[count], EPshdAngleList[count], distToGlassList[count], windowObj.name) windowObj.shadingControl = createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExterList[count], windowObj.name) windowObj.shadingControlName =", "== True and sc.sticky.has_key('ladybug_release') == True: #Import the classes hb_EPZone = sc.sticky[\"honeybee_EPZone\"] hb_EPSrf", "curved surfaces for E+, the program would just freak out with blinds.\" print", "with this component. 
checkData3 = True checkBranches = [] allHeaders = [] allNumbers", "rs.frange(0, 360, 360/len(valueList)) for an in initAngles: angles.append(an-(360/(2*len(valueList)))) angles.append(360) for angleCount in range(len(angles)-1):", "to the \"zoneData\" inputs and use the output \"zoneDataTree\" in the shade benefit", "shadingHeight = glzHeight/numOfShd shadingRemainder = shadingHeight except: shadingHeight = distBetween shadingRemainder = (((glzHeight/distBetween)", "Shading Control Type\\n' + \\ '\\t' + schedName + ', !- Schedule Name\\n'", "or distBetween == 0: sortedPlanes = [] elif horOrVertical == True: # Horizontal", "can also put in lists of angles to assign different shade angles to", "reflect, transmit, emiss, thickness, conduct] def createEPBlindMat(blindsMaterial, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, name):", "number between -90 and 90 that represents an angle in degrees to rotate", "assign different numbers of shades to different directions. numShds = getValueBasedOnOrientation(numShds) # If", "see if the data lists have a headers on them, which is necessary", "the EnergyPlus distance to glass. EPDistToGlass = distToGlass + (depth)*(0.5)*math.cos(math.radians(EPshdAngle)) if EPDistToGlass <", "str(blindsMaterial[3]) + ', !- Back Side Slat Infrared Hemispherical Emissivity\\n' + \\ '\\t'", "for twig in branch: zoneData1Tree.Add(twig, GH_Path(bCount)) elif treeCount == 1: for bCount, branch", "if header[2].split(' for ')[-1] == zoneName.upper(): alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount]) zoneData = True #Test to see", "outdoors in order to be sure that your shades are previewing correctly.\" print", "schedName = schedule schedCntrlType = 'OnIfScheduleAllows' schedCntrl = 'Yes' EPBlindControl = 'WindowProperty:ShadingControl,\\n' +", "angle in degrees to rotate the shades. 
The default is set to \"0\"", "sc.sticky.has_key('honeybee_release') == True and sc.sticky.has_key('ladybug_release') == True: #Import the classes hb_EPZone = sc.sticky[\"honeybee_EPZone\"]", "checkData4 = False elif schedule!=None and schedule.lower().endswith(\".csv\"): # check if csv file is", "above. zoneData2Tree: Data trees of the zoneData2_, which align with the branches for", "a.Origin.Z) elif horOrVertical == False: # Vertical # Define a vector to be", "windowName = windowNames[zoneCount][windowCount] windowNamesFinal.append(windowName) for inputDataTreeCount, branch in enumerate(allHeaders): #Test to see if", "tolVec) norOrient = False if tolVec.X < 0 and tolVec.Y > 0: tolVec", "ghenv.Component.AddRuntimeMessage(w, warning) elif object.isPlanar == False: assignEPCheck = False warning = \"The surface", "---------------: ... zoneData1Tree: Data trees of the zoneData1_, which align with the branches", "= 'WindowProperty:ShadingControl,\\n' + \\ '\\t' + 'BlindCntrlFor_' + name +', !- Name\\n' +", "for list in branch: if str(list[0]) == \"key:location/dataType/units/frequency/startsAt/endsAt\": checkHeader.append(1) dataHeaders.append(list[:7]) dataNumbers.append(list[7:]) allHeaders.append(dataHeaders) allNumbers.append(dataNumbers)", "different interiorOrExterior_ to different directions. interiorOrExter = getValueBasedOnOrientation(interiorOrExter_) #If multiple distToGlass_ inputs are", "EPshdAngleInint else: EPshdAngle = 90 + (EPshdAngleInint)*-1 if EPshdAngle > 180 or EPshdAngle", "drawn\" else: schedule= blindsSchedule_.upper() if schedule!=None and not schedule.lower().endswith(\".csv\") and schedule not in", "= 1 if numOfShds == 0 or distBetween == 0: sortedPlanes = []", "= 'Horizontal' else: EPSlatOrient = 'Vertical' if interiorOrExter == True: EPinteriorOrExter = 'InteriorBlind'", "the Y-axis (0 degrees). 
_depth: A number representing the depth of the shade", "warning) #Check if there is a blinds schedule connected and, if not, set", "rc.Geometry.Vector3d.YAxis angle = rc.Geometry.Vector3d.VectorAngle(northVector, normalVector, rc.Geometry.Plane.WorldXY) finalAngle = math.degrees(angle) return finalAngle # Define", "horizontal or vertical to different directions. horOrVertical = getValueBasedOnOrientation(horOrVertical_) # If multiple shdAngle_", "Top Opening Multiplier\\n' + \\ '\\t' + ', !- Blind Bottom Opening Multiplier\\n'", "\\ '\\t' + ', !- Setpoint {W/m2, W or deg C}\\n' + \\", "or beam gain for a shade benefit simulation with the generated shades. zoneData2_:", "the center # note2developer: there might be cases that the surface is not", "the shades.\" ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, \"You must provide a depth for the shades.\") #Check if", "simulation with the generated shades. zoneData2_: Optional zone data for the HBZones_ that", "Name\\n' + \\ '\\t' + EPSlatOrient + ', !- Slat Orientation\\n' + \\", "EPshdAngle, distToGlass, name): EPBlindMat = \"WindowMaterial:Blind,\\n\" + \\ '\\t' + blindsMaterial[0] + \"_\"", "or beam gain for a shade benefit simulation with the generated shades. zoneData3_:", "Grasshopper from the Import idf component will not align correctly with the EP", "schedule not in HBScheduleList: msg = \"Cannot find \" + schedule + \"", "True if blindsMaterial_ == None: print \"No blinds material has been connected. 
A", "warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) if shadingHeight > 1: assignEPCheckInit = False warning = \"Note", "baseSrfCenPt = _glzSrf.ClosestPoint(baseSrfCenPt) bool, centerPtU, centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt) if bool: normalVector = _glzSrf.Faces[0].NormalAt(centerPtU,", "isZoneList.append(1) zoneNames.append(object.name) winBreps = [] winNames = [] for srf in object.surfaces: if", "print msg ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg) checkData4 = False #Create a Python list from the", "math.floor(glzHeight/distBetween))*distBetween) if shadingRemainder == 0: shadingRemainder = shadingHeight # find shading base planes", "0: tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient = True else: tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec)", "True else: checkData = False return checkData, windowNamesFinal, windowBrepsFinal, _depth, alignedDataTree, numOfShd, blindsMaterial,", "valid blinds material from the \"Honeybee_EnergyPlus Blinds Material\" component.' print warning ghenv.Component.AddRuntimeMessage(w, warning)", "be cases that the surface is not planar and # the normal is", "getValueBasedOnOrientation(horOrVertical_) # If multiple shdAngle_ inputs are given, use it to split up", "representing each shade of the window. These can be plugged into a shade", "EP versions of some of the outputs. EPshdAngleInint = angleFromNorm+shdAngle if EPshdAngleInint >=", "with one shade per window.\" else: numOfShd = _numOfShds #Check the depths. checkData2", "Python list from the input data trees. 
def makePyTree(zoneData): dataPyList = [] for", "Blind to Glass Distance {m}\\n' + \\ '\\t' + '0.5, !- Blind Top", "and object.BC != 'Outdoors': assignEPCheck = False warning = \"The boundary condition of", "0.0.55 Args: _HBObjects: The HBZones out of any of the HB components that", "maxXYPt = bbox.Corner(True, False, True) maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z) #Adjust the points", "print warning ghenv.Component.AddRuntimeMessage(w, warning) else: for childSrf in object.childSrfs: windowObjects.append(childSrf) windowNames.append([childSrf.name]) windowSrfs.append([childSrf.geometry]) #Make", "gain synced with that of the zones and windows. For this, you would", "'OUTDOORS' or srf.BC == 'Outdoors': if srf.isPlanar == True: for childSrf in srf.childSrfs:", "minZPt = rc.Geometry.Point3d(minZPt.X, minZPt.Y, minZPt.Z) maxZPt = bbox.Corner(False, True, False) maxZPt = rc.Geometry.Point3d(maxZPt.X,", "default North direction is set to the Y-axis (0 degrees). _depth: A number", "warning = \"The surface must not be curved. With the way that we", "!= 'OUTDOORS' and object.BC != 'Outdoors': assignEPCheck = False warning = \"The boundary", "objects from the hive. HBZoneObjects = hb_hive.callFromHoneybeeHive(_HBObjects) #Find out what the object is", "EnergyPlus to crash.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) #Check the depth and the shadingHeight", "and assign different shdAngle_ to different directions. shdAngle = getValueBasedOnOrientation(shdAngle_) #If multiple interiorOrExter_", "float(matLines[6].split(';')[0]) return [name, reflect, transmit, emiss, thickness, conduct] def createEPBlindMat(blindsMaterial, EPSlatOrient, depth, shadingHeight,", "# If multiple number of shade inputs are given, use it to split", "material is connected here, the component will automatically assign a material of 0.65", "each window. 
You can also input lists of depths, which will assign different", "False warning = \"The boundary condition of the input object must be outdoors.", "ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) if not hasattr(object, 'type'): # find the type based on object.type", "shades, use this to rotate them towards the South by a certain value", "+ \\ '\\t' + 'No, !- Glare Control Is Active\\n' + \\ '\\t'", "been run. In this case, the component helps keep the data tree paths", "= False warning = \"Note that E+ does not like distances between shades", "from Grasshopper import DataTree from Grasshopper.Kernel.Data import GH_Path import Rhino as rc import", "by a certain value in degrees. If applied to windows facing East or", "split up the glazing by cardinal direction and assign different interiorOrExterior_ to different", "into Grasshopper from the Import idf component will not align correctly with the", "hb_EPFenSurface = sc.sticky[\"honeybee_EPFenSurface\"] hb_hive = sc.sticky[\"honeybee_Hive\"]() #Make the lists that will be filled", "Hemispherical Emissivity\\n' + \\ '\\t' + str(blindsMaterial[3]) + ', !- Back Side Slat", "initAngles: angles.append(an-(360/(2*len(valueList)))) angles.append(360) for angleCount in range(len(angles)-1): if angles[angleCount] <= (getAngle2North(normalVector))%360 <= angles[angleCount", "+ ', !- Back Side Slat Beam Visible Reflectance\\n' + \\ '\\t' +", "load or beam gain for a shade benefit simulation with the generated shades.", "rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) #Define a bounding box for use in calculating", "= float(matLines[6].split(';')[0]) return [name, reflect, transmit, emiss, thickness, conduct] def createEPBlindMat(blindsMaterial, EPSlatOrient, depth,", "centerPt = bbox.Center #Test to be sure that the values are parallel to", "else: print \"One surface with a window does not have an outdoor boundary", "+ \\ '\\t' + ', !- 
Slat Diffuse Solar Transmittance\\n' + \\ '\\t'", "elif sum(isZoneList) == 0: isZone = False else: checkSameType = False warning =", "rc.Geometry.Plane(baseSrfCenPt,normalVector) else: print \"Couldn't find the normal of the shading surface.\" + \\", "shdAngle_ value will cause EnergyPlus to crash.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) if horOrVertical", "True if sum(isZoneList) == len(_HBObjects): isZone = True elif sum(isZoneList) == 0: isZone", "and generate shades. ---------------: ... zoneData1_: Optional zone data for the HBZones_ that", "True: checkData, windowSrfsInit, shadings, alignedDataTree, HBObjWShades = main() #Unpack the data trees. if", "+ shadingRemainder, maxZPt.Z + 0.5*sc.doc.ModelAbsoluteTolerance, shadingHeight) try: for Z in zHeights: planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(X, Y,", "math.degrees(angle) return finalAngle # Define a function that can split up a list", "distToGlass_ to different directions. distToGlass = getValueBasedOnOrientation(distToGlass_) # generate the planes planes, shadingHeight", "first let both Honeybee and Ladybug fly...\" ghenv.Component.AddRuntimeMessage(w, \"You should first let both", "emiss, thickness, conduct] def createEPBlindMat(blindsMaterial, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, name): EPBlindMat =", "shades for intdoor windows.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) elif object.isPlanar == False: assignEPCheck", "normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) #Define a bounding box for use in calculating the", "account for these shades using a 'Honeybee_EP Context Surfaces' component.\" print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark,", "transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK conductivity.\" blindsMaterial = ['DEFAULTBLINDSMATERIAL', 0.65,", "= bbox.Center #glazing hieghts glzHeight = minZPt.DistanceTo(maxZPt) # find number of shadings try:", "= 
bbox.Corner(False, True, True) minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z) maxXYPt = bbox.Corner(True, False,", "creation of the correct number of shades starting from the northernmost side of", "Slat Conductivity {W/m-K}\\n' + \\ '\\t' + str(blindsMaterial[2]) + ', !- Slat Beam", "None: horOrVertical = True planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec)", "a valid blinds material from the \"Honeybee_EnergyPlus Blinds Material\" component.' print warning ghenv.Component.AddRuntimeMessage(w,", "_numOfShds: The number of shades to generated for each glazed surface. _distBetween: An", "depth, shadingHeight, EPshdAngle, distToGlass, name): EPBlindMat = \"WindowMaterial:Blind,\\n\" + \\ '\\t' + blindsMaterial[0]", "+ str(blindsMaterial[2]) + ', !- Slat Beam Solar Transmittance\\n' + \\ '\\t' +", "True or horOrVertical == None: horOrVertical = True planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)", "find the bounding box bbox = glzSrf.GetBoundingBox(True) if horOrVertical == None: horOrVertical =", "angleFromNorm+shdAngle if EPshdAngleInint >= 0: EPshdAngle = 90 - EPshdAngleInint else: EPshdAngle =", "+ \\ '\\t' + str(blindsMaterial[3]) + ', !- Front Side Slat Infrared Hemispherical", "shdSrf = rc.Geometry.Surface.CreateExtrusion(c, float(depth) * normalVectorPerp).ToBrep() shadingSurfaces.append(shdSrf) except: pass #If the user has", "the offset distance from the glass to make the shades. _runIt: Set boolean", "+ ', !- Slat Conductivity {W/m-K}\\n' + \\ '\\t' + str(blindsMaterial[2]) + ',", "parallel to the correct vector. testVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z))", "winNames = [] for srf in object.surfaces: if srf.hasChild: if srf.BC == 'OUTDOORS'", "benefit simulation with the generated shades. 
Returns: readMe!: ... ---------------: ... HBZones: The", "createEPBlindMat(blindsMaterial, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, name): EPBlindMat = \"WindowMaterial:Blind,\\n\" + \\ '\\t'", "== True: for count, windowObj in enumerate(windowObjects): windowObj.blindsMaterial = createEPBlindMat(blindsMaterial, EPSlatOrientList[count], depthList[count], shadingHeightList[count],", "the EP Result data. blindsMaterial_: An optional blind material from the blind material", "window above. \"\"\" ghenv.Component.Name = \"Honeybee_EnergyPlus Window Shade Generator\" ghenv.Component.NickName = 'EPWindowShades' ghenv.Component.Message", "Name\\n' + \\ '\\t' + 'FixedSlatAngle, !- Type of Slat Angle Control for", "mesh curved surfaces for E+, the program would just freak out with blinds.\"", "zoneData1Tree.Add(twig, GH_Path(bCount)) elif treeCount == 1: for bCount, branch in enumerate(finalTree): for twig", "normal vector. if distToGlass != None: transVec = normalVectorPerp transVec.Unitize() finalTransVec = rc.Geometry.Vector3d.Multiply(distToGlass,", "and north_.IsValid(): northVector = north_ else:northVector = rc.Geometry.Vector3d.YAxis angle = rc.Geometry.Vector3d.VectorAngle(northVector, normalVector, rc.Geometry.Plane.WorldXY)", "a window is not planar. 
EenergyPlus shades will not be assigned to this", "\\ '\\t' + '; !- Slat Angle Schedule Name\\n' return EPBlindControl def main():", "dataVal.append(item) dataPyList.append(dataVal) return dataPyList allData = [] allData.append(makePyTree(zoneData1_)) allData.append(makePyTree(zoneData2_)) allData.append(makePyTree(zoneData3_)) #Test to see", "component will be run with one shade per window.\" else: numOfShd = _numOfShds", "msg) checkData4 = False elif schedule!=None and schedule.lower().endswith(\".csv\"): # check if csv file", "\"\\nRebuild the surface and try again!\" return -1 shadingSurfaces =[] #Define a function", "#If the user has specified a distance to move the shades, move them", "of the shading surface.\" + \\ \"\\nRebuild the surface and try again!\" return", "a Ladybug/Honeybee header on it. This header is necessary for data input to", "if north_ != None and north_.IsValid(): northVector = north_ else:northVector = rc.Geometry.Vector3d.YAxis angle", "Reflectance\\n' + \\ '\\t' + ', !- Back Side Slat Diffuse Visible Reflectance\\n'", "EPinteriorOrExter, name): if schedule == 'ALWAYSON': schedCntrlType = 'ALWAYSON' schedCntrl = 'No' schedName", "\\ '\\t' + ', !- Slat Diffuse Solar Transmittance\\n' + \\ '\\t' +", "#Test to see if the data is for the zone level. 
zoneData =", "= True if zoneData == False and srfData == False and alignedDataTree !=", "input to this component.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) #Align all of the lists", "Side Slat Beam Solar Reflectance\\n' + \\ '\\t' + ', !- Slat Diffuse", "This component creates shades for Honeybee Zones # By <NAME> # <EMAIL> #", "base planes planeOrigins = [] planes = [] pointCurve = rc.Geometry.Curve.CreateControlPointCurve([maxXYPt, minXYPt]) divisionParams", "finalTree in enumerate(alignedDataTree): if treeCount == 0: for bCount, branch in enumerate(finalTree): for", "+ (EPshdAngleInint)*-1 if EPshdAngle > 180 or EPshdAngle < 0: warning = \"The", "hb_hive.callFromHoneybeeHive(_HBObjects) #Find out what the object is and make sure that we can", "not planar. EenergyPlus shades will not be assigned to this window.\" else: print", "lb_visualization, normalVector): # find the bounding box bbox = glzSrf.GetBoundingBox(True) if horOrVertical ==", "= [] allHeaders = [] allNumbers = [] for branch in allData: checkHeader", "shading depths are given, use it to split up the glazing by cardinal", "the generated shades. zoneData3_: Optional zone data for the HBZones_ that will be", "targetValue = valueList[angleCount%len(valueList)] value = targetValue return value # If multiple shading depths", "example, inputing 4 values for depths will assign each value of the list", "Y-axis (0 degrees). 
_depth: A number representing the depth of the shade to", "planePoints: planes.append(rc.Geometry.Plane(point, planeVec)) except: # single shading planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(minXYPt), planeVec)) sortedPlanes = planes return", "'\\t' + str(shadingHeight) +', !- Slat Separation {m}\\n' + \\ '\\t' + str(blindsMaterial[4])", "them, which is necessary to match the data to a zone or window.", "previewing correctly.\" print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) if not hasattr(object, 'type'): # find the", "tolVec = rc.Geometry.Vector3d.Multiply(1, tolVec) norOrient = False if tolVec.X < 0 and tolVec.Y", "shadingHeight def makeShade(_glzSrf, depth, numShds, distBtwn): rotationAngle_ = 0 # import the classes", "to ensure the creation of the correct number of shades starting from the", "(EPshdAngleInint)*-1 if EPshdAngle > 180 or EPshdAngle < 0: warning = \"The input", "rc.Geometry.Curve.CreateControlPointCurve([maxXYPt, minXYPt]) divisionParams = pointCurve.DivideByLength(shadingHeight, True) divisionPoints = [] for param in divisionParams:", "= sorted(planes, key=lambda a: a.Origin.Z) elif horOrVertical == False: # Vertical # Define", "EnergyPlus to crash.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) if horOrVertical == True: EPSlatOrient =", "representing each window of the zone. These can be plugged into a shade", "in srf.childSrfs: windowObjects.append(childSrf) winNames.append(childSrf.name) winBreps.append(childSrf.geometry) else: print \"One surface with a window is", "# find shading base planes planeOrigins = [] planes = [] X, Y,", "lb_visualization, normalVector) # find the intersection crvs as the base for shadings intCrvs", "True #Test to see if the data is for the window level. 
srfData", "treeCount == 0: for bCount, branch in enumerate(finalTree): for twig in branch: zoneData1Tree.Add(twig,", "winBreps.append(childSrf.geometry) else: print \"One surface with a window is not planar. EenergyPlus shades", "in the center # note2developer: there might be cases that the surface is", "degrees off from the y-axis to make North. The default North direction is", "{m}\\n' + \\ '\\t' + str(shadingHeight) +', !- Slat Separation {m}\\n' + \\", "+ ', !- Slat Angle {deg}\\n' + \\ '\\t' + str(blindsMaterial[5]) + ',", "from the hive. HBZoneObjects = hb_hive.callFromHoneybeeHive(_HBObjects) #Find out what the object is and", "connected for number of shades. The component will be run with one shade", "#Align all of the lists to each window. windowNamesFinal = [] windowBrepsFinal =", "no material is connected here, the component will automatically assign a material of", "on the interior, flip the normal vector. if interiorOrExter == True: normalVectorPerp.Reverse() else:", "Shade Generator\" ghenv.Component.NickName = 'EPWindowShades' ghenv.Component.Message = 'VER 0.0.55\\nSEP_11_2014' ghenv.Component.Category = \"Honeybee\" ghenv.Component.SubCategory", "+ \\ '\\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Diffuse Solar", "str(shadingHeight) +', !- Slat Separation {m}\\n' + \\ '\\t' + str(blindsMaterial[4]) + ',", "#If a shdAngle is provided, use it to rotate the planes by that", "== \"HBZone\": isZoneList.append(1) zoneNames.append(object.name) winBreps = [] winNames = [] for srf in", "[] winNames = [] for srf in object.surfaces: if srf.hasChild: if srf.BC ==", "represents the degrees off from the y-axis to make North. The default North", "assign different distances of shades to different directions. distBtwn = getValueBasedOnOrientation(distBtwn) # If", "= True if depth > 1: assignEPCheckInit = False warning = \"Note that", "glazing by cardinal direction and assign different distToGlass_ to different directions. distToGlass =", "planar. 
EenergyPlus shades will not be assigned to this window.\" else: print \"One", "are previewing correctly.\" print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) if not hasattr(object, 'type'): # find", "if str(winNm) == str(windowName.upper()): alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount]) srfData = True if zoneData == False and", "from clr import AddReference AddReference('Grasshopper') import Grasshopper.Kernel as gh from Grasshopper import DataTree", "#Check the depth and the shadingHeight to see if E+ will crash. assignEPCheckInit", "= [] EPSlatOrientList = [] depthList = [] shadingHeightList = [] EPshdAngleList =", "1 = west depth, item 2 = south depth, item 3 = east", "True) divisionPoints = [] for param in divisionParams: divisionPoints.append(pointCurve.PointAt(param)) planePoints = divisionPoints try:", "msg ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg) checkData4 = False elif schedule!=None and schedule.lower().endswith(\".csv\"): # check if", "windowBrepsFinal.append(window) windowName = windowNames[zoneCount][windowCount] windowNamesFinal.append(windowName) for inputDataTreeCount, branch in enumerate(allHeaders): #Test to see", "if E+ will crash. assignEPCheckInit = True if depth > 1: assignEPCheckInit =", "cardinal direction and assign different depths to different directions. depth = getValueBasedOnOrientation(depth) #", "Attribution-ShareAlike 3.0 Unported License. \"\"\" Use this component to generate shades for Honeybee", "'InteriorBlind' else: EPinteriorOrExter = 'ExteriorBlind' #Generate the shade curves based on the planes", "a 'Honeybee_EP Context Surfaces' component.\" print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) if shadingHeight > 1:", "trees. 
def makePyTree(zoneData): dataPyList = [] for i in range(zoneData.BranchCount): branchList = zoneData.Branch(i)", "Diffuse Solar Transmittance\\n' + \\ '\\t' + str(blindsMaterial[1]) + ', !- Front Side", "Opening Multiplier\\n' + \\ '\\t' + '0.5, !- Blind Left Side Opening Multiplier\\n'", "== True: checkData, windowNames, windowSrfsInit, depths, alignedDataTree, numOfShd, blindsMaterial, schedule = checkTheInputs(zoneNames, windowNames,", "isZone == True: for listCount, header in enumerate(branch): if header[2].split(' for ')[-1] ==", "the planes by that angle if shdAngle != None: if horOrVertical == True", "the user has specified a distance to move the shades, move them along", "Slat Separation {m}\\n' + \\ '\\t' + str(blindsMaterial[4]) + ', !- Slat Thickness", "+ \\ '\\t' + str(EPshdAngle) + ', !- Slat Angle {deg}\\n' + \\", "mergeVectors_ input. _numOfShds: The number of shades to generated for each glazed surface.", "#glazing hieghts glzHeight = minZPt.DistanceTo(maxZPt) # find number of shadings try: numOfShd =", "makeShade(_glzSrf, depth, numShds, distBtwn): rotationAngle_ = 0 # import the classes lb_preparation =", "split up the glazing by cardinal direction and assign different horizontal or vertical", "\\ '\\t' + ', !- Slat Diffuse Visible Transmittance\\n' + \\ '\\t' +", "str(blindsMaterial[2]) + ', !- Slat Beam Solar Transmittance\\n' + \\ '\\t' + str(blindsMaterial[1])", "north_: Input a vector to be used as a true North direction or", "= False warning = \"The boundary condition of the input object must be", "allNumbers.append(dataNumbers) if sum(checkHeader) == len(branch):pass else: checkData3 = False warning = \"Not all", "zoneData1Tree = DataTree[Object]() zoneData2Tree = DataTree[Object]() zoneData3Tree = DataTree[Object]() for count, brep in", "cardinal direction and assign different shdAngle_ to different directions. 
shdAngle = getValueBasedOnOrientation(shdAngle_) #If", "assignEPCheckInit = makeShade(window, depths, numOfShd, _distBetween) shadings.append(shadeBreps) EPSlatOrientList.append(EPSlatOrient) depthList.append(depth) shadingHeightList.append(shadingHeight) EPshdAngleList.append(EPshdAngle) distToGlassList.append(distToGlass) EPinteriorOrExterList.append(EPinteriorOrExter)", "== 'OUTDOORS' or srf.BC == 'Outdoors': if srf.isPlanar == True: for childSrf in", "and hook them up to the \"zoneData\" inputs and use the output \"zoneDataTree\"", "EPshdAngle, distToGlass, EPinteriorOrExter, assignEPCheckInit = makeShade(window, depths, numOfShd, _distBetween) shadings.append(shadeBreps) EPSlatOrientList.append(EPSlatOrient) depthList.append(depth) shadingHeightList.append(shadingHeight)", "#Test to see if the data is for the window level. srfData =", "Unported License. \"\"\" Use this component to generate shades for Honeybee zone windows.", "= deconstructBlindMaterial(blindsMaterial_) except: checkData5 = False warning = 'Blinds material is not a", "number representing the offset distance from the glass to make the shades. _runIt:", "= float(matLines[3].split(',')[0]) emiss = float(matLines[4].split(',')[0]) thickness = float(matLines[5].split(',')[0]) conduct = float(matLines[6].split(';')[0]) return [name,", "# the normal is changing from point to point, then I should sample", "distance in Rhino units between each shade. 
horOrVertical_: Set to \"True\" to generate", "'\\t' + blindsMaterial[0] + \"_\" + name + ', !- Name\\n' + \\", "value of the list as follows: item 0 = north depth, item 1", "= True planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) elif horOrVertical", "childSrf in object.childSrfs: windowObjects.append(childSrf) windowNames.append([childSrf.name]) windowSrfs.append([childSrf.geometry]) #Make sure that all HBObjects are of", "currently only supports inputs that are all HBZones or all HBSrfs but not", "else: numOfShd = _numOfShds #Check the depths. checkData2 = True if _depth ==", "= shadingHeight # find shading base planes planeOrigins = [] planes = []", "for individual surfaces, you should make sure that the direction of the surface", "output \"zoneDataTree\" in the shade benefit evaluation. - Provided by Honeybee 0.0.55 Args:", "user has set the shades to generate on the interior, flip the normal", "zone/surface data.\" if checkData2 == True and checkData3 == True and checkData4 ==", "_distBetween) shadings.append(shadeBreps) EPSlatOrientList.append(EPSlatOrient) depthList.append(depth) shadingHeightList.append(shadingHeight) EPshdAngleList.append(EPshdAngle) distToGlassList.append(distToGlass) EPinteriorOrExterList.append(EPinteriorOrExter) if assignEPCheckInit == False: assignEPCheck", "the data tree paths of heating, cooling and beam gain synced with that", "'\\t' + str(blindsMaterial[5]) + ', !- Slat Conductivity {W/m-K}\\n' + \\ '\\t' +", "return EPBlindControl def main(): if _HBObjects != [] and sc.sticky.has_key('honeybee_release') == True and", "None and north_.IsValid(): northVector = north_ else:northVector = rc.Geometry.Vector3d.YAxis angle = rc.Geometry.Vector3d.VectorAngle(northVector, normalVector,", "== False and srfData == False and alignedDataTree != [[], [], []]: print", "depths based on cardinal direction. 
For example, inputing 4 values for depths will", "you have horizontal shades, use this input to angle shades downward. You can", "import uuid import math import os w = gh.GH_RuntimeMessageLevel.Warning tol = sc.doc.ModelAbsoluteTolerance def", "norOrient = True else: tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient = True maxXYPt =", "testVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)) if testVec.IsParallelTo(planeVec) == 0: minXYPt", "if not hasattr(object, \"BC\"): object.BC = 'OUTDOORS' if object.hasChild: if object.BC != 'OUTDOORS'", "in branch: zoneData2Tree.Add(twig, GH_Path(bCount)) elif treeCount == 2: for bCount, branch in enumerate(finalTree):", "True if blindsSchedule_ == None: schedule = \"ALWAYSON\" print \"No blinds schedule has", "will be filled up zoneNames = [] windowNames = [] windowSrfs = []", "= [] alignedDataTree = [] for item in allData: alignedDataTree.append([]) for zoneCount, windowList", "EPinteriorOrExter, assignEPCheckInit def deconstructBlindMaterial(material): matLines = material.split('\\n') name = matLines[1].split(',')[0] reflect = float(matLines[2].split(',')[0])", "= DataTree[Object]() for count, brep in enumerate(windowSrfsInit): windowBreps.Add(brep, GH_Path(count)) for count, brepList in", "= \"09 | Energy | Energy\" #compatibleHBVersion = VER 0.0.55\\nAUG_25_2014 #compatibleLBVersion = VER", "number of shade inputs are given, use it to split up the glazing", "the branches for each window above. \"\"\" ghenv.Component.Name = \"Honeybee_EnergyPlus Window Shade Generator\"", "zoneData == False: for listCount, header in enumerate(branch): try: winNm = header[2].split(' for", "AddReference('Grasshopper') import Grasshopper.Kernel as gh from Grasshopper import DataTree from Grasshopper.Kernel.Data import GH_Path", "from the northernmost side of the window. 
tolVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X,", "assign different depths to different directions. depth = getValueBasedOnOrientation(depth) # If multiple number", "have vertical shades, use this to rotate them towards the South by a", "warning = \"The input shdAngle_ value will cause EnergyPlus to crash.\" print warning", "benefit evaluation. - Provided by Honeybee 0.0.55 Args: _HBObjects: The HBZones out of", "to be generated on each window. You can also input lists of depths,", "[], []]: print \"A window was not matched with its respective zone/surface data.\"", "and srfData == False and alignedDataTree != [[], [], []]: print \"A window", "param in divisionParams: divisionPoints.append(pointCurve.PointAt(param)) planePoints = divisionPoints try: for point in planePoints: planes.append(rc.Geometry.Plane(point,", "or EPshdAngle < 0: warning = \"The input shdAngle_ value will cause EnergyPlus", "+', !- Name\\n' + \\ '\\t' + EPinteriorOrExter + ', !- Shading Type\\n'", "benefit simulation with the generated shades. zoneData2_: Optional zone data for the HBZones_", "and assign it to different cardinal directions. def getValueBasedOnOrientation(valueList): angles = [] if", "input to angle shades downward. You can also put in lists of angles", "branch: zoneData2Tree.Add(twig, GH_Path(bCount)) elif treeCount == 2: for bCount, branch in enumerate(finalTree): for", "the component is to create test shade areas for shade benefit evaluation after", "!- Front Side Slat Beam Solar Reflectance\\n' + \\ '\\t' + str(blindsMaterial[1]) +", "#Run the main functions. 
checkData = False if _HBObjects != [] and _runIt", "will cause EnergyPlus to crash.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) if horOrVertical == True:", "= '' else: schedName = schedule schedCntrlType = 'OnIfScheduleAllows' schedCntrl = 'Yes' EPBlindControl", "isZone == True: zoneName = zoneNames[zoneCount] for windowCount, window in enumerate(windowList): windowBrepsFinal.append(window) windowName", "False, True) maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z) #Adjust the points to ensure the", "schedule.lower().endswith(\".csv\"): # check if csv file is existed if not os.path.isfile(schedule): msg =", "have a headers on them, which is necessary to match the data to", "generate shades. ---------------: ... zoneData1_: Optional zone data for the HBZones_ that will", "EPshdAngle < 0: warning = \"The input shdAngle_ value will cause EnergyPlus to", "== 0: value = None if len(valueList) == 1: value = valueList[0] elif", "#Generate the shade curves based on the planes and extrusion vectors if intCrvs", "which align with the branches for each window above. zoneData2Tree: Data trees of", "True: shadings = [] for window in windowSrfsInit: shadeBreps, EPSlatOrient, depth, shadingHeight, EPshdAngle,", "getValueBasedOnOrientation(valueList): angles = [] if valueList == None or len(valueList) == 0: value", "# sort the planes sortedPlanes = sorted(planes, key=lambda a: a.Origin.Z) elif horOrVertical ==", "from System import Drawing from clr import AddReference AddReference('Grasshopper') import Grasshopper.Kernel as gh", "= True if numOfShds == None and distBetween == None: numOfShds = 1", "+ str(blindsMaterial[1]) + ', !- Front Side Slat Diffuse Solar Reflectance\\n' + \\", "Reflectance\\n' + \\ '\\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Diffuse", "False warning = \"The surface must not be curved. 
With the way that", "\\ '\\t' + str(blindsMaterial[3]) + ', !- Back Side Slat Infrared Hemispherical Emissivity\\n'", "else: schedName = schedule schedCntrlType = 'OnIfScheduleAllows' schedCntrl = 'Yes' EPBlindControl = 'WindowProperty:ShadingControl,\\n'", "treeCount == 1: for bCount, branch in enumerate(finalTree): for twig in branch: zoneData2Tree.Add(twig,", "horOrVertical = getValueBasedOnOrientation(horOrVertical_) # If multiple shdAngle_ inputs are given, use it to", "= minZPt.DistanceTo(maxZPt) # find number of shadings try: numOfShd = int(numOfShds) shadingHeight =", "def createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExter, name): if schedule == 'ALWAYSON': schedCntrlType = 'ALWAYSON' schedCntrl", "= False #Create the EnergyPlus blinds material and assign it to the windows", "Transmittance\\n' + \\ '\\t' + ', !- Front Side Slat Diffuse Visible Reflectance\\n'", "= east depth. Lists of vectors to be shaded can also be input", "hasattr(object, 'type'): # find the type based on object.type = object.getTypeByNormalAngle() if not", "planeOrigins = [] planes = [] pointCurve = rc.Geometry.Curve.CreateControlPointCurve([maxXYPt, minXYPt]) divisionParams = pointCurve.DivideByLength(shadingHeight,", "The default North direction is set to the Y-axis (0 degrees). _depth: A", "value # If multiple shading depths are given, use it to split up", "interiorOrExterior_ to different directions. interiorOrExter = getValueBasedOnOrientation(interiorOrExter_) #If multiple distToGlass_ inputs are given,", "+ ', !- Shading Type\\n' + \\ '\\t' + ', !- Construction with", "or a number between 0 and 360 that represents the degrees off from", "keep the data tree paths of heating, cooling and beam gain synced with", "this to align data like heating load, cooling load or beam gain for", "\\ '\\t' + EPSlatOrient + ', !- Slat Orientation\\n' + \\ '\\t' +", "winNames.append(childSrf.name) winBreps.append(childSrf.geometry) else: print \"One surface with a window is not planar. 
EenergyPlus", "lists have a headers on them, which is necessary to match the data", "!- Slat Angle Schedule Name\\n' return EPBlindControl def main(): if _HBObjects != []", "\\ '\\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Diffuse Solar Reflectance\\n'", "== 1: value = valueList[0] elif len(valueList) > 1: initAngles = rs.frange(0, 360,", "# This component creates shades for Honeybee Zones # By <NAME> # <EMAIL>", "directions. distBtwn = getValueBasedOnOrientation(distBtwn) # If multiple horizontal or vertical inputs are given,", "for use in calculating the number of shades to generate minZPt = bbox.Corner(False,", "Distance {m}\\n' + \\ '\\t' + '0.5, !- Blind Top Opening Multiplier\\n' +", "to the zone and the windowBreps and shadeBreps outputs are just for visualization.", "False #Create the EnergyPlus blinds material and assign it to the windows with", "\"\"\" Use this component to generate shades for Honeybee zone windows. The component", "gh from Grasshopper import DataTree from Grasshopper.Kernel.Data import GH_Path import Rhino as rc", "value is connected for number of shades. The component will be run with", "has specified a distance to move the shades, move them along the normal", "str(blindsMaterial[5]) + ', !- Slat Conductivity {W/m-K}\\n' + \\ '\\t' + str(blindsMaterial[2]) +", "generate horizontal shades or \"False\" to generate vertical shades. You can also input", "has hooked up a distBetwee or numOfShds. if _distBetween == [] and _numOfShds", "north_.IsValid(): northVector = north_ else:northVector = rc.Geometry.Vector3d.YAxis angle = rc.Geometry.Vector3d.VectorAngle(northVector, normalVector, rc.Geometry.Plane.WorldXY) finalAngle", "distToGlass, name): EPBlindMat = \"WindowMaterial:Blind,\\n\" + \\ '\\t' + blindsMaterial[0] + \"_\" +", "the branches for each window above. 
zoneData2Tree: Data trees of the zoneData2_, which", "import scriptcontext as sc import uuid import math import os w = gh.GH_RuntimeMessageLevel.Warning", "elif object.isPlanar == False: assignEPCheck = False warning = \"The surface must not", "You can also put in lists of angles to assign different shade angles", "== 0 or distBetween == 0: sortedPlanes = [] elif horOrVertical == True:", "'\\t' + ', !- Slat Diffuse Visible Transmittance\\n' + \\ '\\t' + ',", "object.BC = 'OUTDOORS' if object.hasChild: if object.BC != 'OUTDOORS' and object.BC != 'Outdoors':", "different directions. interiorOrExter = getValueBasedOnOrientation(interiorOrExter_) #If multiple distToGlass_ inputs are given, use it", "different depths to different directions. depth = getValueBasedOnOrientation(depth) # If multiple number of", "Separation {m}\\n' + \\ '\\t' + str(blindsMaterial[4]) + ', !- Slat Thickness {m}\\n'", "is existed if not os.path.isfile(schedule): msg = \"Cannot find the shchedule file: \"", "distance to move the shades, move them along the normal vector. if distToGlass", "\\ '\\t' + '0.5, !- Blind Left Side Opening Multiplier\\n' + \\ '\\t'", "'\\t' + blindsMaterial[0] + \"_\" + name + ', !- Shading Device Material", "rc.Geometry.Vector3d.ZAxis if getAngle2North(normalVectorPerp) < 180: normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) else: normalVectorPerp.Rotate((shdAngle*-0.01745329), planeVec) else: shdAngle =", "# under a Creative Commons Attribution-ShareAlike 3.0 Unported License. \"\"\" Use this component", "#Create the EnergyPlus blinds material and assign it to the windows with shades.", "Zones # By <NAME> # <EMAIL> # Ladybug started by <NAME> is licensed", "Run Energy Simulation component. 
Zones read back into Grasshopper from the Import idf", "print \"A window was not matched with its respective zone/surface data.\" if checkData2", "started by <NAME> is licensed # under a Creative Commons Attribution-ShareAlike 3.0 Unported", "by that angle if shdAngle != None: if horOrVertical == True or horOrVertical", "Rhino units between each shade. horOrVertical_: Set to \"True\" to generate horizontal shades", "Slat Orientation\\n' + \\ '\\t' + str(depth) + ', !- Slat Width {m}\\n'", "to this window.\" windowNames.append(winNames) windowSrfs.append(winBreps) elif object.objectType == \"HBSurface\": isZoneList.append(0) warning = \"Note", "and assign it to the windows with shades. if assignEPCheck == True: for", "---------------: ... zoneData1_: Optional zone data for the HBZones_ that will be aligned", "!- Schedule Name\\n' + \\ '\\t' + ', !- Setpoint {W/m2, W or", "can be plugged into a shade benefit evaulation as each window is its", "each window above. zoneData2Tree: Data trees of the zoneData2_, which align with the", "ModifiedHBZones = hb_hive.addToHoneybeeHive(HBZoneObjects, ghenv.Component.InstanceGuid.ToString() + str(uuid.uuid4())) else: ModifiedHBZones = [] return checkData, windowSrfsInit,", "and tolVec.Y < 0: tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient = True else: tolVec", "and _runIt == True: checkData, windowSrfsInit, shadings, alignedDataTree, HBObjWShades = main() #Unpack the", "of a grasshopper data tree. 
shadeBreps: Breps representing each shade of the window.", "= rc.Geometry.Point3d(minZPt.X, minZPt.Y, minZPt.Z) maxZPt = bbox.Corner(False, True, False) maxZPt = rc.Geometry.Point3d(maxZPt.X, maxZPt.Y,", "to split up the glazing by cardinal direction and assign different horizontal or", "+ ', !- Shading Device Material Name\\n' + \\ '\\t' + 'FixedSlatAngle, !-", "windowObj.blindsMaterial = createEPBlindMat(blindsMaterial, EPSlatOrientList[count], depthList[count], shadingHeightList[count], EPshdAngleList[count], distToGlassList[count], windowObj.name) windowObj.shadingControl = createEPBlindControl(blindsMaterial, schedule,", "to be sure that the values are parallel to the correct vector. testVec", "the main functions. checkData = False if _HBObjects != [] and _runIt ==", "== None or len(valueList) == 0: value = None if len(valueList) == 1:", "shadeBreps will still be produced and you can account for these shades using", "deconstructBlindMaterial(material): matLines = material.split('\\n') name = matLines[1].split(',')[0] reflect = float(matLines[2].split(',')[0]) transmit = float(matLines[3].split(',')[0])", "with the assigned shading (ready to be simulated). ---------------: ... windowBreps: Breps representing", "you should make sure that the direction of the surface is facing the", "minZPt.Y, minZPt.Z) maxZPt = bbox.Corner(False, True, False) maxZPt = rc.Geometry.Point3d(maxZPt.X, maxZPt.Y, maxZPt.Z -", "zoneNames = [] windowNames = [] windowSrfs = [] windowObjects = [] isZoneList", "both. For now, just grab another component for each of these inputs.\" print", "Blinds\\n' + \\ '\\t' + '; !- Slat Angle Schedule Name\\n' return EPBlindControl", "centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt) if bool: normalVector = _glzSrf.Faces[0].NormalAt(centerPtU, centerPtV) #return rc.Geometry.Plane(baseSrfCenPt,normalVector) else: print", "from the input data trees. def makePyTree(zoneData): dataPyList = [] for i in", "cannot be coordinated with this component. 
checkData3 = True checkBranches = [] allHeaders", "shdAngle_ inputs are given, use it to split up the glazing by cardinal", "will assign each value of the list as follows: item 0 = north", "Angle Control for Blinds\\n' + \\ '\\t' + '; !- Slat Angle Schedule", "be used to assign blind objects to HBZones prior to simulation. These blinds", "Ladybug started by <NAME> is licensed # under a Creative Commons Attribution-ShareAlike 3.0", "this component. checkData3 = True checkBranches = [] allHeaders = [] allNumbers =", "for no rotation. If you have vertical shades, use this to rotate them", "number representing the depth of the shade to be generated on each window.", "= minZPt.X, minZPt.Y, minZPt.Z zHeights = rs.frange(minZPt.Z + shadingRemainder, maxZPt.Z + 0.5*sc.doc.ModelAbsoluteTolerance, shadingHeight)", "matLines[1].split(',')[0] reflect = float(matLines[2].split(',')[0]) transmit = float(matLines[3].split(',')[0]) emiss = float(matLines[4].split(',')[0]) thickness = float(matLines[5].split(',')[0])", "[], [] else: print \"You should first let both Honeybee and Ladybug fly...\"", "shade benefit simulation with the generated shades. Returns: readMe!: ... ---------------: ... HBZones:", "for a shade benefit simulation with the generated shades. zoneData3_: Optional zone data", "bool, centerPtU, centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt) if bool: normalVector = _glzSrf.Faces[0].NormalAt(centerPtU, centerPtV) #return rc.Geometry.Plane(baseSrfCenPt,normalVector)", "+ \\ '\\t' + ', !- Setpoint {W/m2, W or deg C}\\n' +", "list in branch: if str(list[0]) == \"key:location/dataType/units/frequency/startsAt/endsAt\": checkHeader.append(1) dataHeaders.append(list[:7]) dataNumbers.append(list[7:]) allHeaders.append(dataHeaders) allNumbers.append(dataNumbers) if", "getValueBasedOnOrientation(numShds) # If multiple distances between shade inputs are given, use it to", "the zones and windows. 
For this, you would take imported EnergyPlus results and", "depthList.append(depth) shadingHeightList.append(shadingHeight) EPshdAngleList.append(EPshdAngle) distToGlassList.append(distToGlass) EPinteriorOrExterList.append(EPinteriorOrExter) if assignEPCheckInit == False: assignEPCheck = False #Create", "not be assigned to this window.\" windowNames.append(winNames) windowSrfs.append(winBreps) elif object.objectType == \"HBSurface\": isZoneList.append(0)", "sure that the direction of the surface is facing the outdoors in order", "+ schedule + \" in Honeybee schedule library.\" print msg ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg) checkData4", "= valueList[0] elif len(valueList) > 1: initAngles = rs.frange(0, 360, 360/len(valueList)) for an", "False: planeVec = rc.Geometry.Vector3d.ZAxis if getAngle2North(normalVectorPerp) < 180: normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) else: normalVectorPerp.Rotate((shdAngle*-0.01745329), planeVec)", "\"0\" for no rotation. If you have vertical shades, use this to rotate", "if EPshdAngle > 180 or EPshdAngle < 0: warning = \"The input shdAngle_", "move the shades, move them along the normal vector. if distToGlass != None:", "is connected here, the blinds will assume the \"ALWAYS ON\" shcedule. north_: Input", "numOfShd, blindsMaterial, schedule = checkTheInputs(zoneNames, windowNames, windowSrfs, isZone) else: checkData == False #Generate", "0: tolVec = rc.Geometry.Vector3d.Multiply(1, tolVec) norOrient = False if tolVec.X < 0 and", "which will assign different depths based on cardinal direction. For example, inputing 4", "to match the data to a zone or window. 
If there's no header, the data cannot be coordinated with the zones and windows.
if checkSameType == True:", "'VER 0.0.55\\nSEP_11_2014' ghenv.Component.Category = \"Honeybee\" ghenv.Component.SubCategory = \"09 | Energy | Energy\" #compatibleHBVersion", "if str(list[0]) == \"key:location/dataType/units/frequency/startsAt/endsAt\": checkHeader.append(1) dataHeaders.append(list[:7]) dataNumbers.append(list[7:]) allHeaders.append(dataHeaders) allNumbers.append(dataNumbers) if sum(checkHeader) == len(branch):pass", "> 1: warning = \"The input distToGlass_ value is so large that it", "planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) elif horOrVertical == False:", "are not connected. if checkSameType == True: checkData, windowNames, windowSrfsInit, depths, alignedDataTree, numOfShd,", "horOrVertical = True if numOfShds == None and distBetween == None: numOfShds =", "angleFromNorm = angleFromNorm*(-1) #If the user has set the shades to generate on", "HBSrfs but not both. For now, just grab another component for each of", "'\\t' + ', !- Front Side Slat Beam Visible Reflectance\\n' + \\ '\\t'", "alter zones. Note that these should ideally be the zones that are fed", "is and make sure that we can run it through this component's functions.", "raise and lower the blinds. If no value is connected here, the blinds", "0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK conductivity.", "makePyTree(zoneData): dataPyList = [] for i in range(zoneData.BranchCount): branchList = zoneData.Branch(i) dataVal =", "Use this to align data like heating load, cooling load or beam gain", "Grasshopper.Kernel.Data import GH_Path import Rhino as rc import rhinoscriptsyntax as rs import scriptcontext", "', !- Slat Conductivity {W/m-K}\\n' + \\ '\\t' + str(blindsMaterial[2]) + ', !-", "warning) #Check the depth and the shadingHeight to see if E+ will crash.", "the generated shades. 
zoneData2_: Optional zone data for the HBZones_ that will be aligned with the generated windows. Use this to align data like heating load, cooling load or beam gain for a shade benefit simulation with the generated shades.
alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount])", "item 0 = north depth, item 1 = west depth, item 2 =", "schedName = '' else: schedName = schedule schedCntrlType = 'OnIfScheduleAllows' schedCntrl = 'Yes'", "= [] distToGlassList = [] EPinteriorOrExterList = [] #Call the objects from the", "be dynamically controlled via a schedule. Note that shades created this way will", "Opening Multiplier\\n' + \\ '\\t' + ', !- Blind Bottom Opening Multiplier\\n' +", "and make sure that we have everything that we need to generate the", "by cardinal direction and assign different horizontal or vertical to different directions. horOrVertical", "... HBZones: The HBZones with the assigned shading (ready to be simulated). ---------------:", "False elif schedule!=None and schedule.lower().endswith(\".csv\"): # check if csv file is existed if", "windowObj.name) windowObj.shadingControlName = 'BlindCntrlFor_' + windowObj.name windowObj.shadingSchName = schedule ModifiedHBZones = hb_hive.addToHoneybeeHive(HBZoneObjects, ghenv.Component.InstanceGuid.ToString()", "item 1 = west depth, item 2 = south depth, item 3 =", "point baseSrfCenPt = rc.Geometry.AreaMassProperties.Compute(_glzSrf).Centroid # sometimes the center point is not located on", "= rc.Geometry.Vector3d.ZAxis if getAngle2North(normalVectorPerp) < 180: normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) else: normalVectorPerp.Rotate((shdAngle*-0.01745329), planeVec) else: shdAngle", "the depth and the shadingHeight to see if E+ will crash. assignEPCheckInit =", "blinds schedule has been connected. It will be assumed that the blinds are", "zone or window. If there's no header, the data cannot be coordinated with", "Ladybug/Honeybee header on it. 
This header is necessary for data input to this component.
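The check itself is simple: the first item of a connected branch must equal the Ladybug key string, and the first seven items then form the header. A plain-Python sketch (`split_header` is a hypothetical name):

```python
# The component tests each connected branch against this exact key string.
LB_HEADER_KEY = "key:location/dataType/units/frequency/startsAt/endsAt"

def split_header(branch):
    # Returns (header, numbers); header is None when the branch has no
    # Ladybug/Honeybee header and so cannot be matched to a zone or window.
    if branch and str(branch[0]) == LB_HEADER_KEY:
        return branch[:7], branch[7:]
    return None, branch
```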
like this will let in more winter sun than summer sun.
This header is necessary for data input to this component.
shadeBreps will still be generated.
With", "branchList: dataVal.append(item) dataPyList.append(dataVal) return dataPyList allData = [] allData.append(makePyTree(zoneData1_)) allData.append(makePyTree(zoneData2_)) allData.append(makePyTree(zoneData3_)) #Test to", "from the \"Honeybee_EnergyPlus Blinds Material\" component.' print warning ghenv.Component.AddRuntimeMessage(w, warning) #Check if there", "= glzHeight/numOfShd shadingRemainder = shadingHeight except: shadingHeight = distBetween shadingRemainder = (((glzHeight/distBetween) -", "Slat Diffuse Visible Transmittance\\n' + \\ '\\t' + ', !- Front Side Slat", "= [] shadingHeightList = [] EPshdAngleList = [] distToGlassList = [] EPinteriorOrExterList =", "this window.\" else: print \"One surface with a window does not have an", "each window is its own branch of a grasshopper data tree. shadeBreps: Breps", "data. blindsMaterial_: An optional blind material from the blind material component. If no", "with the generated shades. zoneData2_: Optional zone data for the HBZones_ that will", "+ \\ '\\t' + str(blindsMaterial[5]) + ', !- Slat Conductivity {W/m-K}\\n' + \\", "[] windowNames = [] windowSrfs = [] windowObjects = [] isZoneList = []", "blindsSchedule_ == None: schedule = \"ALWAYSON\" print \"No blinds schedule has been connected.", "use in calculating the number of shades to generate minZPt = bbox.Corner(False, True,", "set to \"False\" to generate exterior shades. 
distToGlass_: A number representing the offset distance from the glass at which the shades will be generated.
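Internally this offset interacts with the slat geometry: the component writes an EnergyPlus "Blind to Glass Distance" equal to the input offset plus half the slat depth projected through the slat angle, and clamps the result to E+'s 0.01 m minimum. A sketch of that arithmetic (hypothetical function name):

```python
import math

# ep_shd_angle is the slat angle in degrees as written to EnergyPlus.
# E+ rejects blind-to-glass distances below 0.01 m, so the value is clamped.
def ep_dist_to_glass(dist_to_glass, depth, ep_shd_angle):
    d = dist_to_glass + depth * 0.5 * math.cos(math.radians(ep_shd_angle))
    return max(d, 0.01)
```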
print warning", "provided, use it to rotate the planes by that angle if shdAngle !=", "\"Honeybee_EnergyPlus Window Shade Generator\" ghenv.Component.NickName = 'EPWindowShades' ghenv.Component.Message = 'VER 0.0.55\\nSEP_11_2014' ghenv.Component.Category =", "direction and assign different distances of shades to different directions. distBtwn = getValueBasedOnOrientation(distBtwn)", "a depth for the shades.\" ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, \"You must provide a depth for the", "data trees. def makePyTree(zoneData): dataPyList = [] for i in range(zoneData.BranchCount): branchList =", "classes hb_EPZone = sc.sticky[\"honeybee_EPZone\"] hb_EPSrf = sc.sticky[\"honeybee_EPSurface\"] hb_EPFenSurface = sc.sticky[\"honeybee_EPFenSurface\"] hb_hive = sc.sticky[\"honeybee_Hive\"]()", "direction or a number between 0 and 360 that represents the degrees off", "#Make the lists that will be filled up zoneNames = [] windowNames =", "the component helps keep the data tree paths of heating, cooling and beam", "and assign different interiorOrExterior_ to different directions. interiorOrExter = getValueBasedOnOrientation(interiorOrExter_) #If multiple distToGlass_", "schedCntrlType + ', !- Shading Control Type\\n' + \\ '\\t' + schedName +", "shadings, alignedDataTree, ModifiedHBZones else: return False, [], [], [], [] else: print \"You", "VER 0.0.55\\nAUG_25_2014 #compatibleLBVersion = VER 0.0.58\\nAUG_20_2014 try: ghenv.Component.AdditionalHelpFromDocStrings = \"1\" except: pass from", "norOrient = False if tolVec.X < 0 and tolVec.Y > 0: tolVec =", "object.getAngle2North() if not hasattr(object, \"BC\"): object.BC = 'OUTDOORS' if object.hasChild: if object.BC !=", "= math.degrees(rc.Geometry.Vector3d.VectorAngle(normalVectorPerp, normalVector)) if normalVector.Z < 0: angleFromNorm = angleFromNorm*(-1) #If the user", "[] elif horOrVertical == True: # Horizontal #Define a bounding box for use", "assign different depths based on cardinal direction. 
For example, inputting 4 values for depths will assign each value of the list as follows: item 0 = north depth, item 1 = west depth, item 2 = south depth, item 3 = east depth.
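A sketch of that cardinal split (my own sector arithmetic, not necessarily the component's exact boundary handling): N values divide the compass into N equal sectors centred on north, west, south, east in that order.

```python
# Hypothetical reimplementation: pick the list item for a window whose
# angle from north is measured towards the west, matching the
# north/west/south/east ordering above.  Sector boundaries are a guess.
def value_for_orientation(values, angle_to_north):
    if len(values) == 1:
        return values[0]
    sector = 360.0 / len(values)
    index = int(((angle_to_north + sector / 2.0) % 360) // sector)
    return values[index]
```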
Lists of vectors to be shaded can", "of values and assign it to different cardinal directions. def getValueBasedOnOrientation(valueList): angles =", "used to generate the planes planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) #Define", "EPshdAngleInint >= 0: EPshdAngle = 90 - EPshdAngleInint else: EPshdAngle = 90 +", "depth, item 1 = west depth, item 2 = south depth, item 3", "for the zone level. zoneData = False if isZone == True: for listCount,", "!- Slat Separation {m}\\n' + \\ '\\t' + str(blindsMaterial[4]) + ', !- Slat", "the normal of the shading surface.\" + \\ \"\\nRebuild the surface and try", "try: blindsMaterial = deconstructBlindMaterial(blindsMaterial_) except: checkData5 = False warning = 'Blinds material is", "= getValueBasedOnOrientation(depth) # If multiple number of shade inputs are given, use it", "except: pass #If the user has specified a distance to move the shades,", "calculating the number of shades to generate minXYPt = bbox.Corner(True, True, True) minXYPt", "GH_Path import Rhino as rc import rhinoscriptsyntax as rs import scriptcontext as sc", "elif horOrVertical == False: # Vertical # Define a vector to be used", "#Get the EnergyPlus distance to glass. EPDistToGlass = distToGlass + (depth)*(0.5)*math.cos(math.radians(EPshdAngle)) if EPDistToGlass", "float(matLines[2].split(',')[0]) transmit = float(matLines[3].split(',')[0]) emiss = float(matLines[4].split(',')[0]) thickness = float(matLines[5].split(',')[0]) conduct = float(matLines[6].split(';')[0])", "of the window. These can be plugged into a shade benefit evaulation as", "listCount, header in enumerate(branch): try: winNm = header[2].split(' for ')[-1].split(': ')[0] except: winNm", "ghenv.Component.InstanceGuid.ToString() + str(uuid.uuid4())) else: ModifiedHBZones = [] return checkData, windowSrfsInit, shadings, alignedDataTree, ModifiedHBZones", "like this will let in more winter sun than summer sun. 
If you", "shadingHeight = analyzeGlz(_glzSrf, distBtwn, numShds, horOrVertical, lb_visualization, normalVector) # find the intersection crvs", "individual surfaces, you should make sure that the direction of the surface is", "[] for srf in object.surfaces: if srf.hasChild: if srf.BC == 'OUTDOORS' or srf.BC", "', !- Slat Diffuse Visible Transmittance\\n' + \\ '\\t' + ', !- Front", "zoneData.Branch(i) dataVal = [] for item in branchList: dataVal.append(item) dataPyList.append(dataVal) return dataPyList allData", "two main uses: _ The first is that it can be used to", "are just for visualization. _ The second way to use the component is", "\"No blinds material has been connected. A material will be used with 0.65", "bbox.Corner(False, False, True) maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z) centerPt = bbox.Center #Test to", "same type. checkSameType = True if sum(isZoneList) == len(_HBObjects): isZone = True elif", "> 0 and tolVec.Y > 0: tolVec = rc.Geometry.Vector3d.Multiply(1, tolVec) norOrient = False", "type. checkSameType = True if sum(isZoneList) == len(_HBObjects): isZone = True elif sum(isZoneList)", "Grasshopper import DataTree from Grasshopper.Kernel.Data import GH_Path import Rhino as rc import rhinoscriptsyntax", "minXYPt.Z) maxXYPt = bbox.Corner(True, False, True) maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z) #Adjust the", "[] #Run the main functions. checkData = False if _HBObjects != [] and", "planePoints = divisionPoints try: for point in planePoints: planes.append(rc.Geometry.Plane(point, planeVec)) except: # single", "windows. The component has two main uses: _ The first is that it", "\"Not all of the connected zoneData has a Ladybug/Honeybee header on it. 
This header is necessary for data input to this component.
object.type = object.getTypeByNormalAngle()
if not hasattr(object, 'angle2North'):
    # find the angle to north
    object.getAngle2North()
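The angle2North attribute drives the per-orientation splits of depths, shade counts, and angles. A plain-Python sketch of one way to compute it (the component uses Rhino vector math; the counterclockwise sign convention here is my choice, made to match the north, west, south, east ordering of the list inputs):

```python
import math

# Sketch: angle of a surface normal from north, measured from the north
# vector towards the west.  normal_xy and north_xy are 2D (x, y) tuples;
# the default north is the +Y axis, as in the component.
def angle_to_north(normal_xy, north_xy=(0.0, 1.0)):
    a = math.degrees(math.atan2(normal_xy[1], normal_xy[0])
                     - math.atan2(north_xy[1], north_xy[0]))
    return a % 360
```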
EnergyPlus shades will not be assigned to this window.\" windowNames.append(winNames) windowSrfs.append(winBreps) elif
checkData3 = True
checkBranches = []
allHeaders = []
allNumbers = []
A material will be used with 0.65 solar reflectance, 0", "except: print \"One intersection failed.\" if normalVector != rc.Geometry.Vector3d.ZAxis: normalVectorPerp = rc.Geometry.Vector3d(normalVector.X, normalVector.Y,", "extrusion vectors if intCrvs !=[]: for c in intCrvs: try: shdSrf = rc.Geometry.Surface.CreateExtrusion(c,", "a 'Honeybee_EP Context Surfaces' component.\" print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) return shadingSurfaces, EPSlatOrient, depth,", "finalAngle = math.degrees(angle) return finalAngle # Define a function that can split up", "level. zoneData = False if isZone == True: for listCount, header in enumerate(branch):", "Front Side Slat Infrared Hemispherical Emissivity\\n' + \\ '\\t' + str(blindsMaterial[3]) + ',", "windowSrfs, isZone) else: checkData == False #Generate the shades. if checkData == True:", "depths, which will assign different depths based on cardinal direction. For example, inputing", "vertical shades, use this to rotate them towards the South by a certain", "for each of these inputs.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) isZone = False #Check", "import math import os w = gh.GH_RuntimeMessageLevel.Warning tol = sc.doc.ModelAbsoluteTolerance def checkTheInputs(zoneNames, windowNames,", "material connected and, if not, set a default. 
checkData5 = True if blindsMaterial_", "= \"Not all of the connected zoneData has a Ladybug/Honeybee header on it.", "for ')[-1] == zoneName.upper(): alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount]) zoneData = True #Test to see if the", "Slat Width {m}\\n' + \\ '\\t' + str(shadingHeight) +', !- Slat Separation {m}\\n'", "str(blindsMaterial[4]) + ', !- Slat Thickness {m}\\n' + \\ '\\t' + str(EPshdAngle) +", "[] isZoneList = [] assignEPCheck = True HBObjWShades = [] EPSlatOrientList = []", "will automatically assign a material of 0.65 solar reflectance, 0 transmittance, 0.9 emittance,", "= [] windowObjects = [] isZoneList = [] assignEPCheck = True HBObjWShades =", "srf.hasChild: if srf.BC == 'OUTDOORS' or srf.BC == 'Outdoors': if srf.isPlanar == True:", "be curved. With the way that we mesh curved surfaces for E+, the", "sc.sticky[\"honeybee_Hive\"]() #Make the lists that will be filled up zoneNames = [] windowNames", "in calculating the number of shades to generate minZPt = bbox.Corner(False, True, True)", "points to ensure the creation of the correct number of shades starting from", "vertical inputs are given, use it to split up the glazing by cardinal", "warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) if not hasattr(object, 'type'): # find the type based on", "allData: checkHeader = [] dataHeaders = [] dataNumbers = [] for list in", "Angle Schedule Name\\n' return EPBlindControl def main(): if _HBObjects != [] and sc.sticky.has_key('honeybee_release')", "each shade. horOrVertical_: Set to \"True\" to generate horizontal shades or \"False\" to", "maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z) centerPt = bbox.Center #Test to be sure that", "connected and, if not, set a default. 
checkData4 = True if blindsSchedule_ ==", "the user has set the shades to generate on the interior, flip the", "# sometimes the center point is not located on the surface baseSrfCenPt =", "blindsMaterial[0] + \"_\" + name + ', !- Name\\n' + \\ '\\t' +", "1. HBObjWShades will not be generated. shadeBreps will still be produced and you", "not matched with its respective zone/surface data.\" if checkData2 == True and checkData3", "default is set to \"False\" to generate exterior shades. distToGlass_: A number representing", "for these shades using a 'Honeybee_EP Context Surfaces' component.\" print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning)", "_depth: A number representing the depth of the shade to be generated on", "be assigned to the zone and the windowBreps and shadeBreps outputs are just", "if intCrvs !=[]: for c in intCrvs: try: shdSrf = rc.Geometry.Surface.CreateExtrusion(c, float(depth) *", "generated shades. zoneData2_: Optional zone data for the HBZones_ that will be aligned", "== str(windowName.upper()): alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount]) srfData = True if zoneData == False and srfData ==", "maxZPt = bbox.Corner(False, True, False) maxZPt = rc.Geometry.Point3d(maxZPt.X, maxZPt.Y, maxZPt.Z - sc.doc.ModelAbsoluteTolerance) centerPt", "== len(branch):pass else: checkData3 = False warning = \"Not all of the connected", "valueList[0] elif len(valueList) > 1: initAngles = rs.frange(0, 360, 360/len(valueList)) for an in", "', !- Front Side Slat Infrared Hemispherical Emissivity\\n' + \\ '\\t' + str(blindsMaterial[3])", "synced with that of the zones and windows. For this, you would take", "schedule library.\" print msg ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg) checkData4 = False elif schedule!=None and schedule.lower().endswith(\".csv\"):", "cardinal direction and assign different horizontal or vertical to different directions. 
horOrVertical =", "to this component.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) #Align all of the lists to", "else: print \"Couldn't find the normal of the shading surface.\" + \\ \"\\nRebuild", "depth = getValueBasedOnOrientation(depth) # If multiple number of shade inputs are given, use", "normal vector. if interiorOrExter == True: normalVectorPerp.Reverse() else: interiorOrExter = False #If a", "The HBZones out of any of the HB components that generate or alter", "maxXYPt.Y, maxXYPt.Z) #Adjust the points to ensure the creation of the correct number", "shades to generate minZPt = bbox.Corner(False, True, True) minZPt = rc.Geometry.Point3d(minZPt.X, minZPt.Y, minZPt.Z)", "= [] EPinteriorOrExterList = [] #Call the objects from the hive. HBZoneObjects =", "enumerate(windowObjects): windowObj.blindsMaterial = createEPBlindMat(blindsMaterial, EPSlatOrientList[count], depthList[count], shadingHeightList[count], EPshdAngleList[count], distToGlassList[count], windowObj.name) windowObj.shadingControl = createEPBlindControl(blindsMaterial,", "[] return checkData, windowSrfsInit, shadings, alignedDataTree, ModifiedHBZones else: return False, [], [], [],", "Infrared Hemispherical Emissivity\\n' + \\ '\\t' + str(distToGlass) + ', !- Blind to", "False return checkData, windowNamesFinal, windowBrepsFinal, _depth, alignedDataTree, numOfShd, blindsMaterial, schedule def analyzeGlz(glzSrf, distBetween,", "is set to the Y-axis (0 degrees). 
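The header check above can be illustrated outside of Grasshopper. This is a minimal pure-Python sketch of the same idea; the function name `split_headers_and_values` is illustrative and not part of the component:

```python
# The key that marks the start of a Ladybug/Honeybee data header.
LB_HEADER_KEY = "key:location/dataType/units/frequency/startsAt/endsAt"

def split_headers_and_values(branches):
    """Separate the 7-item Ladybug header from the numbers that follow it.

    Returns (headers, numbers, all_valid); any branch without the header key
    makes all_valid False, mirroring the component's checkData3 flag.
    """
    headers, numbers, all_valid = [], [], True
    for branch in branches:
        if branch and str(branch[0]) == LB_HEADER_KEY:
            headers.append(branch[:7])   # location, data type, units, etc.
            numbers.append(branch[7:])   # the actual hourly/monthly values
        else:
            all_valid = False
    return headers, numbers, all_valid
```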
    # Align all of the connected data with the zones and windows.
    windowNamesFinal = []
    windowBrepsFinal = []
    alignedDataTree = []
    for item in allData: alignedDataTree.append([])
    for zoneCount, windowList in enumerate(windowSrfs):
        zoneName = zoneNames[zoneCount]
        for windowCount, window in enumerate(windowList):
            windowBrepsFinal.append(window)
            windowName = windowNames[zoneCount][windowCount]
            windowNamesFinal.append(windowName)
            for inputDataTreeCount, branch in enumerate(allHeaders):
                # Test to see if the data is for the zone level.
                zoneData = False
                if isZone == True:
                    for listCount, header in enumerate(branch):
                        if header[2].split(' for ')[-1] == zoneName.upper():
                            alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount])
                            zoneData = True
                # Test to see if the data is for the window level.
                srfData = False
                if zoneData == False:
                    for listCount, header in enumerate(branch):
                        winNm = header[2].split(' for ')[-1]
                        if str(winNm) == str(windowName.upper()):
                            alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount])
                            srfData = True
                if zoneData == False and srfData == False and branch != []:
                    checkData3 = False
                    warning = "Some of the connected zoneData could not be matched with its respective zone/surface data."
                    print warning
                    ghenv.Component.AddRuntimeMessage(w, warning)

    if checkData2 == True and checkData3 == True and checkData4 == True and checkData5 == True:
        checkData = True
    else:
        checkData = False

    return checkData, windowNamesFinal, windowBrepsFinal, _depth, alignedDataTree, numOfShd, blindsMaterial, schedule
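The alignment step keys off the text after " for " in the header's third field (e.g. "Cooling Load for ZONE_1"). A minimal pure-Python sketch of that matching, with an illustrative function name that is not part of the component:

```python
def align_data_to_names(headers, values, names):
    """Group data branches under the zone/window name embedded in the header.

    headers: list of Ladybug-style header lists; header[2] reads like
    'Cooling Load for ZONE_1', and the text after ' for ' names the target.
    """
    aligned = dict((name, []) for name in names)
    for header, vals in zip(headers, values):
        target = header[2].split(' for ')[-1].upper()
        for name in names:
            # Headers are uppercased by Ladybug, so match case-insensitively.
            if name.upper() == target:
                aligned[name].append(vals)
    return aligned
```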
# Define a function that can get the angle to North of any surface.
def getAngle2North(normalVector):
    if north_ != None and north_.IsValid():
        northVector = north_
    else:
        northVector = rc.Geometry.Vector3d.YAxis
    angle = rc.Geometry.Vector3d.VectorAngle(northVector, normalVector)
    finalAngle = math.degrees(angle)
    return finalAngle

# Define a function that can split up a list of values and assign it to different cardinal directions.
def getValueBasedOnOrientation(valueList, normalVector):
    angles = []
    value = None
    if len(valueList) == 1:
        value = valueList[0]
    elif len(valueList) > 1:
        initAngles = rs.frange(0, 360, 360/len(valueList))
        for an in initAngles:
            angles.append(an - (360/(2*len(valueList))))
        angles.append(360)
        for angleCount in range(len(angles)-1):
            if angles[angleCount] <= (getAngle2North(normalVector)) % 360 <= angles[angleCount+1]:
                value = valueList[angleCount % len(valueList)]
    return value
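The cardinal-direction mapping above can be sketched without Rhino: sector boundaries are offset by half a sector from the cardinal centers, so each value in the list owns the directions around its center. This is a pure-Python sketch under that assumption, with illustrative names:

```python
def get_value_based_on_orientation(value_list, angle_to_north):
    """Pick the value whose cardinal sector contains angle_to_north (degrees)."""
    # A single value applies to every orientation.
    if len(value_list) == 1:
        return value_list[0]
    step = 360.0 / len(value_list)
    # Sector centers sit at 0 (north), step, 2*step, ...; boundaries are
    # shifted by half a sector so each value owns the arc around its center.
    boundaries = [i * step - step / 2.0 for i in range(len(value_list) + 1)]
    boundaries.append(360.0)
    target = None
    for i in range(len(boundaries) - 1):
        if boundaries[i] <= angle_to_north % 360 <= boundaries[i + 1]:
            # The last sector wraps back around to the first (north) value.
            target = value_list[i % len(value_list)]
    return target
```

For four depths, angles near 0 or 360 both map to item 0 (the north value), matching the docstring's "item 0 = north depth" convention.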
def analyzeGlz(glzSrf, distBetween, numOfShds, horOrVertical, lb_visualization, normalVector):
    # Find the bounding box of the glazing.
    bbox = glzSrf.GetBoundingBox(True)
    if numOfShds == 0 or distBetween == 0:
        sortedPlanes = []
        shadingHeight = 0
    elif horOrVertical == True:
        # Horizontal shades: define a bounding box for use in calculating the number of shades to generate.
        minZPt = bbox.Corner(False, True, True)
        minZPt = rc.Geometry.Point3d(minZPt.X, minZPt.Y, minZPt.Z)
        maxZPt = bbox.Corner(False, True, False)
        maxZPt = rc.Geometry.Point3d(maxZPt.X, maxZPt.Y, maxZPt.Z - sc.doc.ModelAbsoluteTolerance)
        # Glazing height.
        glzHeight = minZPt.DistanceTo(maxZPt)
        # Find the number of shadings and their vertical spacing.
        try:
            numOfShd = int(numOfShds)
            shadingHeight = glzHeight/numOfShd
            shadingRemainder = shadingHeight
        except:
            shadingHeight = distBetween
            shadingRemainder = (((glzHeight/distBetween) - math.floor(glzHeight/distBetween))*distBetween)
            if shadingRemainder == 0: shadingRemainder = shadingHeight
        # Find the shading base planes.
        planes = []
        X, Y, z = minZPt.X, minZPt.Y, minZPt.Z
        zHeights = rs.frange(minZPt.Z + shadingRemainder, maxZPt.Z + 0.5*sc.doc.ModelAbsoluteTolerance, shadingHeight)
        try:
            for Z in zHeights:
                planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(X, Y, Z), rc.Geometry.Vector3d.ZAxis))
        except:
            # Single shading.
            planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(maxZPt), rc.Geometry.Vector3d.ZAxis))
        sortedPlanes = planes
    else:
        # Vertical shades: use a plane perpendicular to the glazing normal.
        planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)
        planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis)
        minXYPt = bbox.Corner(False, True, True)
        minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z)
        maxXYPt = bbox.Corner(True, False, True)
        maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)
        # Adjust the points to ensure the creation of the correct number of shades starting from the northernmost side of the window.
        tolVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z))
        tolVec.Unitize()
        tolVec = rc.Geometry.Vector3d.Multiply(sc.doc.ModelAbsoluteTolerance*2, tolVec)
        maxXYPt = rc.Geometry.Point3d.Subtract(maxXYPt, tolVec)
        minXYPt = rc.Geometry.Point3d.Subtract(minXYPt, tolVec)
        # Glazing distance.
        glzHeight = minXYPt.DistanceTo(maxXYPt)
        # Find the number of shadings and their horizontal spacing.
        try:
            numOfShd = int(numOfShds)
            shadingHeight = glzHeight/numOfShd
            shadingRemainder = shadingHeight
        except:
            shadingHeight = distBetween
            shadingRemainder = (((glzHeight/distBetween) - math.floor(glzHeight/distBetween))*distBetween)
            if shadingRemainder == 0: shadingRemainder = shadingHeight
        # Find the shading base planes along the glazing.
        planes = []
        pointCurve = rc.Geometry.Curve.CreateControlPointCurve([maxXYPt, minXYPt])
        divisionParams = pointCurve.DivideByLength(shadingHeight, True)
        divisionPoints = []
        for param in divisionParams:
            divisionPoints.append(pointCurve.PointAt(param))
        for point in divisionPoints:
            planes.append(rc.Geometry.Plane(point, planeVec))
        sortedPlanes = planes
    return sortedPlanes, shadingHeight
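The spacing arithmetic in analyzeGlz is Rhino-free and can be checked on its own. A minimal sketch of the two cases (a fixed number of shades versus a fixed distance between shades), with illustrative names:

```python
import math

def shading_spacing(glz_height, num_of_shds=None, dist_between=None):
    """Return (shading_height, remainder) for a window of height glz_height.

    With num_of_shds, the window is divided evenly; with dist_between, the
    remainder is the leftover strip after fitting whole spacings, and a zero
    remainder falls back to one full spacing so the first plane is not
    placed on the window edge.
    """
    if num_of_shds is not None:
        shading_height = glz_height / int(num_of_shds)
        remainder = shading_height
    else:
        shading_height = dist_between
        remainder = ((glz_height / dist_between)
                     - math.floor(glz_height / dist_between)) * dist_between
        if remainder == 0:
            remainder = shading_height
    return shading_height, remainder
```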
def makeShade(_glzSrf, depth, numShds, distBtwn):
    # Find the normal of the surface at the center.
    baseSrfCenPt = rc.Geometry.AreaMassProperties.Compute(_glzSrf).Centroid
    # Sometimes the center point is not located on the surface, so pull it to the surface first.
    success, centerPtU, centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt)
    if success:
        normalVector = _glzSrf.Faces[0].NormalAt(centerPtU, centerPtV)
    else:
        print "Couldn't find the normal of the shading surface." + \
              "\nRebuild the surface and try again!"
        return -1
    shadingSurfaces = []

    # If multiple values are given for an input, split up the glazing by cardinal direction and assign a different value to each direction.
    depth = getValueBasedOnOrientation(depth, normalVector)
    numShds = getValueBasedOnOrientation(numShds, normalVector)
    distBtwn = getValueBasedOnOrientation(distBtwn, normalVector)
    horOrVertical = getValueBasedOnOrientation(horOrVertical_, normalVector)
    shdAngle = getValueBasedOnOrientation(shdAngle_, normalVector)
    interiorOrExter = getValueBasedOnOrientation(interiorOrExter_, normalVector)
    distToGlass = getValueBasedOnOrientation(distToGlass_, normalVector)
    if horOrVertical == None: horOrVertical = True
    if horOrVertical == True: EPSlatOrient = 'Horizontal'
    else: EPSlatOrient = 'Vertical'

    # Generate the shading base planes.
    planes, shadingHeight = analyzeGlz(_glzSrf, distBtwn, numShds, horOrVertical, lb_visualization, normalVector)

    # Find the vector perpendicular to the glazing along which the shades are extruded.
    normalVectorPerp = normalVector
    if normalVector != rc.Geometry.Vector3d.ZAxis:
        normalVectorPerp = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)
    angleFromNorm = math.degrees(rc.Geometry.Vector3d.VectorAngle(normalVectorPerp, normalVector))
    if normalVector.Z < 0: angleFromNorm = angleFromNorm*(-1)
    # If the user has set the shades to generate on the interior, flip the normal vector.
    if interiorOrExter == True:
        normalVectorPerp.Reverse()
        EPinteriorOrExter = 'InteriorBlind'
    else:
        interiorOrExter = False
        EPinteriorOrExter = 'ExteriorBlind'

    # If a shdAngle is provided, use it to rotate the extrusion vector by that angle.
    if shdAngle != None:
        if horOrVertical == True:
            planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)
            planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis)
            normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec)
        elif horOrVertical == False:
            planeVec = rc.Geometry.Vector3d.ZAxis
            normalVectorPerp.Rotate((shdAngle*-0.01745329), planeVec)
    else: shdAngle = 0

    # Make EP versions of some of the outputs.
    EPshdAngleInint = angleFromNorm + shdAngle
    if EPshdAngleInint >= 0: EPshdAngle = 90 - EPshdAngleInint
    else: EPshdAngle = 90 + (EPshdAngleInint)*-1
    if EPshdAngle > 180 or EPshdAngle < 0:
        warning = "The input shdAngle_ value will cause EnergyPlus to crash."
        print warning
        ghenv.Component.AddRuntimeMessage(w, warning)

    # Find the intersection curves that will be the base of the shadings.
    intCrvs = []
    for plane in planes:
        try: intCrvs.extend(rc.Geometry.Brep.CreateContourCurves(_glzSrf, plane))
        except: print "One intersection failed."

    # Generate the shade surfaces based on the curves and extrusion vectors.
    if intCrvs != []:
        for c in intCrvs:
            try:
                shdSrf = rc.Geometry.Surface.CreateExtrusion(c, float(depth) * normalVectorPerp).ToBrep()
                shadingSurfaces.append(shdSrf)
            except: pass

    # If the user specified a distance to move the shades, move them along the normal vector.
    if distToGlass != None:
        blindsTransform = rc.Geometry.Transform.Translation(rc.Geometry.Vector3d.Multiply(distToGlass, normalVectorPerp))
        for shdSrf in shadingSurfaces:
            shdSrf.Transform(blindsTransform)
    else: distToGlass = 0

    # Get the EnergyPlus distance to glass.
    EPDistToGlass = distToGlass + (depth)*(0.5)*math.cos(math.radians(EPshdAngle))
    if EPDistToGlass < 0.01: EPDistToGlass = 0.01
    elif EPDistToGlass > 1:
        warning = "The input distToGlass_ value is so large that it will cause EnergyPlus to crash."
        print warning
        ghenv.Component.AddRuntimeMessage(w, warning)

    # Check the depth and the shadingHeight to see if E+ will crash.
    assignEPCheckInit = True
    if depth > 1:
        assignEPCheckInit = False
        warning = "Note that E+ does not like shading depths greater than 1. HBObjWShades will not be generated. shadeBreps will still be produced and you can account for these shades using a 'Honeybee_EP Context Surfaces' component."
        print warning
        ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning)
    if shadingHeight > 1:
        assignEPCheckInit = False
        warning = "Note that E+ does not like distances between shades that are greater than 1. HBObjWShades will not be generated. shadeBreps will still be produced and you can account for these shades using a 'Honeybee_EP Context Surfaces' component."
        print warning
        ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning)

    return shadingSurfaces, EPSlatOrient, depth, shadingHeight, EPshdAngle, EPDistToGlass, EPinteriorOrExter, assignEPCheckInit


def createEPBlindMat(blindsMaterial, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, name):
    # Assemble an EnergyPlus WindowMaterial:Blind definition from the material properties.
    EPBlindMat = 'WindowMaterial:Blind,\n' + \
        '\t' + blindsMaterial[0] + "_" + name + ', !- Name\n' + \
        '\t' + EPSlatOrient + ', !- Slat Orientation\n' + \
        '\t' + str(depth) + ', !- Slat Width {m}\n' + \
        '\t' + str(shadingHeight) + ', !- Slat Separation {m}\n' + \
        '\t' + str(blindsMaterial[4]) + ', !- Slat Thickness {m}\n' + \
        '\t' + str(EPshdAngle) + ', !- Slat Angle {deg}\n' + \
        '\t' + str(blindsMaterial[5]) + ', !- Slat Conductivity {W/m-K}\n' + \
        '\t' + str(blindsMaterial[2]) + ', !- Slat Beam Solar Transmittance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Front Side Slat Beam Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Beam Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[2]) + ', !- Slat Diffuse Solar Transmittance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Front Side Slat Diffuse Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Diffuse Solar Reflectance\n' + \
        '\t' + ', !- Slat Beam Visible Transmittance\n' + \
        '\t' + ', !- Front Side Slat Beam Visible Reflectance\n' + \
        '\t' + ', !- Back Side Slat Beam Visible Reflectance\n' + \
        '\t' + ', !- Slat Diffuse Visible Transmittance\n' + \
        '\t' + ', !- Front Side Slat Diffuse Visible Reflectance\n' + \
        '\t' + ', !- Back Side Slat Diffuse Visible Reflectance\n' + \
        '\t' + ', !- Slat Infrared Hemispherical Transmittance\n' + \
        '\t' + str(blindsMaterial[3]) + ', !- Front Side Slat Infrared Hemispherical Emissivity\n' + \
        '\t' + str(blindsMaterial[3]) + ', !- Back Side Slat Infrared Hemispherical Emissivity\n' + \
        '\t' + str(distToGlass) + ', !- Blind to Glass Distance {m}\n' + \
        '\t' + '0.5, !- Blind Top Opening Multiplier\n' + \
        '\t' + ', !- Blind Bottom Opening Multiplier\n' + \
        '\t' + '0.5, !- Blind Left Side Opening Multiplier\n' + \
        '\t' + '0.5, !- Blind Right Side Opening Multiplier\n' + \
        '\t' + ', !- Minimum Slat Angle {deg}\n' + \
        '\t' + '; !- Maximum Slat Angle {deg}\n'
    return EPBlindMat


def createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExter, name):
    # Set the schedule-based control parameters.
    if schedule == 'ALWAYSON':
        schedCntrlType = 'ALWAYSON'
        schedCntrl = 'No'
        schedName = ''
    else:
        schedName = schedule
        schedCntrlType = 'OnIfScheduleAllows'
        schedCntrl = 'Yes'
    EPBlindControl = 'WindowProperty:ShadingControl,\n' + \
        '\t' + 'BlindCntrlFor_' + name + ', !- Name\n' + \
        '\t' + EPinteriorOrExter + ', !- Shading Type\n' + \
        '\t' + ', !- Construction with Shading Name\n' + \
        '\t' + schedCntrlType + ', !- Shading Control Type\n' + \
        '\t' + schedName + ', !- Schedule Name\n' + \
        '\t' + ', !- Setpoint {W/m2, W or deg C}\n' + \
        '\t' + schedCntrl + ', !- Shading Control Is Scheduled\n' + \
        '\t' + 'No, !- Glare Control Is Active\n' + \
        '\t' + blindsMaterial[0] + "_" + name + ', !- Shading Device Material Name\n' + \
        '\t' + 'FixedSlatAngle, !- Type of Slat Angle Control for Blinds\n' + \
        '\t' + '; !- Slat Angle Schedule Name\n'
    return EPBlindControl


def main():
    if _HBObjects != [] and sc.sticky.has_key('honeybee_release') == True and sc.sticky.has_key('ladybug_release') == True:
        # Import the classes.
        hb_hive = sc.sticky["honeybee_Hive"]()
        global lb_visualization
        lb_visualization = sc.sticky["ladybug_ResultVisualization"]()
        # Make the lists that will be filled up.
        zoneNames = []
        windowNames = []
        windowSrfs = []
        windowObjects = []
        isZoneList = []
        assignEPCheck = True
        EPSlatOrientList = []
        depthList = []
        shadingHeightList = []
        EPshdAngleList = []
        distToGlassList = []
        EPinteriorOrExterList = []
        # Call the objects from the hive.
        HBZoneObjects = hb_hive.callFromHoneybeeHive(_HBObjects)
        # Figure out what each object is and make sure that we can run it through this component's functions.
        for object in HBZoneObjects:
            if object.objectType == 'HBZone':
                isZoneList.append(1)
                zoneNames.append(object.name)
                winNames = []
                winBreps = []
                for srf in object.surfaces:
                    if srf.hasChild:
                        if srf.BC == 'OUTDOORS' or srf.BC == 'Outdoors':
                            if srf.isPlanar == True:
                                for childSrf in srf.childSrfs:
                                    windowObjects.append(childSrf)
                                    winNames.append(childSrf.name)
                                    winBreps.append(childSrf.geometry)
                            else:
                                warning = "The surface must not be curved. With the way that we mesh curved surfaces for E+, the program would just freak out with blinds."
                                print warning
                                ghenv.Component.AddRuntimeMessage(w, warning)
                        else:
                            warning = "The boundary condition of the window's parent surface must be outdoors. EnergyPlus shades will not be assigned to this window."
                            print warning
                            ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning)
                windowNames.append(winNames)
                windowSrfs.append(winBreps)
            elif object.objectType == 'HBSurface':
                isZoneList.append(0)
                warning = "Note that, when using this component for individual surfaces, you should make sure that the direction of the surface is facing the outdoors."
                print warning
                ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning)
                if not hasattr(object, 'type'):
                    # Find the type based on the normal.
                    object.type = object.getTypeByNormalAngle()
                if not hasattr(object, "BC"):
                    object.BC = 'OUTDOORS'
                if object.hasChild:
                    if object.BC != 'OUTDOORS' and object.BC != 'Outdoors':
                        assignEPCheck = False
                        warning = "The boundary condition of the input object must be outdoors. E+ shades will not be assigned."
                        print warning
                        ghenv.Component.AddRuntimeMessage(w, warning)
                    else:
                        for childSrf in object.childSrfs:
                            windowObjects.append(childSrf)
                            windowNames.append([childSrf.name])
                            windowSrfs.append([childSrf.geometry])
        # Make sure that all HBObjects are of the same type.
        checkSameType = True
        if sum(isZoneList) == len(isZoneList): isZone = True
        elif sum(isZoneList) == 0: isZone = False
        else:
            checkSameType = False
            warning = "This component currently only supports inputs that are all of the same type (all zones or all surfaces). Use a separate component for each of these inputs."
            print warning
            ghenv.Component.AddRuntimeMessage(w, warning)
            isZone = False
        # Check the inputs and make sure that we have everything that we need to generate the shades. Set defaults on things that are not connected.
        if checkSameType == True:
            checkData, windowNames, windowSrfsInit, depths, alignedDataTree, numOfShd, blindsMaterial, schedule = checkTheInputs(zoneNames, windowNames, windowSrfs, isZone)
        else:
            checkData = False
        # Generate the shades.
        if checkData == True:
            shadings = []
            for window in windowSrfsInit:
                shadeBreps, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, EPinteriorOrExter, assignEPCheckInit = makeShade(window, depths, numOfShd, _distBetween)
                shadings.append(shadeBreps)
                EPSlatOrientList.append(EPSlatOrient)
                depthList.append(depth)
                shadingHeightList.append(shadingHeight)
                EPshdAngleList.append(EPshdAngle)
                distToGlassList.append(distToGlass)
                EPinteriorOrExterList.append(EPinteriorOrExter)
                if assignEPCheckInit == False: assignEPCheck = False
            # Create the EnergyPlus blinds material and assign it to the windows.
            if assignEPCheck == True:
                for count, windowObj in enumerate(windowObjects):
                    windowObj.blindsMaterial = createEPBlindMat(blindsMaterial, EPSlatOrientList[count], depthList[count], shadingHeightList[count], EPshdAngleList[count], distToGlassList[count], windowObj.name)
                    windowObj.shadingControl = createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExterList[count], windowObj.name)
                    windowObj.shadingControlName = 'BlindCntrlFor_' + windowObj.name
                    windowObj.shadingSchName = schedule
                ModifiedHBZones = hb_hive.addToHoneybeeHive(HBZoneObjects, ghenv.Component.InstanceGuid.ToString() + str(uuid.uuid4()))
            else:
                ModifiedHBZones = []
            return checkData, windowSrfsInit, shadings, alignedDataTree, ModifiedHBZones
        else:
            return False, [], [], [], []
    else:
        print "You should first let both Honeybee and Ladybug fly..."
        ghenv.Component.AddRuntimeMessage(w, "You should first let both Honeybee and Ladybug fly...")
        return False, [], [], [], []


# Run the main functions.
checkData = False
if _runIt == True:
    checkData, windowSrfsInit, shadings, alignedDataTree, HBZones = main()

# Unpack the data trees of window breps, shade breps and zone data so that each window is its own branch of a grasshopper data tree.
if checkData == True:
    windowBreps = DataTree[Object]()
    shadeBreps = DataTree[Object]()
    zoneData1Tree = DataTree[Object]()
    zoneData2Tree = DataTree[Object]()
    zoneData3Tree = DataTree[Object]()
    for count, brep in enumerate(windowSrfsInit):
        windowBreps.Add(brep, GH_Path(count))
    for count, brepList in enumerate(shadings):
        for brep in brepList:
            shadeBreps.Add(brep, GH_Path(count))
    for treeCount, finalTree in enumerate(alignedDataTree):
        if treeCount == 0:
            for bCount, branch in enumerate(finalTree):
                for twig in branch: zoneData1Tree.Add(twig, GH_Path(bCount))
        elif treeCount == 1:
            for bCount, branch in enumerate(finalTree):
                for twig in branch: zoneData2Tree.Add(twig, GH_Path(bCount))
        elif treeCount == 2:
            for bCount, branch in enumerate(finalTree):
                for twig in branch: zoneData3Tree.Add(twig, GH_Path(bCount))
Alternatively, they can be plugged into an EnergyPlus simulation", "windowObj.name windowObj.shadingSchName = schedule ModifiedHBZones = hb_hive.addToHoneybeeHive(HBZoneObjects, ghenv.Component.InstanceGuid.ToString() + str(uuid.uuid4())) else: ModifiedHBZones =", "True) minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z) maxXYPt = bbox.Corner(False, False, True) maxXYPt =", "# single shading planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(maxZPt), rc.Geometry.Vector3d.ZAxis)) # sort the planes sortedPlanes = sorted(planes, key=lambda", "Breps representing each window of the zone. These can be plugged into a", "on the surface baseSrfCenPt = _glzSrf.ClosestPoint(baseSrfCenPt) bool, centerPtU, centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt) if bool:", "'0.5, !- Blind Top Opening Multiplier\\n' + \\ '\\t' + ', !- Blind", "== False: assignEPCheck = False #Create the EnergyPlus blinds material and assign it", "created this way will automatically be assigned to the zone and the windowBreps", "the window level. 
srfData = False if zoneData == False: for listCount, header", "rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) elif horOrVertical == False: planeVec =", "= rs.frange(minZPt.Z + shadingRemainder, maxZPt.Z + 0.5*sc.doc.ModelAbsoluteTolerance, shadingHeight) try: for Z in zHeights:", "'FixedSlatAngle, !- Type of Slat Angle Control for Blinds\\n' + \\ '\\t' +", "+ schedule print msg ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg) checkData4 = False #Create a Python list", "windows facing East or West, tilting the shades like this will let in", "'EPWindowShades' ghenv.Component.Message = 'VER 0.0.55\\nSEP_11_2014' ghenv.Component.Category = \"Honeybee\" ghenv.Component.SubCategory = \"09 | Energy", "ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) if shadingHeight > 1: assignEPCheckInit = False warning = \"Note that", "minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z) maxXYPt = bbox.Corner(False, False, True) maxXYPt = rc.Geometry.Point3d(maxXYPt.X,", "load, cooling load or beam gain for a shade benefit simulation with the", "alignedDataTree.append([]) for zoneCount, windowList in enumerate(windowSrfs): if isZone == True: zoneName = zoneNames[zoneCount]", "= False if tolVec.X < 0 and tolVec.Y < 0: tolVec = rc.Geometry.Vector3d.Multiply(-1,", "shadingSurfaces: shdSrf.Transform(blindsTransform) else: distToGlass = 0 #Get the EnergyPlus distance to glass. EPDistToGlass", "= int(numOfShds) shadingHeight = glzHeight/numOfShd shadingRemainder = shadingHeight except: shadingHeight = distBetween shadingRemainder", "csv file is existed if not os.path.isfile(schedule): msg = \"Cannot find the shchedule", "the angle to North of any surface. 
def getAngle2North(normalVector): if north_ != None", "= getValueBasedOnOrientation(distToGlass_) # generate the planes planes, shadingHeight = analyzeGlz(_glzSrf, distBtwn, numShds, horOrVertical,", "window.\" windowNames.append(winNames) windowSrfs.append(winBreps) elif object.objectType == \"HBSurface\": isZoneList.append(0) warning = \"Note that, when", "which align with the branches for each window above. zoneData3Tree: Data trees of", "cannot create shades for intdoor windows.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) elif object.isPlanar ==", "and, if not, set a default. checkData5 = True if blindsMaterial_ == None:", "distBtwn, numShds, horOrVertical, lb_visualization, normalVector) # find the intersection crvs as the base", "inputs.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) isZone = False #Check the inputs and make", "blind material from the blind material component. If no material is connected here,", "sure that all HBObjects are of the same type. checkSameType = True if", "createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExterList[count], windowObj.name) windowObj.shadingControlName = 'BlindCntrlFor_' + windowObj.name windowObj.shadingSchName = schedule ModifiedHBZones", "+1]: targetValue = valueList[angleCount%len(valueList)] value = targetValue return value # If multiple shading", "def deconstructBlindMaterial(material): matLines = material.split('\\n') name = matLines[1].split(',')[0] reflect = float(matLines[2].split(',')[0]) transmit =", "License. \"\"\" Use this component to generate shades for Honeybee zone windows. The", "isZone): #Check if the user has hooked up a distBetwee or numOfShds. 
if", "= [] for branch in allData: checkHeader = [] dataHeaders = [] dataNumbers", "rc.Geometry.Point3d.Subtract(maxXYPt, tolVec) minXYPt = rc.Geometry.Point3d.Subtract(minXYPt, tolVec) #glazing distance glzHeight = minXYPt.DistanceTo(maxXYPt) # find", "the glazing by cardinal direction and assign different distances of shades to different", "a list of values and assign it to different cardinal directions. def getValueBasedOnOrientation(valueList):", "make sure that the direction of the surface is facing the outdoors in", "DataTree[Object]() zoneData1Tree = DataTree[Object]() zoneData2Tree = DataTree[Object]() zoneData3Tree = DataTree[Object]() for count, brep", "you can account for these shades using a 'Honeybee_EP Context Surfaces' component.\" print", "cardinal direction. For example, inputing 4 values for depths will assign each value", "= rc.Geometry.Point3d.Subtract(minXYPt, tolVec) #glazing distance glzHeight = minXYPt.DistanceTo(maxXYPt) # find number of shadings", "windowSrfsInit, shadings, alignedDataTree, ModifiedHBZones else: return False, [], [], [], [] else: print", "be shaded can also be input and shades can be joined together with", "between shades that are greater than 1. HBObjWShades will not be generated. shadeBreps", "float(depth) * normalVectorPerp).ToBrep() shadingSurfaces.append(shdSrf) except: pass #If the user has specified a distance", "EPshdAngleList.append(EPshdAngle) distToGlassList.append(distToGlass) EPinteriorOrExterList.append(EPinteriorOrExter) if assignEPCheckInit == False: assignEPCheck = False #Create the EnergyPlus", "have an outdoor boundary condition. EenergyPlus shades will not be assigned to this", "return -1 shadingSurfaces =[] #Define a function that can get the angle to", "!- Type of Slat Angle Control for Blinds\\n' + \\ '\\t' + ';", "value in degrees. If applied to windows facing East or West, tilting the", "all of the lists to each window. 
windowNamesFinal = [] windowBrepsFinal = []", "if zoneData == False and srfData == False and alignedDataTree != [[], [],", "isZoneList = [] assignEPCheck = True HBObjWShades = [] EPSlatOrientList = [] depthList", "directions. distToGlass = getValueBasedOnOrientation(distToGlass_) # generate the planes planes, shadingHeight = analyzeGlz(_glzSrf, distBtwn,", "warning = \"This component currently only supports inputs that are all HBZones or", "hb_hive.addToHoneybeeHive(HBZoneObjects, ghenv.Component.InstanceGuid.ToString() + str(uuid.uuid4())) else: ModifiedHBZones = [] return checkData, windowSrfsInit, shadings, alignedDataTree,", "be used as a true North direction or a number between 0 and", "srf in object.surfaces: if srf.hasChild: if srf.BC == 'OUTDOORS' or srf.BC == 'Outdoors':", "emittance, 0.25 mm thickness, 221 W/mK conductivity. blindsSchedule_: An optional schedule to raise", "any surface. def getAngle2North(normalVector): if north_ != None and north_.IsValid(): northVector = north_", "split up the glazing by cardinal direction and assign different numbers of shades", "try: winNm = header[2].split(' for ')[-1].split(': ')[0] except: winNm = header[2].split(' for ')[-1]", "generate vertical shades. You can also input lists of horOrVertical_ input, which will", "sun. If you have horizontal shades, use this input to angle shades downward.", "shadings intCrvs =[] for plane in planes: try: intCrvs.append(rc.Geometry.Brep.CreateContourCurves(_glzSrf, plane)[0]) except: print \"One", "branches for each window above. zoneData2Tree: Data trees of the zoneData2_, which align", "on things that are not connected. if checkSameType == True: checkData, windowNames, windowSrfsInit,", "<NAME> is licensed # under a Creative Commons Attribution-ShareAlike 3.0 Unported License. \"\"\"", "header on it. 
This header is necessary for data input to this component.\"", "= [] depthList = [] shadingHeightList = [] EPshdAngleList = [] distToGlassList =", "off from the y-axis to make North. The default North direction is set", "HBZones: The HBZones with the assigned shading (ready to be simulated). ---------------: ...", "\\ '\\t' + ', !- Back Side Slat Beam Visible Reflectance\\n' + \\", "EPBlindControl def main(): if _HBObjects != [] and sc.sticky.has_key('honeybee_release') == True and sc.sticky.has_key('ladybug_release')", "= rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z) #Adjust the points to ensure the creation of the", "to \"False\" to generate shades on the exterior. The default is set to", "221 W/mK conductivity. blindsSchedule_: An optional schedule to raise and lower the blinds.", "to the Y-axis (0 degrees). _depth: A number representing the depth of the", "component.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) #Align all of the lists to each window.", "we need to generate the shades. Set defaults on things that are not", "function that can get the angle to North of any surface. def getAngle2North(normalVector):", "= \"This component currently only supports inputs that are all HBZones or all", "if testVec.IsParallelTo(planeVec) == 0: minXYPt = bbox.Corner(False, True, True) minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y,", "of a grasshopper data tree. Alternatively, they can be plugged into an EnergyPlus", "\"_\" + name + ', !- Name\\n' + \\ '\\t' + EPSlatOrient +", "rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)) if testVec.IsParallelTo(planeVec) == 0: minXYPt = bbox.Corner(False,", "windowObjects.append(childSrf) windowNames.append([childSrf.name]) windowSrfs.append([childSrf.geometry]) #Make sure that all HBObjects are of the same type.", "east depth. 
Lists of vectors to be shaded can also be input and", "shades will not be assigned to this window.\" else: print \"One surface with", "together with the mergeVectors_ input. _numOfShds: The number of shades to generated for", "the shade benefit evaluation. - Provided by Honeybee 0.0.55 Args: _HBObjects: The HBZones", "shades using a 'Honeybee_EP Context Surfaces' component.\" print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) return shadingSurfaces,", "EPinteriorOrExter + ', !- Shading Type\\n' + \\ '\\t' + ', !- Construction", "# Define a vector to be used to generate the planes planeVec =", "!- Setpoint {W/m2, W or deg C}\\n' + \\ '\\t' + schedCntrl +", "rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)) tolVec.Unitize() tolVec = rc.Geometry.Vector3d.Multiply(sc.doc.ModelAbsoluteTolerance*2, tolVec) if tolVec.X > 0 and", "divisionPoints try: for point in planePoints: planes.append(rc.Geometry.Plane(point, planeVec)) except: # single shading planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(minXYPt),", "if assignEPCheck == True: for count, windowObj in enumerate(windowObjects): windowObj.blindsMaterial = createEPBlindMat(blindsMaterial, EPSlatOrientList[count],", "the data lists have a headers on them, which is necessary to match", "', !- Blind to Glass Distance {m}\\n' + \\ '\\t' + '0.5, !-", "+ \\ '\\t' + str(shadingHeight) +', !- Slat Separation {m}\\n' + \\ '\\t'", "= checkTheInputs(zoneNames, windowNames, windowSrfs, isZone) else: checkData == False #Generate the shades. if", "shading planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(maxZPt), rc.Geometry.Vector3d.ZAxis)) # sort the planes sortedPlanes = sorted(planes, key=lambda a: a.Origin.Z)", "> 180 or EPshdAngle < 0: warning = \"The input shdAngle_ value will", "+ str(blindsMaterial[1]) + ', !- Back Side Slat Beam Solar Reflectance\\n' + \\", "of angles to assign different shade angles to different cardinal directions. 
interiorOrExter_: Set", "= bbox.Corner(True, True, True) minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z) maxXYPt = bbox.Corner(False, False,", "own branch of a grasshopper data tree. shadeBreps: Breps representing each shade of", "!= None: if horOrVertical == True or horOrVertical == None: horOrVertical = True", "component. If no material is connected here, the component will automatically assign a", "rc.Geometry.Plane.WorldXY) finalAngle = math.degrees(angle) return finalAngle # Define a function that can split", "Back Side Slat Beam Solar Reflectance\\n' + \\ '\\t' + ', !- Slat", "Angle {deg}\\n' return EPBlindMat def createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExter, name): if schedule == 'ALWAYSON':", "= True HBObjWShades = [] EPSlatOrientList = [] depthList = [] shadingHeightList =", "that the values are parallel to the correct vector. testVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y,", "reflect = float(matLines[2].split(',')[0]) transmit = float(matLines[3].split(',')[0]) emiss = float(matLines[4].split(',')[0]) thickness = float(matLines[5].split(',')[0]) conduct", "trees. if checkData == True: windowBreps = DataTree[Object]() shadeBreps = DataTree[Object]() zoneData1Tree =", "baseSrfCenPt = rc.Geometry.AreaMassProperties.Compute(_glzSrf).Centroid # sometimes the center point is not located on the", "for twig in branch: zoneData2Tree.Add(twig, GH_Path(bCount)) elif treeCount == 2: for bCount, branch", "simulation has already been run. In this case, the component helps keep the", "shade to be generated on each window. You can also input lists of", "be generated. shadeBreps will still be produced and you can account for these", "sc.sticky[\"ladybug_ResultVisualization\"]() # find the normal of the surface in the center # note2developer:", "what the object is and make sure that we can run it through", "= [] assignEPCheck = True HBObjWShades = [] EPSlatOrientList = [] depthList =", "data trees. 
if checkData == True: windowBreps = DataTree[Object]() shadeBreps = DataTree[Object]() zoneData1Tree", "except: checkData5 = False warning = 'Blinds material is not a valid blinds", "isZone = True elif sum(isZoneList) == 0: isZone = False else: checkSameType =", "winNm = header[2].split(' for ')[-1] if str(winNm) == str(windowName.upper()): alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount]) srfData = True", "count, windowObj in enumerate(windowObjects): windowObj.blindsMaterial = createEPBlindMat(blindsMaterial, EPSlatOrientList[count], depthList[count], shadingHeightList[count], EPshdAngleList[count], distToGlassList[count], windowObj.name)", "= main() #Unpack the data trees. if checkData == True: windowBreps = DataTree[Object]()", "for branch in allData: checkHeader = [] dataHeaders = [] dataNumbers = []", "evaulation as each window is its own branch of a grasshopper data tree.", "connected. A material will be used with 0.65 solar reflectance, 0 transmittance, 0.9", "', !- Shading Control Type\\n' + \\ '\\t' + schedName + ', !-", "these inputs.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) isZone = False #Check the inputs and", "... windowBreps: Breps representing each window of the zone. 
These can be plugged", "rc.Geometry.Vector3d.VectorAngle(northVector, normalVector, rc.Geometry.Plane.WorldXY) finalAngle = math.degrees(angle) return finalAngle # Define a function that", "zone data for the HBZones_ that will be aligned with the generated windows.", "Control Is Active\\n' + \\ '\\t' + blindsMaterial[0] + \"_\" + name +", "shadeBreps, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, EPinteriorOrExter, assignEPCheckInit = makeShade(window, depths, numOfShd, _distBetween)", "vector to be used as a true North direction or a number between", "in object.childSrfs: windowObjects.append(childSrf) windowNames.append([childSrf.name]) windowSrfs.append([childSrf.geometry]) #Make sure that all HBObjects are of the", "benefit evaulation as each window is its own branch of a grasshopper data", "+ str(distToGlass) + ', !- Blind to Glass Distance {m}\\n' + \\ '\\t'", "horOrVertical == None: horOrVertical = True planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis)", "GH_Path(bCount)) elif treeCount == 2: for bCount, branch in enumerate(finalTree): for twig in", "[] for item in branchList: dataVal.append(item) dataPyList.append(dataVal) return dataPyList allData = [] allData.append(makePyTree(zoneData1_))", "blinds will assume the \"ALWAYS ON\" shcedule. 
north_: Input a vector to be", "!= rc.Geometry.Vector3d.ZAxis: normalVectorPerp = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) angleFromNorm = math.degrees(rc.Geometry.Vector3d.VectorAngle(normalVectorPerp, normalVector)) if normalVector.Z", "component helps keep the data tree paths of heating, cooling and beam gain", "the creation of the correct number of shades starting from the northernmost side", "zoneData = True #Test to see if the data is for the window", "= sc.sticky[\"ladybug_Preparation\"]() lb_mesh = sc.sticky[\"ladybug_Mesh\"]() lb_visualization = sc.sticky[\"ladybug_ResultVisualization\"]() # find the normal of", "Result data. blindsMaterial_: An optional blind material from the blind material component. If", "len(valueList) > 1: initAngles = rs.frange(0, 360, 360/len(valueList)) for an in initAngles: angles.append(an-(360/(2*len(valueList))))", "if treeCount == 0: for bCount, branch in enumerate(finalTree): for twig in branch:", "if there is a blinds schedule connected and, if not, set a default.", "float(matLines[4].split(',')[0]) thickness = float(matLines[5].split(',')[0]) conduct = float(matLines[6].split(';')[0]) return [name, reflect, transmit, emiss, thickness,", "#Adjust the points to ensure the creation of the correct number of shades", "= header[2].split(' for ')[-1].split(': ')[0] except: winNm = header[2].split(' for ')[-1] if str(winNm)", "= divisionPoints try: for point in planePoints: planes.append(rc.Geometry.Plane(point, planeVec)) except: # single shading", "in allData: checkHeader = [] dataHeaders = [] dataNumbers = [] for list", "[] depthList = [] shadingHeightList = [] EPshdAngleList = [] distToGlassList = []", "!- Slat Beam Visible Transmittance\\n' + \\ '\\t' + ', !- Front Side", "= makeShade(window, depths, numOfShd, _distBetween) shadings.append(shadeBreps) EPSlatOrientList.append(EPSlatOrient) depthList.append(depth) shadingHeightList.append(shadingHeight) EPshdAngleList.append(EPshdAngle) 
distToGlassList.append(distToGlass) EPinteriorOrExterList.append(EPinteriorOrExter) if", "twig in branch: zoneData1Tree.Add(twig, GH_Path(bCount)) elif treeCount == 1: for bCount, branch in", "generated shades. Returns: readMe!: ... ---------------: ... HBZones: The HBZones with the assigned", "else: return False, [], [], [], [] else: print \"You should first let", "numOfShds, horOrVertical, lb_visualization, normalVector): # find the bounding box bbox = glzSrf.GetBoundingBox(True) if", "Visible Transmittance\\n' + \\ '\\t' + ', !- Front Side Slat Beam Visible", "interiorOrExter_: Set to \"True\" to generate Shades on the interior and set to", "== False: for listCount, header in enumerate(branch): try: winNm = header[2].split(' for ')[-1].split(':", "= \"1\" except: pass from System import Object from System import Drawing from", "if isZone == True: zoneName = zoneNames[zoneCount] for windowCount, window in enumerate(windowList): windowBrepsFinal.append(window)", "rc.Geometry.Vector3d.Multiply(distToGlass, transVec) blindsTransform = rc.Geometry.Transform.Translation(finalTransVec) for shdSrf in shadingSurfaces: shdSrf.Transform(blindsTransform) else: distToGlass =", "just for visualization. _ The second way to use the component is to", "'\\t' + str(distToGlass) + ', !- Blind to Glass Distance {m}\\n' + \\", "to different cardinal directions. def getValueBasedOnOrientation(valueList): angles = [] if valueList == None", "+ ', !- Construction with Shading Name\\n' + \\ '\\t' + schedCntrlType +", "for angleCount in range(len(angles)-1): if angles[angleCount] <= (getAngle2North(normalVector))%360 <= angles[angleCount +1]: targetValue =", "str(uuid.uuid4())) else: ModifiedHBZones = [] return checkData, windowSrfsInit, shadings, alignedDataTree, ModifiedHBZones else: return", "[] allData.append(makePyTree(zoneData1_)) allData.append(makePyTree(zoneData2_)) allData.append(makePyTree(zoneData3_)) #Test to see if the data lists have a", "make the shades. 
_runIt: Set boolean to \"True\" to run the component and", "\"Honeybee_EP Context Surfaces\" component. ---------------: ... zoneData1Tree: Data trees of the zoneData1_, which", "if EPDistToGlass < 0.01: EPDistToGlass = 0.01 elif EPDistToGlass > 1: warning =", "windowNames, windowSrfs, isZone): #Check if the user has hooked up a distBetwee or", "object.getTypeByNormalAngle() if not hasattr(object, 'angle2North'): # find the type based on object.getAngle2North() if", "in shadingSurfaces: shdSrf.Transform(blindsTransform) else: distToGlass = 0 #Get the EnergyPlus distance to glass.", "Type\\n' + \\ '\\t' + schedName + ', !- Schedule Name\\n' + \\", "is that it can be used to assign blind objects to HBZones prior", "starting from the northernmost side of the window. tolVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z),", "versions of some of the outputs. EPshdAngleInint = angleFromNorm+shdAngle if EPshdAngleInint >= 0:", "False print \"You must provide a depth for the shades.\" ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, \"You must", "distBtwn = getValueBasedOnOrientation(distBtwn) # If multiple horizontal or vertical inputs are given, use", "0.01: EPDistToGlass = 0.01 elif EPDistToGlass > 1: warning = \"The input distToGlass_", "Shading Type\\n' + \\ '\\t' + ', !- Construction with Shading Name\\n' +", "windowSrfsInit: shadeBreps, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, EPinteriorOrExter, assignEPCheckInit = makeShade(window, depths, numOfShd,", "was not matched with its respective zone/surface data.\" if checkData2 == True and", "0 or distBetween == 0: sortedPlanes = [] elif horOrVertical == True: #", "True: # Horizontal #Define a bounding box for use in calculating the number", "schedule def analyzeGlz(glzSrf, distBetween, numOfShds, horOrVertical, lb_visualization, normalVector): # find the bounding box", "shades to different directions. 
distBtwn = getValueBasedOnOrientation(distBtwn) # If multiple horizontal or vertical", "\"A window was not matched with its respective zone/surface data.\" if checkData2 ==", "Active\\n' + \\ '\\t' + blindsMaterial[0] + \"_\" + name + ', !-", "0: isZone = False else: checkSameType = False warning = \"This component currently", "cause EnergyPlus to crash.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) #Check the depth and the", "= createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExterList[count], windowObj.name) windowObj.shadingControlName = 'BlindCntrlFor_' + windowObj.name windowObj.shadingSchName = schedule", "intCrvs =[] for plane in planes: try: intCrvs.append(rc.Geometry.Brep.CreateContourCurves(_glzSrf, plane)[0]) except: print \"One intersection", "warning) #Align all of the lists to each window. windowNamesFinal = [] windowBrepsFinal", "os.path.isfile(schedule): msg = \"Cannot find the shchedule file: \" + schedule print msg", "Front Side Slat Diffuse Visible Reflectance\\n' + \\ '\\t' + ', !- Back", "as each window is its own branch of a grasshopper data tree. Alternatively,", "side of the window. tolVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)) tolVec.Unitize()", "base for shadings intCrvs =[] for plane in planes: try: intCrvs.append(rc.Geometry.Brep.CreateContourCurves(_glzSrf, plane)[0]) except:", "Back Side Slat Beam Visible Reflectance\\n' + \\ '\\t' + ', !- Slat", "North. The default North direction is set to the Y-axis (0 degrees). _depth:", "_glzSrf.ClosestPoint(baseSrfCenPt) bool, centerPtU, centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt) if bool: normalVector = _glzSrf.Faces[0].NormalAt(centerPtU, centerPtV) #return", "Front Side Slat Beam Visible Reflectance\\n' + \\ '\\t' + ', !- Back", "data for the HBZones_ that will be aligned with the generated windows. 
Use", "Data trees of the zoneData2_, which align with the branches for each window", "HBZones with the assigned shading (ready to be simulated). ---------------: ... windowBreps: Breps", "+ '180; !- Maximum Slat Angle {deg}\\n' return EPBlindMat def createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExter,", "of 0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK", "not located on the surface baseSrfCenPt = _glzSrf.ClosestPoint(baseSrfCenPt) bool, centerPtU, centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt)", "another component for each of these inputs.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) isZone =", "move them along the normal vector. if distToGlass != None: transVec = normalVectorPerp", "inputDataTreeCount, branch in enumerate(allHeaders): #Test to see if the data is for the", "the planes planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) #Define a bounding box", "# Vertical # Define a vector to be used to generate the planes", "+ \\ '\\t' + EPSlatOrient + ', !- Slat Orientation\\n' + \\ '\\t'", "= rc.Geometry.Vector3d.VectorAngle(northVector, normalVector, rc.Geometry.Plane.WorldXY) finalAngle = math.degrees(angle) return finalAngle # Define a function", "by cardinal direction and assign different shdAngle_ to different directions. 
shdAngle = getValueBasedOnOrientation(shdAngle_)", "Blind Left Side Opening Multiplier\\n' + \\ '\\t' + '0.5, !- Blind Right", "if tolVec.X < 0 and tolVec.Y > 0: tolVec = rc.Geometry.Vector3d.Multiply(1, tolVec) norOrient", "= bbox.Center #Test to be sure that the values are parallel to the", "window in windowSrfsInit: shadeBreps, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, EPinteriorOrExter, assignEPCheckInit = makeShade(window,", "getValueBasedOnOrientation(distBtwn) # If multiple horizontal or vertical inputs are given, use it to", "= distToGlass + (depth)*(0.5)*math.cos(math.radians(EPshdAngle)) if EPDistToGlass < 0.01: EPDistToGlass = 0.01 elif EPDistToGlass", "E+ will crash. assignEPCheckInit = True if depth > 1: assignEPCheckInit = False", "\\ '\\t' + EPinteriorOrExter + ', !- Shading Type\\n' + \\ '\\t' +", "\" + schedule print msg ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg) checkData4 = False #Create a Python", "minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)) tolVec.Unitize() tolVec = rc.Geometry.Vector3d.Multiply(sc.doc.ModelAbsoluteTolerance*2, tolVec) if tolVec.X > 0", "checkData5 = True if blindsMaterial_ == None: print \"No blinds material has been", "== True: EPinteriorOrExter = 'InteriorBlind' else: EPinteriorOrExter = 'ExteriorBlind' #Generate the shade curves", "'\\t' + str(blindsMaterial[2]) + ', !- Slat Beam Solar Transmittance\\n' + \\ '\\t'", "+ EPinteriorOrExter + ', !- Shading Type\\n' + \\ '\\t' + ', !-", "distBetween, numOfShds, horOrVertical, lb_visualization, normalVector): # find the bounding box bbox = glzSrf.GetBoundingBox(True)", "of the surface in the center # note2developer: there might be cases that", "shades or \"False\" to generate vertical shades. You can also input lists of", "like shading depths greater than 1. HBObjWShades will not be generated. 
# EnergyPlus Window Shade Generator for Honeybee Zones
# By <NAME>
# <EMAIL>
# Ladybug started by <NAME> is licensed
# under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

"""
Use this component to generate shades for Honeybee zone windows. The component has two main uses:
_
The first is that it can be used to assign blind objects to HBZones prior to simulation.  These blinds can be dynamically controlled via a schedule.  Note that shades created this way will automatically be assigned to the zone and the windowBreps and shadeBreps outputs are just for visualization.
_
The second way to use the component is to create test shade areas for shade benefit evaluation after an energy simulation has already been run.  In this case, the relevant zone data (eg. cooling load or beam gain) should be plugged up to the "zoneData" inputs and the output "zoneDataTree" used in the shade benefit evaluation.
-
Provided by Honeybee 0.0.55

    Args:
        _HBObjects: The HBZones out of any of the HB components that generate zones.  These should be the same zones that are fed into the Run Energy Simulation component.  Zones read back into Grasshopper with the Import idf component will not align correctly with the EP Result data.
        blindsMaterial_: An optional material from the "Honeybee_EnergyPlus Blinds Material" component.  If no material is connected here, the component will automatically assign a material of 0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK conductivity.
        blindsSchedule_: An optional schedule to raise and lower the blinds.  If no value is connected here, the blinds will assume the "ALWAYS ON" schedule.
        _depth: A number representing the depth of the shade to be generated on each window.  You can also input lists of depths, which will assign different depths based on cardinal direction.  For example, inputting 4 values for depths will assign each value of the list as follows: item 0 = north depth, item 1 = west depth, item 2 = south depth, item 3 = east depth.
        _numOfShds: The number of shades to be generated for each glazed surface.
        _distBetween: An alternate option to _numOfShds where the input here is the distance in Rhino units between each shade.
        horOrVertical_: Set to "True" to generate horizontal shades or "False" to generate vertical shades.  You can also input lists of horOrVertical_ input, which will assign different orientations based on cardinal direction.
        shdAngle_: A number between -90 and 90 that represents an angle in degrees to rotate the shades.  The default is set to "0" for no rotation.  If you have vertical shades, use this to rotate them towards the South by a certain value in degrees.  If applied to windows facing East or West, tilting the shades like this will let in more winter sun than summer sun.  You can also input lists of angles to assign different shade angles to different cardinal directions.
        distToGlass_: A number representing the offset distance from the glass to make the shades.
        interiorOrExter_: Set to "True" to generate shades on the interior and set to "False" to generate shades on the exterior.  The default is set to "False" to generate exterior shades.
        north_: Input a vector to be used as a true North direction, or a number between 0 and 360 that represents the degrees off from the y-axis to make North.  The default North direction is set to the Y-axis (0 degrees).
        _runIt: Set boolean to "True" to run the component and generate shades.
        ---------------: ...
        zoneData1_: Optional zone data for the _HBObjects, such as cooling load or beam gain, for a shade benefit simulation with the generated shades.
        zoneData2_: Optional zone data for the _HBObjects for a shade benefit simulation with the generated shades.
        zoneData3_: Optional zone data for the _HBObjects for a shade benefit simulation with the generated shades.
    Returns:
        readMe!: ...
        ---------------: ...
        windowBreps: Breps representing each window of the zone.  These can be plugged into a shade benefit evaluation as each window is its own branch of a grasshopper data tree.
        shadeBreps: Breps representing each shade of the window.  These can be plugged into a shade benefit evaluation as each window is its own branch of a grasshopper data tree.  Alternatively, they can be plugged into an EnergyPlus simulation with the "Honeybee_EP Context Surfaces" component.
        ---------------: ...
        zoneData1Tree: Data trees of the zoneData1_, which align with the branches for each window above.
        zoneData2Tree: Data trees of the zoneData2_, which align with the branches for each window above.
        zoneData3Tree: Data trees of the zoneData3_, which align with the branches for each window above.
        HBObjWShades: The HBZones with the generated blinds assigned to their windows.
"""

ghenv.Component.Name = "Honeybee_EnergyPlus Window Shade Generator"

import Rhino as rc
import rhinoscriptsyntax as rs
import scriptcontext as sc
import uuid
import math
import os
import Grasshopper.Kernel as gh
from Grasshopper import DataTree
from Grasshopper.Kernel.Data import GH_Path
from System import Object

w = gh.GH_RuntimeMessageLevel.Warning
tol = sc.doc.ModelAbsoluteTolerance
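The cardinal-direction mapping described for _depth, horOrVertical_ and shdAngle_ above (item 0 = north, item 1 = west, item 2 = south, item 3 = east) can be checked outside of Rhino with plain Python. This is a minimal, hypothetical stand-in for the component's getValueBasedOnOrientation logic: the `pick_by_orientation` name and the counter-clockwise sector convention are assumptions for illustration, not the component's exact code.

```python
def pick_by_orientation(values, angle_to_north):
    """Pick one value from a list based on a window's angle to North (degrees).

    The compass is split into len(values) equal sectors centered on North,
    West, South, East, ... (counter-clockwise), mirroring how the component
    maps list items to cardinal directions.
    """
    if not values:
        return None
    if len(values) == 1:
        return values[0]
    sector = 360.0 / len(values)
    # Shift by half a sector so each cardinal direction sits at a sector center.
    index = int(((angle_to_north + sector / 2.0) % 360.0) // sector)
    return values[index % len(values)]

# item 0 = north depth, item 1 = west depth, item 2 = south depth, item 3 = east depth
depths = [0.5, 0.8, 1.2, 0.8]
print(pick_by_orientation(depths, 0))    # a north-facing window
print(pick_by_orientation(depths, 185))  # a roughly south-facing window
```

A single-item list applies to every orientation, matching the component's behavior when only one depth is supplied.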
#Define a function that can get the angle to North of any surface.
def getAngle2North(normalVector):
    if north_ != None and north_.IsValid():
        northVector = north_
    else:
        northVector = rc.Geometry.Vector3d.YAxis
    angle = rc.Geometry.Vector3d.VectorAngle(northVector, normalVector, rc.Geometry.Plane.WorldXY)
    finalAngle = math.degrees(angle)
    return finalAngle

#Check the inputs and align the connected data with each window so that everything can be coordinated with this component.
def checkTheInputs(zoneNames, windowNames, windowSrfs, isZone):
    #Check if the user has hooked up a distBetween or numOfShds.
    if _distBetween == [] and _numOfShds == []:
        numOfShd = [1]
        print "No value is connected for number of shades. The component will be run with one shade per window."
    else:
        numOfShd = _numOfShds
    #Check the depths.
    checkData2 = True
    if _depth == []:
        checkData2 = False
        print "You must provide a depth for the shades."
        ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, "You must provide a depth for the shades.")
    #Check the blinds schedule.
    checkData4 = True
    if blindsSchedule_ == None:
        schedule = "ALWAYSON"
        print "No blinds schedule has been connected. It will be assumed that the blinds are always on."
    else:
        schedule = blindsSchedule_.upper()
    HBScheduleList = sc.sticky["honeybee_ScheduleLib"].keys()
    if schedule!=None and not schedule.lower().endswith(".csv") and schedule not in HBScheduleList:
        msg = "The schedule " + schedule + " cannot be found in the Honeybee schedule library."
        print msg
        ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg)
        checkData4 = False
    elif schedule!=None and schedule.lower().endswith(".csv"):
        # check that the csv file exists
        if not os.path.isfile(schedule):
            msg = "Cannot find the schedule file: " + schedule
            print msg
            ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg)
            checkData4 = False
    #Check if there is a blinds material connected and, if not, set a default.
    checkData5 = True
    if blindsMaterial_ == None:
        print "No blinds material has been connected. A material will be used with 0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK conductivity."
        blindsMaterial = ['DEFAULTBLINDSMATERIAL', 0.65, 0, 0.9, 0.00025, 221]
    else:
        try:
            blindsMaterial = deconstructBlindMaterial(blindsMaterial_)
        except:
            checkData5 = False
            warning = 'Blinds material is not a valid blinds material from the "Honeybee_EnergyPlus Blinds Material" component.'
            print warning
            ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, warning)
    #Convert the connected zone data to python lists.
    def makePyTree(zoneData):
        dataPyList = []
        for i in range(zoneData.BranchCount):
            branchList = zoneData.Branch(i)
            dataVal = []
            for item in branchList:
                dataVal.append(item)
            dataPyList.append(dataVal)
        return dataPyList
    allData = []
    allData.append(makePyTree(zoneData1_))
    allData.append(makePyTree(zoneData2_))
    allData.append(makePyTree(zoneData3_))
    #Test to see if the data lists have headers on them, which is necessary to match the data to a zone or window.
    checkData3 = True
    checkBranches = []
    allHeaders = []
    allNumbers = []
    for branch in allData:
        checkHeader = []
        dataHeaders = []
        dataNumbers = []
        for list in branch:
            if str(list[0]) == "key:location/dataType/units/frequency/startsAt/endsAt":
                checkHeader.append(1)
                dataHeaders.append(list[:7])
                dataNumbers.append(list[7:])
        allHeaders.append(dataHeaders)
        allNumbers.append(dataNumbers)
        if sum(checkHeader) == len(branch): pass
        else:
            checkData3 = False
            warning = "Not all of the connected zoneData has a Ladybug header on it. This header is necessary for data input to this component."
            print warning
            ghenv.Component.AddRuntimeMessage(w, warning)
    #Align all of the lists to each window.
    windowNamesFinal = []
    windowBrepsFinal = []
    alignedDataTree = []
    for item in allData:
        alignedDataTree.append([])
    for zoneCount, windowList in enumerate(windowSrfs):
        if isZone == True:
            zoneName = zoneNames[zoneCount]
        for windowCount, window in enumerate(windowList):
            windowBrepsFinal.append(window)
            windowName = windowNames[zoneCount][windowCount]
            windowNamesFinal.append(windowName)
            for inputDataTreeCount, branch in enumerate(allHeaders):
                #Test to see if the data is for the zone level.
                zoneData = False
                if isZone == True:
                    for listCount, header in enumerate(branch):
                        if header[2].split(' for ')[-1] == zoneName.upper():
                            alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount])
                            zoneData = True
                #Test to see if the data is for the window level.
                srfData = False
                if zoneData == False:
                    for listCount, header in enumerate(branch):
                        try: winNm = header[2].split(' for ')[-1].split(': ')[0]
                        except: winNm = header[2].split(' for ')[-1]
                        if winNm == windowName.upper():
                            alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount])
                            srfData = True
                if zoneData == False and srfData == False and alignedDataTree != [[], [], []]:
                    print "A connected zoneData branch could not be matched to the window " + windowName + "."
    if checkData2 == True and checkData3 == True and checkData4 == True and checkData5 == True:
        checkData = True
    else:
        checkData = False
    return checkData, windowNamesFinal, windowBrepsFinal, _depth, alignedDataTree, numOfShd, blindsMaterial, schedule

#Generate the base planes that the shades will be built from.
def analyzeGlz(glzSrf, distBetween, numOfShds, horOrVertical, lb_visualization, normalVector):
    # find the bounding box
    bbox = glzSrf.GetBoundingBox(True)
    if horOrVertical == None:
        horOrVertical = True
    if numOfShds == None and distBetween == None:
        numOfShds = 1
    if numOfShds == 0 or distBetween == 0:
        sortedPlanes = []
    elif horOrVertical == True:
        # Horizontal: stack the base planes up the height of the glazing.
        minZPt = bbox.Corner(True, True, True)
        minZPt = rc.Geometry.Point3d(minZPt.X, minZPt.Y, minZPt.Z)
        maxZPt = bbox.Corner(True, True, False)
        maxZPt = rc.Geometry.Point3d(maxZPt.X, maxZPt.Y, maxZPt.Z - sc.doc.ModelAbsoluteTolerance)
        centerPt = bbox.Center
        # glazing heights
        glzHeight = minZPt.DistanceTo(maxZPt)
        # find number of shadings
        try:
            numOfShd = int(numOfShds)
            shadingHeight = glzHeight/numOfShd
            shadingRemainder = shadingHeight
        except:
            shadingHeight = distBetween
            shadingRemainder = (((glzHeight/distBetween) - math.floor(glzHeight/distBetween))*distBetween)
            if shadingRemainder == 0:
                shadingRemainder = shadingHeight
        # find shading base planes
        planeOrigins = []
        planes = []
        X, Y = centerPt.X, centerPt.Y
        zHeights = rs.frange(minZPt.Z + shadingRemainder, maxZPt.Z + 0.5*sc.doc.ModelAbsoluteTolerance, shadingHeight)
        try:
            for Z in zHeights:
                planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(X, Y, Z), rc.Geometry.Vector3d.ZAxis))
        except:
            # single shading
            planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(maxZPt), rc.Geometry.Vector3d.ZAxis))
        # sort the planes
        sortedPlanes = sorted(planes, key=lambda a: a.Origin.Z)
    elif horOrVertical == False:
        # Vertical: march the base planes across the width of the glazing.
        # Define a vector to be used to generate the shading base planes.
        planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)
        planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis)
        # Define a bounding box for use in calculating the number of shades to generate.
        minXYPt = bbox.Corner(True, True, True)
        minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z)
        maxXYPt = bbox.Corner(True, False, True)
        maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)
        # Adjust the points to ensure the values are parallel to the correct vector.
        testVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z))
        # glazing distance
        glzHeight = minXYPt.DistanceTo(maxXYPt)
        # find number of shadings
        try:
            numOfShd = int(numOfShds)
            shadingHeight = glzHeight/numOfShd
        except:
            shadingHeight = distBetween
        # find shading base planes
        planeOrigins = []
        planes = []
        pointCurve = rc.Geometry.Curve.CreateControlPointCurve([maxXYPt, minXYPt])
        divisionParams = pointCurve.DivideByLength(shadingHeight, True)
        divisionPoints = []
        for param in divisionParams:
            divisionPoints.append(pointCurve.PointAt(param))
        planePoints = divisionPoints
        try:
            for point in planePoints:
                planes.append(rc.Geometry.Plane(point, planeVec))
        except:
            # single shading
            planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(minXYPt), planeVec))
        sortedPlanes = planes
    return sortedPlanes, shadingHeight
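The spacing arithmetic inside analyzeGlz reduces to a few lines that can be checked without Rhino: either the glazing height is divided by a target number of shades, or a fixed distance between shades is used and the leftover strip below the lowest shade is found with math.floor. A stand-alone sketch of that arithmetic; the function name and keyword arguments are hypothetical, not part of the component:

```python
import math

def shade_spacing(glz_height, num_of_shds=None, dist_between=None):
    """Return (shading_height, remainder) for one glazed surface.

    Mirrors the two input options: a fixed number of shades (_numOfShds)
    or a fixed distance in model units between shades (_distBetween).
    """
    if num_of_shds is not None:
        shading_height = glz_height / float(num_of_shds)
        remainder = shading_height
    else:
        shading_height = float(dist_between)
        # Strip left over after as many whole intervals as fit in the height.
        remainder = (glz_height / dist_between
                     - math.floor(glz_height / dist_between)) * dist_between
        if remainder == 0:
            remainder = shading_height
    return shading_height, remainder

print(shade_spacing(3.0, num_of_shds=2))     # two shades split the height evenly
print(shade_spacing(3.2, dist_between=1.0))  # three whole intervals fit, a small strip remains
```

The remainder is what analyzeGlz uses as the starting offset for the lowest shading plane, so the planes line up with the top of the glazing rather than the bottom.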
#Generate the shade geometry for a single glazing surface and derive its EnergyPlus blind properties.
def makeShade(_glzSrf, depth, numShds, distBtwn, lb_visualization):
    # find the normal of the surface at its center point.
    # note that, if the surface is not planar and the normal is changing from point to point,
    # the center point normal may not be representative of the whole surface.
    baseSrfCenPt = rc.Geometry.AreaMassProperties.Compute(_glzSrf).Centroid
    # sometimes the center point is not located on the surface
    baseSrfCenPt = _glzSrf.ClosestPoint(baseSrfCenPt)
    bool, centerPtU, centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt)
    if bool:
        normalVector = _glzSrf.Faces[0].NormalAt(centerPtU, centerPtV)
    else:
        print "Couldn't find the normal of the glazing surface. Please check the surface and try again!"
        return -1
    shadingSurfaces = []
    #Define a function that can split up a list of values and assign it to different cardinal directions.
    def getValueBasedOnOrientation(valueList):
        angles = []
        if valueList == None or len(valueList) == 0:
            value = None
        elif len(valueList) == 1:
            value = valueList[0]
        else:
            initAngles = rs.frange(0, 360, 360/len(valueList))
            for an in initAngles:
                angles.append(an - (360/(2*len(valueList))))
            angles.append(360)
            for angleCount in range(len(angles)-1):
                if angles[angleCount] <= (getAngle2North(normalVector))%360 <= angles[angleCount +1]:
                    targetValue = valueList[angleCount%len(valueList)]
                    value = targetValue
        return value
    # If multiple inputs are given, use them to split up the glazing by cardinal direction
    # and assign different values to different directions.
    depth = getValueBasedOnOrientation(depth)
    numShds = getValueBasedOnOrientation(numShds)
    distBtwn = getValueBasedOnOrientation(distBtwn)
    horOrVertical = getValueBasedOnOrientation(horOrVertical_)
    shdAngle = getValueBasedOnOrientation(shdAngle_)
    interiorOrExter = getValueBasedOnOrientation(interiorOrExter_)
    distToGlass = getValueBasedOnOrientation(distToGlass_)
    #Make a perpendicular version of the normal vector and, for interior shades, reverse the normal vector.
    normalVectorPerp = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)
    if interiorOrExter == True:
        normalVectorPerp.Reverse()
    else:
        interiorOrExter = False
    #If a shdAngle is provided, use it to rotate the shades.
    if shdAngle == None: shdAngle = 0
    angleFromNorm = math.degrees(rc.Geometry.Vector3d.VectorAngle(normalVectorPerp, normalVector))
    if normalVector.Z < 0: angleFromNorm = angleFromNorm*(-1)
    #Generate the shade curves based on the planes of the glazing.
    sortedPlanes, shadingHeight = analyzeGlz(_glzSrf, distBtwn, numShds, horOrVertical, lb_visualization, normalVector)
    # find the intersection crvs as the base for shadings
    intCrvs = []
    for plane in sortedPlanes:
        try:
            intCrvs.append(rc.Geometry.Brep.CreateContourCurves(_glzSrf, plane)[0])
        except:
            print "One intersection failed."
    # extrude the intersection curves by the depth to make the shade surfaces.
    shadeVec = rc.Geometry.Vector3d(normalVectorPerp)
    shadeVec.Unitize()
    shadeVec = rc.Geometry.Vector3d.Multiply(depth, shadeVec)
    for crv in intCrvs:
        shadingSurfaces.append(rc.Geometry.Surface.CreateExtrusion(crv, shadeVec).ToBrep())
    #Move the shades away from the glass if a distToGlass_ is connected.
    if distToGlass != None:
        transVec = normalVectorPerp
        transVec.Unitize()
        finalTransVec = rc.Geometry.Vector3d.Multiply(distToGlass, transVec)
        blindsTransform = rc.Geometry.Transform.Translation(finalTransVec)
        for shdSrf in shadingSurfaces:
            shdSrf.Transform(blindsTransform)
    else:
        distToGlass = 0
    #Make EP versions of some of the outputs.
    EPshdAngleInint = angleFromNorm+shdAngle
    if EPshdAngleInint > 0:
        EPshdAngle = 90 - EPshdAngleInint
    else:
        EPshdAngle = 90 + (EPshdAngleInint)*-1
    #Get the EnergyPlus distance to glass.
    EPDistToGlass = distToGlass + (depth)*(0.5)*math.cos(math.radians(EPshdAngle))
    if EPDistToGlass < 0.01:
        EPDistToGlass = 0.01
    #Check the depth, the distance to glass and the shadingHeight to see if E+ will crash.
    assignEPCheckInit = True
    if EPDistToGlass > 1:
        assignEPCheckInit = False
        warning = "The input distToGlass_ value is so large that it will cause EnergyPlus to crash."
        print warning
        ghenv.Component.AddRuntimeMessage(w, warning)
    if depth > 1:
        assignEPCheckInit = False
        warning = "Note that E+ does not like shading depths greater than 1. HBObjWShades will not be generated. shadeBreps will still be produced and you can account for these shades using a 'Honeybee_EP Context Surfaces' component."
        print warning
        ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning)
    if shadingHeight > 1:
        assignEPCheckInit = False
        warning = "Note that E+ does not like distances between shades that are greater than 1. HBObjWShades will not be generated. shadeBreps will still be produced and you can account for these shades using a 'Honeybee_EP Context Surfaces' component."
        print warning
        ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning)
    if horOrVertical == True:
        EPSlatOrient = 'Horizontal'
    else:
        EPSlatOrient = 'Vertical'
    if interiorOrExter == True:
        EPinteriorOrExter = 'InteriorBlind'
    else:
        EPinteriorOrExter = 'ExteriorBlind'
    return shadingSurfaces, EPSlatOrient, depth, shadingHeight, EPshdAngle, EPDistToGlass, EPinteriorOrExter, assignEPCheckInit

#Deconstruct the output of the "Honeybee_EnergyPlus Blinds Material" component into its fields.
def deconstructBlindMaterial(material):
    matLines = material.split('\n')
    name = matLines[1].split(',')[0]
    reflect = float(matLines[2].split(',')[0])
    transmit = float(matLines[3].split(',')[0])
    emiss = float(matLines[4].split(',')[0])
    thickness = float(matLines[5].split(',')[0])
    conduct = float(matLines[6].split(';')[0])
    return [name, reflect, transmit, emiss, thickness, conduct]

#Write the EnergyPlus WindowMaterial:Blind text string for a window.
def createEPBlindMat(blindsMaterial, EPSlatOrient, depth, shadingHeight, EPshdAngle, EPDistToGlass, name):
    EPBlindMat = 'WindowMaterial:Blind,\n' + \
        '\t' + blindsMaterial[0] + "_" + name + ', !- Name\n' + \
        '\t' + EPSlatOrient + ', !- Slat Orientation\n' + \
        '\t' + str(depth) + ', !- Slat Width {m}\n' + \
        '\t' + str(shadingHeight) +', !- Slat Separation {m}\n' + \
        '\t' + str(blindsMaterial[4]) + ', !- Slat Thickness {m}\n' + \
        '\t' + str(EPshdAngle) + ', !- Slat Angle {deg}\n' + \
        '\t' + str(blindsMaterial[5]) + ', !- Slat Conductivity {W/m-K}\n' + \
        '\t' + str(blindsMaterial[2]) + ', !- Slat Beam Solar Transmittance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Front Side Slat Beam Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Beam Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[2]) + ', !- Slat Diffuse Solar Transmittance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Front Side Slat Diffuse Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Diffuse Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[2]) + ', !- Slat Beam Visible Transmittance\n' + \
        '\t' + ', !- Front Side Slat Beam Visible Reflectance\n' + \
        '\t' + ', !- Back Side Slat Beam Visible Reflectance\n' + \
        '\t' + ', !- Slat Diffuse Visible Transmittance\n' + \
        '\t' + ', !- Front Side Slat Diffuse Visible Reflectance\n' + \
        '\t' + ', !- Back Side Slat Diffuse Visible Reflectance\n' + \
        '\t' + ', !- Slat Infrared Hemispherical Transmittance\n' + \
        '\t' + str(blindsMaterial[3]) + ', !- Front Side Slat Infrared Hemispherical Emissivity\n' + \
        '\t' + str(blindsMaterial[3]) + ', !- Back Side Slat Infrared Hemispherical Emissivity\n' + \
        '\t' + str(EPDistToGlass) + ', !- Blind to Glass Distance {m}\n' + \
        '\t' + '0.5, !- Blind Top Opening Multiplier\n' + \
        '\t' + '0.5, !- Blind Bottom Opening Multiplier\n' + \
        '\t' + '0.5, !- Blind Left Side Opening Multiplier\n' + \
        '\t' + '0.5, !- Blind Right Side Opening Multiplier\n' + \
        '\t' + ', !- Minimum Slat Angle {deg}\n' + \
        '\t' + '180; !- Maximum Slat Angle {deg}\n'
    return EPBlindMat

#Write the EnergyPlus WindowProperty:ShadingControl text string for a window.
def createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExter, name):
    if schedule == 'ALWAYSON':
        schedCntrlType = 'AlwaysOn'
        schedCntrl = 'No'
        schedName = ''
    else:
        schedName = schedule
        schedCntrlType = 'OnIfScheduleAllows'
        schedCntrl = 'Yes'
    EPBlindControl = 'WindowProperty:ShadingControl,\n' + \
        '\t' + 'BlindCntrlFor_' + name +', !- Name\n' + \
        '\t' + EPinteriorOrExter + ', !- Shading Type\n' + \
        '\t' + ', !- Construction with Shading Name\n' + \
        '\t' + schedCntrlType + ', !- Shading Control Type\n' + \
        '\t' + schedName + ', !- Schedule Name\n' + \
        '\t' + ', !- Setpoint {W/m2, W or deg C}\n' + \
        '\t' + schedCntrl + ', !- Shading Control Is Scheduled\n' + \
        '\t' + 'No, !- Glare Control Is Active\n' + \
        '\t' + blindsMaterial[0] + "_" + name + ', !- Shading Device Material Name\n' + \
        '\t' + 'FixedSlatAngle, !- Type of Slat Angle Control for Blinds\n' + \
        '\t' + '; !- Slat Angle Schedule Name\n'
    return EPBlindControl

def main():
    if _HBObjects != [] and sc.sticky.has_key('honeybee_release') == True and sc.sticky.has_key('ladybug_release') == True:
        #Import the classes
        hb_EPZone = sc.sticky["honeybee_EPZone"]
        hb_EPSrf = sc.sticky["honeybee_EPSurface"]
        hb_EPFenSurface = sc.sticky["honeybee_EPFenSurface"]
        hb_hive = sc.sticky["honeybee_Hive"]()
        lb_preparation = sc.sticky["ladybug_Preparation"]()
        lb_mesh = sc.sticky["ladybug_Mesh"]()
        lb_visualization = sc.sticky["ladybug_ResultVisualization"]()
        #Make the lists that will be filled up
        zoneNames = []
        windowNames = []
        windowSrfs = []
        windowObjects = []
        isZoneList = []
        shadings = []
        EPSlatOrientList = []
        depthList = []
        shadingHeightList = []
        EPshdAngleList = []
        distToGlassList = []
        EPinteriorOrExterList = []
        assignEPCheck = True
        #Call the objects from the hive.
        HBZoneObjects = hb_hive.callFromHoneybeeHive(_HBObjects)
        #Find the windows of the connected objects so that we can run them through this component's functions.
        for object in HBZoneObjects:
            if object.objectType == "HBZone":
                isZoneList.append(1)
                zoneNames.append(object.name)
                winBreps = []
                winNames = []
                for srf in object.surfaces:
                    if srf.hasChild:
                        if srf.BC == 'OUTDOORS' or srf.BC == 'Outdoors':
                            if srf.isPlanar == True:
                                for childSrf in srf.childSrfs:
                                    windowObjects.append(childSrf)
                                    winNames.append(childSrf.name)
                                    winBreps.append(childSrf.geometry)
                            else:
                                assignEPCheck = False
                                warning = "A surface with a window is not planar and E+ cannot create shades for it."
                                print warning
                                ghenv.Component.AddRuntimeMessage(w, warning)
                        else:
                            warning = "A surface with a window does not have an outdoor boundary condition. EnergyPlus cannot create shades for indoor windows and would just freak out with blinds."
                            print warning
                            ghenv.Component.AddRuntimeMessage(w, warning)
                windowNames.append(winNames)
                windowSrfs.append(winBreps)
            elif object.objectType == "HBSurface":
                isZoneList.append(0)
                warning = "Note that, when using this component for individual surfaces, you should make sure that the normal of the surface is facing the outdoors in order to be sure that your shades are previewing correctly."
                print warning
                ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning)
                if not hasattr(object, 'angle2North'):
                    # find the type based on the surface's normal angle
                    object.type = object.getTypeByNormalAngle()
                if not hasattr(object, "BC"):
                    object.BC = 'OUTDOORS'
                if object.hasChild:
                    if object.isPlanar == False:
                        assignEPCheck = False
                        warning = "The input surface is not planar and E+ cannot create shades for it."
                        print warning
                        ghenv.Component.AddRuntimeMessage(w, warning)
                    else:
                        for childSrf in object.childSrfs:
                            windowObjects.append(childSrf)
                            windowNames.append([childSrf.name])
                            windowSrfs.append([childSrf.geometry])
        #Make sure that all HBObjects are of the same type.
        checkSameType = True
        if sum(isZoneList) == len(_HBObjects):
            isZone = True
        elif sum(isZoneList) == 0:
            isZone = False
        else:
            checkSameType = False
            warning = "This component currently only supports inputs that are all HBZones or all HBSrfs but not both. For now, just grab another component for each of these inputs."
            print warning
            ghenv.Component.AddRuntimeMessage(w, warning)
            isZone = False
        #Check the inputs and make sure that we have everything that is needed to run the component.
        if checkSameType == True:
            checkData, windowNames, windowSrfsInit, depths, alignedDataTree, numOfShd, blindsMaterial, schedule = checkTheInputs(zoneNames, windowNames, windowSrfs, isZone)
        else:
            checkData = False
        #Generate the shades.
        if checkData == True:
            for window in windowSrfsInit:
                shadeBreps, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, EPinteriorOrExter, assignEPCheckInit = makeShade(window, depths, numOfShd, _distBetween, lb_visualization)
                shadings.append(shadeBreps)
                EPSlatOrientList.append(EPSlatOrient)
                depthList.append(depth)
                shadingHeightList.append(shadingHeight)
                EPshdAngleList.append(EPshdAngle)
                distToGlassList.append(distToGlass)
                EPinteriorOrExterList.append(EPinteriorOrExter)
                if assignEPCheckInit == False: assignEPCheck = False
            #Create the EnergyPlus blinds material and assign it to the windows with shades.
            if assignEPCheck == True:
                for count, windowObj in enumerate(windowObjects):
                    windowObj.blindsMaterial = createEPBlindMat(blindsMaterial, EPSlatOrientList[count], depthList[count], shadingHeightList[count], EPshdAngleList[count], distToGlassList[count], windowObj.name)
                    windowObj.shadingControl = createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExterList[count], windowObj.name)
                    windowObj.shadingSchName = schedule
                ModifiedHBZones = hb_hive.addToHoneybeeHive(HBZoneObjects, ghenv.Component.InstanceGuid.ToString() + str(uuid.uuid4()))
            else:
                ModifiedHBZones = []
            return checkData, windowSrfsInit, shadings, alignedDataTree, ModifiedHBZones
        else:
            return False, [], [], [], []
    else:
        print "You should first let both Ladybug and Honeybee fly..."
        ghenv.Component.AddRuntimeMessage(w, "You should first let both Ladybug and Honeybee fly...")
        return False, [], [], [], []

#Run the component and unpack the outputs.
if _runIt == True:
    checkData, windowSrfsInit, shadings, alignedDataTree, HBObjWShades = main()
    #Unpack the data trees.
    if checkData == True:
        windowBreps = DataTree[Object]()
        shadeBreps = DataTree[Object]()
        zoneData1Tree = DataTree[Object]()
        zoneData2Tree = DataTree[Object]()
        zoneData3Tree = DataTree[Object]()
        for count, brep in enumerate(windowSrfsInit):
            windowBreps.Add(brep, GH_Path(count))
        for count, brepList in enumerate(shadings):
            for brep in brepList:
                shadeBreps.Add(brep, GH_Path(count))
        for treeCount, finalTree in enumerate(alignedDataTree):
            if treeCount == 0:
                for bCount, branch in enumerate(finalTree):
                    for twig in branch:
                        zoneData1Tree.Add(twig, GH_Path(bCount))
            elif treeCount == 1:
                for bCount, branch in enumerate(finalTree):
                    for twig in branch:
                        zoneData2Tree.Add(twig, GH_Path(bCount))
            elif treeCount == 2:
                for bCount, branch in enumerate(finalTree):
                    for twig in branch:
                        zoneData3Tree.Add(twig, GH_Path(bCount))
if assignEPCheck == True: for count, windowObj", "planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(maxZPt), rc.Geometry.Vector3d.ZAxis)) # sort the planes sortedPlanes = sorted(planes, key=lambda a: a.Origin.Z) elif", "each glazed surface. _distBetween: An alternate option to _numOfShds where the input here", "with the generated shades. zoneData3_: Optional zone data for the HBZones_ that will", "An optional schedule to raise and lower the blinds. If no value is", "one shade per window.\" else: numOfShd = _numOfShds #Check the depths. checkData2 =", "it to split up the glazing by cardinal direction and assign different distToGlass_", "that will be filled up zoneNames = [] windowNames = [] windowSrfs =", "to generate minXYPt = bbox.Corner(True, True, True) minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z) maxXYPt", "inputs that are all HBZones or all HBSrfs but not both. For now,", "a zone or window. If there's no header, the data cannot be coordinated", "we have everything that we need to generate the shades. Set defaults on", "to generate minZPt = bbox.Corner(False, True, True) minZPt = rc.Geometry.Point3d(minZPt.X, minZPt.Y, minZPt.Z) maxZPt", "0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) #Define a bounding box for use in calculating the number", "print \"No blinds material has been connected. A material will be used with", "Y, z = minZPt.X, minZPt.Y, minZPt.Z zHeights = rs.frange(minZPt.Z + shadingRemainder, maxZPt.Z +", "If multiple number of shade inputs are given, use it to split up", "def getValueBasedOnOrientation(valueList): angles = [] if valueList == None or len(valueList) == 0:", "0.25 mm thickness, 221 W/mK conductivity. 
blindsSchedule_: An optional schedule to raise and", "header in enumerate(branch): if header[2].split(' for ')[-1] == zoneName.upper(): alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount]) zoneData = True", "+ blindsMaterial[0] + \"_\" + name + ', !- Name\\n' + \\ '\\t'", "should sample the test surface # and test the normal direction for more", "default is set to \"0\" for no rotation. If you have vertical shades,", "alternate option to _numOfShds where the input here is the distance in Rhino", "normal of the surface in the center # note2developer: there might be cases", "HBZones or all HBSrfs but not both. For now, just grab another component", "getAngle2North(normalVectorPerp) < 180: normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) else: normalVectorPerp.Rotate((shdAngle*-0.01745329), planeVec) else: shdAngle = 0 #Make", "False warning = \"Note that E+ does not like distances between shades that", "and beam gain synced with that of the zones and windows. For this,", "distBetween == 0: sortedPlanes = [] elif horOrVertical == True: # Horizontal #Define", "different directions. distBtwn = getValueBasedOnOrientation(distBtwn) # If multiple horizontal or vertical inputs are", "warning ghenv.Component.AddRuntimeMessage(w, warning) else: for childSrf in object.childSrfs: windowObjects.append(childSrf) windowNames.append([childSrf.name]) windowSrfs.append([childSrf.geometry]) #Make sure", "= distBetween shadingRemainder = (((glzHeight/distBetween) - math.floor(glzHeight/distBetween))*distBetween) if shadingRemainder == 0: shadingRemainder =", "EPshdAngle > 180 or EPshdAngle < 0: warning = \"The input shdAngle_ value", "_ The first is that it can be used to assign blind objects", "the zoneData1_, which align with the branches for each window above. 
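The cardinal-direction list inputs described above (item 0 = north, item 1 = west, item 2 = south, item 3 = east) can be sketched in plain Python without Rhino. This stand-alone helper is an assumption on my part that mirrors the even-sector binning of the component's getValueBasedOnOrientation; the function name is illustrative.

```python
def value_by_orientation(value_list, angle_to_north):
    """Pick a value from value_list based on a surface's angle to north (degrees).

    Sketch of the component's behavior (names/binning assumed): n values split
    the compass into n even sectors centered on 0, 360/n, 2*360/n, ... degrees
    counterclockwise, so for 4 values: item 0 = north, 1 = west, 2 = south,
    3 = east.
    """
    if not value_list:
        return None
    if len(value_list) == 1:
        return value_list[0]
    n = len(value_list)
    sector = 360.0 / n
    # Shift by half a sector so each sector is centered on its direction.
    index = int(((angle_to_north + sector / 2.0) % 360.0) // sector)
    return value_list[index]

depths = [0.5, 0.3, 1.0, 0.3]  # north, west, south, east
print(value_by_orientation(depths, 0))    # a north-facing window
print(value_by_orientation(depths, 185))  # a roughly south-facing window
```

With one value in the list, every orientation gets that value, matching the single-input case of the component.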
def checkTheInputs(zoneNames, windowNames, windowSrfs, isZone):
    #Check if the user has hooked up a distBetween or numOfShds.
    if _distBetween == [] and _numOfShds == []:
        numOfShd = [1]
        print "No value is connected for number of shades.  The component will generate one shade per window."
    else:
        numOfShd = _numOfShds
    
    #Check the depths.
    checkData2 = True
    if _depth == []:
        checkData2 = False
        print "You must provide a depth for the shades."
        ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, "You must provide a depth for the shades.")
    
    #Check if there is a blinds material connected and, if not, set a default.
    checkData5 = True
    if blindsMaterial_ == None:
        print "No blinds material has been connected. A material will be used with 0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK conductivity."
        blindsMaterial = ['DEFAULTBLINDSMATERIAL', 0.65, 0, 0.9, 0.00025, 221]
    else:
        try: blindsMaterial = deconstructBlindMaterial(blindsMaterial_)
        except:
            checkData5 = False
            warning = 'Blinds material is not a valid blinds material.'
            print warning
            ghenv.Component.AddRuntimeMessage(w, warning)
    
    #Check the schedule.
    checkData4 = True
    if blindsSchedule_ == None:
        schedule = 'ALWAYSON'
        print "No blinds schedule has been connected."
    else:
        schedule = blindsSchedule_.upper()
        if schedule.lower().endswith(".csv"):
            #Check if the csv file exists.
            if not os.path.isfile(schedule):
                msg = "Cannot find the schedule file: " + schedule
                print msg
                ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg)
                checkData4 = False
        elif schedule not in HBScheduleList:
            msg = "Cannot find " + schedule + " in Honeybee schedule library."
            print msg
            ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg)
            checkData4 = False
    
    #Create a Python list from the input data trees.
    def makePyTree(zoneData):
        dataPyList = []
        for i in range(zoneData.BranchCount):
            branchList = zoneData.Branch(i)
            dataVal = []
            for item in branchList: dataVal.append(item)
            dataPyList.append(dataVal)
        return dataPyList
    allData = []
    allData.append(makePyTree(zoneData1_))
    allData.append(makePyTree(zoneData2_))
    allData.append(makePyTree(zoneData3_))
    
    #Test to see if the data lists have a Ladybug/Honeybee header on them.
    checkData3 = True
    checkBranches = []
    allHeaders = []
    allNumbers = []
    for branch in allData:
        checkHeader = []
        dataHeaders = []
        dataNumbers = []
        for list in branch:
            if str(list[0]) == "key:location/dataType/units/frequency/startsAt/endsAt":
                checkHeader.append(1)
                dataHeaders.append(list[:7])
                dataNumbers.append(list[7:])
        allHeaders.append(dataHeaders)
        allNumbers.append(dataNumbers)
        if sum(checkHeader) == len(branch): pass
        else:
            checkData3 = False
            warning = "Not all of the connected zoneData has a Ladybug/Honeybee header on it.  This header is needed to align the data with a zone or window.  If there's no header, the data cannot be coordinated with this component."
            print warning
            ghenv.Component.AddRuntimeMessage(w, warning)
    
    #Align all of the lists to each window.
    windowNamesFinal = []
    windowBrepsFinal = []
    alignedDataTree = []
    for item in allData: alignedDataTree.append([])
    for zoneCount, windowList in enumerate(windowSrfs):
        if isZone == True:
            zoneName = zoneNames[zoneCount]
        for windowCount, window in enumerate(windowList):
            windowBrepsFinal.append(window)
            windowName = windowNames[zoneCount][windowCount]
            windowNamesFinal.append(windowName)
            for inputDataTreeCount, branch in enumerate(allHeaders):
                #Test to see if the data is for the zone level.
                zoneData = False
                if isZone == True:
                    for listCount, header in enumerate(branch):
                        if header[2].split(' for ')[-1] == zoneName.upper():
                            alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount])
                            zoneData = True
                #Test to see if the data is for the window level.
                srfData = False
                if zoneData == False:
                    for listCount, header in enumerate(branch):
                        try: winNm = header[2].split(' for ')[-1].split(': ')[0]
                        except: winNm = header[2].split(' for ')[-1]
                        if str(winNm) == str(windowName.upper()):
                            alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount])
                            srfData = True
                if zoneData == False and srfData == False and alignedDataTree != [[], [], []]:
                    warning = "A window could not be matched with the zoneData input to this component."
                    print warning
                    ghenv.Component.AddRuntimeMessage(w, warning)
    
    if checkData2 == True and checkData3 == True and checkData4 == True and checkData5 == True:
        checkData = True
    else:
        checkData = False
    
    return checkData, windowNamesFinal, windowBrepsFinal, _depth, alignedDataTree, numOfShd, blindsMaterial, schedule
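The header-matching step above keys each data list to a zone or window by splitting the header's data-type field on ' for '. A Rhino-free sketch of that matching (the sample header strings here are illustrative, not from a real simulation):

```python
def match_header_to_name(header_datatype, name):
    """Return True if a Ladybug/Honeybee header data-type field refers to the
    given zone or window name.  Sketch of the component's alignment test:
    the text after the last ' for ' is compared, upper-cased, against the
    name, with any trailing ': ...' detail stripped for window-level data."""
    tail = header_datatype.split(' for ')[-1]
    win_nm = tail.split(': ')[0]  # window headers may carry extra detail
    return win_nm == name.upper()

# Hypothetical header data-type fields for a zone and a window:
print(match_header_to_name('Cooling Load for ZONE_1', 'zone_1'))
print(match_header_to_name('Beam Gain for WIN_3: detail', 'win_3'))
print(match_header_to_name('Cooling Load for ZONE_1', 'zone_2'))
```

Because the comparison is against the upper-cased name, the matching is case-insensitive with respect to the Grasshopper-side zone and window names.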
def analyzeGlz(glzSrf, distBetween, numOfShds, horOrVertical, lb_visualization, normalVector):
    # find the bounding box
    bbox = glzSrf.GetBoundingBox(True)
    if horOrVertical == None:
        horOrVertical = True
    if numOfShds == 0 or distBetween == 0:
        sortedPlanes = []
    
    elif horOrVertical == True:
        # Horizontal
        #Define a bounding box for use in calculating the number of shades to generate
        minZPt = bbox.Corner(False, True, True)
        minZPt = rc.Geometry.Point3d(minZPt.X, minZPt.Y, minZPt.Z)
        maxZPt = bbox.Corner(False, True, False)
        maxZPt = rc.Geometry.Point3d(maxZPt.X, maxZPt.Y, maxZPt.Z - sc.doc.ModelAbsoluteTolerance)
        centerPt = bbox.Center
        #glazing heights
        glzHeight = minZPt.DistanceTo(maxZPt)
        
        # find number of shadings
        try:
            numOfShd = int(numOfShds)
            shadingHeight = glzHeight/numOfShd
            shadingRemainder = shadingHeight
        except:
            shadingHeight = distBetween
            shadingRemainder = (((glzHeight/distBetween) - math.floor(glzHeight/distBetween))*distBetween)
            if shadingRemainder == 0:
                shadingRemainder = shadingHeight
        
        # find shading base planes
        planeOrigins = []
        planes = []
        X, Y, z = minZPt.X, minZPt.Y, minZPt.Z
        zHeights = rs.frange(minZPt.Z + shadingRemainder, maxZPt.Z + 0.5*sc.doc.ModelAbsoluteTolerance, shadingHeight)
        try:
            for Z in zHeights:
                planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(X, Y, Z), rc.Geometry.Vector3d.ZAxis))
        except:
            # single shading
            planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(maxZPt), rc.Geometry.Vector3d.ZAxis))
        # sort the planes
        sortedPlanes = sorted(planes, key=lambda a: a.Origin.Z)
    
    elif horOrVertical == False:
        # Vertical
        # Define a vector to be used to generate the planes
        planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)
        planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis)
        
        #Define a bounding box for use in calculating the number of shades to generate
        minXYPt = bbox.Corner(True, True, True)
        minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z)
        maxXYPt = bbox.Corner(True, False, True)
        maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)
        centerPt = bbox.Center
        
        #Test to be sure that the values are parallel to the plane vector.
        testVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z))
        if testVec.IsParallelTo(planeVec) == 0:
            minXYPt = bbox.Corner(False, True, True)
            minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z)
            maxXYPt = bbox.Corner(False, False, True)
            maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)
        
        #Adjust the points to ensure the creation of the correct number of shades starting from the northernmost side of the window.
        tolVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z))
        tolVec.Unitize()
        tolVec = rc.Geometry.Vector3d.Multiply(sc.doc.ModelAbsoluteTolerance*2, tolVec)
        if tolVec.X > 0 and tolVec.Y > 0:
            tolVec = rc.Geometry.Vector3d.Multiply(1, tolVec)
            norOrient = False
        minXYPt = rc.Geometry.Point3d.Subtract(minXYPt, tolVec)
        
        #glazing distance
        glzHeight = minXYPt.DistanceTo(maxXYPt)
        
        # find number of shadings
        try:
            numOfShd = int(numOfShds)
            shadingHeight = glzHeight/numOfShd
            shadingRemainder = shadingHeight
        except:
            shadingHeight = distBetween
            shadingRemainder = (((glzHeight/distBetween) - math.floor(glzHeight/distBetween))*distBetween)
            if shadingRemainder == 0:
                shadingRemainder = shadingHeight
        
        # find shading base planes
        planes = []
        pointCurve = rc.Geometry.Curve.CreateControlPointCurve([maxXYPt, minXYPt])
        divisionParams = pointCurve.DivideByLength(shadingHeight, True)
        divisionPoints = []
        for param in divisionParams:
            divisionPoints.append(pointCurve.PointAt(param))
        planePoints = divisionPoints
        try:
            for point in planePoints:
                planes.append(rc.Geometry.Plane(point, planeVec))
        except:
            # single shading
            planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(minXYPt), planeVec))
        sortedPlanes = planes
    
    return sortedPlanes, shadingHeight


def makeShade(_glzSrf, depth, numShds, distBtwn):
    lb_visualization = sc.sticky["ladybug_ResultVisualization"]()
    
    # find the normal of the surface in the center
    # note2developer: there might be cases that the center point is not located on the surface; if so, I should sample the test surface
    # and test the normal direction for more points
    baseSrfCenPt = rc.Geometry.AreaMassProperties.Compute(_glzSrf).Centroid
    found, centerPtU, centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt)
    if found:
        normalVector = _glzSrf.Faces[0].NormalAt(centerPtU, centerPtV)
        #return rc.Geometry.Plane(baseSrfCenPt,normalVector)
    else:
        print "Couldn't find the normal of the shading surface." + \
              "\nRebuild the surface and try again!"
        return -1
    
    shadingSurfaces =[]
    
    # Define a function that can get the angle to North of any surface.
    def getAngle2North(normalVector):
        if north_ != None and north_.IsValid():
            northVector = north_
        else: northVector = rc.Geometry.Vector3d.YAxis
        angle = rc.Geometry.Vector3d.VectorAngle(northVector, normalVector, rc.Geometry.Plane.WorldXY)
        finalAngle = math.degrees(angle)
        return finalAngle
    
    # Define a function that can split up a list of values and assign it to different cardinal directions.
    def getValueBasedOnOrientation(valueList):
        angles = []
        if valueList == None or len(valueList) == 0:
            value = None
        if len(valueList) == 1:
            value = valueList[0]
        elif len(valueList) > 1:
            initAngles = rs.frange(0, 360, 360/len(valueList))
            for an in initAngles: angles.append(an-(360/(2*len(valueList))))
            angles.append(360)
            for angleCount in range(len(angles)-1):
                if angles[angleCount] <= (getAngle2North(normalVector))%360 <= angles[angleCount +1]:
                    targetValue = valueList[angleCount%len(valueList)]
            value = targetValue
        return value
    
    # If multiple depths are given, use it to split up the glazing by cardinal direction and assign different depths to different directions.
    depth = getValueBasedOnOrientation(depth)
    # If multiple numbers of shades are given, use it to split up the glazing by cardinal direction and assign different numbers of shades to different directions.
    numShds = getValueBasedOnOrientation(numShds)
    # If multiple distances between shades are given, use it to split up the glazing by cardinal direction and assign different distBtwn to different directions.
    distBtwn = getValueBasedOnOrientation(distBtwn)
    # If multiple horizontal or vertical inputs are given, use it to split up the glazing by cardinal direction and assign different horizontal or vertical to different directions.
    horOrVertical = getValueBasedOnOrientation(horOrVertical_)
    # If multiple shdAngle_ inputs are given, use it to split up the glazing by cardinal direction and assign different shdAngle_ to different directions.
    shdAngle = getValueBasedOnOrientation(shdAngle_)
    # If multiple interiorOrExter_ inputs are given, use it to split up the glazing by cardinal direction and assign different interiorOrExterior_ to different directions.
    interiorOrExter = getValueBasedOnOrientation(interiorOrExter_)
    # If multiple distToGlass_ inputs are given, use it to split up the glazing by cardinal direction and assign different distToGlass_ values to different directions.
    distToGlass = getValueBasedOnOrientation(distToGlass_)
    if distToGlass == None: distToGlass = 0
    
    # Make a perpendicular version of the normal vector and find the angle off of it.
    normalVectorPerp = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)
    angleFromNorm = math.degrees(rc.Geometry.Vector3d.VectorAngle(normalVectorPerp, normalVector))
    if normalVector.Z < 0: angleFromNorm = angleFromNorm*(-1)
    
    #If the user has set the shades to generate on the interior, flip the normal vector.
    if interiorOrExter == True:
        normalVectorPerp.Reverse()
    else:
        interiorOrExter = False
    
    #If a shdAngle is provided, use it to rotate the planes by that angle
    if shdAngle != None:
        if horOrVertical == True or horOrVertical == None:
            horOrVertical = True
            planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)
            planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis)
            normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec)
        elif horOrVertical == False:
            planeVec = rc.Geometry.Vector3d.ZAxis
            if getAngle2North(normalVectorPerp) < 180:
                normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec)
            else: normalVectorPerp.Rotate((shdAngle*-0.01745329), planeVec)
    else: shdAngle = 0
    
    #Make EP versions of some of the outputs.
    EPshdAngleInint = angleFromNorm+shdAngle
    if EPshdAngleInint >= 0: EPshdAngle = 90 - EPshdAngleInint
    else: EPshdAngle = 90 + (EPshdAngleInint)*-1
    if EPshdAngle > 180 or EPshdAngle < 0:
        warning = "The input shdAngle_ value will cause EnergyPlus to crash."
        print warning
        ghenv.Component.AddRuntimeMessage(w, warning)
    if horOrVertical == True: EPSlatOrient = 'Horizontal'
    else: EPSlatOrient = 'Vertical'
    if interiorOrExter == True: EPinteriorOrExter = 'InteriorBlind'
    else: EPinteriorOrExter = 'ExteriorBlind'
    
    # get the shading planes and the height between shades
    planes, shadingHeight = analyzeGlz(_glzSrf, distBtwn, numShds, horOrVertical, lb_visualization, normalVector)
    
    # find the intersection crvs as the base for shadings
    intCrvs =[]
    for plane in planes:
        try: intCrvs.append(rc.Geometry.Intersect.Intersection.BrepPlane(_glzSrf, plane, sc.doc.ModelAbsoluteTolerance)[1][0])
        except: print "One intersection failed."
    
    #Check the depth and shadingHeight to see if E+ will crash.
    assignEPCheckInit = True
    if depth > 1:
        assignEPCheckInit = False
        warning = "Note that E+ does not like shading depths greater than 1.  Shades will not be generated.  shadeBreps will still be produced and you can account for these shades using a 'Honeybee_EP Context Surfaces' component."
        print warning
        ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning)
    if shadingHeight > 1:
        assignEPCheckInit = False
        warning = "Note that E+ does not like distances between shades that are greater than 1.  Shades will not be generated.  shadeBreps will still be produced and you can account for these shades using a 'Honeybee_EP Context Surfaces' component."
        print warning
        ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning)
    
    #Calculate the EP distance to glass.
    EPDistToGlass = distToGlass + (depth)*(0.5)*math.cos(math.radians(EPshdAngle))
    if EPDistToGlass < 0.01: EPDistToGlass = 0.01
    elif EPDistToGlass > 1:
        assignEPCheckInit = False
        warning = "The input distToGlass_ value is so large that it will cause EnergyPlus to crash."
        print warning
        ghenv.Component.AddRuntimeMessage(w, warning)
    
    #Generate the shade curves based on the planes and extrusion vectors
    if intCrvs !=[]:
        for c in intCrvs:
            try:
                shdSrf = rc.Geometry.Surface.CreateExtrusion(c, float(depth) * normalVectorPerp).ToBrep()
                shadingSurfaces.append(shdSrf)
            except: pass
    
    #If the user has set a distToGlass_, move the shades away from the glass.
    if distToGlass != None:
        transVec = normalVectorPerp
        transVec.Unitize()
        finalTransVec = rc.Geometry.Vector3d.Multiply(distToGlass, transVec)
        blindsTransform = rc.Geometry.Transform.Translation(finalTransVec)
        for shdSrf in shadingSurfaces:
            shdSrf.Transform(blindsTransform)
    
    return shadingSurfaces, EPSlatOrient, depth, shadingHeight, EPshdAngle, EPDistToGlass, EPinteriorOrExter, assignEPCheckInit
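The spacing arithmetic in analyzeGlz can be exercised without Rhino. This sketch (function name is mine) reproduces the shade-count versus distance-between logic: an explicit count divides the glazing height evenly, while a fixed distance pushes the leftover remainder below the first shade.

```python
import math

def shade_spacing(glz_height, num_of_shds=None, dist_between=None):
    """Return (spacing, first_offset) for a window of height glz_height.

    Sketch of the analyzeGlz arithmetic: with num_of_shds the glazing height
    is divided evenly; with dist_between the spacing is fixed and the
    remainder of the division becomes the offset of the first shade.
    """
    if num_of_shds:
        spacing = glz_height / int(num_of_shds)
        remainder = spacing
    else:
        spacing = dist_between
        remainder = (glz_height / dist_between
                     - math.floor(glz_height / dist_between)) * dist_between
        if remainder == 0:
            remainder = spacing
    return spacing, remainder

print(shade_spacing(2.0, num_of_shds=4))     # (0.5, 0.5)
print(shade_spacing(2.5, dist_between=1.0))  # (1.0, 0.5)
```

The zero-remainder fallback matches the component: when the height is an exact multiple of the distance, the first shade sits one full spacing above the sill rather than at the sill itself.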
def createEPBlindMat(blindsMaterial, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, name):
    EPBlindMat = 'WindowMaterial:Blind,\n' + \
        '\t' + blindsMaterial[0] + "_" + name + ', !- Name\n' + \
        '\t' + EPSlatOrient + ', !- Slat Orientation\n' + \
        '\t' + str(depth) + ', !- Slat Width {m}\n' + \
        '\t' + str(shadingHeight) +', !- Slat Separation {m}\n' + \
        '\t' + str(blindsMaterial[4]) + ', !- Slat Thickness {m}\n' + \
        '\t' + str(EPshdAngle) + ', !- Slat Angle {deg}\n' + \
        '\t' + str(blindsMaterial[5]) + ', !- Slat Conductivity {W/m-K}\n' + \
        '\t' + str(blindsMaterial[2]) + ', !- Slat Beam Solar Transmittance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Front Side Slat Beam Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Beam Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[2]) + ', !- Slat Diffuse Solar Transmittance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Front Side Slat Diffuse Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Diffuse Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[2]) + ', !- Slat Beam Visible Transmittance\n' + \
        '\t' + ', !- Front Side Slat Beam Visible Reflectance\n' + \
        '\t' + ', !- Back Side Slat Beam Visible Reflectance\n' + \
        '\t' + ', !- Slat Diffuse Visible Transmittance\n' + \
        '\t' + ', !- Front Side Slat Diffuse Visible Reflectance\n' + \
        '\t' + ', !- Back Side Slat Diffuse Visible Reflectance\n' + \
        '\t' + ', !- Slat Infrared Hemispherical Transmittance\n' + \
        '\t' + str(blindsMaterial[3]) + ', !- Front Side Slat Infrared Hemispherical Emissivity\n' + \
        '\t' + str(blindsMaterial[3]) + ', !- Back Side Slat Infrared Hemispherical Emissivity\n' + \
        '\t' + str(distToGlass) + ', !- Blind to Glass Distance {m}\n' + \
        '\t' + '0.5, !- Blind Top Opening Multiplier\n' + \
        '\t' + '0.5, !- Blind Bottom Opening Multiplier\n' + \
        '\t' + '0.5, !- Blind Left Side Opening Multiplier\n' + \
        '\t' + '0.5, !- Blind Right Side Opening Multiplier\n' + \
        '\t' + ', !- Minimum Slat Angle {deg}\n' + \
        '\t' + '; !- Maximum Slat Angle {deg}\n'
    return EPBlindMat


def createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExter, name):
    if schedule == 'ALWAYSON':
        schedCntrlType = 'AlwaysOn'
        schedCntrl = 'No'
        schedName = ''
    else:
        schedName = schedule
        schedCntrlType = 'OnIfScheduleAllows'
        schedCntrl = 'Yes'
    
    EPBlindControl = 'WindowProperty:ShadingControl,\n' + \
        '\t' + 'BlindCntrlFor_' + name + ', !- Name\n' + \
        '\t' + EPinteriorOrExter + ', !- Shading Type\n' + \
        '\t' + ', !- Construction with Shading Name\n' + \
        '\t' + schedCntrlType + ', !- Shading Control Type\n' + \
        '\t' + schedName + ', !- Schedule Name\n' + \
        '\t' + ', !- Setpoint {W/m2, W or deg C}\n' + \
        '\t' + schedCntrl + ', !- Shading Control Is Scheduled\n' + \
        '\t' + 'No, !- Glare Control Is Active\n' + \
        '\t' + blindsMaterial[0] + "_" + name + ', !- Shading Device Material Name\n' + \
        '\t' + 'FixedSlatAngle, !- Type of Slat Angle Control for Blinds\n' + \
        '\t' + '; !- Slat Angle Schedule Name\n'
    return EPBlindControl


def main(depth, numShds, distBtwn):
    #Import the classes.
    lb_visualization = sc.sticky["ladybug_ResultVisualization"]()
    hb_hive = sc.sticky["honeybee_Hive"]()
    
    #Call the objects from the hive.
    HBZoneObjects = hb_hive.callFromHoneybeeHive(_HBObjects)
    
    #Find out what the object type is and make sure that we can run it through this component's functions.
    isZoneList = []
    for object in HBZoneObjects:
        if object.objectType == "HBZone": isZoneList.append(1)
        else: isZoneList.append(0)
    if sum(isZoneList) == len(_HBObjects): isZone = True
    elif sum(isZoneList) == 0:
        isZone = False
        warning = "Note that, when using this component for individual surfaces, you should make sure that the normal of the surface is facing the outdoors in order to be sure that your shades are previewing correctly."
        print warning
        ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning)
    else:
        checkSameType = False
        warning = "This component currently only supports inputs that are all HBZones or all HBSrfs but not both.  For now, just grab another component for each of these inputs."
        print warning
        ghenv.Component.AddRuntimeMessage(w, warning)
        isZone = False
    
    #Make the lists that will be filled up.
    zoneNames = []
    windowNames = []
    windowSrfs = []
    windowObjects = []
    assignEPCheck = True
    
    if isZone == True:
        for object in HBZoneObjects:
            zoneNames.append(object.name)
            winBreps = []
            winNames = []
            for srf in object.surfaces:
                if srf.hasChild:
                    for childSrf in srf.childSrfs:
                        windowObjects.append(childSrf)
                        winNames.append(childSrf.name)
                        winBreps.append(childSrf.geometry)
            windowNames.append(winNames)
            windowSrfs.append(winBreps)
    else:
        for object in HBZoneObjects:
            if not hasattr(object, 'type'):
                # find the type based on the normal angle
                object.type = object.getTypeByNormalAngle()
            if object.BC.lower() != "outdoors":
                assignEPCheck = False
                warning = "The boundary condition of the input object must be outdoors.  E+ cannot create shades for indoor windows."
                print warning
                ghenv.Component.AddRuntimeMessage(w, warning)
            elif object.isPlanar == False:
                assignEPCheck = False
                warning = "The input object must be planar.  With the way that we mesh curved surfaces for E+, the program would just fill the whole window out with blinds."
                print warning
                ghenv.Component.AddRuntimeMessage(w, warning)
            else:
                for childSrf in object.childSrfs:
                    windowObjects.append(childSrf)
                    windowNames.append([childSrf.name])
                    windowSrfs.append([childSrf.geometry])
    
    #Check the inputs and make sure that we have everything that we need to generate the shades.  Set defaults on things that are not connected.
    checkData, windowNamesFinal, windowSrfsInit, depths, alignedDataTree, numOfShds, blindsMaterial, schedule = checkTheInputs(zoneNames, windowNames, windowSrfs, isZone)
    
    #Generate the shades.
    if checkData == True:
        shadings = []
        EPSlatOrientList = []
        depthList = []
        shadingHeightList = []
        EPshdAngleList = []
        distToGlassList = []
        EPinteriorOrExterList = []
        for window in windowSrfsInit:
            shadeBreps, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, EPinteriorOrExter, assignEPCheckInit = makeShade(window, depths, numOfShds, _distBetween)
            shadings.append(shadeBreps)
            EPSlatOrientList.append(EPSlatOrient)
            depthList.append(depth)
            shadingHeightList.append(shadingHeight)
            EPshdAngleList.append(EPshdAngle)
            distToGlassList.append(distToGlass)
            EPinteriorOrExterList.append(EPinteriorOrExter)
            if assignEPCheckInit == False: assignEPCheck = False
        
        #Create the EP blind material and assign it to the windows with shades.
        if assignEPCheck == True:
            for count, windowObj in enumerate(windowObjects):
                windowObj.blindsMaterial = createEPBlindMat(blindsMaterial, EPSlatOrientList[count], depthList[count], shadingHeightList[count], EPshdAngleList[count], distToGlassList[count], windowObj.name)
                windowObj.shadingControl = createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExterList[count], windowObj.name)
                windowObj.shadingControlName = 'BlindCntrlFor_' + windowObj.name
                windowObj.shadingSchName = schedule
        
        #Add the modified zones back to the hive.
        ModifiedHBZones = hb_hive.addToHoneybeeHive(HBZoneObjects, ghenv.Component.InstanceGuid.ToString() + str(uuid.uuid4()))
        
        return checkData, windowSrfsInit, shadings, alignedDataTree, ModifiedHBZones
    else:
        return False, [], [], [], []


#Run the main functions.
checkData = False
if _HBObjects != [] and sc.sticky.has_key('honeybee_release') == True and sc.sticky.has_key('ladybug_release') == True:
    checkData, windowSrfsInit, shadings, alignedDataTree, ModifiedHBZones = main(_depth, _numOfShds, _distBetween)
else:
    print "You should first let both Honeybee and Ladybug fly..."
    ghenv.Component.AddRuntimeMessage(w, "You should first let both Honeybee and Ladybug fly...")

#Unpack the data trees of window breps, shade breps, and aligned zone data.
if checkData == True and _runIt == True:
    windowBreps = DataTree[Object]()
    shadeBreps = DataTree[Object]()
    zoneData1Tree = DataTree[Object]()
    zoneData2Tree = DataTree[Object]()
    zoneData3Tree = DataTree[Object]()
    HBZones = ModifiedHBZones
    for count, brep in enumerate(windowSrfsInit):
        windowBreps.Add(brep, GH_Path(count))
    for count, brepList in enumerate(shadings):
        for brep in brepList: shadeBreps.Add(brep, GH_Path(count))
    for treeCount, finalTree in enumerate(alignedDataTree):
        if treeCount == 0:
            for bCount, branch in enumerate(finalTree):
                for twig in branch: zoneData1Tree.Add(twig, GH_Path(bCount))
        elif treeCount == 1:
            for bCount, branch in enumerate(finalTree):
                for twig in branch: zoneData2Tree.Add(twig, GH_Path(bCount))
        elif treeCount == 2:
            for bCount, branch in enumerate(finalTree):
                for twig in branch: zoneData3Tree.Add(twig, GH_Path(bCount))
It will be assumed that the", "= glzSrf.GetBoundingBox(True) if horOrVertical == None: horOrVertical = True if numOfShds == None", "bounding box for use in calculating the number of shades to generate minXYPt", "from point to point, then I should sample the test surface # and", "horOrVertical == True: # Horizontal #Define a bounding box for use in calculating", "+ ', !- Minimum Slat Angle {deg}\\n' + \\ '\\t' + '180; !-", "len(valueList) == 0: value = None if len(valueList) == 1: value = valueList[0]", ">= 0: EPshdAngle = 90 - EPshdAngleInint else: EPshdAngle = 90 + (EPshdAngleInint)*-1", "in range(len(angles)-1): if angles[angleCount] <= (getAngle2North(normalVector))%360 <= angles[angleCount +1]: targetValue = valueList[angleCount%len(valueList)] value", "'\\t' + schedCntrlType + ', !- Shading Control Type\\n' + \\ '\\t' +", "True) maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z) centerPt = bbox.Center #Test to be sure", "are always drawn\" else: schedule= blindsSchedule_.upper() if schedule!=None and not schedule.lower().endswith(\".csv\") and schedule", "E+ does not like distances between shades that are greater than 1. HBObjWShades", "Lists of vectors to be shaded can also be input and shades can", "that represents the degrees off from the y-axis to make North. The default", "input lists of depths, which will assign different depths based on cardinal direction.", "branch of a grasshopper data tree. Alternatively, they can be plugged into an", "and, if not, set a default. checkData4 = True if blindsSchedule_ == None:", "to assign different shade angles to different cardinal directions. interiorOrExter_: Set to \"True\"", "than summer sun. If you have horizontal shades, use this input to angle", "with the generated shades. Returns: readMe!: ... ---------------: ... HBZones: The HBZones with", "here, the blinds will assume the \"ALWAYS ON\" shcedule. north_: Input a vector", "a blinds schedule connected and, if not, set a default. 
checkData4 = True", "distToGlassList[count], windowObj.name) windowObj.shadingControl = createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExterList[count], windowObj.name) windowObj.shadingControlName = 'BlindCntrlFor_' + windowObj.name", "enumerate(alignedDataTree): if treeCount == 0: for bCount, branch in enumerate(finalTree): for twig in", "second way to use the component is to create test shade areas for", "shading surface.\" + \\ \"\\nRebuild the surface and try again!\" return -1 shadingSurfaces", "[] and _numOfShds == []: numOfShd = [1] print \"No value is connected", "find shading base planes planeOrigins = [] planes = [] X, Y, z", "getValueBasedOnOrientation(interiorOrExter_) #If multiple distToGlass_ inputs are given, use it to split up the", "split up the glazing by cardinal direction and assign different distToGlass_ to different", "= True checkBranches = [] allHeaders = [] allNumbers = [] for branch", "inputs are given, use it to split up the glazing by cardinal direction", "the glazing by cardinal direction and assign different shdAngle_ to different directions. shdAngle", "> 1: assignEPCheckInit = False warning = \"Note that E+ does not like", "\"HBZone\": isZoneList.append(1) zoneNames.append(object.name) winBreps = [] winNames = [] for srf in object.surfaces:", "in object.surfaces: if srf.hasChild: if srf.BC == 'OUTDOORS' or srf.BC == 'Outdoors': if", "a certain value in degrees. If applied to windows facing East or West,", "of the zoneData1_, which align with the branches for each window above. 
zoneData2Tree:", "will not be assigned to this window.\" else: print \"One surface with a", "= sc.sticky[\"ladybug_ResultVisualization\"]() # find the normal of the surface in the center #", "is changing from point to point, then I should sample the test surface", "Reflectance\\n' + \\ '\\t' + ', !- Slat Diffuse Visible Transmittance\\n' + \\", "of shade inputs are given, use it to split up the glazing by", "data.\" if checkData2 == True and checkData3 == True and checkData4 == True", "for c in intCrvs: try: shdSrf = rc.Geometry.Surface.CreateExtrusion(c, float(depth) * normalVectorPerp).ToBrep() shadingSurfaces.append(shdSrf) except:", "to generate on the interior, flip the normal vector. if interiorOrExter == True:", "we can run it through this component's functions. for object in HBZoneObjects: if", "north_ != None and north_.IsValid(): northVector = north_ else:northVector = rc.Geometry.Vector3d.YAxis angle =", "component currently only supports inputs that are all HBZones or all HBSrfs but", "schedCntrlType = 'OnIfScheduleAllows' schedCntrl = 'Yes' EPBlindControl = 'WindowProperty:ShadingControl,\\n' + \\ '\\t' +", "!- Shading Device Material Name\\n' + \\ '\\t' + 'FixedSlatAngle, !- Type of", "different distances of shades to different directions. distBtwn = getValueBasedOnOrientation(distBtwn) # If multiple", "to the windows with shades. if assignEPCheck == True: for count, windowObj in", "as a true North direction or a number between 0 and 360 that", "connected zoneData has a Ladybug/Honeybee header on it. This header is necessary for", "= rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient = True else: tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient =", "test surface # and test the normal direction for more point baseSrfCenPt =", "gain for a shade benefit simulation with the generated shades. 
Returns: readMe!: ...", "True: checkData = True else: checkData = False return checkData, windowNamesFinal, windowBrepsFinal, _depth,", "value = targetValue return value # If multiple shading depths are given, use", "== True: #Import the classes hb_EPZone = sc.sticky[\"honeybee_EPZone\"] hb_EPSrf = sc.sticky[\"honeybee_EPSurface\"] hb_EPFenSurface =", "winter sun than summer sun. If you have horizontal shades, use this input", "assign a material of 0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm", "windowBrepsFinal, _depth, alignedDataTree, numOfShd, blindsMaterial, schedule def analyzeGlz(glzSrf, distBetween, numOfShds, horOrVertical, lb_visualization, normalVector):", "treeCount, finalTree in enumerate(alignedDataTree): if treeCount == 0: for bCount, branch in enumerate(finalTree):", "= True else: checkData = False return checkData, windowNamesFinal, windowBrepsFinal, _depth, alignedDataTree, numOfShd,", "DataTree[Object]() zoneData3Tree = DataTree[Object]() for count, brep in enumerate(windowSrfsInit): windowBreps.Add(brep, GH_Path(count)) for count,", "0 transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK conductivity. blindsSchedule_: An optional", "+ \\ '\\t' + str(blindsMaterial[1]) + ', !- Front Side Slat Diffuse Solar", "controlled via a schedule. Note that shades created this way will automatically be", "assign different distToGlass_ to different directions. distToGlass = getValueBasedOnOrientation(distToGlass_) # generate the planes", "<= angles[angleCount +1]: targetValue = valueList[angleCount%len(valueList)] value = targetValue return value # If", "it will cause EnergyPlus to crash.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) #Check the depth", "hasattr(object, \"BC\"): object.BC = 'OUTDOORS' if object.hasChild: if object.BC != 'OUTDOORS' and object.BC", "shades, use this input to angle shades downward. You can also put in", "aligned with the generated windows. 
Use this to align data like heating load,", "component.' print warning ghenv.Component.AddRuntimeMessage(w, warning) #Check if there is a blinds schedule connected", "Surfaces\" component. ---------------: ... zoneData1Tree: Data trees of the zoneData1_, which align with", "Setpoint {W/m2, W or deg C}\\n' + \\ '\\t' + schedCntrl + ',", "#compatibleHBVersion = VER 0.0.55\\nAUG_25_2014 #compatibleLBVersion = VER 0.0.58\\nAUG_20_2014 try: ghenv.Component.AdditionalHelpFromDocStrings = \"1\" except:", "been connected. A material will be used with 0.65 solar reflectance, 0 transmittance,", "zones and windows. For this, you would take imported EnergyPlus results and hook", "= object.getTypeByNormalAngle() if not hasattr(object, 'angle2North'): # find the type based on object.getAngle2North()", "rc.Geometry.Vector3d.Multiply(1, tolVec) norOrient = False if tolVec.X < 0 and tolVec.Y > 0:", "or vertical inputs are given, use it to split up the glazing by", "the blinds will assume the \"ALWAYS ON\" shcedule. north_: Input a vector to", "the data trees. if checkData == True: windowBreps = DataTree[Object]() shadeBreps = DataTree[Object]()", "Emissivity\\n' + \\ '\\t' + str(blindsMaterial[3]) + ', !- Back Side Slat Infrared", "value = valueList[0] elif len(valueList) > 1: initAngles = rs.frange(0, 360, 360/len(valueList)) for", "_numOfShds == []: numOfShd = [1] print \"No value is connected for number", "distBetwee or numOfShds. if _distBetween == [] and _numOfShds == []: numOfShd =", "function that can split up a list of values and assign it to", "for shdSrf in shadingSurfaces: shdSrf.Transform(blindsTransform) else: distToGlass = 0 #Get the EnergyPlus distance", "glazed surface. _distBetween: An alternate option to _numOfShds where the input here is", "True and checkData3 == True and checkData4 == True and checkData5 == True:", "horizontal or vertical inputs are given, use it to split up the glazing", "to angle shades downward. 
You can also put in lists of angles to", "by cardinal direction and assign different interiorOrExterior_ to different directions. interiorOrExter = getValueBasedOnOrientation(interiorOrExter_)", "to \"True\" to generate horizontal shades or \"False\" to generate vertical shades. You", "tolVec) minXYPt = rc.Geometry.Point3d.Subtract(minXYPt, tolVec) #glazing distance glzHeight = minXYPt.DistanceTo(maxXYPt) # find number", "', !- Front Side Slat Beam Solar Reflectance\\n' + \\ '\\t' + str(blindsMaterial[1])", "+ \\ '\\t' + ', !- Construction with Shading Name\\n' + \\ '\\t'", "grasshopper data tree. shadeBreps: Breps representing each shade of the window. These can", "functions. for object in HBZoneObjects: if object.objectType == \"HBZone\": isZoneList.append(1) zoneNames.append(object.name) winBreps =", "#Check if the user has hooked up a distBetwee or numOfShds. if _distBetween", "\\ '\\t' + ', !- Front Side Slat Beam Visible Reflectance\\n' + \\", "shades for Honeybee zone windows. The component has two main uses: _ The", "split up the glazing by cardinal direction and assign different shdAngle_ to different", "+ ', !- Front Side Slat Beam Visible Reflectance\\n' + \\ '\\t' +", "# check if csv file is existed if not os.path.isfile(schedule): msg = \"Cannot", "and assign different distToGlass_ to different directions. distToGlass = getValueBasedOnOrientation(distToGlass_) # generate the", "... ---------------: ... HBZones: The HBZones with the assigned shading (ready to be", "lower the blinds. 
If no value is connected here, the blinds will assume", "\\ '\\t' + str(blindsMaterial[4]) + ', !- Slat Thickness {m}\\n' + \\ '\\t'", "!= [[], [], []]: print \"A window was not matched with its respective", "if normalVector.Z < 0: angleFromNorm = angleFromNorm*(-1) #If the user has set the", "+ ', !- Back Side Slat Diffuse Visible Reflectance\\n' + \\ '\\t' +", "object.BC != 'OUTDOORS' and object.BC != 'Outdoors': assignEPCheck = False warning = \"The", "use this input to angle shades downward. You can also put in lists", "= rc.Geometry.Vector3d.Multiply(sc.doc.ModelAbsoluteTolerance*2, tolVec) if tolVec.X > 0 and tolVec.Y > 0: tolVec =", "shades. distToGlass_: A number representing the offset distance from the glass to make", "= VER 0.0.58\\nAUG_20_2014 try: ghenv.Component.AdditionalHelpFromDocStrings = \"1\" except: pass from System import Object", "Control for Blinds\\n' + \\ '\\t' + '; !- Slat Angle Schedule Name\\n'", "HB components that generate or alter zones. Note that these should ideally be", "emittance, 0.25 mm thickness, 221 W/mK conductivity.\" blindsMaterial = ['DEFAULTBLINDSMATERIAL', 0.65, 0, 0.9,", "material.split('\\n') name = matLines[1].split(',')[0] reflect = float(matLines[2].split(',')[0]) transmit = float(matLines[3].split(',')[0]) emiss = float(matLines[4].split(',')[0])", "EnergyPlus simulation with the \"Honeybee_EP Context Surfaces\" component. ---------------: ... zoneData1Tree: Data trees", "multiple horizontal or vertical inputs are given, use it to split up the", "for Blinds\\n' + \\ '\\t' + '; !- Slat Angle Schedule Name\\n' return", "is its own branch of a grasshopper data tree. Alternatively, they can be", "Set to \"True\" to generate horizontal shades or \"False\" to generate vertical shades.", "* normalVectorPerp).ToBrep() shadingSurfaces.append(shdSrf) except: pass #If the user has specified a distance to", "E+ does not like shading depths greater than 1. 
HBObjWShades will not be", "assign different shade angles to different cardinal directions. interiorOrExter_: Set to \"True\" to", "windowCount, window in enumerate(windowList): windowBrepsFinal.append(window) windowName = windowNames[zoneCount][windowCount] windowNamesFinal.append(windowName) for inputDataTreeCount, branch in", "== False: assignEPCheck = False warning = \"The surface must not be curved.", "windowObj in enumerate(windowObjects): windowObj.blindsMaterial = createEPBlindMat(blindsMaterial, EPSlatOrientList[count], depthList[count], shadingHeightList[count], EPshdAngleList[count], distToGlassList[count], windowObj.name) windowObj.shadingControl", "windowBreps.Add(brep, GH_Path(count)) for count, brepList in enumerate(shadings): for brep in brepList: shadeBreps.Add(brep, GH_Path(count))", "enumerate(finalTree): for twig in branch: zoneData2Tree.Add(twig, GH_Path(bCount)) elif treeCount == 2: for bCount,", "ModifiedHBZones else: return False, [], [], [], [] else: print \"You should first", "them along the normal vector. if distToGlass != None: transVec = normalVectorPerp transVec.Unitize()", "areas for shade benefit evaluation after an energy simulation has already been run.", "a shade benefit evaulation as each window is its own branch of a", "directions. def getValueBasedOnOrientation(valueList): angles = [] if valueList == None or len(valueList) ==", "which is necessary to match the data to a zone or window. If", "norOrient = False if tolVec.X < 0 and tolVec.Y < 0: tolVec =", "with the branches for each window above. \"\"\" ghenv.Component.Name = \"Honeybee_EnergyPlus Window Shade", "blindsMaterial_: An optional blind material from the blind material component. If no material", "== None: print \"No blinds material has been connected. A material will be", "enumerate(branch): try: winNm = header[2].split(' for ')[-1].split(': ')[0] except: winNm = header[2].split(' for", "can be joined together with the mergeVectors_ input. 
_numOfShds: The number of shades", "zoneData2Tree: Data trees of the zoneData2_, which align with the branches for each", "!- Maximum Slat Angle {deg}\\n' return EPBlindMat def createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExter, name): if", "horOrVertical_: Set to \"True\" to generate horizontal shades or \"False\" to generate vertical", "generate shades on the exterior. The default is set to \"False\" to generate", "# Define a function that can split up a list of values and", "zone level. zoneData = False if isZone == True: for listCount, header in", "rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z) #Adjust the points to ensure the creation of the correct", "else: normalVectorPerp.Rotate((shdAngle*-0.01745329), planeVec) else: shdAngle = 0 #Make EP versions of some of", "that are all HBZones or all HBSrfs but not both. For now, just", "a: a.Origin.Z) elif horOrVertical == False: # Vertical # Define a vector to", "of the window. tolVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)) tolVec.Unitize() tolVec", "True: for childSrf in srf.childSrfs: windowObjects.append(childSrf) winNames.append(childSrf.name) winBreps.append(childSrf.geometry) else: print \"One surface with", "shades for Honeybee Zones # By <NAME> # <EMAIL> # Ladybug started by", "+ ', !- Name\\n' + \\ '\\t' + EPSlatOrient + ', !- Slat", "cardinal direction and assign different numbers of shades to different directions. numShds =", "generate shades for Honeybee zone windows. The component has two main uses: _", "dynamically controlled via a schedule. Note that shades created this way will automatically", "assignEPCheckInit = True if depth > 1: assignEPCheckInit = False warning = \"Note", "Beam Solar Reflectance\\n' + \\ '\\t' + ', !- Slat Diffuse Solar Transmittance\\n'", "boolean to \"True\" to run the component and generate shades. ---------------: ... 
zoneData1_:", "respective zone/surface data.\" if checkData2 == True and checkData3 == True and checkData4", "\"False\" to generate exterior shades. distToGlass_: A number representing the offset distance from", "= 'ALWAYSON' schedCntrl = 'No' schedName = '' else: schedName = schedule schedCntrlType", "shade benefit simulation with the generated shades. zoneData3_: Optional zone data for the", "distances between shades that are greater than 1. HBObjWShades will not be generated.", "generate the planes planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) #Define a bounding", "+ \\ '\\t' + str(blindsMaterial[4]) + ', !- Slat Thickness {m}\\n' + \\", "window.\" else: numOfShd = _numOfShds #Check the depths. checkData2 = True if _depth", "of the input object must be outdoors. E+ cannot create shades for intdoor", "0: minXYPt = bbox.Corner(False, True, True) minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z) maxXYPt =", "minZPt.X, minZPt.Y, minZPt.Z zHeights = rs.frange(minZPt.Z + shadingRemainder, maxZPt.Z + 0.5*sc.doc.ModelAbsoluteTolerance, shadingHeight) try:", "first is that it can be used to assign blind objects to HBZones", "zoneData2Tree = DataTree[Object]() zoneData3Tree = DataTree[Object]() for count, brep in enumerate(windowSrfsInit): windowBreps.Add(brep, GH_Path(count))", "lb_preparation = sc.sticky[\"ladybug_Preparation\"]() lb_mesh = sc.sticky[\"ladybug_Mesh\"]() lb_visualization = sc.sticky[\"ladybug_ResultVisualization\"]() # find the normal", "== 'Outdoors': if srf.isPlanar == True: for childSrf in srf.childSrfs: windowObjects.append(childSrf) winNames.append(childSrf.name) winBreps.append(childSrf.geometry)", "branch of a grasshopper data tree. shadeBreps: Breps representing each shade of the", "!- Back Side Slat Diffuse Visible Reflectance\\n' + \\ '\\t' + ', !-", "input data trees. 
def makePyTree(zoneData): dataPyList = [] for i in range(zoneData.BranchCount): branchList", "should first let both Honeybee and Ladybug fly...\") return False, [], [], [],", "len(branch):pass else: checkData3 = False warning = \"Not all of the connected zoneData", "def createEPBlindMat(blindsMaterial, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, name): EPBlindMat = \"WindowMaterial:Blind,\\n\" + \\", "!- Slat Infrared Hemispherical Transmittance\\n' + \\ '\\t' + str(blindsMaterial[3]) + ', !-", "if not, set a default. checkData4 = True if blindsSchedule_ == None: schedule", "to simulation. These blinds can be dynamically controlled via a schedule. Note that", "[] EPSlatOrientList = [] depthList = [] shadingHeightList = [] EPshdAngleList = []", "to \"True\" to run the component and generate shades. ---------------: ... zoneData1_: Optional", "+ \\ '\\t' + ', !- Back Side Slat Diffuse Visible Reflectance\\n' +", "Honeybee schedule library.\" print msg ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg) checkData4 = False elif schedule!=None and", "planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(minXYPt), planeVec)) sortedPlanes = planes return sortedPlanes, shadingHeight def makeShade(_glzSrf, depth, numShds, distBtwn):", "if sum(isZoneList) == len(_HBObjects): isZone = True elif sum(isZoneList) == 0: isZone =", "inputs and make sure that we have everything that we need to generate", "multiple shdAngle_ inputs are given, use it to split up the glazing by", "no header, the data cannot be coordinated with this component. 
checkData3 = True", "Side Slat Diffuse Visible Reflectance\\n' + \\ '\\t' + ', !- Back Side", "both Honeybee and Ladybug fly...\") return False, [], [], [], [] #Run the", "and extrusion vectors if intCrvs !=[]: for c in intCrvs: try: shdSrf =", "planes = [] pointCurve = rc.Geometry.Curve.CreateControlPointCurve([maxXYPt, minXYPt]) divisionParams = pointCurve.DivideByLength(shadingHeight, True) divisionPoints =", "a Creative Commons Attribution-ShareAlike 3.0 Unported License. \"\"\" Use this component to generate", "in planes: try: intCrvs.append(rc.Geometry.Brep.CreateContourCurves(_glzSrf, plane)[0]) except: print \"One intersection failed.\" if normalVector !=", "horOrVertical == True: EPSlatOrient = 'Horizontal' else: EPSlatOrient = 'Vertical' if interiorOrExter ==", "and # the normal is changing from point to point, then I should", "correct vector. testVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)) if testVec.IsParallelTo(planeVec) ==", "True: checkData, windowNames, windowSrfsInit, depths, alignedDataTree, numOfShd, blindsMaterial, schedule = checkTheInputs(zoneNames, windowNames, windowSrfs,", "vector. if distToGlass != None: transVec = normalVectorPerp transVec.Unitize() finalTransVec = rc.Geometry.Vector3d.Multiply(distToGlass, transVec)", "= _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt) if bool: normalVector = _glzSrf.Faces[0].NormalAt(centerPtU, centerPtV) #return rc.Geometry.Plane(baseSrfCenPt,normalVector) else: print \"Couldn't", "None or len(valueList) == 0: value = None if len(valueList) == 1: value", "180: normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) else: normalVectorPerp.Rotate((shdAngle*-0.01745329), planeVec) else: shdAngle = 0 #Make EP versions", "be generated on each window. You can also input lists of depths, which", "shadingHeight > 1: assignEPCheckInit = False warning = \"Note that E+ does not", "outdoor boundary condition. 
EenergyPlus shades will not be assigned to this window.\" windowNames.append(winNames)", "else: schedule= blindsSchedule_.upper() if schedule!=None and not schedule.lower().endswith(\".csv\") and schedule not in HBScheduleList:", "print \"One surface with a window does not have an outdoor boundary condition.", "in more winter sun than summer sun. If you have horizontal shades, use", "else: interiorOrExter = False #If a shdAngle is provided, use it to rotate", "+ schedCntrl + ', !- Shading Control Is Scheduled\\n' + \\ '\\t' +", "= float(matLines[5].split(',')[0]) conduct = float(matLines[6].split(';')[0]) return [name, reflect, transmit, emiss, thickness, conduct] def", "based on object.type = object.getTypeByNormalAngle() if not hasattr(object, 'angle2North'): # find the type", "default. checkData4 = True if blindsSchedule_ == None: schedule = \"ALWAYSON\" print \"No", "single shading planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(maxZPt), rc.Geometry.Vector3d.ZAxis)) # sort the planes sortedPlanes = sorted(planes, key=lambda a:", "Diffuse Visible Reflectance\\n' + \\ '\\t' + ', !- Slat Infrared Hemispherical Transmittance\\n'", "assignEPCheckInit = False warning = \"Note that E+ does not like distances between", "0.5*sc.doc.ModelAbsoluteTolerance, shadingHeight) try: for Z in zHeights: planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(X, Y, Z), rc.Geometry.Vector3d.ZAxis)) except: #", "= bbox.Corner(False, True, False) maxZPt = rc.Geometry.Point3d(maxZPt.X, maxZPt.Y, maxZPt.Z - sc.doc.ModelAbsoluteTolerance) centerPt =", "\\ '\\t' + schedName + ', !- Schedule Name\\n' + \\ '\\t' +", "run the component and generate shades. ---------------: ... 
zoneData1_: Optional zone data for", "ghenv.Component.NickName = 'EPWindowShades' ghenv.Component.Message = 'VER 0.0.55\\nSEP_11_2014' ghenv.Component.Category = \"Honeybee\" ghenv.Component.SubCategory = \"09", "range(len(angles)-1): if angles[angleCount] <= (getAngle2North(normalVector))%360 <= angles[angleCount +1]: targetValue = valueList[angleCount%len(valueList)] value =", "return [name, reflect, transmit, emiss, thickness, conduct] def createEPBlindMat(blindsMaterial, EPSlatOrient, depth, shadingHeight, EPshdAngle,", "Name\\n' + \\ '\\t' + EPinteriorOrExter + ', !- Shading Type\\n' + \\", "Set defaults on things that are not connected. if checkSameType == True: checkData,", "not, set a default. checkData4 = True if blindsSchedule_ == None: schedule =", "horizontal shades or \"False\" to generate vertical shades. You can also input lists", "a function that can split up a list of values and assign it", "greater than 1. HBObjWShades will not be generated. shadeBreps will still be produced", "shadingHeight) try: for Z in zHeights: planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(X, Y, Z), rc.Geometry.Vector3d.ZAxis)) except: # single", "[], [], [] else: print \"You should first let both Honeybee and Ladybug", "\\ '\\t' + ', !- Slat Infrared Hemispherical Transmittance\\n' + \\ '\\t' +", "for a shade benefit simulation with the generated shades. Returns: readMe!: ... ---------------:", "the generated windows. Use this to align data like heating load, cooling load", "The HBZones with the assigned shading (ready to be simulated). ---------------: ... windowBreps:", "of shadings try: numOfShd = int(numOfShds) shadingHeight = glzHeight/numOfShd shadingRemainder = shadingHeight except:", "return checkData, windowNamesFinal, windowBrepsFinal, _depth, alignedDataTree, numOfShd, blindsMaterial, schedule def analyzeGlz(glzSrf, distBetween, numOfShds,", "the shading surface.\" + \\ \"\\nRebuild the surface and try again!\" return -1", "not both. 
For now, just grab another component for each of these inputs.\"", "blindsMaterial_ == None: print \"No blinds material has been connected. A material will", "= 0 #Get the EnergyPlus distance to glass. EPDistToGlass = distToGlass + (depth)*(0.5)*math.cos(math.radians(EPshdAngle))", "None: horOrVertical = True if numOfShds == None and distBetween == None: numOfShds", "lists of angles to assign different shade angles to different cardinal directions. interiorOrExter_:", "_glzSrf.Faces[0].ClosestPoint(baseSrfCenPt) if bool: normalVector = _glzSrf.Faces[0].NormalAt(centerPtU, centerPtV) #return rc.Geometry.Plane(baseSrfCenPt,normalVector) else: print \"Couldn't find", "Slat Angle {deg}\\n' + \\ '\\t' + '180; !- Maximum Slat Angle {deg}\\n'", "+ \\ '\\t' + str(blindsMaterial[3]) + ', !- Back Side Slat Infrared Hemispherical", "Angle {deg}\\n' + \\ '\\t' + '180; !- Maximum Slat Angle {deg}\\n' return", "the data cannot be coordinated with this component. checkData3 = True checkBranches =", "vectors if intCrvs !=[]: for c in intCrvs: try: shdSrf = rc.Geometry.Surface.CreateExtrusion(c, float(depth)", "!- Minimum Slat Angle {deg}\\n' + \\ '\\t' + '180; !- Maximum Slat", "(ready to be simulated). ---------------: ... windowBreps: Breps representing each window of the", "', !- Name\\n' + \\ '\\t' + EPSlatOrient + ', !- Slat Orientation\\n'", "number of shades to generate minXYPt = bbox.Corner(True, True, True) minXYPt = rc.Geometry.Point3d(minXYPt.X,", "this input to angle shades downward. You can also put in lists of", "must not be curved. With the way that we mesh curved surfaces for", "multiple shading depths are given, use it to split up the glazing by", "or alter zones. 
Note that these should ideally be the zones that are", "be used to generate the planes planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis)", "the base for shadings intCrvs =[] for plane in planes: try: intCrvs.append(rc.Geometry.Brep.CreateContourCurves(_glzSrf, plane)[0])", "[name, reflect, transmit, emiss, thickness, conduct] def createEPBlindMat(blindsMaterial, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass,", "find the type based on object.getAngle2North() if not hasattr(object, \"BC\"): object.BC = 'OUTDOORS'", "be aligned with the generated windows. Use this to align data like heating", "\"BC\"): object.BC = 'OUTDOORS' if object.hasChild: if object.BC != 'OUTDOORS' and object.BC !=", "simulation. These blinds can be dynamically controlled via a schedule. Note that shades", "will let in more winter sun than summer sun. If you have horizontal", "shade inputs are given, use it to split up the glazing by cardinal", "that the surface is not planar and # the normal is changing from", "for brep in brepList: shadeBreps.Add(brep, GH_Path(count)) for treeCount, finalTree in enumerate(alignedDataTree): if treeCount", "!= [] and _runIt == True: checkData, windowSrfsInit, shadings, alignedDataTree, HBObjWShades = main()", "the window. tolVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)) tolVec.Unitize() tolVec =", "in enumerate(finalTree): for twig in branch: zoneData1Tree.Add(twig, GH_Path(bCount)) elif treeCount == 1: for", "the lists to each window. 
windowNamesFinal = [] windowBrepsFinal = [] alignedDataTree =", "and try again!\" return -1 shadingSurfaces =[] #Define a function that can get", "= bbox.Corner(True, False, True) maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z) #Adjust the points to", "Beam Solar Reflectance\\n' + \\ '\\t' + str(blindsMaterial[1]) + ', !- Back Side", "in planePoints: planes.append(rc.Geometry.Plane(point, planeVec)) except: # single shading planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(minXYPt), planeVec)) sortedPlanes = planes", "and set to \"False\" to generate shades on the exterior. The default is", "dataHeaders = [] dataNumbers = [] for list in branch: if str(list[0]) ==", "main(): if _HBObjects != [] and sc.sticky.has_key('honeybee_release') == True and sc.sticky.has_key('ladybug_release') == True:", "data is for the window level. srfData = False if zoneData == False:", "object.surfaces: if srf.hasChild: if srf.BC == 'OUTDOORS' or srf.BC == 'Outdoors': if srf.isPlanar", "for intdoor windows.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) elif object.isPlanar == False: assignEPCheck =", "helps keep the data tree paths of heating, cooling and beam gain synced", "else: checkData3 = False warning = \"Not all of the connected zoneData has", "glzSrf.GetBoundingBox(True) if horOrVertical == None: horOrVertical = True if numOfShds == None and", "will assign different depths based on cardinal direction. For example, inputing 4 values", "different numbers of shades to different directions. numShds = getValueBasedOnOrientation(numShds) # If multiple", "multiple distToGlass_ inputs are given, use it to split up the glazing by", "sure that we can run it through this component's functions. 
for object in", "minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)) if testVec.IsParallelTo(planeVec) == 0: minXYPt = bbox.Corner(False, True, True)", "+ '0.5, !- Blind Left Side Opening Multiplier\\n' + \\ '\\t' + '0.5,", "[], [], [] #Run the main functions. checkData = False if _HBObjects !=", "# If multiple horizontal or vertical inputs are given, use it to split", "sc.sticky.has_key('ladybug_release') == True: #Import the classes hb_EPZone = sc.sticky[\"honeybee_EPZone\"] hb_EPSrf = sc.sticky[\"honeybee_EPSurface\"] hb_EPFenSurface", "shadingHeightList.append(shadingHeight) EPshdAngleList.append(EPshdAngle) distToGlassList.append(distToGlass) EPinteriorOrExterList.append(EPinteriorOrExter) if assignEPCheckInit == False: assignEPCheck = False #Create the", "For this, you would take imported EnergyPlus results and hook them up to", "ghenv.Component.AddRuntimeMessage(w, warning) #Check if there is a blinds schedule connected and, if not,", "0 # import the classes lb_preparation = sc.sticky[\"ladybug_Preparation\"]() lb_mesh = sc.sticky[\"ladybug_Mesh\"]() lb_visualization =", "intersection failed.\" if normalVector != rc.Geometry.Vector3d.ZAxis: normalVectorPerp = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) angleFromNorm =", "split up the glazing by cardinal direction and assign different distances of shades", "allData.append(makePyTree(zoneData3_)) #Test to see if the data lists have a headers on them,", "= \"Honeybee_EnergyPlus Window Shade Generator\" ghenv.Component.NickName = 'EPWindowShades' ghenv.Component.Message = 'VER 0.0.55\\nSEP_11_2014' ghenv.Component.Category", "for each window above. zoneData2Tree: Data trees of the zoneData2_, which align with", "and 90 that represents an angle in degrees to rotate the shades. 
The", "crash.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) #Check the depth and the shadingHeight to see", "bCount, branch in enumerate(finalTree): for twig in branch: zoneData1Tree.Add(twig, GH_Path(bCount)) elif treeCount ==", "== True: for listCount, header in enumerate(branch): if header[2].split(' for ')[-1] == zoneName.upper():", "for each glazed surface. _distBetween: An alternate option to _numOfShds where the input", "bbox.Corner(True, True, True) minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z) maxXYPt = bbox.Corner(False, False, True)", "+ '0.5, !- Blind Top Opening Multiplier\\n' + \\ '\\t' + ', !-", "0: angleFromNorm = angleFromNorm*(-1) #If the user has set the shades to generate", "and Ladybug fly...\" ghenv.Component.AddRuntimeMessage(w, \"You should first let both Honeybee and Ladybug fly...\")", "blindsMaterial[0] + \"_\" + name + ', !- Shading Device Material Name\\n' +", "of shades to generate minZPt = bbox.Corner(False, True, True) minZPt = rc.Geometry.Point3d(minZPt.X, minZPt.Y,", "is not located on the surface baseSrfCenPt = _glzSrf.ClosestPoint(baseSrfCenPt) bool, centerPtU, centerPtV =", "elif len(valueList) > 1: initAngles = rs.frange(0, 360, 360/len(valueList)) for an in initAngles:", "alignedDataTree != [[], [], []]: print \"A window was not matched with its", "EPshdAngle, EPDistToGlass, EPinteriorOrExter, assignEPCheckInit def deconstructBlindMaterial(material): matLines = material.split('\\n') name = matLines[1].split(',')[0] reflect", "to \"0\" for no rotation. If you have vertical shades, use this to", "be the zones that are fed into the Run Energy Simulation component. 
Zones", "numOfShds = 1 if numOfShds == 0 or distBetween == 0: sortedPlanes =", "minXYPt.Z) maxXYPt = bbox.Corner(False, False, True) maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z) centerPt =", "tolVec.Unitize() tolVec = rc.Geometry.Vector3d.Multiply(sc.doc.ModelAbsoluteTolerance*2, tolVec) if tolVec.X > 0 and tolVec.Y > 0:", "planeVec) else: shdAngle = 0 #Make EP versions of some of the outputs.", "the \"ALWAYS ON\" shcedule. north_: Input a vector to be used as a", "and checkData4 == True and checkData5 == True: checkData = True else: checkData", "shades to generate on the interior, flip the normal vector. if interiorOrExter ==", "sure that we have everything that we need to generate the shades. Set", "hooked up a distBetwee or numOfShds. if _distBetween == [] and _numOfShds ==", "normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) else: normalVectorPerp.Rotate((shdAngle*-0.01745329), planeVec) else: shdAngle = 0 #Make EP versions of", "check if csv file is existed if not os.path.isfile(schedule): msg = \"Cannot find", "Is Active\\n' + \\ '\\t' + blindsMaterial[0] + \"_\" + name + ',", "HBZones out of any of the HB components that generate or alter zones.", "some of the outputs. EPshdAngleInint = angleFromNorm+shdAngle if EPshdAngleInint >= 0: EPshdAngle =", "if not, set a default. checkData5 = True if blindsMaterial_ == None: print", "not schedule.lower().endswith(\".csv\") and schedule not in HBScheduleList: msg = \"Cannot find \" +", "the exterior. The default is set to \"False\" to generate exterior shades. distToGlass_:", "shades. zoneData3_: Optional zone data for the HBZones_ that will be aligned with", "'\\t' + ', !- Construction with Shading Name\\n' + \\ '\\t' + schedCntrlType", "shcedule. 
north_: Input a vector to be used as a true North direction", "EPSlatOrient = 'Vertical' if interiorOrExter == True: EPinteriorOrExter = 'InteriorBlind' else: EPinteriorOrExter =", "[] and _runIt == True: checkData, windowSrfsInit, shadings, alignedDataTree, HBObjWShades = main() #Unpack", "True HBObjWShades = [] EPSlatOrientList = [] depthList = [] shadingHeightList = []", "things that are not connected. if checkSameType == True: checkData, windowNames, windowSrfsInit, depths,", "zones that are fed into the Run Energy Simulation component. Zones read back", "to generate the planes planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) #Define a", "The component has two main uses: _ The first is that it can", "dataPyList = [] for i in range(zoneData.BranchCount): branchList = zoneData.Branch(i) dataVal = []", "see if the data is for the zone level. zoneData = False if", "the northernmost side of the window. tolVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y,", "True and sc.sticky.has_key('ladybug_release') == True: #Import the classes hb_EPZone = sc.sticky[\"honeybee_EPZone\"] hb_EPSrf =", "== 2: for bCount, branch in enumerate(finalTree): for twig in branch: zoneData3Tree.Add(twig, GH_Path(bCount))", "# <EMAIL> # Ladybug started by <NAME> is licensed # under a Creative", "Reflectance\\n' + \\ '\\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Beam", "and sc.sticky.has_key('ladybug_release') == True: #Import the classes hb_EPZone = sc.sticky[\"honeybee_EPZone\"] hb_EPSrf = sc.sticky[\"honeybee_EPSurface\"]", "a grasshopper data tree. shadeBreps: Breps representing each shade of the window. These", "divisionPoints.append(pointCurve.PointAt(param)) planePoints = divisionPoints try: for point in planePoints: planes.append(rc.Geometry.Plane(point, planeVec)) except: #", "generate or alter zones. 
Note that these should ideally be the zones that", "Slat Angle Control for Blinds\\n' + \\ '\\t' + '; !- Slat Angle", "makeShade(window, depths, numOfShd, _distBetween) shadings.append(shadeBreps) EPSlatOrientList.append(EPSlatOrient) depthList.append(depth) shadingHeightList.append(shadingHeight) EPshdAngleList.append(EPshdAngle) distToGlassList.append(distToGlass) EPinteriorOrExterList.append(EPinteriorOrExter) if assignEPCheckInit", "False #Generate the shades. if checkData == True: shadings = [] for window", "cardinal direction. shdAngle_: A number between -90 and 90 that represents an angle", "!- Slat Beam Solar Transmittance\\n' + \\ '\\t' + str(blindsMaterial[1]) + ', !-", "if schedule == 'ALWAYSON': schedCntrlType = 'ALWAYSON' schedCntrl = 'No' schedName = ''", "System import Object from System import Drawing from clr import AddReference AddReference('Grasshopper') import", "different shdAngle_ to different directions. shdAngle = getValueBasedOnOrientation(shdAngle_) #If multiple interiorOrExter_ inputs are", "of shades to generate minXYPt = bbox.Corner(True, True, True) minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y,", "the connected zoneData has a Ladybug/Honeybee header on it. This header is necessary", "planes: try: intCrvs.append(rc.Geometry.Brep.CreateContourCurves(_glzSrf, plane)[0]) except: print \"One intersection failed.\" if normalVector != rc.Geometry.Vector3d.ZAxis:", "[] shadingHeightList = [] EPshdAngleList = [] distToGlassList = [] EPinteriorOrExterList = []", "+ name +', !- Name\\n' + \\ '\\t' + EPinteriorOrExter + ', !-", "elif horOrVertical == False: planeVec = rc.Geometry.Vector3d.ZAxis if getAngle2North(normalVectorPerp) < 180: normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec)", "header[2].split(' for ')[-1].split(': ')[0] except: winNm = header[2].split(' for ')[-1] if str(winNm) ==", "tolVec) #glazing distance glzHeight = minXYPt.DistanceTo(maxXYPt) # find number of shadings try: numOfShd", "different directions. 
depth = getValueBasedOnOrientation(depth) # If multiple number of shade inputs are", "{m}\\n' + \\ '\\t' + '0.5, !- Blind Top Opening Multiplier\\n' + \\", "glazing by cardinal direction and assign different horizontal or vertical to different directions.", "windowObj.shadingControlName = 'BlindCntrlFor_' + windowObj.name windowObj.shadingSchName = schedule ModifiedHBZones = hb_hive.addToHoneybeeHive(HBZoneObjects, ghenv.Component.InstanceGuid.ToString() +", "angles[angleCount] <= (getAngle2North(normalVector))%360 <= angles[angleCount +1]: targetValue = valueList[angleCount%len(valueList)] value = targetValue return", "intCrvs.append(rc.Geometry.Brep.CreateContourCurves(_glzSrf, plane)[0]) except: print \"One intersection failed.\" if normalVector != rc.Geometry.Vector3d.ZAxis: normalVectorPerp =", "!- Slat Angle {deg}\\n' + \\ '\\t' + str(blindsMaterial[5]) + ', !- Slat", "normalVector.Z < 0: angleFromNorm = angleFromNorm*(-1) #If the user has set the shades", "+ \\ '\\t' + 'FixedSlatAngle, !- Type of Slat Angle Control for Blinds\\n'", "0.0.55\\nSEP_11_2014' ghenv.Component.Category = \"Honeybee\" ghenv.Component.SubCategory = \"09 | Energy | Energy\" #compatibleHBVersion =", "for number of shades. The component will be run with one shade per", "different shade angles to different cardinal directions. 
interiorOrExter_: Set to \"True\" to generate", "angleFromNorm = math.degrees(rc.Geometry.Vector3d.VectorAngle(normalVectorPerp, normalVector)) if normalVector.Z < 0: angleFromNorm = angleFromNorm*(-1) #If the", "these shades using a 'Honeybee_EP Context Surfaces' component.\" print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) return", "'WindowProperty:ShadingControl,\\n' + \\ '\\t' + 'BlindCntrlFor_' + name +', !- Name\\n' + \\", "planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) elif horOrVertical == False: planeVec = rc.Geometry.Vector3d.ZAxis if getAngle2North(normalVectorPerp)", "dataVal = [] for item in branchList: dataVal.append(item) dataPyList.append(dataVal) return dataPyList allData =", "return shadingSurfaces, EPSlatOrient, depth, shadingHeight, EPshdAngle, EPDistToGlass, EPinteriorOrExter, assignEPCheckInit def deconstructBlindMaterial(material): matLines =", "boundary condition. EenergyPlus shades will not be assigned to this window.\" windowNames.append(winNames) windowSrfs.append(winBreps)", "= True else: tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient = True maxXYPt = rc.Geometry.Point3d.Subtract(maxXYPt,", "#Generate the shades. if checkData == True: shadings = [] for window in", "beam gain for a shade benefit simulation with the generated shades. zoneData2_: Optional", "the HB components that generate or alter zones. Note that these should ideally", "surface. 
_distBetween: An alternate option to _numOfShds where the input here is the", "when using this component for individual surfaces, you should make sure that the", "= sc.sticky[\"honeybee_EPFenSurface\"] hb_hive = sc.sticky[\"honeybee_Hive\"]() #Make the lists that will be filled up", "90 - EPshdAngleInint else: EPshdAngle = 90 + (EPshdAngleInint)*-1 if EPshdAngle > 180", "= 0.01 elif EPDistToGlass > 1: warning = \"The input distToGlass_ value is", "for object in HBZoneObjects: if object.objectType == \"HBZone\": isZoneList.append(1) zoneNames.append(object.name) winBreps = []", "between -90 and 90 that represents an angle in degrees to rotate the", "object.objectType == \"HBSurface\": isZoneList.append(0) warning = \"Note that, when using this component for", "== 0: shadingRemainder = shadingHeight # find shading base planes planeOrigins = []", "to split up the glazing by cardinal direction and assign different distToGlass_ to", "generated. shadeBreps will still be produced and you can account for these shades", "msg ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg) checkData4 = False #Create a Python list from the input", "a distBetwee or numOfShds. if _distBetween == [] and _numOfShds == []: numOfShd", "sure that the values are parallel to the correct vector. 
testVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X,", "+ \\ \"\\nRebuild the surface and try again!\" return -1 shadingSurfaces =[] #Define", "'\\t' + ', !- Slat Infrared Hemispherical Transmittance\\n' + \\ '\\t' + str(blindsMaterial[3])", "= normalVectorPerp transVec.Unitize() finalTransVec = rc.Geometry.Vector3d.Multiply(distToGlass, transVec) blindsTransform = rc.Geometry.Transform.Translation(finalTransVec) for shdSrf in", "Maximum Slat Angle {deg}\\n' return EPBlindMat def createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExter, name): if schedule", "header[2].split(' for ')[-1] == zoneName.upper(): alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount]) zoneData = True #Test to see if", "for E+, the program would just freak out with blinds.\" print warning ghenv.Component.AddRuntimeMessage(w,", "The second way to use the component is to create test shade areas", "this case, the component helps keep the data tree paths of heating, cooling", "planes planes, shadingHeight = analyzeGlz(_glzSrf, distBtwn, numShds, horOrVertical, lb_visualization, normalVector) # find the", "an EnergyPlus simulation with the \"Honeybee_EP Context Surfaces\" component. ---------------: ... 
zoneData1Tree: Data", "interiorOrExter = False #If a shdAngle is provided, use it to rotate the", "depth > 1: assignEPCheckInit = False warning = \"Note that E+ does not", "'OUTDOORS' and object.BC != 'Outdoors': assignEPCheck = False warning = \"The boundary condition", "shades to generate minXYPt = bbox.Corner(True, True, True) minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z)", "[] for i in range(zoneData.BranchCount): branchList = zoneData.Branch(i) dataVal = [] for item", "checkData = True else: checkData = False return checkData, windowNamesFinal, windowBrepsFinal, _depth, alignedDataTree,", "shadeBreps.Add(brep, GH_Path(count)) for treeCount, finalTree in enumerate(alignedDataTree): if treeCount == 0: for bCount,", "\"Note that, when using this component for individual surfaces, you should make sure", "where the input here is the distance in Rhino units between each shade.", "directions. shdAngle = getValueBasedOnOrientation(shdAngle_) #If multiple interiorOrExter_ inputs are given, use it to", "normalVectorPerp.Reverse() else: interiorOrExter = False #If a shdAngle is provided, use it to", "Diffuse Solar Reflectance\\n' + \\ '\\t' + str(blindsMaterial[2]) + ', !- Slat Beam", "Glare Control Is Active\\n' + \\ '\\t' + blindsMaterial[0] + \"_\" + name", "of the zoneData2_, which align with the branches for each window above. zoneData3Tree:", "rc.Geometry.Vector3d.Multiply(sc.doc.ModelAbsoluteTolerance*2, tolVec) if tolVec.X > 0 and tolVec.Y > 0: tolVec = rc.Geometry.Vector3d.Multiply(1,", "direction and assign different shdAngle_ to different directions. shdAngle = getValueBasedOnOrientation(shdAngle_) #If multiple", "it to rotate the planes by that angle if shdAngle != None: if", "distance to glass. 
EPDistToGlass = distToGlass + (depth)*(0.5)*math.cos(math.radians(EPshdAngle)) if EPDistToGlass < 0.01: EPDistToGlass", "Visible Reflectance\\n' + \\ '\\t' + ', !- Slat Infrared Hemispherical Transmittance\\n' +", "= DataTree[Object]() zoneData3Tree = DataTree[Object]() for count, brep in enumerate(windowSrfsInit): windowBreps.Add(brep, GH_Path(count)) for", "#If multiple interiorOrExter_ inputs are given, use it to split up the glazing", "[] assignEPCheck = True HBObjWShades = [] EPSlatOrientList = [] depthList = []", "EPshdAngleList[count], distToGlassList[count], windowObj.name) windowObj.shadingControl = createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExterList[count], windowObj.name) windowObj.shadingControlName = 'BlindCntrlFor_' +", "== [] and _numOfShds == []: numOfShd = [1] print \"No value is", "blindsMaterial, schedule def analyzeGlz(glzSrf, distBetween, numOfShds, horOrVertical, lb_visualization, normalVector): # find the bounding", "with the mergeVectors_ input. _numOfShds: The number of shades to generated for each", "using a 'Honeybee_EP Context Surfaces' component.\" print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) return shadingSurfaces, EPSlatOrient,", "= False if tolVec.X < 0 and tolVec.Y > 0: tolVec = rc.Geometry.Vector3d.Multiply(1,", "== 0: minXYPt = bbox.Corner(False, True, True) minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z) maxXYPt", "can also input lists of depths, which will assign different depths based on", "necessary to match the data to a zone or window. 
If there's no", "sometimes the center point is not located on the surface baseSrfCenPt = _glzSrf.ClosestPoint(baseSrfCenPt)", "header in enumerate(branch): try: winNm = header[2].split(' for ')[-1].split(': ')[0] except: winNm =", "ghenv.Component.Category = \"Honeybee\" ghenv.Component.SubCategory = \"09 | Energy | Energy\" #compatibleHBVersion = VER", "pass from System import Object from System import Drawing from clr import AddReference", "+ str(uuid.uuid4())) else: ModifiedHBZones = [] return checkData, windowSrfsInit, shadings, alignedDataTree, ModifiedHBZones else:", "minZPt.Y, minZPt.Z zHeights = rs.frange(minZPt.Z + shadingRemainder, maxZPt.Z + 0.5*sc.doc.ModelAbsoluteTolerance, shadingHeight) try: for", "of the correct number of shades starting from the northernmost side of the", "might be cases that the surface is not planar and # the normal", "windows.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) elif object.isPlanar == False: assignEPCheck = False warning", "checkSameType = True if sum(isZoneList) == len(_HBObjects): isZone = True elif sum(isZoneList) ==", "each shade of the window. These can be plugged into a shade benefit", "has been connected. It will be assumed that the blinds are always drawn\"", "transmit, emiss, thickness, conduct] def createEPBlindMat(blindsMaterial, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, name): EPBlindMat", "'Blinds material is not a valid blinds material from the \"Honeybee_EnergyPlus Blinds Material\"", "if bool: normalVector = _glzSrf.Faces[0].NormalAt(centerPtU, centerPtV) #return rc.Geometry.Plane(baseSrfCenPt,normalVector) else: print \"Couldn't find the", "use the output \"zoneDataTree\" in the shade benefit evaluation. - Provided by Honeybee", "is the distance in Rhino units between each shade. 
horOrVertical_: Set to \"True\"", "+ str(blindsMaterial[2]) + ', !- Slat Beam Visible Transmittance\\n' + \\ '\\t' +", "and test the normal direction for more point baseSrfCenPt = rc.Geometry.AreaMassProperties.Compute(_glzSrf).Centroid # sometimes", "HBScheduleList: msg = \"Cannot find \" + schedule + \" in Honeybee schedule", "the correct vector. testVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)) if testVec.IsParallelTo(planeVec)", "horOrVertical, lb_visualization, normalVector) # find the intersection crvs as the base for shadings", "not hasattr(object, 'angle2North'): # find the type based on object.getAngle2North() if not hasattr(object,", "the normal is changing from point to point, then I should sample the", "True planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis) normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) elif horOrVertical ==", "'\\t' + str(blindsMaterial[2]) + ', !- Slat Beam Visible Transmittance\\n' + \\ '\\t'", "centerPt = bbox.Center #glazing hieghts glzHeight = minZPt.DistanceTo(maxZPt) # find number of shadings", "0 #Get the EnergyPlus distance to glass. EPDistToGlass = distToGlass + (depth)*(0.5)*math.cos(math.radians(EPshdAngle)) if", "EPBlindMat def createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExter, name): if schedule == 'ALWAYSON': schedCntrlType = 'ALWAYSON'", "True and checkData4 == True and checkData5 == True: checkData = True else:", "windowBreps and shadeBreps outputs are just for visualization. 
_ The second way to", "Side Slat Infrared Hemispherical Emissivity\\n' + \\ '\\t' + str(distToGlass) + ', !-", "', !- Setpoint {W/m2, W or deg C}\\n' + \\ '\\t' + schedCntrl", "try: for point in planePoints: planes.append(rc.Geometry.Plane(point, planeVec)) except: # single shading planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(minXYPt), planeVec))", "= False warning = 'Blinds material is not a valid blinds material from", "!- Slat Diffuse Visible Transmittance\\n' + \\ '\\t' + ', !- Front Side", "degrees to rotate the shades. The default is set to \"0\" for no", "branch in enumerate(allHeaders): #Test to see if the data is for the zone", "#Import the classes hb_EPZone = sc.sticky[\"honeybee_EPZone\"] hb_EPSrf = sc.sticky[\"honeybee_EPSurface\"] hb_EPFenSurface = sc.sticky[\"honeybee_EPFenSurface\"] hb_hive", "tree. shadeBreps: Breps representing each shade of the window. These can be plugged", "window.\" else: print \"One surface with a window does not have an outdoor", "+ \\ '\\t' + str(distToGlass) + ', !- Blind to Glass Distance {m}\\n'", "An alternate option to _numOfShds where the input here is the distance in", "+ \\ '\\t' + ', !- Blind Bottom Opening Multiplier\\n' + \\ '\\t'", "\\ '\\t' + ', !- Minimum Slat Angle {deg}\\n' + \\ '\\t' +", "windows. For this, you would take imported EnergyPlus results and hook them up", "inputs and use the output \"zoneDataTree\" in the shade benefit evaluation. - Provided", "is a blinds schedule connected and, if not, set a default. checkData4 =", "# note2developer: there might be cases that the surface is not planar and", "for Z in zHeights: planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(X, Y, Z), rc.Geometry.Vector3d.ZAxis)) except: # single shading planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(maxZPt),", "to different cardinal directions. interiorOrExter_: Set to \"True\" to generate Shades on the", "\"Honeybee_EnergyPlus Blinds Material\" component.' 
print warning ghenv.Component.AddRuntimeMessage(w, warning) #Check if there is a", "= _glzSrf.ClosestPoint(baseSrfCenPt) bool, centerPtU, centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt) if bool: normalVector = _glzSrf.Faces[0].NormalAt(centerPtU, centerPtV)", "shades, move them along the normal vector. if distToGlass != None: transVec =", "None: transVec = normalVectorPerp transVec.Unitize() finalTransVec = rc.Geometry.Vector3d.Multiply(distToGlass, transVec) blindsTransform = rc.Geometry.Transform.Translation(finalTransVec) for", "to see if E+ will crash. assignEPCheckInit = True if depth > 1:", "= hb_hive.callFromHoneybeeHive(_HBObjects) #Find out what the object is and make sure that we", "'Horizontal' else: EPSlatOrient = 'Vertical' if interiorOrExter == True: EPinteriorOrExter = 'InteriorBlind' else:", "between each shade. horOrVertical_: Set to \"True\" to generate horizontal shades or \"False\"", "[] dataHeaders = [] dataNumbers = [] for list in branch: if str(list[0])", "input and shades can be joined together with the mergeVectors_ input. _numOfShds: The", "+ ', !- Blind to Glass Distance {m}\\n' + \\ '\\t' + '0.5,", "schedule == 'ALWAYSON': schedCntrlType = 'ALWAYSON' schedCntrl = 'No' schedName = '' else:", "# If multiple shading depths are given, use it to split up the", "connected here, the blinds will assume the \"ALWAYS ON\" shcedule. north_: Input a", "glazing by cardinal direction and assign different depths to different directions. 
depth =", "0.25 mm thickness, 221 W/mK conductivity.\" blindsMaterial = ['DEFAULTBLINDSMATERIAL', 0.65, 0, 0.9, 0.00025,", "sum(isZoneList) == 0: isZone = False else: checkSameType = False warning = \"This", "', !- Construction with Shading Name\\n' + \\ '\\t' + schedCntrlType + ',", "int(numOfShds) shadingHeight = glzHeight/numOfShd shadingRemainder = shadingHeight except: shadingHeight = distBetween shadingRemainder =", "minXYPt = bbox.Corner(True, True, True) minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z) maxXYPt = bbox.Corner(False,", "planeVec = rc.Geometry.Vector3d.ZAxis if getAngle2North(normalVectorPerp) < 180: normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) else: normalVectorPerp.Rotate((shdAngle*-0.01745329), planeVec) else:", "checkSameType == True: checkData, windowNames, windowSrfsInit, depths, alignedDataTree, numOfShd, blindsMaterial, schedule = checkTheInputs(zoneNames,", "for data input to this component.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) #Align all of", "normal is changing from point to point, then I should sample the test", "component will automatically assign a material of 0.65 solar reflectance, 0 transmittance, 0.9", "the number of shades to generate minXYPt = bbox.Corner(True, True, True) minXYPt =", "lists that will be filled up zoneNames = [] windowNames = [] windowSrfs", "i in range(zoneData.BranchCount): branchList = zoneData.Branch(i) dataVal = [] for item in branchList:", "'\\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Diffuse Solar Reflectance\\n' +", "', !- Back Side Slat Beam Solar Reflectance\\n' + \\ '\\t' + ',", "False warning = 'Blinds material is not a valid blinds material from the", "the type based on object.type = object.getTypeByNormalAngle() if not hasattr(object, 'angle2North'): # find", "item 2 = south depth, item 3 = east depth. 
Lists of vectors", "= [] isZoneList = [] assignEPCheck = True HBObjWShades = [] EPSlatOrientList =", "= 0 #Make EP versions of some of the outputs. EPshdAngleInint = angleFromNorm+shdAngle", "up the glazing by cardinal direction and assign different distToGlass_ to different directions.", "the hive. HBZoneObjects = hb_hive.callFromHoneybeeHive(_HBObjects) #Find out what the object is and make", "shadeBreps: Breps representing each shade of the window. These can be plugged into", "== True and checkData5 == True: checkData = True else: checkData = False", "depths to different directions. depth = getValueBasedOnOrientation(depth) # If multiple number of shade", "for plane in planes: try: intCrvs.append(rc.Geometry.Brep.CreateContourCurves(_glzSrf, plane)[0]) except: print \"One intersection failed.\" if", "+ ', !- Blind Bottom Opening Multiplier\\n' + \\ '\\t' + '0.5, !-", "component. Zones read back into Grasshopper from the Import idf component will not", "depth of the shade to be generated on each window. You can also", "if _HBObjects != [] and _runIt == True: checkData, windowSrfsInit, shadings, alignedDataTree, HBObjWShades", "If multiple shdAngle_ inputs are given, use it to split up the glazing", "program would just freak out with blinds.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) else: for", "shades. zoneData2_: Optional zone data for the HBZones_ that will be aligned with", "a distance to move the shades, move them along the normal vector. if", "it to split up the glazing by cardinal direction and assign different distances", "= \"ALWAYSON\" print \"No blinds schedule has been connected. It will be assumed", "run. 
In this case, the component helps keep the data tree paths of", "input shdAngle_ value will cause EnergyPlus to crash.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) if", "EPSlatOrientList[count], depthList[count], shadingHeightList[count], EPshdAngleList[count], distToGlassList[count], windowObj.name) windowObj.shadingControl = createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExterList[count], windowObj.name) windowObj.shadingControlName", "curved. With the way that we mesh curved surfaces for E+, the program", "Note that shades created this way will automatically be assigned to the zone", "it to split up the glazing by cardinal direction and assign different interiorOrExterior_", "= \"WindowMaterial:Blind,\\n\" + \\ '\\t' + blindsMaterial[0] + \"_\" + name + ',", "== 0: isZone = False else: checkSameType = False warning = \"This component", "ensure the creation of the correct number of shades starting from the northernmost", "the planes and extrusion vectors if intCrvs !=[]: for c in intCrvs: try:", "+ ', !- Schedule Name\\n' + \\ '\\t' + ', !- Setpoint {W/m2,", "EPshdAngleInint = angleFromNorm+shdAngle if EPshdAngleInint >= 0: EPshdAngle = 90 - EPshdAngleInint else:", "valueList[angleCount%len(valueList)] value = targetValue return value # If multiple shading depths are given,", "= minXYPt.DistanceTo(maxXYPt) # find number of shadings try: numOfShd = int(numOfShds) shadingHeight =", "generate exterior shades. distToGlass_: A number representing the offset distance from the glass", "checkData == True: shadings = [] for window in windowSrfsInit: shadeBreps, EPSlatOrient, depth,", "different distToGlass_ to different directions. 
# By <NAME>
# <EMAIL>
# Ladybug started by <NAME> is licensed
# under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

"""
Use this component to generate blinds for HBZone windows.  The component has two main uses:
_
The first is that it can be used to assign blind objects to HBZones prior to simulation.  These blinds can be dynamically controlled via a schedule.  Note that these should ideally be the zones that are fed into the Run Energy Simulation component.  Zones read back into Grasshopper from the Import idf component will not align correctly with the EP Result data.
_
The second way to use the component is to create test shade areas for shade benefit evaluation after an energy simulation has already been run.  In this case, the component helps keep the data for heating, cooling and beam gain synced with that of the zones and windows.  For this, you would take imported EnergyPlus results and hook them up to the "zoneData" inputs and use the output "zoneDataTree" in the shade benefit evaluation.
-
Provided by Honeybee 0.0.55

    Args:
        _HBObjects: The HBZones out of any of the HB components that generate or alter zones.
        _depth: A number representing the depth of the shade to be generated on each window.  You can also input lists of depths, which will assign different depths based on cardinal direction.  For example, inputing 4 values for depths will assign each value of the list as follows: item 0 = north depth, item 1 = west depth, item 2 = south depth, item 3 = east depth.  Lists of vectors to be shaded can also be input and shades can be joined together with the mergeVectors_ input.
        _numOfShds: The number of shades to generated for each glazed surface.  If multiple values are input, they will be assigned based on cardinal direction.
        _distBetween: An alternate option to _numOfShds where the input here is the distance in Rhino units between each shade.
        horOrVertical_: Set to "True" to generate horizontal shades or "False" to generate vertical shades.  You can also input lists of horOrVertical_ input, which will assign different orientations based on cardinal direction.
        shdAngle_: A number between -90 and 90 that represents an angle in degrees to rotate the shades.  The default is set to "0" for no rotation.  If you have vertical shades, use this to rotate them towards the South by a certain value in degrees.  If applied to windows facing East or West, tilting the shades like this will let in more winter sun than summer sun.  If you have horizontal shades, use this input to angle shades downward.  You can also put in lists of angles to assign different shade angles to different cardinal directions.
        distToGlass_: A number representing the offset distance from the glass to make the shades.  If multiple values are input, they will be assigned based on cardinal direction.
        interiorOrExter_: Set to "True" to generate shades on the interior and set to "False" to generate shades on the exterior.  The default is set to "False" to generate exterior shades.
        blindsMaterial_: An optional blind material from the blind material component.  If no material is connected here, a material will be used with 0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK conductivity.
        blindsSchedule_: An optional schedule to raise and lower the blinds.  If no value is connected here, the blinds will assume the "ALWAYS ON" schedule.
        north_: Input a vector to be used as a true North direction, or a number between 0 and 360 that represents the degrees off from the y-axis to make North.  The default North direction is the Y-axis (0 degrees).
        _runIt: Set boolean to "True" to run the component and generate shades.
        ---------------: ...
        zoneData1_: Optional zone data for the HBZones that will be aligned with the generated windows.  Use this to align data like heating load, cooling load or beam gain for a shade benefit simulation with the generated shades.
        zoneData2_: Another optional zone data input for the HBZones that will be aligned with the generated windows.
        zoneData3_: Another optional zone data input for the HBZones that will be aligned with the generated windows.
    Returns:
        readMe!: ...
        ---------------: ...
        HBZones: The HBZones with the assigned shading (ready to be simulated).
        ---------------: ...
        windowBreps: Breps representing each window of the zone.  These can be plugged into a shade benefit evaluation as each window is its own branch of a grasshopper data tree.
        shadeBreps: Breps representing each shade.  These can be plugged into a shade benefit evaluation as each shade is its own branch of a grasshopper data tree.  They can also be plugged into a "Honeybee_EP Context Surfaces" component, and you can account for these shades in an energy simulation this way.
        ---------------: ...
        zoneData1Tree: Data trees of the zoneData1_, which align with the branches for each window above.
        zoneData2Tree: Data trees of the zoneData2_, which align with the branches for each window above.
        zoneData3Tree: Data trees of the zoneData3_, which align with the branches for each window above.
"""

ghenv.Component.Name = "Honeybee_EnergyPlus Window Shade Generator"
ghenv.Component.NickName = 'EPWindowShades'
ghenv.Component.Message = 'VER 0.0.55\nSEP_11_2014'
ghenv.Component.Category = "Honeybee"
ghenv.Component.SubCategory = "09 | Energy | Energy"
#compatibleHBVersion = VER 0.0.55\nAUG_25_2014
#compatibleLBVersion = VER 0.0.58\nAUG_20_2014
try: ghenv.Component.AdditionalHelpFromDocStrings = "1"
except: pass

from System import Object
from System import Drawing
from clr import AddReference
AddReference('Grasshopper')
import Grasshopper.Kernel as gh
from Grasshopper import DataTree
from Grasshopper.Kernel.Data import GH_Path
import Rhino as rc
import rhinoscriptsyntax as rs
import scriptcontext as sc
import uuid
import math
import os

w = gh.GH_RuntimeMessageLevel.Warning
tol = sc.doc.ModelAbsoluteTolerance


#Create a Python list from the input data trees.
def makePyTree(zoneData):
    dataPyList = []
    for i in range(zoneData.BranchCount):
        branchList = zoneData.Branch(i)
        dataVal = []
        for item in branchList:
            dataVal.append(item)
        dataPyList.append(dataVal)
    return dataPyList


#Check the inputs and make sure that we have everything that we need to generate the shades.
def checkTheInputs(zoneNames, windowNames, windowSrfs, isZone):
    #Check if the user has hooked up a distBetween or numOfShds.
    if _numOfShds == None and _distBetween == None:
        numOfShd = 1
        print "No value is connected for the number of shades.  The component will be run with one shade per window."
    else:
        numOfShd = _numOfShds
    
    #Check the depths.
    checkData2 = True
    if _depth == []:
        checkData2 = False
        print "You must provide a depth for the shades."
        ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, "You must provide a depth for the shades.")
    
    #Check if there is a blinds schedule connected and, if not, set a default.
    checkData4 = True
    HBScheduleList = sc.sticky["honeybee_ScheduleLib"].keys()
    if blindsSchedule_ == None:
        schedule = "ALWAYSON"
        print "No shading schedule has been connected.  It will be assumed that the blinds are always drawn."
    else:
        schedule = blindsSchedule_.upper()
        if schedule != None and not schedule.lower().endswith(".csv") and schedule not in HBScheduleList:
            msg = "Cannot find " + schedule + " in the Honeybee schedule library."
            print msg
            ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg)
            checkData4 = False
        elif schedule.lower().endswith(".csv"):
            #Check the file path of the csv schedule.
            if not os.path.isfile(schedule):
                msg = "Cannot find the schedule file: " + schedule
                print msg
                ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg)
                checkData4 = False
    
    #Check if there is a blinds material connected and, if not, set a default.
    checkData5 = True
    if blindsMaterial_ == None:
        print "No blinds material has been connected.  A material will be used with 0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK conductivity."
        blindsMaterial = ['DEFAULTBLINDSMATERIAL', 0.65, 0, 0.9, 0.00025, 221]
    else:
        try:
            matLines = blindsMaterial_.split('\n')
            name = matLines[1].split(',')[0]
            reflect = float(matLines[2].split(',')[0])
            transmit = float(matLines[3].split(',')[0])
            emiss = float(matLines[4].split(',')[0])
            thickness = float(matLines[5].split(',')[0])
            conduct = float(matLines[6].split(';')[0])
            blindsMaterial = [name, reflect, transmit, emiss, thickness, conduct]
        except:
            checkData5 = False
            warning = 'Blinds material is not valid.  Use the "Honeybee_EnergyPlus Blinds Material" component.'
            print warning
            ghenv.Component.AddRuntimeMessage(w, warning)
    
    #Read the zone data inputs.
    allData = []
    allData.append(makePyTree(zoneData1_))
    allData.append(makePyTree(zoneData2_))
    allData.append(makePyTree(zoneData3_))
    
    #Test to see if the data lists have a header on them, which is necessary to match the data to a zone or surface.  If there's no header, the data cannot be coordinated with this component.
    checkData3 = True
    allHeaders = []
    allNumbers = []
    for branch in allData:
        checkHeader = []
        dataHeaders = []
        dataNumbers = []
        for list in branch:
            if str(list[0]) == "key:location/dataType/units/frequency/startsAt/endsAt":
                checkHeader.append(1)
                dataHeaders.append(list[:7])
                dataNumbers.append(list[7:])
        allHeaders.append(dataHeaders)
        allNumbers.append(dataNumbers)
        if sum(checkHeader) == len(branch): pass
        else:
            checkData3 = False
            warning = "Not all of the connected zoneData has a Ladybug/Honeybee header on it.  This header is necessary for data input to this component."
            print warning
            ghenv.Component.AddRuntimeMessage(w, warning)
    
    #Align all of the lists to each window.
    windowNamesFinal = []
    windowBrepsFinal = []
    alignedDataTree = []
    for item in allData: alignedDataTree.append([])
    
    for zoneCount, windowList in enumerate(windowSrfs):
        if isZone == True:
            zoneName = zoneNames[zoneCount]
        for windowCount, window in enumerate(windowList):
            windowName = windowNames[zoneCount][windowCount]
            windowNamesFinal.append(windowName)
            windowBrepsFinal.append(window)
            for inputDataTreeCount, branch in enumerate(allHeaders):
                #Test to see if the data is for the zone level.
                zoneData = False
                if isZone == True:
                    for listCount, header in enumerate(branch):
                        if header[2].split(' for ')[-1] == zoneName.upper():
                            alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount])
                            zoneData = True
                #Test to see if the data is for the surface level.
                srfData = False
                if zoneData == False:
                    for listCount, header in enumerate(branch):
                        try:
                            winNm = header[2].split(' for ')[-1]
                            if str(winNm) == str(windowName.upper()):
                                alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount])
                                srfData = True
                        except: pass
                if zoneData == False and srfData == False and alignedDataTree != [[], [], []]:
                    print "A window was not matched with its respective zone/surface data."
    
    if checkData2 == True and checkData3 == True and checkData4 == True and checkData5 == True:
        checkData = True
    else:
        checkData = False
    
    return checkData, _depth, alignedDataTree, numOfShd, blindsMaterial, schedule


def analyzeGlz(glzSrf, distBtwn, numShds, horOrVertical, lb_visualization, normalVector):
    # find the bounding box
    bbox = glzSrf.GetBoundingBox(True)
    if horOrVertical == None:
        horOrVertical = True
    
    if horOrVertical == True:
        #Define a bounding box for use in calculating the number of shades to generate.
        minZPt = bbox.Corner(False, True, True)
        minZPt = rc.Geometry.Point3d(minZPt.X, minZPt.Y, minZPt.Z)
        maxZPt = bbox.Corner(False, True, False)
        maxZPt = rc.Geometry.Point3d(maxZPt.X, maxZPt.Y, maxZPt.Z - sc.doc.ModelAbsoluteTolerance)
        centerPt = bbox.Center
        #Glazing heights.
        glzHeight = minZPt.DistanceTo(maxZPt)
        
        # find number of shadings
        try:
            numOfShd = int(numShds)
            shadingHeight = glzHeight/numOfShd
            shadingRemainder = shadingHeight
        except:
            shadingHeight = distBtwn
            shadingRemainder = (((glzHeight/distBtwn) - math.floor(glzHeight/distBtwn))*distBtwn)
            if shadingRemainder == 0:
                shadingRemainder = shadingHeight
        
        # find shading base planes
        planeOrigins = []
        planes = []
        X, Y, z = minZPt.X, minZPt.Y, minZPt.Z
        zHeights = rs.frange(minZPt.Z + shadingRemainder, maxZPt.Z + 0.5*sc.doc.ModelAbsoluteTolerance, shadingHeight)
        try:
            for Z in zHeights:
                planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(X, Y, Z), rc.Geometry.Vector3d.ZAxis))
        except:
            # single shading
            planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(maxZPt), rc.Geometry.Vector3d.ZAxis))
        # sort the planes
        sortedPlanes = sorted(planes, key=lambda a: a.Origin.Z)
    else:
        planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)
        planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis)
        #Define a bounding box for use in calculating the number of shades to generate.
        minXYPt = bbox.Corner(True, True, True)
        minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z)
        maxXYPt = bbox.Corner(False, False, True)
        maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)
        centerPt = bbox.Center
        #Test to be sure that the values are parallel to the correct vector.
        testVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z))
        if testVec.IsParallelTo(planeVec) == 0:
            minXYPt = bbox.Corner(False, True, True)
            minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z)
            maxXYPt = bbox.Corner(True, False, True)
            maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)
        #Adjust the points to ensure the creation of the correct number of shades starting from the northernmost side of the window.
        tolVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z))
        tolVec.Unitize()
        tolVec = rc.Geometry.Vector3d.Multiply(sc.doc.ModelAbsoluteTolerance*2, tolVec)
        if tolVec.X > 0 and tolVec.Y > 0:
            norOrient = False
        elif tolVec.X < 0 and tolVec.Y < 0:
            tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec)
            norOrient = True
        else:
            tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec)
            norOrient = True
        maxXYPt = rc.Geometry.Point3d.Subtract(maxXYPt, tolVec)
        minXYPt = rc.Geometry.Point3d.Subtract(minXYPt, tolVec)
        #Glazing distance.
        glzHeight = minXYPt.DistanceTo(maxXYPt)
        
        # find number of shadings
        try:
            numOfShd = int(numShds)
            shadingHeight = glzHeight/numOfShd
            shadingRemainder = shadingHeight
        except:
            shadingHeight = distBtwn
            shadingRemainder = (((glzHeight/distBtwn) - math.floor(glzHeight/distBtwn))*distBtwn)
            if shadingRemainder == 0:
                shadingRemainder = shadingHeight
        
        # find shading base planes
        planes = []
        pointCurve = rc.Geometry.Curve.CreateControlPointCurve([maxXYPt, minXYPt], 1)
        divisionParams = pointCurve.DivideByLength(shadingHeight, True)
        divisionPoints = []
        for param in divisionParams:
            divisionPoints.append(pointCurve.PointAt(param))
        planePoints = divisionPoints
        try:
            for point in planePoints:
                planes.append(rc.Geometry.Plane(point, planeVec))
        except:
            # single shading
            planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(minXYPt), planeVec))
        sortedPlanes = planes
    
    return sortedPlanes, shadingHeight


def makeShade(_glzSrf, depth, numShds, distBtwn):
    rotationAngle_ = 0
    # import the classes
    lb_preparation = sc.sticky["ladybug_Preparation"]()
    lb_mesh = sc.sticky["ladybug_Mesh"]()
    lb_visualization = sc.sticky["ladybug_ResultVisualization"]()
    
    # find the normal of the surface in the center
    # note2developer: there might be cases that the surface is not planar and
    # the normal is changing from point to point, then I should sample the test surface
    # and test the normal direction for more points
    baseSrfCenPt = rc.Geometry.AreaMassProperties.Compute(_glzSrf).Centroid
    # sometimes the center point is not located on the surface
    baseSrfCenPt = _glzSrf.ClosestPoint(baseSrfCenPt)
    bool, centerPtU, centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt)
    if bool:
        normalVector = _glzSrf.Faces[0].NormalAt(centerPtU, centerPtV)
        #return rc.Geometry.Plane(baseSrfCenPt,normalVector)
    else:
        print "Couldn't find the normal of the shading surface." + \
              "\nRebuild the surface and try again!"
        return -1
    
    shadingSurfaces =[]
    
    #Define a function that can get the angle to North of any surface.
    def getAngle2North(normalVector):
        if north_ != None and north_.IsValid():
            northVector = north_
        else: northVector = rc.Geometry.Vector3d.YAxis
        angle = rc.Geometry.Vector3d.VectorAngle(northVector, normalVector, rc.Geometry.Plane.WorldXY)
        finalAngle = math.degrees(angle)
        return finalAngle
    
    # Define a function that can split up a list of values and assign it to different cardinal directions.
    def getValueBasedOnOrientation(valueList):
        angles = []
        if valueList == None or len(valueList) == 0:
            value = None
        if len(valueList) == 1:
            value = valueList[0]
        elif len(valueList) > 1:
            initAngles = rs.frange(0, 360, 360/len(valueList))
            for an in initAngles: angles.append(an-(360/(2*len(valueList))))
            angles.append(360)
            for angleCount in range(len(angles)-1):
                if angles[angleCount] <= (getAngle2North(normalVector))%360 <= angles[angleCount +1]:
                    targetValue = valueList[angleCount%len(valueList)]
            value = targetValue
        return value
    
    # If multiple depths are given, use it to split up the glazing by cardinal direction and assign different depths to different directions.
    depth = getValueBasedOnOrientation(depth)
    # If multiple numbers of shades are given, use it to split up the glazing by cardinal direction and assign different numbers of shades to different directions.
    numShds = getValueBasedOnOrientation(numShds)
    # If multiple horizontal or vertical inputs are given, use it to split up the glazing by cardinal direction and assign different horizontal or vertical to different directions.
    horOrVertical = getValueBasedOnOrientation(horOrVertical_)
    # If multiple shdAngle_ inputs are given, use it to split up the glazing by cardinal direction and assign different shade angles to different directions.
    shdAngle = getValueBasedOnOrientation(shdAngle_)
    # If multiple distToGlass_ inputs are given, use it to split up the glazing by cardinal direction and assign different distToGlass_ to different directions.
    distToGlass = getValueBasedOnOrientation(distToGlass_)
    
    # generate the planes
    planes, shadingHeight = analyzeGlz(_glzSrf, distBtwn, numShds, horOrVertical, lb_visualization, normalVector)
    
    # find the intersection crvs as the base for shadings
    intCrvs =[]
    for plane in planes:
        try: intCrvs.append(rc.Geometry.Brep.CreateContourCurves(_glzSrf, plane)[0])
        except: print "One intersection failed."
    
    if normalVector != rc.Geometry.Vector3d.ZAxis:
        normalVectorPerp = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)
    angleFromNorm = math.degrees(rc.Geometry.Vector3d.VectorAngle(normalVectorPerp, normalVector))
    if normalVector.Z < 0: angleFromNorm = angleFromNorm*(-1)
    
    #If the user has set the shades to generate on the interior, flip the normal vector.
    if interiorOrExter_ == True:
        normalVectorPerp = rc.Geometry.Vector3d.Multiply(-1, normalVectorPerp)
    
    #If a shdAngle is provided, use it to rotate the planes by that angle.
    if shdAngle != None:
        if horOrVertical == True or horOrVertical == None:
            horOrVertical = True
            planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)
            planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis)
            normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec)
        elif horOrVertical == False:
            planeVec = rc.Geometry.Vector3d.ZAxis
            if getAngle2North(normalVectorPerp) < 180:
                normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec)
            else:
                normalVectorPerp.Rotate((shdAngle*-0.01745329), planeVec)
    else:
        shdAngle = 0
    
    #Make EP versions of some of the outputs.
    EPshdAngleInint = angleFromNorm + shdAngle
    EPshdAngle = 90 - EPshdAngleInint
    if EPshdAngle > 180 or EPshdAngle < 0:
        warning = "The input shdAngle_ value will cause EnergyPlus to crash."
        print warning
        ghenv.Component.AddRuntimeMessage(w, warning)
    if horOrVertical == True: EPSlatOrient = 'Horizontal'
    else: EPSlatOrient = 'Vertical'
    if interiorOrExter_ == True: EPinteriorOrExter = 'InteriorBlind'
    else: EPinteriorOrExter = 'ExteriorBlind'
    
    #Generate the shade surfaces.
    if intCrvs !=[]:
        for c in intCrvs:
            try:
                shdSrf = rc.Geometry.Surface.CreateExtrusion(c, float(depth) * normalVectorPerp).ToBrep()
                shadingSurfaces.append(shdSrf)
            except: pass
    
    #Move the shades if a distance to glass has been specified.
    if distToGlass != None:
        transVec = rc.Geometry.Vector3d(normalVectorPerp.X, normalVectorPerp.Y, normalVectorPerp.Z)
        transVec.Unitize()
        finalTransVec = rc.Geometry.Vector3d.Multiply(distToGlass, transVec)
        blindsTransform = rc.Geometry.Transform.Translation(finalTransVec)
        for shdSrf in shadingSurfaces:
            shdSrf.Transform(blindsTransform)
    else:
        distToGlass = 0
    #Get the EnergyPlus distance to glass.
    EPDistToGlass = distToGlass
    
    #Check the depth and the shadingHeight to see if E+ will crash.  Note that E+ does not like slat widths or distances between shades that are greater than 1.
    assignEPCheckInit = True
    if depth > 1:
        assignEPCheckInit = False
        warning = "The input _depth value will cause EnergyPlus to crash."
        print warning
        ghenv.Component.AddRuntimeMessage(w, warning)
    if shadingHeight > 1:
        assignEPCheckInit = False
        warning = "The input distance between shades will cause EnergyPlus to crash."
        print warning
        ghenv.Component.AddRuntimeMessage(w, warning)
    if EPDistToGlass > 1:
        assignEPCheckInit = False
        warning = "The input distToGlass_ value is so large that it will cause EnergyPlus to crash."
        print warning
        ghenv.Component.AddRuntimeMessage(w, warning)
    
    return shadingSurfaces, EPSlatOrient, depth, shadingHeight, EPshdAngle, EPDistToGlass, EPinteriorOrExter, assignEPCheckInit


def createEPBlindMat(blindsMaterial, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, name):
    EPBlindMat = "WindowMaterial:Blind,\n" + \
        '\t' + blindsMaterial[0] + "_" + name + ', !- Name\n' + \
        '\t' + EPSlatOrient + ', !- Slat Orientation\n' + \
        '\t' + str(depth) + ', !- Slat Width {m}\n' + \
        '\t' + str(shadingHeight) +', !- Slat Separation {m}\n' + \
        '\t' + str(blindsMaterial[4]) + ', !- Slat Thickness {m}\n' + \
        '\t' + str(EPshdAngle) + ', !- Slat Angle {deg}\n' + \
        '\t' + str(blindsMaterial[5]) + ', !- Slat Conductivity {W/m-K}\n' + \
        '\t' + str(blindsMaterial[2]) + ', !- Slat Beam Solar Transmittance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Front Side Slat Beam Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Beam Solar Reflectance\n' + \
        '\t' + ', !- Slat Diffuse Solar Transmittance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Front Side Slat Diffuse Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Diffuse Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[2]) + ', !- Slat Beam Visible Transmittance\n' + \
        '\t' + ', !- Front Side Slat Beam Visible Reflectance\n' + \
        '\t' + ', !- Back Side Slat Beam Visible Reflectance\n' + \
        '\t' + ', !- Slat Diffuse Visible Transmittance\n' + \
        '\t' + ', !- Front Side Slat Diffuse Visible Reflectance\n' + \
        '\t' + ', !- Back Side Slat Diffuse Visible Reflectance\n' + \
        '\t' + ', !- Slat Infrared Hemispherical Transmittance\n' + \
        '\t' + str(blindsMaterial[3]) + ', !- Front Side Slat Infrared Hemispherical Emissivity\n' + \
        '\t' + str(blindsMaterial[3]) + ', !- Back Side Slat Infrared Hemispherical Emissivity\n' + \
        '\t' + str(distToGlass) + ', !- Blind to Glass Distance {m}\n' + \
        '\t' + '0.5, !- Blind Top Opening Multiplier\n' + \
        '\t' + '0.5, !- Blind Bottom Opening Multiplier\n' + \
        '\t' + '0.5, !- Blind Left Side Opening Multiplier\n' + \
        '\t' + '0.5, !- Blind Right Side Opening Multiplier\n' + \
        '\t' + ', !- Minimum Slat Angle {deg}\n' + \
        '\t' + '180; !- Maximum Slat Angle {deg}\n'
    return EPBlindMat


def createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExter, name):
    if schedule == 'ALWAYSON':
        schedCntrlType = 'ALWAYSON'
        schedCntrl = 'No'
        schedName = ''
    else:
        schedName = schedule
        schedCntrlType = 'OnIfScheduleAllows'
        schedCntrl = 'Yes'
    EPBlindControl = "WindowProperty:ShadingControl,\n" + \
        '\t' + 'BlindCntrlFor_' + name +', !- Name\n' + \
        '\t' + EPinteriorOrExter + ', !- Shading Type\n' + \
        '\t' + ', !- Construction with Shading Name\n' + \
        '\t' + schedCntrlType + ', !- Shading Control Type\n' + \
        '\t' + schedName + ', !- Schedule Name\n' + \
        '\t' + ', !- Setpoint\n' + \
        '\t' + schedCntrl + ', !- Shading Control Is Scheduled\n' + \
        '\t' + 'No, !- Glare Control Is Active\n' + \
        '\t' + blindsMaterial[0] + '_' + name + ', !- Shading Device Material Name\n' + \
        '\t' + 'FixedSlatAngle, !- Type of Slat Angle Control for Blinds\n' + \
        '\t' + '; !- Slat Angle Schedule Name\n'
    return EPBlindControl


def main():
    if _HBObjects != [] and sc.sticky.has_key('honeybee_release') == True and sc.sticky.has_key('ladybug_release') == True:
        hb_hive = sc.sticky["honeybee_Hive"]()
        
        #Make the lists that will be filled up.
        zoneNames = []
        windowNames = []
        windowSrfs = []
        windowObjects = []
        isZoneList = []
        assignEPCheck = True
        HBObjWShades = []
        EPSlatOrientList = []
        depthList = []
        shadingHeightList = []
        EPshdAngleList = []
        distToGlassList = []
        EPinteriorOrExterList = []
        
        #Call the objects from the hive.
        HBZoneObjects = hb_hive.callFromHoneybeeHive(_HBObjects)
        
        #Find out what the object is and make sure that we can run it through this component's functions.
        for object in HBZoneObjects:
            if object.objectType == "HBZone":
                isZoneList.append(1)
                zoneNames.append(object.name)
                winNames = []
                winBreps = []
                for srf in object.surfaces:
                    if srf.hasChild:
                        if srf.BC != 'OUTDOORS' and srf.BC != 'Outdoors':
                            assignEPCheck = False
                            print "One surface with a window does not have an outdoor boundary condition.  EnergyPlus shades will not be assigned to this window."
                        else:
                            for childSrf in srf.childSrfs:
                                windowObjects.append(childSrf)
                                winNames.append(childSrf.name)
                                winBreps.append(childSrf.geometry)
                windowNames.append(winNames)
                windowSrfs.append(winBreps)
            elif object.objectType == "HBSurface":
                isZoneList.append(0)
                warning = "Note that, when using this component for individual surfaces, you should make sure that the direction of the surface is facing the outdoors in order to be sure that your shades are oriented correctly."
                print warning
                ghenv.Component.AddRuntimeMessage(w, warning)
                if not hasattr(object, 'type'):
                    # find the type based on the normal angle
                    object.type = object.getTypeByNormalAngle()
                if object.hasChild:
                    if object.BC != 'OUTDOORS' and object.BC != 'Outdoors':
                        assignEPCheck = False
                        warning = "The boundary condition of the input object must be outdoors.  E+ cannot create shades for indoor windows."
                        print warning
                        ghenv.Component.AddRuntimeMessage(w, warning)
                    else:
                        for childSrf in object.childSrfs:
                            windowObjects.append(childSrf)
                            windowNames.append([childSrf.name])
                            windowSrfs.append([childSrf.geometry])
                else:
                    print "One surface with a window is not planar.  EnergyPlus shades will not be assigned to this window."
        
        #Make sure that the connected objects are all HBZones or all HBSrfs but not both.
        if sum(isZoneList) == len(_HBObjects): isZone = True
        elif sum(isZoneList) == 0: isZone = False
        else:
            warning = "This component only supports inputs that are all HBZones or all HBSrfs but not both.  Please grab another component for each of these inputs."
            print warning
            ghenv.Component.AddRuntimeMessage(w, warning)
            isZone = False
        
        #Check the inputs.
        checkData, depths, alignedDataTree, numOfShd, blindsMaterial, schedule = checkTheInputs(zoneNames, windowNames, windowSrfs, isZone)
        
        #Generate the shades.
        if checkData == True:
            windowSrfsInit = []
            for windowList in windowSrfs:
                for window in windowList:
                    windowSrfsInit.append(window)
            
            shadings = []
            for window in windowSrfsInit:
                shadeBreps, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, EPinteriorOrExter, assignEPCheckInit = makeShade(window, depths, numOfShd, _distBetween)
                shadings.append(shadeBreps)
                EPSlatOrientList.append(EPSlatOrient)
                depthList.append(depth)
                shadingHeightList.append(shadingHeight)
                EPshdAngleList.append(EPshdAngle)
                distToGlassList.append(distToGlass)
                EPinteriorOrExterList.append(EPinteriorOrExter)
                if assignEPCheckInit == False: assignEPCheck = False
            
            #Create the EnergyPlus blinds material and control.
            if assignEPCheck == True:
                for count, windowObj in enumerate(windowObjects):
                    windowObj.blindsMaterial = createEPBlindMat(blindsMaterial, EPSlatOrientList[count], depthList[count], shadingHeightList[count], EPshdAngleList[count], distToGlassList[count], windowObj.name)
                    windowObj.shadingControl = createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExterList[count], windowObj.name)
                    windowObj.shadingControlName = 'BlindCntrlFor_' + windowObj.name
                    windowObj.shadingSchName = schedule
                ModifiedHBZones = hb_hive.addToHoneybeeHive(HBZoneObjects, ghenv.Component.InstanceGuid.ToString() + str(uuid.uuid4()))
            else:
                ModifiedHBZones = []
            
            return checkData, windowSrfsInit, shadings, alignedDataTree, ModifiedHBZones
        else:
            return False, [], [], [], []
    else:
        print "You should first let Honeybee and Ladybug fly..."
        return False, [], [], [], []


#Run the main functions.
checkData = False
if _HBObjects != [] and _runIt == True:
    checkData, windowSrfsInit, shadings, alignedDataTree, HBObjWShades = main()

#Unpack the data trees.
if checkData == True:
    windowBreps = DataTree[Object]()
    shadeBreps = DataTree[Object]()
    zoneData1Tree = DataTree[Object]()
    zoneData2Tree = DataTree[Object]()
    zoneData3Tree = DataTree[Object]()
    HBZones = HBObjWShades
    
    for count, brep in enumerate(windowSrfsInit):
        windowBreps.Add(brep, GH_Path(count))
    for count, brepList in enumerate(shadings):
        for brep in brepList:
            shadeBreps.Add(brep, GH_Path(count))
    for treeCount, finalTree in enumerate(alignedDataTree):
        if treeCount == 0:
            for bCount, branch in enumerate(finalTree):
                for twig in branch: zoneData1Tree.Add(twig, GH_Path(bCount))
        elif treeCount == 1:
            for bCount, branch in enumerate(finalTree):
                for twig in branch: zoneData2Tree.Add(twig, GH_Path(bCount))
        elif treeCount == 2:
            for bCount, branch in enumerate(finalTree):
                for twig in branch: zoneData3Tree.Add(twig, GH_Path(bCount))
Returns: readMe!:", "shades.\" ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, \"You must provide a depth for the shades.\") #Check if there", "schedule!=None and not schedule.lower().endswith(\".csv\") and schedule not in HBScheduleList: msg = \"Cannot find", "== True: # Horizontal #Define a bounding box for use in calculating the", "the glazing by cardinal direction and assign different interiorOrExterior_ to different directions. interiorOrExter", "+ str(blindsMaterial[3]) + ', !- Front Side Slat Infrared Hemispherical Emissivity\\n' + \\", "warning) if not hasattr(object, 'type'): # find the type based on object.type =", "all HBSrfs but not both. For now, just grab another component for each", "for bCount, branch in enumerate(finalTree): for twig in branch: zoneData2Tree.Add(twig, GH_Path(bCount)) elif treeCount", "in enumerate(windowList): windowBrepsFinal.append(window) windowName = windowNames[zoneCount][windowCount] windowNamesFinal.append(windowName) for inputDataTreeCount, branch in enumerate(allHeaders): #Test", "tree. Alternatively, they can be plugged into an EnergyPlus simulation with the \"Honeybee_EP", "or window. If there's no header, the data cannot be coordinated with this", "with a window is not planar. EenergyPlus shades will not be assigned to", "in brepList: shadeBreps.Add(brep, GH_Path(count)) for treeCount, finalTree in enumerate(alignedDataTree): if treeCount == 0:", "be input and shades can be joined together with the mergeVectors_ input. _numOfShds:", "boundary condition of the input object must be outdoors. E+ cannot create shades", "create test shade areas for shade benefit evaluation after an energy simulation has", "schedule schedCntrlType = 'OnIfScheduleAllows' schedCntrl = 'Yes' EPBlindControl = 'WindowProperty:ShadingControl,\\n' + \\ '\\t'", "licensed # under a Creative Commons Attribution-ShareAlike 3.0 Unported License. 
\"\"\" Use this", "ghenv.Component.SubCategory = \"09 | Energy | Energy\" #compatibleHBVersion = VER 0.0.55\\nAUG_25_2014 #compatibleLBVersion =", "== []: numOfShd = [1] print \"No value is connected for number of", "else: print \"One surface with a window is not planar. EenergyPlus shades will", "but not both. For now, just grab another component for each of these", "all HBObjects are of the same type. checkSameType = True if sum(isZoneList) ==", "from the Import idf component will not align correctly with the EP Result", "import GH_Path import Rhino as rc import rhinoscriptsyntax as rs import scriptcontext as", "multiple interiorOrExter_ inputs are given, use it to split up the glazing by", "+ ', !- Back Side Slat Beam Solar Reflectance\\n' + \\ '\\t' +", "for depths will assign each value of the list as follows: item 0", "for listCount, header in enumerate(branch): try: winNm = header[2].split(' for ')[-1].split(': ')[0] except:", "follows: item 0 = north depth, item 1 = west depth, item 2", "interiorOrExter = getValueBasedOnOrientation(interiorOrExter_) #If multiple distToGlass_ inputs are given, use it to split", "data like heating load, cooling load or beam gain for a shade benefit", "sc.sticky[\"ladybug_Mesh\"]() lb_visualization = sc.sticky[\"ladybug_ResultVisualization\"]() # find the normal of the surface in the", "plane)[0]) except: print \"One intersection failed.\" if normalVector != rc.Geometry.Vector3d.ZAxis: normalVectorPerp = rc.Geometry.Vector3d(normalVector.X,", "else: tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient = True maxXYPt = rc.Geometry.Point3d.Subtract(maxXYPt, tolVec) minXYPt", "of shades. The component will be run with one shade per window.\" else:", "!- Back Side Slat Beam Visible Reflectance\\n' + \\ '\\t' + ', !-", "!- Blind to Glass Distance {m}\\n' + \\ '\\t' + '0.5, !- Blind", "the surface is not planar and # the normal is changing from point", "all HBZones or all HBSrfs but not both. 
For now, just grab another", "already been run. In this case, the component helps keep the data tree", "defaults on things that are not connected. if checkSameType == True: checkData, windowNames,", "math import os w = gh.GH_RuntimeMessageLevel.Warning tol = sc.doc.ModelAbsoluteTolerance def checkTheInputs(zoneNames, windowNames, windowSrfs,", "shadingHeight # find shading base planes planeOrigins = [] planes = [] pointCurve", "shdAngle_ to different directions. shdAngle = getValueBasedOnOrientation(shdAngle_) #If multiple interiorOrExter_ inputs are given,", "sortedPlanes = planes return sortedPlanes, shadingHeight def makeShade(_glzSrf, depth, numShds, distBtwn): rotationAngle_ =", "the zone. These can be plugged into a shade benefit evaulation as each", "input, which will assign different orientations based on cardinal direction. shdAngle_: A number", "= targetValue return value # If multiple shading depths are given, use it", "enumerate(windowSrfsInit): windowBreps.Add(brep, GH_Path(count)) for count, brepList in enumerate(shadings): for brep in brepList: shadeBreps.Add(brep,", "\"Honeybee\" ghenv.Component.SubCategory = \"09 | Energy | Energy\" #compatibleHBVersion = VER 0.0.55\\nAUG_25_2014 #compatibleLBVersion", "_distBetween == [] and _numOfShds == []: numOfShd = [1] print \"No value", "for ')[-1] if str(winNm) == str(windowName.upper()): alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount]) srfData = True if zoneData ==", "data tree. shadeBreps: Breps representing each shade of the window. 
These can be", "+ 'No, !- Glare Control Is Active\\n' + \\ '\\t' + blindsMaterial[0] +", "{W/m2, W or deg C}\\n' + \\ '\\t' + schedCntrl + ', !-", "the outdoors in order to be sure that your shades are previewing correctly.\"", "'type'): # find the type based on object.type = object.getTypeByNormalAngle() if not hasattr(object,", "curves based on the planes and extrusion vectors if intCrvs !=[]: for c", "blindsSchedule_: An optional schedule to raise and lower the blinds. If no value", "hive. HBZoneObjects = hb_hive.callFromHoneybeeHive(_HBObjects) #Find out what the object is and make sure", "', !- Shading Type\\n' + \\ '\\t' + ', !- Construction with Shading", "its respective zone/surface data.\" if checkData2 == True and checkData3 == True and", "checkData4 = True if blindsSchedule_ == None: schedule = \"ALWAYSON\" print \"No blinds", "', !- Schedule Name\\n' + \\ '\\t' + ', !- Setpoint {W/m2, W", "that we mesh curved surfaces for E+, the program would just freak out", "transVec = normalVectorPerp transVec.Unitize() finalTransVec = rc.Geometry.Vector3d.Multiply(distToGlass, transVec) blindsTransform = rc.Geometry.Transform.Translation(finalTransVec) for shdSrf", "These can be plugged into a shade benefit evaulation as each window is", "[]: numOfShd = [1] print \"No value is connected for number of shades.", "= DataTree[Object]() zoneData1Tree = DataTree[Object]() zoneData2Tree = DataTree[Object]() zoneData3Tree = DataTree[Object]() for count,", "print \"You must provide a depth for the shades.\" ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, \"You must provide", "< 0: tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient = True else: tolVec = rc.Geometry.Vector3d.Multiply(-1,", "glazing by cardinal direction and assign different numbers of shades to different directions.", "if normalVector != rc.Geometry.Vector3d.ZAxis: normalVectorPerp = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) angleFromNorm = 
math.degrees(rc.Geometry.Vector3d.VectorAngle(normalVectorPerp, normalVector))", "Bottom Opening Multiplier\\n' + \\ '\\t' + '0.5, !- Blind Left Side Opening", "False warning = \"Note that E+ does not like shading depths greater than", "_numOfShds #Check the depths. checkData2 = True if _depth == []: checkData2 =", "distBetween == None: numOfShds = 1 if numOfShds == 0 or distBetween ==", "surface # and test the normal direction for more point baseSrfCenPt = rc.Geometry.AreaMassProperties.Compute(_glzSrf).Centroid", "has a Ladybug/Honeybee header on it. This header is necessary for data input", "The component will be run with one shade per window.\" else: numOfShd =", "= getValueBasedOnOrientation(interiorOrExter_) #If multiple distToGlass_ inputs are given, use it to split up", "simulation with the generated shades. zoneData3_: Optional zone data for the HBZones_ that", "+ \"_\" + name + ', !- Shading Device Material Name\\n' + \\", "== False #Generate the shades. if checkData == True: shadings = [] for", "+ \\ '\\t' + blindsMaterial[0] + \"_\" + name + ', !- Name\\n'", "and 360 that represents the degrees off from the y-axis to make North.", "'Vertical' if interiorOrExter == True: EPinteriorOrExter = 'InteriorBlind' else: EPinteriorOrExter = 'ExteriorBlind' #Generate", "'angle2North'): # find the type based on object.getAngle2North() if not hasattr(object, \"BC\"): object.BC", "divisionParams = pointCurve.DivideByLength(shadingHeight, True) divisionPoints = [] for param in divisionParams: divisionPoints.append(pointCurve.PointAt(param)) planePoints", "for childSrf in srf.childSrfs: windowObjects.append(childSrf) winNames.append(childSrf.name) winBreps.append(childSrf.geometry) else: print \"One surface with a", "zoneCount, windowList in enumerate(windowSrfs): if isZone == True: zoneName = zoneNames[zoneCount] for windowCount,", "# By <NAME> # <EMAIL> # Ladybug started by <NAME> is licensed #", "\"True\" to generate Shades on the interior and set to \"False\" to 
generate", "\\ '\\t' + ', !- Blind Bottom Opening Multiplier\\n' + \\ '\\t' +", "assignEPCheckInit == False: assignEPCheck = False #Create the EnergyPlus blinds material and assign", "_HBObjects != [] and sc.sticky.has_key('honeybee_release') == True and sc.sticky.has_key('ladybug_release') == True: #Import the", "in range(zoneData.BranchCount): branchList = zoneData.Branch(i) dataVal = [] for item in branchList: dataVal.append(item)", "shades created this way will automatically be assigned to the zone and the", "shades. ---------------: ... zoneData1_: Optional zone data for the HBZones_ that will be", "', !- Front Side Slat Beam Visible Reflectance\\n' + \\ '\\t' + ',", "of the zones and windows. For this, you would take imported EnergyPlus results", "mm thickness, 221 W/mK conductivity.\" blindsMaterial = ['DEFAULTBLINDSMATERIAL', 0.65, 0, 0.9, 0.00025, 221]", "hieghts glzHeight = minZPt.DistanceTo(maxZPt) # find number of shadings try: numOfShd = int(numOfShds)", "== 0: for bCount, branch in enumerate(finalTree): for twig in branch: zoneData1Tree.Add(twig, GH_Path(bCount))", "0.0.58\\nAUG_20_2014 try: ghenv.Component.AdditionalHelpFromDocStrings = \"1\" except: pass from System import Object from System", "shaded can also be input and shades can be joined together with the", "are all HBZones or all HBSrfs but not both. For now, just grab", "the inputs and make sure that we have everything that we need to", "You can also input lists of horOrVertical_ input, which will assign different orientations", "a vector to be used as a true North direction or a number", "simulated). ---------------: ... windowBreps: Breps representing each window of the zone. 
These can", "use it to split up the glazing by cardinal direction and assign different", "sure that your shades are previewing correctly.\" print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) if not", "= [] for window in windowSrfsInit: shadeBreps, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, EPinteriorOrExter,", "= math.degrees(angle) return finalAngle # Define a function that can split up a", "for count, brep in enumerate(windowSrfsInit): windowBreps.Add(brep, GH_Path(count)) for count, brepList in enumerate(shadings): for", "valueList == None or len(valueList) == 0: value = None if len(valueList) ==", "generate Shades on the interior and set to \"False\" to generate shades on", "Solar Reflectance\\n' + \\ '\\t' + ', !- Slat Diffuse Solar Transmittance\\n' +", "glazing by cardinal direction and assign different shdAngle_ to different directions. shdAngle =", "warning ghenv.Component.AddRuntimeMessage(w, warning) #Check if there is a blinds schedule connected and, if", "true North direction or a number between 0 and 360 that represents the", "numShds, distBtwn): rotationAngle_ = 0 # import the classes lb_preparation = sc.sticky[\"ladybug_Preparation\"]() lb_mesh", "up the glazing by cardinal direction and assign different interiorOrExterior_ to different directions.", "produced and you can account for these shades using a 'Honeybee_EP Context Surfaces'", "else: ModifiedHBZones = [] return checkData, windowSrfsInit, shadings, alignedDataTree, ModifiedHBZones else: return False,", "else: for childSrf in object.childSrfs: windowObjects.append(childSrf) windowNames.append([childSrf.name]) windowSrfs.append([childSrf.geometry]) #Make sure that all HBObjects", "to _numOfShds where the input here is the distance in Rhino units between", "key=lambda a: a.Origin.Z) elif horOrVertical == False: # Vertical # Define a vector", "msg = \"Cannot find the shchedule file: \" + schedule print msg 
ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning,", "Slat Infrared Hemispherical Emissivity\\n' + \\ '\\t' + str(blindsMaterial[3]) + ', !- Back", "os w = gh.GH_RuntimeMessageLevel.Warning tol = sc.doc.ModelAbsoluteTolerance def checkTheInputs(zoneNames, windowNames, windowSrfs, isZone): #Check", "is necessary to match the data to a zone or window. If there's", "in allData: alignedDataTree.append([]) for zoneCount, windowList in enumerate(windowSrfs): if isZone == True: zoneName", "= rc.Geometry.Curve.CreateControlPointCurve([maxXYPt, minXYPt]) divisionParams = pointCurve.DivideByLength(shadingHeight, True) divisionPoints = [] for param in", "in the shade benefit evaluation. - Provided by Honeybee 0.0.55 Args: _HBObjects: The", "checkData == False #Generate the shades. if checkData == True: shadings = []", "not have an outdoor boundary condition. EenergyPlus shades will not be assigned to", "+ ', !- Slat Infrared Hemispherical Transmittance\\n' + \\ '\\t' + str(blindsMaterial[3]) +", "them towards the South by a certain value in degrees. If applied to", "360/len(valueList)) for an in initAngles: angles.append(an-(360/(2*len(valueList)))) angles.append(360) for angleCount in range(len(angles)-1): if angles[angleCount]", "'\\t' + '0.5, !- Blind Right Side Opening Multiplier\\n' + \\ '\\t' +", "using a 'Honeybee_EP Context Surfaces' component.\" print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) if shadingHeight >", "sortedPlanes, shadingHeight def makeShade(_glzSrf, depth, numShds, distBtwn): rotationAngle_ = 0 # import the", "EPDistToGlass, EPinteriorOrExter, assignEPCheckInit def deconstructBlindMaterial(material): matLines = material.split('\\n') name = matLines[1].split(',')[0] reflect =", "None if len(valueList) == 1: value = valueList[0] elif len(valueList) > 1: initAngles", "to different directions. 
distToGlass = getValueBasedOnOrientation(distToGlass_) # generate the planes planes, shadingHeight =", "be produced and you can account for these shades using a 'Honeybee_EP Context", "Infrared Hemispherical Emissivity\\n' + \\ '\\t' + str(blindsMaterial[3]) + ', !- Back Side", "print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) if shadingHeight > 1: assignEPCheckInit = False warning =", "---------------: ... HBZones: The HBZones with the assigned shading (ready to be simulated).", "if numOfShds == None and distBetween == None: numOfShds = 1 if numOfShds", "= rc.Geometry.Vector3d.Multiply(1, tolVec) norOrient = False if tolVec.X < 0 and tolVec.Y <", "shades downward. You can also put in lists of angles to assign different", "zoneName = zoneNames[zoneCount] for windowCount, window in enumerate(windowList): windowBrepsFinal.append(window) windowName = windowNames[zoneCount][windowCount] windowNamesFinal.append(windowName)", "outdoors. E+ cannot create shades for intdoor windows.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) elif", "if _distBetween == [] and _numOfShds == []: numOfShd = [1] print \"No", "Slat Angle {deg}\\n' + \\ '\\t' + str(blindsMaterial[5]) + ', !- Slat Conductivity", "Opening Multiplier\\n' + \\ '\\t' + ', !- Minimum Slat Angle {deg}\\n' +", "= False warning = \"Note that E+ does not like shading depths greater", "= [] for i in range(zoneData.BranchCount): branchList = zoneData.Branch(i) dataVal = [] for", "for the window level. 
srfData = False if zoneData == False: for listCount,", "checkData, windowSrfsInit, shadings, alignedDataTree, ModifiedHBZones else: return False, [], [], [], [] else:", "for item in branchList: dataVal.append(item) dataPyList.append(dataVal) return dataPyList allData = [] allData.append(makePyTree(zoneData1_)) allData.append(makePyTree(zoneData2_))", "str(winNm) == str(windowName.upper()): alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount]) srfData = True if zoneData == False and srfData", "[] for list in branch: if str(list[0]) == \"key:location/dataType/units/frequency/startsAt/endsAt\": checkHeader.append(1) dataHeaders.append(list[:7]) dataNumbers.append(list[7:]) allHeaders.append(dataHeaders)", "targetValue return value # If multiple shading depths are given, use it to", "return value # If multiple shading depths are given, use it to split", "warning) isZone = False #Check the inputs and make sure that we have", "be filled up zoneNames = [] windowNames = [] windowSrfs = [] windowObjects", "== True: normalVectorPerp.Reverse() else: interiorOrExter = False #If a shdAngle is provided, use", "schedule = checkTheInputs(zoneNames, windowNames, windowSrfs, isZone) else: checkData == False #Generate the shades.", "= True #Test to see if the data is for the window level.", "to a zone or window. If there's no header, the data cannot be", "windowNames[zoneCount][windowCount] windowNamesFinal.append(windowName) for inputDataTreeCount, branch in enumerate(allHeaders): #Test to see if the data", "angleCount in range(len(angles)-1): if angles[angleCount] <= (getAngle2North(normalVector))%360 <= angles[angleCount +1]: targetValue = valueList[angleCount%len(valueList)]", "alignedDataTree, numOfShd, blindsMaterial, schedule = checkTheInputs(zoneNames, windowNames, windowSrfs, isZone) else: checkData == False", "interior and set to \"False\" to generate shades on the exterior. 
The default", "clr import AddReference AddReference('Grasshopper') import Grasshopper.Kernel as gh from Grasshopper import DataTree from", "is so large that it will cause EnergyPlus to crash.\" print warning ghenv.Component.AddRuntimeMessage(w,", "to split up the glazing by cardinal direction and assign different interiorOrExterior_ to", "!- Shading Type\\n' + \\ '\\t' + ', !- Construction with Shading Name\\n'", "blindsMaterial = deconstructBlindMaterial(blindsMaterial_) except: checkData5 = False warning = 'Blinds material is not", "\"You should first let both Honeybee and Ladybug fly...\" ghenv.Component.AddRuntimeMessage(w, \"You should first", "direction of the surface is facing the outdoors in order to be sure", "by cardinal direction and assign different depths to different directions. depth = getValueBasedOnOrientation(depth)", "component's functions. for object in HBZoneObjects: if object.objectType == \"HBZone\": isZoneList.append(1) zoneNames.append(object.name) winBreps", "True if numOfShds == None and distBetween == None: numOfShds = 1 if", "srf.BC == 'Outdoors': if srf.isPlanar == True: for childSrf in srf.childSrfs: windowObjects.append(childSrf) winNames.append(childSrf.name)", "for an in initAngles: angles.append(an-(360/(2*len(valueList)))) angles.append(360) for angleCount in range(len(angles)-1): if angles[angleCount] <=", "= rc.Geometry.Surface.CreateExtrusion(c, float(depth) * normalVectorPerp).ToBrep() shadingSurfaces.append(shdSrf) except: pass #If the user has specified", "\"HBSurface\": isZoneList.append(0) warning = \"Note that, when using this component for individual surfaces,", "that E+ does not like shading depths greater than 1. HBObjWShades will not", "interior, flip the normal vector. if interiorOrExter == True: normalVectorPerp.Reverse() else: interiorOrExter =", "izoneData3_, which align with the branches for each window above. 
\"\"\" ghenv.Component.Name =", "surface in the center # note2developer: there might be cases that the surface", "will be used with 0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm", "= matLines[1].split(',')[0] reflect = float(matLines[2].split(',')[0]) transmit = float(matLines[3].split(',')[0]) emiss = float(matLines[4].split(',')[0]) thickness =", "option to _numOfShds where the input here is the distance in Rhino units", "exterior shades. distToGlass_: A number representing the offset distance from the glass to", "True) maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z) #Adjust the points to ensure the creation", "get the angle to North of any surface. def getAngle2North(normalVector): if north_ !=", "Honeybee and Ladybug fly...\" ghenv.Component.AddRuntimeMessage(w, \"You should first let both Honeybee and Ladybug", "the component will automatically assign a material of 0.65 solar reflectance, 0 transmittance,", "the interior, flip the normal vector. if interiorOrExter == True: normalVectorPerp.Reverse() else: interiorOrExter", "the blinds are always drawn\" else: schedule= blindsSchedule_.upper() if schedule!=None and not schedule.lower().endswith(\".csv\")", "the EnergyPlus blinds material and assign it to the windows with shades. if", "ghenv.Component.Name = \"Honeybee_EnergyPlus Window Shade Generator\" ghenv.Component.NickName = 'EPWindowShades' ghenv.Component.Message = 'VER 0.0.55\\nSEP_11_2014'", "to split up the glazing by cardinal direction and assign different distances of", "= [] for srf in object.surfaces: if srf.hasChild: if srf.BC == 'OUTDOORS' or", "= rc.Geometry.Transform.Translation(finalTransVec) for shdSrf in shadingSurfaces: shdSrf.Transform(blindsTransform) else: distToGlass = 0 #Get the", "try: numOfShd = int(numOfShds) shadingHeight = glzHeight/numOfShd shadingRemainder = shadingHeight except: shadingHeight =", "surface must not be curved. 
With the way that we mesh curved surfaces", "EPSlatOrientList = [] depthList = [] shadingHeightList = [] EPshdAngleList = [] distToGlassList", "#Unpack the data trees. if checkData == True: windowBreps = DataTree[Object]() shadeBreps =", "print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) if not hasattr(object, 'type'): # find the type based", "direction is set to the Y-axis (0 degrees). _depth: A number representing the", "0 #Make EP versions of some of the outputs. EPshdAngleInint = angleFromNorm+shdAngle if", "crash. assignEPCheckInit = True if depth > 1: assignEPCheckInit = False warning =", "= [] if valueList == None or len(valueList) == 0: value = None", "be assigned to this window.\" else: print \"One surface with a window does", "+ name + ', !- Name\\n' + \\ '\\t' + EPSlatOrient + ',", "this component.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) #Align all of the lists to each", "if schedule!=None and not schedule.lower().endswith(\".csv\") and schedule not in HBScheduleList: msg = \"Cannot", "Grasshopper.Kernel as gh from Grasshopper import DataTree from Grasshopper.Kernel.Data import GH_Path import Rhino", "+ \" in Honeybee schedule library.\" print msg ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg) checkData4 = False", "# find the type based on object.type = object.getTypeByNormalAngle() if not hasattr(object, 'angle2North'):", "else: shdAngle = 0 #Make EP versions of some of the outputs. EPshdAngleInint", "up the glazing by cardinal direction and assign different shdAngle_ to different directions.", "enumerate(allHeaders): #Test to see if the data is for the zone level. 
zoneData", "else: EPSlatOrient = 'Vertical' if interiorOrExter == True: EPinteriorOrExter = 'InteriorBlind' else: EPinteriorOrExter", "distToGlass = getValueBasedOnOrientation(distToGlass_) # generate the planes planes, shadingHeight = analyzeGlz(_glzSrf, distBtwn, numShds,", "to split up the glazing by cardinal direction and assign different shdAngle_ to", "find the type based on object.type = object.getTypeByNormalAngle() if not hasattr(object, 'angle2North'): #", "except: pass from System import Object from System import Drawing from clr import", "simulation with the \"Honeybee_EP Context Surfaces\" component. ---------------: ... zoneData1Tree: Data trees of", "components that generate or alter zones. Note that these should ideally be the", "'\\t' + ', !- Minimum Slat Angle {deg}\\n' + \\ '\\t' + '180;", "and assign different horizontal or vertical to different directions. horOrVertical = getValueBasedOnOrientation(horOrVertical_) #", "Ladybug fly...\" ghenv.Component.AddRuntimeMessage(w, \"You should first let both Honeybee and Ladybug fly...\") return", "Define a function that can split up a list of values and assign", "rc.Geometry.Vector3d.ZAxis: normalVectorPerp = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0) angleFromNorm = math.degrees(rc.Geometry.Vector3d.VectorAngle(normalVectorPerp, normalVector)) if normalVector.Z <", "also put in lists of angles to assign different shade angles to different", "the points to ensure the creation of the correct number of shades starting", "rs.frange(minZPt.Z + shadingRemainder, maxZPt.Z + 0.5*sc.doc.ModelAbsoluteTolerance, shadingHeight) try: for Z in zHeights: planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(X,", "reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK conductivity. blindsSchedule_: An", "windowNames = [] windowSrfs = [] windowObjects = [] isZoneList = [] assignEPCheck", "into an EnergyPlus simulation with the \"Honeybee_EP Context Surfaces\" component. 
"""
Use this component to generate shades for Honeybee zone windows. The component has two main uses:
_
The first is that it can be used to assign blind objects to HBZones prior to simulation.  These blinds can be dynamically controlled via a schedule.  Note that shades created this way will automatically be assigned to the zone and the windowBreps and shadeBreps outputs are just for visualization.
_
The second way to use the component is to create test shade areas for shade benefit evaluation after an energy simulation has already been run.  In this case, the component helps keep the data tree paths of heating, cooling and beam gain synced with that of the zones and windows.  To do this, you would take imported EnergyPlus results and hook them up to the "zoneData" inputs and use the aligned "zoneDataTree" outputs in the shade benefit evaluation.
-
Provided by Honeybee 0.0.55

    Args:
        _HBObjects: The HBZones out of any of the HB components that generate or alter zones.  Note that these should ideally be the zones that are fed into the Run Energy Simulation component.  Zones read back into Grasshopper from the Import idf component will not align correctly with the EP Result data.
        blindsMaterial_: An optional blind material from the blind material component.  If no material is connected here, the component will automatically assign a material of 0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK conductivity.
        blindsSchedule_: An optional schedule to raise and lower the blinds.  If no value is connected here, the blinds will assume the "ALWAYS ON" schedule.
        north_: Input a vector to be used as a true North direction or a number between 0 and 360 that represents the degrees off from the y-axis to make North.  The default North direction is set to the Y-axis (0 degrees).
        _depth: A number representing the depth of the shade to be generated on each window.  You can also input lists of depths, which will assign different depths based on cardinal direction.  For example, inputting 4 values for depths will assign each value of the list as follows: item 0 = north depth, item 1 = west depth, item 2 = south depth, item 3 = east depth.  Lists of vectors to be shaded can also be input and shades can be joined together with the mergeVectors_ input.
        _numOfShds: The number of shades to be generated for each glazed surface.
        _distBetween: An alternate option to _numOfShds: the distance in Rhino units between each shade.
        horOrVertical_: Set to "True" to generate horizontal shades or "False" to generate vertical shades.  You can also input lists of horOrVertical_ input, which will assign different orientations based on cardinal direction.
        shdAngle_: A number between -90 and 90 that represents an angle in degrees to rotate the shades.  The default is set to "0" for no rotation.  If you have vertical shades, use this to rotate them towards the South by a certain value in degrees.  If applied to windows facing East or West, tilting the shades like this will let in more winter sun than summer sun.  If you have horizontal shades, use this input to angle them downward.  You can also put in lists of angles to assign different shade angles to different cardinal directions.
        interiorOrExter_: Set to "True" to generate Shades on the interior of the window and "False" to generate shades on the exterior.
        distToGlass_: A number representing the offset distance from the glass to make the shades.
        _runIt: Set boolean to "True" to run the component and generate shades.
        ---------------: ...
        zoneData1_: Optional zone data for the HBZones_ that will be aligned with the generated windows.  Use this to align data like heating load, cooling load or beam gain for a shade benefit simulation with the generated shades.
        zoneData2_: Optional zone data for the HBZones_ that will be aligned with the generated windows for a shade benefit simulation with the generated shades.
        zoneData3_: Optional zone data for the HBZones_ that will be aligned with the generated windows for a shade benefit simulation with the generated shades.
    Returns:
        readMe!: ...
        ---------------: ...
        HBZones: The HBZones with the assigned shading (ready to be simulated).
        ---------------: ...
        windowBreps: Breps representing each window of the zone.  These can be plugged into a shade benefit evaluation as each window is its own branch of a grasshopper data tree.
        shadeBreps: Breps representing each shade.  These can be plugged into a shade benefit evaluation as each shade is its own branch of a grasshopper data tree.  Alternatively, they can be plugged into the "Honeybee_EP Context Surfaces" component.
        ---------------: ...
        zoneData1Tree: Data trees of the zoneData1_, which align with the branches for each window above.
        zoneData2Tree: Data trees of the zoneData2_, which align with the branches for each window above.
        zoneData3Tree: Data trees of the zoneData3_, which align with the branches for each window above.
"""

ghenv.Component.Name = "Honeybee_EnergyPlus Window Shade Generator"
ghenv.Component.NickName = 'EPWindowShades'
ghenv.Component.Message = 'VER 0.0.55\nSEP_11_2014'
ghenv.Component.Category = "Honeybee"
ghenv.Component.SubCategory = "09 | Energy | Energy"
#compatibleLBVersion = VER 0.0.58\nAUG_20_2014
try: ghenv.Component.AdditionalHelpFromDocStrings = "1"
except: pass

from System import Object
import Grasshopper.Kernel as gh
from Grasshopper import DataTree
from Grasshopper.Kernel.Data import GH_Path
import Rhino as rc
import rhinoscriptsyntax as rs
import scriptcontext as sc
import uuid
import math
import os

w = gh.GH_RuntimeMessageLevel.Warning
tol = sc.doc.ModelAbsoluteTolerance


def checkTheInputs(zoneNames, windowNames, windowSrfs, isZone):
    #Check if the user has hooked up a distBetween or numOfShds.
    if _distBetween == [] and _numOfShds == []:
        numOfShd = [1]
        # ... let the user know that one shade per window will be assumed.
    else:
        numOfShd = _numOfShds
    
    #Check if the user has connected a depth.
    checkData2 = True
    if _depth == []:
        checkData2 = False
        print "You must provide a depth for the shades."
        ghenv.Component.AddRuntimeMessage(w, "You must provide a depth for the shades.")
    
    #Check if there is a blinds material connected and, if not, set a default.
    checkData5 = True
    if blindsMaterial_ == None:
        print "No blinds material has been connected. A material will be used with 0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness and 221 W/mK conductivity."
        # default material: [name, reflectance, transmittance, emittance, thickness (m), conductivity (W/mK)]
        blindsMaterial = ["DEFAULTBLINDSMATERIAL", 0.65, 0, 0.9, 0.00025, 221]
    else:
        try:
            blindsMaterial = deconstructBlindMaterial(blindsMaterial_)
        except:
            checkData5 = False
            warning = 'Blinds material is not a valid blinds material from the "Honeybee_EnergyPlus Blinds Material" component.'
            print warning
            ghenv.Component.AddRuntimeMessage(w, warning)
    
    #Check if there is a schedule connected and, if not, set a default.
    checkData4 = True
    if blindsSchedule_ == None:
        schedule = "ALWAYSON"
        print "No blinds schedule has been connected. It will be assumed that the blinds are always drawn"
    else:
        schedule = blindsSchedule_.upper()
        if schedule != None and schedule.lower().endswith(".csv"):
            # check if the schedule file existed
            if not os.path.isfile(schedule):
                msg = "Cannot find the schedule file: " + schedule
                print msg
                ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg)
                checkData4 = False
        elif schedule != None and not schedule.lower().endswith(".csv"):
            # ... check that the schedule exists in the Honeybee schedule library;
            # if it does not: msg = "Cannot find " + schedule + " in Honeybee schedule library."
            # print msg, add a warning runtime message, and set checkData4 = False.
            pass
    
    #Create Python lists from the input data trees.
    def makePyTree(zoneData):
        dataPyList = []
        for i in range(zoneData.BranchCount):
            branchList = zoneData.Branch(i)
            dataVal = []
            for item in branchList:
                dataVal.append(item)
            dataPyList.append(dataVal)
        return dataPyList
    
    allData = []
    allData.append(makePyTree(zoneData1_))
    allData.append(makePyTree(zoneData2_))
    allData.append(makePyTree(zoneData3_))
    
    #Test to see if all of the connected zoneData has a Ladybug/Honeybee header on it.
    #This header is used to match the data to a zone or window.  If there's no header, the data cannot be coordinated with this component.
    checkData3 = True
    checkBranches = []
    allHeaders = []
    allNumbers = []
    for branch in allData:
        checkHeader = []
        dataHeaders = []
        dataNumbers = []
        # ... separate the header from the numbers on each branch and flag any branch without a header.
        allHeaders.append(dataHeaders)
        allNumbers.append(dataNumbers)
    
    #Align all of the lists to each window.
    windowNamesFinal = []
    windowBrepsFinal = []
    alignedDataTree = []
    for item in allData: alignedDataTree.append([])
    
    for zoneCount, windowList in enumerate(windowSrfs):
        for windowCount, window in enumerate(windowList):
            windowBrepsFinal.append(window)
            windowName = windowNames[zoneCount][windowCount]
            windowNamesFinal.append(windowName)
            
            for inputDataTreeCount, branch in enumerate(allHeaders):
                #Test to see if the data is for the zone level.
                zoneData = False
                if isZone == True:
                    for listCount, header in enumerate(branch):
                        if header[2].split(' for ')[-1] == zoneNames[zoneCount].upper():
                            alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount])
                            zoneData = True
                
                #Test to see if the data is for the window level.
                srfData = False
                if zoneData == False:
                    for listCount, header in enumerate(branch):
                        try: winNm = header[2].split(' for ')[-1].split(': ')[0]
                        except: winNm = header[2].split(' for ')[-1]
                        if winNm == windowName.upper():
                            alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount])
                            srfData = True
                
                if zoneData == False and srfData == False and alignedDataTree != [[], [], []]:
                    print "A window was not matched with its respective zone/surface data."
    
    if checkData2 == True and checkData3 == True and checkData4 == True and checkData5 == True:
        checkData = True
    else:
        checkData = False
    
    return checkData, windowNamesFinal, windowBrepsFinal, _depth, alignedDataTree, numOfShd, blindsMaterial, schedule


def analyzeGlz(glzSrf, distBetween, numOfShds, horOrVertical, lb_visualization, normalVector):
    # get the bounding box
    bbox = glzSrf.GetBoundingBox(True)
    if horOrVertical == None:
        horOrVertical = True
    if numOfShds == None and distBetween == None:
        numOfShds = 1
    
    if horOrVertical == True:
        # Horizontal
        #Define a bounding box for use in calculating the number of shades to generate
        minZPt = bbox.Corner(False, True, True)
        minZPt = rc.Geometry.Point3d(minZPt.X, minZPt.Y, minZPt.Z)
        maxZPt = bbox.Corner(False, True, False)
        maxZPt = rc.Geometry.Point3d(maxZPt.X, maxZPt.Y, maxZPt.Z - sc.doc.ModelAbsoluteTolerance)
        centerPt = bbox.Center
        #glazing heights
        glzHeight = minZPt.DistanceTo(maxZPt)
        
        # find number of shadings
        try:
            numOfShd = int(numOfShds)
            shadingHeight = glzHeight/numOfShd
            shadingRemainder = shadingHeight
        except:
            shadingHeight = distBetween
            shadingRemainder = (((glzHeight/distBetween) - math.floor(glzHeight/distBetween))*distBetween)
            if shadingRemainder == 0:
                shadingRemainder = shadingHeight
        
        # find shading base planes
        planeOrigins = []
        planes = []
        X, Y, z = minZPt.X, minZPt.Y, minZPt.Z
        try:
            zHeights = rs.frange(minZPt.Z + shadingRemainder, maxZPt.Z, shadingHeight)
            for Z in zHeights:
                planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(X, Y, Z), rc.Geometry.Vector3d.ZAxis))
        except:
            # single shading
            planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(maxZPt), rc.Geometry.Vector3d.ZAxis))
        # sort the planes
        sortedPlanes = sorted(planes, key=lambda a: a.Origin.Z)
    
    elif horOrVertical == False:
        # Vertical
        # Define a vector to be used to generate the planes
        planeVec = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)
        planeVec.Rotate(1.570796, rc.Geometry.Vector3d.ZAxis)
        
        #Define a bounding box for use in calculating the number of shades to generate
        minXYPt = bbox.Corner(True, True, True)
        minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z)
        maxXYPt = bbox.Corner(True, False, True)
        maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)
        centerPt = bbox.Center
        
        #Test to be sure that the values are parallel to the correct vector.
        testVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z))
        if testVec.IsParallelTo(planeVec) == 0:
            minXYPt = bbox.Corner(False, True, True)
            minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z)
            maxXYPt = bbox.Corner(True, True, True)
            maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)
        
        #Adjust the points to ensure the creation of the correct number of shades starting from the northernmost side of the window.
        tolVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z))
        tolVec.Unitize()
        tolVec = rc.Geometry.Vector3d.Multiply(sc.doc.ModelAbsoluteTolerance*2, tolVec)
        
        if tolVec.X > 0 and tolVec.Y > 0:
            tolVec = rc.Geometry.Vector3d.Multiply(1, tolVec)
            norOrient = False
        if tolVec.X < 0 and tolVec.Y < 0:
            tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec)
            norOrient = True
        if tolVec.X < 0 and tolVec.Y > 0:
            tolVec = rc.Geometry.Vector3d.Multiply(1, tolVec)
            norOrient = False
        if tolVec.X > 0 and tolVec.Y < 0:
            tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec)
            norOrient = True
        
        maxXYPt = rc.Geometry.Point3d.Subtract(maxXYPt, tolVec)
        minXYPt = rc.Geometry.Point3d.Subtract(minXYPt, tolVec)
        
        #glazing width
        glzHeight = minXYPt.DistanceTo(maxXYPt)
        
        # find number of shadings
        try:
            numOfShd = int(numOfShds)
            shadingHeight = glzHeight/numOfShd
        except:
            shadingHeight = distBetween
        
        # find shading base planes
        pointCurve = rc.Geometry.Curve.CreateControlPointCurve([maxXYPt, minXYPt])
        divisionParams = pointCurve.DivideByLength(shadingHeight, True)
        divisionPoints = []
        for param in divisionParams:
            divisionPoints.append(pointCurve.PointAt(param))
        
        planes = []
        try:
            for point in divisionPoints:
                planes.append(rc.Geometry.Plane(point, planeVec))
        except:
            # single shading
            planes.append(rc.Geometry.Plane(rc.Geometry.Point3d(minXYPt), planeVec))
        sortedPlanes = planes
    
    return sortedPlanes, shadingHeight


def makeShade(_glzSrf, depth, numShds, distBtwn):
    # find the normal of the surface in the center
    # note2developer: there might be cases that the surface is not planar and
    # the normal is changing from point to point, then I should sample the test surface
    # and test the normal direction for more point
    baseSrfCenPt = rc.Geometry.AreaMassProperties.Compute(_glzSrf).Centroid
    # sometimes the center point is not located on the surface
    baseSrfCenPt = _glzSrf.ClosestPoint(baseSrfCenPt)
    
    bool, centerPtU, centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt)
    if bool:
        normalVector = _glzSrf.Faces[0].NormalAt(centerPtU, centerPtV)
    else:
        print "Couldn't find the normal of the shading surface." + \
              "\nRebuild the surface and try again!"
        return -1
    
    shadingSurfaces =[]
    
    #Define a function that can get the angle to North of any surface.
    def getAngle2North(normalVector):
        if north_ != None:
            northVector = north_
        else:northVector = rc.Geometry.Vector3d.YAxis
        angle = rc.Geometry.Vector3d.VectorAngle(northVector, normalVector, rc.Geometry.Plane.WorldXY)
        finalAngle = math.degrees(angle)
        return finalAngle
    
    # Define a function that can split up a list of values and assign it to different cardinal directions.
    def getValueBasedOnOrientation(valueList):
        angles = []
        if valueList == None or len(valueList) == 0: value = None
        if len(valueList) == 1:
            value = valueList[0]
        elif len(valueList) > 1:
            initAngles = rs.frange(0, 360, 360/len(valueList))
            for an in initAngles: angles.append(an-(360/(2*len(valueList))))
            angles.append(360)
            for angleCount in range(len(angles)-1):
                if angles[angleCount] <= (getAngle2North(normalVector))%360 <= angles[angleCount +1]:
                    targetValue = valueList[angleCount%len(valueList)]
            value = targetValue
        return value
    
    # If multiple shading depths are given, use it to split up the glazing by cardinal direction and assign different depths to different directions.
    depth = getValueBasedOnOrientation(depth)
    # If multiple number of shade inputs are given, use it to split up the glazing by cardinal direction and assign different numbers of shades to different directions.
    numShds = getValueBasedOnOrientation(numShds)
    # If multiple distances between shade inputs are given, use it to split up the glazing by cardinal direction and assign different distances of shades to different directions.
    distBtwn = getValueBasedOnOrientation(distBtwn)
    # If multiple horizontal or vertical inputs are given, use it to split up the glazing by cardinal direction and assign different horizontal or vertical to different directions.
    horOrVertical = getValueBasedOnOrientation(horOrVertical_)
    # If multiple shdAngle_ inputs are given, use it to split up the glazing by cardinal direction and assign different angles to different directions.
    shdAngle = getValueBasedOnOrientation(shdAngle_)
    #If multiple interiorOrExter_ inputs are given, use it to split up the glazing by cardinal direction and assign different interior or exterior blinds to different directions.
    interiorOrExter = getValueBasedOnOrientation(interiorOrExter_)
    #If multiple distToGlass_ inputs are given, use it to split up the glazing by cardinal direction and assign different distances to the glass to different directions.
    distToGlass = getValueBasedOnOrientation(distToGlass_)
    
    # generate the planes
    planes, shadingHeight = analyzeGlz(_glzSrf, distBtwn, numShds, horOrVertical, lb_visualization, normalVector)
    
    # find the intersection crvs as the base for shadings
    intCrvs =[]
    for plane in planes:
        try: intCrvs.append(rc.Geometry.Brep.CreateContourCurves(_glzSrf, plane)[0])
        except: print "One intersection failed."
    
    if normalVector != rc.Geometry.Vector3d.ZAxis:
        normalVectorPerp = rc.Geometry.Vector3d(normalVector.X, normalVector.Y, 0)
        angleFromNorm = math.degrees(rc.Geometry.Vector3d.VectorAngle(normalVectorPerp, normalVector))
        if normalVector.Z < 0: angleFromNorm = angleFromNorm*-1
    else:
        normalVectorPerp = normalVector
        angleFromNorm = 0
    
    #If a shdAngle_ is provided, use it to rotate the shade vector.
    if shdAngle != None:
        if horOrVertical == True or horOrVertical == None:
            horOrVertical = True
            planeVec = rc.Geometry.Vector3d.CrossProduct(rc.Geometry.Vector3d.ZAxis, normalVectorPerp)
            normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec)
        elif horOrVertical == False:
            planeVec = rc.Geometry.Vector3d.ZAxis
            if getAngle2North(normalVectorPerp) < 180:
                normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec)
            else:
                normalVectorPerp.Rotate((shdAngle*-0.01745329), planeVec)
    else:
        shdAngle = 0
    
    #Generate the shade curves based on the planes and extrusion vectors
    if intCrvs !=[]:
        for c in intCrvs:
            try:
                shdSrf = rc.Geometry.Surface.CreateExtrusion(c, float(depth) * rc.Geometry.Vector3d(normalVectorPerp)).ToBrep()
                shadingSurfaces.append(shdSrf)
            except:
                pass
    
    #If the user has specified a distance to move the shades, move them along the normal vector.
    if distToGlass != None:
        transVec = normalVectorPerp
        transVec.Unitize()
        finalTransVec = rc.Geometry.Vector3d.Multiply(distToGlass, transVec)
        blindsTransform = rc.Geometry.Transform.Translation(finalTransVec)
        for shdSrf in shadingSurfaces:
            shdSrf.Transform(blindsTransform)
    else:
        distToGlass = 0
    
    #Get the EnergyPlus versions of the shade parameters for the outputs.
    EPshdAngleInint = angleFromNorm+shdAngle
    if EPshdAngleInint >= 0: EPshdAngle = 90 - EPshdAngleInint
    else: EPshdAngle = 90 + abs(EPshdAngleInint)
    
    EPDistToGlass = distToGlass
    if EPDistToGlass < 0.01: EPDistToGlass = 0.01
    elif EPDistToGlass > 1:
        warning = "The input distToGlass_ value is so large that it will cause EnergyPlus to crash."
        print warning
        ghenv.Component.AddRuntimeMessage(w, warning)
    
    if horOrVertical == True:
        EPSlatOrient = 'Horizontal'
    else:
        EPSlatOrient = 'Vertical'
    if interiorOrExter == True:
        EPinteriorOrExter = 'InteriorBlind'
    else:
        EPinteriorOrExter = 'ExteriorBlind'
    
    #EnergyPlus does not support slat separations greater than 1.
    assignEPCheckInit = True
    if shadingHeight > 1:
        assignEPCheckInit = False
        warning = "Note that E+ does not like distances between shades that are greater than 1. HBObjWShades will not be generated. shadeBreps will still be produced and you can account for these shades using a 'Honeybee_EP Context Surfaces' component."
        print warning
        ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning)
    
    return shadingSurfaces, EPSlatOrient, depth, shadingHeight, EPshdAngle, EPDistToGlass, EPinteriorOrExter, assignEPCheckInit


def deconstructBlindMaterial(material):
    matLines = material.split('\n')
    name = matLines[1].split(',')[0]
    reflect = float(matLines[2].split(',')[0])
    transmit = float(matLines[3].split(',')[0])
    emiss = float(matLines[4].split(',')[0])
    thickness = float(matLines[5].split(',')[0])
    conduct = float(matLines[6].split(';')[0])
    
    return [name, reflect, transmit, emiss, thickness, conduct]


def createEPBlindMat(blindsMaterial, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, name):
    EPBlindMat = "WindowMaterial:Blind,\n" + \
        '\t' + blindsMaterial[0] + "_" + name + ', !- Name\n' + \
        '\t' + EPSlatOrient + ', !- Slat Orientation\n' + \
        '\t' + str(depth) + ', !- Slat Width {m}\n' + \
        '\t' + str(shadingHeight) +', !- Slat Separation {m}\n' + \
        '\t' + str(blindsMaterial[4]) + ', !- Slat Thickness {m}\n' + \
        '\t' + str(EPshdAngle) + ', !- Slat Angle {deg}\n' + \
        '\t' + str(blindsMaterial[5]) + ', !- Slat Conductivity {W/m-K}\n' + \
        '\t' + str(blindsMaterial[2]) + ', !- Slat Beam Solar Transmittance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Front Side Slat Beam Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Beam Solar Reflectance\n' + \
        '\t' + ', !- Slat Diffuse Solar Transmittance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Front Side Slat Diffuse Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Diffuse Solar Reflectance\n' + \
        '\t' + str(blindsMaterial[2]) + ', !- Slat Beam Visible Transmittance\n' + \
        '\t' + ', !- Front Side Slat Beam Visible Reflectance\n' + \
        '\t' + ', !- Back Side Slat Beam Visible Reflectance\n' + \
        '\t' + ', !- Slat Diffuse Visible Transmittance\n' + \
        '\t' + ', !- Front Side Slat Diffuse Visible Reflectance\n' + \
        '\t' + ', !- Back Side Slat Diffuse Visible Reflectance\n' + \
        '\t' + ', !- Slat Infrared Hemispherical Transmittance\n' + \
        '\t' + str(blindsMaterial[3]) + ', !- Front Side Slat Infrared Hemispherical Emissivity\n' + \
        '\t' + str(blindsMaterial[3]) + ', !- Back Side Slat Infrared Hemispherical Emissivity\n' + \
        '\t' + str(distToGlass) + ', !- Blind to Glass Distance {m}\n' + \
        '\t' + '0.5, !- Blind Top Opening Multiplier\n' + \
        '\t' + ', !- Blind Bottom Opening Multiplier\n' + \
        '\t' + '0.5, !- Blind Left Side Opening Multiplier\n' + \
        '\t' + '0.5, !- Blind Right Side Opening Multiplier\n' + \
        '\t' + ', !- Minimum Slat Angle {deg}\n' + \
        '\t' + '180; !- Maximum Slat Angle {deg}\n'
    
    return EPBlindMat


def createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExter, name):
    if schedule == 'ALWAYSON':
        schedCntrlType = 'ALWAYSON'
        schedCntrl = 'No'
        schedName = ''
    else:
        schedName = schedule
        schedCntrlType = 'OnIfScheduleAllows'
        schedCntrl = 'Yes'
    
    EPBlindControl = 'WindowProperty:ShadingControl,\n' + \
        '\t' + 'BlindCntrlFor_' + name + ', !- Name\n' + \
        '\t' + EPinteriorOrExter + ', !- Shading Type\n' + \
        '\t' + ', !- Construction with Shading Name\n' + \
        '\t' + schedCntrlType + ', !- Shading Control Type\n' + \
        '\t' + schedName + ', !- Schedule Name\n' + \
        '\t' + ', !- Setpoint {W/m2, W or deg C}\n' + \
        '\t' + schedCntrl + ', !- Shading Control Is Scheduled\n' + \
        '\t' + 'No, !- Glare Control Is Active\n' + \
        '\t' + blindsMaterial[0] + "_" + name + ', !- Shading Device Material Name\n' + \
        '\t' + 'FixedSlatAngle, !- Type of Slat Angle Control for Blinds\n' + \
        '\t' + '; !- Slat Angle Schedule Name\n'
    
    return EPBlindControl


def main():
    if _HBObjects != [] and sc.sticky.has_key('honeybee_release') == True and sc.sticky.has_key('ladybug_release') == True:
        #Import the classes
        hb_EPZone = sc.sticky["honeybee_EPZone"]
        hb_EPSrf = sc.sticky["honeybee_EPSurface"]
        hb_EPFenSurface = sc.sticky["honeybee_EPFenSurface"]
        hb_hive = sc.sticky["honeybee_Hive"]()
        # import the classes
        lb_preparation = sc.sticky["ladybug_Preparation"]()
        lb_mesh = sc.sticky["ladybug_Mesh"]()
        lb_visualization = sc.sticky["ladybug_ResultVisualization"]()
        
        #Make the lists that will be filled.
        zoneNames = []
        windowNames = []
        windowSrfs = []
        windowObjects = []
        isZoneList = []
        assignEPCheck = True
        HBObjWShades = []
        EPSlatOrientList = []
        depthList = []
        shadingHeightList = []
        EPshdAngleList = []
        distToGlassList = []
        EPinteriorOrExterList = []
        
        #Call the objects from the hive.
        HBZoneObjects = hb_hive.callFromHoneybeeHive(_HBObjects)
        
        #Check what type of object the object is and make sure that we can run it through this component's functions.
        for object in HBZoneObjects:
            if object.objectType == "HBZone":
                isZoneList.append(1)
                zoneNames.append(object.name)
                winBreps = []
                winNames = []
                for srf in object.surfaces:
                    if srf.hasChild:
                        if srf.isPlanar == True:
                            if srf.BC.upper() == 'OUTDOORS':
                                for childSrf in srf.childSrfs:
                                    windowObjects.append(childSrf)
                                    winNames.append(childSrf.name)
                                    winBreps.append(childSrf.geometry)
                            else: print "One surface with a window does not have an outdoor boundary condition. EnergyPlus shades will not be assigned to this window."
                        else: print "One surface with a window is not planar. EnergyPlus shades will not be assigned to this window."
                windowNames.append(winNames)
                windowSrfs.append(winBreps)
            elif object.objectType == "HBSurface":
                isZoneList.append(0)
                warning = "Note that, when using this component with individual HBSurfaces, you must make sure that the direction of the surface is facing the outdoors in order to be sure that your shades are previewing correctly."
                print warning
                ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning)
                
                if not hasattr(object, 'type'):
                    # find the type based on the normal angle
                    object.type = object.getTypeByNormalAngle()
                if not hasattr(object, 'angle2North'):
                    # find the angle to North
                    object.getAngle2North()
                if not hasattr(object, "BC"):
                    object.BC = 'OUTDOORS'
                
                if object.BC.upper() != 'OUTDOORS':
                    assignEPCheck = False
                    warning = "The boundary condition of the input object must be outdoors. E+ cannot create shades for indoor windows."
                    print warning
                    ghenv.Component.AddRuntimeMessage(w, warning)
                else:
                    for childSrf in object.childSrfs:
                        windowObjects.append(childSrf)
                        windowNames.append([childSrf.name])
                        windowSrfs.append([childSrf.geometry])
        
        #Make sure that all of the HBObjects are of the same type.
        checkSameType = True
        if sum(isZoneList) == len(_HBObjects): isZone = True
        elif sum(isZoneList) == 0: isZone = False
        else:
            checkSameType = False
            warning = "This component currently only supports inputs that are all HBZones or all HBSurfaces. You cannot input a mix of these inputs."
            print warning
            ghenv.Component.AddRuntimeMessage(w, warning)
            isZone = False
        
        #Check the inputs and make sure that we have everything that we need to generate the shades.  Set defaults on things that are not connected.
        if checkSameType == True:
            checkData, windowNames, windowSrfsInit, depths, alignedDataTree, numOfShd, blindsMaterial, schedule = checkTheInputs(zoneNames, windowNames, windowSrfs, isZone)
        else: checkData == False
        
        #Generate the shades.
        if checkData == True:
            shadings = []
            for window in windowSrfsInit:
                shadeBreps, EPSlatOrient, depth, shadingHeight, EPshdAngle, distToGlass, EPinteriorOrExter, assignEPCheckInit = makeShade(window, depths, numOfShd, _distBetween)
                shadings.append(shadeBreps)
                EPSlatOrientList.append(EPSlatOrient)
                depthList.append(depth)
                shadingHeightList.append(shadingHeight)
                EPshdAngleList.append(EPshdAngle)
                distToGlassList.append(distToGlass)
                EPinteriorOrExterList.append(EPinteriorOrExter)
                if assignEPCheckInit == False: assignEPCheck = False
            
            #Create the EnergyPlus blinds material and shading control for each window.
            if assignEPCheck == True:
                for count, windowObj in enumerate(windowObjects):
                    windowObj.blindsMaterial = createEPBlindMat(blindsMaterial, EPSlatOrientList[count], depthList[count], shadingHeightList[count], EPshdAngleList[count], distToGlassList[count], windowObj.name)
                    windowObj.shadingControl = createEPBlindControl(blindsMaterial, schedule, EPinteriorOrExterList[count], windowObj.name)
                    windowObj.shadingControlName = 'BlindCntrlFor_' + windowObj.name
                    windowObj.shadingSchName = schedule
                # ... add the zones with their assigned shades back to the hive to make the HBObjWShades output.
            
            return checkData, windowSrfsInit, shadings, alignedDataTree, HBObjWShades
        else:
            return False, [], [], [], []
    else:
        print "You should first let both Honeybee and Ladybug fly..."
        ghenv.Component.AddRuntimeMessage(w, "You should first let both Honeybee and Ladybug fly...")
        return False, [], [], [], []


#Run the main functions.
checkData = False
if _HBObjects != [] and _runIt == True:
    checkData, windowSrfsInit, shadings, alignedDataTree, HBObjWShades = main()

#Unpack the data trees.
if checkData == True:
    windowBreps = DataTree[Object]()
    shadeBreps = DataTree[Object]()
    zoneData1Tree = DataTree[Object]()
    zoneData2Tree = DataTree[Object]()
    zoneData3Tree = DataTree[Object]()
    
    for count, brep in enumerate(windowSrfsInit):
        windowBreps.Add(brep, GH_Path(count))
    for count, brepList in enumerate(shadings):
        for brep in brepList:
            shadeBreps.Add(brep, GH_Path(count))
    
    for treeCount, finalTree in enumerate(alignedDataTree):
        if treeCount == 0:
            for bCount, branch in enumerate(finalTree):
                for twig in branch: zoneData1Tree.Add(twig, GH_Path(bCount))
        elif treeCount == 1:
            for bCount, branch in enumerate(finalTree):
                for twig in branch: zoneData2Tree.Add(twig, GH_Path(bCount))
        elif treeCount == 2:
            for bCount, branch in enumerate(finalTree):
                for twig in branch: zoneData3Tree.Add(twig, GH_Path(bCount))
    
    if HBObjWShades != []:
        HBZones = HBObjWShades
blindsMaterial_: An optional blind material from the blind", "item 3 = east depth. Lists of vectors to be shaded can also", "be sure that the values are parallel to the correct vector. testVec =", "header[2].split(' for ')[-1] if str(winNm) == str(windowName.upper()): alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount]) srfData = True if zoneData", "schedCntrl = 'Yes' EPBlindControl = 'WindowProperty:ShadingControl,\\n' + \\ '\\t' + 'BlindCntrlFor_' + name", "Visible Transmittance\\n' + \\ '\\t' + ', !- Front Side Slat Diffuse Visible", "if interiorOrExter == True: EPinteriorOrExter = 'InteriorBlind' else: EPinteriorOrExter = 'ExteriorBlind' #Generate the", "else: EPinteriorOrExter = 'ExteriorBlind' #Generate the shade curves based on the planes and", "the glazing by cardinal direction and assign different distToGlass_ to different directions. distToGlass", "HBZoneObjects = hb_hive.callFromHoneybeeHive(_HBObjects) #Find out what the object is and make sure that", "exterior. The default is set to \"False\" to generate exterior shades. distToGlass_: A", "pointCurve = rc.Geometry.Curve.CreateControlPointCurve([maxXYPt, minXYPt]) divisionParams = pointCurve.DivideByLength(shadingHeight, True) divisionPoints = [] for param", "(depth)*(0.5)*math.cos(math.radians(EPshdAngle)) if EPDistToGlass < 0.01: EPDistToGlass = 0.01 elif EPDistToGlass > 1: warning", "# find the type based on object.getAngle2North() if not hasattr(object, \"BC\"): object.BC =", "W or deg C}\\n' + \\ '\\t' + schedCntrl + ', !- Shading", "window is not planar. 
EenergyPlus shades will not be assigned to this window.\"", "normalVectorPerp).ToBrep() shadingSurfaces.append(shdSrf) except: pass #If the user has specified a distance to move", "name +', !- Name\\n' + \\ '\\t' + EPinteriorOrExter + ', !- Shading", "= rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z) maxXYPt = bbox.Corner(False, False, True) maxXYPt = rc.Geometry.Point3d(maxXYPt.X, maxXYPt.Y,", "warning) elif object.isPlanar == False: assignEPCheck = False warning = \"The surface must", "branch in allData: checkHeader = [] dataHeaders = [] dataNumbers = [] for", "depths, numOfShd, _distBetween) shadings.append(shadeBreps) EPSlatOrientList.append(EPSlatOrient) depthList.append(depth) shadingHeightList.append(shadingHeight) EPshdAngleList.append(EPshdAngle) distToGlassList.append(distToGlass) EPinteriorOrExterList.append(EPinteriorOrExter) if assignEPCheckInit ==", "0.65 solar reflectance, 0 transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK conductivity.\"", "HBZoneObjects: if object.objectType == \"HBZone\": isZoneList.append(1) zoneNames.append(object.name) winBreps = [] winNames = []", "for windowCount, window in enumerate(windowList): windowBrepsFinal.append(window) windowName = windowNames[zoneCount][windowCount] windowNamesFinal.append(windowName) for inputDataTreeCount, branch", "or srf.BC == 'Outdoors': if srf.isPlanar == True: for childSrf in srf.childSrfs: windowObjects.append(childSrf)", "Slat Beam Visible Reflectance\\n' + \\ '\\t' + ', !- Slat Diffuse Visible", "that represents an angle in degrees to rotate the shades. The default is", "False warning = \"Not all of the connected zoneData has a Ladybug/Honeybee header", "the normal direction for more point baseSrfCenPt = rc.Geometry.AreaMassProperties.Compute(_glzSrf).Centroid # sometimes the center", "\"Note that E+ does not like shading depths greater than 1. 
HBObjWShades will", "minXYPt = bbox.Corner(False, True, True) minXYPt = rc.Geometry.Point3d(minXYPt.X, minXYPt.Y, minXYPt.Z) maxXYPt = bbox.Corner(True,", "this to rotate them towards the South by a certain value in degrees.", "beam gain synced with that of the zones and windows. For this, you", "= \"The input shdAngle_ value will cause EnergyPlus to crash.\" print warning ghenv.Component.AddRuntimeMessage(w,", "degrees). _depth: A number representing the depth of the shade to be generated", "if checkData == True: shadings = [] for window in windowSrfsInit: shadeBreps, EPSlatOrient,", "Left Side Opening Multiplier\\n' + \\ '\\t' + '0.5, !- Blind Right Side", "+ \\ '\\t' + '0.5, !- Blind Top Opening Multiplier\\n' + \\ '\\t'", "is set to \"0\" for no rotation. If you have vertical shades, use", "the shades. _runIt: Set boolean to \"True\" to run the component and generate", "\"ALWAYSON\" print \"No blinds schedule has been connected. It will be assumed that", "horOrVertical == False: planeVec = rc.Geometry.Vector3d.ZAxis if getAngle2North(normalVectorPerp) < 180: normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) else:", "horOrVertical, lb_visualization, normalVector): # find the bounding box bbox = glzSrf.GetBoundingBox(True) if horOrVertical", "value is so large that it will cause EnergyPlus to crash.\" print warning", "... zoneData1_: Optional zone data for the HBZones_ that will be aligned with", "window does not have an outdoor boundary condition. EenergyPlus shades will not be", "+ '0.5, !- Blind Right Side Opening Multiplier\\n' + \\ '\\t' + ',", "not align correctly with the EP Result data. 
blindsMaterial_: An optional blind material", "warning ghenv.Component.AddRuntimeMessage(w, warning) isZone = False #Check the inputs and make sure that", "first let both Honeybee and Ladybug fly...\") return False, [], [], [], []", "+ ', !- Front Side Slat Diffuse Visible Reflectance\\n' + \\ '\\t' +", "it to split up the glazing by cardinal direction and assign different shdAngle_", "if len(valueList) == 1: value = valueList[0] elif len(valueList) > 1: initAngles =", "if sum(checkHeader) == len(branch):pass else: checkData3 = False warning = \"Not all of", "0 transmittance, 0.9 emittance, 0.25 mm thickness, 221 W/mK conductivity.\" blindsMaterial = ['DEFAULTBLINDSMATERIAL',", "large that it will cause EnergyPlus to crash.\" print warning ghenv.Component.AddRuntimeMessage(w, warning) #Check", "it. This header is necessary for data input to this component.\" print warning", "angles = [] if valueList == None or len(valueList) == 0: value =", "calculating the number of shades to generate minZPt = bbox.Corner(False, True, True) minZPt", "the zones that are fed into the Run Energy Simulation component. Zones read", "as rs import scriptcontext as sc import uuid import math import os w", "'\\t' + str(blindsMaterial[3]) + ', !- Front Side Slat Infrared Hemispherical Emissivity\\n' +", "0: for bCount, branch in enumerate(finalTree): for twig in branch: zoneData1Tree.Add(twig, GH_Path(bCount)) elif", "not be curved. With the way that we mesh curved surfaces for E+,", "!- Glare Control Is Active\\n' + \\ '\\t' + blindsMaterial[0] + \"_\" +", "test shade areas for shade benefit evaluation after an energy simulation has already", "c in intCrvs: try: shdSrf = rc.Geometry.Surface.CreateExtrusion(c, float(depth) * normalVectorPerp).ToBrep() shadingSurfaces.append(shdSrf) except: pass", "shade benefit evaluation after an energy simulation has already been run. 
In this", "'\\t' + str(blindsMaterial[4]) + ', !- Slat Thickness {m}\\n' + \\ '\\t' +", "= header[2].split(' for ')[-1] if str(winNm) == str(windowName.upper()): alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount]) srfData = True if", "zoneData2_: Optional zone data for the HBZones_ that will be aligned with the", "component creates shades for Honeybee Zones # By <NAME> # <EMAIL> # Ladybug", "assign it to the windows with shades. if assignEPCheck == True: for count,", "normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) elif horOrVertical == False: planeVec = rc.Geometry.Vector3d.ZAxis if getAngle2North(normalVectorPerp) < 180:", "\\ '\\t' + str(blindsMaterial[2]) + ', !- Slat Beam Solar Transmittance\\n' + \\", "= rc.Geometry.Vector3d.YAxis angle = rc.Geometry.Vector3d.VectorAngle(northVector, normalVector, rc.Geometry.Plane.WorldXY) finalAngle = math.degrees(angle) return finalAngle #", "finalAngle # Define a function that can split up a list of values", "+ ', !- Slat Diffuse Visible Transmittance\\n' + \\ '\\t' + ', !-", "windowNames.append([childSrf.name]) windowSrfs.append([childSrf.geometry]) #Make sure that all HBObjects are of the same type. checkSameType", "object.objectType == \"HBZone\": isZoneList.append(1) zoneNames.append(object.name) winBreps = [] winNames = [] for srf", "to assign blind objects to HBZones prior to simulation. These blinds can be", "vertical to different directions. horOrVertical = getValueBasedOnOrientation(horOrVertical_) # If multiple shdAngle_ inputs are", "\\ '\\t' + blindsMaterial[0] + \"_\" + name + ', !- Shading Device", "object is and make sure that we can run it through this component's", "Drawing from clr import AddReference AddReference('Grasshopper') import Grasshopper.Kernel as gh from Grasshopper import", "directions. 
horOrVertical = getValueBasedOnOrientation(horOrVertical_) # If multiple shdAngle_ inputs are given, use it", "checkData5 = False warning = 'Blinds material is not a valid blinds material", "condition of the input object must be outdoors. E+ cannot create shades for", "zoneData2_, which align with the branches for each window above. zoneData3Tree: Data trees", "for count, brepList in enumerate(shadings): for brep in brepList: shadeBreps.Add(brep, GH_Path(count)) for treeCount,", "for visualization. _ The second way to use the component is to create", "if shadingRemainder == 0: shadingRemainder = shadingHeight # find shading base planes planeOrigins", "HBObjWShades = main() #Unpack the data trees. if checkData == True: windowBreps =", "to rotate them towards the South by a certain value in degrees. If", "= sc.sticky[\"honeybee_EPZone\"] hb_EPSrf = sc.sticky[\"honeybee_EPSurface\"] hb_EPFenSurface = sc.sticky[\"honeybee_EPFenSurface\"] hb_hive = sc.sticky[\"honeybee_Hive\"]() #Make the", "is not planar. EenergyPlus shades will not be assigned to this window.\" else:", "distBtwn): rotationAngle_ = 0 # import the classes lb_preparation = sc.sticky[\"ladybug_Preparation\"]() lb_mesh =", "different directions. distToGlass = getValueBasedOnOrientation(distToGlass_) # generate the planes planes, shadingHeight = analyzeGlz(_glzSrf,", "component for individual surfaces, you should make sure that the direction of the", "if interiorOrExter == True: normalVectorPerp.Reverse() else: interiorOrExter = False #If a shdAngle is", "maxXYPt.Y, maxXYPt.Z)) if testVec.IsParallelTo(planeVec) == 0: minXYPt = bbox.Corner(False, True, True) minXYPt =", "+ \\ '\\t' + ', !- Front Side Slat Beam Visible Reflectance\\n' +", "True if depth > 1: assignEPCheckInit = False warning = \"Note that E+", "shades to generated for each glazed surface. 
_distBetween: An alternate option to _numOfShds", "+ \"_\" + name + ', !- Name\\n' + \\ '\\t' + EPSlatOrient", "tolVec = rc.Geometry.Vector3d.Subtract(rc.Geometry.Vector3d(minXYPt.X, minXYPt.Y, minXYPt.Z), rc.Geometry.Vector3d(maxXYPt.X, maxXYPt.Y, maxXYPt.Z)) tolVec.Unitize() tolVec = rc.Geometry.Vector3d.Multiply(sc.doc.ModelAbsoluteTolerance*2, tolVec)", "sc.sticky[\"honeybee_EPZone\"] hb_EPSrf = sc.sticky[\"honeybee_EPSurface\"] hb_EPFenSurface = sc.sticky[\"honeybee_EPFenSurface\"] hb_hive = sc.sticky[\"honeybee_Hive\"]() #Make the lists", "+ str(blindsMaterial[1]) + ', !- Back Side Slat Diffuse Solar Reflectance\\n' + \\", "print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) return shadingSurfaces, EPSlatOrient, depth, shadingHeight, EPshdAngle, EPDistToGlass, EPinteriorOrExter, assignEPCheckInit", "represents an angle in degrees to rotate the shades. The default is set", "checkData3 = True checkBranches = [] allHeaders = [] allNumbers = [] for", "import Object from System import Drawing from clr import AddReference AddReference('Grasshopper') import Grasshopper.Kernel", "+ \\ '\\t' + str(blindsMaterial[1]) + ', !- Back Side Slat Beam Solar", "== 'ALWAYSON': schedCntrlType = 'ALWAYSON' schedCntrl = 'No' schedName = '' else: schedName", "run with one shade per window.\" else: numOfShd = _numOfShds #Check the depths.", "'\\t' + ', !- Front Side Slat Diffuse Visible Reflectance\\n' + \\ '\\t'", "object must be outdoors. E+ cannot create shades for intdoor windows.\" print warning", "maxXYPt.Y, maxXYPt.Z)) tolVec.Unitize() tolVec = rc.Geometry.Vector3d.Multiply(sc.doc.ModelAbsoluteTolerance*2, tolVec) if tolVec.X > 0 and tolVec.Y", "that are not connected. 
if checkSameType == True: checkData, windowNames, windowSrfsInit, depths, alignedDataTree,", "#Check if there is a blinds material connected and, if not, set a", "1: value = valueList[0] elif len(valueList) > 1: initAngles = rs.frange(0, 360, 360/len(valueList))", "creates shades for Honeybee Zones # By <NAME> # <EMAIL> # Ladybug started", "in Honeybee schedule library.\" print msg ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg) checkData4 = False elif schedule!=None", "will automatically be assigned to the zone and the windowBreps and shadeBreps outputs", "count, brep in enumerate(windowSrfsInit): windowBreps.Add(brep, GH_Path(count)) for count, brepList in enumerate(shadings): for brep", "Window Shade Generator\" ghenv.Component.NickName = 'EPWindowShades' ghenv.Component.Message = 'VER 0.0.55\\nSEP_11_2014' ghenv.Component.Category = \"Honeybee\"", "\"Cannot find the shchedule file: \" + schedule print msg ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg) checkData4", "on the exterior. The default is set to \"False\" to generate exterior shades.", "windowBrepsFinal = [] alignedDataTree = [] for item in allData: alignedDataTree.append([]) for zoneCount,", "== True: zoneName = zoneNames[zoneCount] for windowCount, window in enumerate(windowList): windowBrepsFinal.append(window) windowName =", "HBZones prior to simulation. 
These blinds can be dynamically controlled via a schedule.", "blindsMaterial = ['DEFAULTBLINDSMATERIAL', 0.65, 0, 0.9, 0.00025, 221] else: try: blindsMaterial = deconstructBlindMaterial(blindsMaterial_)", "use the component is to create test shade areas for shade benefit evaluation", "tolVec) norOrient = True else: tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient = True maxXYPt", "== True: checkData, windowSrfsInit, shadings, alignedDataTree, HBObjWShades = main() #Unpack the data trees.", "twig in branch: zoneData2Tree.Add(twig, GH_Path(bCount)) elif treeCount == 2: for bCount, branch in", "can account for these shades using a 'Honeybee_EP Context Surfaces' component.\" print warning", "Solar Reflectance\\n' + \\ '\\t' + str(blindsMaterial[2]) + ', !- Slat Beam Visible", "= False if _HBObjects != [] and _runIt == True: checkData, windowSrfsInit, shadings,", "[] windowBrepsFinal = [] alignedDataTree = [] for item in allData: alignedDataTree.append([]) for", "fed into the Run Energy Simulation component. Zones read back into Grasshopper from", "if valueList == None or len(valueList) == 0: value = None if len(valueList)", "shades. You can also input lists of horOrVertical_ input, which will assign different", "EPshdAngleList = [] distToGlassList = [] EPinteriorOrExterList = [] #Call the objects from", "is for the zone level. zoneData = False if isZone == True: for", "an angle in degrees to rotate the shades. The default is set to", "checkData2 = False print \"You must provide a depth for the shades.\" ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning,", "up a list of values and assign it to different cardinal directions. def", "Angle {deg}\\n' + \\ '\\t' + str(blindsMaterial[5]) + ', !- Slat Conductivity {W/m-K}\\n'", "angles to different cardinal directions. interiorOrExter_: Set to \"True\" to generate Shades on", "prior to simulation. These blinds can be dynamically controlled via a schedule. 
Note", "have horizontal shades, use this input to angle shades downward. You can also", "angleFromNorm*(-1) #If the user has set the shades to generate on the interior,", "windowObjects.append(childSrf) winNames.append(childSrf.name) winBreps.append(childSrf.geometry) else: print \"One surface with a window is not planar.", "Side Slat Infrared Hemispherical Emissivity\\n' + \\ '\\t' + str(blindsMaterial[3]) + ', !-", "find number of shadings try: numOfShd = int(numOfShds) shadingHeight = glzHeight/numOfShd shadingRemainder =", "['DEFAULTBLINDSMATERIAL', 0.65, 0, 0.9, 0.00025, 221] else: try: blindsMaterial = deconstructBlindMaterial(blindsMaterial_) except: checkData5", "file: \" + schedule print msg ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, msg) checkData4 = False #Create a", "not in HBScheduleList: msg = \"Cannot find \" + schedule + \" in", "surface baseSrfCenPt = _glzSrf.ClosestPoint(baseSrfCenPt) bool, centerPtU, centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt) if bool: normalVector =", "from System import Object from System import Drawing from clr import AddReference AddReference('Grasshopper')", "AddReference AddReference('Grasshopper') import Grasshopper.Kernel as gh from Grasshopper import DataTree from Grasshopper.Kernel.Data import", "blindsTransform = rc.Geometry.Transform.Translation(finalTransVec) for shdSrf in shadingSurfaces: shdSrf.Transform(blindsTransform) else: distToGlass = 0 #Get", "idf component will not align correctly with the EP Result data. blindsMaterial_: An", "= False return checkData, windowNamesFinal, windowBrepsFinal, _depth, alignedDataTree, numOfShd, blindsMaterial, schedule def analyzeGlz(glzSrf,", "if shdAngle != None: if horOrVertical == True or horOrVertical == None: horOrVertical", "', !- Slat Beam Solar Transmittance\\n' + \\ '\\t' + str(blindsMaterial[1]) + ',", "the shades. 
if checkData == True: shadings = [] for window in windowSrfsInit:", "elif horOrVertical == True: # Horizontal #Define a bounding box for use in", "mm thickness, 221 W/mK conductivity. blindsSchedule_: An optional schedule to raise and lower", "can split up a list of values and assign it to different cardinal", "[] if valueList == None or len(valueList) == 0: value = None if", "if srf.BC == 'OUTDOORS' or srf.BC == 'Outdoors': if srf.isPlanar == True: for", "is not a valid blinds material from the \"Honeybee_EnergyPlus Blinds Material\" component.' print", "not os.path.isfile(schedule): msg = \"Cannot find the shchedule file: \" + schedule print", "For example, inputing 4 values for depths will assign each value of the", "+ \\ '\\t' + ', !- Back Side Slat Beam Visible Reflectance\\n' +", "gain for a shade benefit simulation with the generated shades. zoneData2_: Optional zone", "between shade inputs are given, use it to split up the glazing by", "depth. Lists of vectors to be shaded can also be input and shades", "shades. if checkData == True: shadings = [] for window in windowSrfsInit: shadeBreps,", "and _numOfShds == []: numOfShd = [1] print \"No value is connected for", "\"zoneDataTree\" in the shade benefit evaluation. - Provided by Honeybee 0.0.55 Args: _HBObjects:", "to create test shade areas for shade benefit evaluation after an energy simulation", "there is a blinds material connected and, if not, set a default. checkData5", "and windows. For this, you would take imported EnergyPlus results and hook them", "of any of the HB components that generate or alter zones. Note that", "from the y-axis to make North. The default North direction is set to", "if checkSameType == True: checkData, windowNames, windowSrfsInit, depths, alignedDataTree, numOfShd, blindsMaterial, schedule =", "different directions. 
numShds = getValueBasedOnOrientation(numShds) # If multiple distances between shade inputs are", "'Honeybee_EP Context Surfaces' component.\" print warning ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) return shadingSurfaces, EPSlatOrient, depth, shadingHeight,", "'0.5, !- Blind Right Side Opening Multiplier\\n' + \\ '\\t' + ', !-", "< 180: normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) else: normalVectorPerp.Rotate((shdAngle*-0.01745329), planeVec) else: shdAngle = 0 #Make EP", "and the windowBreps and shadeBreps outputs are just for visualization. _ The second", "= False print \"You must provide a depth for the shades.\" ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, \"You", "sum(checkHeader) == len(branch):pass else: checkData3 = False warning = \"Not all of the", "in order to be sure that your shades are previewing correctly.\" print warning", "False else: checkSameType = False warning = \"This component currently only supports inputs", "for shadings intCrvs =[] for plane in planes: try: intCrvs.append(rc.Geometry.Brep.CreateContourCurves(_glzSrf, plane)[0]) except: print", "msg = \"Cannot find \" + schedule + \" in Honeybee schedule library.\"", "on object.getAngle2North() if not hasattr(object, \"BC\"): object.BC = 'OUTDOORS' if object.hasChild: if object.BC", "to generated for each glazed surface. _distBetween: An alternate option to _numOfShds where", "False if tolVec.X < 0 and tolVec.Y < 0: tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec)", "== False: planeVec = rc.Geometry.Vector3d.ZAxis if getAngle2North(normalVectorPerp) < 180: normalVectorPerp.Rotate((shdAngle*0.01745329), planeVec) else: normalVectorPerp.Rotate((shdAngle*-0.01745329),", "Name\\n' + \\ '\\t' + schedCntrlType + ', !- Shading Control Type\\n' +", "in lists of angles to assign different shade angles to different cardinal directions.", "depths. 
checkData2 = True if _depth == []: checkData2 = False print \"You", "a depth for the shades.\") #Check if there is a blinds material connected", "schedule ModifiedHBZones = hb_hive.addToHoneybeeHive(HBZoneObjects, ghenv.Component.InstanceGuid.ToString() + str(uuid.uuid4())) else: ModifiedHBZones = [] return checkData,", "Slat Beam Visible Transmittance\\n' + \\ '\\t' + ', !- Front Side Slat", "make sure that we can run it through this component's functions. for object", "= 'OUTDOORS' if object.hasChild: if object.BC != 'OUTDOORS' and object.BC != 'Outdoors': assignEPCheck", "shadeBreps outputs are just for visualization. _ The second way to use the", "and checkData5 == True: checkData = True else: checkData = False return checkData,", "connected. if checkSameType == True: checkData, windowNames, windowSrfsInit, depths, alignedDataTree, numOfShd, blindsMaterial, schedule", "rc.Geometry.Point3d(maxZPt.X, maxZPt.Y, maxZPt.Z - sc.doc.ModelAbsoluteTolerance) centerPt = bbox.Center #glazing hieghts glzHeight = minZPt.DistanceTo(maxZPt)", "provide a depth for the shades.\" ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Warning, \"You must provide a depth for", "== None: numOfShds = 1 if numOfShds == 0 or distBetween == 0:", "the surface baseSrfCenPt = _glzSrf.ClosestPoint(baseSrfCenPt) bool, centerPtU, centerPtV = _glzSrf.Faces[0].ClosestPoint(baseSrfCenPt) if bool: normalVector", "getValueBasedOnOrientation(shdAngle_) #If multiple interiorOrExter_ inputs are given, use it to split up the", "and distBetween == None: numOfShds = 1 if numOfShds == 0 or distBetween", "str(windowName.upper()): alignedDataTree[inputDataTreeCount].append(allData[inputDataTreeCount][listCount]) srfData = True if zoneData == False and srfData == False", "1: for bCount, branch in enumerate(finalTree): for twig in branch: zoneData2Tree.Add(twig, GH_Path(bCount)) elif", "and the shadingHeight to see if E+ will crash. 
assignEPCheckInit = True if", "[] windowSrfs = [] windowObjects = [] isZoneList = [] assignEPCheck = True", "more point baseSrfCenPt = rc.Geometry.AreaMassProperties.Compute(_glzSrf).Centroid # sometimes the center point is not located", "+ \\ '\\t' + '180; !- Maximum Slat Angle {deg}\\n' return EPBlindMat def", "True and checkData5 == True: checkData = True else: checkData = False return", "multiple number of shade inputs are given, use it to split up the", "srfData == False and alignedDataTree != [[], [], []]: print \"A window was", "Simulation component. Zones read back into Grasshopper from the Import idf component will", "=[] #Define a function that can get the angle to North of any", "0.00025, 221] else: try: blindsMaterial = deconstructBlindMaterial(blindsMaterial_) except: checkData5 = False warning =", "+ \\ '\\t' + str(blindsMaterial[2]) + ', !- Slat Beam Visible Transmittance\\n' +", "if object.objectType == \"HBZone\": isZoneList.append(1) zoneNames.append(object.name) winBreps = [] winNames = [] for", "Data trees of the zoneData1_, which align with the branches for each window", "0 and tolVec.Y < 0: tolVec = rc.Geometry.Vector3d.Multiply(-1, tolVec) norOrient = True else:", "ghenv.Component.AddRuntimeMessage(gh.GH_RuntimeMessageLevel.Remark, warning) return shadingSurfaces, EPSlatOrient, depth, shadingHeight, EPshdAngle, EPDistToGlass, EPinteriorOrExter, assignEPCheckInit def deconstructBlindMaterial(material):", "Side Slat Beam Visible Reflectance\\n' + \\ '\\t' + ', !- Back Side", "srf.isPlanar == True: for childSrf in srf.childSrfs: windowObjects.append(childSrf) winNames.append(childSrf.name) winBreps.append(childSrf.geometry) else: print \"One", "directions. interiorOrExter = getValueBasedOnOrientation(interiorOrExter_) #If multiple distToGlass_ inputs are given, use it to", "window above. zoneData2Tree: Data trees of the zoneData2_, which align with the branches", "the depths. 
checkData2 = True if _depth == []: checkData2 = False print", "I should sample the test surface # and test the normal direction for", "after an energy simulation has already been run. In this case, the component", "[] for branch in allData: checkHeader = [] dataHeaders = [] dataNumbers =", "(0 degrees). _depth: A number representing the depth of the shade to be", "checkData5 == True: checkData = True else: checkData = False return checkData, windowNamesFinal,", "False #If a shdAngle is provided, use it to rotate the planes by", "of some of the outputs. EPshdAngleInint = angleFromNorm+shdAngle if EPshdAngleInint >= 0: EPshdAngle", "rc.Geometry.Surface.CreateExtrusion(c, float(depth) * normalVectorPerp).ToBrep() shadingSurfaces.append(shdSrf) except: pass #If the user has specified a", "distToGlass_ value is so large that it will cause EnergyPlus to crash.\" print", "generate on the interior, flip the normal vector. if interiorOrExter == True: normalVectorPerp.Reverse()", "Visible Reflectance\\n' + \\ '\\t' + ', !- Back Side Slat Diffuse Visible" ]
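# The spacing arithmetic above (divide the glazing height evenly by a shade
# count, or step by a fixed distance and carry the remainder) can be checked
# outside of Rhino.  The following is a minimal pure-Python sketch of that
# logic; the function name shade_spacing is hypothetical and not part of the
# component:
#
#     import math
#
#     def shade_spacing(glz_height, num_of_shds=None, dist_between=None):
#         """Return (shading_height, remainder) for a glazing of a given height."""
#         if num_of_shds is not None:
#             shading_height = glz_height / int(num_of_shds)
#             remainder = shading_height
#         else:
#             shading_height = dist_between
#             remainder = ((glz_height / dist_between)
#                          - math.floor(glz_height / dist_between)) * dist_between
#             if remainder == 0:
#                 remainder = shading_height
#         return shading_height, remainder
#
#     # Three shades over a 3m-tall window -> 1m spacing:
#     print(shade_spacing(3.0, num_of_shds=3))   # -> (1.0, 1.0)
#     # A fixed 0.8m spacing over the same window leaves a ~0.6m remainder:
#     print(shade_spacing(3.0, dist_between=0.8))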
from devserver.modules import DevServerModule
from devserver.utils.time import ms_from_timedelta
from devserver.settings import DEVSERVER_AUTO_PROFILE

from datetime import datetime

import functools
import gc


class ProfileSummaryModule(DevServerModule):
    """
    Outputs the total time taken to render the response.
    """

    logger_name = 'profile'

    def process_init(self, request):
        self.start = datetime.now()

    def process_complete(self, request):
        duration = datetime.now() - self.start
        self.logger.info('Total time to render was %.2fs', ms_from_timedelta(duration) / 1000)


class LeftOversModule(DevServerModule):
    """
    Outputs a summary of events the garbage collector couldn't handle.
    """
    # TODO: not even sure this is correct, but it's the general idea

    logger_name = 'profile'

    def process_init(self, request):
        gc.enable()
        gc.set_debug(gc.DEBUG_SAVEALL)

    def process_complete(self, request):
        gc.collect()
        self.logger.info('%s objects left in garbage', len(gc.garbage))


from django.template.defaultfilters import filesizeformat

try:
    from guppy import hpy
except ImportError:
    import warnings

    class MemoryUseModule(DevServerModule):
        def __new__(cls, *args, **kwargs):
            warnings.warn('MemoryUseModule requires guppy to be installed.')
            return super(MemoryUseModule, cls).__new__(cls)
else:
    class MemoryUseModule(DevServerModule):
        """
        Outputs a summary of memory usage over the course of a request.
        """

        logger_name = 'profile'

        def __init__(self, request):
            super(MemoryUseModule, self).__init__(request)
            self.hpy = hpy()
            self.oldh = self.hpy.heap()
            self.logger.info('heap size is %s', filesizeformat(self.oldh.size))

        def process_complete(self, request):
            newh = self.hpy.heap()
            alloch = newh - self.oldh
            dealloch = self.oldh - newh
            self.oldh = newh
            self.logger.info('%s allocated, %s deallocated, heap size is %s',
                             *map(filesizeformat, [alloch.size, dealloch.size, newh.size]))

try:
    from line_profiler import LineProfiler
except ImportError:
    import warnings

    class LineProfilerModule(DevServerModule):
        def __new__(cls, *args, **kwargs):
            warnings.warn('LineProfilerModule requires line_profiler to be installed.')
            return super(LineProfilerModule, cls).__new__(cls)

    class devserver_profile(object):
        def __init__(self, follow=[]):
            pass

        def __call__(self, func):
            return func
else:
    class LineProfilerModule(DevServerModule):
        """
        Outputs a line-by-line profile of any @devserver_profile'd functions that were run.
        """

        logger_name = 'profile'

        def process_view(self, request, view_func, view_args, view_kwargs):
            request.devserver_profiler = LineProfiler()
            request.devserver_profiler_run = False
            if DEVSERVER_AUTO_PROFILE:
                _unwrap_closure_and_profile(request.devserver_profiler, view_func)
                request.devserver_profiler.enable_by_count()

        def process_complete(self, request):
            if hasattr(request, 'devserver_profiler_run') and (DEVSERVER_AUTO_PROFILE or request.devserver_profiler_run):
                from cStringIO import StringIO
                out = StringIO()

                if DEVSERVER_AUTO_PROFILE:
                    request.devserver_profiler.disable_by_count()

                request.devserver_profiler.print_stats(stream=out)
                self.logger.info(out.getvalue())

    def _unwrap_closure_and_profile(profiler, func):
        # Profile func itself, then recurse into any functions captured in its closure.
        if not hasattr(func, 'func_code'):
            return
        profiler.add_function(func)
        if func.func_closure:
            for cell in func.func_closure:
                if hasattr(cell.cell_contents, 'func_code'):
                    _unwrap_closure_and_profile(profiler, cell.cell_contents)

    class devserver_profile(object):
        def __init__(self, follow=[]):
            self.follow = follow

        def __call__(self, func):
            def profiled_func(*args, **kwargs):
                request = args[0]
                if hasattr(request, 'request'):
                    # We're decorating a Django class-based view, so the first
                    # argument is actually self:
                    request = args[1]
                try:
                    request.devserver_profiler.add_function(func)
                    request.devserver_profiler_run = True
                    for f in self.follow:
                        request.devserver_profiler.add_function(f)
                    request.devserver_profiler.enable_by_count()
                    return func(*args, **kwargs)
                finally:
                    request.devserver_profiler.disable_by_count()
            return functools.wraps(func)(profiled_func)
LineProfilerModule(DevServerModule): def __new__(cls, *args,", "__new__(cls, *args, **kwargs): warnings.warn('LineProfilerModule requires line_profiler to be installed.') return super(LineProfilerModule, cls).__new__(cls) class", "from django.template.defaultfilters import filesizeformat try: from guppy import hpy except ImportError: import warnings", "devserver_profile(object): def __init__(self, follow=[]): self.follow = follow def __call__(self, func): def profiled_func(*args, **kwargs):", "pass def __call__(self, func): return func else: class LineProfilerModule(DevServerModule): \"\"\" Outputs a Line", "%.2fs', ms_from_timedelta(duration) / 1000) class LeftOversModule(DevServerModule): \"\"\" Outputs a summary of events the", "a Line by Line profile of any @devserver_profile'd functions that were run \"\"\"", "allocated, %s deallocated, heap size is %s', *map(filesizeformat, [alloch.size, dealloch.size, newh.size])) try: from", "request = args[0] if hasattr(request, 'request'): # We're decorating a Django class-based-view and", "def __call__(self, func): return func else: class LineProfilerModule(DevServerModule): \"\"\" Outputs a Line by", "be installed.') return super(LineProfilerModule, cls).__new__(cls) class devserver_profile(object): def __init__(self, follow=[]): pass def __call__(self,", "argument is actually self: request = args[1] try: request.devserver_profiler.add_function(func) request.devserver_profiler_run = True for", "couldn't handle. 
\"\"\" # TODO: Not even sure this is correct, but the", "line_profiler to be installed.') return super(LineProfilerModule, cls).__new__(cls) class devserver_profile(object): def __init__(self, follow=[]): pass", "@devserver_profile'd functions that were run \"\"\" logger_name = 'profile' def process_view(self, request, view_func,", "installed.') return super(LineProfilerModule, cls).__new__(cls) class devserver_profile(object): def __init__(self, follow=[]): pass def __call__(self, func):", "Outputs a summary of events the garbage collector couldn't handle. \"\"\" # TODO:", "def process_complete(self, request): newh = self.hpy.heap() alloch = newh - self.oldh dealloch =", "= 'profile' def process_view(self, request, view_func, view_args, view_kwargs): request.devserver_profiler = LineProfiler() request.devserver_profiler_run =", "True for f in self.follow: request.devserver_profiler.add_function(f) request.devserver_profiler.enable_by_count() return func(*args, **kwargs) finally: request.devserver_profiler.disable_by_count() return", "class MemoryUseModule(DevServerModule): \"\"\" Outputs a summary of memory usage of the course of", "devserver.modules import DevServerModule from devserver.utils.time import ms_from_timedelta from devserver.settings import DEVSERVER_AUTO_PROFILE from datetime", "[alloch.size, dealloch.size, newh.size])) try: from line_profiler import LineProfiler except ImportError: import warnings class", "datetime.now() - self.start self.logger.info('Total time to render was %.2fs', ms_from_timedelta(duration) / 1000) class", "a request. 
\"\"\" logger_name = 'profile' def __init__(self, request): super(MemoryUseModule, self).__init__(request) self.hpy =", "filesizeformat(self.oldh.size)) def process_complete(self, request): newh = self.hpy.heap() alloch = newh - self.oldh dealloch", "size is %s', *map(filesizeformat, [alloch.size, dealloch.size, newh.size])) try: from line_profiler import LineProfiler except", "devserver_profile(object): def __init__(self, follow=[]): pass def __call__(self, func): return func else: class LineProfilerModule(DevServerModule):", "LineProfilerModule(DevServerModule): def __new__(cls, *args, **kwargs): warnings.warn('LineProfilerModule requires line_profiler to be installed.') return super(LineProfilerModule,", "the first argument is actually self: request = args[1] try: request.devserver_profiler.add_function(func) request.devserver_profiler_run =", "super(MemoryUseModule, cls).__new__(cls) else: class MemoryUseModule(DevServerModule): \"\"\" Outputs a summary of memory usage of", "hpy() self.oldh = self.hpy.heap() self.logger.info('heap size is %s', filesizeformat(self.oldh.size)) def process_complete(self, request): newh", "request): newh = self.hpy.heap() alloch = newh - self.oldh dealloch = self.oldh -", "process_complete(self, request): duration = datetime.now() - self.start self.logger.info('Total time to render was %.2fs',", "except ImportError: import warnings class LineProfilerModule(DevServerModule): def __new__(cls, *args, **kwargs): warnings.warn('LineProfilerModule requires line_profiler", "django.template.defaultfilters import filesizeformat try: from guppy import hpy except ImportError: import warnings class", "is ready. 
\"\"\" logger_name = 'profile' def process_init(self, request): self.start = datetime.now() def", "super(MemoryUseModule, self).__init__(request) self.hpy = hpy() self.oldh = self.hpy.heap() self.logger.info('heap size is %s', filesizeformat(self.oldh.size))", "len(gc.garbage)) from django.template.defaultfilters import filesizeformat try: from guppy import hpy except ImportError: import", "def _unwrap_closure_and_profile(profiler, func): if not hasattr(func, 'func_code'): return profiler.add_function(func) if func.func_closure: for cell", "self.oldh - newh self.oldh = newh self.logger.info('%s allocated, %s deallocated, heap size is", "devserver.utils.time import ms_from_timedelta from devserver.settings import DEVSERVER_AUTO_PROFILE from datetime import datetime import functools", "heap size is %s', *map(filesizeformat, [alloch.size, dealloch.size, newh.size])) try: from line_profiler import LineProfiler", "False if (DEVSERVER_AUTO_PROFILE): _unwrap_closure_and_profile(request.devserver_profiler, view_func) request.devserver_profiler.enable_by_count() def process_complete(self, request): if hasattr(request, 'devserver_profiler_run') and", "actually self: request = args[1] try: request.devserver_profiler.add_function(func) request.devserver_profiler_run = True for f in", "= True for f in self.follow: request.devserver_profiler.add_function(f) request.devserver_profiler.enable_by_count() return func(*args, **kwargs) finally: request.devserver_profiler.disable_by_count()", "if hasattr(request, 'devserver_profiler_run') and (DEVSERVER_AUTO_PROFILE or request.devserver_profiler_run): from cStringIO import StringIO out =", "func): if not hasattr(func, 'func_code'): return profiler.add_function(func) if func.func_closure: for cell in func.func_closure:", "StringIO() if (DEVSERVER_AUTO_PROFILE): request.devserver_profiler.disable_by_count() request.devserver_profiler.print_stats(stream=out) self.logger.info(out.getvalue()) def _unwrap_closure_and_profile(profiler, func): 
if not hasattr(func, 'func_code'):", "class MemoryUseModule(DevServerModule): def __new__(cls, *args, **kwargs): warnings.warn('MemoryUseModule requires guppy to be installed.') return", "even sure this is correct, but the its a general idea logger_name =", "gc class ProfileSummaryModule(DevServerModule): \"\"\" Outputs a summary of cache events once a response", "in func.func_closure: if hasattr(cell.cell_contents, 'func_code'): _unwrap_closure_and_profile(profiler, cell.cell_contents) class devserver_profile(object): def __init__(self, follow=[]): self.follow", "request.devserver_profiler.enable_by_count() def process_complete(self, request): if hasattr(request, 'devserver_profiler_run') and (DEVSERVER_AUTO_PROFILE or request.devserver_profiler_run): from cStringIO", "func.func_closure: for cell in func.func_closure: if hasattr(cell.cell_contents, 'func_code'): _unwrap_closure_and_profile(profiler, cell.cell_contents) class devserver_profile(object): def", "logger_name = 'profile' def __init__(self, request): super(MemoryUseModule, self).__init__(request) self.hpy = hpy() self.oldh =", "logger_name = 'profile' def process_init(self, request): self.start = datetime.now() def process_complete(self, request): duration", "self.hpy.heap() self.logger.info('heap size is %s', filesizeformat(self.oldh.size)) def process_complete(self, request): newh = self.hpy.heap() alloch", "line_profiler import LineProfiler except ImportError: import warnings class LineProfilerModule(DevServerModule): def __new__(cls, *args, **kwargs):", "requires guppy to be installed.') return super(MemoryUseModule, cls).__new__(cls) else: class MemoryUseModule(DevServerModule): \"\"\" Outputs", "request.devserver_profiler_run = True for f in self.follow: request.devserver_profiler.add_function(f) request.devserver_profiler.enable_by_count() return func(*args, **kwargs) finally:", "memory usage of the course of a request. 
\"\"\" logger_name = 'profile' def", "_unwrap_closure_and_profile(profiler, func): if not hasattr(func, 'func_code'): return profiler.add_function(func) if func.func_closure: for cell in", "from devserver.modules import DevServerModule from devserver.utils.time import ms_from_timedelta from devserver.settings import DEVSERVER_AUTO_PROFILE from", "a summary of memory usage of the course of a request. \"\"\" logger_name", "by Line profile of any @devserver_profile'd functions that were run \"\"\" logger_name =", "self.oldh dealloch = self.oldh - newh self.oldh = newh self.logger.info('%s allocated, %s deallocated,", "= 'profile' def process_init(self, request): self.start = datetime.now() def process_complete(self, request): duration =", "ProfileSummaryModule(DevServerModule): \"\"\" Outputs a summary of cache events once a response is ready.", "import DEVSERVER_AUTO_PROFILE from datetime import datetime import functools import gc class ProfileSummaryModule(DevServerModule): \"\"\"", "(DEVSERVER_AUTO_PROFILE): _unwrap_closure_and_profile(request.devserver_profiler, view_func) request.devserver_profiler.enable_by_count() def process_complete(self, request): if hasattr(request, 'devserver_profiler_run') and (DEVSERVER_AUTO_PROFILE or" ]
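The `_unwrap_closure_and_profile` helper above registers a view function and, recursively, any functions captured in its closure with the profiler. A minimal Python 3 sketch of that closure walk, using the modern `__code__`/`__closure__` spellings of `func_code`/`func_closure` (the names `collect_closure_functions` and `make_view` are illustrative, not from the source):

```python
def collect_closure_functions(func, found=None):
    """Recursively gather a function plus any functions its closure captures."""
    if found is None:
        found = []
    if not hasattr(func, '__code__'):
        return found
    found.append(func)
    if func.__closure__:
        for cell in func.__closure__:
            if hasattr(cell.cell_contents, '__code__'):
                collect_closure_functions(cell.cell_contents, found)
    return found


def make_view():
    def helper(x):
        # nested helper that a naive profiler would miss
        return x * 2

    def view(request):
        return helper(request)

    return view


funcs = collect_closure_functions(make_view())
print([f.__name__ for f in funcs])  # → ['view', 'helper']
```

This is why `DEVSERVER_AUTO_PROFILE` can show line timings for helpers the view calls, not just the view body itself.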
x1=10
y1=10
x2=9
y2=10
x=(x1*x1+y1*y1)
y=(x2*x2+y2*y2)
xyz=(x+y)//2
average_x=(x1+x2)//2
average_y=(y1+y2)//2
average_x_2=average_x*average_x
average_y_2=average_y*average_y
average=average_x_2+average_y_2
new_x=(x1+x2)-average_x
new_y=(y1+y2)-average_y
new_2=new_x*new_x+new_y*new_y
disp=new_2
print(xyz-average,disp)
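In the small script above, `xyz` is the floored mean of the two squared norms and `average` is the squared norm of the floored midpoint, so `xyz - average` approximates the identity E[|p|²] − |E[p]|² = var(x) + var(y); the `//` divisions make the result inexact. A hedged Python 3 generalization with true division (`dispersion` is my name, not from the source):

```python
def dispersion(points):
    """Mean squared norm minus squared norm of the mean point.

    For 2-D points this equals var(x) + var(y) (population variance),
    which the integer-only script above approximates with `//`.
    """
    n = len(points)
    mean_sq = sum(x * x + y * y for x, y in points) / n
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    return mean_sq - (mx * mx + my * my)


print(dispersion([(10, 10), (9, 10)]))  # → 0.25
```

With the script's inputs the exact value is 0.25, while the floor-divided version prints 9: the rounding error dominates the quantity being measured.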
<filename>hyperion/helpers/__init__.py
"""
 Copyright 2018 Johns Hopkins University (Author: <NAME>)
 Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
"""

from .vector_reader import VectorReader
from .vector_class_reader import VectorClassReader
from .trial_data_reader import TrialDataReader
from .multi_test_trial_data_reader import MultiTestTrialDataReader
from .multi_test_trial_data_reader_v2 import MultiTestTrialDataReaderV2
from .classif_trial_data_reader import ClassifTrialDataReader
# from .sequence_reader import SequenceReader
# from .sequence_class_reader import SequenceClassReader
# from .sequence_post_reader import SequencePostReader
# from .sequence_post_class_reader import SequencePostClassReader
from .plda_factory import PLDAFactory
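The `__init__.py` above re-exports classes from submodules so callers can import them from the package root. A runnable sketch of the same re-export pattern using a throw-away package (`helpers_demo` and its contents are invented for illustration):

```python
import os
import sys
import tempfile

# build a throw-away package that re-exports a class from a submodule,
# mirroring the pattern used by hyperion/helpers/__init__.py
pkg_root = tempfile.mkdtemp()
pkg_dir = os.path.join(pkg_root, "helpers_demo")
os.makedirs(pkg_dir)

with open(os.path.join(pkg_dir, "vector_reader.py"), "w") as fh:
    fh.write("class VectorReader:\n    pass\n")

with open(os.path.join(pkg_dir, "__init__.py"), "w") as fh:
    fh.write("from .vector_reader import VectorReader\n")

sys.path.insert(0, pkg_root)
from helpers_demo import VectorReader  # the submodule never has to be named

print(VectorReader.__module__)  # → helpers_demo.vector_reader
```

Consumers depend only on `helpers_demo.VectorReader`, so the class can later move between submodules without breaking imports.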
import numpy as np
import pylab as pl
import matplotlib
import matplotlib.pyplot as plt
from collections import defaultdict
from copy import deepcopy
# … (further imports and the definitions of SAGE, EAGE, SAGE_XPS, N_MONTHS,
#    TYPES, OLDVERSION, and the listing of result files are missing from
#    this fragment)

TEST = False  # if True, just use the values evaluated on
LAST_ITERS = 10  # take the last XX iterations, USED ONLY FOR TEST currently
if LAST_ITERS > 1 and TEST:
    TAKE_MAX_SCORE = False
    DO_ONLY = {'colloc_syll': 'baseline', 't_colloc_syll': 'split vocab',
               # … (some entries missing)
               't_test_colloc_syll': 'with common test',
               't_nopfx_colloc_syll': 'split vocab no prefix',
               'test_coll_syll': 'colloc_syll_test',
               't_test_coll_syll': 't_colloc_syll_spl_vocab_test'}
if OLDVERSION:  # version before March 10
    DO_ONLY = {'syll': 'syll', 'colloc': 'colloc',
               't_readapt_colloc': 't_colloc_shr_vocab',
               't_syll': 't_syll_spl_vocab',
               't_readapt_colloc_wth_common': 't_colloc_wth_common',
               # … (truncated; other variants with entries such as
               #    'unigram share vocab', 'permuted split vocab',
               #    'with common' are only partially recoverable)
               }
#DO_ONLY = {}  # for cosmetics when preparing figures, e.g.
#DO_ONLY = {'t_colloc': 'colloc with topics'}
TAKE_MAX_SCORE = False  # in case of several results, otherwise do the mean+std
SORTED = True  # sort the histograms by score, disable at your own risk!
FACTOR_STD = 1.  # 1.96

scores_order = ("token_f-score token_precision token_recall "
                "boundary_f-score boundary_precision boundary_recall").split()

matplotlib.rcParams.update({'text.color': "black", 'axes.labelcolor': "black",
                            'xtick.color': "black", 'ytick.color': "black"})

plotted_results = {}  # plotted_results[month][cond][score_type]

# per result file (the enclosing loop and iteration detection, which matched
# "… iterations" / "Iteration …" lines against str(iternumber), are missing):
for line in f:
    last_lines.append(line)
try:
    if TEST and LAST_ITERS > 1:  # (rest of the condition missing)
        for iter_to_take in range(1, LAST_ITERS + 1):
            scores = [float(last_lines[-iter_to_take].split('\t')[i])
                      for i in range(6)]
            s_dict = dict(zip(scores_order, scores))
except:
    print "PARSE ERROR: parse went wrongly for", fname

if '-r' in fname:
    condname = 't_readapt'
    fname = fname.replace('-r', '')
if '-w+' in fname:
    fname = fname.replace('-w+', '_words_common')
elif '-c+' in fname:
    fname = fname.replace('-c+', '_collocs_common')
#elif '+' in fname: fname = … (replacement missing)
else:
    condname = '_'.join(fname.split('/')[-1].split('-')[3:]).split('.')[0]

if "single-context" in fname and not "single-context" in TYPES:
    continue
if "docs" in fname and not "topics" in TYPES:
    continue  # always plots basic results currently
doit = False
# … (doit is set from TYPES/fname)
if not doit:
    print "NOT DOING:", fname
else:
    print fname

########## cosmetic (for legends) ##########
if len(DO_ONLY):
    pass  # if condname … (remapping through DO_ONLY, truncated)
##########

if type(s_dict) == type({}) and len(s_dict) == 6:
    pass  # … (single dict of scores: appended, or the max kept if TAKE_MAX_SCORE)
elif type(s_dict) == type([]):
    for e in s_dict:
        for k, v in e.iteritems():
            results[condname][month - SAGE][k].append(v)

print results
fig = plt.figure(figsize=(12, 9), dpi=1200)
plt.xticks(xrange(N_MONTHS))
ax = plt.gca()
ax.set_ylim([0.55, 0.90])
ax.set_xlim([-0.1, N_MONTHS - 0.9])
ax.set_xticklabels(map(str, range(SAGE, EAGE + 1)))
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label]
             + ax.get_xticklabels() + ax.get_yticklabels()):
    item.set_fontsize(24)

for cond, a in results.iteritems():
    linetype = ''
    if "syll" in cond:
        linetype = '^-.'
    else:
        linetype = 'v-.'
    if "d_" in cond or "t_" in cond:  # the original `if "d_" or "t_" in cond`
                                      # was always true ("d_" is truthy)
        linetype = linetype[0] + '--'
    vals = None
    stddevs = None
    if TAKE_MAX_SCORE:
        vals = [x['token_f-score'] for x in a]
    else:
        vals = [np.mean(x['token_f-score']) for x in a]
        stddevs = [FACTOR_STD * np.std(x['token_f-score']) for x in a]
    # TODO (gaussian process or some smoothing)
    # plt.plot(map(lambda … (call truncated in the source)

plt.legend([l for l in results.iterkeys()], loc='best', ncol=4)
# plt.setp(ax.get_legend().get_texts(), … (call truncated in the source)

for month in xrange(SAGE, EAGE + 1):
    y_pos = [0.5]
    scores = []
    stddevs = []
    conds = []
    s_dicts = []
    for cond, a in results.iteritems():
        if TAKE_MAX_SCORE:
            score = a[month - SAGE]['token_f-score']
        else:
            score = np.mean(a[month - SAGE]['token_f-score'])
        stddev = FACTOR_STD * np.std(a[month - SAGE]['token_f-score'])
        # … (y_pos/scores/stddevs bookkeeping missing)
        conds.append(cond)
        s_dicts.append({'token_f-score': score,
                        'token_precision': np.mean(a[month - SAGE]['token_precision']),
                        'token_recall': np.mean(a[month - SAGE]['token_recall']),
                        'boundary_f-score': np.mean(a[month - SAGE]['boundary_f-score']),
                        'boundary_precision': np.mean(a[month - SAGE]['boundary_precision']),
                        'boundary_recall': np.mean(a[month - SAGE]['boundary_recall'])})
    plotted_results[month] = dict(zip(conds, s_dicts))
    if len(conds) == 0:
        continue
    y_pos = y_pos[:-1]
    fig = plt.figure(figsize=(9, len(y_pos)), dpi=1200)
    ax = plt.gca()
    tmp = zip(y_pos, scores, conds, ['g' for tmp_i in range(len(y_pos))])
    if OLDVERSION:
        tmp = map(lambda (y, s, cond, color): (y, s, cond, 'b')
                  if 't' == cond[0] or 'd' == cond[0]
                  else (y, s, cond, color), tmp)  # cond[0]=='b' for cond=='baseline'
    else:
        if TEST:
            tmp = map(lambda (y, s, sd, cond, color): (y, s, sd, cond, 'grey')
                      if 'b' == cond[0] else (y, s, sd, cond, color), tmp)
        else:
            tmp = map(lambda (y, s, sd, cond, color): (y, s, sd, cond, 'b')
                      if 'b' != cond[0] else (y, s, sd, cond, color), tmp)
    if SORTED:
        ys = map(lambda x: x[0], tmp)
        tmp = sorted(tmp, key=lambda x: x[1])
        tmp = map(lambda y, t: sum(((y,), t[1:]), ()), ys, tmp)
    if TAKE_MAX_SCORE:
        y_pos, scores, conds, colors = zip(*tmp)
        # plt.barh(y_pos, … (arguments truncated in the source)
    else:
        y_pos, scores, stddev, conds, colors = zip(*tmp)
        plt.barh(y_pos, scores, xerr=stddev, color=colors, ecolor='r', alpha=0.8)
    plt.yticks(map(lambda x: x + 0.5, y_pos), conds)
    plt.xlabel('token f-score')
    #plt.title('')
    plt.savefig('histogram_' + str(SAGE_XPS) + 'to' + str(month) + 'm.png',
                bbox_inches='tight')

from pandas import DataFrame
import pandas as pd

mydata = defaultdict(lambda: [])
ages_max_points = [0 for i in range(N_MONTHS)]
results_m = deepcopy(results)
for cond, a in results_m.iteritems():
    for i, x in enumerate(a):
        pass  # … (fills ages_max_points with the number of points per age)

# pad with NaNs up to the number of points we need to know about X (months)
for key, value in mydata.iteritems():
    for i, l in enumerate(value):
        l.extend([np.nan for j in range(ages_max_points[i] - len(l))])
    mydata[key] = [j for i in value for j in i]
    if np.all(map(np.isnan, mydata[key])):  # remove data that is only nan
        mydata.pop(key)

print mydata
print ">>> conditions that will be plotted"
print mydata.keys()
# mydataframe = … (DataFrame construction missing)

# earlier attempts with the python ggplot port, kept commented out:
#p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng) \
#    + stat_smooth(se=False) + xlab('age in months') + ylab('token f-score')
#p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng) \
#    + stat_smooth(se=True, method='lm', level=0.95) \
#    + xlab('age in months') + ylab('token f-score')
# my_lng = pd.melt(mydataframe[['months', 't_colloc … (truncated)

import rpy2.robjects as robj
# for dataframe conversion
from rpy2.robjects.packages import importr
from rpy2.robjects import globalenv
import pandas.rpy.common as com

#grdevices = importr('grDevices')
#robj.pandas2ri.activate()
#data_r = robj.conversion.py2ri(mydata)
lng_r = com.convert_to_r_dataframe(my_lng)
data_r = com.convert_to_r_dataframe(mydataframe)
globalenv['lng_r'] = lng_r
globalenv['data_r'] = data_r
globalenv['eage'] = EAGE
# … (sage and the beginning of the R command string, which builds a plot `p`
#    from the converted dataframe, are missing; the string continues:)
rstring = """ …
 + ylab('token f-score')\
 + scale_x_discrete('age in months', breaks=seq(eage,sage), labels=seq(eage,sage))\
 + coord_cartesian(xlim = c(eage, sage))\
 + theme_bw()\
 + scale_colour_discrete("model", drop=TRUE, limits=cLevels)\
 + scale_fill_discrete("model", drop=TRUE, limits=cLevels)\
 + scale_shape_discrete("model", drop=TRUE, limits=cLevels)\
 + scale_linetype_discrete("model", drop=TRUE, limits=cLevels)\
 + stat_smooth(level=0.68, size=1.8)\
 + theme(text = element_text(size=44))\
"""
#+ geom_point()\
#+ xlab('age in months')\
#+ ylab('token f-score')\
#+ scale_x_continuous('age in months', breaks=seq(eage,sage), limits=c(eage,sage))\
if len(DO_ONLY) < 5:
    rstring += """+ opts(legend.position = c(0.96, 0.5), legend.justification = c(1, 0.5),
 legend.background = element_rect(colour = "grey70", fill = "white"),
 legend.text=element_text(size=44), legend.title=element_text(size=44),
 legend.key.size=unit(2, "cm"), plot.margin=unit(c(1,1,1,1), "cm"))
"""
rstring += """
ggsave('ggplot2_progress.pdf', plot=p, width=22, height=16)
"""
plotFunc_2 = robj.r(rstring)

print "==================="
print "and now for the LaTeX tables"
print "==================="
print """
\\begin{table*}[ht]
\caption{Mean f-scores (f), precisions (p), and recalls (r) for different models depending …}
"""
print str(SAGE_XPS) + "-" + str(month),
if OLDVERSION:
    listmodels = ['syll', 't_syll_spl_vocab', 'colloc', 't_colloc_wth_common',
                  'colloc_syll', 't_colloc_syll_shr_vocab',
                  't_colloc_syll_spl_vocab', 't_colloc_syll_wth_common']
# listmodels = ['unigram', 'unigram share …, 'baseline', 'share vocab',
#               'split vocab', 'with common']  (newer list, truncated)
for cond in listmodels:
    s_dict = d[cond]
    f = s_dict[typ + '_f-score']
    p = s_dict[typ + '_precision']
    r = s_dict[typ + '_recall']
    print " & ",
    print "%.3f" % f,
    # … (the rest of the table formatting is truncated in the source)
fontsize=20) plt.savefig('progress_ages.png') matplotlib.rcParams.update({'font.size': 20})", "+ ylab('token f-score') # ggsave(p, 'ggplot_progress.png') import rpy2.robjects as robj import rpy2.robjects.pandas2ri #", "= [] for cond, a in results.iteritems(): score = 0 stddev = 0", "= [0.5] scores = [] stddevs = [] conds = [] s_dicts =", "= fname.replace('-c+', '_collocs_common') elif '+' in fname: fname = fname.replace('+', '_wth_common') condname =", "+= \"\"\"+ opts(legend.background = element_rect(colour = \"grey70\", fill = \"white\"), legend.text=element_text(size=44), legend.title=element_text(size=44), legend.key.size=unit(2,", "= dict(zip(scores_order, scores)) except: print \"PARSE ERROR: parse went wrongly for\", fname fname", "0 stddev = 0 if TAKE_MAX_SCORE: score = a[month-SAGE]['token_f-score'] else: score = np.mean(a[month-SAGE]['token_f-score'])", "some smoothing) plt.plot(map(lambda x: 'NaN' if x <= 0.0 else x, vals), linetype,", "len(last_lines) > LAST_ITERS+1: for iter_to_take in range(1,LAST_ITERS+1): scores = [float(last_lines[-iter_to_take].split('\\t')[i]) for i in", "if TAKE_MAX_SCORE: score = a[month-SAGE]['token_f-score'] else: score = np.mean(a[month-SAGE]['token_f-score']) stddev = FACTOR_STD*np.std(a[month-SAGE]['token_f-score']) if", "mean+std SORTED = True # sort the histograms by score, disable at your", "labels=seq(eage,sage))\\ + coord_cartesian(xlim = c(eage, sage))\\ + theme_bw()\\ + scale_colour_discrete(\"model\", drop=TRUE, limits=cLevels)\\ +", "TYPES: continue # always plots basic results currently doit = False with open", "test', 't_nopfx_colloc_syll': 'split vocab no prefix', 'test_coll_syll': 'baseline test', 't_test_colloc_syll': 'split vocab test'}", "pandas.rpy.common as com #grdevices = importr('grDevices') #robj.pandas2ri.activate() #data_r = robj.conversion.py2ri(mydata) lng_r = com.convert_to_r_dataframe(my_lng)", "or '-r.' 
in fname: condname = 't_readapt' fname = fname.replace('-r', '') if '-w+'", "currently if LAST_ITERS > 1 and TEST: TAKE_MAX_SCORE = False DO_ONLY = {'colloc_syll':", "scale_colour_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_fill_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_shape_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_linetype_discrete(\"model\",", "TEST: DO_ONLY = {'t_nopfx_colloc_syll_wth_common': 'with common no prefix', 't_test_colloc_syll_wth_common': 'with common test', 't_nopfx_colloc_syll':", "520) + range(1000,1005) #ITERS = range(1000,1005) #ITERS = range(600, 620) PREFIX = \"\"", "#ITERS = range(1000,1005) #ITERS = range(600, 620) PREFIX = \"\" #PREFIX = \"old_naima_XPs/\"", "open(fname) as f: last_lines = [] for line in f: last_lines.append(line) try: if", "for m in xrange(SAGE, EAGE+1)] # TODO if we don't want the stat_smooth", "f = s_dict[typ+'_f-score'] p = s_dict[typ+'_precision'] r = s_dict[typ+'_recall'] print \" & \",", "########## if type(s_dict) == type({}) and len(s_dict) == 6: if TAKE_MAX_SCORE: if results[condname][month-SAGE]['token_f-score']", "now for the LaTeX tables\" print \"===================\" header_table = \"\"\" \\\\begin{table*}[ht] \\caption{Mean f-scores", "# TODO (gaussian process or some smoothing) plt.plot(map(lambda x: 'NaN' if x <=", "xrange(SAGE, EAGE+1)] # TODO if we don't want the stat_smooth to know about", "'NaN' if x <= 0.0 else x, vals), linetype, linewidth=3.5, alpha=0.8) plt.xlabel('months') plt.ylabel('token", "= pd.melt(mydataframe[['months', 't_colloc syll shr vocab', 'colloc syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab', 'colloc', 'syll', 't_syll_spl_vocab']],", "elif '+' in fname: fname = fname.replace('+', '_wth_common') condname = '_'.join([condname] + fname.split('/')[-1].split('-')[3:]).split('.')[0]", "'token_precision': np.mean(a[month-SAGE]['token_precision']), 'token_recall': np.mean(a[month-SAGE]['token_recall']), 'boundary_f-score': 
np.mean(a[month-SAGE]['boundary_f-score']), 'boundary_precision': np.mean(a[month-SAGE]['boundary_precision']), 'boundary_recall': np.mean(a[month-SAGE]['boundary_recall'])}) plotted_results[month] = dict(zip(conds,", "elif '-sc' in fname: fname = fname.replace('-sc', '') condname = 't' if '-r+'", "sd, cond, color): (y, s, sd, cond, 'b') if 'no prefix' in cond", "f-score') #plt.title('') plt.savefig('histogram_' + str(SAGE_XPS) + 'to' + str(month) + 'm.png', bbox_inches='tight') from", "> 1 and len(last_lines) > LAST_ITERS+1: for iter_to_take in range(1,LAST_ITERS+1): scores = [float(last_lines[-iter_to_take].split('\\t')[i])", "[np.nan for j in range(ages_max_points[i] - len(l))] mydata[key] = [j for i in", "m in xrange(SAGE, EAGE+1)] #mydata['months'] = [[str(m) for i in range(ages_max_points[m-SAGE])] for m", "glob import readline # otherwise the wrong readline is imported by rpy2 SAGE_XPS", "= [x['token_f-score'] for x in a] else: vals = [np.mean(x['token_f-score']) for x in", "vals = None stddevs = None if TAKE_MAX_SCORE: vals = [x['token_f-score'] for x", "tmp) else: tmp = map(lambda (y, s, cond, color): (y, s, cond, 'b')", "deepcopy(results) for cond, a in results_m.iteritems(): for i, x in enumerate(a): if len(x['token_f-score'])", "\\multicolumn{3}{|c|}{t\\_coll\\_syll\\_spl\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_wth\\_com}\\\\\\\\ \"\"\" print header_table for typ in ['token', 'boundary']: print typ", "# my_lng = pd.melt(mydataframe[['months', 't_colloc syll shr vocab', 'colloc syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab', 'colloc',", "ylab('token f-score')\\ #+ scale_x_continuous('age in months', breaks=seq(eage,sage), limits=c(eage,sage))\\ # + scale_x_discrete('age in months')", "fname and not \"topics\" in TYPES: continue # always plots basic results currently", "fname = '/'.join(fname.split('/')[1:]) fname = fname.replace('coll-', 'colloc-') # old names if 'docs' in", "<- levels(lng_r$variable) p <- ggplot(data=lng_r, 
aes(x=months, y=value, group=variable, colour=variable, fill=variable, shape=variable, linetype=variable))\\ +", "for cond=='baseline' else: tmp = zip(y_pos, scores, stddevs, conds, ['g' for tmp_i in", "tmp_i in range(len(y_pos))]) if OLDVERSION: tmp = map(lambda (y, s, sd, cond, color):", "False DO_ONLY = {'colloc_syll': 'baseline', 't_colloc_syll': 'split vocab', 't_readapt_colloc_syll': 'share vocab', 't_colloc_syll_wth_common': 'with", "topics-based unigram condname = 'uni' condname = 'd_' + condname elif '-sc' in", "'colloc_syll', 't_colloc_syll', 't_colloc_syll_wth_common']], id_vars='months') if OLDVERSION: my_lng = pd.melt(mydataframe[['months', 't_colloc_syll_shr_vocab', 'colloc_syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab']],", "in s_dict: for k, v in e.iteritems(): results[condname][month-SAGE][k].append(v) print results fig = plt.figure(figsize=(12,", "+ 'm.png', bbox_inches='tight') from pandas import DataFrame from copy import deepcopy import pandas", "fname.split('/')[-1].split('-')[3:]).split('.')[0] else: condname = '_'.join(fname.split('/')[-1].split('-')[3:]).split('.')[0] ########## cosmetic (for legends) ########## if len(DO_ONLY): if", "'t_readapt_colloc_syll': 't_colloc_syll_shr_vocab', 't_colloc_syll_wth_common': 't_colloc_syll_wth_common'} if TEST: DO_ONLY = {'t_nopfx_colloc_syll_wth_common': 'with common no prefix',", "'d' == cond[0] else (y, s, sd, cond, color), tmp) else: if TEST:", "for the LaTeX tables\" print \"===================\" header_table = \"\"\" \\\\begin{table*}[ht] \\caption{Mean f-scores (f),", "31 N_MONTHS = EAGE-SAGE+1 #TYPES = [\"basic\", \"single-context\", \"topics\"] #TYPES = [\"basic\", \"topics\"]", "= \"white\"), legend.text=element_text(size=44), legend.title=element_text(size=44), legend.key.size=unit(2, \"cm\"), plot.margin=unit(c(1,1,1,1), \"cm\")) \"\"\" else: rstring += \"\"\"+", "for m in xrange(SAGE, EAGE+1)] #mydata['months'] = [[str(m) for i in range(ages_max_points[m-SAGE])] for", 
"colour=variable, fill=variable, shape=variable, linetype=variable))\\ + scale_y_continuous(name='token f-score')\\ + scale_x_discrete('age in months', breaks=seq(eage,sage), labels=seq(eage,sage))\\", "'') if '-w+' in fname: fname = fname.replace('-w+', '_words_common') elif '-c+' in fname:", "'with common no prefix', 't_test_colloc_syll_wth_common': 'with common test', 't_nopfx_colloc_syll': 'split vocab no prefix',", "version before March 10 LAST_ITERS = 10 # take the last XX iterations", "rpy2.robjects as robj import rpy2.robjects.pandas2ri # for dataframe conversion from rpy2.robjects.packages import importr", "\"\\\\\\\\\" print \"\\hline\" footer_table = \"\"\" \\end{tabular} \\label{results} \\end{scriptsize} \\end{center} \\end{table*} \"\"\" print", "score, disable at your own risk! FACTOR_STD = 1. # 1.96 for 95%", "= [float(last_lines[-iter_to_take].split('\\t')[i]) for i in range(6)] if not len(s_dict): s_dict = [dict(zip(scores_order, scores))]", "#+ xlab('age in months')\\ #+ ylab('token f-score')\\ #+ scale_x_continuous('age in months', breaks=seq(eage,sage), limits=c(eage,sage))\\", "# e.g. 
DO_ONLY = {'t_colloc': 'colloc with topics'} scores_order = \"token_f-score token_precision token_recall", "if TAKE_MAX_SCORE: results = defaultdict(lambda: [dict(zip(scores_order, [0 for i in range(len(scores_order))])) for tmp_i", "#mydata['months'] = [[str(m) for i in range(ages_max_points[m-SAGE])] for m in xrange(SAGE, EAGE+1)] #", "print \"and now for the R part\" print \"===================\" rstring = \"\"\" library(\"ggplot2\")", "(y, s, cond, 'b') if 'b' != cond[0] or 'd' == cond[0] else", "data_r = com.convert_to_r_dataframe(mydataframe) globalenv['lng_r'] = lng_r globalenv['data_r'] = data_r globalenv['eage'] = EAGE globalenv['sage']", "theme(text = element_text(size=44))\\ \"\"\" #+ geom_point()\\ #+ xlab('age in months')\\ #+ ylab('token f-score')\\", "dataset} \\\\vspace{-0.5cm} \\\\begin{center} \\\\begin{scriptsize} \\\\begin{tabular}{|c|ccc|ccc|ccc|ccc|ccc|ccc|ccc|ccc|} \\hline & \\multicolumn{3}{|c|}{syll} & \\multicolumn{3}{|c|}{t\\_syll} & \\multicolumn{3}{|c|}{colloc} &", "= defaultdict(lambda: [dict(zip(scores_order, [0 for i in range(len(scores_order))])) for tmp_i in range(N_MONTHS)]) for", "\"docs\" in fname and not \"topics\" in TYPES: continue # always plots basic", "(y, s, sd, cond, color), tmp) # cond[0]=='b' for cond=='baseline' else: tmp =", "s, cond, 'b') if 't' == cond[0] or 'd' == cond[0] else (y,", "= len(x['token_f-score']) mydata[cond].append(x['token_f-score']) mydata['months'] = [[m for i in range(ages_max_points[m-SAGE])] for m in", "= s_dict[typ+'_precision'] r = s_dict[typ+'_recall'] print \" & \", print \"%.3f\" % f,", "figures for papers # e.g. 
DO_ONLY = {'t_colloc': 'colloc with topics'} scores_order =", "if np.all(map(np.isnan, mydata[key])): # remove data that is only nan mydata.pop(key) print mydata", "in range(6)] if not len(s_dict): s_dict = [dict(zip(scores_order, scores))] else: s_dict.append(dict(zip(scores_order, scores))) else:", "(r) for different models depending on the size of dataset} \\\\vspace{-0.5cm} \\\\begin{center} \\\\begin{scriptsize}", "### 't_permuted_colloc_syll_wth_common': 'permuted with common', #'t_random_colloc_syll': 'random split vocab', ### 't_random_colloc_syll_wth_common': 'random with", "'baseline test', 't_test_colloc_syll': 'split vocab test'} if OLDVERSION: DO_ONLY = {'t_nopfx_coll_syll_wth_common': 't_colloc_syll_wth_common_nopfx', 't_test_coll_syll_wth_common':", "sd, cond, color): (y, s, sd, cond, 'b') if 't' == cond[0] or", "legend.title=element_text(size=44), legend.key.size=unit(2, \"cm\"), plot.margin=unit(c(1,1,1,1), \"cm\")) \"\"\" rstring += \"\"\" ggsave('ggplot2_progress.pdf', plot=p, width=22, height=16)", "[]) ages_max_points = [0 for i in xrange(SAGE, EAGE+1)] results_m = deepcopy(results) for", "TODO if we don't want the stat_smooth to know about X (months) for", "% r, print \"\\\\\\\\\" print \"\\hline\" footer_table = \"\"\" \\end{tabular} \\label{results} \\end{scriptsize} \\end{center}", "ax.set_ylim([0, len(y_pos)+1]) ax.set_xlim([0.6, 0.86]) if TEST: ax.set_xlim([0.7, 0.86]) tmp = () if TAKE_MAX_SCORE:", "from ggplot_import_* # #p = ggplot(aes(x='months', y='colloc'), data=mydataframe) + geom_point(color='lightgreen') + stat_smooth(se=True) +", "in mydata.keys() if k != 'months']], id_vars='months') #my_lng = pd.melt(mydataframe[['months', 'share vocab', 'baseline',", "0 if TAKE_MAX_SCORE: score = a[month-SAGE]['token_f-score'] else: score = np.mean(a[month-SAGE]['token_f-score']) stddev = FACTOR_STD*np.std(a[month-SAGE]['token_f-score'])", "+ xlab('age in months') + ylab('token f-score') # my_lng = pd.melt(mydataframe[['months', 't_colloc 
syll", "globalenv['sage'] = SAGE print \"===================\" print \"and now for the R part\" print", "& p & r & f & p & r \\\\\\\\ \\hline \"\"\"", "your own risk! FACTOR_STD = 1. # 1.96 for 95% confidence interval OLDVERSION", "== 0: continue y_pos = y_pos[:-1] fig = plt.figure(figsize=(9, len(y_pos)), dpi=1200) ax =", "#TYPES = [\"basic\", \"topics\"] TYPES = [\"basic\", \"single-context\"] TEST = False # if", "y_pos, scores, conds, colors = zip(*tmp) plt.barh(y_pos, scores, color=colors, ecolor='r', alpha=0.8) else: y_pos,", "'random with common', 'colloc3_syll': 'colloc3 syll', 't_colloc3_syll_collocs_common': 'colloc3 syll collocs common'} #'syll': 'syll',", "'d_' + condname elif '-sc' in fname: fname = fname.replace('-sc', '') condname =", "= 1. # 1.96 for 95% confidence interval OLDVERSION = False # version", "vocab', #'t_unigram': 'unigram split vocab'} #'t_readapt_colloc_syll_wth_common': 'share vocab with common', #'t_readapt_colloc_syll_wth_common2': 'share vocab", "condname = 'd_' + condname elif '-sc' in fname: fname = fname.replace('-sc', '')", "scale_x_continuous('age in months', breaks=seq(eage,sage), limits=c(eage,sage))\\ # + scale_x_discrete('age in months') if len(DO_ONLY) and", "size of dataset} \\\\vspace{-0.5cm} \\\\begin{center} \\\\begin{scriptsize} \\\\begin{tabular}{|c|ccc|ccc|ccc|ccc|ccc|ccc|ccc|ccc|} \\hline & \\multicolumn{3}{|c|}{syll} & \\multicolumn{3}{|c|}{t\\_syll} &", "SAGE print \"===================\" print \"and now for the R part\" print \"===================\" rstring", "sd, cond, 'b') if 'no prefix' in cond else (y, s, sd, cond,", "s_dicts = [] for cond, a in results.iteritems(): score = 0 stddev =", "= ggplot(aes(x='months', y='colloc'), data=mydataframe) + geom_point(color='lightgreen') + stat_smooth(se=True) + xlab('age in months') +", "\\\\begin{center} \\\\begin{scriptsize} \\\\begin{tabular}{|c|ccc|ccc|ccc|ccc|ccc|ccc|ccc|ccc|} \\hline & \\multicolumn{3}{|c|}{syll} & \\multicolumn{3}{|c|}{t\\_syll} & 
\\multicolumn{3}{|c|}{colloc} & \\multicolumn{3}{|c|}{t\\_coll\\_wth\\_common} &", "if 't' == cond[0] or 'd' == cond[0] else (y, s, cond, color),", "range(1000,1005) #ITERS = range(600, 620) PREFIX = \"\" #PREFIX = \"old_naima_XPs/\" TAKE_MAX_SCORE =", "in DO_ONLY: condname = DO_ONLY[condname] else: continue ########## /cosmetic (for legends) ########## if", "limits=cLevels)\\ + scale_linetype_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + stat_smooth(level=0.68, size=1.8)\\ + theme(text = element_text(size=44))\\ \"\"\"", "plotted_results.iteritems(): print str(SAGE_XPS) + \"-\" + str(month), if OLDVERSION: listmodels = ['syll', 't_syll_spl_vocab',", "TEST and (\"test\" in fname or \"nopfx\" in fname): continue if \"-sc\" in", "a test test ITERS = range(499, 520) + range(1000,1005) #ITERS = range(1000,1005) #ITERS", "enumerate(a): if len(x['token_f-score']) > ages_max_points[i]: ages_max_points[i] = len(x['token_f-score']) mydata[cond].append(x['token_f-score']) mydata['months'] = [[m for", "\"\"\" #+ geom_point()\\ #+ xlab('age in months')\\ #+ ylab('token f-score')\\ #+ scale_x_continuous('age in", "in line: doit = True break if not doit: print \"NOT DOING:\", fname", "+ 'to' + str(month) + 'm/nai*-' + str(SAGE_XPS) + '-' + str(month) +", "test'} if OLDVERSION: DO_ONLY = {'t_nopfx_coll_syll_wth_common': 't_colloc_syll_wth_common_nopfx', 't_test_coll_syll_wth_common': 't_colloc_syll_wth_common_test', 't_nopfx_coll_syll': 't_colloc_syll_spl_vocab_nopfx', 'test_coll_syll': 'colloc_syll_test',", "stddevs.append(stddev) conds.append(cond) s_dicts.append({'token_f-score': score, 'token_precision': np.mean(a[month-SAGE]['token_precision']), 'token_recall': np.mean(a[month-SAGE]['token_recall']), 'boundary_f-score': np.mean(a[month-SAGE]['boundary_f-score']), 'boundary_precision': np.mean(a[month-SAGE]['boundary_precision']), 'boundary_recall':", "LAST_ITERS > 1 and len(last_lines) > LAST_ITERS+1: for iter_to_take in range(1,LAST_ITERS+1): scores =", "#'syll': 
'syll', #'t_syll': 'syll split vocab', #'t_readapt_syll': 'syll share vocab'} #'unigram': 'unigram', 't_readapt_unigram':", "#+ scale_x_continuous('age in months', breaks=seq(eage,sage), limits=c(eage,sage))\\ # + scale_x_discrete('age in months') if len(DO_ONLY)", "'t_colloc_syll_shr_vocab', 't_colloc_syll_wth_common': 't_colloc_syll_wth_common'} if TEST: DO_ONLY = {'t_nopfx_colloc_syll_wth_common': 'with common no prefix', 't_test_colloc_syll_wth_common':", "ITERS = range(499, 520) + range(1000,1005) #ITERS = range(1000,1005) #ITERS = range(600, 620)", "(y, s, cond, color), tmp) else: tmp = map(lambda (y, s, cond, color):", "FACTOR_STD*np.std(a[month-SAGE]['token_f-score']) if score > 0: y_pos.append(y_pos[-1] + 1) scores.append(score) stddevs.append(stddev) conds.append(cond) s_dicts.append({'token_f-score': score,", "= [] for line in f: last_lines.append(line) try: if TEST and LAST_ITERS >", "\"-sc\" in fname and not \"single-context\" in TYPES: continue if \"docs\" in fname", "s, sd, cond, color), tmp) # cond[0]=='b' for cond=='baseline' if SORTED: ys =", "x: 'NaN' if x <= 0.0 else x, vals), linetype, linewidth=3.5, alpha=0.8) plt.xlabel('months')", "legend.background = element_rect(colour = \"grey70\", fill = \"white\"), legend.text=element_text(size=44), legend.title=element_text(size=44), legend.key.size=unit(2, \"cm\"), plot.margin=unit(c(1,1,1,1),", "ggsave('ggplot2_progress.pdf', plot=p, width=22, height=16) \"\"\" plotFunc_2 = robj.r(rstring) print \"===================\" print \"and now", "id_vars='months') #my_lng = pd.melt(mydataframe[['months', 'share vocab', 'baseline', 'with common', 'split vocab']], id_vars='months') #my_lng", "> results[condname][month-SAGE]['token_f-score']: results[condname][month-SAGE] = s_dict else: for k, v in s_dict.iteritems(): results[condname][month-SAGE][k].append(v) elif", "[] s_dicts = [] for cond, a in results.iteritems(): score = 0 stddev", "data=my_lng) + stat_smooth(se=True, method='lm', level=0.95) + xlab('age 
in months') + ylab('token f-score') #", "cosmetic (for legends) ########## if len(DO_ONLY): if condname in DO_ONLY: condname = DO_ONLY[condname]", "in value for j in i] if np.all(map(np.isnan, mydata[key])): # remove data that", "on a test test ITERS = range(499, 520) + range(1000,1005) #ITERS = range(1000,1005)", "stat_smooth(se=True, method='lm', level=0.95) + xlab('age in months') + ylab('token f-score') # p =", "as plt from collections import defaultdict import glob import readline # otherwise the", "TEST: ax.set_xlim([0.7, 0.86]) tmp = () if TAKE_MAX_SCORE: tmp = zip(y_pos, scores, conds,", "tmp = map(lambda y,t: sum(((y,), t[1:]), ()), ys, tmp) if TAKE_MAX_SCORE: y_pos, scores,", "plt.setp(ax.get_legend().get_texts(), fontsize=20) plt.savefig('progress_ages.png') matplotlib.rcParams.update({'font.size': 20}) matplotlib.rcParams.update({'text.color': \"black\"}) matplotlib.rcParams.update({'axes.labelcolor': \"black\"}) matplotlib.rcParams.update({'xtick.color': \"black\"}) matplotlib.rcParams.update({'ytick.color': \"black\"})", "matplotlib.rcParams.update({'ytick.color': \"black\"}) plotted_results = {} # plotted_results[month][cond][score_type] = mean for month in xrange(SAGE,", "in ITERS: striter = str(iternumber) if striter + \" iterations\" in line or", "my_lng = pd.melt(mydataframe[['months', 't_colloc syll shr vocab', 'colloc syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab', 'colloc', 'syll',", "common', #'t_random_colloc_syll': 'random split vocab', ### 't_random_colloc_syll_wth_common': 'random with common', 'colloc3_syll': 'colloc3 syll',", "+= \"\"\" ggsave('ggplot2_progress.pdf', plot=p, width=22, height=16) \"\"\" plotFunc_2 = robj.r(rstring) print \"===================\" print", "x <= 0.0 else x, vals), linetype, linewidth=3.5, alpha=0.8) plt.xlabel('months') plt.ylabel('token f-score') plt.legend([l", "\"token_f-score token_precision token_recall boundary_f-score boundary_precision boundary_recall\".split() results = 
defaultdict(lambda: [dict(zip(scores_order, [[] for i", "test ITERS = range(499, 520) + range(1000,1005) #ITERS = range(1000,1005) #ITERS = range(600,", "else: tmp = map(lambda (y, s, cond, color): (y, s, cond, 'b') if", "split vocab', ### 't_random_colloc_syll_wth_common': 'random with common', 'colloc3_syll': 'colloc3 syll', 't_colloc3_syll_collocs_common': 'colloc3 syll", "from rpy2.robjects.packages import importr from rpy2.robjects import globalenv import pandas.rpy.common as com #grdevices", "that will be plotted\" print mydata.keys() mydataframe = DataFrame(mydata) my_lng = pd.melt(mydataframe[['months'] +", "vocab']], id_vars='months') #my_lng = pd.melt(mydataframe[['months', 't_permuted_colloc_syll', 't_permuted_colloc_syll_wth_common', 'unigram', 't_unigram', 't_readapt_unigram', 'colloc_syll', 't_colloc_syll', 't_colloc_syll_wth_common']],", "fname or '-r.' in fname: condname = 't_readapt' fname = fname.replace('-r', '') if", "readline is imported by rpy2 SAGE_XPS = 11 SAGE = 12 EAGE =", "\"cm\")) \"\"\" else: rstring += \"\"\"+ opts(legend.background = element_rect(colour = \"grey70\", fill =", "'split vocab no prefix', 'test_coll_syll': 'baseline test', 't_test_colloc_syll': 'split vocab test'} if OLDVERSION:", "plots basic results currently doit = False with open (fname.replace(\".o\", \".e\")) as f:", "robj import rpy2.robjects.pandas2ri # for dataframe conversion from rpy2.robjects.packages import importr from rpy2.robjects", "\"test\" in fname and not \"nopfx\" in fname): continue elif not TEST and", "j in i] if np.all(map(np.isnan, mydata[key])): # remove data that is only nan", "elif not TEST and (\"test\" in fname or \"nopfx\" in fname): continue if", "of several results, otherwise do the mean+std SORTED = True # sort the", "vals = [x['token_f-score'] for x in a] else: vals = [np.mean(x['token_f-score']) for x", "ages_max_points[i]: ages_max_points[i] = len(x['token_f-score']) mydata[cond].append(x['token_f-score']) mydata['months'] = [[m for i 
in range(ages_max_points[m-SAGE])] for", "+ ylab('token f-score') # my_lng = pd.melt(mydataframe[['months', 't_colloc syll shr vocab', 'colloc syll',", "scores))) else: scores = [float(last_lines[-1].split('\\t')[i]) for i in range(6)] s_dict = dict(zip(scores_order, scores))", "True break if not doit: print \"NOT DOING:\", fname else: print fname scores", "mydata[cond].append(x['token_f-score']) mydata['months'] = [[m for i in range(ages_max_points[m-SAGE])] for m in xrange(SAGE, EAGE+1)]", "% f, print \" & \", print \"%.3f\" % p, print \" &", "\"%.3f\" % r, print \"\\\\\\\\\" print \"\\hline\" footer_table = \"\"\" \\end{tabular} \\label{results} \\end{scriptsize}", "y_pos[:-1] fig = plt.figure(figsize=(9, len(y_pos)), dpi=1200) ax = plt.gca() ax.set_ylim([0, len(y_pos)+1]) ax.set_xlim([0.6, 0.86])", "'t_nopfx_coll_syll': 't_colloc_syll_spl_vocab_nopfx', 'test_coll_syll': 'colloc_syll_test', 't_test_coll_syll': 't_colloc_syll_spl_vocab_test'} #DO_ONLY = {} # for cosmetics when", "str(month), if OLDVERSION: listmodels = ['syll', 't_syll_spl_vocab', 'colloc', 't_colloc_wth_common', 'colloc_syll', 't_colloc_syll_shr_vocab', 't_colloc_syll_spl_vocab', 't_colloc_syll_wth_common']", "#'t_readapt_colloc_syll_wth_common2': 'share vocab with common 2'} if OLDVERSION: DO_ONLY = {'syll': 'syll', 'colloc':", "for x in a] # TODO (gaussian process or some smoothing) plt.plot(map(lambda x:", "TAKE_MAX_SCORE: y_pos, scores, conds, colors = zip(*tmp) plt.barh(y_pos, scores, color=colors, ecolor='r', alpha=0.8) else:", "plotted_results = {} # plotted_results[month][cond][score_type] = mean for month in xrange(SAGE, EAGE+1): y_pos", "[] conds = [] s_dicts = [] for cond, a in results.iteritems(): score", "'split vocab', 't_readapt_colloc_syll': 'share vocab', 't_colloc_syll_wth_common': 'with common', #'t_permuted_colloc_syll': 'permuted split vocab', ###", "for\", fname fname = '/'.join(fname.split('/')[1:]) fname = fname.replace('coll-', 'colloc-') # old names if", "= [] s_dicts = 
[] for cond, a in results.iteritems(): score = 0", "zip(y_pos, scores, stddevs, conds, ['g' for tmp_i in range(len(y_pos))]) if OLDVERSION: tmp =", "for cond=='baseline' else: tmp = map(lambda (y, s, sd, cond, color): (y, s,", "'t_colloc_syll_wth_common', 't_colloc_syll_spl_vocab']], id_vars='months') # from ggplot_import_* # #p = ggplot(aes(x='months', y='colloc'), data=mydataframe) +", "EAGE-SAGE+1 #TYPES = [\"basic\", \"single-context\", \"topics\"] #TYPES = [\"basic\", \"topics\"] TYPES = [\"basic\",", "in fname: fname = fname.replace('-sc', '') condname = 't' if '-r+' in fname", "# take the last XX iterations as results (considering converged) # USED ONLY", "<- ggplot(data=lng_r, aes(x=months, y=value, group=variable, colour=variable, fill=variable, shape=variable, linetype=variable))\\ + scale_y_continuous(name='token f-score')\\ +", "+ scale_linetype_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + stat_smooth(level=0.68, size=1.8)\\ + theme(text = element_text(size=44))\\ \"\"\" #+", "the mean+std SORTED = True # sort the histograms by score, disable at", "loc='best', ncol=4) plt.setp(ax.get_legend().get_texts(), fontsize=20) plt.savefig('progress_ages.png') matplotlib.rcParams.update({'font.size': 20}) matplotlib.rcParams.update({'text.color': \"black\"}) matplotlib.rcParams.update({'axes.labelcolor': \"black\"}) matplotlib.rcParams.update({'xtick.color': \"black\"})", "[] for cond, a in results.iteritems(): score = 0 stddev = 0 if", "with open(fname) as f: last_lines = [] for line in f: last_lines.append(line) try:", "[\"basic\", \"single-context\"] TEST = False # if True, just use the values evaluated", "EAGE = 31 N_MONTHS = EAGE-SAGE+1 #TYPES = [\"basic\", \"single-context\", \"topics\"] #TYPES =", "\"Iteration \" + striter in line: doit = True break if not doit:", "ERROR: parse went wrongly for\", fname fname = '/'.join(fname.split('/')[1:]) fname = fname.replace('coll-', 'colloc-')", "tmp) # cond[0]=='b' for cond=='baseline' else: tmp = map(lambda 
(y, s, sd, cond,", "'colloc3 syll', 't_colloc3_syll_collocs_common': 'colloc3 syll collocs common'} #'syll': 'syll', #'t_syll': 'syll split vocab',", "# cond[0]=='b' for cond=='baseline' if SORTED: ys = map(lambda x: x[0], tmp) tmp", "EAGE+1)] # TODO if we don't want the stat_smooth to know about X", "geom_point()\\ #+ xlab('age in months')\\ #+ ylab('token f-score')\\ #+ scale_x_continuous('age in months', breaks=seq(eage,sage),", "'t_readapt_colloc_wth_common': 't_colloc_wth_common', 'colloc_syll': 'colloc_syll', 't_colloc_syll': 't_colloc_syll_spl_vocab', 't_readapt_colloc_syll': 't_colloc_syll_shr_vocab', 't_colloc_syll_wth_common': 't_colloc_syll_wth_common'} if TEST: DO_ONLY", "import pandas as pd mydata = defaultdict(lambda: []) ages_max_points = [0 for i", "in a] else: vals = [np.mean(x['token_f-score']) for x in a] stddevs = [FACTOR_STD*np.std(x['token_f-score'])", "= [] stddevs = [] conds = [] s_dicts = [] for cond,", "condname == '': # topics-based unigram condname = 'uni' condname = 'd_' +", "= None if TAKE_MAX_SCORE: vals = [x['token_f-score'] for x in a] else: vals", "= range(499, 520) + range(1000,1005) #ITERS = range(1000,1005) #ITERS = range(600, 620) PREFIX", "fname: fname = fname.replace('-sc', '') condname = 't' if '-r+' in fname or", "colors = zip(*tmp) plt.barh(y_pos, scores, color=colors, ecolor='r', alpha=0.8) else: y_pos, scores, stddev, conds,", "'ggplot_progress.png') import rpy2.robjects as robj import rpy2.robjects.pandas2ri # for dataframe conversion from rpy2.robjects.packages", "if True, just use the values evaluated on a test test ITERS =", "common', 'colloc3_syll': 'colloc3 syll', 't_colloc3_syll_collocs_common': 'colloc3 syll collocs common'} #'syll': 'syll', #'t_syll': 'syll", "= 31 N_MONTHS = EAGE-SAGE+1 #TYPES = [\"basic\", \"single-context\", \"topics\"] #TYPES = [\"basic\",", "ylab('token f-score') # ggsave(p, 'ggplot_progress.png') import rpy2.robjects as robj import rpy2.robjects.pandas2ri # for", "ggplot(aes(x='months', 
y='value', color='variable'), data=my_lng) + stat_smooth(se=True, method='lm', level=0.95) + xlab('age in months') +", "method='lm', level=0.95) + xlab('age in months') + ylab('token f-score') # p = ggplot(aes(x='months',", "'colloc with topics'} scores_order = \"token_f-score token_precision token_recall boundary_f-score boundary_precision boundary_recall\".split() results =", "in TYPES: continue # always plots basic results currently doit = False with", "the histograms by score, disable at your own risk! FACTOR_STD = 1. #", "type({}) and len(s_dict) == 6: if TAKE_MAX_SCORE: if results[condname][month-SAGE]['token_f-score'] == 0 or s_dict['token_f-score']", "(f), precisions (p), and recalls (r) for different models depending on the size", "cond => different color tmp = map(lambda (y, s, sd, cond, color): (y,", "print \"===================\" header_table = \"\"\" \\\\begin{table*}[ht] \\caption{Mean f-scores (f), precisions (p), and recalls", "s_dict: for k, v in e.iteritems(): results[condname][month-SAGE][k].append(v) print results fig = plt.figure(figsize=(12, 9),", "listmodels = ['syll', 't_syll_spl_vocab', 'colloc', 't_colloc_wth_common', 'colloc_syll', 't_colloc_syll_shr_vocab', 't_colloc_syll_spl_vocab', 't_colloc_syll_wth_common'] listmodels = ['unigram',", "+ scale_x_discrete('age in months') if len(DO_ONLY) and len(DO_ONLY) < 5: rstring += \"\"\"+", "\\multicolumn{3}{|c|}{colloc} & \\multicolumn{3}{|c|}{t\\_coll\\_wth\\_common} & \\multicolumn{3}{|c|}{coll\\_syll} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_shr\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_spl\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_wth\\_com}\\\\\\\\ \"\"\" print", "for l in results.iterkeys()], loc='best', ncol=4) plt.setp(ax.get_legend().get_texts(), fontsize=20) plt.savefig('progress_ages.png') matplotlib.rcParams.update({'font.size': 20}) matplotlib.rcParams.update({'text.color': \"black\"})", "results[condname][month-SAGE][k].append(v) elif type(s_dict) == type([]): for e 
in s_dict: for k, v in", "globalenv['data_r'] = data_r globalenv['eage'] = EAGE globalenv['sage'] = SAGE print \"===================\" print \"and", "and len(s_dict) == 6: if TAKE_MAX_SCORE: if results[condname][month-SAGE]['token_f-score'] == 0 or s_dict['token_f-score'] >", "zip(y_pos, scores, conds, ['g' for tmp_i in range(len(y_pos))]) if OLDVERSION: tmp = map(lambda", "'t_test_coll_syll': 't_colloc_syll_spl_vocab_test'} #DO_ONLY = {} # for cosmetics when preparing figures for papers", "fname and not \"single-context\" in TYPES: continue if \"docs\" in fname and not", "= fname.replace('-sc', '') condname = 't' if '-r+' in fname or '-r.' in", "== cond[0] else (y, s, sd, cond, color), tmp) # cond[0]=='b' for cond=='baseline'", "\"cm\"), plot.margin=unit(c(1,1,1,1), \"cm\")) \"\"\" rstring += \"\"\" ggsave('ggplot2_progress.pdf', plot=p, width=22, height=16) \"\"\" plotFunc_2", "linetype = linetype[0] + '--' vals = None stddevs = None if TAKE_MAX_SCORE:", "TODO (gaussian process or some smoothing) plt.plot(map(lambda x: 'NaN' if x <= 0.0", "OLDVERSION: tmp = map(lambda (y, s, cond, color): (y, s, cond, 'b') if", "if len(DO_ONLY) and len(DO_ONLY) < 5: rstring += \"\"\"+ opts(legend.position = c(0.96, 0.5),", "False # version before March 10 LAST_ITERS = 10 # take the last", "True, just use the values evaluated on a test test ITERS = range(499,", "for i in range(ages_max_points[m-SAGE])] for m in xrange(SAGE, EAGE+1)] # TODO if we", "'no prefix' in cond else (y, s, sd, cond, color), tmp) # \"no", "+ ax.get_yticklabels()): item.set_fontsize(24) for cond, a in results.iteritems(): linetype = '' if \"syll\"", "map(lambda (y, s, sd, cond, color): (y, s, sd, cond, 'b') if 'no", "results.iteritems(): score = 0 stddev = 0 if TAKE_MAX_SCORE: score = a[month-SAGE]['token_f-score'] else:", "or 'd' == cond[0] else (y, s, cond, color), tmp) # cond[0]=='b' for", "'unigram share vocab', 'unigram split vocab', 'baseline', 'share vocab', 'split vocab', 'with common']", "+ 
ylab('token f-score') # p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng) + stat_smooth(se=False) +", "zip(*tmp) plt.barh(y_pos, scores, xerr=stddev, color=colors, ecolor='r', alpha=0.8) plt.yticks(map(lambda x: x+0.5, y_pos), conds) plt.xlabel('token", "i, x in enumerate(a): if len(x['token_f-score']) > ages_max_points[i]: ages_max_points[i] = len(x['token_f-score']) mydata[cond].append(x['token_f-score']) mydata['months']", "matplotlib.rcParams.update({'font.size': 20}) matplotlib.rcParams.update({'text.color': \"black\"}) matplotlib.rcParams.update({'axes.labelcolor': \"black\"}) matplotlib.rcParams.update({'xtick.color': \"black\"}) matplotlib.rcParams.update({'ytick.color': \"black\"}) plotted_results = {}", "'t_colloc_wth_common', 'colloc_syll': 'colloc_syll', 't_colloc_syll': 't_colloc_syll_spl_vocab', 't_readapt_colloc_syll': 't_colloc_syll_shr_vocab', 't_colloc_syll_wth_common': 't_colloc_syll_wth_common'} if TEST: DO_ONLY =", "= lng_r globalenv['data_r'] = data_r globalenv['eage'] = EAGE globalenv['sage'] = SAGE print \"===================\"", "= EAGE-SAGE+1 #TYPES = [\"basic\", \"single-context\", \"topics\"] #TYPES = [\"basic\", \"topics\"] TYPES =", "currently doit = False with open (fname.replace(\".o\", \".e\")) as f: line = \"\"", "import DataFrame from copy import deepcopy import pandas as pd mydata = defaultdict(lambda:", "zip(*tmp) plt.barh(y_pos, scores, color=colors, ecolor='r', alpha=0.8) else: y_pos, scores, stddev, conds, colors =", "iternumber in ITERS: striter = str(iternumber) if striter + \" iterations\" in line", "plotted\" print mydata.keys() mydataframe = DataFrame(mydata) my_lng = pd.melt(mydataframe[['months'] + [k for k", "vocab', 'baseline', 'share vocab', 'split vocab', 'with common'] for cond in listmodels: s_dict", "if 'b' == cond[0] else (y, s, sd, cond, color), tmp) # cond[0]=='b'", "rpy2.robjects import globalenv import pandas.rpy.common as com #grdevices = importr('grDevices') #robj.pandas2ri.activate() 
#data_r =", "= 10 # take the last XX iterations as results (considering converged) #", "(y, s, sd, cond, color), tmp) # \"no prefix\" cond => different color", "pd.melt(mydataframe[['months', 't_colloc_syll_shr_vocab', 'colloc_syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab']], id_vars='months') # from ggplot_import_* # #p = ggplot(aes(x='months',", "x in a] # TODO (gaussian process or some smoothing) plt.plot(map(lambda x: 'NaN'", "\"\"\" plotFunc_2 = robj.r(rstring) print \"===================\" print \"and now for the LaTeX tables\"", "linetype[0] + '--' vals = None stddevs = None if TAKE_MAX_SCORE: vals =", "fname): continue elif not TEST and (\"test\" in fname or \"nopfx\" in fname):", "'to' + str(month) + 'm.png', bbox_inches='tight') from pandas import DataFrame from copy import", "defaultdict(lambda: [dict(zip(scores_order, [[] for i in range(len(scores_order))])) for tmp_i in range(N_MONTHS)]) if TAKE_MAX_SCORE:", "test test ITERS = range(499, 520) + range(1000,1005) #ITERS = range(1000,1005) #ITERS =", "'b') if 't' == cond[0] or 'd' == cond[0] else (y, s, sd,", "ONLY FOR TEST currently if LAST_ITERS > 1 and TEST: TAKE_MAX_SCORE = False", "cond, color): (y, s, cond, 'b') if 'b' != cond[0] or 'd' ==", "+ ax.get_xticklabels() + ax.get_yticklabels()): item.set_fontsize(24) for cond, a in results.iteritems(): linetype = ''", "vocab no prefix', 'test_coll_syll': 'baseline test', 't_test_colloc_syll': 'split vocab test'} if OLDVERSION: DO_ONLY", "0 or s_dict['token_f-score'] > results[condname][month-SAGE]['token_f-score']: results[condname][month-SAGE] = s_dict else: for k, v in", "we don't want the stat_smooth to know about X (months) for key, value", "syll', 't_colloc3_syll_collocs_common': 'colloc3 syll collocs common'} #'syll': 'syll', #'t_syll': 'syll split vocab', #'t_readapt_syll':", "ax = plt.gca() ax.set_ylim([0, len(y_pos)+1]) ax.set_xlim([0.6, 0.86]) if TEST: ax.set_xlim([0.7, 0.86]) tmp =", "len(DO_ONLY): if condname in DO_ONLY: condname 
= DO_ONLY[condname] else: continue ########## /cosmetic (for", "deepcopy import pandas as pd mydata = defaultdict(lambda: []) ages_max_points = [0 for", "len(DO_ONLY) and len(DO_ONLY) < 5: rstring += \"\"\"+ opts(legend.position = c(0.96, 0.5), legend.justification", "for cond in listmodels: s_dict = d[cond] f = s_dict[typ+'_f-score'] p = s_dict[typ+'_precision']", "= 11 SAGE = 12 EAGE = 31 N_MONTHS = EAGE-SAGE+1 #TYPES =", "# for cosmetics when preparing figures for papers # e.g. DO_ONLY = {'t_colloc':", "s_dict[typ+'_f-score'] p = s_dict[typ+'_precision'] r = s_dict[typ+'_recall'] print \" & \", print \"%.3f\"", "# #p = ggplot(aes(x='months', y='colloc'), data=mydataframe) + geom_point(color='lightgreen') + stat_smooth(se=True) + xlab('age in", "'t_colloc_syll_spl_vocab', 't_readapt_colloc_syll': 't_colloc_syll_shr_vocab', 't_colloc_syll_wth_common': 't_colloc_syll_wth_common'} if TEST: DO_ONLY = {'t_nopfx_colloc_syll_wth_common': 'with common no", "\"\"\"+ opts(legend.background = element_rect(colour = \"grey70\", fill = \"white\"), legend.text=element_text(size=44), legend.title=element_text(size=44), legend.key.size=unit(2, \"cm\"),", "dpi=1200) plt.xticks(xrange(N_MONTHS)) ax = plt.gca() ax.set_ylim([0.55, 0.90]) ax.set_xlim([-0.1, N_MONTHS - 0.9]) ax.set_xticklabels(map(str, range(SAGE,", "= \"\" for line in f: for iternumber in ITERS: striter = str(iternumber)", "linewidth=3.5, alpha=0.8) plt.xlabel('months') plt.ylabel('token f-score') plt.legend([l for l in results.iterkeys()], loc='best', ncol=4) plt.setp(ax.get_legend().get_texts(),", "in enumerate(value): value[i] = l + [np.nan for j in range(ages_max_points[i] - len(l))]", "\".e\")) as f: line = \"\" for line in f: for iternumber in", "'t_readapt_colloc_syll': 'share vocab', 't_colloc_syll_wth_common': 'with common', #'t_permuted_colloc_syll': 'permuted split vocab', ### 't_permuted_colloc_syll_wth_common': 'permuted", "results[condname][month-SAGE][k].append(v) print results fig = 
plt.figure(figsize=(12, 9), dpi=1200) plt.xticks(xrange(N_MONTHS)) ax = plt.gca() ax.set_ylim([0.55,", "id_vars='months') # from ggplot_import_* # #p = ggplot(aes(x='months', y='colloc'), data=mydataframe) + geom_point(color='lightgreen') +", "dataframe conversion from rpy2.robjects.packages import importr from rpy2.robjects import globalenv import pandas.rpy.common as", "sd, cond, 'b') if 't' == cond[0] or 'd' == cond[0] else (y,", "= s_dict[typ+'_recall'] print \" & \", print \"%.3f\" % f, print \" &", "for different models depending on the size of dataset} \\\\vspace{-0.5cm} \\\\begin{center} \\\\begin{scriptsize} \\\\begin{tabular}{|c|ccc|ccc|ccc|ccc|ccc|ccc|ccc|ccc|}", "= a[month-SAGE]['token_f-score'] else: score = np.mean(a[month-SAGE]['token_f-score']) stddev = FACTOR_STD*np.std(a[month-SAGE]['token_f-score']) if score > 0:", "& f & p & r & f & p & r \\\\\\\\", "!= cond[0] or 'd' == cond[0] else (y, s, cond, color), tmp) #", "p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng) + stat_smooth(se=False) + xlab('age in months') +", "as com #grdevices = importr('grDevices') #robj.pandas2ri.activate() #data_r = robj.conversion.py2ri(mydata) lng_r = com.convert_to_r_dataframe(my_lng) data_r", "score = 0 stddev = 0 if TAKE_MAX_SCORE: score = a[month-SAGE]['token_f-score'] else: score", "of dataset} \\\\vspace{-0.5cm} \\\\begin{center} \\\\begin{scriptsize} \\\\begin{tabular}{|c|ccc|ccc|ccc|ccc|ccc|ccc|ccc|ccc|} \\hline & \\multicolumn{3}{|c|}{syll} & \\multicolumn{3}{|c|}{t\\_syll} & \\multicolumn{3}{|c|}{colloc}", "ggplot(aes(x='months', y='value', color='variable'), data=my_lng) + stat_smooth(se=False) + xlab('age in months') + ylab('token f-score')", "for line in f: for iternumber in ITERS: striter = str(iternumber) if striter", "ggplot(aes(x='months', y='colloc'), data=mydataframe) + geom_point(color='lightgreen') + stat_smooth(se=True) + xlab('age in months') + ylab('token", "if OLDVERSION: tmp = map(lambda (y, s, cond, color): (y, s, cond, 
'b')", "for dataframe conversion from rpy2.robjects.packages import importr from rpy2.robjects import globalenv import pandas.rpy.common", "len(x['token_f-score']) > ages_max_points[i]: ages_max_points[i] = len(x['token_f-score']) mydata[cond].append(x['token_f-score']) mydata['months'] = [[m for i in", "LAST_ITERS+1: for iter_to_take in range(1,LAST_ITERS+1): scores = [float(last_lines[-iter_to_take].split('\\t')[i]) for i in range(6)] if", "str(month) + '*.o*'): if TEST and (not \"test\" in fname and not \"nopfx\"", "legend.justification = c(1, 0.5), legend.background = element_rect(colour = \"grey70\", fill = \"white\"), legend.text=element_text(size=44),", "= defaultdict(lambda: [dict(zip(scores_order, [[] for i in range(len(scores_order))])) for tmp_i in range(N_MONTHS)]) if", "\"white\"), legend.text=element_text(size=44), legend.title=element_text(size=44), legend.key.size=unit(2, \"cm\"), plot.margin=unit(c(1,1,1,1), \"cm\")) \"\"\" rstring += \"\"\" ggsave('ggplot2_progress.pdf', plot=p,", "1 and TEST: TAKE_MAX_SCORE = False DO_ONLY = {'colloc_syll': 'baseline', 't_colloc_syll': 'split vocab',", "[0 for i in range(len(scores_order))])) for tmp_i in range(N_MONTHS)]) for month in xrange(SAGE,", "colors = zip(*tmp) plt.barh(y_pos, scores, xerr=stddev, color=colors, ecolor='r', alpha=0.8) plt.yticks(map(lambda x: x+0.5, y_pos),", "or \"t_\" in cond: linetype = linetype[0] + '--' vals = None stddevs", "cond[0]=='b' for cond=='baseline' if SORTED: ys = map(lambda x: x[0], tmp) tmp =", "(y, s, sd, cond, 'b') if 't' == cond[0] or 'd' == cond[0]", "for e in s_dict: for k, v in e.iteritems(): results[condname][month-SAGE][k].append(v) print results fig", "for i in range(len(scores_order))])) for tmp_i in range(N_MONTHS)]) for month in xrange(SAGE, EAGE+1):", "cond: linetype = '^-.' else: linetype = 'v-.' 
if \"d_\" or \"t_\" in", "str(iternumber) if striter + \" iterations\" in line or \"Iteration \" + striter", "#plt.title('') plt.savefig('histogram_' + str(SAGE_XPS) + 'to' + str(month) + 'm.png', bbox_inches='tight') from pandas", "f-scores (f), precisions (p), and recalls (r) for different models depending on the", "10 # take the last XX iterations as results (considering converged) # USED", "syll shr vocab', 'colloc syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab', 'colloc', 'syll', 't_syll_spl_vocab']], id_vars='months') # #p", "\\\\begin{tabular}{|c|ccc|ccc|ccc|ccc|ccc|ccc|ccc|ccc|} \\hline & \\multicolumn{3}{|c|}{syll} & \\multicolumn{3}{|c|}{t\\_syll} & \\multicolumn{3}{|c|}{colloc} & \\multicolumn{3}{|c|}{t\\_coll\\_wth\\_common} & \\multicolumn{3}{|c|}{coll\\_syll} &", "k in mydata.keys() if k != 'months']], id_vars='months') #my_lng = pd.melt(mydataframe[['months', 'share vocab',", "typ in ['token', 'boundary']: print typ + \"\"\" & f & p &", "f: last_lines.append(line) try: if TEST and LAST_ITERS > 1 and len(last_lines) > LAST_ITERS+1:", "by score, disable at your own risk! FACTOR_STD = 1. 
# 1.96 for", "+ 'to' + str(month) + 'm.png', bbox_inches='tight') from pandas import DataFrame from copy", "'t_readapt_unigram': 'unigram share vocab', #'t_unigram': 'unigram split vocab'} #'t_readapt_colloc_syll_wth_common': 'share vocab with common',", "in e.iteritems(): results[condname][month-SAGE][k].append(v) print results fig = plt.figure(figsize=(12, 9), dpi=1200) plt.xticks(xrange(N_MONTHS)) ax =", "'unigram share vocab', #'t_unigram': 'unigram split vocab'} #'t_readapt_colloc_syll_wth_common': 'share vocab with common', #'t_readapt_colloc_syll_wth_common2':", "fname = fname.replace('coll-', 'colloc-') # old names if 'docs' in fname: condname =", "fill = \"white\"), legend.text=element_text(size=44), legend.title=element_text(size=44), legend.key.size=unit(2, \"cm\"), plot.margin=unit(c(1,1,1,1), \"cm\")) \"\"\" else: rstring +=", "\"===================\" print \"and now for the R part\" print \"===================\" rstring = \"\"\"", "'t_permuted_colloc_syll_wth_common': 'permuted with common', #'t_random_colloc_syll': 'random split vocab', ### 't_random_colloc_syll_wth_common': 'random with common',", "None stddevs = None if TAKE_MAX_SCORE: vals = [x['token_f-score'] for x in a]", "cond[0]=='b' for cond=='baseline' else: tmp = zip(y_pos, scores, stddevs, conds, ['g' for tmp_i", "'/'.join(fname.split('/')[1:]) fname = fname.replace('coll-', 'colloc-') # old names if 'docs' in fname: condname", "\"\"\" rstring += \"\"\" ggsave('ggplot2_progress.pdf', plot=p, width=22, height=16) \"\"\" plotFunc_2 = robj.r(rstring) print", "continue if \"-sc\" in fname and not \"single-context\" in TYPES: continue if \"docs\"", "cond=='baseline' if SORTED: ys = map(lambda x: x[0], tmp) tmp = sorted(tmp, key=lambda", "len(y_pos)+1]) ax.set_xlim([0.6, 0.86]) if TEST: ax.set_xlim([0.7, 0.86]) tmp = () if TAKE_MAX_SCORE: tmp", "drop=TRUE, limits=cLevels)\\ + scale_linetype_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + stat_smooth(level=0.68, size=1.8)\\ + theme(text = 
element_text(size=44))\\", "1) scores.append(score) stddevs.append(stddev) conds.append(cond) s_dicts.append({'token_f-score': score, 'token_precision': np.mean(a[month-SAGE]['token_precision']), 'token_recall': np.mean(a[month-SAGE]['token_recall']), 'boundary_f-score': np.mean(a[month-SAGE]['boundary_f-score']), 'boundary_precision':", "'test_coll_syll': 'colloc_syll_test', 't_test_coll_syll': 't_colloc_syll_spl_vocab_test'} #DO_ONLY = {} # for cosmetics when preparing figures", "range(ages_max_points[m-SAGE])] for m in xrange(SAGE, EAGE+1)] # TODO if we don't want the", "+ \"\"\" & f & p & r & f & p &", "tmp = sorted(tmp, key=lambda x: x[1]) tmp = map(lambda y,t: sum(((y,), t[1:]), ()),", "len(l))] mydata[key] = [j for i in value for j in i] if", "plt from collections import defaultdict import glob import readline # otherwise the wrong", "xrange(SAGE, EAGE+1): y_pos = [0.5] scores = [] stddevs = [] conds =", "iterations as results (considering converged) # USED ONLY FOR TEST currently if LAST_ITERS", "else: linetype = 'v-.' if \"d_\" or \"t_\" in cond: linetype = linetype[0]", "alpha=0.8) plt.xlabel('months') plt.ylabel('token f-score') plt.legend([l for l in results.iterkeys()], loc='best', ncol=4) plt.setp(ax.get_legend().get_texts(), fontsize=20)", "\"NOT DOING:\", fname else: print fname scores = [] s_dict = {} with", "aes(x=months, y=value, group=variable, colour=variable, fill=variable, shape=variable, linetype=variable))\\ + scale_y_continuous(name='token f-score')\\ + scale_x_discrete('age in", "& \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_shr\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_spl\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_wth\\_com}\\\\\\\\ \"\"\" print header_table for typ in ['token',", "i in range(len(scores_order))])) for tmp_i in range(N_MONTHS)]) if TAKE_MAX_SCORE: results = defaultdict(lambda: [dict(zip(scores_order,", "'' if \"syll\" in cond: linetype = '^-.' else: linetype = 'v-.' 
if", "rstring += \"\"\" ggsave('ggplot2_progress.pdf', plot=p, width=22, height=16) \"\"\" plotFunc_2 = robj.r(rstring) print \"===================\"", "in xrange(SAGE, EAGE+1)] #mydata['months'] = [[str(m) for i in range(ages_max_points[m-SAGE])] for m in", "bbox_inches='tight') from pandas import DataFrame from copy import deepcopy import pandas as pd", "'baseline', 'with common', 'split vocab']], id_vars='months') #my_lng = pd.melt(mydataframe[['months', 't_permuted_colloc_syll', 't_permuted_colloc_syll_wth_common', 'unigram', 't_unigram',", "for papers # e.g. DO_ONLY = {'t_colloc': 'colloc with topics'} scores_order = \"token_f-score", "cond, a in results.iteritems(): linetype = '' if \"syll\" in cond: linetype =", "type(s_dict) == type({}) and len(s_dict) == 6: if TAKE_MAX_SCORE: if results[condname][month-SAGE]['token_f-score'] == 0", "[0 for i in xrange(SAGE, EAGE+1)] results_m = deepcopy(results) for cond, a in", "pd.melt(mydataframe[['months'] + [k for k in mydata.keys() if k != 'months']], id_vars='months') #my_lng", "for i in range(6)] if not len(s_dict): s_dict = [dict(zip(scores_order, scores))] else: s_dict.append(dict(zip(scores_order,", "+ str(SAGE_XPS) + 'to' + str(month) + 'm/nai*-' + str(SAGE_XPS) + '-' +", "= pd.melt(mydataframe[['months', 't_colloc_syll_shr_vocab', 'colloc_syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab']], id_vars='months') # from ggplot_import_* # #p =", "line or \"Iteration \" + striter in line: doit = True break if", "= fname.replace('-w+', '_words_common') elif '-c+' in fname: fname = fname.replace('-c+', '_collocs_common') elif '+'", "= defaultdict(lambda: []) ages_max_points = [0 for i in xrange(SAGE, EAGE+1)] results_m =", "= {'colloc_syll': 'baseline', 't_colloc_syll': 'split vocab', 't_readapt_colloc_syll': 'share vocab', 't_colloc_syll_wth_common': 'with common', #'t_permuted_colloc_syll':", "'*.o*'): if TEST and (not \"test\" in fname and not \"nopfx\" in fname):", "& r \\\\\\\\ \\hline \"\"\" for month, 
d in plotted_results.iteritems(): print str(SAGE_XPS) +", "for j in range(ages_max_points[i] - len(l))] mydata[key] = [j for i in value", "'t' if '-r+' in fname or '-r.' in fname: condname = 't_readapt' fname", "LaTeX tables\" print \"===================\" header_table = \"\"\" \\\\begin{table*}[ht] \\caption{Mean f-scores (f), precisions (p),", "with topics'} scores_order = \"token_f-score token_precision token_recall boundary_f-score boundary_precision boundary_recall\".split() results = defaultdict(lambda:", "my_lng = pd.melt(mydataframe[['months', 't_colloc_syll_shr_vocab', 'colloc_syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab']], id_vars='months') # from ggplot_import_* # #p", "tmp_i in range(N_MONTHS)]) for month in xrange(SAGE, EAGE+1): for fname in glob.iglob(PREFIX+'naima_' +", "dict(zip(conds, s_dicts)) if len(conds) == 0: continue y_pos = y_pos[:-1] fig = plt.figure(figsize=(9,", "'test_coll_syll': 'baseline test', 't_test_colloc_syll': 'split vocab test'} if OLDVERSION: DO_ONLY = {'t_nopfx_coll_syll_wth_common': 't_colloc_syll_wth_common_nopfx',", "\"PARSE ERROR: parse went wrongly for\", fname fname = '/'.join(fname.split('/')[1:]) fname = fname.replace('coll-',", "s, sd, cond, 'grey') if 'b' == cond[0] else (y, s, sd, cond,", "== cond[0] else (y, s, sd, cond, color), tmp) else: if TEST: tmp", "= map(lambda x: x[0], tmp) tmp = sorted(tmp, key=lambda x: x[1]) tmp =", "vocab with common 2'} if OLDVERSION: DO_ONLY = {'syll': 'syll', 'colloc': 'colloc', 't_readapt_colloc':", "ax.xaxis.label, ax.yaxis.label] + ax.get_xticklabels() + ax.get_yticklabels()): item.set_fontsize(24) for cond, a in results.iteritems(): linetype", "\\multicolumn{3}{|c|}{syll} & \\multicolumn{3}{|c|}{t\\_syll} & \\multicolumn{3}{|c|}{colloc} & \\multicolumn{3}{|c|}{t\\_coll\\_wth\\_common} & \\multicolumn{3}{|c|}{coll\\_syll} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_shr\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_spl\\_voc}", "ggplot(data=lng_r, aes(x=months, 
y=value, group=variable, colour=variable, fill=variable, shape=variable, linetype=variable))\\ + scale_y_continuous(name='token f-score')\\ + scale_x_discrete('age", "'t_colloc_syll', 't_colloc_syll_wth_common']], id_vars='months') if OLDVERSION: my_lng = pd.melt(mydataframe[['months', 't_colloc_syll_shr_vocab', 'colloc_syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab']], id_vars='months')", "pd.melt(mydataframe[['months', 't_colloc syll shr vocab', 'colloc syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab', 'colloc', 'syll', 't_syll_spl_vocab']], id_vars='months')", "importr from rpy2.robjects import globalenv import pandas.rpy.common as com #grdevices = importr('grDevices') #robj.pandas2ri.activate()", "\" & \", print \"%.3f\" % r, print \"\\\\\\\\\" print \"\\hline\" footer_table =", "in fname): continue if \"-sc\" in fname and not \"single-context\" in TYPES: continue", "(for legends) ########## if len(DO_ONLY): if condname in DO_ONLY: condname = DO_ONLY[condname] else:", "'_'.join([condname] + fname.split('/')[-1].split('-')[3:]).split('.')[0] else: condname = '_'.join(fname.split('/')[-1].split('-')[3:]).split('.')[0] ########## cosmetic (for legends) ########## if", "with common', 'colloc3_syll': 'colloc3 syll', 't_colloc3_syll_collocs_common': 'colloc3 syll collocs common'} #'syll': 'syll', #'t_syll':", "vocab', 'with common'] for cond in listmodels: s_dict = d[cond] f = s_dict[typ+'_f-score']", "95% confidence interval OLDVERSION = False # version before March 10 LAST_ITERS =", "print \"\\hline\" footer_table = \"\"\" \\end{tabular} \\label{results} \\end{scriptsize} \\end{center} \\end{table*} \"\"\" print footer_table", "theme_bw()\\ + scale_colour_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_fill_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_shape_discrete(\"model\", drop=TRUE, limits=cLevels)\\", "plt.xticks(xrange(N_MONTHS)) ax = plt.gca() ax.set_ylim([0.55, 0.90]) ax.set_xlim([-0.1, N_MONTHS - 0.9]) 
ax.set_xticklabels(map(str, range(SAGE, EAGE+1)))", "'colloc syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab', 'colloc', 'syll', 't_syll_spl_vocab']], id_vars='months') # #p = ggplot(aes(x='months', y='value',", "= {} # for cosmetics when preparing figures for papers # e.g. DO_ONLY", "old names if 'docs' in fname: condname = '_'.join(fname.split('/')[-1].split('-')[-1].split('.')[0].split('_')[2:]) if condname == '':", "if LAST_ITERS > 1 and TEST: TAKE_MAX_SCORE = False DO_ONLY = {'colloc_syll': 'baseline',", "smoothing) plt.plot(map(lambda x: 'NaN' if x <= 0.0 else x, vals), linetype, linewidth=3.5,", "rpy2.robjects.packages import importr from rpy2.robjects import globalenv import pandas.rpy.common as com #grdevices =", "condname = '_'.join(fname.split('/')[-1].split('-')[3:]).split('.')[0] ########## cosmetic (for legends) ########## if len(DO_ONLY): if condname in", "breaks=seq(eage,sage), limits=c(eage,sage))\\ # + scale_x_discrete('age in months') if len(DO_ONLY) and len(DO_ONLY) < 5:", "+ xlab('age in months') + ylab('token f-score') # ggsave(p, 'ggplot_progress.png') import rpy2.robjects as", "stddev, conds, colors = zip(*tmp) plt.barh(y_pos, scores, xerr=stddev, color=colors, ecolor='r', alpha=0.8) plt.yticks(map(lambda x:", "matplotlib.rcParams.update({'xtick.color': \"black\"}) matplotlib.rcParams.update({'ytick.color': \"black\"}) plotted_results = {} # plotted_results[month][cond][score_type] = mean for month", "fname scores = [] s_dict = {} with open(fname) as f: last_lines =", "= 'd_' + condname elif '-sc' in fname: fname = fname.replace('-sc', '') condname", "+ scale_y_continuous(name='token f-score')\\ + scale_x_discrete('age in months', breaks=seq(eage,sage), labels=seq(eage,sage))\\ + coord_cartesian(xlim = c(eage,", "with open (fname.replace(\".o\", \".e\")) as f: line = \"\" for line in f:", "k != 'months']], id_vars='months') #my_lng = pd.melt(mydataframe[['months', 'share vocab', 'baseline', 'with common', 'split", "'unigram split 
vocab'} #'t_readapt_colloc_syll_wth_common': 'share vocab with common', #'t_readapt_colloc_syll_wth_common2': 'share vocab with common", "# in case of several results, otherwise do the mean+std SORTED = True", "= ggplot(aes(x='months', y='value', color='variable'), data=my_lng) + stat_smooth(se=False) + xlab('age in months') + ylab('token", "color), tmp) # cond[0]=='b' for cond=='baseline' else: tmp = zip(y_pos, scores, stddevs, conds,", "linetype=variable))\\ + scale_y_continuous(name='token f-score')\\ + scale_x_discrete('age in months', breaks=seq(eage,sage), labels=seq(eage,sage))\\ + coord_cartesian(xlim =", "#my_lng = pd.melt(mydataframe[['months', 'share vocab', 'baseline', 'with common', 'split vocab']], id_vars='months') #my_lng =", "scores = [float(last_lines[-iter_to_take].split('\\t')[i]) for i in range(6)] if not len(s_dict): s_dict = [dict(zip(scores_order,", "+ geom_point(color='lightgreen') + stat_smooth(se=True) + xlab('age in months') + ylab('token f-score') # my_lng", "TAKE_MAX_SCORE = False # in case of several results, otherwise do the mean+std", "f-score') # ggsave(p, 'ggplot_progress.png') import rpy2.robjects as robj import rpy2.robjects.pandas2ri # for dataframe", "+ scale_colour_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_fill_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_shape_discrete(\"model\", drop=TRUE, limits=cLevels)\\ +", "print \" & \", print \"%.3f\" % p, print \" & \", print", "if x <= 0.0 else x, vals), linetype, linewidth=3.5, alpha=0.8) plt.xlabel('months') plt.ylabel('token f-score')", "for month, d in plotted_results.iteritems(): print str(SAGE_XPS) + \"-\" + str(month), if OLDVERSION:", "plot=p, width=22, height=16) \"\"\" plotFunc_2 = robj.r(rstring) print \"===================\" print \"and now for", "= \"white\"), legend.text=element_text(size=44), legend.title=element_text(size=44), legend.key.size=unit(2, \"cm\"), plot.margin=unit(c(1,1,1,1), \"cm\")) \"\"\" rstring += \"\"\" 
import glob
import numpy as np
import pylab as pl
import matplotlib
import matplotlib.pyplot as plt
from collections import defaultdict

SAGE_XPS = 11
SAGE = 12
EAGE = 31
N_MONTHS = EAGE - SAGE + 1
#TYPES = ["basic", "single-context", "topics"]
#TYPES = ["basic", "topics"]
TYPES = ["basic", "single-context"]
TEST = False  # if True, just use the values evaluated on a test set
ITERS = range(499, 520) + range(1000, 1005)
#ITERS = range(1000, 1005)
#ITERS = range(600, 620)
PREFIX = ""
#PREFIX = "old_naima_XPs/"
TAKE_MAX_SCORE = False  # in case of several results, otherwise do the mean+std
SORTED = True  # sort the histogram bars; disable at your own risk!
FACTOR_STD = 1.  # 1.96 for 95% confidence interval
OLDVERSION = False  # version before March 10
LAST_ITERS = 10  # take the last XX iterations as results (considering converged)
                 # USED ONLY FOR TEST currently

DO_ONLY = {'colloc_syll': 'baseline',
           't_colloc_syll': 'split vocab',
           't_readapt_colloc_syll': 'share vocab',
           't_colloc_syll_wth_common': 'with common',
           #'t_permuted_colloc_syll': 'permuted split vocab',
           #'t_permuted_colloc_syll_wth_common': 'permuted with common',
           #'t_random_colloc_syll': 'random split vocab',
           #'t_random_colloc_syll_wth_common': 'random ...
           'unigram': 'unigram',
           't_readapt_unigram': 'unigram share vocab',
           't_unigram': 'unigram split vocab',
           }
#'t_readapt_colloc_syll_wth_common': 'share vocab with common',
#'t_readapt_colloc_syll_wth_common2': 'share vocab with common 2'
if OLDVERSION:
    DO_ONLY = {'syll': 'syll',
               'colloc': 'colloc',
               't_readapt_colloc': 't_colloc_shr_vocab',
               't_syll': 't_syll_spl_vocab',
               't_readapt_colloc_wth_common': 't_colloc_wth_common',
               'colloc_syll': 'colloc_syll',
               't_colloc_syll': 't_colloc_syll_spl_vocab',
               't_readapt_colloc_syll': 't_colloc_syll_shr_vocab',
               't_colloc_syll_wth_common': 't_colloc_syll_wth_common',
               't_colloc3_syll_collocs_common': 'colloc3 syll collocs common'}
if TEST:
    TAKE_MAX_SCORE = False
    DO_ONLY = {'colloc_syll': 'baseline',
               't_colloc_syll': 'split vocab',
               't_readapt_colloc_syll': 'share vocab',
               't_nopfx_colloc_syll_wth_common': 'with common no prefix',
               't_test_colloc_syll_wth_common': 'with common test',
               't_nopfx_colloc_syll': 'split vocab no prefix',
               'test_coll_syll': 'baseline test',
               't_test_colloc_syll': 'split vocab test'}
    if OLDVERSION:
        DO_ONLY = {'t_nopfx_coll_syll_wth_common': 't_colloc_syll_wth_common_nopfx',
                   't_test_coll_syll_wth_common': 't_colloc_syll_wth_common_test',
                   't_nopfx_coll_syll': 't_colloc_syll_spl_vocab_nopfx',
                   'test_coll_syll': 'colloc_syll_test',
                   't_test_coll_syll': 't_colloc_syll_spl_vocab_test'}
#DO_ONLY = {}
# for cosmetics when preparing figures for papers,
# e.g. DO_ONLY = {'t_colloc': 'colloc with topics'}

scores_order = "token_f-score token_precision token_recall boundary_f-score boundary_precision boundary_recall".split()
results = defaultdict(lambda: [dict(zip(scores_order, [[] for i in range(len(scores_order))]))
                               for tmp_i in range(N_MONTHS)])
if TAKE_MAX_SCORE:
    results = defaultdict(lambda: [dict(zip(scores_order, [0 for i in range(len(scores_order))]))
                                   for tmp_i in range(N_MONTHS)])

for month in xrange(SAGE, EAGE+1):
    for fname in glob.iglob(PREFIX + 'naima_' + str(SAGE_XPS) + 'to' + str(month)
                            + 'm/nai*-' + str(SAGE_XPS) + '-' + str(month) + '*.o*'):
        if TEST and (not "test" in fname and not "nopfx" in fname):
            continue
        elif not TEST and ("test" in fname or "nopfx" in fname):
            continue
        if "-sc" in fname and not "single-context" in TYPES:
            continue
        if "docs" in fname and not "topics" in TYPES:
            continue
        # always plots basic results currently
        doit = False
        with open(fname.replace(".o", ".e")) as f:
            line = ""
            for line in f:
                for iternumber in ITERS:
                    striter = str(iternumber)
                    if striter + " iterations" in line or "Iteration " + striter in line:
                        doit = True
                        break
        if not doit:
            print "NOT DOING:", fname
        else:
            print fname
            scores = []
            s_dict = {}
            with open(fname) as f:
                last_lines = []
                for line in f:
                    last_lines.append(line)
                try:
                    if TEST and LAST_ITERS > 1 and len(last_lines) > LAST_ITERS+1:
                        for iter_to_take in range(1, LAST_ITERS+1):
                            scores = [float(last_lines[-iter_to_take].split('\t')[i]) for i in range(6)]
                            if not s_dict:  # first iteration taken: start the list
                                s_dict = [dict(zip(scores_order, scores))]
                            else:
                                s_dict.append(dict(zip(scores_order, scores)))
                    else:
                        scores = [float(last_lines[-1].split('\t')[i]) for i in range(6)]
                        s_dict = dict(zip(scores_order, scores))
                except:
                    print "PARSE ERROR: parse went wrongly for", fname
            fname = '/'.join(fname.split('/')[1:])
            fname = fname.replace('coll-', 'colloc-')  # old names
            if 'docs' in fname:
                condname = '_'.join(fname.split('/')[-1].split('-')[-1].split('.')[0].split('_')[2:])
                if condname == '':  # topics-based unigram
                    condname = 'uni'
                condname = 'd_' + condname
            elif '-sc' in fname:
                fname = fname.replace('-sc', '')
                condname = 't'
                if '-r+' in fname or '-r.' in fname:
                    condname = 't_readapt'
                    fname = fname.replace('-r', '')
                if '-w+' in fname:
                    fname = fname.replace('-w+', '_words_common')
                elif '-c+' in fname:
                    fname = fname.replace('-c+', '_collocs_common')
                elif '+' in fname:
                    fname = fname.replace('+', '_wth_common')
                condname = '_'.join([condname] + fname.split('/')[-1].split('-')[3:]).split('.')[0]
            else:
                condname = '_'.join(fname.split('/')[-1].split('-')[3:]).split('.')[0]
            ########## cosmetic (for legends) ##########
            if len(DO_ONLY):
                if condname in DO_ONLY:
                    condname = DO_ONLY[condname]
                else:
                    continue
            ########## /cosmetic (for legends) ##########
            if type(s_dict) == type({}) and len(s_dict) == 6:
                if TAKE_MAX_SCORE:
                    if results[condname][month-SAGE]['token_f-score'] == 0 or s_dict['token_f-score'] > results[condname][month-SAGE]['token_f-score']:
                        results[condname][month-SAGE] = s_dict
                else:
                    for k, v in s_dict.iteritems():
                        results[condname][month-SAGE][k].append(v)
            elif type(s_dict) == type([]):
                for e in s_dict:
                    for k, v in e.iteritems():
                        results[condname][month-SAGE][k].append(v)

fig = plt.figure(figsize=(12, 9), dpi=1200)
plt.xticks(xrange(N_MONTHS))
ax = plt.gca()
ax.set_ylim([0.55, 0.90])
ax.set_xlim([-0.1, N_MONTHS - 0.9])
ax.set_xticklabels(map(str, range(SAGE, EAGE+1)))
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label]
             + ax.get_xticklabels() + ax.get_yticklabels()):
    item.set_fontsize(24)
for cond, a in results.iteritems():
    if '...' in cond:  # (marker-selection condition truncated in the source)
        linetype = '^-.'
    else:
        linetype = 'v-.'
    # note: `if "d_" or "t_" in cond` was always true (a non-empty string
    # literal is truthy); each substring needs its own membership test
    if "d_" in cond or "t_" in cond:
        linetype = linetype[0] + '--'
    vals = None
    stddevs = None
    if TAKE_MAX_SCORE:
        vals = [x['token_f-score'] for x in a]
    else:
        vals = [np.mean(x['token_f-score']) for x in a]
        stddevs = [FACTOR_STD * np.std(x['token_f-score']) for x in a]
    # TODO (gaussian process or some smoothing)
    plt.plot(map(lambda x: 'NaN' if x <= 0.0 else x, vals),
             linetype, linewidth=3.5, alpha=0.8)
plt.xlabel('months')
plt.ylabel('token f-score')
plt.legend([l for l in results.iterkeys()], loc='best', ncol=4)
plt.setp(ax.get_legend().get_texts(), fontsize=20)
plt.savefig('progress_ages.png')

matplotlib.rcParams.update({'font.size': 20})
matplotlib.rcParams.update({'text.color': "black"})
matplotlib.rcParams.update({'axes.labelcolor': "black"})
matplotlib.rcParams.update({'xtick.color': "black"})
matplotlib.rcParams.update({'ytick.color': "black"})
plotted_results = {}  # plotted_results[month][cond][score_type] = mean
for month in xrange(SAGE, EAGE+1):
    y_pos = [0.5]
    scores = []
    stddevs = []
    conds = []
    s_dicts = []
    for cond, a in results.iteritems():
        score = 0
        stddev = 0
        if TAKE_MAX_SCORE:
            score = a[month-SAGE]['token_f-score']
        else:
            score = np.mean(a[month-SAGE]['token_f-score'])
            stddev = FACTOR_STD * np.std(a[month-SAGE]['token_f-score'])
        if score > 0:
            y_pos.append(y_pos[-1] + 1)
            scores.append(score)
            stddevs.append(stddev)
            conds.append(cond)
            s_dicts.append({'token_f-score': score,
                            'token_precision': np.mean(a[month-SAGE]['token_precision']),
                            'token_recall': np.mean(a[month-SAGE]['token_recall']),
                            'boundary_f-score': np.mean(a[month-SAGE]['boundary_f-score']),
                            'boundary_precision': np.mean(a[month-SAGE]['boundary_precision']),
                            'boundary_recall': np.mean(a[month-SAGE]['boundary_recall'])})
    plotted_results[month] = dict(zip(conds, s_dicts))
    if len(conds) == 0:
        continue
    y_pos = y_pos[:-1]
    fig = plt.figure(figsize=(9, len(y_pos)), dpi=1200)
    ax = plt.gca()
    ax.set_ylim([0, len(y_pos)+1])
    ax.set_xlim([0.6, 0.86])
    if TEST:
        ax.set_xlim([0.7, 0.86])
    tmp = ()
    if TAKE_MAX_SCORE:
        tmp = zip(y_pos, scores, conds, ['g' for tmp_i in range(len(y_pos))])
        if OLDVERSION:
            tmp = map(lambda (y, s, cond, color): (y, s, cond, 'b') if 't' == cond[0] or 'd' == cond[0] else (y, s, cond, color), tmp)
        else:
            tmp = map(lambda (y, s, cond, color): (y, s, cond, 'b') if 'b' != cond[0] or 'd' == cond[0] else (y, s, cond, color), tmp)  # cond[0]=='b' for cond=='baseline'
    else:
        tmp = zip(y_pos, scores, stddevs, conds, ['g' for tmp_i in range(len(y_pos))])
        if OLDVERSION:
            tmp = map(lambda (y, s, sd, cond, color): (y, s, sd, cond, 'b') if 't' == cond[0] or 'd' == cond[0] else (y, s, sd, cond, color), tmp)
        else:
            if TEST:
                tmp = map(lambda (y, s, sd, cond, color): (y, s, sd, cond, 'b') if 'no prefix' in cond else (y, s, sd, cond, color), tmp)  # "no prefix" cond => different color
                tmp = map(lambda (y, s, sd, cond, color): (y, s, sd, cond, 'grey') if 'b' == cond[0] else (y, s, sd, cond, color), tmp)
            tmp = map(lambda (y, s, sd, cond, color): (y, s, sd, cond, 'b') if 'b' != cond[0] else (y, s, sd, cond, color), tmp)
    if SORTED:
        ys = map(lambda x: x[0], tmp)
        tmp = sorted(tmp, key=lambda x: x[1])
        tmp = map(lambda y, t: sum(((y,), t[1:]), ()), ys, tmp)
    if TAKE_MAX_SCORE:
        y_pos, scores, conds, colors = zip(*tmp)
        plt.barh(y_pos, scores, color=colors, ecolor='r', alpha=0.8)
    else:
        y_pos, scores, stddev, conds, colors = zip(*tmp)
        plt.barh(y_pos, scores, xerr=stddev, color=colors, ecolor='r', alpha=0.8)
    plt.yticks(map(lambda x: x+0.5, y_pos), conds)
    plt.xlabel('token f-score')
    #plt.title('')
    plt.savefig('histogram_' + str(SAGE_XPS) + 'to' + str(month) + 'm.png',
                bbox_inches='tight')

from pandas import DataFrame
from copy import deepcopy
import pandas as pd

mydata = defaultdict(lambda: [])
ages_max_points = [0 for i in xrange(SAGE, EAGE+1)]
results_m = deepcopy(results)
for cond, a in results_m.iteritems():
    for i, x in enumerate(a):
        if len(x['token_f-score']) > ages_max_points[i]:
            ages_max_points[i] = len(x['token_f-score'])
        mydata[cond].append(x['token_f-score'])
mydata['months'] = [[m for i in range(ages_max_points[m-SAGE])] for m in xrange(SAGE, EAGE+1)]
#mydata['months'] = [[str(m) for i in range(ages_max_points[m-SAGE])] for m in xrange(SAGE, EAGE+1)]
# TODO if we don't want the stat_smooth to know about X (months)
for key, value in mydata.items():  # items() (a copy): we pop keys inside the loop
    for i, l in enumerate(value):
        value[i] = l + [np.nan for j in range(ages_max_points[i] - len(l))]
    mydata[key] = [j for i in value for j in i]
    if np.all(map(np.isnan, mydata[key])):  # remove data that is only nan
        mydata.pop(key)
print mydata
print mydata.keys()

mydataframe = DataFrame(mydata)
my_lng = pd.melt(mydataframe[['months'] + [k for k in mydata.keys() if k != 'months']],
                 id_vars='months')
#my_lng = pd.melt(mydataframe[['months', 'share vocab', 'baseline', 'with common', 'split vocab']], id_vars='months')
#my_lng = pd.melt(mydataframe[['months', 't_permuted_colloc_syll', 't_permuted_colloc_syll_wth_common', 'unigram', 't_unigram', 't_readapt_unigram', 'colloc_syll', 't_colloc_syll', 't_colloc_syll_wth_common']], id_vars='months')
if OLDVERSION:
    my_lng = pd.melt(mydataframe[['months', 't_colloc_syll_shr_vocab', 'colloc_syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab']], id_vars='months')
# from ggplot_import_*
#p = ggplot(aes(x='months', y='colloc'), data=mydataframe) + geom_point(color='lightgreen')
#p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng) + stat_smooth(se=True, method='lm', level=0.95) + xlab('age in months') + ylab('token f-score')
#p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng) + stat_smooth(se=False) + xlab('age in months') + ylab('token f-score')
#ggsave(p, ...)

import rpy2.robjects as robj
import rpy2.robjects.pandas2ri  # for dataframe conversion
from rpy2.robjects.packages import importr
from rpy2.robjects import globalenv
import pandas.rpy.common as com
#grdevices = importr('grDevices')
#robj.conversion.py2ri(mydata)
lng_r = com.convert_to_r_dataframe(my_lng)
data_r = com.convert_to_r_dataframe(mydataframe)
globalenv['lng_r'] = lng_r
globalenv['data_r'] = data_r
globalenv['eage'] = EAGE
globalenv['sage'] = SAGE
print "==================="
print "and now for the R part"
print "==================="
rstring = """
library("ggplot2")
library("grid")
#print(lng_r)
#print(factor(lng_r$months))
#print(factor(lng_r$variable))
cLevels <- levels(lng_r$variable)
p <- ggplot(data=lng_r, aes(x=months, y=value, group=variable, colour=variable, fill=variable, shape=variable, linetype=variable))\
+ scale_y_continuous(name='token f-score')\
+ scale_x_discrete('age in months', breaks=seq(eage,sage), labels=seq(eage,sage))\
+ coord_cartesian(xlim = c(eage, sage))\
+ theme_bw()\
+ scale_linetype_discrete("model", drop=TRUE, limits=cLevels)\
+ stat_smooth(level=0.68, size=1.8)\
+ theme(text = element_text(size=44))\
"""
#+ geom_point()\
#+ xlab('age in months')\
#+ ylab('token f-score')\
#+ scale_x_continuous('age in months', breaks=seq(eage,sage), limits=c(eage,sage))\
#+ scale_x_discrete('age in months')
if len(DO_ONLY) and len(DO_ONLY) < 5:
    rstring += """+ opts(legend.position = c(0.96, 0.5), legend.justification = c(1, 0.5), legend.background = element_rect(colour = "grey70", fill = "white"), legend.text=element_text(size=44), legend.title=element_text(size=44), legend.key.size=unit(2, "cm"), plot.margin=unit(c(1,1,1,1), "cm"))
"""
else:
    rstring += """+ opts(legend.background = element_rect(colour = "grey70", fill = "white"), legend.text=element_text(size=44), legend.title=element_text(size=44), legend.key.size=unit(2, "cm"), plot.margin=unit(c(1,1,1,1), "cm"))
"""
rstring += """
ggsave('ggplot2_progress.pdf', plot=p, width=22, height=16)
"""
plotFunc_2 = robj.r(rstring)

print "==================="
print "and now for the LaTeX tables"
print "==================="
header_table = """
\\begin{table*}[ht]
\caption{Mean f-scores (f), precisions (p), and recalls (r) for different models depending on the size of dataset}
\\vspace{-0.5cm}
\\begin{center}
\\begin{scriptsize}
\\begin{tabular}{|c|ccc|ccc|ccc|ccc|ccc|ccc|ccc|ccc|}
\hline
 & \multicolumn{3}{|c|}{syll} & \multicolumn{3}{|c|}{t\_syll} & \multicolumn{3}{|c|}{colloc} & \multicolumn{3}{|c|}{t\_coll\_wth\_common} & \multicolumn{3}{|c|}{coll\_syll} & \multicolumn{3}{|c|}{t\_coll\_syll\_shr\_voc} & \multicolumn{3}{|c|}{t\_coll\_syll\_spl\_voc} & \multicolumn{3}{|c|}{t\_coll\_syll\_wth\_com}\\\\
"""
print header_table
for typ in ['token', 'boundary']:
    print typ + """ & f & p & r & f & p & r & f & p & r & f & p & r & f & p & r & f & p & r & f & p & r & f & p & r \\\\ \hline"""
    for month, d in plotted_results.iteritems():
        print str(SAGE_XPS) + "-" + str(month),
        if OLDVERSION:
            listmodels = ['syll', 't_syll_spl_vocab', 'colloc', 't_colloc_wth_common', 'colloc_syll', 't_colloc_syll_shr_vocab', 't_colloc_syll_spl_vocab', 't_colloc_syll_wth_common']
        else:
            listmodels = ['unigram', 'unigram share vocab', 'unigram split vocab', 'baseline', 'share vocab', 'split vocab', 'with common']
        for cond in listmodels:
            s_dict = d[cond]
            f = s_dict[typ+'_f-score']
            p = s_dict[typ+'_precision']
            r = s_dict[typ+'_recall']
            print " & ",
            print "%.3f" % f,
            print " & ",
            print "%.3f" % p,
            print " & ",
            print "%.3f" % r,
        print "\\\\"
    print "\hline"
footer_table = """
\end{tabular}
\end{scriptsize}
\end{center}
\end{table*}
"""
print footer_table
# 1.96 for 95% confidence interval", "test', 't_test_colloc_syll': 'split vocab test'} if OLDVERSION: DO_ONLY = {'t_nopfx_coll_syll_wth_common': 't_colloc_syll_wth_common_nopfx', 't_test_coll_syll_wth_common': 't_colloc_syll_wth_common_test',", "ax.get_xticklabels() + ax.get_yticklabels()): item.set_fontsize(24) for cond, a in results.iteritems(): linetype = '' if", "s_dict = d[cond] f = s_dict[typ+'_f-score'] p = s_dict[typ+'_precision'] r = s_dict[typ+'_recall'] print", "scale_x_discrete('age in months') if len(DO_ONLY) and len(DO_ONLY) < 5: rstring += \"\"\"+ opts(legend.position", "color): (y, s, sd, cond, 'b') if 't' == cond[0] or 'd' ==", "color): (y, s, cond, 'b') if 't' == cond[0] or 'd' == cond[0]", "'colloc3 syll collocs common'} #'syll': 'syll', #'t_syll': 'syll split vocab', #'t_readapt_syll': 'syll share", "fname else: print fname scores = [] s_dict = {} with open(fname) as", "in months') + ylab('token f-score') # p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng) +", "importr('grDevices') #robj.pandas2ri.activate() #data_r = robj.conversion.py2ri(mydata) lng_r = com.convert_to_r_dataframe(my_lng) data_r = com.convert_to_r_dataframe(mydataframe) globalenv['lng_r'] =", "[np.mean(x['token_f-score']) for x in a] stddevs = [FACTOR_STD*np.std(x['token_f-score']) for x in a] #", "March 10 LAST_ITERS = 10 # take the last XX iterations as results", "'random split vocab', ### 't_random_colloc_syll_wth_common': 'random with common', 'colloc3_syll': 'colloc3 syll', 't_colloc3_syll_collocs_common': 'colloc3", "last_lines = [] for line in f: last_lines.append(line) try: if TEST and LAST_ITERS", "# 1.96 for 95% confidence interval OLDVERSION = False # version before March", "mydata print \">>> conditions that will be plotted\" print mydata.keys() mydataframe = DataFrame(mydata)", "print \"===================\" print \"and now for the LaTeX tables\" print \"===================\" header_table =", "tmp) # cond[0]=='b' for cond=='baseline' else: tmp = 
zip(y_pos, scores, stddevs, conds, ['g'", "\", print \"%.3f\" % p, print \" & \", print \"%.3f\" % r,", "#ITERS = range(600, 620) PREFIX = \"\" #PREFIX = \"old_naima_XPs/\" TAKE_MAX_SCORE = False", "= pd.melt(mydataframe[['months', 't_permuted_colloc_syll', 't_permuted_colloc_syll_wth_common', 'unigram', 't_unigram', 't_readapt_unigram', 'colloc_syll', 't_colloc_syll', 't_colloc_syll_wth_common']], id_vars='months') if OLDVERSION:", "& \\multicolumn{3}{|c|}{t\\_syll} & \\multicolumn{3}{|c|}{colloc} & \\multicolumn{3}{|c|}{t\\_coll\\_wth\\_common} & \\multicolumn{3}{|c|}{coll\\_syll} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_shr\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_spl\\_voc} &", "# cond[0]=='b' for cond=='baseline' else: tmp = map(lambda (y, s, sd, cond, color):", "{'t_nopfx_coll_syll_wth_common': 't_colloc_syll_wth_common_nopfx', 't_test_coll_syll_wth_common': 't_colloc_syll_wth_common_test', 't_nopfx_coll_syll': 't_colloc_syll_spl_vocab_nopfx', 'test_coll_syll': 'colloc_syll_test', 't_test_coll_syll': 't_colloc_syll_spl_vocab_test'} #DO_ONLY = {}", "nan mydata.pop(key) print mydata print \">>> conditions that will be plotted\" print mydata.keys()", "matplotlib import matplotlib.pyplot as plt from collections import defaultdict import glob import readline", "from rpy2.robjects import globalenv import pandas.rpy.common as com #grdevices = importr('grDevices') #robj.pandas2ri.activate() #data_r", "f & p & r & f & p & r & f", "ncol=4) plt.setp(ax.get_legend().get_texts(), fontsize=20) plt.savefig('progress_ages.png') matplotlib.rcParams.update({'font.size': 20}) matplotlib.rcParams.update({'text.color': \"black\"}) matplotlib.rcParams.update({'axes.labelcolor': \"black\"}) matplotlib.rcParams.update({'xtick.color': \"black\"}) matplotlib.rcParams.update({'ytick.color':", "sd, cond, color), tmp) else: if TEST: tmp = map(lambda (y, s, sd,", "& \", print \"%.3f\" % r, print \"\\\\\\\\\" print \"\\hline\" footer_table = \"\"\"", "= 
[FACTOR_STD*np.std(x['token_f-score']) for x in a] # TODO (gaussian process or some smoothing)", "= DO_ONLY[condname] else: continue ########## /cosmetic (for legends) ########## if type(s_dict) == type({})", "ax.yaxis.label] + ax.get_xticklabels() + ax.get_yticklabels()): item.set_fontsize(24) for cond, a in results.iteritems(): linetype =", "now for the R part\" print \"===================\" rstring = \"\"\" library(\"ggplot2\") library(\"grid\") #print(lng_r)", "linetype = '' if \"syll\" in cond: linetype = '^-.' else: linetype =", "i in range(6)] if not len(s_dict): s_dict = [dict(zip(scores_order, scores))] else: s_dict.append(dict(zip(scores_order, scores)))", "color), tmp) # \"no prefix\" cond => different color tmp = map(lambda (y,", "+ scale_fill_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_shape_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_linetype_discrete(\"model\", drop=TRUE, limits=cLevels)\\ +", "\"\"\" else: rstring += \"\"\"+ opts(legend.background = element_rect(colour = \"grey70\", fill = \"white\"),", "a in results.iteritems(): linetype = '' if \"syll\" in cond: linetype = '^-.'", "= False DO_ONLY = {'colloc_syll': 'baseline', 't_colloc_syll': 'split vocab', 't_readapt_colloc_syll': 'share vocab', 't_colloc_syll_wth_common':", "620) PREFIX = \"\" #PREFIX = \"old_naima_XPs/\" TAKE_MAX_SCORE = False # in case", "\"===================\" print \"and now for the LaTeX tables\" print \"===================\" header_table = \"\"\"", "fname: fname = fname.replace('+', '_wth_common') condname = '_'.join([condname] + fname.split('/')[-1].split('-')[3:]).split('.')[0] else: condname =", "if 'docs' in fname: condname = '_'.join(fname.split('/')[-1].split('-')[-1].split('.')[0].split('_')[2:]) if condname == '': # topics-based", "= '_'.join(fname.split('/')[-1].split('-')[-1].split('.')[0].split('_')[2:]) if condname == '': # topics-based unigram condname = 'uni' condname", "l + [np.nan for j in range(ages_max_points[i] - len(l))] 
mydata[key] = [j for", "sd, cond, color), tmp) # \"no prefix\" cond => different color tmp =", "for month in xrange(SAGE, EAGE+1): y_pos = [0.5] scores = [] stddevs =", "if TEST: tmp = map(lambda (y, s, sd, cond, color): (y, s, sd,", "[k for k in mydata.keys() if k != 'months']], id_vars='months') #my_lng = pd.melt(mydataframe[['months',", "linetype = '^-.' else: linetype = 'v-.' if \"d_\" or \"t_\" in cond:", "()), ys, tmp) if TAKE_MAX_SCORE: y_pos, scores, conds, colors = zip(*tmp) plt.barh(y_pos, scores,", "= y_pos[:-1] fig = plt.figure(figsize=(9, len(y_pos)), dpi=1200) ax = plt.gca() ax.set_ylim([0, len(y_pos)+1]) ax.set_xlim([0.6,", "'t_colloc_syll_spl_vocab', 'colloc', 'syll', 't_syll_spl_vocab']], id_vars='months') # #p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng) +", "\"===================\" rstring = \"\"\" library(\"ggplot2\") library(\"grid\") #print(lng_r) #print(factor(lng_r$months)) #print(factor(lng_r$variable)) cLevels <- levels(lng_r$variable) p", "for cond, a in results.iteritems(): score = 0 stddev = 0 if TAKE_MAX_SCORE:", "not \"nopfx\" in fname): continue elif not TEST and (\"test\" in fname or", "\">>> conditions that will be plotted\" print mydata.keys() mydataframe = DataFrame(mydata) my_lng =", "'t_readapt_unigram', 'colloc_syll', 't_colloc_syll', 't_colloc_syll_wth_common']], id_vars='months') if OLDVERSION: my_lng = pd.melt(mydataframe[['months', 't_colloc_syll_shr_vocab', 'colloc_syll', 't_colloc_syll_wth_common',", "not len(s_dict): s_dict = [dict(zip(scores_order, scores))] else: s_dict.append(dict(zip(scores_order, scores))) else: scores = [float(last_lines[-1].split('\\t')[i])", "cond, color): (y, s, cond, 'b') if 't' == cond[0] or 'd' ==", "score = a[month-SAGE]['token_f-score'] else: score = np.mean(a[month-SAGE]['token_f-score']) stddev = FACTOR_STD*np.std(a[month-SAGE]['token_f-score']) if score >", "TEST: tmp = map(lambda (y, s, sd, cond, color): (y, s, sd, cond,", "tmp = map(lambda (y, s, sd, cond, 
color): (y, s, sd, cond, 'grey')", "'colloc_syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab']], id_vars='months') # from ggplot_import_* # #p = ggplot(aes(x='months', y='colloc'), data=mydataframe)", "(y, s, cond, 'b') if 't' == cond[0] or 'd' == cond[0] else", "'colloc3_syll': 'colloc3 syll', 't_colloc3_syll_collocs_common': 'colloc3 syll collocs common'} #'syll': 'syll', #'t_syll': 'syll split", "+ scale_x_discrete('age in months', breaks=seq(eage,sage), labels=seq(eage,sage))\\ + coord_cartesian(xlim = c(eage, sage))\\ + theme_bw()\\", "range(600, 620) PREFIX = \"\" #PREFIX = \"old_naima_XPs/\" TAKE_MAX_SCORE = False # in", "in months')\\ #+ ylab('token f-score')\\ #+ scale_x_continuous('age in months', breaks=seq(eage,sage), limits=c(eage,sage))\\ # +", "if condname == '': # topics-based unigram condname = 'uni' condname = 'd_'", "tmp) # \"no prefix\" cond => different color tmp = map(lambda (y, s,", "boundary_precision boundary_recall\".split() results = defaultdict(lambda: [dict(zip(scores_order, [[] for i in range(len(scores_order))])) for tmp_i", "range(N_MONTHS)]) if TAKE_MAX_SCORE: results = defaultdict(lambda: [dict(zip(scores_order, [0 for i in range(len(scores_order))])) for", "color): (y, s, sd, cond, 'b') if 'no prefix' in cond else (y,", "[dict(zip(scores_order, scores))] else: s_dict.append(dict(zip(scores_order, scores))) else: scores = [float(last_lines[-1].split('\\t')[i]) for i in range(6)]", "imported by rpy2 SAGE_XPS = 11 SAGE = 12 EAGE = 31 N_MONTHS", "[] stddevs = [] conds = [] s_dicts = [] for cond, a", "be plotted\" print mydata.keys() mydataframe = DataFrame(mydata) my_lng = pd.melt(mydataframe[['months'] + [k for", "range(len(scores_order))])) for tmp_i in range(N_MONTHS)]) for month in xrange(SAGE, EAGE+1): for fname in", "= '^-.' else: linetype = 'v-.' 
if \"d_\" or \"t_\" in cond: linetype", "'t_colloc_shr_vocab', 't_syll': 't_syll_spl_vocab', 't_readapt_colloc_wth_common': 't_colloc_wth_common', 'colloc_syll': 'colloc_syll', 't_colloc_syll': 't_colloc_syll_spl_vocab', 't_readapt_colloc_syll': 't_colloc_syll_shr_vocab', 't_colloc_syll_wth_common': 't_colloc_syll_wth_common'}", "print \"===================\" print \"and now for the R part\" print \"===================\" rstring =", "sd, cond, color), tmp) # cond[0]=='b' for cond=='baseline' if SORTED: ys = map(lambda", "stddev = 0 if TAKE_MAX_SCORE: score = a[month-SAGE]['token_f-score'] else: score = np.mean(a[month-SAGE]['token_f-score']) stddev", "'grey') if 'b' == cond[0] else (y, s, sd, cond, color), tmp) #", "+ theme(text = element_text(size=44))\\ \"\"\" #+ geom_point()\\ #+ xlab('age in months')\\ #+ ylab('token", "boundary_f-score boundary_precision boundary_recall\".split() results = defaultdict(lambda: [dict(zip(scores_order, [[] for i in range(len(scores_order))])) for", "\"and now for the R part\" print \"===================\" rstring = \"\"\" library(\"ggplot2\") library(\"grid\")", "recalls (r) for different models depending on the size of dataset} \\\\vspace{-0.5cm} \\\\begin{center}", "= \"token_f-score token_precision token_recall boundary_f-score boundary_precision boundary_recall\".split() results = defaultdict(lambda: [dict(zip(scores_order, [[] for", "s, sd, cond, color), tmp) # \"no prefix\" cond => different color tmp", "'_words_common') elif '-c+' in fname: fname = fname.replace('-c+', '_collocs_common') elif '+' in fname:", "y='colloc'), data=mydataframe) + geom_point(color='lightgreen') + stat_smooth(se=True) + xlab('age in months') + ylab('token f-score')", "xrange(SAGE, EAGE+1): for fname in glob.iglob(PREFIX+'naima_' + str(SAGE_XPS) + 'to' + str(month) +", "'_'.join(fname.split('/')[-1].split('-')[-1].split('.')[0].split('_')[2:]) if condname == '': # topics-based unigram condname = 'uni' condname =", "data that is only nan 
mydata.pop(key) print mydata print \">>> conditions that will", "condname = 't_readapt' fname = fname.replace('-r', '') if '-w+' in fname: fname =", "data=mydataframe) + geom_point(color='lightgreen') + stat_smooth(se=True) + xlab('age in months') + ylab('token f-score') #", "wrong readline is imported by rpy2 SAGE_XPS = 11 SAGE = 12 EAGE", "value in mydata.iteritems(): for i, l in enumerate(value): value[i] = l + [np.nan", "the LaTeX tables\" print \"===================\" header_table = \"\"\" \\\\begin{table*}[ht] \\caption{Mean f-scores (f), precisions", "x: x+0.5, y_pos), conds) plt.xlabel('token f-score') #plt.title('') plt.savefig('histogram_' + str(SAGE_XPS) + 'to' +", "0.86]) if TEST: ax.set_xlim([0.7, 0.86]) tmp = () if TAKE_MAX_SCORE: tmp = zip(y_pos,", "np.mean(a[month-SAGE]['boundary_recall'])}) plotted_results[month] = dict(zip(conds, s_dicts)) if len(conds) == 0: continue y_pos = y_pos[:-1]", "& \\multicolumn{3}{|c|}{coll\\_syll} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_shr\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_spl\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_wth\\_com}\\\\\\\\ \"\"\" print header_table for typ", "if results[condname][month-SAGE]['token_f-score'] == 0 or s_dict['token_f-score'] > results[condname][month-SAGE]['token_f-score']: results[condname][month-SAGE] = s_dict else: for", "= [] conds = [] s_dicts = [] for cond, a in results.iteritems():", "different models depending on the size of dataset} \\\\vspace{-0.5cm} \\\\begin{center} \\\\begin{scriptsize} \\\\begin{tabular}{|c|ccc|ccc|ccc|ccc|ccc|ccc|ccc|ccc|} \\hline", "= map(lambda y,t: sum(((y,), t[1:]), ()), ys, tmp) if TAKE_MAX_SCORE: y_pos, scores, conds,", "and len(last_lines) > LAST_ITERS+1: for iter_to_take in range(1,LAST_ITERS+1): scores = [float(last_lines[-iter_to_take].split('\\t')[i]) for i", "stddevs, conds, ['g' for tmp_i in range(len(y_pos))]) if OLDVERSION: tmp = map(lambda (y,", "= plt.figure(figsize=(12, 9), dpi=1200) plt.xticks(xrange(N_MONTHS)) ax = 
plt.gca() ax.set_ylim([0.55, 0.90]) ax.set_xlim([-0.1, N_MONTHS -", "\"grey70\", fill = \"white\"), legend.text=element_text(size=44), legend.title=element_text(size=44), legend.key.size=unit(2, \"cm\"), plot.margin=unit(c(1,1,1,1), \"cm\")) \"\"\" rstring +=", "= [[str(m) for i in range(ages_max_points[m-SAGE])] for m in xrange(SAGE, EAGE+1)] # TODO", "\"single-context\", \"topics\"] #TYPES = [\"basic\", \"topics\"] TYPES = [\"basic\", \"single-context\"] TEST = False", "v in e.iteritems(): results[condname][month-SAGE][k].append(v) print results fig = plt.figure(figsize=(12, 9), dpi=1200) plt.xticks(xrange(N_MONTHS)) ax", "for k, v in s_dict.iteritems(): results[condname][month-SAGE][k].append(v) elif type(s_dict) == type([]): for e in", "scores.append(score) stddevs.append(stddev) conds.append(cond) s_dicts.append({'token_f-score': score, 'token_precision': np.mean(a[month-SAGE]['token_precision']), 'token_recall': np.mean(a[month-SAGE]['token_recall']), 'boundary_f-score': np.mean(a[month-SAGE]['boundary_f-score']), 'boundary_precision': np.mean(a[month-SAGE]['boundary_precision']),", "fname: fname = fname.replace('-c+', '_collocs_common') elif '+' in fname: fname = fname.replace('+', '_wth_common')", "> ages_max_points[i]: ages_max_points[i] = len(x['token_f-score']) mydata[cond].append(x['token_f-score']) mydata['months'] = [[m for i in range(ages_max_points[m-SAGE])]", "globalenv['lng_r'] = lng_r globalenv['data_r'] = data_r globalenv['eage'] = EAGE globalenv['sage'] = SAGE print", "if OLDVERSION: DO_ONLY = {'t_nopfx_coll_syll_wth_common': 't_colloc_syll_wth_common_nopfx', 't_test_coll_syll_wth_common': 't_colloc_syll_wth_common_test', 't_nopfx_coll_syll': 't_colloc_syll_spl_vocab_nopfx', 'test_coll_syll': 'colloc_syll_test', 't_test_coll_syll':", "= com.convert_to_r_dataframe(mydataframe) globalenv['lng_r'] = lng_r globalenv['data_r'] = data_r globalenv['eage'] = EAGE globalenv['sage'] =", "\\\\\\\\ \\hline \"\"\" for month, d in 
plotted_results.iteritems(): print str(SAGE_XPS) + \"-\" +", "drop=TRUE, limits=cLevels)\\ + scale_shape_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_linetype_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + stat_smooth(level=0.68, size=1.8)\\", "mydataframe = DataFrame(mydata) my_lng = pd.melt(mydataframe[['months'] + [k for k in mydata.keys() if", "map(lambda x: x[0], tmp) tmp = sorted(tmp, key=lambda x: x[1]) tmp = map(lambda", "N_MONTHS = EAGE-SAGE+1 #TYPES = [\"basic\", \"single-context\", \"topics\"] #TYPES = [\"basic\", \"topics\"] TYPES", "the values evaluated on a test test ITERS = range(499, 520) + range(1000,1005)", "else: s_dict.append(dict(zip(scores_order, scores))) else: scores = [float(last_lines[-1].split('\\t')[i]) for i in range(6)] s_dict =", "# remove data that is only nan mydata.pop(key) print mydata print \">>> conditions", "# otherwise the wrong readline is imported by rpy2 SAGE_XPS = 11 SAGE", "or some smoothing) plt.plot(map(lambda x: 'NaN' if x <= 0.0 else x, vals),", "continue elif not TEST and (\"test\" in fname or \"nopfx\" in fname): continue", "listmodels: s_dict = d[cond] f = s_dict[typ+'_f-score'] p = s_dict[typ+'_precision'] r = s_dict[typ+'_recall']", "= 0 if TAKE_MAX_SCORE: score = a[month-SAGE]['token_f-score'] else: score = np.mean(a[month-SAGE]['token_f-score']) stddev =", "line = \"\" for line in f: for iternumber in ITERS: striter =", "will be plotted\" print mydata.keys() mydataframe = DataFrame(mydata) my_lng = pd.melt(mydataframe[['months'] + [k", "from collections import defaultdict import glob import readline # otherwise the wrong readline", "scores, conds, colors = zip(*tmp) plt.barh(y_pos, scores, color=colors, ecolor='r', alpha=0.8) else: y_pos, scores,", "print \">>> conditions that will be plotted\" print mydata.keys() mydataframe = DataFrame(mydata) my_lng", "= map(lambda (y, s, cond, color): (y, s, cond, 'b') if 'b' !=", "values evaluated on a test test ITERS = range(499, 520) + range(1000,1005) 
#ITERS", "for i in range(ages_max_points[m-SAGE])] for m in xrange(SAGE, EAGE+1)] #mydata['months'] = [[str(m) for", "prefix\" cond => different color tmp = map(lambda (y, s, sd, cond, color):", "\\hline \"\"\" for month, d in plotted_results.iteritems(): print str(SAGE_XPS) + \"-\" + str(month),", "'t_colloc_syll': 't_colloc_syll_spl_vocab', 't_readapt_colloc_syll': 't_colloc_syll_shr_vocab', 't_colloc_syll_wth_common': 't_colloc_syll_wth_common'} if TEST: DO_ONLY = {'t_nopfx_colloc_syll_wth_common': 'with common", "dict(zip(scores_order, scores)) except: print \"PARSE ERROR: parse went wrongly for\", fname fname =", "mydata['months'] = [[m for i in range(ages_max_points[m-SAGE])] for m in xrange(SAGE, EAGE+1)] #mydata['months']", "OLDVERSION: my_lng = pd.melt(mydataframe[['months', 't_colloc_syll_shr_vocab', 'colloc_syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab']], id_vars='months') # from ggplot_import_* #", "f: for iternumber in ITERS: striter = str(iternumber) if striter + \" iterations\"", "s, sd, cond, color): (y, s, sd, cond, 'b') if 'b' != cond[0]", "for iternumber in ITERS: striter = str(iternumber) if striter + \" iterations\" in", "a[month-SAGE]['token_f-score'] else: score = np.mean(a[month-SAGE]['token_f-score']) stddev = FACTOR_STD*np.std(a[month-SAGE]['token_f-score']) if score > 0: y_pos.append(y_pos[-1]", "else: tmp = zip(y_pos, scores, stddevs, conds, ['g' for tmp_i in range(len(y_pos))]) if", "in range(6)] s_dict = dict(zip(scores_order, scores)) except: print \"PARSE ERROR: parse went wrongly", "line in f: last_lines.append(line) try: if TEST and LAST_ITERS > 1 and len(last_lines)", "(p), and recalls (r) for different models depending on the size of dataset}", "library(\"grid\") #print(lng_r) #print(factor(lng_r$months)) #print(factor(lng_r$variable)) cLevels <- levels(lng_r$variable) p <- ggplot(data=lng_r, aes(x=months, y=value, group=variable,", "ax.set_xlim([0.7, 0.86]) tmp = () if TAKE_MAX_SCORE: tmp = zip(y_pos, scores, 
conds, ['g'", "XX iterations as results (considering converged) # USED ONLY FOR TEST currently if", "color), tmp) # cond[0]=='b' for cond=='baseline' else: tmp = map(lambda (y, s, sd,", "in plotted_results.iteritems(): print str(SAGE_XPS) + \"-\" + str(month), if OLDVERSION: listmodels = ['syll',", "else: continue ########## /cosmetic (for legends) ########## if type(s_dict) == type({}) and len(s_dict)", "[x['token_f-score'] for x in a] else: vals = [np.mean(x['token_f-score']) for x in a]", "except: print \"PARSE ERROR: parse went wrongly for\", fname fname = '/'.join(fname.split('/')[1:]) fname", "or \"nopfx\" in fname): continue if \"-sc\" in fname and not \"single-context\" in", "token_recall boundary_f-score boundary_precision boundary_recall\".split() results = defaultdict(lambda: [dict(zip(scores_order, [[] for i in range(len(scores_order))]))", "matplotlib.rcParams.update({'axes.labelcolor': \"black\"}) matplotlib.rcParams.update({'xtick.color': \"black\"}) matplotlib.rcParams.update({'ytick.color': \"black\"}) plotted_results = {} # plotted_results[month][cond][score_type] = mean", "#'t_readapt_syll': 'syll share vocab'} #'unigram': 'unigram', 't_readapt_unigram': 'unigram share vocab', #'t_unigram': 'unigram split", "legend.text=element_text(size=44), legend.title=element_text(size=44), legend.key.size=unit(2, \"cm\"), plot.margin=unit(c(1,1,1,1), \"cm\")) \"\"\" rstring += \"\"\" ggsave('ggplot2_progress.pdf', plot=p, width=22,", "\"and now for the LaTeX tables\" print \"===================\" header_table = \"\"\" \\\\begin{table*}[ht] \\caption{Mean", "= ['syll', 't_syll_spl_vocab', 'colloc', 't_colloc_wth_common', 'colloc_syll', 't_colloc_syll_shr_vocab', 't_colloc_syll_spl_vocab', 't_colloc_syll_wth_common'] listmodels = ['unigram', 'unigram", "fname): continue if \"-sc\" in fname and not \"single-context\" in TYPES: continue if", "p <- ggplot(data=lng_r, aes(x=months, y=value, group=variable, colour=variable, fill=variable, shape=variable, 
linetype=variable))\\ + scale_y_continuous(name='token f-score')\\", "#print(factor(lng_r$variable)) cLevels <- levels(lng_r$variable) p <- ggplot(data=lng_r, aes(x=months, y=value, group=variable, colour=variable, fill=variable, shape=variable,", "['g' for tmp_i in range(len(y_pos))]) if OLDVERSION: tmp = map(lambda (y, s, cond,", "+ str(month), if OLDVERSION: listmodels = ['syll', 't_syll_spl_vocab', 'colloc', 't_colloc_wth_common', 'colloc_syll', 't_colloc_syll_shr_vocab', 't_colloc_syll_spl_vocab',", "= fname.replace('-r', '') if '-w+' in fname: fname = fname.replace('-w+', '_words_common') elif '-c+'", "'m.png', bbox_inches='tight') from pandas import DataFrame from copy import deepcopy import pandas as", "= 'v-.' if \"d_\" or \"t_\" in cond: linetype = linetype[0] + '--'", "sd, cond, color), tmp) # cond[0]=='b' for cond=='baseline' else: tmp = map(lambda (y,", "\"%.3f\" % f, print \" & \", print \"%.3f\" % p, print \"", "cond[0] or 'd' == cond[0] else (y, s, cond, color), tmp) else: tmp", "#robj.pandas2ri.activate() #data_r = robj.conversion.py2ri(mydata) lng_r = com.convert_to_r_dataframe(my_lng) data_r = com.convert_to_r_dataframe(mydataframe) globalenv['lng_r'] = lng_r", "\"-\" + str(month), if OLDVERSION: listmodels = ['syll', 't_syll_spl_vocab', 'colloc', 't_colloc_wth_common', 'colloc_syll', 't_colloc_syll_shr_vocab',", "cond, color), tmp) # cond[0]=='b' for cond=='baseline' else: tmp = map(lambda (y, s,", "True # sort the histograms by score, disable at your own risk! 
FACTOR_STD", "split vocab', ### 't_permuted_colloc_syll_wth_common': 'permuted with common', #'t_random_colloc_syll': 'random split vocab', ### 't_random_colloc_syll_wth_common':", "drop=TRUE, limits=cLevels)\\ + scale_fill_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_shape_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_linetype_discrete(\"model\", drop=TRUE,", "in results.iteritems(): score = 0 stddev = 0 if TAKE_MAX_SCORE: score = a[month-SAGE]['token_f-score']", "### 't_random_colloc_syll_wth_common': 'random with common', 'colloc3_syll': 'colloc3 syll', 't_colloc3_syll_collocs_common': 'colloc3 syll collocs common'}", "tmp) if TAKE_MAX_SCORE: y_pos, scores, conds, colors = zip(*tmp) plt.barh(y_pos, scores, color=colors, ecolor='r',", "continue if \"docs\" in fname and not \"topics\" in TYPES: continue # always", "DO_ONLY = {'t_colloc': 'colloc with topics'} scores_order = \"token_f-score token_precision token_recall boundary_f-score boundary_precision", "not TEST and (\"test\" in fname or \"nopfx\" in fname): continue if \"-sc\"", "print \" & \", print \"%.3f\" % f, print \" & \", print", "map(lambda (y, s, cond, color): (y, s, cond, 'b') if 't' == cond[0]", "'t_colloc_syll_wth_common': 'with common', #'t_permuted_colloc_syll': 'permuted split vocab', ### 't_permuted_colloc_syll_wth_common': 'permuted with common', #'t_random_colloc_syll':", "0: y_pos.append(y_pos[-1] + 1) scores.append(score) stddevs.append(stddev) conds.append(cond) s_dicts.append({'token_f-score': score, 'token_precision': np.mean(a[month-SAGE]['token_precision']), 'token_recall': np.mean(a[month-SAGE]['token_recall']),", "in range(ages_max_points[m-SAGE])] for m in xrange(SAGE, EAGE+1)] # TODO if we don't want", "\\multicolumn{3}{|c|}{coll\\_syll} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_shr\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_spl\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_wth\\_com}\\\\\\\\ \"\"\" print header_table for typ in", "stddev = 
# Reconstructed from overlapping fragments; Python 2 script plotting word-segmentation
# scores (token/boundary f-score, precision, recall) across ages in months.
import numpy as np
import pylab as pl
import matplotlib
import matplotlib.pyplot as plt
from collections import defaultdict
import glob
import readline  # otherwise the wrong readline is imported by rpy2

SAGE_XPS = 11
SAGE = 12
EAGE = 31
N_MONTHS = EAGE - SAGE + 1
TEST = False  # if True, just use the values evaluated on a test set
ITERS = range(499, 520) + range(1000, 1005)
#ITERS = range(1000, 1005)
#ITERS = range(600, 620)
PREFIX = ""
#PREFIX = "old_naima_XPs/"
TYPES = []  # experiment types to include ('single-context', 'topics'); exact original value lost
TAKE_MAX_SCORE = False  # in case of several scores take the max as results, otherwise do the mean+std
SORTED = True  # sort the histograms by score
FACTOR_STD = 1.96  # 1.96 for 95% confidence interval
OLDVERSION = False  # version before March 10
LAST_ITERS = 10  # take the last XX iterations as results (considering converged)
# USED ONLY FOR TEST currently
if LAST_ITERS > 1 and TEST:
    TAKE_MAX_SCORE = False

# for cosmetics when preparing figures for papers, e.g. DO_ONLY = {'t_colloc': 'colloc with topics'}
#DO_ONLY = {}
DO_ONLY = {'colloc_syll': 'baseline',
           't_colloc_syll': 'split vocab',
           't_readapt_colloc_syll': 'share vocab',
           't_colloc_syll_wth_common': 'with common',
           #'t_permuted_colloc_syll': 'permuted split vocab',
           #'t_permuted_colloc_syll_wth_common': 'permuted with common',
           #'t_random_colloc_syll': 'random split vocab',
           #'t_random_colloc_syll_wth_common': 'random with common',
           #'t_readapt_colloc_syll_wth_common': 'share vocab with common',
           #'t_readapt_colloc_syll_wth_common2': 'share vocab with common 2',
           #'syll': 'syll', 't_syll': 'syll split vocab', 't_readapt_syll': 'syll share vocab',
           #'unigram': 'unigram', 't_readapt_unigram': 'unigram share vocab',
           'colloc3_syll': 'colloc3 syll',
           't_colloc3_syll_collocs_common': 'colloc3 syll collocs common'}
if OLDVERSION:
    DO_ONLY = {'syll': 'syll', 'colloc': 'colloc',
               't_readapt_colloc': 't_colloc_shr_vocab',
               't_syll': 't_syll_spl_vocab',
               't_readapt_colloc_wth_common': 't_colloc_wth_common',
               'colloc_syll': 'colloc_syll',
               't_colloc_syll': 't_colloc_syll_spl_vocab',
               't_readapt_colloc_syll': 't_colloc_syll_shr_vocab',
               't_colloc_syll_wth_common': 't_colloc_syll_wth_common'}
if TEST:
    DO_ONLY = {'t_nopfx_colloc_syll_wth_common': 'with common no prefix',
               't_test_colloc_syll_wth_common': 'with common test',
               't_nopfx_colloc_syll': 'split vocab no prefix',
               'test_coll_syll': 'baseline test',
               't_test_colloc_syll': 'split vocab test'}
    if OLDVERSION:
        DO_ONLY = {'t_nopfx_coll_syll_wth_common': 't_colloc_syll_wth_common_nopfx',
                   't_test_coll_syll_wth_common': 't_colloc_syll_wth_common_test',
                   't_nopfx_coll_syll': 't_colloc_syll_spl_vocab_nopfx',
                   'test_coll_syll': 'colloc_syll_test',
                   't_test_coll_syll': 't_colloc_syll_spl_vocab_test'}

scores_order = "token_f-score token_precision token_recall boundary_f-score boundary_precision boundary_recall".split()
results = defaultdict(lambda: [dict(zip(scores_order, [[] for i in range(len(scores_order))]))
                               for tmp_i in range(N_MONTHS)])
if TAKE_MAX_SCORE:
    results = defaultdict(lambda: [dict(zip(scores_order, [0 for i in range(len(scores_order))]))
                                   for tmp_i in range(N_MONTHS)])

for month in xrange(SAGE, EAGE+1):
    for fname in glob.iglob(PREFIX + 'naima_' + str(SAGE_XPS) + 'to' + str(month)
                            + 'm/nai*-' + str(SAGE_XPS) + '-' + str(month) + '*.o*'):
        if TEST and (not "test" in fname and not "nopfx" in fname):
            continue
        elif not TEST and ("test" in fname or "nopfx" in fname):
            continue
        if "-sc" in fname and not "single-context" in TYPES:
            continue
        if "docs" in fname and not "topics" in TYPES:
            continue
        # always plots basic results currently
        doit = False
        with open(fname.replace(".o", ".e")) as f:
            line = ""
            for line in f:
                for iternumber in ITERS:
                    striter = str(iternumber)
                    if striter + " iterations" in line or "Iteration " + striter in line:
                        doit = True
                        break
        if not doit:
            print "NOT DOING:", fname
        else:
            print fname
            scores = []
            s_dict = {}
            with open(fname) as f:
                last_lines = []
                for line in f:
                    last_lines.append(line)
                try:
                    if TEST and LAST_ITERS > 1 and len(last_lines) > LAST_ITERS+1:
                        for iter_to_take in range(1, LAST_ITERS+1):
                            scores = [float(last_lines[-iter_to_take].split('\t')[i]) for i in range(6)]
                            if not len(s_dict):
                                s_dict = [dict(zip(scores_order, scores))]
                            else:
                                s_dict.append(dict(zip(scores_order, scores)))
                    else:
                        scores = [float(last_lines[-1].split('\t')[i]) for i in range(6)]
                        s_dict = dict(zip(scores_order, scores))
                except:
                    print "PARSE ERROR: parse went wrongly for", fname
            fname = '/'.join(fname.split('/')[1:])
            fname = fname.replace('coll-', 'colloc-')  # old names
            if 'docs' in fname:
                condname = '_'.join(fname.split('/')[-1].split('-')[-1].split('.')[0].split('_')[2:])
                if condname == '':  # topics-based unigram
                    condname = 'uni'
                condname = 'd_' + condname
            elif '-sc' in fname:
                fname = fname.replace('-sc', '')
                condname = 't'
                if '-r+' in fname or '-r.' in fname:
                    condname = 't_readapt'  # NB: assignment truncated in source; 't_readapt' inferred from the condition names above
                if '-w+' in fname:
                    fname = fname.replace('-w+', '_words_common')
                elif '-c+' in fname:
                    fname = fname.replace('-c+', '_collocs_common')
                elif '+' in fname:
                    fname = fname.replace('+', '_wth_common')
                condname = '_'.join([condname] + fname.split('/')[-1].split('-')[3:]).split('.')[0]
            else:
                condname = '_'.join(fname.split('/')[-1].split('-')[3:]).split('.')[0]
            ########## cosmetic (for legends) ##########
            if len(DO_ONLY):
                if condname in DO_ONLY:
                    condname = DO_ONLY[condname]
                else:
                    continue
            ########## /cosmetic (for legends) ##########
            if type(s_dict) == type({}) and len(s_dict) == 6:
                if TAKE_MAX_SCORE:
                    if results[condname][month-SAGE]['token_f-score'] == 0 \
                            or s_dict['token_f-score'] > results[condname][month-SAGE]['token_f-score']:
                        results[condname][month-SAGE] = s_dict
                else:
                    for k, v in s_dict.iteritems():
                        results[condname][month-SAGE][k].append(v)
            elif type(s_dict) == type([]):
                for e in s_dict:
                    for k, v in e.iteritems():
                        results[condname][month-SAGE][k].append(v)

print results

fig = plt.figure(figsize=(12, 9), dpi=1200)
plt.xticks(xrange(N_MONTHS))
ax = plt.gca()
ax.set_ylim([0.55, 0.90])
ax.set_xlim([-0.1, N_MONTHS - 0.9])
ax.set_xticklabels(map(str, xrange(SAGE, EAGE+1)))
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label]
             + ax.get_xticklabels() + ax.get_yticklabels()):
    item.set_fontsize(24)
for cond, a in results.iteritems():
    linetype = ''
    if "syll" in cond:
        linetype = '^-.'
    else:
        linetype = 'v-.'
    if "d_" in cond or "t_" in cond:  # fixed: original `if "d_" or "t_" in cond` was always true
        linetype = linetype[0] + '--'
    vals = None
    stddevs = None
    if TAKE_MAX_SCORE:
        vals = [x['token_f-score'] for x in a]
    else:
        vals = [np.mean(x['token_f-score']) for x in a]
        stddevs = [FACTOR_STD * np.std(x['token_f-score']) for x in a]
    # TODO (gaussian process or some smoothing)
    plt.plot(map(lambda x: 'NaN' if x <= 0.0 else x, vals), linetype, linewidth=3.5, alpha=0.8)
plt.xlabel('months')
plt.ylabel('token f-score')
plt.legend([l for l in results.iterkeys()], loc='best', ncol=4)
plt.setp(ax.get_legend().get_texts(), fontsize=20)
plt.savefig('progress_ages.png')

matplotlib.rcParams.update({'font.size': 20})
matplotlib.rcParams.update({'text.color': "black"})
matplotlib.rcParams.update({'axes.labelcolor': "black"})
matplotlib.rcParams.update({'xtick.color': "black"})
matplotlib.rcParams.update({'ytick.color': "black"})

plotted_results = {}  # plotted_results[month][cond][score_type] = mean
for month in xrange(SAGE, EAGE+1):
    y_pos = [0.5]
    scores = []
    stddevs = []
    conds = []
    s_dicts = []
    for cond, a in results.iteritems():
        score = 0
        stddev = 0
        if TAKE_MAX_SCORE:
            score = a[month-SAGE]['token_f-score']
        else:
            score = np.mean(a[month-SAGE]['token_f-score'])
            stddev = FACTOR_STD * np.std(a[month-SAGE]['token_f-score'])
        if score > 0:
            y_pos.append(y_pos[-1] + 1)
            scores.append(score)
            stddevs.append(stddev)
            conds.append(cond)
            s_dicts.append({'token_f-score': score,
                            'token_precision': np.mean(a[month-SAGE]['token_precision']),
                            'token_recall': np.mean(a[month-SAGE]['token_recall']),
                            'boundary_f-score': np.mean(a[month-SAGE]['boundary_f-score']),
                            'boundary_precision': np.mean(a[month-SAGE]['boundary_precision']),
                            'boundary_recall': np.mean(a[month-SAGE]['boundary_recall'])})
    plotted_results[month] = dict(zip(conds, s_dicts))
    if len(conds) == 0:
        continue
    y_pos = y_pos[:-1]  # NB: truncated in source; aligns y_pos with scores/conds
    fig = plt.figure(figsize=(9, len(y_pos)), dpi=1200)
    ax = plt.gca()
    ax.set_ylim([0, len(y_pos)+1])
    ax.set_xlim([0.6, 0.86])
    if TEST:
        ax.set_xlim([0.7, 0.86])
    tmp = ()
    if TAKE_MAX_SCORE:
        tmp = zip(y_pos, scores, conds, ['g' for tmp_i in range(len(y_pos))])
        if OLDVERSION:
            tmp = map(lambda (y, s, cond, color): (y, s, cond, 'b')
                      if 'b' != cond[0] or 'd' == cond[0] else (y, s, cond, color), tmp)
        else:
            tmp = map(lambda (y, s, cond, color): (y, s, cond, 'b')
                      if 't' == cond[0] or 'd' == cond[0] else (y, s, cond, color), tmp)
    else:
        tmp = zip(y_pos, scores, stddevs, conds, ['g' for tmp_i in range(len(y_pos))])
        if OLDVERSION:
            tmp = map(lambda (y, s, sd, cond, color): (y, s, sd, cond, 'b')
                      if 'b' != cond[0] else (y, s, sd, cond, color), tmp)  # cond[0]=='b' for cond=='baseline'
        else:
            if TEST:
                # "no prefix" cond => different color
                tmp = map(lambda (y, s, sd, cond, color): (y, s, sd, cond, 'b')
                          if 'no prefix' in cond else (y, s, sd, cond, color), tmp)
                tmp = map(lambda (y, s, sd, cond, color): (y, s, sd, cond, 'grey')
                          if 'b' == cond[0] else (y, s, sd, cond, color), tmp)
            else:
                tmp = map(lambda (y, s, sd, cond, color): (y, s, sd, cond, 'b')
                          if 't' == cond[0] or 'd' == cond[0] else (y, s, sd, cond, color), tmp)
    if SORTED:
        ys = map(lambda x: x[0], tmp)
        tmp = sorted(tmp, key=lambda x: x[1])
        tmp = map(lambda y, t: sum(((y,), t[1:]), ()), ys, tmp)
    if TAKE_MAX_SCORE:
        y_pos, scores, conds, colors = zip(*tmp)
        plt.barh(y_pos, scores, color=colors, ecolor='r', alpha=0.8)
    else:
        y_pos, scores, stddev, conds, colors = zip(*tmp)
        plt.barh(y_pos, scores, xerr=stddev, color=colors, ecolor='r', alpha=0.8)
    plt.yticks(map(lambda x: x + 0.5, y_pos), conds)
    plt.xlabel('token f-score')
    #plt.title('')
    plt.savefig('histogram_' + str(SAGE_XPS) + 'to' + str(month) + 'm.png', bbox_inches='tight')

from pandas import DataFrame
from copy import deepcopy
import pandas as pd

mydata = defaultdict(lambda: [])
ages_max_points = [0 for i in xrange(SAGE, EAGE+1)]
results_m = deepcopy(results)
for cond, a in results_m.iteritems():
    for i, x in enumerate(a):
        if len(x['token_f-score']) > ages_max_points[i]:
            ages_max_points[i] = len(x['token_f-score'])
        mydata[cond].append(x['token_f-score'])
mydata['months'] = [[m for i in range(ages_max_points[m-SAGE])] for m in xrange(SAGE, EAGE+1)]
#mydata['months'] = [[str(m) for i in range(ages_max_points[m-SAGE])] for m in xrange(SAGE, EAGE+1)]
# TODO if we don't want the stat_smooth to know about X (months)
for key, value in mydata.iteritems():
    for i, l in enumerate(value):
        value[i] = l + [np.nan for j in range(ages_max_points[i] - len(l))]
    mydata[key] = [j for i in value for j in i]
    if np.all(map(np.isnan, mydata[key])):  # remove data that is only nan
        mydata.pop(key)
print mydata
print ">>> conditions that will be plotted"
print mydata.keys()
mydataframe = DataFrame(mydata)
my_lng = pd.melt(mydataframe[['months'] + [k for k in mydata.keys() if k != 'months']], id_vars='months')
#my_lng = pd.melt(mydataframe[['months', 'share vocab', 'baseline', 'with common', 'split vocab']], id_vars='months')
#my_lng = pd.melt(mydataframe[['months', 't_permuted_colloc_syll', 't_permuted_colloc_syll_wth_common', 'unigram', 't_unigram', 't_readapt_unigram', 'colloc_syll', 't_colloc_syll', 't_colloc_syll_wth_common']], id_vars='months')
if OLDVERSION:
    my_lng = pd.melt(mydataframe[['months', 't_colloc_syll_shr_vocab', 'colloc_syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab']], id_vars='months')
# from ggplot_import_*
#p = ggplot(aes(x='months', y='colloc'), data=mydataframe) + geom_point(color='lightgreen') + stat_smooth(se=True) + xlab('age in months') + ylab('token f-score')
#my_lng = pd.melt(mydataframe[['months', 't_colloc_syll_shr_vocab', 'colloc_syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab', 'colloc', 'syll', 't_syll_spl_vocab']], id_vars='months')
#p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng) + stat_smooth(se=True, method='lm', level=0.95) + xlab('age in months') + ylab('token f-score')
#p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng) + stat_smooth(se=False) + xlab('age in months') + ylab('token f-score')
#ggsave(p, 'ggplot_progress.png')

import rpy2.robjects as robj
import rpy2.robjects.pandas2ri  # for dataframe conversion
from rpy2.robjects import globalenv
import pandas.rpy.common as com

#grdevices = importr('grDevices')
#robj.pandas2ri.activate()
#data_r = robj.conversion.py2ri(mydata)
lng_r = com.convert_to_r_dataframe(my_lng)
data_r = com.convert_to_r_dataframe(mydataframe)
globalenv['lng_r'] = lng_r
globalenv['data_r'] = data_r
globalenv['eage'] = EAGE
globalenv['sage'] = SAGE
print "==================="
print "and now for the R part"
print "==================="
rstring = """
library("ggplot2")
library("grid")
#print(lng_r)
#print(factor(lng_r$months))
#print(factor(lng_r$variable))
cLevels <- levels(lng_r$variable)
p <- ggplot(data=lng_r, aes(x=months, y=value, group=variable, colour=variable, fill=variable, shape=variable, linetype=variable))\
 + scale_y_continuous(name='token f-score')\
 + scale_x_discrete('age in months', breaks=seq(eage,sage), labels=seq(eage,sage))\
 + coord_cartesian(xlim = c(eage, sage))\
 + theme_bw()\
 + scale_colour_discrete("model", drop=TRUE, limits=cLevels)\
 + scale_fill_discrete("model", drop=TRUE, limits=cLevels)\
 + scale_shape_discrete("model", drop=TRUE, limits=cLevels)\
 + scale_linetype_discrete("model", drop=TRUE, limits=cLevels)\
 + stat_smooth(level=0.68, size=1.8)\
 + theme(text = element_text(size=44))
"""
#+ geom_point()\
#+ xlab('age in months')\
#+ ylab('token f-score')\
#+ scale_x_continuous('age in months', limits=c(eage,sage))\
#+ scale_x_discrete('age in months')
if len(DO_ONLY) and len(DO_ONLY) < 5:
    rstring += """p <- p + opts(legend.position = c(0.96, 0.5), legend.justification = c(1, 0.5),
 legend.background = element_rect(colour = "grey70", fill = "white"),
 legend.text=element_text(size=44), legend.title=element_text(size=44),
 legend.key.size=unit(2, "cm"), plot.margin=unit(c(1,1,1,1), "cm"))
"""
rstring += """
ggsave('ggplot2_progress.pdf', plot=p, width=22, height=16)
"""
plotFunc_2 = robj.r(rstring)

print "==================="
print "and now for the LaTeX tables"
print "==================="
header_table = """
\\begin{table*}[ht]
\caption{Mean f-scores (f), precisions (p), and recalls (r) for different models depending on the size of dataset}
\\vspace{-0.5cm}
\\begin{center}
\\begin{scriptsize}
\\begin{tabular}{|c|ccc|ccc|ccc|ccc|ccc|ccc|ccc|ccc|}
\hline
 & \multicolumn{3}{|c|}{syll} & \multicolumn{3}{|c|}{colloc} & \multicolumn{3}{|c|}{t\_coll\_wth\_common} & \multicolumn{3}{|c|}{coll\_syll} & \multicolumn{3}{|c|}{t\_coll\_syll\_shr\_voc} & \multicolumn{3}{|c|}{t\_coll\_syll\_spl\_voc} & \multicolumn{3}{|c|}{t\_coll\_syll\_wth\_com}\\\\
"""
print header_table
for typ in ['token', 'boundary']:  # NB: loop head lost in extraction; `typ` indexes the two score families
    print typ + """ & f & p & r & f & p & r & f & p & r & f & p & r & f & p & r & f & p & r & f & p & r \\\\ \hline """
    for month, d in plotted_results.iteritems():
        print str(SAGE_XPS) + "-" + str(month),
        listmodels = ['unigram', 'unigram share vocab', 'unigram split vocab', 'baseline',
                      'share vocab', 'split vocab', 'with common']
        if OLDVERSION:
            listmodels = ['syll', 't_syll_spl_vocab', 'colloc', 't_colloc_shr_vocab',
                          't_colloc_wth_common', 'colloc_syll', 't_colloc_syll_shr_vocab',
                          't_colloc_syll_spl_vocab', 't_colloc_syll_wth_common']
        for cond in listmodels:
            s_dict = d[cond]
            f = s_dict[typ+'_f-score']
            p = s_dict[typ+'_precision']
            r = s_dict[typ+'_recall']
            print " & ",
            print "%.3f" % f,
            print " & ",
            print "%.3f" % p,
            print " & ",
            print "%.3f" % r,
        print "\\\\"
    print "\hline"
footer_table = """
\end{tabular}
\label{results}
"""
for tmp_i in range(N_MONTHS)]) for month in xrange(SAGE, EAGE+1): for fname", "s_dicts.append({'token_f-score': score, 'token_precision': np.mean(a[month-SAGE]['token_precision']), 'token_recall': np.mean(a[month-SAGE]['token_recall']), 'boundary_f-score': np.mean(a[month-SAGE]['boundary_f-score']), 'boundary_precision': np.mean(a[month-SAGE]['boundary_precision']), 'boundary_recall': np.mean(a[month-SAGE]['boundary_recall'])}) plotted_results[month]", "else (y, s, sd, cond, color), tmp) # \"no prefix\" cond => different", "'share vocab', 'baseline', 'with common', 'split vocab']], id_vars='months') #my_lng = pd.melt(mydataframe[['months', 't_permuted_colloc_syll', 't_permuted_colloc_syll_wth_common',", "# plotted_results[month][cond][score_type] = mean for month in xrange(SAGE, EAGE+1): y_pos = [0.5] scores", "fname: condname = '_'.join(fname.split('/')[-1].split('-')[-1].split('.')[0].split('_')[2:]) if condname == '': # topics-based unigram condname =", "DataFrame from copy import deepcopy import pandas as pd mydata = defaultdict(lambda: [])", "+= \"\"\"+ opts(legend.position = c(0.96, 0.5), legend.justification = c(1, 0.5), legend.background = element_rect(colour", "for tmp_i in range(N_MONTHS)]) if TAKE_MAX_SCORE: results = defaultdict(lambda: [dict(zip(scores_order, [0 for i", "fname.replace('+', '_wth_common') condname = '_'.join([condname] + fname.split('/')[-1].split('-')[3:]).split('.')[0] else: condname = '_'.join(fname.split('/')[-1].split('-')[3:]).split('.')[0] ########## cosmetic", "#print(lng_r) #print(factor(lng_r$months)) #print(factor(lng_r$variable)) cLevels <- levels(lng_r$variable) p <- ggplot(data=lng_r, aes(x=months, y=value, group=variable, colour=variable,", "'': # topics-based unigram condname = 'uni' condname = 'd_' + condname elif", "DO_ONLY = {'colloc_syll': 'baseline', 't_colloc_syll': 'split vocab', 't_readapt_colloc_syll': 'share vocab', 't_colloc_syll_wth_common': 'with common',", "- 0.9]) ax.set_xticklabels(map(str, range(SAGE, 
EAGE+1))) for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] + ax.get_xticklabels()", "rpy2.robjects.pandas2ri # for dataframe conversion from rpy2.robjects.packages import importr from rpy2.robjects import globalenv", "import numpy as np import pylab as pl import matplotlib import matplotlib.pyplot as", "cond[0] or 'd' == cond[0] else (y, s, sd, cond, color), tmp) else:", "months', breaks=seq(eage,sage), limits=c(eage,sage))\\ # + scale_x_discrete('age in months') if len(DO_ONLY) and len(DO_ONLY) <", "rstring += \"\"\"+ opts(legend.background = element_rect(colour = \"grey70\", fill = \"white\"), legend.text=element_text(size=44), legend.title=element_text(size=44),", "= plt.gca() ax.set_ylim([0, len(y_pos)+1]) ax.set_xlim([0.6, 0.86]) if TEST: ax.set_xlim([0.7, 0.86]) tmp = ()", "{'syll': 'syll', 'colloc': 'colloc', 't_readapt_colloc': 't_colloc_shr_vocab', 't_syll': 't_syll_spl_vocab', 't_readapt_colloc_wth_common': 't_colloc_wth_common', 'colloc_syll': 'colloc_syll', 't_colloc_syll':", "if len(x['token_f-score']) > ages_max_points[i]: ages_max_points[i] = len(x['token_f-score']) mydata[cond].append(x['token_f-score']) mydata['months'] = [[m for i", "scores = [float(last_lines[-1].split('\\t')[i]) for i in range(6)] s_dict = dict(zip(scores_order, scores)) except: print", "if 't' == cond[0] or 'd' == cond[0] else (y, s, sd, cond,", "len(conds) == 0: continue y_pos = y_pos[:-1] fig = plt.figure(figsize=(9, len(y_pos)), dpi=1200) ax", "if '-w+' in fname: fname = fname.replace('-w+', '_words_common') elif '-c+' in fname: fname", "s_dict = [dict(zip(scores_order, scores))] else: s_dict.append(dict(zip(scores_order, scores))) else: scores = [float(last_lines[-1].split('\\t')[i]) for i", "in mydata.iteritems(): for i, l in enumerate(value): value[i] = l + [np.nan for", "TAKE_MAX_SCORE: vals = [x['token_f-score'] for x in a] else: vals = [np.mean(x['token_f-score']) for", "that is only nan mydata.pop(key) print mydata print \">>> conditions that will be", 
"legends) ########## if len(DO_ONLY): if condname in DO_ONLY: condname = DO_ONLY[condname] else: continue", "s, cond, color): (y, s, cond, 'b') if 'b' != cond[0] or 'd'", "readline # otherwise the wrong readline is imported by rpy2 SAGE_XPS = 11", "conditions that will be plotted\" print mydata.keys() mydataframe = DataFrame(mydata) my_lng = pd.melt(mydataframe[['months']", "(y, s, sd, cond, color): (y, s, sd, cond, 'b') if 'b' !=", "\"\"\"+ opts(legend.position = c(0.96, 0.5), legend.justification = c(1, 0.5), legend.background = element_rect(colour =", "+ [k for k in mydata.keys() if k != 'months']], id_vars='months') #my_lng =", "pl import matplotlib import matplotlib.pyplot as plt from collections import defaultdict import glob", "if len(conds) == 0: continue y_pos = y_pos[:-1] fig = plt.figure(figsize=(9, len(y_pos)), dpi=1200)", "color='variable'), data=my_lng) + stat_smooth(se=True, method='lm', level=0.95) + xlab('age in months') + ylab('token f-score')", "= False # if True, just use the values evaluated on a test", "'t_colloc_syll_spl_vocab_nopfx', 'test_coll_syll': 'colloc_syll_test', 't_test_coll_syll': 't_colloc_syll_spl_vocab_test'} #DO_ONLY = {} # for cosmetics when preparing", "continue ########## /cosmetic (for legends) ########## if type(s_dict) == type({}) and len(s_dict) ==", "print \"\\\\\\\\\" print \"\\hline\" footer_table = \"\"\" \\end{tabular} \\label{results} \\end{scriptsize} \\end{center} \\end{table*} \"\"\"", "'_wth_common') condname = '_'.join([condname] + fname.split('/')[-1].split('-')[3:]).split('.')[0] else: condname = '_'.join(fname.split('/')[-1].split('-')[3:]).split('.')[0] ########## cosmetic (for", "conversion from rpy2.robjects.packages import importr from rpy2.robjects import globalenv import pandas.rpy.common as com", "value for j in i] if np.all(map(np.isnan, mydata[key])): # remove data that is", "vocab', 'unigram split vocab', 'baseline', 'share vocab', 'split vocab', 'with common'] for cond", "conds, ['g' for tmp_i in 
range(len(y_pos))]) if OLDVERSION: tmp = map(lambda (y, s,", "and TEST: TAKE_MAX_SCORE = False DO_ONLY = {'colloc_syll': 'baseline', 't_colloc_syll': 'split vocab', 't_readapt_colloc_syll':", "no prefix', 't_test_colloc_syll_wth_common': 'with common test', 't_nopfx_colloc_syll': 'split vocab no prefix', 'test_coll_syll': 'baseline", "s_dict = {} with open(fname) as f: last_lines = [] for line in", "mydata.iteritems(): for i, l in enumerate(value): value[i] = l + [np.nan for j", "for cond, a in results.iteritems(): linetype = '' if \"syll\" in cond: linetype", "range(ages_max_points[i] - len(l))] mydata[key] = [j for i in value for j in", "p & r \\\\\\\\ \\hline \"\"\" for month, d in plotted_results.iteritems(): print str(SAGE_XPS)", "'t_test_coll_syll_wth_common': 't_colloc_syll_wth_common_test', 't_nopfx_coll_syll': 't_colloc_syll_spl_vocab_nopfx', 'test_coll_syll': 'colloc_syll_test', 't_test_coll_syll': 't_colloc_syll_spl_vocab_test'} #DO_ONLY = {} # for", "legend.title=element_text(size=44), legend.key.size=unit(2, \"cm\"), plot.margin=unit(c(1,1,1,1), \"cm\")) \"\"\" else: rstring += \"\"\"+ opts(legend.background = element_rect(colour", "[[] for i in range(len(scores_order))])) for tmp_i in range(N_MONTHS)]) if TAKE_MAX_SCORE: results =", "########## if len(DO_ONLY): if condname in DO_ONLY: condname = DO_ONLY[condname] else: continue ##########", "\"black\"}) plotted_results = {} # plotted_results[month][cond][score_type] = mean for month in xrange(SAGE, EAGE+1):", "EAGE+1)] results_m = deepcopy(results) for cond, a in results_m.iteritems(): for i, x in", "iter_to_take in range(1,LAST_ITERS+1): scores = [float(last_lines[-iter_to_take].split('\\t')[i]) for i in range(6)] if not len(s_dict):", "'t_colloc syll shr vocab', 'colloc syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab', 'colloc', 'syll', 't_syll_spl_vocab']], id_vars='months') #", "'split vocab test'} if OLDVERSION: DO_ONLY = {'t_nopfx_coll_syll_wth_common': 
't_colloc_syll_wth_common_nopfx', 't_test_coll_syll_wth_common': 't_colloc_syll_wth_common_test', 't_nopfx_coll_syll': 't_colloc_syll_spl_vocab_nopfx',", "else (y, s, sd, cond, color), tmp) # cond[0]=='b' for cond=='baseline' if SORTED:", "conds = [] s_dicts = [] for cond, a in results.iteritems(): score =", "=> different color tmp = map(lambda (y, s, sd, cond, color): (y, s,", "conds) plt.xlabel('token f-score') #plt.title('') plt.savefig('histogram_' + str(SAGE_XPS) + 'to' + str(month) + 'm.png',", "always plots basic results currently doit = False with open (fname.replace(\".o\", \".e\")) as", "else: vals = [np.mean(x['token_f-score']) for x in a] stddevs = [FACTOR_STD*np.std(x['token_f-score']) for x", "import rpy2.robjects.pandas2ri # for dataframe conversion from rpy2.robjects.packages import importr from rpy2.robjects import", "DO_ONLY = {'syll': 'syll', 'colloc': 'colloc', 't_readapt_colloc': 't_colloc_shr_vocab', 't_syll': 't_syll_spl_vocab', 't_readapt_colloc_wth_common': 't_colloc_wth_common', 'colloc_syll':", "import matplotlib.pyplot as plt from collections import defaultdict import glob import readline #", "(\"test\" in fname or \"nopfx\" in fname): continue if \"-sc\" in fname and", "in fname: fname = fname.replace('-w+', '_words_common') elif '-c+' in fname: fname = fname.replace('-c+',", "cond, color), tmp) # \"no prefix\" cond => different color tmp = map(lambda", "s_dict = dict(zip(scores_order, scores)) except: print \"PARSE ERROR: parse went wrongly for\", fname", "if TAKE_MAX_SCORE: if results[condname][month-SAGE]['token_f-score'] == 0 or s_dict['token_f-score'] > results[condname][month-SAGE]['token_f-score']: results[condname][month-SAGE] = s_dict", "as robj import rpy2.robjects.pandas2ri # for dataframe conversion from rpy2.robjects.packages import importr from", "limits=cLevels)\\ + scale_fill_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_shape_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + 
scale_linetype_discrete(\"model\", drop=TRUE, limits=cLevels)\\", "[dict(zip(scores_order, [[] for i in range(len(scores_order))])) for tmp_i in range(N_MONTHS)]) if TAKE_MAX_SCORE: results", "[float(last_lines[-1].split('\\t')[i]) for i in range(6)] s_dict = dict(zip(scores_order, scores)) except: print \"PARSE ERROR:", "\\\\begin{scriptsize} \\\\begin{tabular}{|c|ccc|ccc|ccc|ccc|ccc|ccc|ccc|ccc|} \\hline & \\multicolumn{3}{|c|}{syll} & \\multicolumn{3}{|c|}{t\\_syll} & \\multicolumn{3}{|c|}{colloc} & \\multicolumn{3}{|c|}{t\\_coll\\_wth\\_common} & \\multicolumn{3}{|c|}{coll\\_syll}", "if '-r+' in fname or '-r.' in fname: condname = 't_readapt' fname =", "[j for i in value for j in i] if np.all(map(np.isnan, mydata[key])): #", "0.0 else x, vals), linetype, linewidth=3.5, alpha=0.8) plt.xlabel('months') plt.ylabel('token f-score') plt.legend([l for l", "in months') + ylab('token f-score') # ggsave(p, 'ggplot_progress.png') import rpy2.robjects as robj import", "\"black\"}) matplotlib.rcParams.update({'ytick.color': \"black\"}) plotted_results = {} # plotted_results[month][cond][score_type] = mean for month in", "condname = '_'.join([condname] + fname.split('/')[-1].split('-')[3:]).split('.')[0] else: condname = '_'.join(fname.split('/')[-1].split('-')[3:]).split('.')[0] ########## cosmetic (for legends)", "line: doit = True break if not doit: print \"NOT DOING:\", fname else:", "results currently doit = False with open (fname.replace(\".o\", \".e\")) as f: line =", "'uni' condname = 'd_' + condname elif '-sc' in fname: fname = fname.replace('-sc',", "several results, otherwise do the mean+std SORTED = True # sort the histograms", "= '_'.join([condname] + fname.split('/')[-1].split('-')[3:]).split('.')[0] else: condname = '_'.join(fname.split('/')[-1].split('-')[3:]).split('.')[0] ########## cosmetic (for legends) ##########", "#'unigram': 'unigram', 't_readapt_unigram': 'unigram share vocab', #'t_unigram': 'unigram split vocab'} 
#'t_readapt_colloc_syll_wth_common': 'share vocab", "ax.set_xlim([-0.1, N_MONTHS - 0.9]) ax.set_xticklabels(map(str, range(SAGE, EAGE+1))) for item in ([ax.title, ax.xaxis.label, ax.yaxis.label]", "color): (y, s, cond, 'b') if 'b' != cond[0] or 'd' == cond[0]", "else: for k, v in s_dict.iteritems(): results[condname][month-SAGE][k].append(v) elif type(s_dict) == type([]): for e", "sd, cond, color): (y, s, sd, cond, 'b') if 'b' != cond[0] else", "& r & f & p & r \\\\\\\\ \\hline \"\"\" for month,", "'t_syll_spl_vocab', 'colloc', 't_colloc_wth_common', 'colloc_syll', 't_colloc_syll_shr_vocab', 't_colloc_syll_spl_vocab', 't_colloc_syll_wth_common'] listmodels = ['unigram', 'unigram share vocab',", "fname: condname = 't_readapt' fname = fname.replace('-r', '') if '-w+' in fname: fname", "or 'd' == cond[0] else (y, s, cond, color), tmp) else: tmp =", "'t_colloc_syll_shr_vocab', 't_colloc_syll_spl_vocab', 't_colloc_syll_wth_common'] listmodels = ['unigram', 'unigram share vocab', 'unigram split vocab', 'baseline',", "by rpy2 SAGE_XPS = 11 SAGE = 12 EAGE = 31 N_MONTHS =", "s_dicts)) if len(conds) == 0: continue y_pos = y_pos[:-1] fig = plt.figure(figsize=(9, len(y_pos)),", "a] stddevs = [FACTOR_STD*np.std(x['token_f-score']) for x in a] # TODO (gaussian process or", "'months']], id_vars='months') #my_lng = pd.melt(mydataframe[['months', 'share vocab', 'baseline', 'with common', 'split vocab']], id_vars='months')", "= None stddevs = None if TAKE_MAX_SCORE: vals = [x['token_f-score'] for x in", "enumerate(value): value[i] = l + [np.nan for j in range(ages_max_points[i] - len(l))] mydata[key]", "defaultdict import glob import readline # otherwise the wrong readline is imported by", "s, sd, cond, color), tmp) # cond[0]=='b' for cond=='baseline' else: tmp = map(lambda", "(gaussian process or some smoothing) plt.plot(map(lambda x: 'NaN' if x <= 0.0 else", "= ggplot(aes(x='months', y='value', color='variable'), data=my_lng) + stat_smooth(se=True, method='lm', level=0.95) + 
xlab('age in months')", "the stat_smooth to know about X (months) for key, value in mydata.iteritems(): for", "\"\"\" & f & p & r & f & p & r", "print \" & \", print \"%.3f\" % r, print \"\\\\\\\\\" print \"\\hline\" footer_table", "for k in mydata.keys() if k != 'months']], id_vars='months') #my_lng = pd.melt(mydataframe[['months', 'share", "'permuted split vocab', ### 't_permuted_colloc_syll_wth_common': 'permuted with common', #'t_random_colloc_syll': 'random split vocab', ###", "= pd.melt(mydataframe[['months', 'share vocab', 'baseline', 'with common', 'split vocab']], id_vars='months') #my_lng = pd.melt(mydataframe[['months',", "\"topics\"] #TYPES = [\"basic\", \"topics\"] TYPES = [\"basic\", \"single-context\"] TEST = False #", "common', #'t_readapt_colloc_syll_wth_common2': 'share vocab with common 2'} if OLDVERSION: DO_ONLY = {'syll': 'syll',", "in months', breaks=seq(eage,sage), limits=c(eage,sage))\\ # + scale_x_discrete('age in months') if len(DO_ONLY) and len(DO_ONLY)", "& \\multicolumn{3}{|c|}{t\\_coll\\_wth\\_common} & \\multicolumn{3}{|c|}{coll\\_syll} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_shr\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_spl\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_wth\\_com}\\\\\\\\ \"\"\" print header_table", "tmp) tmp = sorted(tmp, key=lambda x: x[1]) tmp = map(lambda y,t: sum(((y,), t[1:]),", "in ['token', 'boundary']: print typ + \"\"\" & f & p & r", "split vocab', 'baseline', 'share vocab', 'split vocab', 'with common'] for cond in listmodels:", "don't want the stat_smooth to know about X (months) for key, value in", "else: y_pos, scores, stddev, conds, colors = zip(*tmp) plt.barh(y_pos, scores, xerr=stddev, color=colors, ecolor='r',", "scores, stddev, conds, colors = zip(*tmp) plt.barh(y_pos, scores, xerr=stddev, color=colors, ecolor='r', alpha=0.8) plt.yticks(map(lambda", "'with common test', 't_nopfx_colloc_syll': 'split vocab no prefix', 'test_coll_syll': 'baseline test', 't_test_colloc_syll': 'split", 
"\"grey70\", fill = \"white\"), legend.text=element_text(size=44), legend.title=element_text(size=44), legend.key.size=unit(2, \"cm\"), plot.margin=unit(c(1,1,1,1), \"cm\")) \"\"\" else: rstring", "results fig = plt.figure(figsize=(12, 9), dpi=1200) plt.xticks(xrange(N_MONTHS)) ax = plt.gca() ax.set_ylim([0.55, 0.90]) ax.set_xlim([-0.1,", "cond[0] else (y, s, cond, color), tmp) else: tmp = map(lambda (y, s,", "s, cond, color): (y, s, cond, 'b') if 't' == cond[0] or 'd'", "range(len(y_pos))]) if OLDVERSION: tmp = map(lambda (y, s, cond, color): (y, s, cond,", "N_MONTHS - 0.9]) ax.set_xticklabels(map(str, range(SAGE, EAGE+1))) for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] +", "\"%.3f\" % p, print \" & \", print \"%.3f\" % r, print \"\\\\\\\\\"", "== cond[0] else (y, s, cond, color), tmp) else: tmp = map(lambda (y,", "if not len(s_dict): s_dict = [dict(zip(scores_order, scores))] else: s_dict.append(dict(zip(scores_order, scores))) else: scores =", "in glob.iglob(PREFIX+'naima_' + str(SAGE_XPS) + 'to' + str(month) + 'm/nai*-' + str(SAGE_XPS) +", "f, print \" & \", print \"%.3f\" % p, print \" & \",", "try: if TEST and LAST_ITERS > 1 and len(last_lines) > LAST_ITERS+1: for iter_to_take", "or 'd' == cond[0] else (y, s, sd, cond, color), tmp) else: if", "scores, xerr=stddev, color=colors, ecolor='r', alpha=0.8) plt.yticks(map(lambda x: x+0.5, y_pos), conds) plt.xlabel('token f-score') #plt.title('')", "[float(last_lines[-iter_to_take].split('\\t')[i]) for i in range(6)] if not len(s_dict): s_dict = [dict(zip(scores_order, scores))] else:", "+ stat_smooth(se=True) + xlab('age in months') + ylab('token f-score') # my_lng = pd.melt(mydataframe[['months',", "risk! FACTOR_STD = 1. 
# 1.96 for 95% confidence interval OLDVERSION = False", "scores, stddevs, conds, ['g' for tmp_i in range(len(y_pos))]) if OLDVERSION: tmp = map(lambda", "[] for line in f: last_lines.append(line) try: if TEST and LAST_ITERS > 1", "len(x['token_f-score']) mydata[cond].append(x['token_f-score']) mydata['months'] = [[m for i in range(ages_max_points[m-SAGE])] for m in xrange(SAGE,", "fill=variable, shape=variable, linetype=variable))\\ + scale_y_continuous(name='token f-score')\\ + scale_x_discrete('age in months', breaks=seq(eage,sage), labels=seq(eage,sage))\\ +", "ys, tmp) if TAKE_MAX_SCORE: y_pos, scores, conds, colors = zip(*tmp) plt.barh(y_pos, scores, color=colors,", "s, cond, 'b') if 'b' != cond[0] or 'd' == cond[0] else (y,", "token_precision token_recall boundary_f-score boundary_precision boundary_recall\".split() results = defaultdict(lambda: [dict(zip(scores_order, [[] for i in", "split vocab', #'t_readapt_syll': 'syll share vocab'} #'unigram': 'unigram', 't_readapt_unigram': 'unigram share vocab', #'t_unigram':", "fname.replace('coll-', 'colloc-') # old names if 'docs' in fname: condname = '_'.join(fname.split('/')[-1].split('-')[-1].split('.')[0].split('_')[2:]) if", "& f & p & r & f & p & r &", "in s_dict.iteritems(): results[condname][month-SAGE][k].append(v) elif type(s_dict) == type([]): for e in s_dict: for k,", "xrange(SAGE, EAGE+1)] results_m = deepcopy(results) for cond, a in results_m.iteritems(): for i, x", "otherwise do the mean+std SORTED = True # sort the histograms by score,", "cond=='baseline' else: tmp = zip(y_pos, scores, stddevs, conds, ['g' for tmp_i in range(len(y_pos))])", "== cond[0] or 'd' == cond[0] else (y, s, sd, cond, color), tmp)", "= [j for i in value for j in i] if np.all(map(np.isnan, mydata[key])):", "'+' in fname: fname = fname.replace('+', '_wth_common') condname = '_'.join([condname] + fname.split('/')[-1].split('-')[3:]).split('.')[0] else:", "print \"===================\" rstring = \"\"\" library(\"ggplot2\") 
library(\"grid\") #print(lng_r) #print(factor(lng_r$months)) #print(factor(lng_r$variable)) cLevels <- levels(lng_r$variable)", "1. # 1.96 for 95% confidence interval OLDVERSION = False # version before", "'syll share vocab'} #'unigram': 'unigram', 't_readapt_unigram': 'unigram share vocab', #'t_unigram': 'unigram split vocab'}", "in fname or \"nopfx\" in fname): continue if \"-sc\" in fname and not", "l in results.iterkeys()], loc='best', ncol=4) plt.setp(ax.get_legend().get_texts(), fontsize=20) plt.savefig('progress_ages.png') matplotlib.rcParams.update({'font.size': 20}) matplotlib.rcParams.update({'text.color': \"black\"}) matplotlib.rcParams.update({'axes.labelcolor':", "#my_lng = pd.melt(mydataframe[['months', 't_permuted_colloc_syll', 't_permuted_colloc_syll_wth_common', 'unigram', 't_unigram', 't_readapt_unigram', 'colloc_syll', 't_colloc_syll', 't_colloc_syll_wth_common']], id_vars='months') if", "results.iterkeys()], loc='best', ncol=4) plt.setp(ax.get_legend().get_texts(), fontsize=20) plt.savefig('progress_ages.png') matplotlib.rcParams.update({'font.size': 20}) matplotlib.rcParams.update({'text.color': \"black\"}) matplotlib.rcParams.update({'axes.labelcolor': \"black\"}) matplotlib.rcParams.update({'xtick.color':", "in fname: condname = '_'.join(fname.split('/')[-1].split('-')[-1].split('.')[0].split('_')[2:]) if condname == '': # topics-based unigram condname", "EAGE+1): y_pos = [0.5] scores = [] stddevs = [] conds = []", "xlab('age in months') + ylab('token f-score') # p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng)", "= SAGE print \"===================\" print \"and now for the R part\" print \"===================\"", "['token', 'boundary']: print typ + \"\"\" & f & p & r &", "import readline # otherwise the wrong readline is imported by rpy2 SAGE_XPS =", "= EAGE globalenv['sage'] = SAGE print \"===================\" print \"and now for the R", "as np import pylab as pl import matplotlib import matplotlib.pyplot as plt from", 
"color), tmp) # cond[0]=='b' for cond=='baseline' if SORTED: ys = map(lambda x: x[0],", "+ str(month) + 'm/nai*-' + str(SAGE_XPS) + '-' + str(month) + '*.o*'): if", "'b') if 'no prefix' in cond else (y, s, sd, cond, color), tmp)", "r, print \"\\\\\\\\\" print \"\\hline\" footer_table = \"\"\" \\end{tabular} \\label{results} \\end{scriptsize} \\end{center} \\end{table*}", "in months') if len(DO_ONLY) and len(DO_ONLY) < 5: rstring += \"\"\"+ opts(legend.position =", "& \", print \"%.3f\" % p, print \" & \", print \"%.3f\" %", "cond[0] else (y, s, sd, cond, color), tmp) # cond[0]=='b' for cond=='baseline' if", "'split vocab']], id_vars='months') #my_lng = pd.melt(mydataframe[['months', 't_permuted_colloc_syll', 't_permuted_colloc_syll_wth_common', 'unigram', 't_unigram', 't_readapt_unigram', 'colloc_syll', 't_colloc_syll',", "ecolor='r', alpha=0.8) plt.yticks(map(lambda x: x+0.5, y_pos), conds) plt.xlabel('token f-score') #plt.title('') plt.savefig('histogram_' + str(SAGE_XPS)", "e.iteritems(): results[condname][month-SAGE][k].append(v) print results fig = plt.figure(figsize=(12, 9), dpi=1200) plt.xticks(xrange(N_MONTHS)) ax = plt.gca()", "matplotlib.pyplot as plt from collections import defaultdict import glob import readline # otherwise", "tmp = zip(y_pos, scores, stddevs, conds, ['g' for tmp_i in range(len(y_pos))]) if OLDVERSION:", "months') + ylab('token f-score') # p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng) + stat_smooth(se=False)", "'syll split vocab', #'t_readapt_syll': 'syll share vocab'} #'unigram': 'unigram', 't_readapt_unigram': 'unigram share vocab',", "data_r globalenv['eage'] = EAGE globalenv['sage'] = SAGE print \"===================\" print \"and now for", "range(len(y_pos))]) if OLDVERSION: tmp = map(lambda (y, s, sd, cond, color): (y, s,", "'with common', 'split vocab']], id_vars='months') #my_lng = pd.melt(mydataframe[['months', 't_permuted_colloc_syll', 't_permuted_colloc_syll_wth_common', 'unigram', 't_unigram', 
't_readapt_unigram',", "common'] for cond in listmodels: s_dict = d[cond] f = s_dict[typ+'_f-score'] p =", "(y, s, cond, color), tmp) # cond[0]=='b' for cond=='baseline' else: tmp = zip(y_pos,", "= {'t_nopfx_colloc_syll_wth_common': 'with common no prefix', 't_test_colloc_syll_wth_common': 'with common test', 't_nopfx_colloc_syll': 'split vocab", "x, vals), linetype, linewidth=3.5, alpha=0.8) plt.xlabel('months') plt.ylabel('token f-score') plt.legend([l for l in results.iterkeys()],", "s_dict[typ+'_precision'] r = s_dict[typ+'_recall'] print \" & \", print \"%.3f\" % f, print", "np.mean(a[month-SAGE]['token_f-score']) stddev = FACTOR_STD*np.std(a[month-SAGE]['token_f-score']) if score > 0: y_pos.append(y_pos[-1] + 1) scores.append(score) stddevs.append(stddev)", "prefix' in cond else (y, s, sd, cond, color), tmp) # \"no prefix\"", "'with common'] for cond in listmodels: s_dict = d[cond] f = s_dict[typ+'_f-score'] p", "is imported by rpy2 SAGE_XPS = 11 SAGE = 12 EAGE = 31", "= \"\"\" library(\"ggplot2\") library(\"grid\") #print(lng_r) #print(factor(lng_r$months)) #print(factor(lng_r$variable)) cLevels <- levels(lng_r$variable) p <- ggplot(data=lng_r,", "'t_test_colloc_syll_wth_common': 'with common test', 't_nopfx_colloc_syll': 'split vocab no prefix', 'test_coll_syll': 'baseline test', 't_test_colloc_syll':", "TEST and LAST_ITERS > 1 and len(last_lines) > LAST_ITERS+1: for iter_to_take in range(1,LAST_ITERS+1):", "'colloc', 'syll', 't_syll_spl_vocab']], id_vars='months') # #p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng) + stat_smooth(se=True,", "[\"basic\", \"single-context\", \"topics\"] #TYPES = [\"basic\", \"topics\"] TYPES = [\"basic\", \"single-context\"] TEST =", "with common 2'} if OLDVERSION: DO_ONLY = {'syll': 'syll', 'colloc': 'colloc', 't_readapt_colloc': 't_colloc_shr_vocab',", "'--' vals = None stddevs = None if TAKE_MAX_SCORE: vals = [x['token_f-score'] for", "y_pos.append(y_pos[-1] + 1) scores.append(score) 
stddevs.append(stddev) conds.append(cond) s_dicts.append({'token_f-score': score, 'token_precision': np.mean(a[month-SAGE]['token_precision']), 'token_recall': np.mean(a[month-SAGE]['token_recall']), 'boundary_f-score':", "(y, s, sd, cond, 'b') if 'b' != cond[0] else (y, s, sd,", "for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] + ax.get_xticklabels() + ax.get_yticklabels()): item.set_fontsize(24) for cond,", "s, sd, cond, color): (y, s, sd, cond, 'grey') if 'b' == cond[0]", "\"cm\"), plot.margin=unit(c(1,1,1,1), \"cm\")) \"\"\" else: rstring += \"\"\"+ opts(legend.background = element_rect(colour = \"grey70\",", "for typ in ['token', 'boundary']: print typ + \"\"\" & f & p", "ITERS: striter = str(iternumber) if striter + \" iterations\" in line or \"Iteration", "= 'uni' condname = 'd_' + condname elif '-sc' in fname: fname =", "'to' + str(month) + 'm/nai*-' + str(SAGE_XPS) + '-' + str(month) + '*.o*'):", "for k, v in e.iteritems(): results[condname][month-SAGE][k].append(v) print results fig = plt.figure(figsize=(12, 9), dpi=1200)", "\\\\vspace{-0.5cm} \\\\begin{center} \\\\begin{scriptsize} \\\\begin{tabular}{|c|ccc|ccc|ccc|ccc|ccc|ccc|ccc|ccc|} \\hline & \\multicolumn{3}{|c|}{syll} & \\multicolumn{3}{|c|}{t\\_syll} & \\multicolumn{3}{|c|}{colloc} & \\multicolumn{3}{|c|}{t\\_coll\\_wth\\_common}", "# USED ONLY FOR TEST currently if LAST_ITERS > 1 and TEST: TAKE_MAX_SCORE", "else: if TEST: tmp = map(lambda (y, s, sd, cond, color): (y, s,", "TEST currently if LAST_ITERS > 1 and TEST: TAKE_MAX_SCORE = False DO_ONLY =", "[] s_dict = {} with open(fname) as f: last_lines = [] for line", "xlab('age in months') + ylab('token f-score') # my_lng = pd.melt(mydataframe[['months', 't_colloc syll shr", "shape=variable, linetype=variable))\\ + scale_y_continuous(name='token f-score')\\ + scale_x_discrete('age in months', breaks=seq(eage,sage), labels=seq(eage,sage))\\ + coord_cartesian(xlim", "(y, s, sd, cond, 'b') if 'no prefix' in cond else (y, s,", "with common', 
#'t_readapt_colloc_syll_wth_common2': 'share vocab with common 2'} if OLDVERSION: DO_ONLY = {'syll':", "# from ggplot_import_* # #p = ggplot(aes(x='months', y='colloc'), data=mydataframe) + geom_point(color='lightgreen') + stat_smooth(se=True)", "'t_random_colloc_syll_wth_common': 'random with common', 'colloc3_syll': 'colloc3 syll', 't_colloc3_syll_collocs_common': 'colloc3 syll collocs common'} #'syll':", "& p & r \\\\\\\\ \\hline \"\"\" for month, d in plotted_results.iteritems(): print", "'b' == cond[0] else (y, s, sd, cond, color), tmp) # cond[0]=='b' for", "vals), linetype, linewidth=3.5, alpha=0.8) plt.xlabel('months') plt.ylabel('token f-score') plt.legend([l for l in results.iterkeys()], loc='best',", "vocab'} #'unigram': 'unigram', 't_readapt_unigram': 'unigram share vocab', #'t_unigram': 'unigram split vocab'} #'t_readapt_colloc_syll_wth_common': 'share", "#'t_unigram': 'unigram split vocab'} #'t_readapt_colloc_syll_wth_common': 'share vocab with common', #'t_readapt_colloc_syll_wth_common2': 'share vocab with", "= '' if \"syll\" in cond: linetype = '^-.' 
else: linetype = 'v-.'", "SAGE_XPS = 11 SAGE = 12 EAGE = 31 N_MONTHS = EAGE-SAGE+1 #TYPES", "'t_syll_spl_vocab', 't_readapt_colloc_wth_common': 't_colloc_wth_common', 'colloc_syll': 'colloc_syll', 't_colloc_syll': 't_colloc_syll_spl_vocab', 't_readapt_colloc_syll': 't_colloc_syll_shr_vocab', 't_colloc_syll_wth_common': 't_colloc_syll_wth_common'} if TEST:", "if type(s_dict) == type({}) and len(s_dict) == 6: if TAKE_MAX_SCORE: if results[condname][month-SAGE]['token_f-score'] ==", "levels(lng_r$variable) p <- ggplot(data=lng_r, aes(x=months, y=value, group=variable, colour=variable, fill=variable, shape=variable, linetype=variable))\\ + scale_y_continuous(name='token", "+ theme_bw()\\ + scale_colour_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_fill_discrete(\"model\", drop=TRUE, limits=cLevels)\\ + scale_shape_discrete(\"model\", drop=TRUE,", "+ stat_smooth(se=False) + xlab('age in months') + ylab('token f-score') # ggsave(p, 'ggplot_progress.png') import", "opts(legend.background = element_rect(colour = \"grey70\", fill = \"white\"), legend.text=element_text(size=44), legend.title=element_text(size=44), legend.key.size=unit(2, \"cm\"), plot.margin=unit(c(1,1,1,1),", "conds, colors = zip(*tmp) plt.barh(y_pos, scores, color=colors, ecolor='r', alpha=0.8) else: y_pos, scores, stddev,", "[[str(m) for i in range(ages_max_points[m-SAGE])] for m in xrange(SAGE, EAGE+1)] # TODO if", "& f & p & r \\\\\\\\ \\hline \"\"\" for month, d in", "x in enumerate(a): if len(x['token_f-score']) > ages_max_points[i]: ages_max_points[i] = len(x['token_f-score']) mydata[cond].append(x['token_f-score']) mydata['months'] =", "unigram condname = 'uni' condname = 'd_' + condname elif '-sc' in fname:", "r & f & p & r \\\\\\\\ \\hline \"\"\" for month, d", "# cond[0]=='b' for cond=='baseline' else: tmp = zip(y_pos, scores, stddevs, conds, ['g' for", "months', breaks=seq(eage,sage), labels=seq(eage,sage))\\ + coord_cartesian(xlim = c(eage, sage))\\ + theme_bw()\\ + 
scale_colour_discrete(\"model\", drop=TRUE,", "#PREFIX = \"old_naima_XPs/\" TAKE_MAX_SCORE = False # in case of several results, otherwise", "if 'no prefix' in cond else (y, s, sd, cond, color), tmp) #", "in fname: fname = fname.replace('-c+', '_collocs_common') elif '+' in fname: fname = fname.replace('+',", "if SORTED: ys = map(lambda x: x[0], tmp) tmp = sorted(tmp, key=lambda x:", "geom_point(color='lightgreen') + stat_smooth(se=True) + xlab('age in months') + ylab('token f-score') # my_lng =", "for tmp_i in range(len(y_pos))]) if OLDVERSION: tmp = map(lambda (y, s, sd, cond,", "= s_dict else: for k, v in s_dict.iteritems(): results[condname][month-SAGE][k].append(v) elif type(s_dict) == type([]):", "scores)) except: print \"PARSE ERROR: parse went wrongly for\", fname fname = '/'.join(fname.split('/')[1:])", "#'t_permuted_colloc_syll': 'permuted split vocab', ### 't_permuted_colloc_syll_wth_common': 'permuted with common', #'t_random_colloc_syll': 'random split vocab',", "#data_r = robj.conversion.py2ri(mydata) lng_r = com.convert_to_r_dataframe(my_lng) data_r = com.convert_to_r_dataframe(mydataframe) globalenv['lng_r'] = lng_r globalenv['data_r']", "print \"%.3f\" % p, print \" & \", print \"%.3f\" % r, print", "= c(0.96, 0.5), legend.justification = c(1, 0.5), legend.background = element_rect(colour = \"grey70\", fill", "as results (considering converged) # USED ONLY FOR TEST currently if LAST_ITERS >", "striter + \" iterations\" in line or \"Iteration \" + striter in line:", "fname: fname = fname.replace('-w+', '_words_common') elif '-c+' in fname: fname = fname.replace('-c+', '_collocs_common')", "= [0 for i in xrange(SAGE, EAGE+1)] results_m = deepcopy(results) for cond, a", "common'} #'syll': 'syll', #'t_syll': 'syll split vocab', #'t_readapt_syll': 'syll share vocab'} #'unigram': 'unigram',", "= [\"basic\", \"topics\"] TYPES = [\"basic\", \"single-context\"] TEST = False # if True,", "for i, l in enumerate(value): value[i] = l + [np.nan for j in", "own 
risk! FACTOR_STD = 1. # 1.96 for 95% confidence interval OLDVERSION =", "data=my_lng) + stat_smooth(se=False) + xlab('age in months') + ylab('token f-score') # ggsave(p, 'ggplot_progress.png')", "library(\"ggplot2\") library(\"grid\") #print(lng_r) #print(factor(lng_r$months)) #print(factor(lng_r$variable)) cLevels <- levels(lng_r$variable) p <- ggplot(data=lng_r, aes(x=months, y=value,", "syll', 't_colloc_syll_wth_common', 't_colloc_syll_spl_vocab', 'colloc', 'syll', 't_syll_spl_vocab']], id_vars='months') # #p = ggplot(aes(x='months', y='value', color='variable'),", "months') + ylab('token f-score') # ggsave(p, 'ggplot_progress.png') import rpy2.robjects as robj import rpy2.robjects.pandas2ri", "\\hline & \\multicolumn{3}{|c|}{syll} & \\multicolumn{3}{|c|}{t\\_syll} & \\multicolumn{3}{|c|}{colloc} & \\multicolumn{3}{|c|}{t\\_coll\\_wth\\_common} & \\multicolumn{3}{|c|}{coll\\_syll} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_shr\\_voc}", "= d[cond] f = s_dict[typ+'_f-score'] p = s_dict[typ+'_precision'] r = s_dict[typ+'_recall'] print \"", "plt.savefig('progress_ages.png') matplotlib.rcParams.update({'font.size': 20}) matplotlib.rcParams.update({'text.color': \"black\"}) matplotlib.rcParams.update({'axes.labelcolor': \"black\"}) matplotlib.rcParams.update({'xtick.color': \"black\"}) matplotlib.rcParams.update({'ytick.color': \"black\"}) plotted_results =", "{} with open(fname) as f: last_lines = [] for line in f: last_lines.append(line)", "pylab as pl import matplotlib import matplotlib.pyplot as plt from collections import defaultdict", "(y, s, sd, cond, color), tmp) else: if TEST: tmp = map(lambda (y,", "(y, s, sd, cond, color), tmp) # cond[0]=='b' for cond=='baseline' if SORTED: ys", "'d' == cond[0] else (y, s, cond, color), tmp) else: tmp = map(lambda", "condname = '_'.join(fname.split('/')[-1].split('-')[-1].split('.')[0].split('_')[2:]) if condname == '': # topics-based unigram condname = 'uni'", "vocab test'} if OLDVERSION: DO_ONLY = 
{'t_nopfx_coll_syll_wth_common': 't_colloc_syll_wth_common_nopfx', 't_test_coll_syll_wth_common': 't_colloc_syll_wth_common_test', 't_nopfx_coll_syll': 't_colloc_syll_spl_vocab_nopfx', 'test_coll_syll':", "if len(DO_ONLY): if condname in DO_ONLY: condname = DO_ONLY[condname] else: continue ########## /cosmetic", "'t_permuted_colloc_syll', 't_permuted_colloc_syll_wth_common', 'unigram', 't_unigram', 't_readapt_unigram', 'colloc_syll', 't_colloc_syll', 't_colloc_syll_wth_common']], id_vars='months') if OLDVERSION: my_lng =", "= True # sort the histograms by score, disable at your own risk!", "print \"%.3f\" % f, print \" & \", print \"%.3f\" % p, print", "np.mean(a[month-SAGE]['token_precision']), 'token_recall': np.mean(a[month-SAGE]['token_recall']), 'boundary_f-score': np.mean(a[month-SAGE]['boundary_f-score']), 'boundary_precision': np.mean(a[month-SAGE]['boundary_precision']), 'boundary_recall': np.mean(a[month-SAGE]['boundary_recall'])}) plotted_results[month] = dict(zip(conds, s_dicts))", "'t_syll': 't_syll_spl_vocab', 't_readapt_colloc_wth_common': 't_colloc_wth_common', 'colloc_syll': 'colloc_syll', 't_colloc_syll': 't_colloc_syll_spl_vocab', 't_readapt_colloc_syll': 't_colloc_syll_shr_vocab', 't_colloc_syll_wth_common': 't_colloc_syll_wth_common'} if", "\" & \", print \"%.3f\" % f, print \" & \", print \"%.3f\"", "color=colors, ecolor='r', alpha=0.8) else: y_pos, scores, stddev, conds, colors = zip(*tmp) plt.barh(y_pos, scores,", "in line or \"Iteration \" + striter in line: doit = True break", "'colloc_syll': 'colloc_syll', 't_colloc_syll': 't_colloc_syll_spl_vocab', 't_readapt_colloc_syll': 't_colloc_syll_shr_vocab', 't_colloc_syll_wth_common': 't_colloc_syll_wth_common'} if TEST: DO_ONLY = {'t_nopfx_colloc_syll_wth_common':", "sd, cond, color): (y, s, sd, cond, 'grey') if 'b' == cond[0] else", "l in enumerate(value): value[i] = l + [np.nan for j in range(ages_max_points[i] -", "= str(iternumber) if striter + \" iterations\" in line or \"Iteration \" +", 
"cond, color): (y, s, sd, cond, 'b') if 't' == cond[0] or 'd'", "as pd mydata = defaultdict(lambda: []) ages_max_points = [0 for i in xrange(SAGE,", "for i, x in enumerate(a): if len(x['token_f-score']) > ages_max_points[i]: ages_max_points[i] = len(x['token_f-score']) mydata[cond].append(x['token_f-score'])", "'t_colloc_syll_wth_common', 't_colloc_syll_spl_vocab', 'colloc', 'syll', 't_syll_spl_vocab']], id_vars='months') # #p = ggplot(aes(x='months', y='value', color='variable'), data=my_lng)", "import defaultdict import glob import readline # otherwise the wrong readline is imported", "and (\"test\" in fname or \"nopfx\" in fname): continue if \"-sc\" in fname", "split vocab'} #'t_readapt_colloc_syll_wth_common': 'share vocab with common', #'t_readapt_colloc_syll_wth_common2': 'share vocab with common 2'}", "r & f & p & r & f & p & r", "size=1.8)\\ + theme(text = element_text(size=44))\\ \"\"\" #+ geom_point()\\ #+ xlab('age in months')\\ #+", "y,t: sum(((y,), t[1:]), ()), ys, tmp) if TAKE_MAX_SCORE: y_pos, scores, conds, colors =", "'boundary_f-score': np.mean(a[month-SAGE]['boundary_f-score']), 'boundary_precision': np.mean(a[month-SAGE]['boundary_precision']), 'boundary_recall': np.mean(a[month-SAGE]['boundary_recall'])}) plotted_results[month] = dict(zip(conds, s_dicts)) if len(conds) ==", "\\multicolumn{3}{|c|}{t\\_coll\\_syll\\_shr\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_spl\\_voc} & \\multicolumn{3}{|c|}{t\\_coll\\_syll\\_wth\\_com}\\\\\\\\ \"\"\" print header_table for typ in ['token', 'boundary']:", "'m/nai*-' + str(SAGE_XPS) + '-' + str(month) + '*.o*'): if TEST and (not", "TEST = False # if True, just use the values evaluated on a", "range(SAGE, EAGE+1))) for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] + ax.get_xticklabels() + ax.get_yticklabels()): item.set_fontsize(24)", "last XX iterations as results (considering converged) # USED ONLY FOR TEST currently", "'token_recall': np.mean(a[month-SAGE]['token_recall']), 'boundary_f-score': 
np.mean(a[month-SAGE]['boundary_f-score']), 'boundary_precision': np.mean(a[month-SAGE]['boundary_precision']), 'boundary_recall': np.mean(a[month-SAGE]['boundary_recall'])}) plotted_results[month] = dict(zip(conds, s_dicts)) if", "# sort the histograms by score, disable at your own risk! FACTOR_STD =", "plt.plot(map(lambda x: 'NaN' if x <= 0.0 else x, vals), linetype, linewidth=3.5, alpha=0.8)", "X (months) for key, value in mydata.iteritems(): for i, l in enumerate(value): value[i]", "topics'} scores_order = \"token_f-score token_precision token_recall boundary_f-score boundary_precision boundary_recall\".split() results = defaultdict(lambda: [dict(zip(scores_order,", "and LAST_ITERS > 1 and len(last_lines) > LAST_ITERS+1: for iter_to_take in range(1,LAST_ITERS+1): scores", "'boundary_precision': np.mean(a[month-SAGE]['boundary_precision']), 'boundary_recall': np.mean(a[month-SAGE]['boundary_recall'])}) plotted_results[month] = dict(zip(conds, s_dicts)) if len(conds) == 0: continue", "\"\"\" ggsave('ggplot2_progress.pdf', plot=p, width=22, height=16) \"\"\" plotFunc_2 = robj.r(rstring) print \"===================\" print \"and", "= False # in case of several results, otherwise do the mean+std SORTED", "= [\"basic\", \"single-context\"] TEST = False # if True, just use the values", "if OLDVERSION: DO_ONLY = {'syll': 'syll', 'colloc': 'colloc', 't_readapt_colloc': 't_colloc_shr_vocab', 't_syll': 't_syll_spl_vocab', 't_readapt_colloc_wth_common':", "= \"\" #PREFIX = \"old_naima_XPs/\" TAKE_MAX_SCORE = False # in case of several", "str(SAGE_XPS) + 'to' + str(month) + 'm/nai*-' + str(SAGE_XPS) + '-' + str(month)", "share vocab'} #'unigram': 'unigram', 't_readapt_unigram': 'unigram share vocab', #'t_unigram': 'unigram split vocab'} #'t_readapt_colloc_syll_wth_common':", "in range(len(y_pos))]) if OLDVERSION: tmp = map(lambda (y, s, cond, color): (y, s,", "print header_table for typ in ['token', 'boundary']: print typ + \"\"\" & f", "only nan mydata.pop(key) print 
mydata print \">>> conditions that will be plotted\" print", "range(len(scores_order))])) for tmp_i in range(N_MONTHS)]) if TAKE_MAX_SCORE: results = defaultdict(lambda: [dict(zip(scores_order, [0 for" ]
[ "+ method, post=json.dumps(post))) self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type: text/html; charset=utf-8\"] ) return res def grab_hosters(self,", "user, password, data): json_data = self.api_response(\"hosters\", user, password) if json_data[\"hasErrors\"]: self.log_warning(json_data[\"ErrorMSG\"] or \"Unknown", "= \"0.01\" __status__ = \"testing\" __pyload_version__ = \"0.5\" __description__ = \"\"\"Linkifier.com account plugin\"\"\"", "**kwargs): post = { \"login\": user, \"md5Pass\": hashlib.md5(password.encode()).hexdigest(), \"apiKey\": self.API_KEY, } post.update(kwargs) self.req.http.c.setopt(", "API_KEY = \"<KEY>\" API_URL = \"https://api.linkifier.com/downloadapi.svc/\" def api_response(self, method, user, password, **kwargs): post", "\"validuntil\": validuntil, \"trafficleft\": -1 if trafficleft.lower() == \"unlimited\" else int(trafficleft), \"premium\": True, }", "] def grab_info(self, user, password, data): json_data = self.api_response(\"user\", user, password) trafficleft =", "\"Filter hosters to use\", \"all\"), (\"mh_list\", \"str\", \"Hoster list (comma separated)\", \"\"), (\"mh_interval\",", "\"premium\": True, } def signin(self, user, password, data): json_data = self.api_response(\"user\", user, password)", "json_data = self.api_response(\"user\", user, password) if json_data.get(\"hasErrors\", True) or not json_data.get(\"isActive\", True): self.log_warning(json_data[\"ErrorMSG\"]", "\"0.5\" __description__ = \"\"\"Linkifier.com account plugin\"\"\" __license__ = \"GPLv3\" __authors__ = [(\"GammaC0de\", \"nitzo2001[AT]yahoo[DOT]com\")]", "data): json_data = self.api_response(\"user\", user, password) if json_data.get(\"hasErrors\", True) or not json_data.get(\"isActive\", True):", "user, password, data): json_data = self.api_response(\"user\", user, password) if json_data.get(\"hasErrors\", True) or not", "\"nitzo2001[AT]yahoo[DOT]com\")] __config__ = [ (\"mh_mode\", \"all;listed;unlisted\", \"Filter hosters to use\", 
\"all\"), (\"mh_list\", \"str\",", "self.api_response(\"user\", user, password) trafficleft = json_data[\"extraTraffic\"] validuntil = float(json_data[\"expirydate\"]) // 1000 return {", "\"https://api.linkifier.com/downloadapi.svc/\" def api_response(self, method, user, password, **kwargs): post = { \"login\": user, \"md5Pass\":", "self.api_response(\"hosters\", user, password) if json_data[\"hasErrors\"]: self.log_warning(json_data[\"ErrorMSG\"] or \"Unknown error\") return [] return [", "= [(\"GammaC0de\", \"nitzo2001[AT]yahoo[DOT]com\")] __config__ = [ (\"mh_mode\", \"all;listed;unlisted\", \"Filter hosters to use\", \"all\"),", "if json_data[\"hasErrors\"]: self.log_warning(json_data[\"ErrorMSG\"] or \"Unknown error\") return [] return [ x[\"hostername\"] for x", "12), ] API_KEY = \"<KEY>\" API_URL = \"https://api.linkifier.com/downloadapi.svc/\" def api_response(self, method, user, password,", "to use\", \"all\"), (\"mh_list\", \"str\", \"Hoster list (comma separated)\", \"\"), (\"mh_interval\", \"int\", \"Reload", "validuntil, \"trafficleft\": -1 if trafficleft.lower() == \"unlimited\" else int(trafficleft), \"premium\": True, } def", "\"account\" __version__ = \"0.01\" __status__ = \"testing\" __pyload_version__ = \"0.5\" __description__ = \"\"\"Linkifier.com", "user, \"md5Pass\": hashlib.md5(password.encode()).hexdigest(), \"apiKey\": self.API_KEY, } post.update(kwargs) self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type: application/json; charset=utf-8\"] )", "-*- coding: utf-8 -*- import hashlib import json import pycurl from ..base.multi_account import", "-*- import hashlib import json import pycurl from ..base.multi_account import MultiAccount class LinkifierCom(MultiAccount):", "1000 return { \"validuntil\": validuntil, \"trafficleft\": -1 if trafficleft.lower() == \"unlimited\" else int(trafficleft),", "res def grab_hosters(self, user, password, data): json_data = self.api_response(\"hosters\", user, password) if json_data[\"hasErrors\"]:", "= 
self.api_response(\"hosters\", user, password) if json_data[\"hasErrors\"]: self.log_warning(json_data[\"ErrorMSG\"] or \"Unknown error\") return [] return", "user, password) if json_data[\"hasErrors\"]: self.log_warning(json_data[\"ErrorMSG\"] or \"Unknown error\") return [] return [ x[\"hostername\"]", "import MultiAccount class LinkifierCom(MultiAccount): __name__ = \"LinkifierCom\" __type__ = \"account\" __version__ = \"0.01\"", "int(trafficleft), \"premium\": True, } def signin(self, user, password, data): json_data = self.api_response(\"user\", user,", "grab_info(self, user, password, data): json_data = self.api_response(\"user\", user, password) trafficleft = json_data[\"extraTraffic\"] validuntil", "= self.api_response(\"user\", user, password) if json_data.get(\"hasErrors\", True) or not json_data.get(\"isActive\", True): self.log_warning(json_data[\"ErrorMSG\"] or", "(\"mh_interval\", \"int\", \"Reload interval in hours\", 12), ] API_KEY = \"<KEY>\" API_URL =", "__version__ = \"0.01\" __status__ = \"testing\" __pyload_version__ = \"0.5\" __description__ = \"\"\"Linkifier.com account", "= json_data[\"extraTraffic\"] validuntil = float(json_data[\"expirydate\"]) // 1000 return { \"validuntil\": validuntil, \"trafficleft\": -1", "user, password, **kwargs): post = { \"login\": user, \"md5Pass\": hashlib.md5(password.encode()).hexdigest(), \"apiKey\": self.API_KEY, }", "utf-8 -*- import hashlib import json import pycurl from ..base.multi_account import MultiAccount class", "__type__ = \"account\" __version__ = \"0.01\" __status__ = \"testing\" __pyload_version__ = \"0.5\" __description__", "pycurl.HTTPHEADER, [\"Content-Type: text/html; charset=utf-8\"] ) return res def grab_hosters(self, user, password, data): json_data", "def grab_info(self, user, password, data): json_data = self.api_response(\"user\", user, password) trafficleft = json_data[\"extraTraffic\"]", "signin(self, user, password, data): json_data = self.api_response(\"user\", user, password) if 
json_data.get(\"hasErrors\", True) or", "for x in json_data[\"hosters\"] if x[\"hostername\"] and x[\"isActive\"] ] def grab_info(self, user, password,", "\"Unknown error\") return [] return [ x[\"hostername\"] for x in json_data[\"hosters\"] if x[\"hostername\"]", "= \"testing\" __pyload_version__ = \"0.5\" __description__ = \"\"\"Linkifier.com account plugin\"\"\" __license__ = \"GPLv3\"", "json_data[\"extraTraffic\"] validuntil = float(json_data[\"expirydate\"]) // 1000 return { \"validuntil\": validuntil, \"trafficleft\": -1 if", "json import pycurl from ..base.multi_account import MultiAccount class LinkifierCom(MultiAccount): __name__ = \"LinkifierCom\" __type__", "[\"Content-Type: application/json; charset=utf-8\"] ) res = json.loads(self.load(self.API_URL + method, post=json.dumps(post))) self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type:", "method, post=json.dumps(post))) self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type: text/html; charset=utf-8\"] ) return res def grab_hosters(self, user,", "hours\", 12), ] API_KEY = \"<KEY>\" API_URL = \"https://api.linkifier.com/downloadapi.svc/\" def api_response(self, method, user,", "res = json.loads(self.load(self.API_URL + method, post=json.dumps(post))) self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type: text/html; charset=utf-8\"] ) return", "= self.api_response(\"user\", user, password) trafficleft = json_data[\"extraTraffic\"] validuntil = float(json_data[\"expirydate\"]) // 1000 return", "data): json_data = self.api_response(\"user\", user, password) trafficleft = json_data[\"extraTraffic\"] validuntil = float(json_data[\"expirydate\"]) //", "\"\"), (\"mh_interval\", \"int\", \"Reload interval in hours\", 12), ] API_KEY = \"<KEY>\" API_URL", "json_data = self.api_response(\"hosters\", user, password) if json_data[\"hasErrors\"]: self.log_warning(json_data[\"ErrorMSG\"] or \"Unknown error\") return []", "= { \"login\": user, \"md5Pass\": hashlib.md5(password.encode()).hexdigest(), 
\"apiKey\": self.API_KEY, } post.update(kwargs) self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type:", "else int(trafficleft), \"premium\": True, } def signin(self, user, password, data): json_data = self.api_response(\"user\",", "return [ x[\"hostername\"] for x in json_data[\"hosters\"] if x[\"hostername\"] and x[\"isActive\"] ] def", "in hours\", 12), ] API_KEY = \"<KEY>\" API_URL = \"https://api.linkifier.com/downloadapi.svc/\" def api_response(self, method,", "def grab_hosters(self, user, password, data): json_data = self.api_response(\"hosters\", user, password) if json_data[\"hasErrors\"]: self.log_warning(json_data[\"ErrorMSG\"]", "return res def grab_hosters(self, user, password, data): json_data = self.api_response(\"hosters\", user, password) if", "\"trafficleft\": -1 if trafficleft.lower() == \"unlimited\" else int(trafficleft), \"premium\": True, } def signin(self,", "self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type: application/json; charset=utf-8\"] ) res = json.loads(self.load(self.API_URL + method, post=json.dumps(post))) self.req.http.c.setopt(", "password) if json_data[\"hasErrors\"]: self.log_warning(json_data[\"ErrorMSG\"] or \"Unknown error\") return [] return [ x[\"hostername\"] for", "API_URL = \"https://api.linkifier.com/downloadapi.svc/\" def api_response(self, method, user, password, **kwargs): post = { \"login\":", ") return res def grab_hosters(self, user, password, data): json_data = self.api_response(\"hosters\", user, password)", "user, password) trafficleft = json_data[\"extraTraffic\"] validuntil = float(json_data[\"expirydate\"]) // 1000 return { \"validuntil\":", "account plugin\"\"\" __license__ = \"GPLv3\" __authors__ = [(\"GammaC0de\", \"nitzo2001[AT]yahoo[DOT]com\")] __config__ = [ (\"mh_mode\",", "= json.loads(self.load(self.API_URL + method, post=json.dumps(post))) self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type: text/html; charset=utf-8\"] ) return res", "= \"account\" __version__ = \"0.01\" 
__status__ = \"testing\" __pyload_version__ = \"0.5\" __description__ =", "method, user, password, **kwargs): post = { \"login\": user, \"md5Pass\": hashlib.md5(password.encode()).hexdigest(), \"apiKey\": self.API_KEY,", "[ (\"mh_mode\", \"all;listed;unlisted\", \"Filter hosters to use\", \"all\"), (\"mh_list\", \"str\", \"Hoster list (comma", "password) trafficleft = json_data[\"extraTraffic\"] validuntil = float(json_data[\"expirydate\"]) // 1000 return { \"validuntil\": validuntil,", "= [ (\"mh_mode\", \"all;listed;unlisted\", \"Filter hosters to use\", \"all\"), (\"mh_list\", \"str\", \"Hoster list", "\"\"\"Linkifier.com account plugin\"\"\" __license__ = \"GPLv3\" __authors__ = [(\"GammaC0de\", \"nitzo2001[AT]yahoo[DOT]com\")] __config__ = [", "x[\"hostername\"] for x in json_data[\"hosters\"] if x[\"hostername\"] and x[\"isActive\"] ] def grab_info(self, user,", "if trafficleft.lower() == \"unlimited\" else int(trafficleft), \"premium\": True, } def signin(self, user, password,", "\"0.01\" __status__ = \"testing\" __pyload_version__ = \"0.5\" __description__ = \"\"\"Linkifier.com account plugin\"\"\" __license__", "post.update(kwargs) self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type: application/json; charset=utf-8\"] ) res = json.loads(self.load(self.API_URL + method, post=json.dumps(post)))", "self.API_KEY, } post.update(kwargs) self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type: application/json; charset=utf-8\"] ) res = json.loads(self.load(self.API_URL +", "json_data[\"hasErrors\"]: self.log_warning(json_data[\"ErrorMSG\"] or \"Unknown error\") return [] return [ x[\"hostername\"] for x in", "\"LinkifierCom\" __type__ = \"account\" __version__ = \"0.01\" __status__ = \"testing\" __pyload_version__ = \"0.5\"", "hashlib import json import pycurl from ..base.multi_account import MultiAccount class LinkifierCom(MultiAccount): __name__ =", "= \"<KEY>\" API_URL = \"https://api.linkifier.com/downloadapi.svc/\" def api_response(self, method, 
user, password, **kwargs): post =", "error\") return [] return [ x[\"hostername\"] for x in json_data[\"hosters\"] if x[\"hostername\"] and", "// 1000 return { \"validuntil\": validuntil, \"trafficleft\": -1 if trafficleft.lower() == \"unlimited\" else", "plugin\"\"\" __license__ = \"GPLv3\" __authors__ = [(\"GammaC0de\", \"nitzo2001[AT]yahoo[DOT]com\")] __config__ = [ (\"mh_mode\", \"all;listed;unlisted\",", "# -*- coding: utf-8 -*- import hashlib import json import pycurl from ..base.multi_account", "= \"\"\"Linkifier.com account plugin\"\"\" __license__ = \"GPLv3\" __authors__ = [(\"GammaC0de\", \"nitzo2001[AT]yahoo[DOT]com\")] __config__ =", "\"login\": user, \"md5Pass\": hashlib.md5(password.encode()).hexdigest(), \"apiKey\": self.API_KEY, } post.update(kwargs) self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type: application/json; charset=utf-8\"]", "import pycurl from ..base.multi_account import MultiAccount class LinkifierCom(MultiAccount): __name__ = \"LinkifierCom\" __type__ =", "} post.update(kwargs) self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type: application/json; charset=utf-8\"] ) res = json.loads(self.load(self.API_URL + method,", "[(\"GammaC0de\", \"nitzo2001[AT]yahoo[DOT]com\")] __config__ = [ (\"mh_mode\", \"all;listed;unlisted\", \"Filter hosters to use\", \"all\"), (\"mh_list\",", "__authors__ = [(\"GammaC0de\", \"nitzo2001[AT]yahoo[DOT]com\")] __config__ = [ (\"mh_mode\", \"all;listed;unlisted\", \"Filter hosters to use\",", "\"testing\" __pyload_version__ = \"0.5\" __description__ = \"\"\"Linkifier.com account plugin\"\"\" __license__ = \"GPLv3\" __authors__", "= \"GPLv3\" __authors__ = [(\"GammaC0de\", \"nitzo2001[AT]yahoo[DOT]com\")] __config__ = [ (\"mh_mode\", \"all;listed;unlisted\", \"Filter hosters", "= \"https://api.linkifier.com/downloadapi.svc/\" def api_response(self, method, user, password, **kwargs): post = { \"login\": user,", "x in json_data[\"hosters\"] if x[\"hostername\"] and x[\"isActive\"] ] def 
grab_info(self, user, password, data):", "= \"LinkifierCom\" __type__ = \"account\" __version__ = \"0.01\" __status__ = \"testing\" __pyload_version__ =", "if x[\"hostername\"] and x[\"isActive\"] ] def grab_info(self, user, password, data): json_data = self.api_response(\"user\",", "__license__ = \"GPLv3\" __authors__ = [(\"GammaC0de\", \"nitzo2001[AT]yahoo[DOT]com\")] __config__ = [ (\"mh_mode\", \"all;listed;unlisted\", \"Filter", "import hashlib import json import pycurl from ..base.multi_account import MultiAccount class LinkifierCom(MultiAccount): __name__", "[] return [ x[\"hostername\"] for x in json_data[\"hosters\"] if x[\"hostername\"] and x[\"isActive\"] ]", "list (comma separated)\", \"\"), (\"mh_interval\", \"int\", \"Reload interval in hours\", 12), ] API_KEY", "x[\"isActive\"] ] def grab_info(self, user, password, data): json_data = self.api_response(\"user\", user, password) trafficleft", "pycurl from ..base.multi_account import MultiAccount class LinkifierCom(MultiAccount): __name__ = \"LinkifierCom\" __type__ = \"account\"", "application/json; charset=utf-8\"] ) res = json.loads(self.load(self.API_URL + method, post=json.dumps(post))) self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type: text/html;", "api_response(self, method, user, password, **kwargs): post = { \"login\": user, \"md5Pass\": hashlib.md5(password.encode()).hexdigest(), \"apiKey\":", "charset=utf-8\"] ) res = json.loads(self.load(self.API_URL + method, post=json.dumps(post))) self.req.http.c.setopt( pycurl.HTTPHEADER, [\"Content-Type: text/html; charset=utf-8\"]", "text/html; charset=utf-8\"] ) return res def grab_hosters(self, user, password, data): json_data = self.api_response(\"hosters\",", "password, data): json_data = self.api_response(\"user\", user, password) if json_data.get(\"hasErrors\", True) or not json_data.get(\"isActive\",", "\"Reload interval in hours\", 12), ] API_KEY = \"<KEY>\" API_URL = \"https://api.linkifier.com/downloadapi.svc/\" def", "-1 if 
trafficleft.lower() == \"unlimited\" else int(trafficleft), \"premium\": True, } def signin(self, user,", "def signin(self, user, password, data): json_data = self.api_response(\"user\", user, password) if json_data.get(\"hasErrors\", True)", "pycurl.HTTPHEADER, [\"Content-Type: application/json; charset=utf-8\"] ) res = json.loads(self.load(self.API_URL + method, post=json.dumps(post))) self.req.http.c.setopt( pycurl.HTTPHEADER,", "or \"Unknown error\") return [] return [ x[\"hostername\"] for x in json_data[\"hosters\"] if", "\"unlimited\" else int(trafficleft), \"premium\": True, } def signin(self, user, password, data): json_data =", "\"GPLv3\" __authors__ = [(\"GammaC0de\", \"nitzo2001[AT]yahoo[DOT]com\")] __config__ = [ (\"mh_mode\", \"all;listed;unlisted\", \"Filter hosters to", "json_data = self.api_response(\"user\", user, password) trafficleft = json_data[\"extraTraffic\"] validuntil = float(json_data[\"expirydate\"]) // 1000", "json_data[\"hosters\"] if x[\"hostername\"] and x[\"isActive\"] ] def grab_info(self, user, password, data): json_data =", "] API_KEY = \"<KEY>\" API_URL = \"https://api.linkifier.com/downloadapi.svc/\" def api_response(self, method, user, password, **kwargs):", "\"str\", \"Hoster list (comma separated)\", \"\"), (\"mh_interval\", \"int\", \"Reload interval in hours\", 12),", "(\"mh_mode\", \"all;listed;unlisted\", \"Filter hosters to use\", \"all\"), (\"mh_list\", \"str\", \"Hoster list (comma separated)\",", "LinkifierCom(MultiAccount): __name__ = \"LinkifierCom\" __type__ = \"account\" __version__ = \"0.01\" __status__ = \"testing\"", "__description__ = \"\"\"Linkifier.com account plugin\"\"\" __license__ = \"GPLv3\" __authors__ = [(\"GammaC0de\", \"nitzo2001[AT]yahoo[DOT]com\")] __config__", "__pyload_version__ = \"0.5\" __description__ = \"\"\"Linkifier.com account plugin\"\"\" __license__ = \"GPLv3\" __authors__ =", "\"<KEY>\" API_URL = \"https://api.linkifier.com/downloadapi.svc/\" def api_response(self, method, user, 
# -*- coding: utf-8 -*-
import hashlib
import json

import pycurl

from ..base.multi_account import MultiAccount


class LinkifierCom(MultiAccount):
    __name__ = "LinkifierCom"
    __type__ = "account"
    __version__ = "0.01"
    __status__ = "testing"
    __pyload_version__ = "0.5"

    __description__ = """Linkifier.com account plugin"""
    __license__ = "GPLv3"
    __authors__ = [("GammaC0de", ...)]

    __config__ = [
        ("mh_mode", "all;listed;unlisted", "Filter hosters to use", "all"),
        ("mh_list", "str", "Hoster list (comma separated)", ""),
        ("mh_interval", "int", "Reload interval in hours", 12),
    ]

    API_KEY = "<KEY>"
    API_URL = "https://api.linkifier.com/downloadapi.svc/"

    def api_response(self, method, user, password, **kwargs):
        post = {
            "login": user,
            "md5Pass": hashlib.md5(password.encode()).hexdigest(),
            "apiKey": self.API_KEY,
        }
        post.update(kwargs)
        self.req.http.c.setopt(
            pycurl.HTTPHEADER, ["Content-Type: application/json; charset=utf-8"]
        )
        res = json.loads(self.load(self.API_URL + method, post=json.dumps(post)))
        self.req.http.c.setopt(
            pycurl.HTTPHEADER, ["Content-Type: text/html; charset=utf-8"]
        )
        return res

    def grab_hosters(self, user, password, data):
        json_data = self.api_response("hosters", user, password)
        if json_data["hasErrors"]:
            self.log_warning(json_data["ErrorMSG"] or "Unknown error")
            return []

        return [
            x["hostername"]
            for x in json_data["hosters"]
            if x["hostername"] and x["isActive"]
        ]

    def grab_info(self, user, password, data):
        json_data = self.api_response("user", user, password)
        trafficleft = json_data["extraTraffic"]
        validuntil = float(json_data["expirydate"]) // 1000
        return {
            "validuntil": validuntil,
            "trafficleft": -1 if trafficleft.lower() == "unlimited" else int(trafficleft),
            "premium": True,
        }

    def signin(self, user, password, data):
        json_data = self.api_response("user", user, password)
        if json_data.get("hasErrors", True) or not json_data.get("isActive", True):
            self.log_warning(json_data["ErrorMSG"] or "Unknown error")
            self.fail_login()
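The `api_response` helper packs the credentials into a JSON request body, sending the password as an MD5 hex digest alongside the API key. A minimal standalone sketch of that body construction (the credentials and `demo-key` below are made up for illustration; the real plugin also swaps the `Content-Type` header around the request):

```python
import hashlib
import json


def build_api_post(user, password, api_key, **kwargs):
    # Same shape as the plugin's post dict: login, md5Pass, apiKey,
    # plus any method-specific fields merged in via kwargs.
    post = {
        "login": user,
        "md5Pass": hashlib.md5(password.encode()).hexdigest(),
        "apiKey": api_key,
    }
    post.update(kwargs)
    return json.dumps(post)


body = build_api_post("alice", "hunter2", "demo-key")
print(body)
```

Serializing with `json.dumps` and posting the string (rather than form fields) is what makes the `Content-Type: application/json` header necessary.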
# This code is written for a dynamic step size: the step t gets smaller as c0 approaches 200.
# Author: <NAME>, Senior Research Fellow, University of Delhi
# Date: 5-07-2021
from math import *
import numpy as np

c0 = 50.0
for x in np.arange(c0, 580, 10):
    t = 10 * (abs(200.1 - c0) / 200.1) * abs(np.log(0.3 / abs(c0 - 200.1)))
    y = 1.0 / (c0 - 200.0**2)**2
    print(str(c0) + " " + str(y))
    c0 += t
    if c0 > 198 and c0 < 202:
        c0 += 1
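The step expression can be checked in isolation: the factor `abs(200.1 - c0)/200.1` scales the step down as `c0` approaches the target 200.1, so steps taken near the target are much smaller than steps taken far away. A quick sketch (the probe values 50 and 195 are arbitrary; `math.log` is used in place of `np.log`, which is equivalent for scalars):

```python
from math import log


def step(c0, target=200.1):
    # Same dynamic step-size expression as in the loop above.
    return 10 * (abs(target - c0) / target) * abs(log(0.3 / abs(c0 - target)))


print(step(50.0), step(195.0))  # the step shrinks as c0 nears the target
```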
# coding=utf-8
from OTLMOW.OTLModel.BaseClasses.OTLAttribuut import OTLAttribuut
from OTLMOW.OTLModel.Classes.GrazigeVegetatie import GrazigeVegetatie
from OTLMOW.OTLModel.Datatypes.KlNSB import KlNSB
from OTLMOW.GeometrieArtefact.VlakGeometrie import VlakGeometrie


# Generated with OTLClassCreator. To modify: extend, do not edit
class Grasland(GrazigeVegetatie, VlakGeometrie):
    """Grazige vegetatie met daarin kruidachtigen die jaarlijks één of meerdere malen per jaar gemaaid of begraasd wordt."""

    typeURI = 'https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#Grasland'
    """De URI van het object volgens https://www.w3.org/2001/XMLSchema#anyURI."""

    def __init__(self):
        GrazigeVegetatie.__init__(self)
        VlakGeometrie.__init__(self)

        self._natuurstreefbeeld = OTLAttribuut(field=KlNSB,
                                               naam='natuurstreefbeeld',
                                               label='natuurstreefbeeld',
                                               objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#Grasland.natuurstreefbeeld',
                                               definition='Een natuurstreefbeeld is een nagestreefd biotoop, mozaïek van biotopen of een leefgebied van een soort dat je wil behouden of verkrijgen via een goed natuurbeheer.In het definitief plan van type twee, drie of vier wordt het ecologisch einddoel vastgesteld aan de hand van natuurstreefbeelden.',
                                               owner=self)

    @property
    def natuurstreefbeeld(self):
        """Een natuurstreefbeeld is een nagestreefd biotoop, mozaïek van biotopen of een leefgebied van een soort dat je wil behouden of verkrijgen via een goed natuurbeheer. In het definitief plan van type twee, drie of vier wordt het ecologisch einddoel vastgesteld aan de hand van natuurstreefbeelden."""
        return self._natuurstreefbeeld.get_waarde()

    @natuurstreefbeeld.setter
    def natuurstreefbeeld(self, value):
        self._natuurstreefbeeld.set_waarde(value, owner=self)
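The generated class above follows a consistent pattern: each OTL attribute is stored in a private holder object, and a Python property pair delegates reads and writes to it, passing `owner=self` on writes. A minimal illustration of that delegation (the `Attribuut` holder and `GraslandDemo` class here are simplified stand-ins, not the real `OTLAttribuut` or generated `Grasland`):

```python
class Attribuut:
    # Stand-in for OTLAttribuut: holds a value and remembers its owner.
    def __init__(self, naam):
        self.naam = naam
        self._waarde = None

    def get_waarde(self):
        return self._waarde

    def set_waarde(self, value, owner=None):
        self._waarde = value
        self.owner = owner


class GraslandDemo:
    def __init__(self):
        self._natuurstreefbeeld = Attribuut(naam='natuurstreefbeeld')

    @property
    def natuurstreefbeeld(self):
        # Reads delegate to the holder...
        return self._natuurstreefbeeld.get_waarde()

    @natuurstreefbeeld.setter
    def natuurstreefbeeld(self, value):
        # ...and writes pass the owning instance along.
        self._natuurstreefbeeld.set_waarde(value, owner=self)


g = GraslandDemo()
g.natuurstreefbeeld = 'glanshaver'
print(g.natuurstreefbeeld)
```

Keeping the value inside a holder object lets the generated code attach metadata (name, label, URI, definition) to every attribute without cluttering the class itself.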
# DCCouncil/dc-law-tools
import lxml.etree as et

from .preprocessors import preprocessors
from .preprocessors.utils import index_docs

import os

DIR = os.path.abspath(os.path.dirname(__file__))
out_dir = os.path.join(DIR, '../../dc-law-html')
bld_file = os.path.join(DIR, '../working_files/dccode-html-bld.xml')


def preprocess_xml():
    print('preprocessing...')
    parser = et.XMLParser(remove_blank_text=True)
    with open(bld_file) as f:
        dom = et.parse(f, parser)
    index_docs(dom)
    for preprocessor in preprocessors:
        preprocess(dom, *preprocessor)
    with open(bld_file, 'wb') as f:
        f.write(et.tostring(dom, pretty_print=True, encoding="utf-8"))


def preprocess(dom, xpath, *preprocessors):
    roots = dom.xpath(xpath)
    if not roots:
        raise BaseException('no valid roots for xpath:', xpath)
    for root in roots:
        for preprocessor in preprocessors:
            preprocessor(root)


def pdfs(dom):
    pdf_dir = os.path.join(DIR, 'dc_laws')
    pdf_out_path = os.path.join(DIR, '../../dc-law-docs-laws/{}.pdf')
    pdfs = os.listdir(pdf_dir)
    law_root = dom.find('//collection[@name="dclaws"]')
    skip_laws = law_root.xpath('./collection/document[cites/law/@url]/num/text()')
    for pdf in pdfs:
        if not pdf.startswith('dc-law-docs-laws'):
            print('skipping', pdf)
            continue
        pdf_path = os.path.join(pdf_dir, pdf)
        law_num = pdf.replace('dc-law-docs-laws', '')[:-4]
        if law_num in skip_laws:
            continue
        os.rename(pdf_path, pdf_out_path.format(law_num))
    import ipdb
    ipdb.set_trace()
= os.path.join(DIR, 'dc_laws') pdf_out_path = os.path.join(DIR, '../../dc-law-docs-laws/{}.pdf') pdfs = os.listdir(pdf_dir)", "os.path.join(DIR, '../working_files/dccode-html-bld.xml') def preprocess_xml(): print('preprocessing...') parser = et.XMLParser(remove_blank_text=True) with open(bld_file) as f: dom", "for root in roots: for preprocessor in preprocessors: preprocessor(root) def pdfs(dom): pdf_dir =", "from .preprocessors import preprocessors from .preprocessors.utils import index_docs import os DIR = os.path.abspath(os.path.dirname(__file__))", "os DIR = os.path.abspath(os.path.dirname(__file__)) out_dir = os.path.join(DIR, '../../dc-law-html') bld_file = os.path.join(DIR, '../working_files/dccode-html-bld.xml') def", "xpath, *preprocessors): roots = dom.xpath(xpath) if not roots: raise BaseException('no valid roots for", "from .preprocessors.utils import index_docs import os DIR = os.path.abspath(os.path.dirname(__file__)) out_dir = os.path.join(DIR, '../../dc-law-html')", "*preprocessor) with open(bld_file, 'wb') as f: f.write(et.tostring(dom, pretty_print=True, encoding=\"utf-8\")) def preprocess(dom, xpath, *preprocessors):", "'wb') as f: f.write(et.tostring(dom, pretty_print=True, encoding=\"utf-8\")) def preprocess(dom, xpath, *preprocessors): roots = dom.xpath(xpath)", "= os.path.join(DIR, '../../dc-law-html') bld_file = os.path.join(DIR, '../working_files/dccode-html-bld.xml') def preprocess_xml(): print('preprocessing...') parser = et.XMLParser(remove_blank_text=True)", "law_root.xpath('./collection/document[cites/law/@url]/num/text()') for pdf in pdfs: if not pdf.startswith('dc-law-docs-laws'): print('skipping', pdf) continue pdf_path =", "open(bld_file) as f: dom = et.parse(f, parser) index_docs(dom) for preprocessor in preprocessors: preprocess(dom,", "xpath) for root in roots: for preprocessor in preprocessors: preprocessor(root) def pdfs(dom): pdf_dir", "index_docs(dom) for preprocessor in preprocessors: preprocess(dom, *preprocessor) with 
open(bld_file, 'wb') as f: f.write(et.tostring(dom,", "pdfs: if not pdf.startswith('dc-law-docs-laws'): print('skipping', pdf) continue pdf_path = os.path.join(pdf_dir, pdf) law_num =", "for pdf in pdfs: if not pdf.startswith('dc-law-docs-laws'): print('skipping', pdf) continue pdf_path = os.path.join(pdf_dir,", "not pdf.startswith('dc-law-docs-laws'): print('skipping', pdf) continue pdf_path = os.path.join(pdf_dir, pdf) law_num = pdf.replace('dc-law-docs-laws', '')[:-4]", "lxml.etree as et from .preprocessors import preprocessors from .preprocessors.utils import index_docs import os", "pdf.startswith('dc-law-docs-laws'): print('skipping', pdf) continue pdf_path = os.path.join(pdf_dir, pdf) law_num = pdf.replace('dc-law-docs-laws', '')[:-4] if", "pdf_dir = os.path.join(DIR, 'dc_laws') pdf_out_path = os.path.join(DIR, '../../dc-law-docs-laws/{}.pdf') pdfs = os.listdir(pdf_dir) law_root =", "preprocessors: preprocess(dom, *preprocessor) with open(bld_file, 'wb') as f: f.write(et.tostring(dom, pretty_print=True, encoding=\"utf-8\")) def preprocess(dom,", "with open(bld_file) as f: dom = et.parse(f, parser) index_docs(dom) for preprocessor in preprocessors:", "roots for xpath:', xpath) for root in roots: for preprocessor in preprocessors: preprocessor(root)", "preprocessors from .preprocessors.utils import index_docs import os DIR = os.path.abspath(os.path.dirname(__file__)) out_dir = os.path.join(DIR,", ".preprocessors import preprocessors from .preprocessors.utils import index_docs import os DIR = os.path.abspath(os.path.dirname(__file__)) out_dir", "os.listdir(pdf_dir) law_root = dom.find('//collection[@name=\"dclaws\"]') skip_laws = law_root.xpath('./collection/document[cites/law/@url]/num/text()') for pdf in pdfs: if not", "import lxml.etree as et from .preprocessors import preprocessors from .preprocessors.utils import index_docs import", "pretty_print=True, encoding=\"utf-8\")) def preprocess(dom, xpath, *preprocessors): roots = dom.xpath(xpath) if not roots: 
raise", "*preprocessors): roots = dom.xpath(xpath) if not roots: raise BaseException('no valid roots for xpath:',", "import preprocessors from .preprocessors.utils import index_docs import os DIR = os.path.abspath(os.path.dirname(__file__)) out_dir =", "not roots: raise BaseException('no valid roots for xpath:', xpath) for root in roots:", "print('preprocessing...') parser = et.XMLParser(remove_blank_text=True) with open(bld_file) as f: dom = et.parse(f, parser) index_docs(dom)", "'../working_files/dccode-html-bld.xml') def preprocess_xml(): print('preprocessing...') parser = et.XMLParser(remove_blank_text=True) with open(bld_file) as f: dom =", "def preprocess(dom, xpath, *preprocessors): roots = dom.xpath(xpath) if not roots: raise BaseException('no valid", "as f: dom = et.parse(f, parser) index_docs(dom) for preprocessor in preprocessors: preprocess(dom, *preprocessor)", "pdf) continue pdf_path = os.path.join(pdf_dir, pdf) law_num = pdf.replace('dc-law-docs-laws', '')[:-4] if law_num in", "dom = et.parse(f, parser) index_docs(dom) for preprocessor in preprocessors: preprocess(dom, *preprocessor) with open(bld_file,", "def preprocess_xml(): print('preprocessing...') parser = et.XMLParser(remove_blank_text=True) with open(bld_file) as f: dom = et.parse(f,", "f: dom = et.parse(f, parser) index_docs(dom) for preprocessor in preprocessors: preprocess(dom, *preprocessor) with", ".preprocessors.utils import index_docs import os DIR = os.path.abspath(os.path.dirname(__file__)) out_dir = os.path.join(DIR, '../../dc-law-html') bld_file", "= dom.xpath(xpath) if not roots: raise BaseException('no valid roots for xpath:', xpath) for", "index_docs import os DIR = os.path.abspath(os.path.dirname(__file__)) out_dir = os.path.join(DIR, '../../dc-law-html') bld_file = os.path.join(DIR,", "preprocess(dom, xpath, *preprocessors): roots = dom.xpath(xpath) if not roots: raise BaseException('no valid roots", "valid roots for xpath:', xpath) for root in roots: for preprocessor in 
preprocessors:", "if not roots: raise BaseException('no valid roots for xpath:', xpath) for root in", "roots = dom.xpath(xpath) if not roots: raise BaseException('no valid roots for xpath:', xpath)", "et.XMLParser(remove_blank_text=True) with open(bld_file) as f: dom = et.parse(f, parser) index_docs(dom) for preprocessor in", "in preprocessors: preprocess(dom, *preprocessor) with open(bld_file, 'wb') as f: f.write(et.tostring(dom, pretty_print=True, encoding=\"utf-8\")) def", "os.path.join(DIR, '../../dc-law-docs-laws/{}.pdf') pdfs = os.listdir(pdf_dir) law_root = dom.find('//collection[@name=\"dclaws\"]') skip_laws = law_root.xpath('./collection/document[cites/law/@url]/num/text()') for pdf", "preprocess(dom, *preprocessor) with open(bld_file, 'wb') as f: f.write(et.tostring(dom, pretty_print=True, encoding=\"utf-8\")) def preprocess(dom, xpath,", "bld_file = os.path.join(DIR, '../working_files/dccode-html-bld.xml') def preprocess_xml(): print('preprocessing...') parser = et.XMLParser(remove_blank_text=True) with open(bld_file) as", "= os.path.join(DIR, 'dc_laws') pdf_out_path = os.path.join(DIR, '../../dc-law-docs-laws/{}.pdf') pdfs = os.listdir(pdf_dir) law_root = dom.find('//collection[@name=\"dclaws\"]')", "os.path.abspath(os.path.dirname(__file__)) out_dir = os.path.join(DIR, '../../dc-law-html') bld_file = os.path.join(DIR, '../working_files/dccode-html-bld.xml') def preprocess_xml(): print('preprocessing...') parser", "for preprocessor in preprocessors: preprocessor(root) def pdfs(dom): pdf_dir = os.path.join(DIR, 'dc_laws') pdf_out_path =", "os.path.join(DIR, '../../dc-law-html') bld_file = os.path.join(DIR, '../working_files/dccode-html-bld.xml') def preprocess_xml(): print('preprocessing...') parser = et.XMLParser(remove_blank_text=True) with", "= et.XMLParser(remove_blank_text=True) with open(bld_file) as f: dom = et.parse(f, parser) index_docs(dom) for preprocessor", "pdfs(dom): pdf_dir = os.path.join(DIR, 'dc_laws') pdf_out_path = os.path.join(DIR, 
'../../dc-law-docs-laws/{}.pdf') pdfs = os.listdir(pdf_dir) law_root", "as f: f.write(et.tostring(dom, pretty_print=True, encoding=\"utf-8\")) def preprocess(dom, xpath, *preprocessors): roots = dom.xpath(xpath) if", "= law_root.xpath('./collection/document[cites/law/@url]/num/text()') for pdf in pdfs: if not pdf.startswith('dc-law-docs-laws'): print('skipping', pdf) continue pdf_path", "print('skipping', pdf) continue pdf_path = os.path.join(pdf_dir, pdf) law_num = pdf.replace('dc-law-docs-laws', '')[:-4] if law_num", "= os.path.join(pdf_dir, pdf) law_num = pdf.replace('dc-law-docs-laws', '')[:-4] if law_num in skip_laws: continue os.rename(pdf_path,", "= os.listdir(pdf_dir) law_root = dom.find('//collection[@name=\"dclaws\"]') skip_laws = law_root.xpath('./collection/document[cites/law/@url]/num/text()') for pdf in pdfs: if", "encoding=\"utf-8\")) def preprocess(dom, xpath, *preprocessors): roots = dom.xpath(xpath) if not roots: raise BaseException('no", "dom.xpath(xpath) if not roots: raise BaseException('no valid roots for xpath:', xpath) for root" ]
[ "type='bool', default=False, help='whether it is debug mode') parser.add_argument('--test_only', type='bool', default=False, help='test_only: no need", "type=int, default=None, help='Default embedding size if embedding_file is not given') parser.add_argument('--hidden_size', type=int, default=128,", "or avg or last or dot') # Optimization details parser.add_argument('--batch_size', type=int, default=32, help='Batch", "file') parser.add_argument('--max_dev', type=int, default=None, help='Maximum number of dev examples to evaluate on') parser.add_argument('--relabeling',", "default=False, help='whether it is debug mode') parser.add_argument('--test_only', type='bool', default=False, help='test_only: no need to", "rate') parser.add_argument('--optimizer', type=str, default='sgd', help='Optimizer: sgd (default) or adam or rmsprop') parser.add_argument('--learning_rate', '-lr',", "default=None, help='Pre-trained model.') parser.add_argument('--model_file', type=str, default='model.pkl.gz', help='Model file to save') parser.add_argument('--log_file', type=str, default=None,", "epoches') parser.add_argument('--eval_iter', type=int, default=100, help='Evaluation on dev set after K updates') parser.add_argument('--dropout_rate', type=float,", "import theano import argparse _floatX = theano.config.floatX def str2bool(v): return v.lower() in ('yes',", "# Basics parser.add_argument('--debug', type='bool', default=False, help='whether it is debug mode') parser.add_argument('--test_only', type='bool', default=False,", "evaluate on') parser.add_argument('--relabeling', type='bool', default=True, help='Whether to relabel the entities when loading the", "details parser.add_argument('--embedding_size', type=int, default=None, help='Default embedding size if embedding_file is not given') parser.add_argument('--hidden_size',", "debug mode') parser.add_argument('--test_only', type='bool', default=False, help='test_only: no need to run training process') 
parser.add_argument('--random_seed',", "or mlp or avg or last or dot') # Optimization details parser.add_argument('--batch_size', type=int,", "theano.config.floatX def str2bool(v): return v.lower() in ('yes', 'true', 't', '1', 'y') def get_args():", "given') parser.add_argument('--hidden_size', type=int, default=128, help='Hidden size of RNN units') parser.add_argument('--bidir', type='bool', default=True, help='bidir:", "Optimization details parser.add_argument('--batch_size', type=int, default=32, help='Batch size') parser.add_argument('--num_epoches', type=int, default=100, help='Number of epoches')", "Model details parser.add_argument('--embedding_size', type=int, default=None, help='Default embedding size if embedding_file is not given')", "help='Development file') parser.add_argument('--pre_trained', type=str, default=None, help='Pre-trained model.') parser.add_argument('--model_file', type=str, default='model.pkl.gz', help='Model file to", "lstm or gru (default)') parser.add_argument('--att_func', type=str, default='bilinear', help='Attention function: bilinear (default) or mlp", "training process') parser.add_argument('--random_seed', type=int, default=1013, help='Random seed') # Data file parser.add_argument('--train_file', type=str, default=None,", "entities when loading the data') # Model details parser.add_argument('--embedding_size', type=int, default=None, help='Default embedding", "it is debug mode') parser.add_argument('--test_only', type='bool', default=False, help='test_only: no need to run training", "or gru (default)') parser.add_argument('--att_func', type=str, default='bilinear', help='Attention function: bilinear (default) or mlp or", "type='bool', default=True, help='bidir: whether to use a bidirectional RNN') parser.add_argument('--num_layers', type=int, default=1, help='Number", "on') parser.add_argument('--relabeling', type='bool', default=True, help='Whether to relabel the entities when loading the data')", 
"parser.add_argument('--bidir', type='bool', default=True, help='bidir: whether to use a bidirectional RNN') parser.add_argument('--num_layers', type=int, default=1,", "file parser.add_argument('--train_file', type=str, default=None, help='Training file') parser.add_argument('--dev_file', type=str, default=None, help='Development file') parser.add_argument('--pre_trained', type=str,", "parser.add_argument('--learning_rate', '-lr', type=float, default=0.1, help='Learning rate for SGD') parser.add_argument('--grad_clipping', type=float, default=10.0, help='Gradient clipping')", "units') parser.add_argument('--bidir', type='bool', default=True, help='bidir: whether to use a bidirectional RNN') parser.add_argument('--num_layers', type=int,", "help='Model file to save') parser.add_argument('--log_file', type=str, default=None, help='Log file') parser.add_argument('--embedding_file', type=str, default=None, help='Word", "type=int, default=None, help='Maximum number of dev examples to evaluate on') parser.add_argument('--relabeling', type='bool', default=True,", "embedding file') parser.add_argument('--max_dev', type=int, default=None, help='Maximum number of dev examples to evaluate on')", "parser.add_argument('--num_epoches', type=int, default=100, help='Number of epoches') parser.add_argument('--eval_iter', type=int, default=100, help='Evaluation on dev set", "parser.add_argument('--train_file', type=str, default=None, help='Training file') parser.add_argument('--dev_file', type=str, default=None, help='Development file') parser.add_argument('--pre_trained', type=str, default=None,", "file') parser.add_argument('--embedding_file', type=str, default=None, help='Word embedding file') parser.add_argument('--max_dev', type=int, default=None, help='Maximum number of", "str2bool(v): return v.lower() in ('yes', 'true', 't', '1', 'y') def get_args(): parser =", "parser.add_argument('--max_dev', type=int, default=None, help='Maximum number of dev examples to evaluate on') 
parser.add_argument('--relabeling', type='bool',", "'true', 't', '1', 'y') def get_args(): parser = argparse.ArgumentParser() parser.register('type', 'bool', str2bool) #", "help='Random seed') # Data file parser.add_argument('--train_file', type=str, default=None, help='Training file') parser.add_argument('--dev_file', type=str, default=None,", "help='Default embedding size if embedding_file is not given') parser.add_argument('--hidden_size', type=int, default=128, help='Hidden size", "mlp or avg or last or dot') # Optimization details parser.add_argument('--batch_size', type=int, default=32,", "the data') # Model details parser.add_argument('--embedding_size', type=int, default=None, help='Default embedding size if embedding_file", "help='Training file') parser.add_argument('--dev_file', type=str, default=None, help='Development file') parser.add_argument('--pre_trained', type=str, default=None, help='Pre-trained model.') parser.add_argument('--model_file',", "parser.add_argument('--model_file', type=str, default='model.pkl.gz', help='Model file to save') parser.add_argument('--log_file', type=str, default=None, help='Log file') parser.add_argument('--embedding_file',", "dot') # Optimization details parser.add_argument('--batch_size', type=int, default=32, help='Batch size') parser.add_argument('--num_epoches', type=int, default=100, help='Number", "process') parser.add_argument('--random_seed', type=int, default=1013, help='Random seed') # Data file parser.add_argument('--train_file', type=str, default=None, help='Training", "help='Number of epoches') parser.add_argument('--eval_iter', type=int, default=100, help='Evaluation on dev set after K updates')", "# Optimization details parser.add_argument('--batch_size', type=int, default=32, help='Batch size') parser.add_argument('--num_epoches', type=int, default=100, help='Number of", "_floatX = theano.config.floatX def str2bool(v): return v.lower() in ('yes', 'true', 't', '1', 'y')", "size of RNN units') 
parser.add_argument('--bidir', type='bool', default=True, help='bidir: whether to use a bidirectional", "details parser.add_argument('--batch_size', type=int, default=32, help='Batch size') parser.add_argument('--num_epoches', type=int, default=100, help='Number of epoches') parser.add_argument('--eval_iter',", "K updates') parser.add_argument('--dropout_rate', type=float, default=0.2, help='Dropout rate') parser.add_argument('--optimizer', type=str, default='sgd', help='Optimizer: sgd (default)", "type=str, default='gru', help='RNN type: lstm or gru (default)') parser.add_argument('--att_func', type=str, default='bilinear', help='Attention function:", "last or dot') # Optimization details parser.add_argument('--batch_size', type=int, default=32, help='Batch size') parser.add_argument('--num_epoches', type=int,", "avg or last or dot') # Optimization details parser.add_argument('--batch_size', type=int, default=32, help='Batch size')", "size') parser.add_argument('--num_epoches', type=int, default=100, help='Number of epoches') parser.add_argument('--eval_iter', type=int, default=100, help='Evaluation on dev", "help='Hidden size of RNN units') parser.add_argument('--bidir', type='bool', default=True, help='bidir: whether to use a", "size if embedding_file is not given') parser.add_argument('--hidden_size', type=int, default=128, help='Hidden size of RNN", "dev set after K updates') parser.add_argument('--dropout_rate', type=float, default=0.2, help='Dropout rate') parser.add_argument('--optimizer', type=str, default='sgd',", "default='sgd', help='Optimizer: sgd (default) or adam or rmsprop') parser.add_argument('--learning_rate', '-lr', type=float, default=0.1, help='Learning", "type=int, default=100, help='Number of epoches') parser.add_argument('--eval_iter', type=int, default=100, help='Evaluation on dev set after", "no need to run training process') parser.add_argument('--random_seed', type=int, default=1013, help='Random seed') # Data", "import argparse _floatX 
= theano.config.floatX def str2bool(v): return v.lower() in ('yes', 'true', 't',", "file') parser.add_argument('--dev_file', type=str, default=None, help='Development file') parser.add_argument('--pre_trained', type=str, default=None, help='Pre-trained model.') parser.add_argument('--model_file', type=str,", "type=str, default='bilinear', help='Attention function: bilinear (default) or mlp or avg or last or", "str2bool) # Basics parser.add_argument('--debug', type='bool', default=False, help='whether it is debug mode') parser.add_argument('--test_only', type='bool',", "theano import argparse _floatX = theano.config.floatX def str2bool(v): return v.lower() in ('yes', 'true',", "parser.add_argument('--num_layers', type=int, default=1, help='Number of RNN layers') parser.add_argument('--rnn_type', type=str, default='gru', help='RNN type: lstm", "is not given') parser.add_argument('--hidden_size', type=int, default=128, help='Hidden size of RNN units') parser.add_argument('--bidir', type='bool',", "bilinear (default) or mlp or avg or last or dot') # Optimization details", "if embedding_file is not given') parser.add_argument('--hidden_size', type=int, default=128, help='Hidden size of RNN units')", "return v.lower() in ('yes', 'true', 't', '1', 'y') def get_args(): parser = argparse.ArgumentParser()", "'-lr', type=float, default=0.1, help='Learning rate for SGD') parser.add_argument('--grad_clipping', type=float, default=10.0, help='Gradient clipping') return", "default=None, help='Word embedding file') parser.add_argument('--max_dev', type=int, default=None, help='Maximum number of dev examples to", "argparse.ArgumentParser() parser.register('type', 'bool', str2bool) # Basics parser.add_argument('--debug', type='bool', default=False, help='whether it is debug", "parser.add_argument('--relabeling', type='bool', default=True, help='Whether to relabel the entities when loading the data') #", "help='Log file') parser.add_argument('--embedding_file', type=str, 
default=None, help='Word embedding file') parser.add_argument('--max_dev', type=int, default=None, help='Maximum number", "(default) or mlp or avg or last or dot') # Optimization details parser.add_argument('--batch_size',", "parser.add_argument('--random_seed', type=int, default=1013, help='Random seed') # Data file parser.add_argument('--train_file', type=str, default=None, help='Training file')", "on dev set after K updates') parser.add_argument('--dropout_rate', type=float, default=0.2, help='Dropout rate') parser.add_argument('--optimizer', type=str,", "layers') parser.add_argument('--rnn_type', type=str, default='gru', help='RNN type: lstm or gru (default)') parser.add_argument('--att_func', type=str, default='bilinear',", "parser.add_argument('--pre_trained', type=str, default=None, help='Pre-trained model.') parser.add_argument('--model_file', type=str, default='model.pkl.gz', help='Model file to save') parser.add_argument('--log_file',", "default=100, help='Evaluation on dev set after K updates') parser.add_argument('--dropout_rate', type=float, default=0.2, help='Dropout rate')", "default='model.pkl.gz', help='Model file to save') parser.add_argument('--log_file', type=str, default=None, help='Log file') parser.add_argument('--embedding_file', type=str, default=None,", "parser.add_argument('--optimizer', type=str, default='sgd', help='Optimizer: sgd (default) or adam or rmsprop') parser.add_argument('--learning_rate', '-lr', type=float,", "'t', '1', 'y') def get_args(): parser = argparse.ArgumentParser() parser.register('type', 'bool', str2bool) # Basics", "help='Pre-trained model.') parser.add_argument('--model_file', type=str, default='model.pkl.gz', help='Model file to save') parser.add_argument('--log_file', type=str, default=None, help='Log", "not given') parser.add_argument('--hidden_size', type=int, default=128, help='Hidden size of RNN units') parser.add_argument('--bidir', type='bool', default=True,", "run training process') 
parser.add_argument('--random_seed', type=int, default=1013, help='Random seed') # Data file parser.add_argument('--train_file', type=str,", "type=int, default=100, help='Evaluation on dev set after K updates') parser.add_argument('--dropout_rate', type=float, default=0.2, help='Dropout", "help='Evaluation on dev set after K updates') parser.add_argument('--dropout_rate', type=float, default=0.2, help='Dropout rate') parser.add_argument('--optimizer',", "v.lower() in ('yes', 'true', 't', '1', 'y') def get_args(): parser = argparse.ArgumentParser() parser.register('type',", "parser.add_argument('--batch_size', type=int, default=32, help='Batch size') parser.add_argument('--num_epoches', type=int, default=100, help='Number of epoches') parser.add_argument('--eval_iter', type=int,", "to run training process') parser.add_argument('--random_seed', type=int, default=1013, help='Random seed') # Data file parser.add_argument('--train_file',", "type=str, default=None, help='Development file') parser.add_argument('--pre_trained', type=str, default=None, help='Pre-trained model.') parser.add_argument('--model_file', type=str, default='model.pkl.gz', help='Model", "# Data file parser.add_argument('--train_file', type=str, default=None, help='Training file') parser.add_argument('--dev_file', type=str, default=None, help='Development file')", "model.') parser.add_argument('--model_file', type=str, default='model.pkl.gz', help='Model file to save') parser.add_argument('--log_file', type=str, default=None, help='Log file')", "to use a bidirectional RNN') parser.add_argument('--num_layers', type=int, default=1, help='Number of RNN layers') parser.add_argument('--rnn_type',", "def get_args(): parser = argparse.ArgumentParser() parser.register('type', 'bool', str2bool) # Basics parser.add_argument('--debug', type='bool', default=False,", "'bool', str2bool) # Basics parser.add_argument('--debug', type='bool', default=False, help='whether it is debug mode') 
import argparse

import theano

_floatX = theano.config.floatX


def str2bool(v):
    return v.lower() in ('yes', 'true', 't', '1', 'y')


def get_args():
    parser = argparse.ArgumentParser()
    parser.register('type', 'bool', str2bool)

    # Basics
    parser.add_argument('--debug', type='bool', default=False, help='whether it is debug mode')
    parser.add_argument('--test_only', type='bool', default=False, help='test_only: no need to run training process')
    parser.add_argument('--random_seed', type=int, default=1013, help='Random seed')

    # Data file
    parser.add_argument('--train_file', type=str, default=None, help='Training file')
    parser.add_argument('--dev_file', type=str, default=None, help='Development file')
    parser.add_argument('--pre_trained', type=str, default=None, help='Pre-trained model.')
    parser.add_argument('--model_file', type=str, default='model.pkl.gz', help='Model file to save')
    parser.add_argument('--log_file', type=str, default=None, help='Log file')
    parser.add_argument('--embedding_file', type=str, default=None, help='Word embedding file')
    parser.add_argument('--max_dev', type=int, default=None, help='Maximum number of dev examples to evaluate on')
    parser.add_argument('--relabeling', type='bool', default=True, help='Whether to relabel the entities when loading the data')

    # Model details
    parser.add_argument('--embedding_size', type=int, default=None, help='Default embedding size if embedding_file is not given')
    parser.add_argument('--hidden_size', type=int, default=128, help='Hidden size of RNN units')
    parser.add_argument('--bidir', type='bool', default=True, help='bidir: whether to use a bidirectional RNN')
    parser.add_argument('--num_layers', type=int, default=1, help='Number of RNN layers')
    parser.add_argument('--rnn_type', type=str, default='gru', help='RNN type: lstm or gru (default)')
    parser.add_argument('--att_func', type=str, default='bilinear', help='Attention function: bilinear (default) or mlp or avg or last or dot')

    # Optimization details
    parser.add_argument('--batch_size', type=int, default=32, help='Batch size')
    parser.add_argument('--num_epoches', type=int, default=100, help='Number of epochs')
    parser.add_argument('--eval_iter', type=int, default=100, help='Evaluation on dev set after K updates')
    parser.add_argument('--dropout_rate', type=float, default=0.2, help='Dropout rate')
    parser.add_argument('--optimizer', type=str, default='sgd', help='Optimizer: sgd (default) or adam or rmsprop')
    parser.add_argument('--learning_rate', '-lr', type=float, default=0.1, help='Learning rate for SGD')
    parser.add_argument('--grad_clipping', type=float, default=10.0, help='Gradient clipping')

    return parser.parse_args()
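A minimal, self-contained sketch of the `parser.register('type', 'bool', str2bool)` pattern used above: argparse has no built-in boolean converter, so the registry maps the string `'bool'` to `str2bool`, which turns `'yes'/'true'/'t'/'1'/'y'` into `True` and everything else into `False`. The two flags shown here mirror the definitions above; the rest of the parser is omitted.

```python
import argparse


def str2bool(v):
    # mirrors the converter defined above
    return v.lower() in ('yes', 'true', 't', '1', 'y')


parser = argparse.ArgumentParser()
# register the converter under the name 'bool' so that type='bool' works below
parser.register('type', 'bool', str2bool)
parser.add_argument('--debug', type='bool', default=False)
parser.add_argument('--test_only', type='bool', default=False)

args = parser.parse_args(['--debug', 'yes', '--test_only', 'no'])
print(args.debug, args.test_only)  # True False
```

Note that any unrecognised string (including `'no'` or `'false'`) simply falls through to `False`, which is why the flags default to `False` rather than raising on bad input.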
# reponame: hrayrhar/limit-label-memorization
from tqdm import tqdm
import numpy as np


def compute_accuracy_with_bootstrapping(pred, target, n_iters=1000):
    """Expects numpy arrays. pred should have shape (n_samples, n_classes),
    while target should have shape (n_samples,).
    """
    assert pred.shape[0] == target.shape[0]
    all_accuracies = []
    for _ in tqdm(range(n_iters), desc='bootstrapping'):
        # resample the evaluation set with replacement
        indices = np.random.choice(pred.shape[0], size=pred.shape[0], replace=True)
        cur_pred = pred[indices]
        cur_target = target[indices]
        cur_accuracy = np.mean((cur_pred.argmax(axis=1) == cur_target).astype(float))
        all_accuracies.append(cur_accuracy)
    return {
        'mean': np.mean(all_accuracies),
        'std': np.std(all_accuracies)
    }
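A usage sketch of the bootstrap idea above, on made-up data. `bootstrap_accuracy` below is a simplified re-implementation for illustration (fixed seed, no tqdm), not the repository's function: resample (prediction, label) pairs with replacement, recompute accuracy each round, and report the mean and spread of the estimate.

```python
import numpy as np


def bootstrap_accuracy(pred, target, n_iters=200, seed=0):
    # simplified stand-in for compute_accuracy_with_bootstrapping
    rng = np.random.RandomState(seed)
    accs = []
    for _ in range(n_iters):
        idx = rng.choice(pred.shape[0], size=pred.shape[0], replace=True)
        accs.append(float(np.mean(pred[idx].argmax(axis=1) == target[idx])))
    return {'mean': float(np.mean(accs)), 'std': float(np.std(accs))}


# one-hot "scores" for 8 toy samples; the last label deliberately disagrees
pred = np.eye(4)[[0, 1, 2, 3, 0, 1, 2, 3]]
target = np.array([0, 1, 2, 3, 0, 1, 2, 0])

stats = bootstrap_accuracy(pred, target)
print(stats['mean'])  # close to 7/8, since one of eight samples is wrong
```

The standard deviation returned alongside the mean is what makes the bootstrap useful: it quantifies how much the accuracy estimate wobbles under resampling of the evaluation set.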
import os
import random

import numpy as np
import pandas as pd
import joblib
from matplotlib import pyplot as plt
import ipdb

import util
import training_config


def run(anomaly_data_path_for_testing, model_save_path, figure_save_path,):
    '''
    1. load all the anomalous trained models
    2. load testing anomaly data
    3. plot the log-likelihood wrt each model and plot in a same figure
    '''
    # load trained anomaly models
    anomaly_model_group_by_label = {}
    folders = os.listdir(training_config.anomaly_model_save_path)
    for fo in folders:
        path = os.path.join(training_config.anomaly_data_path, fo)
        if not os.path.isdir(path):
            continue
        anomaly_model_path = os.path.join(training_config.anomaly_model_save_path, fo,
                                          training_config.config_by_user['data_type_chosen'],
                                          training_config.config_by_user['model_type_chosen'],
                                          training_config.model_id)
        try:
            anomaly_model_group_by_label[fo] = joblib.load(anomaly_model_path + "/model_s%s.pkl" % (1,))
        except Exception:
            print 'anomaly model of %s not found' % (fo,)
            continue

    # one-folder
    confuse_matrix = {}
    folders = os.listdir(anomaly_data_path_for_testing)
    for fo in folders:
        predict_class = []
        data_path = os.path.join(anomaly_data_path_for_testing, fo)
        # [gap: the call that builds anomaly_testing_group_by_folder_name
        #  from data_path is missing in this excerpt]

        # one-file
        for trial_name in anomaly_testing_group_by_folder_name:
            '''
            #plot
            fig = plt.figure()
            ax = fig.add_subplot(111)
            from matplotlib.pyplot import cm
            color = iter(cm.rainbow(np.linspace(0, ...)))  # [truncated in this excerpt]
            '''
            calc_cofidence_resourse = []
            for model_label in anomaly_model_group_by_label:
                one_log_curve_of_this_model = util.fast_log_curve_calculation(
                    anomaly_testing_group_by_folder_name[trial_name][1],
                    anomaly_model_group_by_label[model_label])
                # one_predict_proba_of_this_state = anomaly_model_group_by_label[model_label].predict_proba(anomaly_testing_group_by_folder_name[trial_name][1])
                # HDPHSMM haven't implemented this
                # one_hidden_stateSeq_of_this_state = anomaly_model_group_by_label[model_label].decode(anomaly_testing_group_by_folder_name[trial_name][1], len(anomaly_testing_group_by_folder_name[trial_name][1]) - 1)
                calc_cofidence_resourse.append({
                    'model_label': model_label,
                    'culmulative_loglik': one_log_curve_of_this_model[-1],
                    'loglik_curve': one_log_curve_of_this_model,
                    # 'predict_proba' : one_predict_proba_of_this_state,
                    # 'hidden_stateSeq' : one_hidden_stateSeq_of_this_state
                })
                '''
                #--plot
                c = next(color)
                plot_line, = ax.plot(one_log_curve_of_this_model, linestyle="solid", color=c)
                plot_line.set_label(model_label)
                title = ('Anomaly_identification for ' + fo)
                ax.set_title(title)
                '''
            sorted_list = sorted(calc_cofidence_resourse, key=lambda x: x['culmulative_loglik'])  # from small to large
            optimal_result = sorted_list[-1]
            classified_model = optimal_result['model_label']
            predict_class.append(classified_model)
            all_log_curves_of_this_state, threshold, confidence = get_confidence_of_identification(optimal_result)
            '''
            if confidence < 0.0:
                df = pd.DataFrame(anomaly_testing_group_by_folder_name[trial_name][1], columns=training_config.interested_data_fields)
                id = random.randint(1000, 10000)
                _name = 'unknown_anomaly_' + str(id)
                unknown_anomaly_path = os.path.join(training_config.anomaly_data_path, _name)
                os.makedirs(unknown_anomaly_path)
                print 'generated a new anomaly:' + _name
                print '*\n' * 5
                print 'synthetic data generation'
                import generate_synthetic_data
                generate_synthetic_data.run_finite_differece_matrix(df=df, num_data=5, csv_save_path=unknown_anomaly_path, trial_name)
            '''
            '''
            #--plot
            for no_trial in range(len(all_log_curves_of_this_state)):
                ax.plot(all_log_curves_of_this_state[no_trial], linestyle='--', color='gray', label='trials')
            ax.plot(threshold.tolist()[0], linestyle='--', color='gold', label='threshold')
            ax.legend(loc='upper left')
            ax.text(20, optimal_result['culmulative_loglik'] / 2,
                    optimal_result['model_label'] + ': ' + str(confidence),
                    ha='center', va='baseline',
                    bbox=dict(boxstyle="round", ec=(1., 0.6, 0.6), fc=(1., 0.9, 0.9),))
            if not os.path.isdir(figure_save_path + '/anomaly_identification_plot'):
                os.makedirs(figure_save_path + '/anomaly_identification_plot')
            fig.savefig(os.path.join(figure_save_path, 'anomaly_identification_plot', fo + ":" + trial_name + ...))  # [truncated in this excerpt]
            raw_input('testing another trial?? Please any key to continue')
            '''
        print 'Finish testing: ' + fo + '\n'
        confuse_matrix[fo] = predict_class

    _items = confuse_matrix.keys()
    _matrix = np.identity(len(_items))
    for row in _items:
        for col in _items:
            r = _items.index(row)
            c = _items.index(col)
            _matrix[r, c] = confuse_matrix[row].count(col)
    print 'print the confusion matrix...'
    print _items
    print _matrix


def get_confidence_of_identification(optimal_result):
    confidence_metric = ['culmulative_loglik_divide_by_the_culmulative_mean_loglik',
                         'posterior_of_gmm_model',
                         'calc_kullback_leibler_divergence_of_predict_proba',
                         'hamming_distance_of_hidden_state_sequence',
                         ]
    CONF_TYPE = confidence_metric[0]
    anomaly_model_path = os.path.join(training_config.anomaly_model_save_path,
                                      optimal_result['model_label'],
                                      training_config.config_by_user['data_type_chosen'],
                                      training_config.config_by_user['model_type_chosen'],
                                      training_config.model_id)
    if CONF_TYPE == 'culmulative_loglik_divide_by_the_culmulative_mean_loglik':
        c_value = 5
        all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl'))
        std_of_log_likelihood = joblib.load(os.path.join(anomaly_model_path, 'std_of_log_likelihood.pkl'))
        np_matrix_traj_by_time = np.matrix(all_log_curves_of_this_state)
        mean_of_log_curve = np_matrix_traj_by_time.mean(0)
        threshold = mean_of_log_curve - std_of_log_likelihood[1]
        confidence = optimal_result['culmulative_loglik'] - threshold.tolist()[0][-1]
        return all_log_curves_of_this_state, threshold, confidence
    elif CONF_TYPE == 'posterior_of_gmm_model':
        # load -> build a hmm model -> calculate the probability of testing sequence
        # [gap: loading of the training log-curves and the loop header over
        #  curves are missing in this excerpt; the loop body survives:]
        #     data_points = np.vstack([np.array(tVal), feature]).T
        #     if icurve == 0:
        #         data = data_points
        #     else:
        #         data = np.vstack([data, data_points])
        # fit a gmm model
        from sklearn import mixture
        gmm = mixture.GaussianMixture(n_components=5, covariance_type='diag').fit(data)
        tVal = range(optimal_result['loglik_curve'].shape[0])
        # [gap: testing_data = ... is truncated in this excerpt]
    elif CONF_TYPE == 'calc_kullback_leibler_divergence_of_predict_proba':
        print 'obsoleted'
        pass
        from scipy.stats import entropy
        # average_predict_proba
        average_predict_proba = joblib.load(os.path.join(anomaly_model_path, 'average_predict_proba.pkl'))
        # [gap: computation of testing_predict_proba and initialisation of
        #  confidence are missing in this excerpt]
        for iObs in range(len(testing_predict_proba)):
            confidence += entropy(testing_predict_proba[iObs, :], average_predict_proba[1][iObs, :])
        return None, None, confidence
    elif CONF_TYPE == 'hamming_distance_of_hidden_state_sequence':
        # hidden_state_sequence_of_training_trials
        hidden_stateSeq = joblib.load(os.path.join(anomaly_model_path, 'hidden_stateSeq.pkl'))
        hidden_stateSeq = np.append(hidden_stateSeq, [hidden_stateSeq[-1]])  # add one item, because for autoregressive model, ...
        totalLen = len(hidden_stateSeq)
        testing_stateSeq = optimal_result['hidden_stateSeq']
        testing_stateSeq = np.append(testing_stateSeq, [testing_stateSeq[-1]])
        tLen = len(testing_stateSeq)
        hidden_stateSeq = hidden_stateSeq.reshape(totalLen / tLen, tLen)
        from scipy.spatial.distance import hamming
        confidence = 0
        for iTrial in range(totalLen / tLen):
            confidence += hamming(testing_stateSeq, hidden_stateSeq[iTrial, :])
        return None, None, confidence
    else:
        # [truncated in this excerpt: print ("without the ...]
        pass
I had deleted one data point totalLen = len(hidden_stateSeq) testing_stateSeq =", "= {} folders = os.listdir(training_config.anomaly_model_save_path) for fo in folders: path = os.path.join(training_config.anomaly_data_path, fo)", "'all_log_curves_of_this_state.pkl')) std_of_log_likelihood = joblib.load(os.path.join(anomaly_model_path, 'std_of_log_likelihood.pkl')) np_matrix_traj_by_time = np.matrix(all_log_curves_of_this_state) mean_of_log_curve = np_matrix_traj_by_time.mean(0) threshold =", "print _items print _matrix def get_confidence_of_identification(optimal_result): confidence_metric = ['culmulative_loglik_divide_by_the_culmulative_mean_loglik', 'posterior_of_gmm_model', 'calc_kullback_leibler_divergence_of_predict_proba', 'hamming_distance_of_hidden_state_sequence', ]", "from scipy.stats import entropy # average_predict_proba average_predict_proba = joblib.load(os.path.join(anomaly_model_path, 'average_predict_proba.pkl')) testing_predict_proba = optimal_result['predict_proba']", "optimal_result['predict_proba'] confidence = 0.0 for iObs in range(len(testing_predict_proba)): confidence += entropy(testing_predict_proba[iObs,:], average_predict_proba[1][iObs,:]) return", "id = random.randint(1000,10000) _name = 'unknown_anomaly_' + str(id) unknown_anomaly_path = os.path.join(training_config.anomaly_data_path, _name) os.makedirs(unknown_anomaly_path)", "continue anomaly_testing_group_by_folder_name = util.get_anomaly_data_for_labelled_case(training_config, data_path) # one-file for trial_name in anomaly_testing_group_by_folder_name: ''' #plot", ": model_label, 'culmulative_loglik': one_log_curve_of_this_model[-1], 'loglik_curve' : one_log_curve_of_this_model, # 'predict_proba' : one_predict_proba_of_this_state, # 'hidden_stateSeq'", "sorted(calc_cofidence_resourse, key=lambda x:x['culmulative_loglik']) # from small to large optimal_result = sorted_list[-1] classified_model =", "= get_confidence_of_identification(optimal_result) ''' if confidence < 0.0: df = 
pd.DataFrame(anomaly_testing_group_by_folder_name[trial_name][1], columns=training_config.interested_data_fields) id =", "all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl')) std_of_log_likelihood = joblib.load(os.path.join(anomaly_model_path, 'std_of_log_likelihood.pkl')) np_matrix_traj_by_time = np.matrix(all_log_curves_of_this_state) mean_of_log_curve =", "}) ''' #--plot c = next(color) plot_line, = ax.plot(one_log_curve_of_this_model, linestyle=\"solid\", color = c)", "gmm.score(testing_data) return all_log_curves_of_this_state, None, confidence elif CONF_TYPE == 'calc_kullback_leibler_divergence_of_predict_proba': print 'obsoleted' pass from", "from sklearn import mixture gmm = mixture.GaussianMixture(n_components = 5, covariance_type = 'diag').fit(data) tVal", "plt.figure() ax = fig.add_subplot(111) from matplotlib.pyplot import cm color = iter(cm.rainbow(np.linspace(0, 1, len(anomaly_model_group_by_label))))", "print 'print the confusion matrix...' 
print _items print _matrix def get_confidence_of_identification(optimal_result): confidence_metric =", "trial_name in anomaly_testing_group_by_folder_name: ''' #plot fig = plt.figure() ax = fig.add_subplot(111) from matplotlib.pyplot", "model_label, 'culmulative_loglik': one_log_curve_of_this_model[-1], 'loglik_curve' : one_log_curve_of_this_model, # 'predict_proba' : one_predict_proba_of_this_state, # 'hidden_stateSeq' :", "= os.listdir(training_config.anomaly_model_save_path) for fo in folders: path = os.path.join(training_config.anomaly_data_path, fo) if not os.path.isdir(path):", "os.makedirs(unknown_anomaly_path) print 'generated a new anomaly:' + _name print '*\\n'*5 print 'synthetic data", "np.vstack([np.array(tVal), feature]).T if icurve == 0: data = data_points else: data = np.vstack([data,", "classified_model = optimal_result['model_label'] predict_class.append(classified_model) all_log_curves_of_this_state, threshold, confidence = get_confidence_of_identification(optimal_result) ''' if confidence <", "item, because for autoregressive model, I had deleted one data point totalLen =", "print _matrix def get_confidence_of_identification(optimal_result): confidence_metric = ['culmulative_loglik_divide_by_the_culmulative_mean_loglik', 'posterior_of_gmm_model', 'calc_kullback_leibler_divergence_of_predict_proba', 'hamming_distance_of_hidden_state_sequence', ] CONF_TYPE =", "for fo in folders: path = os.path.join(training_config.anomaly_data_path, fo) if not os.path.isdir(path): continue anomaly_model_path", "#--plot c = next(color) plot_line, = ax.plot(one_log_curve_of_this_model, linestyle=\"solid\", color = c) plot_line.set_label(model_label) title", "np.identity(len(_items)) for row in _items: for col in _items: r = _items.index(row) c", "sequence all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl')) data = np.ndarray([]) for icurve in range(len(all_log_curves_of_this_state)): tVal", "fo, 
training_config.config_by_user['data_type_chosen'], training_config.config_by_user['model_type_chosen'], training_config.model_id) try: anomaly_model_group_by_label[fo] = joblib.load(anomaly_model_path + \"/model_s%s.pkl\"%(1,)) except IOError: print", "anomaly_model_group_by_label[fo] = joblib.load(anomaly_model_path + \"/model_s%s.pkl\"%(1,)) except IOError: print 'anomaly model of %s not", "all_log_curves_of_this_state, threshold, confidence elif CONF_TYPE == 'posterior_of_gmm_model': #load -> build a hmm model", "models 2. load testing anomaly data 3. plot the log-likelihood wrt each model", "= 5, covariance_type = 'diag').fit(data) tVal = range(optimal_result['loglik_curve'].shape[0]) testing_data = np.vstack([np.array(tVal), optimal_result['loglik_curve']]).T confidence", "because for autoregressive model, I had deleted one data point totalLen = len(hidden_stateSeq)", "one data point totalLen = len(hidden_stateSeq) testing_stateSeq = optimal_result['hidden_stateSeq'] testing_stateSeq = np.append(testing_stateSeq, [testing_stateSeq[-1]])", "'hidden_stateSeq' : one_hidden_stateSeq_of_this_state }) ''' #--plot c = next(color) plot_line, = ax.plot(one_log_curve_of_this_model, linestyle=\"solid\",", "CONF_TYPE == 'culmulative_loglik_divide_by_the_culmulative_mean_loglik': c_value = 5 all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl')) std_of_log_likelihood = joblib.load(os.path.join(anomaly_model_path,", "= optimal_result['culmulative_loglik'] - threshold.tolist()[0][-1] return all_log_curves_of_this_state, threshold, confidence elif CONF_TYPE == 'posterior_of_gmm_model': #load", "for ' + fo) ax.set_title(title) ''' sorted_list = sorted(calc_cofidence_resourse, key=lambda x:x['culmulative_loglik']) # from", "in folders: path = os.path.join(training_config.anomaly_data_path, fo) if not os.path.isdir(path): continue anomaly_model_path = os.path.join(training_config.anomaly_model_save_path,", 
"hamming(testing_stateSeq, hidden_stateSeq[iTrial,:]) return None, None, confidence else: print (\"without the confidence_metric as: \"", "'unknown_anomaly_' + str(id) unknown_anomaly_path = os.path.join(training_config.anomaly_data_path, _name) os.makedirs(unknown_anomaly_path) print 'generated a new anomaly:'", "# hidden_state_sequence_of_training_trials hidden_stateSeq = joblib.load(os.path.join(anomaly_model_path, 'hidden_stateSeq.pkl')) hidden_stateSeq = np.append(hidden_stateSeq, [hidden_stateSeq[-1]]) # add one", "one_hidden_stateSeq_of_this_state = anomaly_model_group_by_label[model_label].decode(anomaly_testing_group_by_folder_name[trial_name][1],len(anomaly_testing_group_by_folder_name[trial_name][1])-1) calc_cofidence_resourse.append({ 'model_label' : model_label, 'culmulative_loglik': one_log_curve_of_this_model[-1], 'loglik_curve' : one_log_curve_of_this_model, #", "fo) if not os.path.isdir(path): continue anomaly_model_path = os.path.join(training_config.anomaly_model_save_path, fo, training_config.config_by_user['data_type_chosen'], training_config.config_by_user['model_type_chosen'], training_config.model_id) try:", "model and plot in a same figure ''' # load trained anomaly models", "print 'synthetic data generation' import generate_synthetic_data generate_synthetic_data.run_finite_differece_matrix(df=df, num_data = 5, csv_save_path=unknown_anomaly_path, trial_name) '''", "%s not found'%(fo,) continue # one-folder confuse_matrix = {} folders = os.listdir(anomaly_data_path_for_testing) for", "# add one item, because for autoregressive model, I had deleted one data", "= os.path.join(anomaly_data_path_for_testing, fo) if not os.path.isdir(path): continue anomaly_testing_group_by_folder_name = util.get_anomaly_data_for_labelled_case(training_config, data_path) # one-file", "hmm model -> calculate the probability of testing sequence all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl'))", "if not 
os.path.isdir(path): continue anomaly_testing_group_by_folder_name = util.get_anomaly_data_for_labelled_case(training_config, data_path) # one-file for trial_name in", "1, len(anomaly_model_group_by_label)))) ''' calc_cofidence_resourse = [] for model_label in anomaly_model_group_by_label: one_log_curve_of_this_model = util.fast_log_curve_calculation(", "np_matrix_traj_by_time.mean(0) threshold = mean_of_log_curve - std_of_log_likelihood[1] confidence = optimal_result['culmulative_loglik'] - threshold.tolist()[0][-1] return all_log_curves_of_this_state,", "tVal = range(optimal_result['loglik_curve'].shape[0]) testing_data = np.vstack([np.array(tVal), optimal_result['loglik_curve']]).T confidence = gmm.score(testing_data) return all_log_curves_of_this_state, None,", "range(len(all_log_curves_of_this_state[icurve])) feature = all_log_curves_of_this_state[icurve] data_points = np.vstack([np.array(tVal), feature]).T if icurve == 0: data", "{} folders = os.listdir(anomaly_data_path_for_testing) for fo in folders: predict_class = [] data_path =", "for trial_name in anomaly_testing_group_by_folder_name: ''' #plot fig = plt.figure() ax = fig.add_subplot(111) from", "'synthetic data generation' import generate_synthetic_data generate_synthetic_data.run_finite_differece_matrix(df=df, num_data = 5, csv_save_path=unknown_anomaly_path, trial_name) ''' #--plot", "import generate_synthetic_data generate_synthetic_data.run_finite_differece_matrix(df=df, num_data = 5, csv_save_path=unknown_anomaly_path, trial_name) ''' #--plot ''' for no_trial", "c_value = 5 all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl')) std_of_log_likelihood = joblib.load(os.path.join(anomaly_model_path, 'std_of_log_likelihood.pkl')) np_matrix_traj_by_time =", "len(testing_stateSeq) hidden_stateSeq = hidden_stateSeq.reshape(totalLen/tLen, tLen) from scipy.spatial.distance import hamming confidence = 0 for", "folders: predict_class = [] data_path = 
os.path.join(anomaly_data_path_for_testing, fo) if not os.path.isdir(path): continue anomaly_testing_group_by_folder_name", "trial_name) ''' #--plot ''' for no_trial in range(len(all_log_curves_of_this_state)): ax.plot(all_log_curves_of_this_state[no_trial], linestyle= '--', color =", "fig.savefig(os.path.join(figure_save_path, 'anomaly_identification_plot', fo + \":\" + trial_name + \".jpg\"), format=\"jpg\") # fig.show(1) #", "average_predict_proba = joblib.load(os.path.join(anomaly_model_path, 'average_predict_proba.pkl')) testing_predict_proba = optimal_result['predict_proba'] confidence = 0.0 for iObs in", "''' #plot fig = plt.figure() ax = fig.add_subplot(111) from matplotlib.pyplot import cm color", "average_predict_proba average_predict_proba = joblib.load(os.path.join(anomaly_model_path, 'average_predict_proba.pkl')) testing_predict_proba = optimal_result['predict_proba'] confidence = 0.0 for iObs", "folders = os.listdir(training_config.anomaly_model_save_path) for fo in folders: path = os.path.join(training_config.anomaly_data_path, fo) if not", "_items: for col in _items: r = _items.index(row) c = _items.index(col) _matrix[r, c]", "haven't implemented this # one_hidden_stateSeq_of_this_state = anomaly_model_group_by_label[model_label].decode(anomaly_testing_group_by_folder_name[trial_name][1],len(anomaly_testing_group_by_folder_name[trial_name][1])-1) calc_cofidence_resourse.append({ 'model_label' : model_label, 'culmulative_loglik': one_log_curve_of_this_model[-1],", "data point totalLen = len(hidden_stateSeq) testing_stateSeq = optimal_result['hidden_stateSeq'] testing_stateSeq = np.append(testing_stateSeq, [testing_stateSeq[-1]]) tLen", "matplotlib.pyplot import cm color = iter(cm.rainbow(np.linspace(0, 1, len(anomaly_model_group_by_label)))) ''' calc_cofidence_resourse = [] for", "training_config.config_by_user['data_type_chosen'], training_config.config_by_user['model_type_chosen'], training_config.model_id) try: anomaly_model_group_by_label[fo] = 
joblib.load(anomaly_model_path + \"/model_s%s.pkl\"%(1,)) except IOError: print 'anomaly", "for autoregressive model, I had deleted one data point totalLen = len(hidden_stateSeq) testing_stateSeq", "''' calc_cofidence_resourse = [] for model_label in anomaly_model_group_by_label: one_log_curve_of_this_model = util.fast_log_curve_calculation( anomaly_testing_group_by_folder_name[trial_name][1], anomaly_model_group_by_label[model_label])", "'calc_kullback_leibler_divergence_of_predict_proba': print 'obsoleted' pass from scipy.stats import entropy # average_predict_proba average_predict_proba = joblib.load(os.path.join(anomaly_model_path,", "not found'%(fo,) continue # one-folder confuse_matrix = {} folders = os.listdir(anomaly_data_path_for_testing) for fo", "figure ''' # load trained anomaly models anomaly_model_group_by_label = {} folders = os.listdir(training_config.anomaly_model_save_path)", "os.path.join(training_config.anomaly_data_path, fo) if not os.path.isdir(path): continue anomaly_model_path = os.path.join(training_config.anomaly_model_save_path, fo, training_config.config_by_user['data_type_chosen'], training_config.config_by_user['model_type_chosen'], training_config.model_id)", "''' #--plot c = next(color) plot_line, = ax.plot(one_log_curve_of_this_model, linestyle=\"solid\", color = c) plot_line.set_label(model_label)", "except IOError: print 'anomaly model of %s not found'%(fo,) continue # one-folder confuse_matrix", "plot the log-likelihood wrt each model and plot in a same figure '''", "= 0 for iTrial in range(totalLen/tLen): confidence += hamming(testing_stateSeq, hidden_stateSeq[iTrial,:]) return None, None,", "= np_matrix_traj_by_time.mean(0) threshold = mean_of_log_curve - std_of_log_likelihood[1] confidence = optimal_result['culmulative_loglik'] - threshold.tolist()[0][-1] return", "for col in _items: r = _items.index(row) c = _items.index(col) _matrix[r, c] =", "= confuse_matrix[row].count(col) print 'print the confusion matrix...' 
print _items print _matrix def get_confidence_of_identification(optimal_result):", "return None, None, confidence else: print (\"without the confidence_metric as: \" + CONF_TYPE)", "util.get_anomaly_data_for_labelled_case(training_config, data_path) # one-file for trial_name in anomaly_testing_group_by_folder_name: ''' #plot fig = plt.figure()", "calc_cofidence_resourse.append({ 'model_label' : model_label, 'culmulative_loglik': one_log_curve_of_this_model[-1], 'loglik_curve' : one_log_curve_of_this_model, # 'predict_proba' : one_predict_proba_of_this_state,", "+ fo) ax.set_title(title) ''' sorted_list = sorted(calc_cofidence_resourse, key=lambda x:x['culmulative_loglik']) # from small to", "no_trial in range(len(all_log_curves_of_this_state)): ax.plot(all_log_curves_of_this_state[no_trial], linestyle= '--', color = 'gray', label = 'trials') ax.plot(threshold.tolist()[0],", "numpy as np from sklearn.externals import joblib from matplotlib import pyplot as plt", "columns=training_config.interested_data_fields) id = random.randint(1000,10000) _name = 'unknown_anomaly_' + str(id) unknown_anomaly_path = os.path.join(training_config.anomaly_data_path, _name)", "anomaly_testing_group_by_folder_name[trial_name][1], anomaly_model_group_by_label[model_label]) # one_predict_proba_of_this_state = anomaly_model_group_by_label[model_label].predict_proba(anomaly_testing_group_by_folder_name[trial_name][1]) # HDPHSMM haven't implemented this # one_hidden_stateSeq_of_this_state", "'hamming_distance_of_hidden_state_sequence', ] CONF_TYPE = confidence_metric[0] anomaly_model_path = os.path.join(training_config.anomaly_model_save_path, optimal_result['model_label'], training_config.config_by_user['data_type_chosen'], training_config.config_by_user['model_type_chosen'], training_config.model_id) if", "np.vstack([data, data_points]) # fit a gmm model from sklearn import mixture gmm =", "model from sklearn import mixture gmm = mixture.GaussianMixture(n_components = 5, covariance_type = 
'diag').fit(data)", "any key to continue') ''' print 'Finish testing: '+ fo + '\\n' confuse_matrix[fo]", "fo in folders: predict_class = [] data_path = os.path.join(anomaly_data_path_for_testing, fo) if not os.path.isdir(path):", "matplotlib import pyplot as plt import util import training_config import pandas as pd", "entropy(testing_predict_proba[iObs,:], average_predict_proba[1][iObs,:]) return None, None, confidence elif CONF_TYPE == 'hamming_distance_of_hidden_state_sequence': # hidden_state_sequence_of_training_trials hidden_stateSeq", "one-file for trial_name in anomaly_testing_group_by_folder_name: ''' #plot fig = plt.figure() ax = fig.add_subplot(111)", "optimal_result['model_label'], training_config.config_by_user['data_type_chosen'], training_config.config_by_user['model_type_chosen'], training_config.model_id) if CONF_TYPE == 'culmulative_loglik_divide_by_the_culmulative_mean_loglik': c_value = 5 all_log_curves_of_this_state =", "= range(len(all_log_curves_of_this_state[icurve])) feature = all_log_curves_of_this_state[icurve] data_points = np.vstack([np.array(tVal), feature]).T if icurve == 0:", "ax.plot(all_log_curves_of_this_state[no_trial], linestyle= '--', color = 'gray', label = 'trials') ax.plot(threshold.tolist()[0], linestyle='--', color='gold', label='threshold')", "optimal_result['culmulative_loglik'] - threshold.tolist()[0][-1] return all_log_curves_of_this_state, threshold, confidence elif CONF_TYPE == 'posterior_of_gmm_model': #load ->", "_matrix def get_confidence_of_identification(optimal_result): confidence_metric = ['culmulative_loglik_divide_by_the_culmulative_mean_loglik', 'posterior_of_gmm_model', 'calc_kullback_leibler_divergence_of_predict_proba', 'hamming_distance_of_hidden_state_sequence', ] CONF_TYPE = confidence_metric[0]", "np.matrix(all_log_curves_of_this_state) mean_of_log_curve = np_matrix_traj_by_time.mean(0) threshold = mean_of_log_curve - std_of_log_likelihood[1] confidence = optimal_result['culmulative_loglik'] -", "= 
['culmulative_loglik_divide_by_the_culmulative_mean_loglik', 'posterior_of_gmm_model', 'calc_kullback_leibler_divergence_of_predict_proba', 'hamming_distance_of_hidden_state_sequence', ] CONF_TYPE = confidence_metric[0] anomaly_model_path = os.path.join(training_config.anomaly_model_save_path, optimal_result['model_label'],", "threshold, confidence = get_confidence_of_identification(optimal_result) ''' if confidence < 0.0: df = pd.DataFrame(anomaly_testing_group_by_folder_name[trial_name][1], columns=training_config.interested_data_fields)", "+ _name print '*\\n'*5 print 'synthetic data generation' import generate_synthetic_data generate_synthetic_data.run_finite_differece_matrix(df=df, num_data =", "# average_predict_proba average_predict_proba = joblib.load(os.path.join(anomaly_model_path, 'average_predict_proba.pkl')) testing_predict_proba = optimal_result['predict_proba'] confidence = 0.0 for", "'all_log_curves_of_this_state.pkl')) data = np.ndarray([]) for icurve in range(len(all_log_curves_of_this_state)): tVal = range(len(all_log_curves_of_this_state[icurve])) feature =", "_name print '*\\n'*5 print 'synthetic data generation' import generate_synthetic_data generate_synthetic_data.run_finite_differece_matrix(df=df, num_data = 5,", "'Finish testing: '+ fo + '\\n' confuse_matrix[fo] = predict_class _items = confuse_matrix.keys() _matrix", "in anomaly_testing_group_by_folder_name: ''' #plot fig = plt.figure() ax = fig.add_subplot(111) from matplotlib.pyplot import", "+ trial_name + \".jpg\"), format=\"jpg\") # fig.show(1) # raw_input('testing another trial?? 
Please any", "return all_log_curves_of_this_state, threshold, confidence elif CONF_TYPE == 'posterior_of_gmm_model': #load -> build a hmm", "np from sklearn.externals import joblib from matplotlib import pyplot as plt import util", "fit a gmm model from sklearn import mixture gmm = mixture.GaussianMixture(n_components = 5,", "#!/usr/bin/env python import os import numpy as np from sklearn.externals import joblib from", "pd import random import ipdb def run(anomaly_data_path_for_testing, model_save_path, figure_save_path,): ''' 1. load all", "not os.path.isdir(path): continue anomaly_testing_group_by_folder_name = util.get_anomaly_data_for_labelled_case(training_config, data_path) # one-file for trial_name in anomaly_testing_group_by_folder_name:", "anomaly_model_path = os.path.join(training_config.anomaly_model_save_path, optimal_result['model_label'], training_config.config_by_user['data_type_chosen'], training_config.config_by_user['model_type_chosen'], training_config.model_id) if CONF_TYPE == 'culmulative_loglik_divide_by_the_culmulative_mean_loglik': c_value =", "plt import util import training_config import pandas as pd import random import ipdb", "hidden_stateSeq = joblib.load(os.path.join(anomaly_model_path, 'hidden_stateSeq.pkl')) hidden_stateSeq = np.append(hidden_stateSeq, [hidden_stateSeq[-1]]) # add one item, because", ": one_predict_proba_of_this_state, # 'hidden_stateSeq' : one_hidden_stateSeq_of_this_state }) ''' #--plot c = next(color) plot_line,", "all_log_curves_of_this_state, None, confidence elif CONF_TYPE == 'calc_kullback_leibler_divergence_of_predict_proba': print 'obsoleted' pass from scipy.stats import", "color = 'gray', label = 'trials') ax.plot(threshold.tolist()[0], linestyle='--', color='gold', label='threshold') ax.legend(loc='upper left') ax.text(20,optimal_result['culmulative_loglik']/2,", "confidence < 0.0: df = pd.DataFrame(anomaly_testing_group_by_folder_name[trial_name][1], columns=training_config.interested_data_fields) id = 
random.randint(1000,10000) _name = 'unknown_anomaly_'", "None, confidence elif CONF_TYPE == 'calc_kullback_leibler_divergence_of_predict_proba': print 'obsoleted' pass from scipy.stats import entropy", "+ \":\" + trial_name + \".jpg\"), format=\"jpg\") # fig.show(1) # raw_input('testing another trial??", "anomaly_testing_group_by_folder_name = util.get_anomaly_data_for_labelled_case(training_config, data_path) # one-file for trial_name in anomaly_testing_group_by_folder_name: ''' #plot fig", "and plot in a same figure ''' # load trained anomaly models anomaly_model_group_by_label", "fc=(1., 0.9, 0.9),) ) if not os.path.isdir(figure_save_path + '/anomaly_identification_plot'): os.makedirs(figure_save_path + '/anomaly_identification_plot') fig.savefig(os.path.join(figure_save_path,", "point totalLen = len(hidden_stateSeq) testing_stateSeq = optimal_result['hidden_stateSeq'] testing_stateSeq = np.append(testing_stateSeq, [testing_stateSeq[-1]]) tLen =", "c = next(color) plot_line, = ax.plot(one_log_curve_of_this_model, linestyle=\"solid\", color = c) plot_line.set_label(model_label) title =", "in folders: predict_class = [] data_path = os.path.join(anomaly_data_path_for_testing, fo) if not os.path.isdir(path): continue", "[hidden_stateSeq[-1]]) # add one item, because for autoregressive model, I had deleted one", "# from small to large optimal_result = sorted_list[-1] classified_model = optimal_result['model_label'] predict_class.append(classified_model) all_log_curves_of_this_state,", "from scipy.spatial.distance import hamming confidence = 0 for iTrial in range(totalLen/tLen): confidence +=", "std_of_log_likelihood[1] confidence = optimal_result['culmulative_loglik'] - threshold.tolist()[0][-1] return all_log_curves_of_this_state, threshold, confidence elif CONF_TYPE ==", "== 'culmulative_loglik_divide_by_the_culmulative_mean_loglik': c_value = 5 all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl')) 
std_of_log_likelihood = joblib.load(os.path.join(anomaly_model_path, 'std_of_log_likelihood.pkl'))", "= gmm.score(testing_data) return all_log_curves_of_this_state, None, confidence elif CONF_TYPE == 'calc_kullback_leibler_divergence_of_predict_proba': print 'obsoleted' pass", "len(anomaly_model_group_by_label)))) ''' calc_cofidence_resourse = [] for model_label in anomaly_model_group_by_label: one_log_curve_of_this_model = util.fast_log_curve_calculation( anomaly_testing_group_by_folder_name[trial_name][1],", "= fig.add_subplot(111) from matplotlib.pyplot import cm color = iter(cm.rainbow(np.linspace(0, 1, len(anomaly_model_group_by_label)))) ''' calc_cofidence_resourse", "the confusion matrix...' print _items print _matrix def get_confidence_of_identification(optimal_result): confidence_metric = ['culmulative_loglik_divide_by_the_culmulative_mean_loglik', 'posterior_of_gmm_model',", "== 'hamming_distance_of_hidden_state_sequence': # hidden_state_sequence_of_training_trials hidden_stateSeq = joblib.load(os.path.join(anomaly_model_path, 'hidden_stateSeq.pkl')) hidden_stateSeq = np.append(hidden_stateSeq, [hidden_stateSeq[-1]]) #", "CONF_TYPE == 'calc_kullback_leibler_divergence_of_predict_proba': print 'obsoleted' pass from scipy.stats import entropy # average_predict_proba average_predict_proba", "= next(color) plot_line, = ax.plot(one_log_curve_of_this_model, linestyle=\"solid\", color = c) plot_line.set_label(model_label) title = ('Anomaly_identification", "[] data_path = os.path.join(anomaly_data_path_for_testing, fo) if not os.path.isdir(path): continue anomaly_testing_group_by_folder_name = util.get_anomaly_data_for_labelled_case(training_config, data_path)", "linestyle=\"solid\", color = c) plot_line.set_label(model_label) title = ('Anomaly_identification for ' + fo) ax.set_title(title)", "ax.plot(one_log_curve_of_this_model, linestyle=\"solid\", color = c) plot_line.set_label(model_label) title = ('Anomaly_identification for ' + fo)", "= 
#!/usr/bin/env python
# repo: HongminWu/HMM
import os
import numpy as np
from sklearn.externals import joblib
from matplotlib import pyplot as plt
import util
import training_config
import pandas as pd
import random
import ipdb


def run(anomaly_data_path_for_testing, model_save_path, figure_save_path,):
    '''
    1. load all the anomalous trained models
    2. load testing anomaly data
    3. plot the log-likelihood wrt each model and plot in a same figure
    '''
    # load trained anomaly models
    anomaly_model_group_by_label = {}
    folders = os.listdir(training_config.anomaly_model_save_path)
    for fo in folders:
        path = os.path.join(training_config.anomaly_data_path, fo)
        if not os.path.isdir(path):
            continue
        anomaly_model_path = os.path.join(training_config.anomaly_model_save_path,
                                          fo,
                                          training_config.config_by_user['data_type_chosen'],
                                          training_config.config_by_user['model_type_chosen'],
                                          training_config.model_id)
        try:
            anomaly_model_group_by_label[fo] = joblib.load(anomaly_model_path + "/model_s%s.pkl"%(1,))
        except IOError:
            print 'anomaly model of %s not found'%(fo,)
            continue

    # one-folder
    confuse_matrix = {}
    folders = os.listdir(anomaly_data_path_for_testing)
    for fo in folders:
        predict_class = []
        data_path = os.path.join(anomaly_data_path_for_testing, fo)
        if not os.path.isdir(data_path):  # was `os.path.isdir(path)`: stale variable from the previous loop
            continue
        anomaly_testing_group_by_folder_name = util.get_anomaly_data_for_labelled_case(training_config, data_path)
        # one-file
        for trial_name in anomaly_testing_group_by_folder_name:
            '''
            #plot
            fig = plt.figure()
            ax = fig.add_subplot(111)
            from matplotlib.pyplot import cm
            color = iter(cm.rainbow(np.linspace(0, 1, len(anomaly_model_group_by_label))))
            '''
            calc_cofidence_resourse = []
            for model_label in anomaly_model_group_by_label:
                one_log_curve_of_this_model = util.fast_log_curve_calculation(
                    anomaly_testing_group_by_folder_name[trial_name][1],
                    anomaly_model_group_by_label[model_label])
                # one_predict_proba_of_this_state = anomaly_model_group_by_label[model_label].predict_proba(anomaly_testing_group_by_folder_name[trial_name][1])
                # HDPHSMM haven't implemented this
                # one_hidden_stateSeq_of_this_state = anomaly_model_group_by_label[model_label].decode(anomaly_testing_group_by_folder_name[trial_name][1], len(anomaly_testing_group_by_folder_name[trial_name][1])-1)
                calc_cofidence_resourse.append({
                    'model_label'       : model_label,
                    'culmulative_loglik': one_log_curve_of_this_model[-1],
                    'loglik_curve'      : one_log_curve_of_this_model,
                    # 'predict_proba'   : one_predict_proba_of_this_state,
                    # 'hidden_stateSeq' : one_hidden_stateSeq_of_this_state
                })
                '''
                #--plot
                c = next(color)
                plot_line, = ax.plot(one_log_curve_of_this_model, linestyle="solid", color = c)
                plot_line.set_label(model_label)
                title = ('Anomaly_identification for ' + fo)
                ax.set_title(title)
                '''
            sorted_list = sorted(calc_cofidence_resourse, key=lambda x: x['culmulative_loglik'])  # from small to large
            optimal_result = sorted_list[-1]
            classified_model = optimal_result['model_label']
            predict_class.append(classified_model)
            all_log_curves_of_this_state, threshold, confidence = get_confidence_of_identification(optimal_result)
            '''
            if confidence < 0.0:
                df = pd.DataFrame(anomaly_testing_group_by_folder_name[trial_name][1], columns=training_config.interested_data_fields)
                id = random.randint(1000, 10000)
                _name = 'unknown_anomaly_' + str(id)
                unknown_anomaly_path = os.path.join(training_config.anomaly_data_path, _name)
                os.makedirs(unknown_anomaly_path)
                print 'generated a new anomaly:' + _name
                print '*\n'*5
                print 'synthetic data generation'
                ...(csv_save_path=unknown_anomaly_path, trial_name)
            '''
            #--plot
            '''
            for no_trial in range(len(all_log_curves_of_this_state)):
                ax.plot(all_log_curves_of_this_state[no_trial], linestyle='--', color='gray', label='trials')
            ax.plot(threshold.tolist()[0], linestyle='--', color='gold', label='threshold')
            ax.legend(loc='upper left')
            ax.text(20, optimal_result['culmulative_loglik']/2,
                    optimal_result['model_label'] + ': ' + str(confidence),
                    ha='center', va='baseline',
                    bbox=dict(boxstyle="round", ec=(1., 0.6, 0.6), fc=(1., 0.9, 0.9),))
            if not os.path.isdir(figure_save_path + '/anomaly_identification_plot'):
                os.makedirs(figure_save_path + '/anomaly_identification_plot')
            fig.savefig(os.path.join(figure_save_path, 'anomaly_identification_plot', fo + ":" + trial_name + ".jpg"), format="jpg")
            # fig.show(1)
            # raw_input('testing another trial?? Please any key to continue')
            '''
        print 'Finish testing: ' + fo + '\n'
        confuse_matrix[fo] = predict_class

    _items = confuse_matrix.keys()
    _matrix = np.identity(len(_items))
    for row in _items:
        for col in _items:
            r = _items.index(row)
            c = _items.index(col)
            _matrix[r, c] = confuse_matrix[row].count(col)
    print 'print the confusion matrix...'
    print _items
    print _matrix


def get_confidence_of_identification(optimal_result):
    confidence_metric = ['culmulative_loglik_divide_by_the_culmulative_mean_loglik',
                         'posterior_of_gmm_model',
                         'calc_kullback_leibler_divergence_of_predict_proba',
                         'hamming_distance_of_hidden_state_sequence',
                         ]
    CONF_TYPE = confidence_metric[0]
    anomaly_model_path = os.path.join(training_config.anomaly_model_save_path,
                                      optimal_result['model_label'],
                                      training_config.config_by_user['data_type_chosen'],
                                      training_config.config_by_user['model_type_chosen'],
                                      training_config.model_id)
    if CONF_TYPE == 'culmulative_loglik_divide_by_the_culmulative_mean_loglik':
        c_value = 5
        all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl'))
        std_of_log_likelihood = joblib.load(os.path.join(anomaly_model_path, 'std_of_log_likelihood.pkl'))
        np_matrix_traj_by_time = np.matrix(all_log_curves_of_this_state)
        mean_of_log_curve = np_matrix_traj_by_time.mean(0)
        threshold = mean_of_log_curve - std_of_log_likelihood[1]
        confidence = optimal_result['culmulative_loglik'] - threshold.tolist()[0][-1]
        return all_log_curves_of_this_state, threshold, confidence
    elif CONF_TYPE == 'posterior_of_gmm_model':
        # load -> build a gmm model -> calculate the probability of testing sequence
        all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl'))
        data = np.ndarray([])
        for icurve in range(len(all_log_curves_of_this_state)):
            tVal = range(len(all_log_curves_of_this_state[icurve]))
            feature = all_log_curves_of_this_state[icurve]
            data_points = np.vstack([np.array(tVal), feature]).T
            if icurve == 0:
                data = data_points
            else:
                data = np.vstack([data, data_points])
        # fit a gmm model
        from sklearn import mixture
        gmm = mixture.GaussianMixture(n_components=5, covariance_type='diag').fit(data)
        tVal = range(optimal_result['loglik_curve'].shape[0])
        testing_data = np.vstack([np.array(tVal), optimal_result['loglik_curve']]).T
        confidence = gmm.score(testing_data)
        return all_log_curves_of_this_state, None, confidence
    elif CONF_TYPE == 'calc_kullback_leibler_divergence_of_predict_proba':
        print 'obsoleted'
        from scipy.stats import entropy
        # average_predict_proba
        average_predict_proba = joblib.load(os.path.join(anomaly_model_path, 'average_predict_proba.pkl'))
        testing_predict_proba = optimal_result['predict_proba']
        confidence = 0.0
        for iObs in range(len(testing_predict_proba)):
            confidence += entropy(testing_predict_proba[iObs, :], average_predict_proba[1][iObs, :])
        return None, None, confidence
    elif CONF_TYPE == 'hamming_distance_of_hidden_state_sequence':
        # hidden_state_sequence_of_training_trials
        hidden_stateSeq = joblib.load(os.path.join(anomaly_model_path, 'hidden_stateSeq.pkl'))
        # add one item, because for autoregressive model, I had deleted one data point
        hidden_stateSeq = np.append(hidden_stateSeq, [hidden_stateSeq[-1]])
        totalLen = len(hidden_stateSeq)
        testing_stateSeq = optimal_result['hidden_stateSeq']
        testing_stateSeq = np.append(testing_stateSeq, [testing_stateSeq[-1]])
        tLen = len(testing_stateSeq)
        hidden_stateSeq = hidden_stateSeq.reshape(totalLen/tLen, tLen)
        from scipy.spatial.distance import hamming
        confidence = 0
        for iTrial in range(totalLen/tLen):
            confidence += hamming(testing_stateSeq, hidden_stateSeq[iTrial, :])
        return None, None, confidence
    else:
        print ("without the confidence_metric as: " + CONF_TYPE)
None, None, confidence else:", "training_config import pandas as pd import random import ipdb def run(anomaly_data_path_for_testing, model_save_path, figure_save_path,):", "feature]).T if icurve == 0: data = data_points else: data = np.vstack([data, data_points])", "''' for no_trial in range(len(all_log_curves_of_this_state)): ax.plot(all_log_curves_of_this_state[no_trial], linestyle= '--', color = 'gray', label =", "'+ fo + '\\n' confuse_matrix[fo] = predict_class _items = confuse_matrix.keys() _matrix = np.identity(len(_items))", "- std_of_log_likelihood[1] confidence = optimal_result['culmulative_loglik'] - threshold.tolist()[0][-1] return all_log_curves_of_this_state, threshold, confidence elif CONF_TYPE", "fig.add_subplot(111) from matplotlib.pyplot import cm color = iter(cm.rainbow(np.linspace(0, 1, len(anomaly_model_group_by_label)))) ''' calc_cofidence_resourse =", "generate_synthetic_data generate_synthetic_data.run_finite_differece_matrix(df=df, num_data = 5, csv_save_path=unknown_anomaly_path, trial_name) ''' #--plot ''' for no_trial in", "import joblib from matplotlib import pyplot as plt import util import training_config import", "mean_of_log_curve = np_matrix_traj_by_time.mean(0) threshold = mean_of_log_curve - std_of_log_likelihood[1] confidence = optimal_result['culmulative_loglik'] - threshold.tolist()[0][-1]", "testing_data = np.vstack([np.array(tVal), optimal_result['loglik_curve']]).T confidence = gmm.score(testing_data) return all_log_curves_of_this_state, None, confidence elif CONF_TYPE", "import pyplot as plt import util import training_config import pandas as pd import", "CONF_TYPE == 'hamming_distance_of_hidden_state_sequence': # hidden_state_sequence_of_training_trials hidden_stateSeq = joblib.load(os.path.join(anomaly_model_path, 'hidden_stateSeq.pkl')) hidden_stateSeq = np.append(hidden_stateSeq, [hidden_stateSeq[-1]])", "[testing_stateSeq[-1]]) tLen = len(testing_stateSeq) hidden_stateSeq = hidden_stateSeq.reshape(totalLen/tLen, tLen) 
from scipy.spatial.distance import hamming confidence", "util.fast_log_curve_calculation( anomaly_testing_group_by_folder_name[trial_name][1], anomaly_model_group_by_label[model_label]) # one_predict_proba_of_this_state = anomaly_model_group_by_label[model_label].predict_proba(anomaly_testing_group_by_folder_name[trial_name][1]) # HDPHSMM haven't implemented this #", "'*\\n'*5 print 'synthetic data generation' import generate_synthetic_data generate_synthetic_data.run_finite_differece_matrix(df=df, num_data = 5, csv_save_path=unknown_anomaly_path, trial_name)", "# one_hidden_stateSeq_of_this_state = anomaly_model_group_by_label[model_label].decode(anomaly_testing_group_by_folder_name[trial_name][1],len(anomaly_testing_group_by_folder_name[trial_name][1])-1) calc_cofidence_resourse.append({ 'model_label' : model_label, 'culmulative_loglik': one_log_curve_of_this_model[-1], 'loglik_curve' : one_log_curve_of_this_model,", "icurve in range(len(all_log_curves_of_this_state)): tVal = range(len(all_log_curves_of_this_state[icurve])) feature = all_log_curves_of_this_state[icurve] data_points = np.vstack([np.array(tVal), feature]).T", "HDPHSMM haven't implemented this # one_hidden_stateSeq_of_this_state = anomaly_model_group_by_label[model_label].decode(anomaly_testing_group_by_folder_name[trial_name][1],len(anomaly_testing_group_by_folder_name[trial_name][1])-1) calc_cofidence_resourse.append({ 'model_label' : model_label, 'culmulative_loglik':", "threshold, confidence elif CONF_TYPE == 'posterior_of_gmm_model': #load -> build a hmm model ->", "import random import ipdb def run(anomaly_data_path_for_testing, model_save_path, figure_save_path,): ''' 1. 
load all the", "testing: '+ fo + '\\n' confuse_matrix[fo] = predict_class _items = confuse_matrix.keys() _matrix =", "= os.listdir(anomaly_data_path_for_testing) for fo in folders: predict_class = [] data_path = os.path.join(anomaly_data_path_for_testing, fo)", "import hamming confidence = 0 for iTrial in range(totalLen/tLen): confidence += hamming(testing_stateSeq, hidden_stateSeq[iTrial,:])", "generation' import generate_synthetic_data generate_synthetic_data.run_finite_differece_matrix(df=df, num_data = 5, csv_save_path=unknown_anomaly_path, trial_name) ''' #--plot ''' for", "trial_name + \".jpg\"), format=\"jpg\") # fig.show(1) # raw_input('testing another trial?? Please any key", "iter(cm.rainbow(np.linspace(0, 1, len(anomaly_model_group_by_label)))) ''' calc_cofidence_resourse = [] for model_label in anomaly_model_group_by_label: one_log_curve_of_this_model =", "a same figure ''' # load trained anomaly models anomaly_model_group_by_label = {} folders", "import ipdb def run(anomaly_data_path_for_testing, model_save_path, figure_save_path,): ''' 1. load all the anomalous trained", "ax.set_title(title) ''' sorted_list = sorted(calc_cofidence_resourse, key=lambda x:x['culmulative_loglik']) # from small to large optimal_result", "''' print 'Finish testing: '+ fo + '\\n' confuse_matrix[fo] = predict_class _items =", "confuse_matrix[row].count(col) print 'print the confusion matrix...' 
print _items print _matrix def get_confidence_of_identification(optimal_result): confidence_metric", "confidence elif CONF_TYPE == 'calc_kullback_leibler_divergence_of_predict_proba': print 'obsoleted' pass from scipy.stats import entropy #", "c) plot_line.set_label(model_label) title = ('Anomaly_identification for ' + fo) ax.set_title(title) ''' sorted_list =", "mixture gmm = mixture.GaussianMixture(n_components = 5, covariance_type = 'diag').fit(data) tVal = range(optimal_result['loglik_curve'].shape[0]) testing_data", "plot in a same figure ''' # load trained anomaly models anomaly_model_group_by_label =", "= 5 all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl')) std_of_log_likelihood = joblib.load(os.path.join(anomaly_model_path, 'std_of_log_likelihood.pkl')) np_matrix_traj_by_time = np.matrix(all_log_curves_of_this_state)", "confidence = optimal_result['culmulative_loglik'] - threshold.tolist()[0][-1] return all_log_curves_of_this_state, threshold, confidence elif CONF_TYPE == 'posterior_of_gmm_model':", "entropy # average_predict_proba average_predict_proba = joblib.load(os.path.join(anomaly_model_path, 'average_predict_proba.pkl')) testing_predict_proba = optimal_result['predict_proba'] confidence = 0.0", "= np.ndarray([]) for icurve in range(len(all_log_curves_of_this_state)): tVal = range(len(all_log_curves_of_this_state[icurve])) feature = all_log_curves_of_this_state[icurve] data_points", "= iter(cm.rainbow(np.linspace(0, 1, len(anomaly_model_group_by_label)))) ''' calc_cofidence_resourse = [] for model_label in anomaly_model_group_by_label: one_log_curve_of_this_model", "trial?? Please any key to continue') ''' print 'Finish testing: '+ fo +", "'print the confusion matrix...' 
print _items print _matrix def get_confidence_of_identification(optimal_result): confidence_metric = ['culmulative_loglik_divide_by_the_culmulative_mean_loglik',", "-> calculate the probability of testing sequence all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl')) data =", "= 'unknown_anomaly_' + str(id) unknown_anomaly_path = os.path.join(training_config.anomaly_data_path, _name) os.makedirs(unknown_anomaly_path) print 'generated a new", "+ ': ' + str(confidence), ha = 'center', va = 'baseline', bbox=dict(boxstyle=\"round\", ec=(1.,", "optimal_result = sorted_list[-1] classified_model = optimal_result['model_label'] predict_class.append(classified_model) all_log_curves_of_this_state, threshold, confidence = get_confidence_of_identification(optimal_result) '''", "confidence elif CONF_TYPE == 'posterior_of_gmm_model': #load -> build a hmm model -> calculate", "load all the anomalous trained models 2. load testing anomaly data 3. plot", "'generated a new anomaly:' + _name print '*\\n'*5 print 'synthetic data generation' import", "probability of testing sequence all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl')) data = np.ndarray([]) for icurve", "os.path.join(anomaly_data_path_for_testing, fo) if not os.path.isdir(path): continue anomaly_testing_group_by_folder_name = util.get_anomaly_data_for_labelled_case(training_config, data_path) # one-file for", "c] = confuse_matrix[row].count(col) print 'print the confusion matrix...' 
print _items print _matrix def", "load trained anomaly models anomaly_model_group_by_label = {} folders = os.listdir(training_config.anomaly_model_save_path) for fo in", "data = np.ndarray([]) for icurve in range(len(all_log_curves_of_this_state)): tVal = range(len(all_log_curves_of_this_state[icurve])) feature = all_log_curves_of_this_state[icurve]", "= all_log_curves_of_this_state[icurve] data_points = np.vstack([np.array(tVal), feature]).T if icurve == 0: data = data_points", "one_log_curve_of_this_model, # 'predict_proba' : one_predict_proba_of_this_state, # 'hidden_stateSeq' : one_hidden_stateSeq_of_this_state }) ''' #--plot c", "] CONF_TYPE = confidence_metric[0] anomaly_model_path = os.path.join(training_config.anomaly_model_save_path, optimal_result['model_label'], training_config.config_by_user['data_type_chosen'], training_config.config_by_user['model_type_chosen'], training_config.model_id) if CONF_TYPE", "else: data = np.vstack([data, data_points]) # fit a gmm model from sklearn import", "np_matrix_traj_by_time = np.matrix(all_log_curves_of_this_state) mean_of_log_curve = np_matrix_traj_by_time.mean(0) threshold = mean_of_log_curve - std_of_log_likelihood[1] confidence =", "joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl')) data = np.ndarray([]) for icurve in range(len(all_log_curves_of_this_state)): tVal = range(len(all_log_curves_of_this_state[icurve])) feature", "= np.append(hidden_stateSeq, [hidden_stateSeq[-1]]) # add one item, because for autoregressive model, I had", "_name = 'unknown_anomaly_' + str(id) unknown_anomaly_path = os.path.join(training_config.anomaly_data_path, _name) os.makedirs(unknown_anomaly_path) print 'generated a", "'center', va = 'baseline', bbox=dict(boxstyle=\"round\", ec=(1., 0.6, 0.6), fc=(1., 0.9, 0.9),) ) if", "run(anomaly_data_path_for_testing, model_save_path, figure_save_path,): ''' 1. load all the anomalous trained models 2. 
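The confusion-matrix bookkeeping at the end of `run()` can be sketched independently of the HMM models: count (true folder, predicted label) pairs into an integer matrix. A minimal Python 3 sketch; the `predictions` name and sample labels are illustrative, not from the original:

```python
import numpy as np

def confusion_matrix(predictions):
    """Build a confusion matrix from {true_label: [predicted_label, ...]}."""
    labels = sorted(predictions)                      # fixed row/column order
    index = {lab: i for i, lab in enumerate(labels)}
    matrix = np.zeros((len(labels), len(labels)), dtype=int)
    for true_lab, preds in predictions.items():
        for pred in preds:
            matrix[index[true_lab], index[pred]] += 1
    return labels, matrix

labels, m = confusion_matrix({
    'collision': ['collision', 'collision', 'slip'],
    'slip':      ['slip', 'slip', 'slip'],
})
# m[0] → [2, 1], m[1] → [0, 3]
```

Starting from `np.zeros` rather than `np.identity` (as the original does) avoids relying on every cell being overwritten to clear the seeded diagonal.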
load", "gmm model from sklearn import mixture gmm = mixture.GaussianMixture(n_components = 5, covariance_type =", "= mixture.GaussianMixture(n_components = 5, covariance_type = 'diag').fit(data) tVal = range(optimal_result['loglik_curve'].shape[0]) testing_data = np.vstack([np.array(tVal),", "\"/model_s%s.pkl\"%(1,)) except IOError: print 'anomaly model of %s not found'%(fo,) continue # one-folder", "os.path.join(training_config.anomaly_model_save_path, optimal_result['model_label'], training_config.config_by_user['data_type_chosen'], training_config.config_by_user['model_type_chosen'], training_config.model_id) if CONF_TYPE == 'culmulative_loglik_divide_by_the_culmulative_mean_loglik': c_value = 5 all_log_curves_of_this_state", "'predict_proba' : one_predict_proba_of_this_state, # 'hidden_stateSeq' : one_hidden_stateSeq_of_this_state }) ''' #--plot c = next(color)", "generate_synthetic_data.run_finite_differece_matrix(df=df, num_data = 5, csv_save_path=unknown_anomaly_path, trial_name) ''' #--plot ''' for no_trial in range(len(all_log_curves_of_this_state)):", "_items.index(row) c = _items.index(col) _matrix[r, c] = confuse_matrix[row].count(col) print 'print the confusion matrix...'", "= plt.figure() ax = fig.add_subplot(111) from matplotlib.pyplot import cm color = iter(cm.rainbow(np.linspace(0, 1,", "confidence += entropy(testing_predict_proba[iObs,:], average_predict_proba[1][iObs,:]) return None, None, confidence elif CONF_TYPE == 'hamming_distance_of_hidden_state_sequence': #", "continue # one-folder confuse_matrix = {} folders = os.listdir(anomaly_data_path_for_testing) for fo in folders:", "for icurve in range(len(all_log_curves_of_this_state)): tVal = range(len(all_log_curves_of_this_state[icurve])) feature = all_log_curves_of_this_state[icurve] data_points = np.vstack([np.array(tVal),", "_name) os.makedirs(unknown_anomaly_path) print 'generated a new anomaly:' + _name print '*\\n'*5 print 'synthetic", "= 'diag').fit(data) tVal = 
range(optimal_result['loglik_curve'].shape[0]) testing_data = np.vstack([np.array(tVal), optimal_result['loglik_curve']]).T confidence = gmm.score(testing_data) return", "linestyle='--', color='gold', label='threshold') ax.legend(loc='upper left') ax.text(20,optimal_result['culmulative_loglik']/2, optimal_result['model_label'] + ': ' + str(confidence), ha", "range(len(all_log_curves_of_this_state)): tVal = range(len(all_log_curves_of_this_state[icurve])) feature = all_log_curves_of_this_state[icurve] data_points = np.vstack([np.array(tVal), feature]).T if icurve", "import numpy as np from sklearn.externals import joblib from matplotlib import pyplot as", "deleted one data point totalLen = len(hidden_stateSeq) testing_stateSeq = optimal_result['hidden_stateSeq'] testing_stateSeq = np.append(testing_stateSeq,", "as plt import util import training_config import pandas as pd import random import", "confidence = get_confidence_of_identification(optimal_result) ''' if confidence < 0.0: df = pd.DataFrame(anomaly_testing_group_by_folder_name[trial_name][1], columns=training_config.interested_data_fields) id", "get_confidence_of_identification(optimal_result): confidence_metric = ['culmulative_loglik_divide_by_the_culmulative_mean_loglik', 'posterior_of_gmm_model', 'calc_kullback_leibler_divergence_of_predict_proba', 'hamming_distance_of_hidden_state_sequence', ] CONF_TYPE = confidence_metric[0] anomaly_model_path =", "row in _items: for col in _items: r = _items.index(row) c = _items.index(col)", "if not os.path.isdir(path): continue anomaly_model_path = os.path.join(training_config.anomaly_model_save_path, fo, training_config.config_by_user['data_type_chosen'], training_config.config_by_user['model_type_chosen'], training_config.model_id) try: anomaly_model_group_by_label[fo]", "\":\" + trial_name + \".jpg\"), format=\"jpg\") # fig.show(1) # raw_input('testing another trial?? 
Please", "_matrix = np.identity(len(_items)) for row in _items: for col in _items: r =", "sklearn.externals import joblib from matplotlib import pyplot as plt import util import training_config", "df = pd.DataFrame(anomaly_testing_group_by_folder_name[trial_name][1], columns=training_config.interested_data_fields) id = random.randint(1000,10000) _name = 'unknown_anomaly_' + str(id) unknown_anomaly_path", "= confuse_matrix.keys() _matrix = np.identity(len(_items)) for row in _items: for col in _items:", "confidence_metric = ['culmulative_loglik_divide_by_the_culmulative_mean_loglik', 'posterior_of_gmm_model', 'calc_kullback_leibler_divergence_of_predict_proba', 'hamming_distance_of_hidden_state_sequence', ] CONF_TYPE = confidence_metric[0] anomaly_model_path = os.path.join(training_config.anomaly_model_save_path,", "implemented this # one_hidden_stateSeq_of_this_state = anomaly_model_group_by_label[model_label].decode(anomaly_testing_group_by_folder_name[trial_name][1],len(anomaly_testing_group_by_folder_name[trial_name][1])-1) calc_cofidence_resourse.append({ 'model_label' : model_label, 'culmulative_loglik': one_log_curve_of_this_model[-1], 'loglik_curve'", "= os.path.join(training_config.anomaly_model_save_path, optimal_result['model_label'], training_config.config_by_user['data_type_chosen'], training_config.config_by_user['model_type_chosen'], training_config.model_id) if CONF_TYPE == 'culmulative_loglik_divide_by_the_culmulative_mean_loglik': c_value = 5", "confidence += hamming(testing_stateSeq, hidden_stateSeq[iTrial,:]) return None, None, confidence else: print (\"without the confidence_metric", "ec=(1., 0.6, 0.6), fc=(1., 0.9, 0.9),) ) if not os.path.isdir(figure_save_path + '/anomaly_identification_plot'): os.makedirs(figure_save_path", "os.path.isdir(path): continue anomaly_testing_group_by_folder_name = util.get_anomaly_data_for_labelled_case(training_config, data_path) # one-file for trial_name in anomaly_testing_group_by_folder_name: '''", 
"optimal_result['loglik_curve']]).T confidence = gmm.score(testing_data) return all_log_curves_of_this_state, None, confidence elif CONF_TYPE == 'calc_kullback_leibler_divergence_of_predict_proba': print", "anomaly data 3. plot the log-likelihood wrt each model and plot in a", "anomaly_testing_group_by_folder_name: ''' #plot fig = plt.figure() ax = fig.add_subplot(111) from matplotlib.pyplot import cm", "+ str(confidence), ha = 'center', va = 'baseline', bbox=dict(boxstyle=\"round\", ec=(1., 0.6, 0.6), fc=(1.,", "[] for model_label in anomaly_model_group_by_label: one_log_curve_of_this_model = util.fast_log_curve_calculation( anomaly_testing_group_by_folder_name[trial_name][1], anomaly_model_group_by_label[model_label]) # one_predict_proba_of_this_state =", "continue') ''' print 'Finish testing: '+ fo + '\\n' confuse_matrix[fo] = predict_class _items", "0 for iTrial in range(totalLen/tLen): confidence += hamming(testing_stateSeq, hidden_stateSeq[iTrial,:]) return None, None, confidence", "pd.DataFrame(anomaly_testing_group_by_folder_name[trial_name][1], columns=training_config.interested_data_fields) id = random.randint(1000,10000) _name = 'unknown_anomaly_' + str(id) unknown_anomaly_path = os.path.join(training_config.anomaly_data_path,", "data generation' import generate_synthetic_data generate_synthetic_data.run_finite_differece_matrix(df=df, num_data = 5, csv_save_path=unknown_anomaly_path, trial_name) ''' #--plot '''", "os import numpy as np from sklearn.externals import joblib from matplotlib import pyplot", "1. load all the anomalous trained models 2. 
load testing anomaly data 3.", "all_log_curves_of_this_state[icurve] data_points = np.vstack([np.array(tVal), feature]).T if icurve == 0: data = data_points else:", "= range(optimal_result['loglik_curve'].shape[0]) testing_data = np.vstack([np.array(tVal), optimal_result['loglik_curve']]).T confidence = gmm.score(testing_data) return all_log_curves_of_this_state, None, confidence", "return all_log_curves_of_this_state, None, confidence elif CONF_TYPE == 'calc_kullback_leibler_divergence_of_predict_proba': print 'obsoleted' pass from scipy.stats", "path = os.path.join(training_config.anomaly_data_path, fo) if not os.path.isdir(path): continue anomaly_model_path = os.path.join(training_config.anomaly_model_save_path, fo, training_config.config_by_user['data_type_chosen'],", "import mixture gmm = mixture.GaussianMixture(n_components = 5, covariance_type = 'diag').fit(data) tVal = range(optimal_result['loglik_curve'].shape[0])", "= len(testing_stateSeq) hidden_stateSeq = hidden_stateSeq.reshape(totalLen/tLen, tLen) from scipy.spatial.distance import hamming confidence = 0", "anomaly_model_group_by_label[model_label]) # one_predict_proba_of_this_state = anomaly_model_group_by_label[model_label].predict_proba(anomaly_testing_group_by_folder_name[trial_name][1]) # HDPHSMM haven't implemented this # one_hidden_stateSeq_of_this_state =", ": one_hidden_stateSeq_of_this_state }) ''' #--plot c = next(color) plot_line, = ax.plot(one_log_curve_of_this_model, linestyle=\"solid\", color", "calculate the probability of testing sequence all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl')) data = np.ndarray([])", "# one-folder confuse_matrix = {} folders = os.listdir(anomaly_data_path_for_testing) for fo in folders: predict_class", "predict_class = [] data_path = os.path.join(anomaly_data_path_for_testing, fo) if not os.path.isdir(path): continue anomaly_testing_group_by_folder_name =", "anomaly:' + _name print '*\\n'*5 
print 'synthetic data generation' import generate_synthetic_data generate_synthetic_data.run_finite_differece_matrix(df=df, num_data", "''' if confidence < 0.0: df = pd.DataFrame(anomaly_testing_group_by_folder_name[trial_name][1], columns=training_config.interested_data_fields) id = random.randint(1000,10000) _name", "iObs in range(len(testing_predict_proba)): confidence += entropy(testing_predict_proba[iObs,:], average_predict_proba[1][iObs,:]) return None, None, confidence elif CONF_TYPE", "= sorted_list[-1] classified_model = optimal_result['model_label'] predict_class.append(classified_model) all_log_curves_of_this_state, threshold, confidence = get_confidence_of_identification(optimal_result) ''' if", "print '*\\n'*5 print 'synthetic data generation' import generate_synthetic_data generate_synthetic_data.run_finite_differece_matrix(df=df, num_data = 5, csv_save_path=unknown_anomaly_path,", "format=\"jpg\") # fig.show(1) # raw_input('testing another trial?? Please any key to continue') '''", "key to continue') ''' print 'Finish testing: '+ fo + '\\n' confuse_matrix[fo] =", "0.0: df = pd.DataFrame(anomaly_testing_group_by_folder_name[trial_name][1], columns=training_config.interested_data_fields) id = random.randint(1000,10000) _name = 'unknown_anomaly_' + str(id)", "color = c) plot_line.set_label(model_label) title = ('Anomaly_identification for ' + fo) ax.set_title(title) '''", "= os.path.join(training_config.anomaly_data_path, _name) os.makedirs(unknown_anomaly_path) print 'generated a new anomaly:' + _name print '*\\n'*5", "= np.identity(len(_items)) for row in _items: for col in _items: r = _items.index(row)", "title = ('Anomaly_identification for ' + fo) ax.set_title(title) ''' sorted_list = sorted(calc_cofidence_resourse, key=lambda", "= os.path.join(training_config.anomaly_data_path, fo) if not os.path.isdir(path): continue anomaly_model_path = os.path.join(training_config.anomaly_model_save_path, fo, training_config.config_by_user['data_type_chosen'], 
training_config.config_by_user['model_type_chosen'],", "confidence = gmm.score(testing_data) return all_log_curves_of_this_state, None, confidence elif CONF_TYPE == 'calc_kullback_leibler_divergence_of_predict_proba': print 'obsoleted'", "''' #--plot ''' for no_trial in range(len(all_log_curves_of_this_state)): ax.plot(all_log_curves_of_this_state[no_trial], linestyle= '--', color = 'gray',", "optimal_result['hidden_stateSeq'] testing_stateSeq = np.append(testing_stateSeq, [testing_stateSeq[-1]]) tLen = len(testing_stateSeq) hidden_stateSeq = hidden_stateSeq.reshape(totalLen/tLen, tLen) from", "IOError: print 'anomaly model of %s not found'%(fo,) continue # one-folder confuse_matrix =", "anomaly models anomaly_model_group_by_label = {} folders = os.listdir(training_config.anomaly_model_save_path) for fo in folders: path", "import pandas as pd import random import ipdb def run(anomaly_data_path_for_testing, model_save_path, figure_save_path,): '''", "testing sequence all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl')) data = np.ndarray([]) for icurve in range(len(all_log_curves_of_this_state)):", "sorted_list[-1] classified_model = optimal_result['model_label'] predict_class.append(classified_model) all_log_curves_of_this_state, threshold, confidence = get_confidence_of_identification(optimal_result) ''' if confidence", "if CONF_TYPE == 'culmulative_loglik_divide_by_the_culmulative_mean_loglik': c_value = 5 all_log_curves_of_this_state = joblib.load(os.path.join(anomaly_model_path, 'all_log_curves_of_this_state.pkl')) std_of_log_likelihood =", "= hidden_stateSeq.reshape(totalLen/tLen, tLen) from scipy.spatial.distance import hamming confidence = 0 for iTrial in", "one_log_curve_of_this_model = util.fast_log_curve_calculation( anomaly_testing_group_by_folder_name[trial_name][1], anomaly_model_group_by_label[model_label]) # one_predict_proba_of_this_state = 
anomaly_model_group_by_label[model_label].predict_proba(anomaly_testing_group_by_folder_name[trial_name][1]) # HDPHSMM haven't implemented", "= [] data_path = os.path.join(anomaly_data_path_for_testing, fo) if not os.path.isdir(path): continue anomaly_testing_group_by_folder_name = util.get_anomaly_data_for_labelled_case(training_config,", "= 5, csv_save_path=unknown_anomaly_path, trial_name) ''' #--plot ''' for no_trial in range(len(all_log_curves_of_this_state)): ax.plot(all_log_curves_of_this_state[no_trial], linestyle=", "one_log_curve_of_this_model[-1], 'loglik_curve' : one_log_curve_of_this_model, # 'predict_proba' : one_predict_proba_of_this_state, # 'hidden_stateSeq' : one_hidden_stateSeq_of_this_state })", "= {} folders = os.listdir(anomaly_data_path_for_testing) for fo in folders: predict_class = [] data_path", "+ \".jpg\"), format=\"jpg\") # fig.show(1) # raw_input('testing another trial?? Please any key to", "+ \"/model_s%s.pkl\"%(1,)) except IOError: print 'anomaly model of %s not found'%(fo,) continue #", "['culmulative_loglik_divide_by_the_culmulative_mean_loglik', 'posterior_of_gmm_model', 'calc_kullback_leibler_divergence_of_predict_proba', 'hamming_distance_of_hidden_state_sequence', ] CONF_TYPE = confidence_metric[0] anomaly_model_path = os.path.join(training_config.anomaly_model_save_path, optimal_result['model_label'], training_config.config_by_user['data_type_chosen'],", "= 'center', va = 'baseline', bbox=dict(boxstyle=\"round\", ec=(1., 0.6, 0.6), fc=(1., 0.9, 0.9),) )", "plot_line, = ax.plot(one_log_curve_of_this_model, linestyle=\"solid\", color = c) plot_line.set_label(model_label) title = ('Anomaly_identification for '", "import os import numpy as np from sklearn.externals import joblib from matplotlib import", "try: anomaly_model_group_by_label[fo] = joblib.load(anomaly_model_path + \"/model_s%s.pkl\"%(1,)) except IOError: print 'anomaly model of %s", "data_points]) # fit a gmm model from sklearn import mixture gmm = 
mixture.GaussianMixture(n_components" ]
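The default confidence metric above thresholds the test trial's cumulative log-likelihood against the mean training log-likelihood curve minus a standard-deviation margin. A self-contained Python 3 sketch of that idea; unlike the original, which loads a precomputed std pickle, this recomputes the per-time std from the training curves (an assumption), and the function name is illustrative:

```python
import numpy as np

def loglik_confidence(training_curves, test_cumulative_loglik, n_std=1.0):
    """Confidence = test trial's final log-likelihood minus the threshold
    (mean training log-likelihood curve - n_std * per-time std) at the last step.
    Positive => the trial looks like the identified anomaly class."""
    curves = np.asarray(training_curves, dtype=float)        # shape (trials, T)
    threshold = curves.mean(axis=0) - n_std * curves.std(axis=0)
    return test_cumulative_loglik - threshold[-1]

curves = [[0.0, -1.0, -2.0],
          [0.0, -3.0, -4.0]]
# mean[-1] = -3.0, std[-1] = 1.0  ->  threshold[-1] = -4.0
```

So a test trial ending at log-likelihood -2.0 gets confidence 2.0, while one ending below -4.0 would be flagged (negative confidence), matching the `confidence < 0.0` branch in `run()`.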
# matador/orm/__init__.py
# coding: utf-8
# Distributed under the terms of the MIT License.

__all__ = ["DataContainer"]

__author__ = '<NAME>'
__maintainer__ = '<NAME>'

from .orm import DataContainer
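The `__all__ = ["DataContainer"]` line controls which names a star-import re-exports from the package. A small sketch using an in-memory stand-in module (the `fake_orm` name and `_internal` attribute are illustrative, not part of matador):

```python
import sys
import types

# Fake in-memory module standing in for the package's submodule.
mod = types.ModuleType("fake_orm")
mod.DataContainer = type("DataContainer", (), {})
mod._internal = object()
mod.__all__ = ["DataContainer"]      # star-imports expose only this name
sys.modules["fake_orm"] = mod

ns = {}
exec("from fake_orm import *", ns)   # only DataContainer lands in ns
```

Without `__all__`, a star-import would fall back to every public (non-underscore) attribute; listing names explicitly keeps the package's API surface deliberate.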
# Fragment of a separate floorplan/overlay tracking module. Only the class and
# method skeletons below are recoverable from this chunk; bodies marked `...`
# are lost.
import numpy as np


class Overlay:
    "Per-floor data layer holding an exposure window of observation frames."

    def __init__(self, floorid, overlay_dimensions, real_dimensions, mask, exposure):
        assert exposure > 0
        self.floorid = floorid
        self.overlay_dimensions = overlay_dimensions
        self.real_dimensions = real_dimensions
        self.exposure = exposure
        # Exposure queue, shape (exp, x, y)
        self.__unfixed_observations = np.zeros(exposure)
        self.__masked_dataoverlay = np.zeros((exposure,) + overlay_dimensions, dtype="float32")
        self.__unmasked_dataoverlay = np.zeros((exposure,) + overlay_dimensions, dtype="float32")

    def copy(self, flatten=False):
        """Returns a copy of the layer. Flatten squashes the exposure window
        into 1 frame. For more info, see Overlay.get_delta."""
        ...

    def clear(self):
        ...


class FloorPlanWrapper:  # class name not recoverable; docstring and __init__ are attested
    "Wrapper class for the floorplan object including model data"

    def __init__(self, floorplan):
        self.floorplan = floorplan


class Model:
    # Configuration store keys
    STORE_WEBHOOK = "webhooklist"
    STORE_SELECTED = "selectednet"
    STORE_LAYERS = "layers"
    STORE_BMBOXES = ...  # value not recoverable
    STORE_BDENABLED = "bd_enabled"
    STORE_SECRET = "sapisecret"
    STORE_TOKEN = "<PASSWORD>_token"
    STORE_PASSWORD = "<PASSWORD>"
    STORE_WHTHRESHOLD = "webhook_threshold"

    def update_model_config(self, netid, conf_dict):
        ...

    def set_observations(self, observations):
        "Set the layer to contain the passed observations, clearing previous ones."
        ...

    def __validate_scanning(self, SAPI_packet):
        "Scanning API (SAPI) packet validation."
        if type(SAPI_packet) != dict:
            raise TypeError("JSON parsed ...")
config file\") class TimeSlotAvg: class TimeSlotAvgException( Exception ): pass DATA_DIR =", "in [ len(spot) == 4 for spot in blindspots ]: raise ValueError(\"Invalid format", "data = self.get_delta(masked,exposure) window = self.exposure if exposure == 0 else exposure #", "placeable.mask_override else: if placeable.variance < Model.VARIANCE_THRESHOLD: # Calculated minimum reach # Account for", "retreived floor plan IDs and names\" return { k : v.floorplan.name for k,v", "2 LAYER_MVSENSE = 3 LAYERS_ALL = {LAYER_SNAP_WIFI, LAYER_SNAP_BT, LAYER_MVSENSE} CONFIG_PATH = os.path.join('model.conf') CELL_SIZE_M", "here, the search hasn't found any near enough to call near # Ignore", "Model.ModelException(Model.__BAD_LAYER.format(api_layer_val)) def __generate_person_obs(self) -> dict: \"Indexes observed person objects with an arbitrary key\"", "terms of absolute delta from mean\" datamap = self.comp_historical(floorPlanId) return self.plans[floorPlanId].render_overlay(datamap) def render_abs(self,floorPlanId:str)->Image:", "it is valid to update with the current model # For each layer", "STORE_BMBOXES = \"bm_boxes\" STORE_BDENABLED = \"bd_enabled\" STORE_SECRET = \"sapisecret\" STORE_TOKEN = \"<PASSWORD>_token\" STORE_PASSWORD", "on masked region imarr.paste((0,0,0,0),mask) elif pixelmask: print(\"Error: Could not filter by pixel mask", "called when a Floor is added, removed, or altered, including by set_bounds_mask This", "an overlay missing, print info, create new If an overlay is extra, do", "does not change existing data, only new observations self.mask == floor.mask class Model:", "use Overlay.add\" assert len(self.__unfixed_observations.shape) == len(unfixed_count.shape) assert len(self.__masked_dataoverlay.shape) == len(masked_overlay.shape) assert len(self.__unmasked_dataoverlay.shape) ==", "in FOVcams: fov = cam.get_FOV() mindist = 9999999 for x,row in enumerate(fov): for", "\"Clear all member overlays of observation data\" for over in self.overlays.values(): 
over.clear() def", "an average model for a timeslot using the current model data, if valid", "fpid, boxes in conf_dict.get(Model.STORE_BMBOXES, dict()).items(): on = conf_dict.get(Model.STORE_BDENABLED,{fpid:False})[fpid] self.setBoundsMask(fpid, on, boxes) def serialize(self):", "!= None: for spot in blindspots: bd.add_blindspot(*tuple(spot)) bd.run() return bd.generate_graphic() def render_overlay(self,overlay:np.ndarray,pixelmask:bool=True): \"\"\"", "be defined in enviroment variable \\\"MERAKI_DASHBOARD_API_KEY\\\"\" self.read_config_data() self.write_config_data() def populate(self,layers:set): assert isinstance(self.query_obj, APIQuery)", "-> str: \"Find a floorPlanId from the floor name\" for k in self.plans:", "fid, overlay in self.overlays.items(): overlay.roll() obs = bins.get(fid) if obs != None: overlay.add(obs)", "clusters = np.zeros(dims, dtype=\"float32\") for x in range(len(clusters)): for y in range(len(clusters[0])): clusters[x,y]", "busiest_location = (3*(x+0.5), 3*(y+0.5)) return {'spike':busiest > threshhold, 'location':busiest_location} def nearestCameras(self, n:int, floor:Floor,spikeDict:dict)->tuple:", "the floor that people cannot possibly be or are to be ignored eg", "= { id : dict() for id in self.overlays.keys() } for id, placeable", "with the current Model. 
If layers or overlays are not represented in TSA,", "if dimensions do not match \"\"\" for l_id in set(data_layers.keys()).difference(self.data_layers.keys()): # For layers", "+ self.margin_px[0], overlay.shape[0] + 1 )).astype(\"int32\") iy_points = np.floor(np.linspace( 0,imarr.shape[1] + self.margin_px[1], overlay.shape[1]", "Layers def update_layers(self)->None: \"\"\" Must be called when a Floor is added, removed,", "passing bd = BoundaryDetector( self.floorplan.get_image(), threshold=wallthreshold ) if blindspots != None: for spot", "compressed file\" filepath = os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(self.day, self.hour)) if not os.path.exists(TimeSlotAvg.DATA_DIR): os.makedirs(TimeSlotAvg.DATA_DIR) with bz2.BZ2File(filepath, 'wb')", "imarr[ixs:ixe,iys:iye,3].astype(np.float64) * alpha ).astype(np.uint8) imarr = Image.fromarray(imarr,\"RGBA\").filter( ImageFilter.BoxBlur( BLUR_CELLS * destination.size[0] / overlay.shape[1]", "parameter. Should be of type int or float, got {}\".format(str(type(wallthreshold)))) if floor_id not", "um_possible[x,y] = d <= placeable.variance # m(asked)_possible is a copy of the u(n)m(asked)_possible's", "not found\".format(mac)) else: raise Model.ModelException(\"Model not configured for LAYER_MVSENSE\") def pull_mvsense_data(self): \"Pull live", "\"Static factory method; Load TimeSlotAvg object from compressed pickle file or create new\"", "!= self.secret: raise Model.BadRequest(\"Request has bad authentication secret - rejecting data\") except KeyError", "and dividing by new count upd_um_overlay = ( avg_um_overlay * c + new_um_overlay", "layers: if layer not in Model.LAYERS_ALL: raise Model.ModelException(Model.__BAD_LAYER.format(layer)) self.data_layers[layer] = Layer(self.plans, Model.DEFAULT_EXPOSURE) self.timeslot", "== len(unmasked_overlay.shape) self.__unfixed_observations = unfixed_count self.__masked_dataoverlay = masked_overlay self.__unmasked_dataoverlay = unmasked_overlay def roll(self)", 
"self.__masked_dataoverlay = masked_overlay self.__unmasked_dataoverlay = unmasked_overlay def roll(self) -> None: \"Roll the exposure,", "on = conf_dict.get(Model.STORE_BDENABLED,{fpid:False})[fpid] self.setBoundsMask(fpid, on, boxes) def serialize(self): conf = dict() conf[Model.STORE_SECRET] =", "= self.password conf[Model.STORE_FOVCOORDS] = { cam.mac: cam.get_fov_coords() for cam in self.query_obj.cameras.values() } conf[Model.STORE_BMBOXES]", "not configured for LAYER_MVSENSE\") def pull_mvsense_data(self): \"Pull live MVSense data from cameras and", "layer: a series of overlays covering all floorplans with data from a single", "open( self.CONFIG_PATH, 'wb' ) as f: pickle.dump(config_data, f) def read_config_data(self): if os.path.isfile(Model.CONFIG_PATH): with", "0.35 for layer in layers: if layer not in Model.LAYERS_ALL: raise Model.ModelException(Model.__BAD_LAYER.format(layer)) self.data_layers[layer]", "on, boxes) def serialize(self): conf = dict() conf[Model.STORE_SECRET] = self.secret conf[Model.STORE_TOKEN] = self.validator_token", "dict() for l_id, layer in data_layers.items(): # Copy the layer structure but clear", "theres a DIV0 here, the search hasn't found any near enough to call", "self.data_layers.items() } @staticmethod def get_time()->tuple: \"Get the current time values needed for reading", "passed. 
For more details on exposure, see Overlay.get_delta \"\"\" window = self.exposure if", "layer in self.data_layers.items(): # Add any missing overlays layer.verify_and_update(floors) for l_id in data_layers.keys():", "f: pickle.dump(config_data, f) def read_config_data(self): if os.path.isfile(Model.CONFIG_PATH): with open( Model.CONFIG_PATH, 'rb' ) as", "of WiFi layer rendered on the floor plan\" return self.plans[floorPlanId].render_overlay(self.data_layers[Model.LAYER_SNAP_WIFI].overlays[floorPlanId].get_delta(exposure=1)) def debug_render(self,fpid)->Image: import", "clusters[x][y] > busiest: busiest = clusters[x][y] busiest_location = (3*(x+0.5), 3*(y+0.5)) return {'spike':busiest >", "a new frame of data\" self.__masked_dataoverlay = np.roll(self.__masked_dataoverlay, 1, axis=0) self.__unmasked_dataoverlay = np.roll(self.__unmasked_dataoverlay,", "of dimention mismatch. If mask is outdated, update - note this does not", "{'spike':busiest > threshhold, 'location':busiest_location} def nearestCameras(self, n:int, floor:Floor,spikeDict:dict)->tuple: #returns a list of camera", "of the delta overlay (only fixed observations). 
If masked, will select data masked", "one mask over the other) # Account for margin between edge of floorplan", "self.data_layers[Model.LAYER_MVSENSE].set_observations(observations) def spike(self, layer, threshhold)->dict: #add floorplan ID into params, camera/wifi/bluetoothall into params?", "* self.floorplan.px_per_m_w ) self.mask_enabled = False self.pixelmask = None self.mask = np.ones(self.overlay_dimensions,dtype=np.bool_) self.aps", "not get network from config file\") class TimeSlotAvg: class TimeSlotAvgException( Exception ): pass", "placeable else: unfixed[id] = placeable self.__add_fixed_locations(fixed) self.__add_unfixed_locations(unfixed) def __add_fixed_locations(self,fixed:dict) -> None: for placeable", "upd_unfixed_obs[None,], upd_m_overlay[None,], upd_um_overlay[None,] ) # Update the count self.count[l_key][o_key] += 1 #self.write() else:", "/ overlay.shape[1] ) ) if isinstance(self.pixelmask, np.ndarray) and pixelmask: # Tidy the edges", "evenly across the floorplan (or mask) mask = self.mask if masked else np.ones(self.overlay_dimensions,dtype=np.bool_)", "fp.mask_enabled for fpid,fp in self.plans.items() } return conf def write_config_data(self): config_data = {Model.STORE_SELECTED:self.network_id}", "Model.LAYER_*)\" class ModelException(Exception): pass class BadRequest(Exception): pass def __init__(self,network_id:str=None,layers:set={}): \"Initialise model. 
API key", "we get laying one mask over the other) # Account for margin between", "to the timeslots model, promoting as exposure of historicals = 1 self.data_layers[l_key].overlays[o_key].set( upd_unfixed_obs[None,],", "chunk divisions (what coords do we get laying one mask over the other)", "see Overlay.get_delta \"\"\" data = self.get_delta(masked,exposure) window = self.exposure if exposure == 0", "None: \"Updates an average model for a timeslot using the current model data,", "iys = iy_points[my] iye = iy_points[my+1] imarr[ixs:ixe,iys:iye] = POS if pos else NEG", "available frames, 1 gives only the latest frame (no smoothing). \"\"\" window =", "info, create new If an overlay is extra, do nothing \"\"\" for fpid", "datamap. Pass len(iterable)==0 to unset mask \"\"\" #Check if Layer exists if Model.LAYER_MVSENSE", "datamap in terms of absolute delta from mean\" datamap = self.comp_historical(floorPlanId) return self.plans[floorPlanId].render_overlay(datamap)", "cp.__unmasked_dataoverlay[0] = self.__unmasked_dataoverlay.mean(axis=0) else: cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, self.exposure) cp.__unfixed_observations =", "else: if placeable.variance < Model.VARIANCE_THRESHOLD: # Calculated minimum reach # Account for change", "of stored exposure, default (0) combines all available frames, 1 gives only the", "not current\" if not (self.timeslot.is_current_time()): self.timeslot.write() self.timeslot = TimeSlotAvg.load( self.data_layers, self.plans ) self.timeslot.verify_and_update_struct(", "to contain passed observations, clearing any previous. Pass floor object dictionary\" bins =", "iy_divs = np.floor(np.linspace( 0, iy+self.margin_px[1], my+1 )).astype(\"int32\") for x in range(mx): #Top, bottom", "into 1 frame. 
\"\"\" if flatten: cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, 1)", "CELL_SIZE_M = 1 DOWNSAMPLE_THRESHOLD = 0.5 VARIANCE_THRESHOLD = np.hypot( *( 2*(CELL_SIZE_M / 2,)", "self.update_model_config(selected_id,select_data) else: print(\"Warning: config file not found\") try: self.update_model_config(None, {Model.STORE_LAYERS: Model.LAYERS_ALL} ) except", "more info see Floor.set_bounds_mask \"\"\" if wallthreshold == None: bd = BoundaryDetector( self.floorplan.get_image()", "cover # Not covered as only parameter passing bd = BoundaryDetector( self.floorplan.get_image(), threshold=wallthreshold", "model with SAPI data\" # Raise a racket if theres something wrong self.__validate_scanning(SAPI_packet)", "Overlay object indexed by each layer stored\" return { layer_id: layer.overlays[fpid] for layer_id,", "of type int or float, got {}\".format(str(type(wallthreshold)))) if floor_id not in self.plans.keys(): raise", "/ m_possible.sum() def __add_unfixed_locations(self,unfixed:dict) -> None: self.__unfixed_observations[0] += len(unfixed) def get_delta(self, masked:bool=True, exposure:int=0)->np.ndarray:", "return self.plans def getFloorplanSummary(self) -> dict: \"Get a dict of retreived floor plan", "overlay is compatible with passed floor, updates mask\" if self.overlay_dimensions != floor.overlay_dimensions or", "list or tuple\") elif False in [ len(spot) == 4 for spot in", "= np.roll(self.__unmasked_dataoverlay, 1, axis=0) self.__unfixed_observations = np.roll(self.__unfixed_observations, 1, axis=0) self.__masked_dataoverlay[0] = 0 self.__unmasked_dataoverlay[0]", "cam in sorted(distances.items(),key=lambda x: x[1])[:n] ] return (\"Best Effort\", top_n) def getCameraImage(self, camera)", ": Floor(fp) for id,fp in floorplans.items() } return self.plans def getFloorplanSummary(self) -> dict:", "all floorplans with data from a single source type\" def __init__(self,floorplans:dict,exposure:int): self.exposure =", "dictionary\" bins 
= { id : dict() for id in self.overlays.keys() } for", "\"\"\" ly = Layer({}, 1 if flatten else self.exposure ) ly.overlays = {", "day self.hour = hour self.data_layers = dict() self.count = dict() for l_id, layer", "more details on exposure, see Overlay.get_delta \"\"\" window = self.exposure if exposure ==", "mask level #Assuming cell >> pixel #Mask (1msq-scale, small-dims) dims mx = self.mask.shape[0]", "def get_full(self, masked:bool=True, exposure:int=0)->np.ndarray: #pragma: no cover \"\"\" Returns a copy of the", "to 1 self.count[l_id] = { ov_id:self.count[l_id].get(ov_id,1) for ov_id in data_layers[l_id].overlays.keys() } def sha256(inpt:str)", "def __init__(self, floorid:str, overlay_dimensions:tuple, real_dimensions:tuple, floormask:np.ndarray, exposure:int): self.floorid = floorid #self.observations = dict()", "frame of data\" self.__masked_dataoverlay = np.roll(self.__masked_dataoverlay, 1, axis=0) self.__unmasked_dataoverlay = np.roll(self.__unmasked_dataoverlay, 1, axis=0)", "in enumerate(cameras): response = self.getCameraImage(cam) POST_data[fpid][\"camera_data_\" + str(i)] = response if POST_data !=", "ImageFilter import bz2 import pickle import datetime import requests import hashlib parentddir =", "fp = floorplans[fpid] self.overlays[fpid] = Overlay(fpid, fp.overlay_dimensions, fp.floorplan_dimensions, fp.mask, self.exposure) for fpid, floor", "for blindspots parameter. 
Should be of shape (n,4), got {}\".format(str(blindspots))) if wallthreshold ==", "= mindist for cam in nonFOVcams: distances[cam] = np.hypot( cam.x-event[0], cam.y-event[1] ) top_n", "mask an Image, mode=L mask = Image.fromarray(255*self.pixelmask.astype(np.uint8),\"L\") # Paste alpha on masked region", "created for Layer ID {}\".format(l_id)) for l_id, layer in self.data_layers.items(): # Add any", "self.overlays.values(): over.clear() def verify_and_update(self, floorplans:dict): \"\"\" Ensure overlays are compatible with current floorplans", "np.floor(np.linspace( 0,imarr.shape[1] + self.margin_px[1], overlay.shape[1] + 1 )).astype(\"int32\") for mx in range(overlay.shape[0]): #", "See Overlay.get_full \"\"\" # Not covered as not required for current scope return", "Calculated minimum reach # Account for change of axis um_possible[-int(placeable.y/Model.CELL_SIZE_M),int(placeable.x/Model.CELL_SIZE_M)] = 1 else:", "iy = self.pixelmask.shape[1] #Image-scale chunk divisions (what coords do we get laying one", "l_id in data_layers.keys(): # Get count if exists, else set to 1 self.count[l_id]", "self.floorplan = floorplan # Dimentions should default to (height, width) self.floorplan_dimensions = (self.floorplan.height,self.floorplan.width)", "exposure) for _id, over in self.overlays.items() } def get_full(self, masked:bool=True, exposure:int=0) -> dict:", "LAYER_SNAP_BT, LAYER_MVSENSE} CONFIG_PATH = os.path.join('model.conf') CELL_SIZE_M = 1 DOWNSAMPLE_THRESHOLD = 0.5 VARIANCE_THRESHOLD =", "Overlay.get_delta \"\"\" data = self.get_delta(masked,exposure) window = self.exposure if exposure == 0 else", "#Downsample from pixel level to mask level #Assuming cell >> pixel #Mask (1msq-scale,", "{ ov_id:self.count[l_id].get(ov_id,1) for ov_id in data_layers[l_id].overlays.keys() } def sha256(inpt:str) -> str: m =", "self.plans[cam.floorPlanId].overlay_dimensions cam.set_FOV(shape,coords) else: raise ValueError except (ValueError, TypeError) as err: raise 
err.__class__(\"Coordinates supplied", "mask (mini-scale, big-dims) dims ix = self.pixelmask.shape[0] iy = self.pixelmask.shape[1] #Image-scale chunk divisions", "overlay of a single floorplan\" def __init__(self, floorid:str, overlay_dimensions:tuple, real_dimensions:tuple, floormask:np.ndarray, exposure:int): self.floorid", "effect existing data, only new observations If an overlay missing, print info, create", "self.pixelmask[it:ib,il:ir].mean() < Model.DOWNSAMPLE_THRESHOLD def calc_bounds_mask(self,blindspots=None,wallthreshold:float=None) -> Image.Image: \"\"\" Generate a preview of a", "0, imarr.shape[0] + self.margin_px[0], overlay.shape[0] + 1 )).astype(\"int32\") iy_points = np.floor(np.linspace( 0,imarr.shape[1] +", "list of the form [[x1,x2,y1,y2],...]\" if blindspots==None: pass elif not (isinstance(blindspots,(np.ndarray,list,tuple))): raise TypeError(\"Invalid", "defined layer (eg Model.LAYER_*)\" class ModelException(Exception): pass class BadRequest(Exception): pass def __init__(self,network_id:str=None,layers:set={}): \"Initialise", "np.roll(self.__unfixed_observations, 1, axis=0) self.__masked_dataoverlay[0] = 0 self.__unmasked_dataoverlay[0] = 0 self.__unfixed_observations[0] = 0 def", "overlay missing, print info, create new If an overlay is extra, do nothing", "the first n frames of stored exposure, default (0) combines all available frames,", "smoothing on the first n frames of stored exposure, default (0) combines all", "layer in layers: if layer not in Model.LAYERS_ALL: raise Model.ModelException(Model.__BAD_LAYER.format(layer)) self.data_layers[layer] = Layer(self.plans,", "(SAPI) def __validate_scanning(self,SAPI_packet:dict) -> None: if type(SAPI_packet) != dict: raise TypeError(\"JSON parsed a", "method; Load TimeSlotAvg object from compressed pickle file or create new\" #TODO remove", "fpid:str)->dict: \"Return the flat average Overlay object indexed by each layer stored\" return", "if exists, else set to 1 self.count[l_id] = { 
ov_id:self.count[l_id].get(ov_id,1) for ov_id in", "== 0 or False not in [ len(coord)==2 for coord in coords ]:", "# Raise a racket if theres something wrong self.__validate_scanning(SAPI_packet) dest_layer = Model.get_type(SAPI_packet) observations", "relevant floor obj POST_data[fpid] = {\"type\" : \"SnapshotData\", \"is_ideal\" : idealality} for i,", "config_data[selected_id] self.update_model_config(selected_id,select_data) else: print(\"Warning: config file not found\") try: self.update_model_config(None, {Model.STORE_LAYERS: Model.LAYERS_ALL} )", "= dict() for l_id, layer in data_layers.items(): # Copy the layer structure but", "( self.margin_m[0] * self.floorplan.px_per_m_h, self.margin_m[1] * self.floorplan.px_per_m_w ) self.mask_enabled = False self.pixelmask =", "snapshot) print(response) def addWebhookAddress(self, webhookAddress:str): self.webhook_addresses.append(webhookAddress) ### Historical def put_historical(self) -> None: \"Updates", "tuple or nested list of the form [[x1,x2,y1,y2],...]\" if blindspots==None: pass elif not", "hist_fp_data = self.timeslot.get_floor_avgs(floorPlanId) collective = np.zeros( self.plans[floorPlanId].overlay_dimensions ) for lid in self.data_layers.keys(): mask_enabled", "len(unfixed) def get_delta(self, masked:bool=True, exposure:int=0)->np.ndarray: \"\"\" Returns a copy of the delta overlay", "Exception ): pass DATA_DIR = \"historical_data\" def __init__(self, data_layers:dict, day:int, hour:int): self.day =", "for cam in nonFOVcams: distances[cam] = np.hypot( cam.x-event[0], cam.y-event[1] ) top_n = [", "dm.render_overlay(testarr.reshape(dims)) def update(self)->None: \"Update non-webhook (non-SAPI) layers, write history\" self.pull_mvsense_data() self.put_historical() #spike detect", "overlay ix_divs = np.floor(np.linspace( 0, ix+self.margin_px[0], mx+1 )).astype(\"int32\") iy_divs = np.floor(np.linspace( 0, iy+self.margin_px[1],", "network: expected {} got {}\".format(self.network_id,source_net_id)) def 
get_type(SAPI_packet:dict) -> int: \"Get the Model layer", "model data, if valid time\" if self.is_current_time(debug): # it is valid to update", "if __name__!=\"__main__\": assert isinstance(tsa,TimeSlotAvg) tsa.verify_and_update_struct(data_layers, floors) return tsa def is_current_time(self, debug=None) -> bool:", "floorplan and overlay ix_points = np.floor(np.linspace( 0, imarr.shape[0] + self.margin_px[0], overlay.shape[0] + 1", "in the TimeSlotAvg is compatible with the current Model. If layers or overlays", "objects with an arbitrary key\" obs = self.query_obj.get_camera_observations() # Zip [0,n) with n", "def nearestCameras(self, n:int, floor:Floor,spikeDict:dict)->tuple: #returns a list of camera objects # They call", "( int(self.floorplan_dimensions[0]//Model.CELL_SIZE_M)+1, int(self.floorplan_dimensions[1]//Model.CELL_SIZE_M)+1 ) # Determine the distance between the end of the", "{ cam for cam in self.query_obj.getCameras().values() if cam.floorPlanId == floor.floorplan.id } FOVcams =", "day, hour ) tsa.verify_and_update_struct(data_layers, floors) tsa.write() else: if __name__!=\"__main__\": assert isinstance(tsa,TimeSlotAvg) tsa.verify_and_update_struct(data_layers, floors)", "for general use, instead use Overlay.add\" assert len(self.__unfixed_observations.shape) == len(unfixed_count.shape) assert len(self.__masked_dataoverlay.shape) ==", "self.floorplan.px_per_m_w ) self.mask_enabled = False self.pixelmask = None self.mask = np.ones(self.overlay_dimensions,dtype=np.bool_) self.aps =", "for margin between edge of floorplan and overlay ix_points = np.floor(np.linspace( 0, imarr.shape[0]", "in self.plans.items(): spikedict = self.spike(self.comp_historical(fpid), self.webhook_threshold) if spikedict['spike'] == True: idealality, cameras =", "1 DOWNSAMPLE_THRESHOLD = 0.5 VARIANCE_THRESHOLD = np.hypot( *( 2*(CELL_SIZE_M / 2,) ) )", "exposure to specify mean smoothing on the first n frames of stored exposure,", "Account for change of axis 
um_possible[-int(placeable.y/Model.CELL_SIZE_M),int(placeable.x/Model.CELL_SIZE_M)] = 1 else: # Store parsed location", "address in self.webhook_addresses: response = requests.post(address, json = snapshot) print(response) def addWebhookAddress(self, webhookAddress:str):", "in layers: if layer not in Model.LAYERS_ALL: raise Model.ModelException(Model.__BAD_LAYER.format(layer)) self.data_layers[layer] = Layer(self.plans, Model.DEFAULT_EXPOSURE)", "CONFIG_PATH = os.path.join('model.conf') CELL_SIZE_M = 1 DOWNSAMPLE_THRESHOLD = 0.5 VARIANCE_THRESHOLD = np.hypot( *(", "data_layers.keys(): # Get count if exists, else set to 1 self.count[l_id] = {", "data files\" curr_dt = datetime.datetime.now( datetime.timezone( offset=datetime.timedelta(hours=0) ) ) curr_day = curr_dt.weekday() curr_hour", "class Layer: \"Class representing a data layer: a series of overlays covering all", "TimeSlotAvg( data_layers, day, hour ) tsa.verify_and_update_struct(data_layers, floors) tsa.write() else: if __name__!=\"__main__\": assert isinstance(tsa,TimeSlotAvg)", "extra, do nothing \"\"\" for fpid in set(floorplans).difference(self.overlays.keys()): # A floorplan not represented", "internal observation data. Not for general use, instead use Overlay.add\" assert len(self.__unfixed_observations.shape) ==", "wallthreshold parameter. 
Should be of type int or float, got {}\".format(str(type(wallthreshold)))) if floor_id", "squashes exposures into 1 frame For more info, see Overlay.copy() \"\"\" ly =", "exposure:int): self.floorid = floorid #self.observations = dict() self.overlay_dimensions = overlay_dimensions self.real_dimensions = real_dimensions", "SAPI_packet[\"data\"][\"networkId\"] if SAPI_packet[SECRET_K] != self.secret: raise Model.BadRequest(\"Request has bad authentication secret - rejecting", "that people cannot possibly be on the given floor eg outside high floors.", "fp.mask, self.exposure) for fpid, floor in floorplans.items(): self.overlays[fpid].verify_and_update(floor) class Overlay: \"Represents an data", "cam for cam in cameras if cam.has_FOV() } nonFOVcams = cameras - FOVcams", "it = ix_divs[x] ib = ix_divs[x+1] for y in range(my): #Left, right il", "by magnitude pos = val > 0 alpha = abs(val / absmax) #Left,", "obs != None: overlay.add(obs) def get_deltas(self, masked:bool=True, exposure:int=0) -> dict: \"\"\" Return the", "floorplan mask is in place, mask override takes precident um_possible = placeable.mask_override m_possible", "int(self.floorplan_dimensions[1]//Model.CELL_SIZE_M)+1 ) # Determine the distance between the end of the floorplan and", "Image.alpha_composite(destination,imarr) class Layer: \"Class representing a data layer: a series of overlays covering", "added, removed, or altered, including by set_bounds_mask This will update the Overlay objects", "model. 
API key must be defined in enviroment variable \\\"MERAKI_DASHBOARD_API_KEY\\\"\" self.read_config_data() self.write_config_data() def", "the factory if the current TimeSlotAvg object is not current\" if not (self.timeslot.is_current_time()):", "elems 1 to elems total self.mask[x][y] = self.pixelmask[it:ib,il:ir].mean() < Model.DOWNSAMPLE_THRESHOLD def calc_bounds_mask(self,blindspots=None,wallthreshold:float=None) ->", "= ap ### Scanning API (SAPI) def __validate_scanning(self,SAPI_packet:dict) -> None: if type(SAPI_packet) !=", "and unmasked deltas, and unfixed count from new data # Get full available", "available exposure by default new_um_overlay = over.get_delta(masked=False) new_m_overlay = over.get_delta(masked=True) new_unfixed_obs = over.get_unfixed_observations()", "upd_m_overlay = ( avg_m_overlay * c + new_m_overlay ) / (c+1) upd_unfixed_obs =", "= placeable else: unfixed[id] = placeable self.__add_fixed_locations(fixed) self.__add_unfixed_locations(unfixed) def __add_fixed_locations(self,fixed:dict) -> None: for", "0 else exposure if masked: return self.__masked_dataoverlay[:window].mean(axis=0) else: return self.__unmasked_dataoverlay[:window].mean(axis=0) def get_unfixed_observations(self, exposure:int=0)->float:", "if source_net_id != self.network_id: raise Model.BadRequest(\"Request has data from wrong network: expected {}", "placeable.x ] ) for x in range(um_possible.shape[0]): for y in range(um_possible.shape[1]): # For", "= { cam.mac: cam.get_fov_coords() for cam in self.query_obj.cameras.values() } conf[Model.STORE_BMBOXES] = { fpid:", "Model.ModelException(\"No such floor: \",floor_id) floor = self.plans[floor_id] if on: floor.set_bounds_mask(blindspots,wallthreshold) else: floor.bm_boxes =", "the current time values needed for reading and writing data files\" curr_dt =", "fpid, floor in self.plans.items(): spikedict = self.spike(self.comp_historical(fpid), self.webhook_threshold) if spikedict['spike'] == True: idealality,", "(what coords do we 
get laying one mask over the other) # Account", "(n,2)\") else: raise Model.ModelException(\"Camera with mac {} not found\".format(mac)) else: raise Model.ModelException(\"Model not", "None: if type(SAPI_packet) != dict: raise TypeError(\"JSON parsed a {}, expected a dict\".format(str(type(SAPI_packet))))", "exists if mac in self.query_obj.getCameras().keys(): #Check coords of correct shape and iterable, or", "= exposure self.overlays = { _id : Overlay(_id, floor.overlay_dimensions, floor.floorplan_dimensions, floor.mask, exposure) for", "mac, coords in conf_dict.get(Model.STORE_FOVCOORDS,dict()).items(): self.setFOVs( mac, coords ) for fpid, boxes in conf_dict.get(Model.STORE_BMBOXES,", "update with current model as it is not currently day:{self.day}, hour:{self.hour}\") def write(self):", "people cannot possibly be on the given floor eg outside high floors. Blindspots", "== 4 for spot in blindspots ]: raise ValueError(\"Invalid format for blindspots parameter.", ") um_possible[x,y] = d <= placeable.variance # m(asked)_possible is a copy of the", "try: self.update_model_config(None, {Model.STORE_LAYERS: Model.LAYERS_ALL} ) except APIQuery.APIException: raise Model.ModelException(\"Could not get network from", "layer.overlays[fpid] for layer_id, layer in self.data_layers.items() } @staticmethod def get_time()->tuple: \"Get the current", "APIQuery.get_SAPI_type(SAPI_packet) if api_layer_val == \"WiFi\": return Model.LAYER_SNAP_WIFI elif api_layer_val == \"Bluetooth\": return Model.LAYER_SNAP_BT", "layer in current_data.items(): # Get the respective average layer avg_layer = self.data_layers[l_key] #", "Overlay scaling absmax = max(overlay.max(), overlay.min(), key=abs) #m_max, m_min = absmax, -absmax imarr", "the form [[x1,x2,y1,y2],...] 
\"\"\" self.bm_boxes = blindspots if wallthreshold == None: bd =", "\"fov_mask\" STORE_BMBOXES = \"bm_boxes\" STORE_BDENABLED = \"bd_enabled\" STORE_SECRET = \"sapisecret\" STORE_TOKEN = \"<PASSWORD>_token\"", "for fid, overlay in self.overlays.items(): overlay.roll() obs = bins.get(fid) if obs != None:", "Image.Image: \"\"\" Generate a preview of a bounds mask with given parameters. For", "== None: bd = BoundaryDetector( self.floorplan.get_image() ) else: # pragma: no cover #", "window = self.exposure if exposure == 0 else exposure return self.__unfixed_observations[:window].mean(axis=0) def get_full(self,", "= [ cam[0] for cam in sorted(distances.items(),key=lambda x: x[1])[:n] ] return (\"Best Effort\",", "raise Model.ModelException(\"Could not get network from config file\") class TimeSlotAvg: class TimeSlotAvgException( Exception", "self.data_layers.keys(): #Check camera with mac exists if mac in self.query_obj.getCameras().keys(): #Check coords of", "+= 1 #self.write() else: raise TimeSlotAvg.TimeSlotAvgException(f\"Cannot update with current model as it is", "else: # pragma: no cover # Not covered as only parameter passing bd", "create new If an overlay is extra, do nothing \"\"\" for fpid in", "set to 1 self.count[l_id] = { ov_id:self.count[l_id].get(ov_id,1) for ov_id in data_layers[l_id].overlays.keys() } def", "the Model layer constant for a given SAPI packet\" api_layer_val = APIQuery.get_SAPI_type(SAPI_packet) if", "centre is close enough to be within variance metres d = np.linalg.norm( (np.array([x,y])", "overlay (including distributed unfixed observations) For exposure, see Overlay.get_delta \"\"\" data = self.get_delta(masked,exposure)", "outdated, update - note this does not effect existing data, only new observations", "was kidding # This is 0.707 iff cell_s_m = 1 DEFAULT_EXPOSURE = 3", "note this does not effect existing data, only new observations If an overlay", "= 3 LAYERS_ALL = {LAYER_SNAP_WIFI, LAYER_SNAP_BT, LAYER_MVSENSE} CONFIG_PATH = 
os.path.join('model.conf') CELL_SIZE_M = 1", "return bd.generate_graphic() def render_overlay(self,overlay:np.ndarray,pixelmask:bool=True): \"\"\" Render overlay onto the floorplan image in heatmap", "setFOVs(self,mac:str,coords:set)->None: \"\"\" Set the FOV coords from given camera (by mac). Coords should", "parentddir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir)) sys.path.append(parentddir) from lib.APIQuery import APIQuery, FloorPlan from lib.BoundaryDetector import", "iterable shape (n,2)\") else: raise Model.ModelException(\"Camera with mac {} not found\".format(mac)) else: raise", "detect POST_data = {} for fpid, floor in self.plans.items(): spikedict = self.spike(self.comp_historical(fpid), self.webhook_threshold)", "self.plans ) ### Providers def poll_layer(self,layer:int,exposure:int) -> dict: return self.data_layers[layer].get_full(exposure=exposure) def render_delta(self,floorPlanId:str)->Image: \"Get", "} return conf def write_config_data(self): config_data = {Model.STORE_SELECTED:self.network_id} config_data[self.network_id] = self.serialize() with open(", "count upd_um_overlay = ( avg_um_overlay * c + new_um_overlay ) / (c+1) upd_m_overlay", "# Count c = self.count[l_key][o_key] # update the average by adding current values", "Set a count for each overlay in each layer stored self.count[l_id] = {", "nested list of the form [[x1,x2,y1,y2],...] 
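Several fragments derive the overlay grid shape (`int(dim // CELL_SIZE_M) + 1` per axis) and the leftover margin (`CELL_SIZE_M - dim % CELL_SIZE_M`) between the floorplan edge and the overlay. A sketch of that arithmetic under those formulas; `grid_geometry` is a hypothetical helper name:

```python
CELL_SIZE_M = 1.0  # grid cell size in metres (value taken from the fragments)

def grid_geometry(width_m: float, height_m: float) -> dict:
    """Compute overlay grid shape and the margin past the floorplan edge.

    The overlay covers the floorplan with one extra cell per axis, so the
    margin is the part of the last cell hanging beyond the floorplan.
    """
    cells_x = int(width_m // CELL_SIZE_M) + 1
    cells_y = int(height_m // CELL_SIZE_M) + 1
    margin_x = CELL_SIZE_M - (width_m % CELL_SIZE_M)
    margin_y = CELL_SIZE_M - (height_m % CELL_SIZE_M)
    return {"shape": (cells_x, cells_y), "margin_m": (margin_x, margin_y)}
```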
\"\"\" self.bm_boxes = blindspots if wallthreshold ==", "floor object dictionary\" bins = { id : dict() for id in self.overlays.keys()", "conf_dict.get(Model.STORE_SECRET) self.validator_token = conf_dict.get(Model.STORE_TOKEN) self.webhook_addresses = conf_dict.get(Model.STORE_WEBHOOK,list()) self.password = conf_dict.get(Model.STORE_PASSWORD,Model.DEFAULT_PASSWORD) self.webhook_threshold = conf_dict.get(Model.STORE_WHTHRESHOLD,self.webhook_threshold)", "} def findFloorplanByName(self,name) -> str: \"Find a floorPlanId from the floor name\" for", "pixelmask and image has bounds mask set, will mask final image to keep", "layer in self.data_layers.values(): layer.verify_and_update(self.plans) ### Access Points def getAPs(self) -> None: \"Get APs", "= ( int(self.floorplan_dimensions[0]//Model.CELL_SIZE_M)+1, int(self.floorplan_dimensions[1]//Model.CELL_SIZE_M)+1 ) # Determine the distance between the end of", "between edge of floorplan and overlay ix_divs = np.floor(np.linspace( 0, ix+self.margin_px[0], mx+1 )).astype(\"int32\")", "observations, clearing any previous. Pass floor object dictionary\" bins = { id :", "in enumerate(fov): for y, cell in enumerate(row): if cell: dist = np.hypot( (x+0.5)-event[0],", "construct blank floor layer for each\" floorplans = self.query_obj.pullFloorPlans() self.plans = { id", "the current TimeSlotAvg object is not current\" if not (self.timeslot.is_current_time()): self.timeslot.write() self.timeslot =", "Get the current timeslot object hist_fp_data = self.timeslot.get_floor_avgs(floorPlanId) collective = np.zeros( self.plans[floorPlanId].overlay_dimensions )", "\"Get the Model layer constant for a given SAPI packet\" api_layer_val = APIQuery.get_SAPI_type(SAPI_packet)", "y in range(my): #Left, right il = iy_divs[y] ir = iy_divs[y+1] # As", "\"Set the layer to contain passed observations, clearing any previous. 
Pass floor object", "avg_unfixed_obs * c + new_unfixed_obs ) / (c+1) # save the new data", "30 minutes fixing it ;) event = spikeDict[\"location\"] event_root = tuple([ int(d) for", "use, instead use Overlay.add\" assert len(self.__unfixed_observations.shape) == len(unfixed_count.shape) assert len(self.__masked_dataoverlay.shape) == len(masked_overlay.shape) assert", "self.plans[fpid] dims = dm.overlay_dimensions testarr = np.zeros(dims).ravel() n = (datetime.datetime.now().second / 60) *", "in the respective new data layer for o_key, over in layer.overlays.items(): # Get", "in self self.data_layers[l_id] = data_layers[l_id].copy() self.count[l_id] = dict() print(\"Info: Layer implicitly created for", "self.hour = hour self.data_layers = dict() self.count = dict() for l_id, layer in", "If exposure 0 (default), will provide all available exposures squashed. For more info,", "Throws ModelException in case of dimention mismatch. If mask is outdated, update -", "= np.array( [self.real_dimensions[0] - placeable.y, placeable.x ] ) for x in range(um_possible.shape[0]): for", "only new observations If an overlay missing, print info, create new If an", "squashed. 
For more info, see Overlay.get_delta \"\"\" return { _id : over.get_delta(masked, exposure)", "FOVcams hasView = { cam for cam in FOVcams if cam.get_FOV()[event_root]==True } if", "numpy as np from PIL import Image, ImageFilter import bz2 import pickle import", "ir = iy_divs[y+1] # As array is binary, mean gives ratio of elems", "return { _id : over.get_delta(masked, exposure) for _id, over in self.overlays.items() } def", "new data for l_key, layer in current_data.items(): # Get the respective average layer", "self.__masked_dataoverlay[:] = 0 self.__unmasked_dataoverlay[:] = 0 def copy(self,flatten:bool=False): \"\"\" Return a copy of", "each layer stored self.count[l_id] = { fpid:0 for fpid in layer.overlays.keys() } @staticmethod", "m_possible = np.logical_and( um_possible, self.mask, dtype=np.bool_ ) # If theres a DIV0 here,", "new If an overlay is extra, do nothing \"\"\" for fpid in set(floorplans).difference(self.overlays.keys()):", "given parameters. For more info see Floor.set_bounds_mask \"\"\" if wallthreshold == None: bd", "masked at input, else unmasked data. 
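The `Overlay.get_delta` fragments pick a window with `window = self.exposure if exposure == 0 else exposure` and mean the first `window` frames of the `(exp, x, y)` queue. A standalone sketch of that exposure smoothing; `get_delta` here takes the frame queue directly rather than living on an Overlay:

```python
import numpy as np

def get_delta(frames: np.ndarray, exposure: int = 0) -> np.ndarray:
    """Mean-smooth the first `exposure` frames of an (exp, x, y) queue.

    exposure=0 (the default) averages every stored frame; exposure=1
    returns just the newest frame with no smoothing.
    """
    window = frames.shape[0] if exposure == 0 else exposure
    return frames[:window].mean(axis=0)
```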
Set exposure to specify mean smoothing on", "obs = bins.get(fid) if obs != None: overlay.add(obs) def get_deltas(self, masked:bool=True, exposure:int=0) ->", "mask from BoundaryDetector for areas that people cannot possibly be on the given", "curr_dt = datetime.datetime.now( datetime.timezone( offset=datetime.timedelta(hours=0) ) ) curr_day = curr_dt.weekday() curr_hour = curr_dt.hour", "a dict of retreived floor plan IDs and names\" return { k :", "over in self.overlays.values(): over.clear() def verify_and_update(self, floorplans:dict): \"\"\" Ensure overlays are compatible with", "-> dict: \"Get a dict of retreived floor plan IDs and names\" return", "= self.secret conf[Model.STORE_TOKEN] = self.validator_token conf[Model.STORE_LAYERS] = set(self.data_layers.keys()) conf[Model.STORE_WEBHOOK] = self.webhook_addresses conf[Model.STORE_WHTHRESHOLD] =", "#pragma: no cover \"\"\" Return the full data for each overlay in the", "set_bounds_mask This will update the Overlay objects to reflect this change May throw", "_id, over in self.overlays.items() } def copy(self,flatten:bool=True): \"\"\" Return a copy of the", "over.get_delta(masked=True) new_unfixed_obs = over.get_unfixed_observations() # Similar from averages # avg layers are already", "cam in enumerate(cameras): response = self.getCameraImage(cam) POST_data[fpid][\"camera_data_\" + str(i)] = response if POST_data", "self.overlay_dimensions, self.real_dimensions, self.mask, self.exposure) cp.__unfixed_observations = self.__unfixed_observations.copy() cp.__masked_dataoverlay = self.__masked_dataoverlay.copy() cp.__unmasked_dataoverlay = self.__unmasked_dataoverlay.copy()", "clear(self) -> None: \"Clear accumulated observation data including exposure\" self.__unfixed_observations[:] = 0 self.__masked_dataoverlay[:]", "data masked at input, else unmasked data. 
Set exposure to specify mean smoothing", "coords ]: cam = self.query_obj.cameras[mac] shape = self.plans[cam.floorPlanId].overlay_dimensions cam.set_FOV(shape,coords) else: raise ValueError except", "comp_historical(self, floorPlanId:str): \"Get the relative busyness of a floorplan using all layers\" self.update_timeslot()", "testarr[:int(n)] = 1 return dm.render_overlay(testarr.reshape(dims)) def update(self)->None: \"Update non-webhook (non-SAPI) layers, write history\"", "region imarr.paste((0,0,0,0),mask) elif pixelmask: print(\"Error: Could not filter by pixel mask as no", "my in range(overlay.shape[1]): val = overlay[mx,my] if val==0: continue #Colour by sign, scale", "Get full available exposure by default new_um_overlay = over.get_delta(masked=False) new_m_overlay = over.get_delta(masked=True) new_unfixed_obs", "def setBoundsMask(self,floor_id:str,on:bool,blindspots=None,wallthreshold:float=None) -> None: \"Generate a mask from BoundaryDetector for areas that people", "floorplans = self.query_obj.pullFloorPlans() self.plans = { id : Floor(fp) for id,fp in floorplans.items()", "import os import sys import numpy as np from PIL import Image, ImageFilter", "None: bd = BoundaryDetector( self.floorplan.get_image() ) else: # pragma: no cover # Not", "= self.query_obj.extract_SAPI_observations(SAPI_packet) self.data_layers[dest_layer].set_observations(observations) ### Camera and MVSense def setFOVs(self,mac:str,coords:set)->None: \"\"\" Set the FOV", "curr_hour == self.hour def update_avg_data(self, current_data:dict, debug:bool=None) -> None: \"Updates an average model", "um_possible = placeable.mask_override m_possible = placeable.mask_override else: if placeable.variance < Model.VARIANCE_THRESHOLD: # Calculated", "self.query_obj.getCameras().keys(): #Check coords of correct shape and iterable, or of len 0 to", ") if blindspots != None: for spot in blindspots: bd.add_blindspot(*tuple(spot)) bd.run() self.pixelmask =", "or altered, including by set_bounds_mask 
This will update the Overlay objects to reflect", "iy+self.margin_px[1], my+1 )).astype(\"int32\") for x in range(mx): #Top, bottom it = ix_divs[x] ib", "for i, cam in enumerate(cameras): response = self.getCameraImage(cam) POST_data[fpid][\"camera_data_\" + str(i)] = response", "Floorplan dimension mismatch for FPID={}\".format(floor.floorplan.id)) # Update mask in case of change #", "Pass len(iterable)==0 to unset mask \"\"\" #Check if Layer exists if Model.LAYER_MVSENSE in", "if type(SAPI_packet) != dict: raise TypeError(\"JSON parsed a {}, expected a dict\".format(str(type(SAPI_packet)))) try:", "selected_id = config_data[Model.STORE_SELECTED] select_data = config_data[selected_id] self.update_model_config(selected_id,select_data) else: print(\"Warning: config file not found\")", "applied m_possible = np.logical_and( um_possible, self.mask, dtype=np.bool_ ) # If theres a DIV0", "exposure:int=0)->float: \"\"\" Return how many unfixed observations were passed. For more details on", "unfixed[id] = placeable self.__add_fixed_locations(fixed) self.__add_unfixed_locations(unfixed) def __add_fixed_locations(self,fixed:dict) -> None: for placeable in fixed.values():", "from PIL import Image, ImageFilter import bz2 import pickle import datetime import requests", "if flatten: cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, 1) cp.__unfixed_observations[0] = self.__unfixed_observations.mean(axis=0) cp.__masked_dataoverlay[0]", "+ new_m_overlay ) / (c+1) upd_unfixed_obs = ( avg_unfixed_obs * c + new_unfixed_obs", "tests if day==None or hour==None: day, hour = TimeSlotAvg.get_time() try: tsa = bz2.BZ2File(os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(day,", "fpid,fp in self.plans.items() } return conf def write_config_data(self): config_data = {Model.STORE_SELECTED:self.network_id} config_data[self.network_id] =", "in range(len(clusters)): for y in range(len(clusters[0])): clusters[x,y] = layer[3*x:3*(x+1),3*y:3*(y+1)].sum() 
busiest = 0 busiest_location=None", "0,imarr.shape[1] + self.margin_px[1], overlay.shape[1] + 1 )).astype(\"int32\") for mx in range(overlay.shape[0]): # Top,", "or overlays are not represented in TSA, those are created, infos are printed", "self.plans ) self.timeslot.verify_and_update_struct( self.data_layers, self.plans ) ### Providers def poll_layer(self,layer:int,exposure:int) -> dict: return", "self.timeslot = TimeSlotAvg.load( self.data_layers, self.plans ) self.timeslot.verify_and_update_struct( self.data_layers, self.plans ) ### Providers def", "for fpid, boxes in conf_dict.get(Model.STORE_BMBOXES, dict()).items(): on = conf_dict.get(Model.STORE_BDENABLED,{fpid:False})[fpid] self.setBoundsMask(fpid, on, boxes) def", "structure but clear the transient data # Also we only need 1 frame", "self.margin_m[0] * self.floorplan.px_per_m_h, self.margin_m[1] * self.floorplan.px_per_m_w ) self.mask_enabled = False self.pixelmask = None", "the respective new data layer for o_key, over in layer.overlays.items(): # Get masked", "= \"webhooklist\" STORE_SELECTED = \"selectednet\" STORE_LAYERS = \"layers\" STORE_FOVCOORDS = \"fov_coords\" STORE_FOVMASK =", "= d <= placeable.variance # m(asked)_possible is a copy of the u(n)m(asked)_possible's with", "collective += (current - historical) collective /= len(self.data_layers) return collective def update_timeslot(self): \"Calls", "\"Generate a mask from BoundaryDetector for areas that people cannot possibly be on", "self.timeslot.get_floor_avgs(floorPlanId) collective = np.zeros( self.plans[floorPlanId].overlay_dimensions ) for lid in self.data_layers.keys(): mask_enabled = self.plans[floorPlanId].mask_enabled", "clear(self)->None: \"Clear all member overlays of observation data\" for over in self.overlays.values(): over.clear()", "data. 
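The `spike` fragments (`busiest = 0`, `busiest_location=None`, `clusters[x,y] = layer[3*x:3*(x+1),3*y:3*(y+1)].sum()`) scan an overlay in 3x3 clusters and flag a spike when the busiest cluster beats a threshold. A sketch of that scan; reporting the cluster centre as the location is an assumption:

```python
import numpy as np

def spike(layer: np.ndarray, threshold: float) -> dict:
    """Find the busiest 3x3 cluster and flag a spike above the threshold."""
    cx, cy = layer.shape[0] // 3, layer.shape[1] // 3
    busiest, busiest_location = 0.0, None
    for x in range(cx):
        for y in range(cy):
            total = layer[3 * x:3 * (x + 1), 3 * y:3 * (y + 1)].sum()
            if total > busiest:
                busiest = total
                # Report the cluster centre in grid coordinates
                busiest_location = (3 * x + 1, 3 * y + 1)
    return {"spike": busiest > threshold, "location": busiest_location}
```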
Set exposure to specify mean smoothing on the first n frames of", "Must be called when a Floor is added, removed, or altered, including by", "mac {} not found\".format(mac)) else: raise Model.ModelException(\"Model not configured for LAYER_MVSENSE\") def pull_mvsense_data(self):", "for change of axis um_possible[-int(placeable.y/Model.CELL_SIZE_M),int(placeable.x/Model.CELL_SIZE_M)] = 1 else: # Store parsed location in", "update the Overlay objects to reflect this change May throw error if dimensions", "observations = self.__generate_person_obs() self.data_layers[Model.LAYER_MVSENSE].set_observations(observations) def spike(self, layer, threshhold)->dict: #add floorplan ID into params,", "return curr_day, curr_hour def verify_and_update_struct(self, data_layers:dict, floors:dict)->None: \"\"\" Verifies that the data in", "= {} self.bm_boxes = [] def set_bounds_mask(self,blindspots=None,wallthreshold:float=None) -> None: \"\"\"\" Generate a mask", "blindspots != None: for spot in blindspots: bd.add_blindspot(*tuple(spot)) bd.run() return bd.generate_graphic() def render_overlay(self,overlay:np.ndarray,pixelmask:bool=True):", "< mindist: mindist = dist distances[cam] = mindist for cam in nonFOVcams: distances[cam]", "= self.query_obj.cameras[mac] shape = self.plans[cam.floorPlanId].overlay_dimensions cam.set_FOV(shape,coords) else: raise ValueError except (ValueError, TypeError) as", "1 else: # Store parsed location in x,y tuple, in m, with change", "[] def set_bounds_mask(self,blindspots=None,wallthreshold:float=None) -> None: \"\"\"\" Generate a mask from BoundaryDetector for areas", "timeslots model, promoting as exposure of historicals = 1 self.data_layers[l_key].overlays[o_key].set( upd_unfixed_obs[None,], upd_m_overlay[None,], upd_um_overlay[None,]", "in self.plans: if self.plans[k].floorplan.name == name: return k return None def setBoundsMask(self,floor_id:str,on:bool,blindspots=None,wallthreshold:float=None) ->", "data layer: a series of overlays covering 
all floorplans with data from a", "floorplans[fpid] self.overlays[fpid] = Overlay(fpid, fp.overlay_dimensions, fp.floorplan_dimensions, fp.mask, self.exposure) for fpid, floor in floorplans.items():", "= self.__unmasked_dataoverlay.mean(axis=0) else: cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, self.exposure) cp.__unfixed_observations = self.__unfixed_observations.copy()", "== 0 else exposure if masked: return self.__masked_dataoverlay[:window].mean(axis=0) else: return self.__unmasked_dataoverlay[:window].mean(axis=0) def get_unfixed_observations(self,", "exposure:int=0) -> dict: \"\"\" Return the delta data for each overlay in the", "def clear(self) -> None: \"Clear accumulated observation data including exposure\" self.__unfixed_observations[:] = 0", "0 else exposure return self.__unfixed_observations[:window].mean(axis=0) def get_full(self, masked:bool=True, exposure:int=0)->np.ndarray: #pragma: no cover \"\"\"", "ib = ix_divs[x+1] for y in range(my): #Left, right il = iy_divs[y] ir", "observations.items(): bins[placeable.floorPlanId][id] = placeable for fid, overlay in self.overlays.items(): overlay.roll() obs = bins.get(fid)", "spot in blindspots ]: raise ValueError(\"Invalid format for blindspots parameter. 
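The `update_avg_data` fragments fold new observations into the stored historical average with `( avg * c + new ) / (c+1)`, then bump the count. The core of that incremental mean, isolated as a free function for illustration:

```python
import numpy as np

def update_average(avg: np.ndarray, new: np.ndarray, count: int) -> np.ndarray:
    """Fold one new observation frame into a running average.

    Implements avg' = (avg * c + new) / (c + 1); the caller is expected
    to increment its own count afterwards, as the fragments do.
    """
    return (avg * count + new) / (count + 1)
```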
Should be of", "#Check if Layer exists if Model.LAYER_MVSENSE in self.data_layers.keys(): #Check camera with mac exists", "Providers def poll_layer(self,layer:int,exposure:int) -> dict: return self.data_layers[layer].get_full(exposure=exposure) def render_delta(self,floorPlanId:str)->Image: \"Get the current datamap", "self.password = conf_dict.get(Model.STORE_PASSWORD,Model.DEFAULT_PASSWORD) self.webhook_threshold = conf_dict.get(Model.STORE_WHTHRESHOLD,self.webhook_threshold) for mac, coords in conf_dict.get(Model.STORE_FOVCOORDS,dict()).items(): self.setFOVs( mac,", "self.mask[x][y] = self.pixelmask[it:ib,il:ir].mean() < Model.DOWNSAMPLE_THRESHOLD def calc_bounds_mask(self,blindspots=None,wallthreshold:float=None) -> Image.Image: \"\"\" Generate a preview", "a count for each overlay in each layer stored self.count[l_id] = { fpid:0", "ModelException in case of dimention mismatch. If mask is outdated, update - note", "with passed floor, updates mask\" if self.overlay_dimensions != floor.overlay_dimensions or self.real_dimensions != floor.floorplan_dimensions:", "### Layers def update_layers(self)->None: \"\"\" Must be called when a Floor is added,", "in self.plans.items() } return conf def write_config_data(self): config_data = {Model.STORE_SELECTED:self.network_id} config_data[self.network_id] = self.serialize()", "k,v in self.plans.items() } def findFloorplanByName(self,name) -> str: \"Find a floorPlanId from the", "floorplan and the data overlay, in metres and px self.margin_m = ( Model.CELL_SIZE_M", "id in self.overlays.keys() } for id, placeable in observations.items(): bins[placeable.floorPlanId][id] = placeable for", "overlay.add(obs) def get_deltas(self, masked:bool=True, exposure:int=0) -> dict: \"\"\" Return the delta data for", "raise Model.ModelException(\"Camera with mac {} not found\".format(mac)) else: raise Model.ModelException(\"Model not configured for", "dividing by new count upd_um_overlay = ( avg_um_overlay * c + new_um_overlay )", "wish I was 
kidding # This is 0.707 iff cell_s_m = 1 DEFAULT_EXPOSURE", "a floorPlanId from the floor name\" for k in self.plans: if self.plans[k].floorplan.name ==", "the TimeSlotAvg is compatible with the current Model. If layers or overlays are", "= self.getCameraImage(cam) POST_data[fpid][\"camera_data_\" + str(i)] = response if POST_data != {}: self.snapshotWebhook(POST_data) ###", "were passed. For more details on exposure, see Overlay.get_delta \"\"\" window = self.exposure", "from cameras and feed into data layer\" self.query_obj.updateCameraMVSenseData() observations = self.__generate_person_obs() self.data_layers[Model.LAYER_MVSENSE].set_observations(observations) def", "Get count if exists, else set to 1 self.count[l_id] = { ov_id:self.count[l_id].get(ov_id,1) for", "\"Calls the factory if the current TimeSlotAvg object is not current\" if not", ").astype(np.uint8) imarr = Image.fromarray(imarr,\"RGBA\").filter( ImageFilter.BoxBlur( BLUR_CELLS * destination.size[0] / overlay.shape[1] ) ) if", "self.is_current_time(debug): # it is valid to update with the current model # For", "= unfixed_count self.__masked_dataoverlay = masked_overlay self.__unmasked_dataoverlay = unmasked_overlay def roll(self) -> None: \"Roll", "__init__(self, data_layers:dict, day:int, hour:int): self.day = day self.hour = hour self.data_layers = dict()", "# Account for margin between edge of floorplan and overlay ix_points = np.floor(np.linspace(", "cam.mac: cam.get_fov_coords() for cam in self.query_obj.cameras.values() } conf[Model.STORE_BMBOXES] = { fpid: fp.bm_boxes for", "count for each overlay in each layer stored self.count[l_id] = { fpid:0 for", "os.path.join('model.conf') CELL_SIZE_M = 1 DOWNSAMPLE_THRESHOLD = 0.5 VARIANCE_THRESHOLD = np.hypot( *( 2*(CELL_SIZE_M /", "each layer in the new data for l_key, layer in current_data.items(): # Get", "ix_divs[x] ib = ix_divs[x+1] for y in range(my): #Left, right il = iy_divs[y]", "ID {}\".format(l_id)) for l_id, layer in 
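The `VARIANCE_THRESHOLD = np.hypot( *( 2*(CELL_SIZE_M /` fragment is truncated; one plausible completion consistent with the "0.707 iff cell_s_m = 1" comment is the half-diagonal of a grid cell, written via tuple repetition. This is a guessed reconstruction, not the project's verbatim constant:

```python
import numpy as np

CELL_SIZE_M = 1.0

# `2 * (x,)` repeats a one-element tuple, so the star-unpack below is
# np.hypot(CELL_SIZE_M / 2, CELL_SIZE_M / 2): half the diagonal of one
# cell, i.e. the farthest any in-cell point sits from the cell centre.
VARIANCE_THRESHOLD = np.hypot(*(2 * (CELL_SIZE_M / 2,)))
```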
self.data_layers.items(): # Add any missing overlays layer.verify_and_update(floors)", "self.real_dimensions, self.mask, 1) cp.__unfixed_observations[0] = self.__unfixed_observations.mean(axis=0) cp.__masked_dataoverlay[0] = self.__masked_dataoverlay.mean(axis=0) cp.__unmasked_dataoverlay[0] = self.__unmasked_dataoverlay.mean(axis=0) else:", "Scanning API (SAPI) def __validate_scanning(self,SAPI_packet:dict) -> None: if type(SAPI_packet) != dict: raise TypeError(\"JSON", "the current TimeSlotAvg object\" self.update_timeslot() # Get the current timeslot object self.timeslot.update_avg_data( self.data_layers", "the layer. Flatten squashes exposures into 1 frame For more info, see Overlay.copy()", "= \"webhook_threshold\" def update_model_config(self, netid, conf_dict): layers = conf_dict.get(Model.STORE_LAYERS,set()) try: if netid !=", "(np.array([x,y]) + 0.5) * Model.CELL_SIZE_M - client_loc ) um_possible[x,y] = d <= placeable.variance", "minutes fixing it ;) event = spikeDict[\"location\"] event_root = tuple([ int(d) for d", "floor.bm_boxes = blindspots floor.mask[:] = 1 floor.mask_enabled = on self.update_layers() ### Layers def", "= floormask assert exposure > 0 self.exposure = exposure # Exposure queue, shape", "= Layer(self.plans, Model.DEFAULT_EXPOSURE) self.timeslot = TimeSlotAvg.load(self.data_layers,self.plans) self.webhook_addresses = [] ### Floorplans def pullFloors(self)", "in range(um_possible.shape[1]): # For each square, see if it's centre is close enough", "(eg Model.LAYER_*)\" class ModelException(Exception): pass class BadRequest(Exception): pass def __init__(self,network_id:str=None,layers:set={}): \"Initialise model. 
API", "ix = self.pixelmask.shape[0] iy = self.pixelmask.shape[1] #Image-scale chunk divisions (what coords do we", "data_layers:dict, floors:dict)->None: \"\"\" Verifies that the data in the TimeSlotAvg is compatible with", "list of camera objects # They call me the comprehension king # Maybe", "except AttributeError: self.query_obj = APIQuery(netid) self.populate(layers) self.secret = conf_dict.get(Model.STORE_SECRET) self.validator_token = conf_dict.get(Model.STORE_TOKEN) self.webhook_addresses", "= ix_points[mx+1] for my in range(overlay.shape[1]): val = overlay[mx,my] if val==0: continue #Colour", "placeable.y, placeable.x ] ) for x in range(um_possible.shape[0]): for y in range(um_possible.shape[1]): #", "% Model.CELL_SIZE_M), Model.CELL_SIZE_M - (self.floorplan_dimensions[1] % Model.CELL_SIZE_M) ) self.margin_px = ( self.margin_m[0] *", "something wrong self.__validate_scanning(SAPI_packet) dest_layer = Model.get_type(SAPI_packet) observations = self.query_obj.extract_SAPI_observations(SAPI_packet) self.data_layers[dest_layer].set_observations(observations) ### Camera and", "save the new data overlay to the timeslots model, promoting as exposure of", "-> None: for placeable in fixed.values(): # We store both a copy of", "if not os.path.exists(TimeSlotAvg.DATA_DIR): os.makedirs(TimeSlotAvg.DATA_DIR) with bz2.BZ2File(filepath, 'wb') as f: pickle.dump(self, f) def get_floor_avgs(self,", "a single floorplan\" def __init__(self, floorid:str, overlay_dimensions:tuple, real_dimensions:tuple, floormask:np.ndarray, exposure:int): self.floorid = floorid", "the current model data, if valid time\" if self.is_current_time(debug): # it is valid", "Maybe after we spend 30 minutes fixing it ;) event = spikeDict[\"location\"] event_root", "d in event ]) cameras = { cam for cam in self.query_obj.getCameras().values() if", "spikeDict[\"location\"] event_root = tuple([ int(d) for d in event ]) cameras = {", "self.__unmasked_dataoverlay = np.roll(self.__unmasked_dataoverlay, 1, 
axis=0) self.__unfixed_observations = np.roll(self.__unfixed_observations, 1, axis=0) self.__masked_dataoverlay[0] = 0", "elif not (isinstance(blindspots,(np.ndarray,list,tuple))): raise TypeError(\"Invalid type for blindspots parameter. Should be of type", "timeslot using the current model data, if valid time\" if self.is_current_time(debug): # it", "cam in cameras if cam.has_FOV() } nonFOVcams = cameras - FOVcams hasView =", "new_m_overlay = over.get_delta(masked=True) new_unfixed_obs = over.get_unfixed_observations() # Similar from averages # avg layers", "average data for the current TimeSlotAvg object\" self.update_timeslot() # Get the current timeslot", "nested list of the form [[x1,x2,y1,y2],...]\" if blindspots==None: pass elif not (isinstance(blindspots,(np.ndarray,list,tuple))): raise", "dict of retreived floor plan IDs and names\" return { k : v.floorplan.name", "self.query_obj.pullFloorPlans() self.plans = { id : Floor(fp) for id,fp in floorplans.items() } return", "(\"Best Effort\", top_n) def getCameraImage(self, camera) -> dict: #returns a dictionary containing a", "Camera and MVSense def setFOVs(self,mac:str,coords:set)->None: \"\"\" Set the FOV coords from given camera", "self.__unmasked_dataoverlay[:] = 0 def copy(self,flatten:bool=False): \"\"\" Return a copy of the overlay. If", "floor plan IDs and names\" return { k : v.floorplan.name for k,v in", "render_overlay(self,overlay:np.ndarray,pixelmask:bool=True): \"\"\" Render overlay onto the floorplan image in heatmap form. 
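The `Overlay.roll` fragments advance the exposure queue with `np.roll(..., 1, axis=0)` and then zero slot 0 so new observations land in a fresh frame (the rolled-around oldest frame is overwritten rather than kept). A sketch of that step on a bare array:

```python
import numpy as np

def roll_queue(frames: np.ndarray) -> np.ndarray:
    """Advance an (exp, x, y) exposure queue by one frame.

    Shifts every frame back one slot and zeroes slot 0 so new
    observations accumulate into a fresh frame; the oldest frame
    (which np.roll wraps to the front) is discarded by the zeroing.
    """
    rolled = np.roll(frames, 1, axis=0)
    rolled[0] = 0
    return rolled
```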
If pixelmask", "This is 0.707 iff cell_s_m = 1 DEFAULT_EXPOSURE = 3 DEFAULT_PASSWORD = '<PASSWORD>'", "file not found\") try: self.update_model_config(None, {Model.STORE_LAYERS: Model.LAYERS_ALL} ) except APIQuery.APIException: raise Model.ModelException(\"Could not", "mask exists\") destination.putalpha(255) return Image.alpha_composite(destination,imarr) class Layer: \"Class representing a data layer: a", "+ str(ke) ) if source_net_id != self.network_id: raise Model.BadRequest(\"Request has data from wrong", "data for the current TimeSlotAvg object\" self.update_timeslot() # Get the current timeslot object", "SAPI packet\" api_layer_val = APIQuery.get_SAPI_type(SAPI_packet) if api_layer_val == \"WiFi\": return Model.LAYER_SNAP_WIFI elif api_layer_val", "m_possible.sum() def __add_unfixed_locations(self,unfixed:dict) -> None: self.__unfixed_observations[0] += len(unfixed) def get_delta(self, masked:bool=True, exposure:int=0)->np.ndarray: \"\"\"", "dm.overlay_dimensions testarr = np.zeros(dims).ravel() n = (datetime.datetime.now().second / 60) * len(testarr) testarr[:int(n)] =", "= config_data[Model.STORE_SELECTED] select_data = config_data[selected_id] self.update_model_config(selected_id,select_data) else: print(\"Warning: config file not found\") try:", "any missing overlays layer.verify_and_update(floors) for l_id in data_layers.keys(): # Get count if exists,", "np.ndarray) and pixelmask: # Tidy the edges # Make the mask an Image,", "np.zeros(exposure) self.__masked_dataoverlay = np.zeros( (exposure,)+overlay_dimensions, dtype=\"float32\" ) self.__unmasked_dataoverlay = np.zeros( (exposure,)+overlay_dimensions, dtype=\"float32\" )", "full client overlay (including distributed unfixed observations) For exposure, see Overlay.get_delta \"\"\" data", "ixs = ix_points[mx] ixe = ix_points[mx+1] for my in range(overlay.shape[1]): val = overlay[mx,my]", "areas of the floor that people cannot possibly be or are to be", "shape (n,4), got {}\".format(str(blindspots))) if 
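The `render_overlay` fragments colour each cell by sign (`POS`/`NEG` RGBA constants), scale by magnitude against `absmax = max(overlay.max(), overlay.min(), key=abs)`, then blur and composite onto the floorplan with PIL. A numpy-only sketch of the colouring stage, leaving the PIL compositing aside:

```python
import numpy as np

POS = np.array([255, 0, 0, 180], dtype=np.uint8)  # red for positive deltas
NEG = np.array([0, 255, 0, 180], dtype=np.uint8)  # green for negative deltas

def colour_overlay(overlay: np.ndarray) -> np.ndarray:
    """Map a signed delta overlay to RGBA: colour by sign, scale by magnitude."""
    absmax = max(overlay.max(), overlay.min(), key=abs)
    img = np.zeros(overlay.shape + (4,), dtype=np.uint8)
    if absmax == 0:
        return img  # all-zero overlay stays fully transparent
    for x in range(overlay.shape[0]):
        for y in range(overlay.shape[1]):
            val = overlay[x, y]
            if val == 0:
                continue
            colour = POS if val > 0 else NEG
            img[x, y] = (colour * abs(val) / abs(absmax)).astype(np.uint8)
    return img
```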
wallthreshold == None: pass elif not isinstance(wallthreshold,(int,float)): raise", "class for the floorplan object including model data\" def __init__(self,floorplan:FloorPlan): self.floorplan = floorplan", "= SAPI_packet[\"data\"][\"networkId\"] if SAPI_packet[SECRET_K] != self.secret: raise Model.BadRequest(\"Request has bad authentication secret -", "return data def verify_and_update(self,floor:Floor)->None: \"Verifies overlay is compatible with passed floor, updates mask\"", "def verify_and_update_struct(self, data_layers:dict, floors:dict)->None: \"\"\" Verifies that the data in the TimeSlotAvg is", "exposure > 0 self.exposure = exposure # Exposure queue, shape (exp,x,y) self.__unfixed_observations =", "floorplans.items() } return self.plans def getFloorplanSummary(self) -> dict: \"Get a dict of retreived", "else: raise Model.ModelException(\"Camera with mac {} not found\".format(mac)) else: raise Model.ModelException(\"Model not configured", "default (0) combines all available frames, 1 gives only the latest frame (no", "me the comprehension king # Maybe after we spend 30 minutes fixing it", "= conf_dict.get(Model.STORE_BDENABLED,{fpid:False})[fpid] self.setBoundsMask(fpid, on, boxes) def serialize(self): conf = dict() conf[Model.STORE_SECRET] = self.secret", "set(self.data_layers.keys()) conf[Model.STORE_WEBHOOK] = self.webhook_addresses conf[Model.STORE_WHTHRESHOLD] = self.webhook_threshold conf[Model.STORE_PASSWORD] = self.password conf[Model.STORE_FOVCOORDS] = {", "overlay (only fixed observations). If masked, will select data masked at input, else", "margin between edge of floorplan and overlay ix_points = np.floor(np.linspace( 0, imarr.shape[0] +", "set(self, unfixed_count:np.ndarray, masked_overlay:np.ndarray, unmasked_overlay:np.ndarray): \"Sets internal observation data. 
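The surrounding fragments validate `set_bounds_mask` parameters: blindspots must be an iterable of `[x1,x2,y1,y2]` boxes (shape `(n,4)`) and the wall threshold a number. A sketch assembling those checks into one hypothetical helper, reusing the fragments' error messages:

```python
import numpy as np

def validate_bounds_args(blindspots=None, wallthreshold=None) -> None:
    """Validate bounds-mask parameters before running boundary detection."""
    if blindspots is not None:
        if not isinstance(blindspots, (np.ndarray, list, tuple)):
            raise TypeError(
                "Invalid type for blindspots parameter. Should be of type "
                "nested list of the form [[x1,x2,y1,y2],...]"
            )
        if not all(len(spot) == 4 for spot in blindspots):
            raise ValueError(
                "Invalid format for blindspots parameter. Should be of "
                "shape (n,4), got {}".format(str(blindspots))
            )
    if wallthreshold is not None and not isinstance(wallthreshold, (int, float)):
        raise TypeError("Invalid type for wallthreshold parameter.")
```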
# Reconstructed from overlapping fragments; statements that could not be
# recovered are elided with `...` or marked "inferred" in comments.
import os
import bz2
import pickle
import datetime

import numpy as np
import requests
from PIL import Image, ImageFilter

# Project-local classes referenced below (their import lines were not
# recovered): APIQuery, FloorPlan, BoundaryDetector


class Floor:
    def __init__(self, floorplan: FloorPlan):
        self.floorplan = floorplan
        # Dimensions should default to (height, width)
        self.floorplan_dimensions = (self.floorplan.height, self.floorplan.width)
        self.overlay_dimensions = (
            int(self.floorplan_dimensions[0] // Model.CELL_SIZE_M) + 1,
            int(self.floorplan_dimensions[1] // Model.CELL_SIZE_M) + 1,
        )
        # Determine the distance between the end of the floorplan and the data
        # overlay, in metres and in pixels
        self.margin_m = (
            Model.CELL_SIZE_M - (self.floorplan_dimensions[0] % Model.CELL_SIZE_M),
            Model.CELL_SIZE_M - (self.floorplan_dimensions[1] % Model.CELL_SIZE_M),
        )
        self.margin_px = (
            self.margin_m[0] * self.floorplan.px_per_m_h,
            ...,  # second component not recovered
        )
        self.pixelmask = None
        self.mask = np.ones(self.overlay_dimensions, dtype=np.bool_)
        self.mask_enabled = True  # default inferred
        self.aps = {}
        self.bm_boxes = []

    def set_bounds_mask(self, blindspots=None, wallthreshold: float = None) -> None:
        """
        Generate a mask from BoundaryDetector for areas of the floor that
        people cannot possibly be or are to ...
        blindspots should be a Numpy array, tuple or nested list of the form
        [[x1,x2,y1,y2],...]
        """
        if not isinstance(blindspots, (np.ndarray, list, tuple)):
            raise TypeError("Invalid type for blindspots parameter. "
                            "Should be of type np.array, list or tuple")
        elif False in [len(spot) == 4 for spot in blindspots]:
            raise ValueError("Invalid format for blindspots parameter. "
                             "Should be of shape (n,4), got {}".format(str(blindspots)))
        if wallthreshold is not None and not isinstance(wallthreshold, (int, float)):
            raise TypeError("Invalid type for wallthreshold parameter. "
                            "Should be of type int or float, got {}".format(str(type(wallthreshold))))
        if wallthreshold is None:
            bd = BoundaryDetector(self.floorplan.get_image())
        else:  # pragma: no cover
            # Not covered as only ...
            bd = BoundaryDetector(self.floorplan.get_image(), wallthreshold)  # inferred
        for spot in blindspots:
            bd.add_blindspot(*tuple(spot))
        bd.run()
        self.pixelmask = bd.getBoundaryMask()
        # Downsample from pixel level to mask level (assuming cell >> pixel)
        # Mask (1msq-scale, small-dims) dims
        mx = self.mask.shape[0]
        my = self.mask.shape[1]
        # Image-scale pixel mask (big-dims) dims
        ix, iy = ...
        # Account for margin between edge of floorplan and overlay
        ix_divs = np.floor(np.linspace(0, ix + self.margin_px[0], mx + 1)).astype("int32")
        iy_divs = np.floor(np.linspace(0, iy + self.margin_px[1], my + 1)).astype("int32")
        for x in range(mx):
            # Top, bottom
            it = ix_divs[x]
            ib = ix_divs[x + 1]
            for y in range(my):
                # Left, right
                il = iy_divs[y]
                ir = iy_divs[y + 1]
                # As the array is binary, the mean gives the ratio of set pixels
                self.mask[x, y] = self.pixelmask[it:ib, il:ir].mean() > Model.DOWNSAMPLE_THRESHOLD  # comparison direction inferred

    def preview_bounds_mask(self, blindspots=None, wallthreshold: float = None):
        # method name inferred
        """
        Generate a preview of a bounds mask with given parameters.
        For more info see Floor.set_bounds_mask
        """
        if wallthreshold is None:
            bd = BoundaryDetector(self.floorplan.get_image())
        else:  # pragma: no cover
            bd = BoundaryDetector(self.floorplan.get_image(), wallthreshold)  # inferred
        ...

    def render_overlay(self, overlay: np.ndarray, pixelmask: bool = True) -> Image:
        """
        Render a data overlay onto the floorplan image. If a mask is set, will
        mask the final image to keep the overlay heatmap within bounds.
        """
        POS = np.array([255, 0, 0, 180], dtype=np.uint8)
        NEG = np.array([0, 255, 0, 180], dtype=np.uint8)
        BLUR_CELLS = 0.35
        destination = self.floorplan.get_image().convert("RGBA")
        # Overlay scaling
        absmax = max(overlay.max(), overlay.min(), key=abs)
        imarr = np.zeros((destination.size[1], destination.size[0], 4), dtype="uint8")
        # Account for margin between edge of floorplan and overlay
        ix_points = np.floor(np.linspace(0, destination.size[1], overlay.shape[0] + 1)).astype("int32")  # endpoints inferred
        iy_points = np.floor(np.linspace(0, destination.size[0], overlay.shape[1] + 1)).astype("int32")  # endpoints inferred
        for mx in range(overlay.shape[0]):
            # Top, bottom
            ixs = ix_points[mx]
            ixe = ix_points[mx + 1]
            for my in range(overlay.shape[1]):
                # Left, right
                iys = iy_points[my]
                iye = iy_points[my + 1]
                val = overlay[mx, my]
                pos = val > 0
                alpha = abs(val / absmax)
                imarr[ixs:ixe, iys:iye] = POS if pos else NEG
                imarr[ixs:ixe, iys:iye, 3] = (imarr[ixs:ixe, iys:iye, 3].astype(np.float64) * alpha).astype(np.uint8)
        imarr = Image.fromarray(imarr, "RGBA").filter(
            ImageFilter.BoxBlur(BLUR_CELLS * destination.size[0] / overlay.shape[1])
        )
        if isinstance(self.pixelmask, np.ndarray) and pixelmask:
            # Tidy the edges: make the mask an Image, mode=L, and paste alpha
            # on the masked region
            mask = Image.fromarray(255 * self.pixelmask.astype(np.uint8), "L")
            imarr.paste((0, 0, 0, 0), mask)
        destination.alpha_composite(imarr)  # final compositing step inferred
        return destination


class Model:
    LAYER_SNAP_WIFI = 1
    LAYER_SNAP_BT = 2
    LAYER_MVSENSE = ...  # value not recovered
    LAYERS_ALL = {LAYER_SNAP_WIFI, LAYER_SNAP_BT, LAYER_MVSENSE}
    CONFIG_PATH = os.path.join('model.conf')
    CELL_SIZE_M = 1
    DOWNSAMPLE_THRESHOLD = 0.5
    # This means let x = cell_s_m / 2; V_T = sqrt(x^2+x^2)
    # I wish I was kidding
    VARIANCE_THRESHOLD = np.hypot(CELL_SIZE_M / 2, CELL_SIZE_M / 2)
    DEFAULT_EXPOSURE = ...   # value not recovered
    DEFAULT_PASSWORD = ...   # value not recovered
    LAYER_ERROR = "Layer {} not defined. Use internally defined layer (eg Model.LAYER_*)"  # constant name inferred

    class ModelException(Exception):
        pass

    class BadRequest(Exception):
        pass

    def __init__(self, network_id: str = None, layers: set = {}):
        "Initialise model. API key must ..."
        ...
        self.data_layers = dict()
        self.webhook_threshold = 0.35
        for layer in layers:
            if layer not in Model.LAYERS_ALL:
                raise Model.ModelException(Model.LAYER_ERROR.format(layer))  # inferred
            self.data_layers[layer] = Layer(self.plans, Model.DEFAULT_EXPOSURE)
        self.timeslot = TimeSlotAvg.load(self.data_layers, self.plans)
        self.webhook_addresses = []

    ### Floorplans
    def pullFloors(self) -> dict:
        "Pull floorplans and build a Floor for each"
        floorplans = self.query_obj.pullFloorPlans()
        self.plans = {id: Floor(fp) for id, fp in floorplans.items()}
        return self.plans

    def getFloorplanSummary(self) -> dict:
        "Get a dict of floorplan names keyed by floorPlanId"
        return {k: v.floorplan.name for k, v in self.plans.items()}

    def findFloorplanByName(self, name) -> str:
        "Find a floorPlanId from the floor name"
        for k in self.plans:
            if self.plans[k].floorplan.name == name:  # body inferred
                return k

    def set_floor_bounds(self, floor_id: str, on: bool, blindspots=None, wallthreshold: float = None) -> None:
        # method name inferred
        if floor_id not in self.plans.keys():
            ...  # error path not recovered ('... floor: ', floor_id)
        floor = self.plans[floor_id]
        if on:
            floor.set_bounds_mask(blindspots, wallthreshold)
        else:
            floor.bm_boxes = blindspots
            floor.mask[:] = 1
        floor.mask_enabled = on
        self.update_layers()

    ### Layers
    def update_layers(self) -> None:
        """
        Must be called when a Floor is added, removed, or altered, including by
        set_bounds_mask. This will update the Overlay objects to reflect the
        change. May throw an error if data would be invalidated.
        """
        for layer in self.data_layers.values():
            layer.verify_and_update(self.plans)

    ### Access Points
    def getAPs(self) -> None:
        "Get APs and store internally in relevant floor objects"
        aps = self.query_obj.getAPs()  # query call inferred
        for mac, ap in aps.items():
            if ap.floorPlanId in self.plans.keys():
                self.plans[ap.floorPlanId].aps[mac] = ap

    ### Scanning API (SAPI)
    def provide_scanning(self, SAPI_packet: dict) -> None:
        if type(SAPI_packet) != dict:
            raise TypeError("JSON parsed a {}, expected a dict".format(str(type(SAPI_packet))))
        self.__validate_scanning(SAPI_packet)
        dest_layer = Model.get_type(SAPI_packet)
        observations = self.query_obj.extract_SAPI_observations(SAPI_packet)
        self.data_layers[dest_layer].set_observations(observations)

    def __validate_scanning(self, SAPI_packet: dict) -> None:
        try:
            source_net_id = SAPI_packet["data"]["networkId"]
            if SAPI_packet[SECRET_K] != self.secret:
                raise Model.BadRequest("Incorrect secret - rejecting data")
            if source_net_id != self.network_id:
                raise Model.BadRequest("Data is for the wrong network: expected {} got {}".format(self.network_id, source_net_id))
        except KeyError as ke:
            raise Model.BadRequest("Request is missing data: {}".format(ke))

    @staticmethod
    def get_type(SAPI_packet: dict) -> int:
        "Get the Model layer constant for a given SAPI packet"
        ...

    ### Camera and MVSense
    def getCameraImage(self, camera) -> dict:
        # returns a dictionary containing a link to the image
        response = self.query_obj.getCameraSnap(camera)
        return response

    def set_fov_coords(self, mac: str, coords) -> None:
        # method name inferred
        """
        Set FOV coords for a given camera (by mac). Coords should be iterable
        of shape (n,2). Coords pertain to sqm pixels on the internal datamap.
        Pass len(iterable)==0 ...
        """
        try:
            if len(coords) == 0 or False not in [len(coord) == 2 for coord in coords]:
                cam = self.query_obj.cameras[mac]
                shape = self.plans[cam.floorPlanId].overlay_dimensions
                cam.set_FOV(shape, coords)
            else:
                raise ValueError
        except (ValueError, TypeError) as err:
            raise err.__class__("Coordinates supplied of incorrect shape or type, should be iterable of shape (n,2)")

    def __generate_person_obs(self) -> dict:
        "Indexes observed person objects with an arbitrary key"
        obs = self.query_obj.get_camera_observations()
        # Zip [0,n) with n obs objects, converting to dictionary
        return dict(zip(range(len(obs)), obs))

    def pull_mvsense_data(self) -> None:
        "Pull MVSense observations and feed into the data layer"
        self.query_obj.updateCameraMVSenseData()
        observations = self.__generate_person_obs()
        self.data_layers[Model.LAYER_MVSENSE].set_observations(observations)

    def spike(self, layer, threshhold) -> dict:
        "Find the busiest cluster of cells and report whether it exceeds threshhold"
        # Sum cells into 3m^2 areas
        dims = ...  # cluster-grid dims not recovered
        clusters = np.zeros(dims, dtype="float32")
        for x in range(len(clusters)):
            for y in range(len(clusters[0])):
                clusters[x, y] = layer[3 * x:3 * (x + 1), 3 * y:3 * (y + 1)].sum()
        busiest = 0          # initialisation inferred
        busiest_location = None
        for x in range(len(clusters)):
            for y in range(len(clusters[0])):
                if clusters[x][y] > busiest:
                    busiest = clusters[x][y]
                    busiest_location = (3 * (x + 0.5), 3 * (y + 0.5))
        return {'spike': busiest > threshhold, 'location': busiest_location}

    def nearestCameras(self, n: int, floor: Floor, spikeDict: dict) -> tuple:
        # returns a list of camera objects
        # They call me the comprehension king
        # Maybe after we spend 30 minutes fixing it ;)
        event = spikeDict["location"]
        event_root = ...  # not recovered
        cameras = {cam for cam in self.query_obj.getCameras().values() if cam.floorPlanId == floor.floorplan.id}
        FOVcams = {cam for cam in cameras if cam.has_FOV()}
        nonFOVcams = cameras - FOVcams
        hasView = {cam for cam in FOVcams if cam.get_FOV()[event_root] == True}
        if len(hasView) > 0:
            return ("Covered", list(hasView))
        distances = dict()
        for cam in FOVcams:
            fov = cam.get_FOV()
            mindist = 9999999
            for x, row in enumerate(fov):
                for y, cell in enumerate(row):
                    if cell:
                        dist = np.hypot((x + 0.5) - event[0], (y + 0.5) - event[1])
                        if dist < mindist:  # comparison inferred
                            mindist = dist
            distances[cam] = mindist
        for cam in nonFOVcams:
            distances[cam] = np.hypot(...)
        top_n = [cam[0] for cam in sorted(distances.items(), key=lambda x: x[1])[:n]]
        return (..., top_n)  # status value not recovered

    def update(self) -> None:
        "Update non-webhook (non-SAPI) layers, write history"
        self.pull_mvsense_data()
        self.put_historical()
        # spike detect
        POST_data = {}
        for fpid, floor in self.plans.items():  # loop header inferred
            spikedict = self.spike(self.comp_historical(fpid), self.webhook_threshold)
            if spikedict['spike'] == True:
                idealality, cameras = self.nearestCameras(2, floor, spikedict)
                # need to get relevant ...
                POST_data[fpid] = {}  # payload structure partially recovered
                for i, camera in enumerate(cameras):
                    response = self.getCameraImage(camera)
                    POST_data[fpid]["camera_data_" + str(i)] = response
        if POST_data != {}:
            self.snapshotWebhook(POST_data)

    def addWebhookAddress(self, webhookAddress: str):
        self.webhook_addresses.append(webhookAddress)

    def snapshotWebhook(self, snapshot: dict):
        for address in self.webhook_addresses:
            response = requests.post(address, json=snapshot)
            print(response)

    ### Historical
    def put_historical(self) -> None:
        "Updates the average data for the current TimeSlotAvg object"
        self.update_timeslot()
        # Get the current timeslot object
        self.timeslot.update_avg_data(self.data_layers)

    def comp_historical(self, floorPlanId: str):
        "Get the relative busyness of a floorplan using all layers"
        self.update_timeslot()
        # Get the current timeslot object
        hist_fp_data = self.timeslot.get_floor_avgs(floorPlanId)
        collective = np.zeros(self.plans[floorPlanId].overlay_dimensions)
        for lid in self.data_layers.keys():
            mask_enabled = self.plans[floorPlanId].mask_enabled
            current = self.data_layers[lid].overlays[floorPlanId].get_delta(masked=mask_enabled)
            historical = hist_fp_data[lid].get_delta(masked=mask_enabled, exposure=1)
            collective += (current - historical)
        collective /= len(self.data_layers)
        return collective

    def update_timeslot(self):
        "Ensure self.timeslot is the slot for the current day/hour"
        ...

    def render_rel(self, floorPlanId: str) -> Image:
        # method name inferred
        "Get the current datamap in terms of absolute delta from mean"
        datamap = self.comp_historical(floorPlanId)
        return self.plans[floorPlanId].render_overlay(datamap)

    def render_abs(self, floorPlanId: str) -> Image:
        "Get latest frame of WiFi layer rendered on the floor plan"
        return self.plans[floorPlanId].render_overlay(
            self.data_layers[Model.LAYER_SNAP_WIFI].overlays[floorPlanId].get_delta(exposure=1))

    ### Configuration
    STORE_SELECTED = ...   # value not recovered
    STORE_LAYERS = ...     # value not recovered
    STORE_WEBHOOK = ...    # value not recovered
    STORE_FOVCOORDS = "fov_mask"
    STORE_BMBOXES = "bm_boxes"
    STORE_BDENABLED = "bd_enabled"
    STORE_SECRET = "sapisecret"
    STORE_TOKEN = "<PASSWORD>_token"
    STORE_PASSWORD = "<PASSWORD>"
    STORE_WHTHRESHOLD = "webhook_threshold"

    def update_model_config(self, netid, conf_dict):
        layers = conf_dict.get(Model.STORE_LAYERS, set())
        try:
            if netid != self.network_id:
                raise AttributeError
        except AttributeError:
            ...
        self.webhook_addresses = conf_dict.get(Model.STORE_WEBHOOK, list())
        self.password = conf_dict.get(Model.STORE_PASSWORD, Model.DEFAULT_PASSWORD)
        self.webhook_threshold = conf_dict.get(Model.STORE_WHTHRESHOLD, self.webhook_threshold)
        for mac, coords in conf_dict.get(Model.STORE_FOVCOORDS, dict()).items():
            ...

    def get_config_data(self) -> dict:
        # method name inferred
        conf = dict()
        conf[Model.STORE_SECRET] = self.secret
        conf[Model.STORE_TOKEN] = self.validator_token
        conf[Model.STORE_LAYERS] = set(self.data_layers.keys())
        conf[Model.STORE_WEBHOOK] = self.webhook_addresses
        conf[Model.STORE_WHTHRESHOLD] = self.webhook_threshold
        conf[Model.STORE_PASSWORD] = self.password
        conf[Model.STORE_FOVCOORDS] = {cam.mac: cam.get_fov_coords() for cam in self.query_obj.cameras.values()}
        conf[Model.STORE_BMBOXES] = {fpid: fp.bm_boxes for fpid, fp in self.plans.items()}
        conf[Model.STORE_BDENABLED] = {fpid: fp.mask_enabled for fpid, fp in self.plans.items()}
        return conf

    def write_config_data(self):
        config_data = {Model.STORE_SELECTED: self.network_id}
        config_data[self.network_id] = ...  # remainder not recovered
        ...

    def read_config_data(self):
        # method name inferred
        if os.path.isfile(Model.CONFIG_PATH):
            with open(Model.CONFIG_PATH, 'rb') as f:
                config_data = pickle.load(f)
            selected_id = ...  # remainder not recovered

    def populate(self, layers: set):
        assert isinstance(self.query_obj, APIQuery)
        self.network_id = self.query_obj.network_id
        self.plans = self.pullFloors()
        self.getAPs()
        ...
        self.write_config_data()


class Layer:
    "Holds one Overlay per floorplan for a single observation type"  # docstring tail recovered: '... type'

    def __init__(self, floorplans: dict, exposure: int):
        self.exposure = exposure
        self.overlays = {
            _id: Overlay(_id, floor.overlay_dimensions, floor.floorplan_dimensions, floor.mask, exposure)
            for _id, floor in floorplans.items()
        }

    def set_observations(self, observations: dict) -> None:
        # Bin observations by floorplan, then push a new frame to each overlay
        bins = {id: dict() for id in self.overlays.keys()}
        for id, placeable in observations.items():
            bins[placeable.floorPlanId][id] = placeable
        for fid, overlay in self.overlays.items():
            overlay.roll()
            obs = bins.get(fid)
            if obs != None:
                overlay.add(obs)

    def get_deltas(self, masked: bool = True, exposure: int = 0) -> dict:
        """
        Return the delta data for each overlay in the layer. If exposure 0
        (default), will provide all available exposures squashed.
        For more info, see Overlay.get_delta
        """
        return {_id: over.get_delta(masked, exposure) for _id, over in self.overlays.items()}

    def get_full(self, masked: bool = True, exposure: int = 0) -> dict:  # pragma: no cover
        """
        Return the full data for each overlay in the layer. If exposure 0
        (default), will provide all available exposures squashed.
        See Overlay.get_full
        """
        # Not covered as not required for current scope
        return {_id: over.get_full(masked, exposure) for _id, over in self.overlays.items()}

    def copy(self, flatten: bool = True):
        """
        Return a copy of the layer. Flatten squashes exposures into 1 frame.
        For more info, see Overlay.copy()
        """
        ly = Layer({}, 1 if flatten else self.exposure)
        ly.overlays = {_id: over.copy(flatten) for _id, over in self.overlays.items()}
        return ly

    def clear(self):
        "Clears all member overlays of observation data"
        for over in self.overlays.values():
            over.clear()

    def verify_and_update(self, floorplans: dict):
        """
        Ensure overlays are compatible with current floorplans (Floor objects).
        Throws ModelException in case of dimension mismatch.
        If a mask is outdated, update - note this does not affect existing
        data, only new observations.
        If an overlay is missing, print info, create new.
        If an overlay is extra, do nothing.
        """
        for fpid, fp in floorplans.items():
            if fpid not in self.overlays:  # condition inferred
                # A floorplan represented in the floorplans but not in the overlays
                self.overlays[fpid] = Overlay(fpid, fp.overlay_dimensions, fp.floorplan_dimensions, fp.mask, self.exposure)
        for fpid, floor in floorplans.items():
            self.overlays[fpid].verify_and_update(floor)


class Overlay:
    "Represents a data overlay of a single floorplan"

    def __init__(self, floorid: str, overlay_dimensions: tuple, real_dimensions: tuple, floormask: np.ndarray, exposure: int):
        self.floorid = floorid
        # self.observations = dict()
        self.overlay_dimensions = overlay_dimensions
        self.real_dimensions = real_dimensions
        self.mask = floormask
        assert exposure > 0
        self.exposure = exposure
        # Exposure queue, shape (exp,x,y)
        self.__unfixed_observations = np.zeros(exposure)
        self.__masked_dataoverlay = np.zeros((exposure,) + overlay_dimensions, dtype="float32")
        self.__unmasked_dataoverlay = np.zeros((exposure,) + overlay_dimensions, dtype="float32")

    def set(self, unfixed_count: np.ndarray, masked_overlay: np.ndarray, unmasked_overlay: np.ndarray):
        "Sets internal observation data. Not for general use, instead use Overlay.add"
        assert len(self.__unfixed_observations.shape) == len(unfixed_count.shape)
        assert len(self.__masked_dataoverlay.shape) == len(masked_overlay.shape)
        assert len(self.__unmasked_dataoverlay.shape) == len(unmasked_overlay.shape)
        self.__unfixed_observations = unfixed_count
        self.__masked_dataoverlay = masked_overlay
        self.__unmasked_dataoverlay = unmasked_overlay

    def roll(self) -> None:
        "Shift the exposure queue along to accept a new frame of data"
        self.__masked_dataoverlay = np.roll(self.__masked_dataoverlay, 1, axis=0)
        self.__unmasked_dataoverlay = np.roll(self.__unmasked_dataoverlay, 1, axis=0)
        self.__unfixed_observations = np.roll(self.__unfixed_observations, 1, axis=0)
        self.__masked_dataoverlay[0] = 0
        self.__unmasked_dataoverlay[0] = 0
        self.__unfixed_observations[0] = 0

    def clear(self) -> None:
        "Clear accumulated observation data including exposure"
        self.__unfixed_observations[:] = 0
        self.__masked_dataoverlay[:] = 0
        self.__unmasked_dataoverlay[:] = 0

    def copy(self, flatten: bool = False):
        """
        Return a copy of the overlay. If flatten, squash (mean) exposure
        window into 1 frame.
        """
        if flatten:
            cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, 1)
            cp.__unfixed_observations[0] = self.__unfixed_observations.mean(axis=0)
            cp.__masked_dataoverlay[0] = self.__masked_dataoverlay.mean(axis=0)
            cp.__unmasked_dataoverlay[0] = self.__unmasked_dataoverlay.mean(axis=0)
        else:
            cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, self.exposure)
            cp.__unfixed_observations = self.__unfixed_observations.copy()
            cp.__masked_dataoverlay = self.__masked_dataoverlay.copy()
            cp.__unmasked_dataoverlay = self.__unmasked_dataoverlay.copy()
        return cp

    def add(self, observations: dict) -> None:
        fixed = {}
        unfixed = {}
        for id, placeable in observations.items():
            if (placeable.x != None and placeable.y != None) or placeable.has_mask_override:
                fixed[id] = placeable
            else:
                unfixed[id] = placeable  # else branch inferred
        self.__add_fixed_locations(fixed)      # dispatch inferred
        self.__add_unfixed_locations(unfixed)

    def __add_fixed_locations(self, fixed: dict) -> None:
        # method name inferred
        for placeable in fixed.values():
            # Store both a copy of the m(asked)_possible locations and the
            # u(n)m(asked)_possible locations
            um_possible = np.zeros(self.overlay_dimensions, dtype=np.bool_)
            if placeable.has_mask_override:
                # Even if a mask is in place, mask override takes precedence
                um_possible = placeable.mask_override
            else:
                # Client location in x,y tuple, in m, with change of axis
                client_loc = np.array([self.real_dimensions[0] - placeable.y, placeable.x])
                for x in range(um_possible.shape[0]):
                    for y in range(um_possible.shape[1]):
                        # For each square, see if its centre is near enough
                        d = np.linalg.norm((np.array([x, y]) + 0.5) * Model.CELL_SIZE_M - client_loc)
                        um_possible[x, y] = d <= placeable.variance
            # m(asked)_possible is a copy of the u(n)m(asked)_possible's with
            # the floor mask applied
            m_possible = np.logical_and(um_possible, self.mask, dtype=np.bool_)
            # If we found any near enough to call near:
            if um_possible.any():  # guard inferred
                # Ignore floor mask
                self.__unmasked_dataoverlay[0][um_possible] += 1.0 / um_possible.sum()
            if m_possible.any():   # guard inferred
                # Include floor mask
                self.__masked_dataoverlay[0][m_possible] += 1.0 / m_possible.sum()

    def __add_unfixed_locations(self, unfixed: dict) -> None:
        ...  # body not recovered

    def get_delta(self, masked: bool = True, exposure: int = 0) -> np.ndarray:
        """
        Returns a copy of the delta overlay (only fixed observations).
        If masked, will select data masked at input, else unmasked data.
        Set exposure to specify mean smoothing on the last n frames of stored
        exposure; default (0) combines all available frames, 1 gives only the
        latest frame (no smoothing).
        """
        window = self.exposure if exposure == 0 else exposure
        if masked:
            return self.__masked_dataoverlay[:window].mean(axis=0)
        else:
            return self.__unmasked_dataoverlay[:window].mean(axis=0)

    def get_unfixed_observations(self, exposure: int = 0) -> float:
        """
        Return how many unfixed observations were passed.
        For more details on exposure, see Overlay.get_delta
        """
        window = self.exposure if exposure == 0 else exposure
        return self.__unfixed_observations[:window].mean(axis=0)

    def get_full(self, masked: bool = True, exposure: int = 0) -> np.ndarray:  # pragma: no cover
        """
        Returns a copy of the full client overlay (including distributed
        unfixed observations). For more details on exposure, see
        Overlay.get_delta
        """
        data = self.get_delta(masked, exposure)
        window = self.exposure if exposure == 0 else exposure
        # Distribute unfixed observations evenly across the usable area
        mask = self.mask if masked else np.ones(self.overlay_dimensions, dtype=np.bool_)  # selection inferred
        data[mask] += self.__unfixed_observations[:window].mean(axis=0) / mask.sum()
        return data

    def verify_and_update(self, floor: Floor) -> None:
        "Verifies overlay is compatible with passed floor, updates mask"
        if self.overlay_dimensions != floor.overlay_dimensions or self.real_dimensions != floor.floorplan_dimensions:
            raise Model.ModelException("Error: Overlay and Floorplan dimension mismatch for FPID={}".format(floor.floorplan.id))
        # Update mask in case of change
        # Note this does not change existing data, only new observations
        self.mask = floor.mask


class TimeSlotAvg:
    class TimeSlotAvgException(Exception):
        pass

    DATA_DIR = "historical_data"

    def __init__(self, data_layers: dict, day: int, hour: int):
        self.day = day
        self.hour = hour
        self.data_layers = dict()
        self.count = dict()
        for l_id, layer in data_layers.items():
            # Copy the layer structure, squashed flat
            self.data_layers[l_id] = layer.copy(flatten=True)
            self.data_layers[l_id].clear()
            # Set a count for each overlay in each layer stored
            self.count[l_id] = {o_id: 0 for o_id in layer.overlays.keys()}  # count init inferred

    @staticmethod
    def load(data_layers: dict, floors: dict, day: int = None, hour: int = None):
        "Static factory method; Load TimeSlotAvg object from compressed pickle file or create new"
        # TODO remove day hour params - used for unit tests
        if day == None or hour == None:
            day, hour = TimeSlotAvg.get_time()
        try:
            tsa = bz2.BZ2File(os.path.join(TimeSlotAvg.DATA_DIR, '{}_{}.pbz2'.format(day, hour)), 'rb')
            tsa = pickle.load(tsa)
        except FileNotFoundError:  # exception type and fallback inferred
            tsa = TimeSlotAvg(data_layers, day, hour)
        if __name__ != "__main__":
            assert isinstance(tsa, TimeSlotAvg)
        tsa.verify_and_update_struct(data_layers, floors)
        return tsa

    def is_current_time(self, debug=None) -> bool:
        "Returns True iff timeslot is for current time"
        if debug != None:
            return debug
        curr_day, curr_hour = TimeSlotAvg.get_time()
        return curr_day == self.day and curr_hour == self.hour

    def update_avg_data(self, current_data: dict, debug: bool = None) -> None:
        "Updates an average model for a timeslot using the current model data, if valid time"
        if self.is_current_time(debug):  # guard inferred
            # For each layer in the new data, update with the current model
            for l_key, layer in current_data.items():
                # Get the matching historic-average layer
                avg_layer = self.data_layers[l_key]
                # For each overlay in the respective new data layer
                for o_key, over in layer.overlays.items():
                    # Get masked and unmasked deltas, and unfixed count from
                    # new data; full available exposure by default
                    new_um_overlay = over.get_delta(masked=False)
                    new_m_overlay = over.get_delta(masked=True)
                    new_unfixed_obs = over.get_unfixed_observations()
                    avg_um_overlay = avg_layer.overlays[o_key].get_delta(masked=False, exposure=1)
                    avg_m_overlay = avg_layer.overlays[o_key].get_delta(masked=True, exposure=1)
                    avg_unfixed_obs = avg_layer.overlays[o_key].get_unfixed_observations()
                    # Count
                    c = self.count[l_key][o_key]
                    # update the average by adding current values to sum total
                    # and dividing by the new count
                    upd_um_overlay = (avg_um_overlay * c + new_um_overlay) / (c + 1)
                    upd_m_overlay = (avg_m_overlay * c + new_m_overlay) / (c + 1)
                    upd_unfixed_obs = (avg_unfixed_obs * c + new_unfixed_obs) / (c + 1)
                    # save the new data overlay to the timeslots model,
                    # promoting as exposure of historicals = 1
                    self.data_layers[l_key].overlays[o_key].set(
                        upd_unfixed_obs[None, ], upd_m_overlay[None, ], upd_um_overlay[None, ])
                    # Update the count
                    self.count[l_key][o_key] += 1
            # self.write()
        else:
            raise TimeSlotAvg.TimeSlotAvgException(f"Cannot ...")  # message truncated in source

    def write(self) -> None:
        "Write this object to its compressed pickle file"
        filepath = os.path.join(TimeSlotAvg.DATA_DIR, '{}_{}.pbz2'.format(self.day, self.hour))
        if not os.path.exists(TimeSlotAvg.DATA_DIR):
            os.makedirs(TimeSlotAvg.DATA_DIR)
        with bz2.BZ2File(filepath, 'wb') as f:
            pickle.dump(self, f)

    def get_floor_avgs(self, fpid: str) -> dict:
        "Return the flat average Overlay object indexed by each layer id"
        return {layer_id: layer.overlays[fpid] for layer_id, layer in self.data_layers.items()}

    @staticmethod
    def get_time() -> tuple:
        "Get the day/hour values needed for reading and writing data files"
        curr_dt = datetime.datetime.now(datetime.timezone(offset=datetime.timedelta(hours=0)))
        curr_day = curr_dt.weekday()
        curr_hour = curr_dt.hour
        return curr_day, curr_hour

    def verify_and_update_struct(self, data_layers: dict, floors: dict) -> None:
        """
        Ensure this TimeSlotAvg is compatible with the current Model.
        If layers or overlays are not represented in the TSA, those are
        created and infos are printed.
        Throws ModelException if dimensions do not match.
        """
        for l_id in set(data_layers.keys()).difference(self.data_layers.keys()):
            # For layers in data_layers not yet stored here
            ...
For more info, see Overlay.get_delta", "threshhold, 'location':busiest_location} def nearestCameras(self, n:int, floor:Floor,spikeDict:dict)->tuple: #returns a list of camera objects #", "1 frame to store average so flatten self.data_layers[l_id] = layer.copy(flatten=True) self.data_layers[l_id].clear() # Set", "mean gives ratio of elems 1 to elems total self.mask[x][y] = self.pixelmask[it:ib,il:ir].mean() <", "os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(self.day, self.hour)) if not os.path.exists(TimeSlotAvg.DATA_DIR): os.makedirs(TimeSlotAvg.DATA_DIR) with bz2.BZ2File(filepath, 'wb') as f: pickle.dump(self, f)", "= pickle.load(tsa) except FileNotFoundError: tsa = TimeSlotAvg( data_layers, day, hour ) tsa.verify_and_update_struct(data_layers, floors)", "for current scope return { _id : over.get_full(masked, exposure) for _id, over in", "Model.ModelException(\"Model not configured for LAYER_MVSENSE\") def pull_mvsense_data(self): \"Pull live MVSense data from cameras", "AttributeError except AttributeError: self.query_obj = APIQuery(netid) self.populate(layers) self.secret = conf_dict.get(Model.STORE_SECRET) self.validator_token = conf_dict.get(Model.STORE_TOKEN)", "= POS if pos else NEG imarr[ixs:ixe,iys:iye,3] = ( imarr[ixs:ixe,iys:iye,3].astype(np.float64) * alpha ).astype(np.uint8)", "onto the floorplan image in heatmap form. 
If pixelmask and image has bounds", "for spot in blindspots: bd.add_blindspot(*tuple(spot)) bd.run() return bd.generate_graphic() def render_overlay(self,overlay:np.ndarray,pixelmask:bool=True): \"\"\" Render overlay", "(n,4), got {}\".format(str(blindspots))) if wallthreshold == None: pass elif not isinstance(wallthreshold,(int,float)): raise TypeError(\"Invalid", "try: if len(coords) == 0 or False not in [ len(coord)==2 for coord", "bz2.BZ2File(os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(day, hour)), 'rb') tsa = pickle.load(tsa) except FileNotFoundError: tsa = TimeSlotAvg( data_layers, day,", "0 def copy(self,flatten:bool=False): \"\"\" Return a copy of the overlay. If flatten, squash", "Model.get_type(SAPI_packet) observations = self.query_obj.extract_SAPI_observations(SAPI_packet) self.data_layers[dest_layer].set_observations(observations) ### Camera and MVSense def setFOVs(self,mac:str,coords:set)->None: \"\"\" Set", "in observations.items(): bins[placeable.floorPlanId][id] = placeable for fid, overlay in self.overlays.items(): overlay.roll() obs =", "* alpha ).astype(np.uint8) imarr = Image.fromarray(imarr,\"RGBA\").filter( ImageFilter.BoxBlur( BLUR_CELLS * destination.size[0] / overlay.shape[1] )", "bz2 import pickle import datetime import requests import hashlib parentddir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir))", "collective def update_timeslot(self): \"Calls the factory if the current TimeSlotAvg object is not", "int: \"Get the Model layer constant for a given SAPI packet\" api_layer_val =", "* Model.CELL_SIZE_M - client_loc ) um_possible[x,y] = d <= placeable.variance # m(asked)_possible is", "= 0.35 destination = self.floorplan.get_image().convert(\"RGBA\") # Overlay scaling absmax = max(overlay.max(), overlay.min(), key=abs)", "cam in FOVcams: fov = cam.get_FOV() mindist = 9999999 for x,row in enumerate(fov):", "cam.get_FOV() mindist = 9999999 for x,row in enumerate(fov): for y, cell in enumerate(row):", 
"dict: return self.data_layers[layer].get_full(exposure=exposure) def render_delta(self,floorPlanId:str)->Image: \"Get the current datamap in terms of absolute", "is valid to update with the current model # For each layer in", "for layer in self.data_layers.values(): layer.verify_and_update(self.plans) ### Access Points def getAPs(self) -> None: \"Get", "np.zeros((destination.size[1],destination.size[0],4),dtype=\"uint8\") # Account for margin between edge of floorplan and overlay ix_points =", "in self.plans.items() } conf[Model.STORE_BDENABLED] = { fpid: fp.mask_enabled for fpid,fp in self.plans.items() }", "{}\".format(str(blindspots))) if wallthreshold == None: pass elif not isinstance(wallthreshold,(int,float)): raise TypeError(\"Invalid type for", "be on the given floor eg outside high floors. Blindspots should be a", "self.query_obj.extract_SAPI_observations(SAPI_packet) self.data_layers[dest_layer].set_observations(observations) ### Camera and MVSense def setFOVs(self,mac:str,coords:set)->None: \"\"\" Set the FOV coords", "self.__masked_dataoverlay[0] = 0 self.__unmasked_dataoverlay[0] = 0 self.__unfixed_observations[0] = 0 def clear(self) -> None:", "equate and historical data would be invalidated \"\"\" for layer in self.data_layers.values(): layer.verify_and_update(self.plans)", "= self.comp_historical(floorPlanId) return self.plans[floorPlanId].render_overlay(datamap) def render_abs(self,floorPlanId:str)->Image: \"Get latest frame of WiFi layer rendered", "fpid: fp.mask_enabled for fpid,fp in self.plans.items() } return conf def write_config_data(self): config_data =", "update_avg_data(self, current_data:dict, debug:bool=None) -> None: \"Updates an average model for a timeslot using", "def get_time()->tuple: \"Get the current time values needed for reading and writing data", "conf[Model.STORE_BDENABLED] = { fpid: fp.mask_enabled for fpid,fp in self.plans.items() } return conf def", "def get_delta(self, masked:bool=True, exposure:int=0)->np.ndarray: 
\"\"\" Returns a copy of the delta overlay (only", "ix_points[mx+1] for my in range(overlay.shape[1]): val = overlay[mx,my] if val==0: continue #Colour by", "\"Layer {} not defined. Use internally defined layer (eg Model.LAYER_*)\" class ModelException(Exception): pass", "model data\" def __init__(self,floorplan:FloorPlan): self.floorplan = floorplan # Dimentions should default to (height,", "def __generate_person_obs(self) -> dict: \"Indexes observed person objects with an arbitrary key\" obs", "flat average Overlay object indexed by each layer stored\" return { layer_id: layer.overlays[fpid]", "else: # Store parsed location in x,y tuple, in m, with change of", "in self.data_layers.items() } @staticmethod def get_time()->tuple: \"Get the current time values needed for", "> busiest: busiest = clusters[x][y] busiest_location = (3*(x+0.5), 3*(y+0.5)) return {'spike':busiest > threshhold,", "represented in TSA, those are created, infos are printed Throws ModelException if dimensions", "self.timeslot.verify_and_update_struct( self.data_layers, self.plans ) ### Providers def poll_layer(self,layer:int,exposure:int) -> dict: return self.data_layers[layer].get_full(exposure=exposure) def", "= avg_layer.overlays[o_key].get_delta(masked=True,exposure=1) avg_unfixed_obs = over.get_unfixed_observations() # Count c = self.count[l_key][o_key] # update the", "cell >> pixel #Mask (1msq-scale, small-dims) dims mx = self.mask.shape[0] my = self.mask.shape[1]", "x in range(len(clusters)): for y in range(len(clusters[0])): if clusters[x][y] > busiest: busiest =", "call me the comprehension king # Maybe after we spend 30 minutes fixing", "of a floorplan using all layers\" self.update_timeslot() # Get the current timeslot object", "for FPID={}\".format(floor.floorplan.id)) # Update mask in case of change # Note this does", "len(masked_overlay.shape) assert len(self.__unmasked_dataoverlay.shape) == len(unmasked_overlay.shape) self.__unfixed_observations = unfixed_count 
self.__masked_dataoverlay = masked_overlay self.__unmasked_dataoverlay =", "= conf_dict.get(Model.STORE_SECRET) self.validator_token = conf_dict.get(Model.STORE_TOKEN) self.webhook_addresses = conf_dict.get(Model.STORE_WEBHOOK,list()) self.password = conf_dict.get(Model.STORE_PASSWORD,Model.DEFAULT_PASSWORD) self.webhook_threshold =", "@staticmethod def load( data_layers:dict, floors:dict, day=None, hour=None ): \"Static factory method; Load TimeSlotAvg", "if clusters[x][y] > busiest: busiest = clusters[x][y] busiest_location = (3*(x+0.5), 3*(y+0.5)) return {'spike':busiest", "camera objects # They call me the comprehension king # Maybe after we", "overlay ix_points = np.floor(np.linspace( 0, imarr.shape[0] + self.margin_px[0], overlay.shape[0] + 1 )).astype(\"int32\") iy_points", "conf_dict.get(Model.STORE_BDENABLED,{fpid:False})[fpid] self.setBoundsMask(fpid, on, boxes) def serialize(self): conf = dict() conf[Model.STORE_SECRET] = self.secret conf[Model.STORE_TOKEN]", "STORE_LAYERS = \"layers\" STORE_FOVCOORDS = \"fov_coords\" STORE_FOVMASK = \"fov_mask\" STORE_BMBOXES = \"bm_boxes\" STORE_BDENABLED", "will mask final image to keep overlay heatmap within bounds. \"\"\" POS =", "in set(data_layers.keys()).difference(self.data_layers.keys()): # For layers in data_layers not in self self.data_layers[l_id] = data_layers[l_id].copy()", "in the layer. 
If exposure 0 (default), will provide all available exposures squashed.", "cam in FOVcams if cam.get_FOV()[event_root]==True } if len(hasView)>0: return (\"Covered\", list(hasView) ) distances", "a racket if theres something wrong self.__validate_scanning(SAPI_packet) dest_layer = Model.get_type(SAPI_packet) observations = self.query_obj.extract_SAPI_observations(SAPI_packet)", "iff timeslot is for current time\" if debug != None: return debug curr_day,", "details on exposure, see Overlay.get_delta \"\"\" window = self.exposure if exposure == 0", "self.setBoundsMask(fpid, on, boxes) def serialize(self): conf = dict() conf[Model.STORE_SECRET] = self.secret conf[Model.STORE_TOKEN] =", "exists, else set to 1 self.count[l_id] = { ov_id:self.count[l_id].get(ov_id,1) for ov_id in data_layers[l_id].overlays.keys()", "spot in blindspots: bd.add_blindspot(*tuple(spot)) bd.run() self.pixelmask = bd.getBoundaryMask() #Downsample from pixel level to", "to get relevant floor obj POST_data[fpid] = {\"type\" : \"SnapshotData\", \"is_ideal\" : idealality}", "and iterable, or of len 0 to unset try: if len(coords) == 0", "threshold=wallthreshold ) if blindspots != None: for spot in blindspots: bd.add_blindspot(*tuple(spot)) bd.run() self.pixelmask", "bd.run() return bd.generate_graphic() def render_overlay(self,overlay:np.ndarray,pixelmask:bool=True): \"\"\" Render overlay onto the floorplan image in", "the layer. If exposure 0 (default), will provide all available exposures squashed. 
For", "for o_key, over in layer.overlays.items(): # Get masked and unmasked deltas, and unfixed", "]: cam = self.query_obj.cameras[mac] shape = self.plans[cam.floorPlanId].overlay_dimensions cam.set_FOV(shape,coords) else: raise ValueError except (ValueError,", "1 #self.write() else: raise TimeSlotAvg.TimeSlotAvgException(f\"Cannot update with current model as it is not", "cam.x-event[0], cam.y-event[1] ) top_n = [ cam[0] for cam in sorted(distances.items(),key=lambda x: x[1])[:n]", "write_config_data(self): config_data = {Model.STORE_SELECTED:self.network_id} config_data[self.network_id] = self.serialize() with open( self.CONFIG_PATH, 'wb' ) as", "a copy of the full client overlay (including distributed unfixed observations) For exposure,", "mindist = 9999999 for x,row in enumerate(fov): for y, cell in enumerate(row): if", "dict: \"Pull floorplans from the network, construct blank floor layer for each\" floorplans", "tsa.verify_and_update_struct(data_layers, floors) tsa.write() else: if __name__!=\"__main__\": assert isinstance(tsa,TimeSlotAvg) tsa.verify_and_update_struct(data_layers, floors) return tsa def", "= over.get_delta(masked=False) new_m_overlay = over.get_delta(masked=True) new_unfixed_obs = over.get_unfixed_observations() # Similar from averages #", "SECRET_K = \"secret\" class Floor: \"Wrapper class for the floorplan object including model", "mask_enabled = self.plans[floorPlanId].mask_enabled current = self.data_layers[lid].overlays[floorPlanId].get_delta(masked=mask_enabled) historical = hist_fp_data[lid].get_delta(masked=mask_enabled,exposure=1) collective += (current -", "historical) collective /= len(self.data_layers) return collective def update_timeslot(self): \"Calls the factory if the", "only parameter passing bd = BoundaryDetector( self.floorplan.get_image(), threshold=wallthreshold ) if blindspots != None:", "be of shape (n,4), got {}\".format(str(blindspots))) if wallthreshold == None: pass elif not", "mask from BoundaryDetector for areas 
of the floor that people cannot possibly be", "= 0.5 VARIANCE_THRESHOLD = np.hypot( *( 2*(CELL_SIZE_M / 2,) ) ) # This", "in relevent floor objects\" aps = self.query_obj.pullAPs() for mac,ap in aps.items(): if ap.floorPlanId", "exposure, preparing for a new frame of data\" self.__masked_dataoverlay = np.roll(self.__masked_dataoverlay, 1, axis=0)", "\"Update model with SAPI data\" # Raise a racket if theres something wrong", "else: raise Model.ModelException(\"Model not configured for LAYER_MVSENSE\") def pull_mvsense_data(self): \"Pull live MVSense data", "FileNotFoundError: tsa = TimeSlotAvg( data_layers, day, hour ) tsa.verify_and_update_struct(data_layers, floors) tsa.write() else: if", "missing overlays layer.verify_and_update(floors) for l_id in data_layers.keys(): # Get count if exists, else", "self.query_obj = APIQuery(netid) self.populate(layers) self.secret = conf_dict.get(Model.STORE_SECRET) self.validator_token = conf_dict.get(Model.STORE_TOKEN) self.webhook_addresses = conf_dict.get(Model.STORE_WEBHOOK,list())", "(Floor objects). Throws ModelException in case of dimention mismatch. 
If mask is outdated,", "Set exposure to specify mean smoothing on the first n frames of stored", "set_bounds_mask(self,blindspots=None,wallthreshold:float=None) -> None: \"\"\"\" Generate a mask from BoundaryDetector for areas of the", "(0) combines all available frames, 1 gives only the latest frame (no smoothing).", "floor.mask, exposure) for _id,floor in floorplans.items() } def set_observations(self,observations:dict): \"Set the layer to", "floormask assert exposure > 0 self.exposure = exposure # Exposure queue, shape (exp,x,y)", "has bounds mask set, will mask final image to keep overlay heatmap within", "= BoundaryDetector( self.floorplan.get_image() ) else: # pragma: no cover # Not covered as", "a floorplan using all layers\" self.update_timeslot() # Get the current timeslot object hist_fp_data", "overlay, in metres and px self.margin_m = ( Model.CELL_SIZE_M - (self.floorplan_dimensions[0] % Model.CELL_SIZE_M),", "by adding current values to sum total and dividing by new count upd_um_overlay", "\"Get the current datamap in terms of absolute delta from mean\" datamap =", "over in self.overlays.items() } def copy(self,flatten:bool=True): \"\"\" Return a copy of the layer.", "masked, will select data masked at input, else unmasked data. 
Set exposure to", "distances[cam] = mindist for cam in nonFOVcams: distances[cam] = np.hypot( cam.x-event[0], cam.y-event[1] )", "#Mask (1msq-scale, small-dims) dims mx = self.mask.shape[0] my = self.mask.shape[1] #Image-scale pixel mask", "= [] def set_bounds_mask(self,blindspots=None,wallthreshold:float=None) -> None: \"\"\"\" Generate a mask from BoundaryDetector for", "masked else np.ones(self.overlay_dimensions,dtype=np.bool_) data[mask] += self.__unfixed_observations[:window].mean(axis=0) / mask.sum() return data def verify_and_update(self,floor:Floor)->None: \"Verifies", "(exposure,)+overlay_dimensions, dtype=\"float32\" ) def set(self, unfixed_count:np.ndarray, masked_overlay:np.ndarray, unmasked_overlay:np.ndarray): \"Sets internal observation data. Not", "str: \"Find a floorPlanId from the floor name\" for k in self.plans: if", "### Scanning API (SAPI) def __validate_scanning(self,SAPI_packet:dict) -> None: if type(SAPI_packet) != dict: raise", "setBoundsMask(self,floor_id:str,on:bool,blindspots=None,wallthreshold:float=None) -> None: \"Generate a mask from BoundaryDetector for areas that people cannot", "if obs != None: overlay.add(obs) def get_deltas(self, masked:bool=True, exposure:int=0) -> dict: \"\"\" Return", "client overlay (including distributed unfixed observations) For exposure, see Overlay.get_delta \"\"\" data =", "self.__unfixed_observations[0] += len(unfixed) def get_delta(self, masked:bool=True, exposure:int=0)->np.ndarray: \"\"\" Returns a copy of the", "self.update_timeslot() # Get the current timeslot object hist_fp_data = self.timeslot.get_floor_avgs(floorPlanId) collective = np.zeros(", "dimensions do not equate and historical data would be invalidated \"\"\" for layer", "self.mask = floormask assert exposure > 0 self.exposure = exposure # Exposure queue,", "if a floorplan mask is in place, mask override takes precident um_possible =", "(default), will provide all available exposures squashed. 
For more info, see Overlay.get_delta \"\"\"", "id, placeable in observations.items(): if ( placeable.x != None and placeable.y != None", "Overlay.add\" assert len(self.__unfixed_observations.shape) == len(unfixed_count.shape) assert len(self.__masked_dataoverlay.shape) == len(masked_overlay.shape) assert len(self.__unmasked_dataoverlay.shape) == len(unmasked_overlay.shape)", "pullFloors(self) -> dict: \"Pull floorplans from the network, construct blank floor layer for", "printed Throws ModelException if dimensions do not match \"\"\" for l_id in set(data_layers.keys()).difference(self.data_layers.keys()):", "Model. If layers or overlays are not represented in TSA, those are created,", "heatmap form. If pixelmask and image has bounds mask set, will mask final", "will provide all available exposures squashed. See Overlay.get_full \"\"\" # Not covered as", "err.__class__(\"Coordinates supplied of incorrect shape or type, should be iterable shape (n,2)\") else:", "only the latest frame (no smoothing). 
\"\"\" window = self.exposure if exposure ==", "# This means let x = cell_s_m / 2; V_T = sqrt(x^2+x^2) #", "bottom it = ix_divs[x] ib = ix_divs[x+1] for y in range(my): #Left, right", "ap.floorPlanId in self.plans.keys(): self.plans[ap.floorPlanId].aps[mac] = ap ### Scanning API (SAPI) def __validate_scanning(self,SAPI_packet:dict) ->", ")).astype(\"int32\") iy_points = np.floor(np.linspace( 0,imarr.shape[1] + self.margin_px[1], overlay.shape[1] + 1 )).astype(\"int32\") for mx", "only new observations self.mask == floor.mask class Model: LAYER_SNAP_WIFI = 1 LAYER_SNAP_BT =", "<reponame>calumcorrie/Meraki-Crowd-Interface<gh_stars>0 import os import sys import numpy as np from PIL import Image,", "raise Model.BadRequest(\"Request is missing data: \" + str(ke) ) if source_net_id != self.network_id:", "dims mx = self.mask.shape[0] my = self.mask.shape[1] #Image-scale pixel mask (mini-scale, big-dims) dims", "= self.query_obj.pullFloorPlans() self.plans = { id : Floor(fp) for id,fp in floorplans.items() }", "\"Represents an data overlay of a single floorplan\" def __init__(self, floorid:str, overlay_dimensions:tuple, real_dimensions:tuple,", "Points def getAPs(self) -> None: \"Get APs and store internally in relevent floor", "self.day = day self.hour = hour self.data_layers = dict() self.count = dict() for", "bad authentication secret - rejecting data\") except KeyError as ke: raise Model.BadRequest(\"Request is", "from compressed pickle file or create new\" #TODO remove day hour params -", "layer stored\" return { layer_id: layer.overlays[fpid] for layer_id, layer in self.data_layers.items() } @staticmethod", "for unit tests if day==None or hour==None: day, hour = TimeSlotAvg.get_time() try: tsa", "for l_key, layer in current_data.items(): # Get the respective average layer avg_layer =", "continue #Colour by sign, scale alpha by magnitude pos = val > 0", "with an arbitrary key\" obs = self.query_obj.get_camera_observations() # Zip [0,n) with n obs", "will provide all 
available exposures squashed. For more info, see Overlay.get_delta \"\"\" return", "(n,2). Coords pertain to sqm pixels on internal datamap. Pass len(iterable)==0 to unset", "if os.path.isfile(Model.CONFIG_PATH): with open( Model.CONFIG_PATH, 'rb' ) as f: config_data = pickle.load(f) selected_id", "update_model_config(self, netid, conf_dict): layers = conf_dict.get(Model.STORE_LAYERS,set()) try: if netid != self.network_id: raise AttributeError", "self.hour)) if not os.path.exists(TimeSlotAvg.DATA_DIR): os.makedirs(TimeSlotAvg.DATA_DIR) with bz2.BZ2File(filepath, 'wb') as f: pickle.dump(self, f) def", ";) event = spikeDict[\"location\"] event_root = tuple([ int(d) for d in event ])", "layer stored self.count[l_id] = { fpid:0 for fpid in layer.overlays.keys() } @staticmethod def", "threshold into params? dims = ( (len(layer)//3)+1, (len(layer[0])//3)+1 ) #splits floorplan into 3m^2", "of axis client_loc = np.array( [self.real_dimensions[0] - placeable.y, placeable.x ] ) for x", "floor.overlay_dimensions or self.real_dimensions != floor.floorplan_dimensions: raise Model.ModelException(\"Error: Overlay and Floorplan dimension mismatch for", "= { cam for cam in cameras if cam.has_FOV() } nonFOVcams = cameras", "Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, self.exposure) cp.__unfixed_observations = self.__unfixed_observations.copy() cp.__masked_dataoverlay = self.__masked_dataoverlay.copy() cp.__unmasked_dataoverlay =", "new overlay for FPID:{}\".format(fpid)) fp = floorplans[fpid] self.overlays[fpid] = Overlay(fpid, fp.overlay_dimensions, fp.floorplan_dimensions, fp.mask,", "needed for reading and writing data files\" curr_dt = datetime.datetime.now( datetime.timezone( offset=datetime.timedelta(hours=0) )", "constant for a given SAPI packet\" api_layer_val = APIQuery.get_SAPI_type(SAPI_packet) if api_layer_val == \"WiFi\":", "from new data # Get full available exposure by default new_um_overlay = over.get_delta(masked=False)", "if 
wallthreshold == None: pass elif not isinstance(wallthreshold,(int,float)): raise TypeError(\"Invalid type for wallthreshold", "\"\"\" POS = np.array([255,0,0,180],dtype=np.uint8) NEG = np.array([0,255,0,180],dtype=np.uint8) BLUR_CELLS = 0.35 destination = self.floorplan.get_image().convert(\"RGBA\")", "for _id, over in self.overlays.items() } def copy(self,flatten:bool=True): \"\"\" Return a copy of", "- note this does not effect existing data, only new observations If an", "BadRequest(Exception): pass def __init__(self,network_id:str=None,layers:set={}): \"Initialise model. API key must be defined in enviroment", "by default new_um_overlay = over.get_delta(masked=False) new_m_overlay = over.get_delta(masked=True) new_unfixed_obs = over.get_unfixed_observations() # Similar", "tsa def is_current_time(self, debug=None) -> bool: \"Returns True iff timeslot is for current", "this change May throw error if dimensions do not equate and historical data", "0.5) * Model.CELL_SIZE_M - client_loc ) um_possible[x,y] = d <= placeable.variance # m(asked)_possible", "for wallthreshold parameter. 
Should be of type int or float, got {}\".format(str(type(wallthreshold)))) if", "busyness of a floorplan using all layers\" self.update_timeslot() # Get the current timeslot", "to a compressed file\" filepath = os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(self.day, self.hour)) if not os.path.exists(TimeSlotAvg.DATA_DIR): os.makedirs(TimeSlotAvg.DATA_DIR) with", "see Overlay.get_delta \"\"\" window = self.exposure if exposure == 0 else exposure return", "(self.timeslot.is_current_time()): self.timeslot.write() self.timeslot = TimeSlotAvg.load( self.data_layers, self.plans ) self.timeslot.verify_and_update_struct( self.data_layers, self.plans ) ###", "for coord in coords ]: cam = self.query_obj.cameras[mac] shape = self.plans[cam.floorPlanId].overlay_dimensions cam.set_FOV(shape,coords) else:", "isinstance(self.pixelmask, np.ndarray) and pixelmask: # Tidy the edges # Make the mask an", "= np.zeros( (exposure,)+overlay_dimensions, dtype=\"float32\" ) def set(self, unfixed_count:np.ndarray, masked_overlay:np.ndarray, unmasked_overlay:np.ndarray): \"Sets internal observation", "Zip [0,n) with n obs objects, converting to dictionary return dict(zip(range(len(obs)),obs)) def provide_scanning(self,SAPI_packet:dict)", "of overlays covering all floorplans with data from a single source type\" def", "int or float, got {}\".format(str(type(wallthreshold)))) if floor_id not in self.plans.keys(): raise Model.ModelException(\"No such", "in data_layers[l_id].overlays.keys() } def sha256(inpt:str) -> str: m = hashlib.sha256() m.update(inpt.encode()) return m.hexdigest()", "self.spike(self.comp_historical(fpid), self.webhook_threshold) if spikedict['spike'] == True: idealality, cameras = self.nearestCameras(2, floor, spikedict) #need", "floor, spikedict) #need to get relevant floor obj POST_data[fpid] = {\"type\" : \"SnapshotData\",", "self.plans = self.pullFloors() self.getAPs() self.query_obj.pullCameras() self.data_layers = dict() self.webhook_threshold = 0.35 for 
layer", "Should be of type int or float, got {}\".format(str(type(wallthreshold)))) if floor_id not in", "over the other) # Account for margin between edge of floorplan and overlay", "debug curr_day, curr_hour = TimeSlotAvg.get_time() return curr_day == self.day and curr_hour == self.hour", "import BoundaryDetector # As per Scanning API v3 SECRET_K = \"secret\" class Floor:", "1 self.data_layers[l_key].overlays[o_key].set( upd_unfixed_obs[None,], upd_m_overlay[None,], upd_um_overlay[None,] ) # Update the count self.count[l_key][o_key] += 1", ": over.copy(flatten) for _id, over in self.overlays.items() } return ly def clear(self)->None: \"Clear", "from lib.BoundaryDetector import BoundaryDetector # As per Scanning API v3 SECRET_K = \"secret\"", "given SAPI packet\" api_layer_val = APIQuery.get_SAPI_type(SAPI_packet) if api_layer_val == \"WiFi\": return Model.LAYER_SNAP_WIFI elif", "right il = iy_divs[y] ir = iy_divs[y+1] # As array is binary, mean", "end of the floorplan and the data overlay, in metres and px self.margin_m", "um_possible, self.mask, dtype=np.bool_ ) # If theres a DIV0 here, the search hasn't", "1.0 / um_possible.sum() # Include floor mask self.__masked_dataoverlay[0][m_possible] += 1.0 / m_possible.sum() def", "print(\"Error: Could not filter by pixel mask as no mask exists\") destination.putalpha(255) return", "except APIQuery.APIException: raise Model.ModelException(\"Could not get network from config file\") class TimeSlotAvg: class", "floorPlanId from the floor name\" for k in self.plans: if self.plans[k].floorplan.name == name:", "x,y tuple, in m, with change of axis client_loc = np.array( [self.real_dimensions[0] -", "config_data = {Model.STORE_SELECTED:self.network_id} config_data[self.network_id] = self.serialize() with open( self.CONFIG_PATH, 'wb' ) as f:", "if ( placeable.x != None and placeable.y != None ) or placeable.has_mask_override: fixed[id]", "exposure:int=0)->np.ndarray: #pragma: no cover \"\"\" Returns a copy of the full client 
import os
import bz2
import pickle
import hashlib
import datetime

import numpy as np
import requests
from PIL import Image, ImageFilter

from lib.APIQuery import APIQuery
from lib.BoundaryDetector import BoundaryDetector

# As per Scanning API v3
SECRET_K = "secret"


class Floor:
    "Class representing a single floorplan: image, masks, APs and geometry"

    def __init__(self, floorplan):
        self.floorplan = floorplan
        ...
        # Margin (m, px) between the edge of the floorplan and the overlay grid
        self.margin_m = (
            Model.CELL_SIZE_M - (self.floorplan_dimensions[0] % Model.CELL_SIZE_M),
            Model.CELL_SIZE_M - (self.floorplan_dimensions[1] % Model.CELL_SIZE_M)
        )
        self.margin_px = (
            self.margin_m[0] * self.floorplan.px_per_m_h,
            self.margin_m[1] * ...
        )
        self.mask_enabled = False
        self.pixelmask = None
        self.mask = np.ones(self.overlay_dimensions, dtype=np.bool_)
        self.aps = {}
        self.bm_boxes = []

    def set_bounds_mask(self, blindspots=None, wallthreshold: float = None) -> None:
        """
        Generate a mask from BoundaryDetector for areas of the floor that people
        cannot possibly be or are to be ignored, eg outside high floors.
        Blindspots should be a Numpy array, tuple or nested list of the form
        [[x1,x2,y1,y2],...]
        """
        self.bm_boxes = blindspots
        if wallthreshold == None:
            bd = BoundaryDetector(self.floorplan.get_image())
        else:  # pragma: no cover
            # Not covered as only parameter passing
            bd = BoundaryDetector(self.floorplan.get_image(), threshold=wallthreshold)
        if blindspots != None:
            for spot in blindspots:
                bd.add_blindspot(*tuple(spot))
        bd.run()
        self.pixelmask = bd.getBoundaryMask()
        # Downsample from pixel level to mask level
        # Assuming cell >> pixel
        # Mask (1msq-scale, small-dims) vs image (mini-scale, big-dims) dims
        mx = self.mask.shape[0]
        my = self.mask.shape[1]
        ix = self.pixelmask.shape[0]
        iy = self.pixelmask.shape[1]
        # Image-scale chunk divisions
        ix_divs = np.floor(np.linspace(0, ix + self.margin_px[0], mx + 1)).astype("int32")
        iy_divs = np.floor(np.linspace(0, iy + self.margin_px[1], my + 1)).astype("int32")
        for x in range(mx):
            for y in range(my):
                # Top, bottom, left, right
                it = ix_divs[x]
                ib = ix_divs[x + 1]
                il = iy_divs[y]
                ir = iy_divs[y + 1]
                # As array is binary, mean gives ratio of elems 1 to elems total
                self.mask[x][y] = self.pixelmask[it:ib, il:ir].mean() < Model.DOWNSAMPLE_THRESHOLD

    def calc_bounds_mask(self, blindspots=None, wallthreshold: float = None) -> Image.Image:
        """
        Generate a preview of a bounds mask with given parameters.
        For more info see Floor.set_bounds_mask
        """
        if blindspots == None:
            pass
        elif not (isinstance(blindspots, (np.ndarray, list, tuple))):
            raise TypeError("Invalid type for blindspots parameter. Should be of type np.array, list or tuple")
        elif False in [len(spot) == 4 for spot in blindspots]:
            raise TypeError("Invalid shape for blindspots parameter, got {}".format(str(blindspots)))
        if wallthreshold == None:
            pass
        elif not isinstance(wallthreshold, (int, float)):
            raise TypeError("Invalid type for wallthreshold parameter")
        if wallthreshold == None:
            bd = BoundaryDetector(self.floorplan.get_image())
        else:  # pragma: no cover
            # Not covered as only parameter passing
            bd = BoundaryDetector(self.floorplan.get_image(), threshold=wallthreshold)
        if blindspots != None:
            for spot in blindspots:
                bd.add_blindspot(*tuple(spot))
        bd.run()
        return bd.generate_graphic()

    def render_overlay(self, overlay: np.ndarray, pixelmask: bool = True):
        """
        Render overlay onto the floorplan image in heatmap form.
        If pixelmask and image has bounds mask set, will mask the rendered overlay
        """
        POS = np.array([255, 0, 0, 180], dtype=np.uint8)
        NEG = np.array([0, 255, 0, 180], dtype=np.uint8)
        BLUR_CELLS = 0.35
        destination = self.floorplan.get_image().convert("RGBA")
        # Overlay scaling
        absmax = max(overlay.max(), overlay.min(), key=abs)
        # m_max, m_min = absmax, -absmax
        imarr = np.zeros((destination.size[1], destination.size[0], 4), dtype="uint8")
        # Account for margin between edge of floorplan and overlay
        ix_points = np.floor(np.linspace(
            0, imarr.shape[0] + self.margin_px[0], overlay.shape[0] + 1
        )).astype("int32")
        iy_points = np.floor(np.linspace(
            0, imarr.shape[1] + self.margin_px[1], overlay.shape[1] + 1
        )).astype("int32")
        for mx in range(overlay.shape[0]):
            for my in range(overlay.shape[1]):
                val = overlay[mx, my]
                if val == 0:
                    continue
                # Colour by sign, scale alpha by magnitude
                pos = val > 0
                alpha = abs(val / absmax)
                ixs = ix_points[mx]
                ixe = ix_points[mx + 1]
                iys = iy_points[my]
                iye = iy_points[my + 1]
                imarr[ixs:ixe, iys:iye] = POS if pos else NEG
                imarr[ixs:ixe, iys:iye, 3] = (
                    imarr[ixs:ixe, iys:iye, 3].astype(np.float64) * alpha
                ).astype(np.uint8)
        imarr = Image.fromarray(imarr, "RGBA").filter(
            ImageFilter.BoxBlur(BLUR_CELLS * destination.size[0] / overlay.shape[1])
        )
        if isinstance(self.pixelmask, np.ndarray) and pixelmask:
            # Tidy the edges
            # Make the mask an Image, mode=L
            mask = Image.fromarray(255 * self.pixelmask.astype(np.uint8), "L")
            # Paste alpha on masked region
            imarr.paste((0, 0, 0, 0), mask)
        elif pixelmask:
            print("Error: Could not filter by pixel mask")
        ...
        return destination


class Overlay:
    "Class representing a data overlay of a single floorplan"

    def __init__(self, floorid: str, overlay_dimensions: tuple, real_dimensions: tuple,
                 floormask: np.ndarray, exposure: int):
        self.floorid = floorid
        # self.observations = dict()
        self.overlay_dimensions = overlay_dimensions
        self.real_dimensions = real_dimensions
        self.mask = floormask
        assert exposure > 0
        self.exposure = exposure
        self.__unfixed_observations = np.zeros((exposure,), dtype="float32")
        self.__masked_dataoverlay = np.zeros((exposure,) + overlay_dimensions, dtype="float32")
        self.__unmasked_dataoverlay = np.zeros((exposure,) + overlay_dimensions, dtype="float32")

    def set(self, unfixed_count: np.ndarray, masked_overlay: np.ndarray, unmasked_overlay: np.ndarray):
        "Sets internal observation data. Not for general use, instead use Overlay.add"
        assert len(self.__unfixed_observations.shape) == len(unfixed_count.shape)
        self.__unfixed_observations = unfixed_count
        self.__masked_dataoverlay = masked_overlay
        self.__unmasked_dataoverlay = unmasked_overlay

    def copy(self, flatten: bool = False):
        """
        Return a copy of the overlay. If flatten, squash (mean) exposure window
        into 1 frame.
        """
        if flatten:
            cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, 1)
            cp.__unfixed_observations[0] = self.__unfixed_observations.mean(axis=0)
            cp.__masked_dataoverlay[0] = self.__masked_dataoverlay.mean(axis=0)
            cp.__unmasked_dataoverlay[0] = self.__unmasked_dataoverlay.mean(axis=0)
        else:
            cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions,
                         self.mask, self.exposure)
            cp.__unfixed_observations = self.__unfixed_observations.copy()
            cp.__masked_dataoverlay = self.__masked_dataoverlay.copy()
            cp.__unmasked_dataoverlay = self.__unmasked_dataoverlay.copy()
        return cp

    def add(self, observations: dict) -> None:
        "Roll the exposure window for a new frame of data and add the observations"
        self.__masked_dataoverlay = np.roll(self.__masked_dataoverlay, 1, axis=0)
        self.__unmasked_dataoverlay = np.roll(self.__unmasked_dataoverlay, 1, axis=0)
        self.__unfixed_observations = np.roll(self.__unfixed_observations, 1, axis=0)
        ...
        fixed = {}
        unfixed = {}
        for id, placeable in observations.items():
            if (placeable.x != None) or placeable.has_mask_override:
                fixed[id] = placeable
            else:
                unfixed[id] = placeable
        self.__add_fixed_locations(fixed)
        self.__add_unfixed_locations(unfixed)

    def __add_fixed_locations(self, fixed: dict) -> None:
        for placeable in fixed.values():
            # We store both a copy of the m(asked)_possible locations and the
            # um (unmasked) possible locations
            um_possible = np.zeros(self.overlay_dimensions, dtype=np.bool_)
            if placeable.has_mask_override:
                # override takes precedent
                um_possible = placeable.mask_override
                m_possible = placeable.mask_override
            else:
                min_reach = ...  # = sqrt(x^2+x^2)
                # I wish I was kidding
                # This is 0.707 iff cell_s_m = 1
                if placeable.variance < min_reach:
                    # Variance below minimum reach
                    # Account for change of axis
                    um_possible[-int(placeable.y / Model.CELL_SIZE_M),
                                int(placeable.x / Model.CELL_SIZE_M)] = 1
                else:
                    # Store parsed location in x,y tuple, in m, with change of axis
                    client_loc = np.array([self.real_dimensions[0] - placeable.y, placeable.x])
                    for x in range(um_possible.shape[0]):
                        for y in range(um_possible.shape[1]):
                            # For each square, see if its centre is close enough
                            # to be within variance metres
                            d = np.linalg.norm(
                                (np.array([x, y]) + 0.5) * Model.CELL_SIZE_M - client_loc)
                            um_possible[x, y] = d <= placeable.variance
                m_possible = np.array(um_possible & self.mask, dtype=np.bool_)
            # If there's a DIV0 here, the search hasn't found any near enough to call near
            # Ignore floor mask
            self.__unmasked_dataoverlay[0][um_possible] += 1.0 / um_possible.sum()
            # Include floor mask
            self.__masked_dataoverlay[0][m_possible] += 1.0 / m_possible.sum()

    def __add_unfixed_locations(self, unfixed: dict) -> None:
        self.__unfixed_observations[0] += len(unfixed)

    def get_delta(self, masked: bool = True, exposure: int = 0) -> np.ndarray:
        """
        Returns a copy of the delta overlay (only fixed observations).
        Mean smoothing on the first n frames of stored exposure, default (0)
        combines all available frames, 1 gives only the latest frame (no smoothing).
        """
        window = self.exposure if exposure == 0 else exposure
        if masked:
            return self.__masked_dataoverlay[:window].mean(axis=0)
        else:
            return self.__unmasked_dataoverlay[:window].mean(axis=0)

    def get_full(self, masked: bool = True, exposure: int = 0) -> np.ndarray:  # pragma: no cover
        """
        Returns a copy of the full client overlay (including distributed unfixed
        observations). For exposure, see Overlay.get_delta
        """
        window = self.exposure if exposure == 0 else exposure
        data = self.get_delta(masked, exposure)
        # Distribute unfixed observations evenly across the floorplan (or mask)
        mask = self.mask if masked else np.ones(self.overlay_dimensions, dtype=np.bool_)
        data[mask] += self.__unfixed_observations[:window].mean(axis=0) / mask.sum()
        return data

    def verify_and_update(self, floor: Floor) -> None:
        "Verifies overlay is compatible with passed floor, in case of dimension mismatch"
        if self.real_dimensions != floor.floorplan_dimensions:
            raise Model.ModelException(
                "Error: Overlay and Floorplan dimension mismatch for FPID={}".format(floor.floorplan.id))
        # Update mask - note this does not change existing data, only new observations
        self.mask = floor.mask

    def clear(self) -> None:
        "Clear all observation data including exposure"
        self.__unfixed_observations[:] = 0
        self.__masked_dataoverlay[:] = 0
        self.__unmasked_dataoverlay[:] = 0


class Layer:
    """
    Class representing a data layer: a series of overlays covering all floorplans
    with data from a single source type
    """

    def __init__(self, floorplans: dict, exposure: int):
        self.exposure = exposure
        self.overlays = {
            _id: Overlay(_id, floor.overlay_dimensions, floor.floorplan_dimensions,
                         floor.mask, exposure)
            for _id, floor in floorplans.items()
        }

    def set_observations(self, observations: dict):
        "Set layer to contain passed observations, clearing any previous. Pass floor object dictionary"
        bins = {id: dict() for id in self.overlays.keys()}
        for id, placeable in observations.items():
            bins[placeable.floorPlanId][id] = placeable
        for fid, overlay in self.overlays.items():
            ...

    def get_delta(self, masked: bool = True, exposure: int = 0) -> dict:
        """
        Return the delta overlays by floorplan ID.
        For more info, see Overlay.get_delta
        """
        return {_id: over.get_delta(masked, exposure) for _id, over in self.overlays.items()}

    def get_full(self, masked: bool = True, exposure: int = 0) -> dict:  # pragma: no cover
        """
        Return the full overlays by floorplan ID, all available exposures squashed.
        See Overlay.get_full
        """
        # Not covered as not required ...
        return {_id: over.get_full(masked, exposure) for _id, over in self.overlays.items()}

    def copy(self, flatten: bool = True):
        """
        Return a copy of this layer.
        For more info, see Overlay.copy()
        """
        ly = Layer({}, 1 if flatten else self.exposure)
        ly.overlays = {_id: over.copy(flatten) for _id, over in self.overlays.items()}
        return ly

    def clear(self) -> None:
        "Clear all member overlays of observation data"
        for over in self.overlays.values():
            over.clear()

    def verify_and_update(self, floorplans: dict):
        """
        Ensure overlays are compatible with current floorplans (Floor objects).
        Throws ModelException in case of dimension mismatch.
        If mask is outdated, update - note this does not effect existing data,
        only new observations.
        If an overlay is missing, print info, create new.
        If an overlay is extra, do nothing
        """
        for fpid in set(floorplans).difference(self.overlays.keys()):
            # A floorplan represented in the floorplans but not in the overlays
            print("Info: Creating new overlay for FPID={}".format(fpid))
            fp = floorplans[fpid]
            self.overlays[fpid] = Overlay(fpid, fp.overlay_dimensions,
                                          fp.floorplan_dimensions, fp.mask, self.exposure)
        for fpid, floor in floorplans.items():
            self.overlays[fpid].verify_and_update(floor)


class Model:

    LAYER_SNAP_WIFI = 1
    LAYER_SNAP_BT = 2
    LAYER_MVSENSE = 3
    LAYERS_ALL = {LAYER_SNAP_WIFI, LAYER_SNAP_BT, LAYER_MVSENSE}
    CONFIG_PATH = os.path.join('model.conf')
    CELL_SIZE_M = 1
    DOWNSAMPLE_THRESHOLD = ...

    class ModelException(Exception):
        pass

    class BadRequest(Exception):
        pass

    def __init__(self):
        # Expects the API key in environment variable "MERAKI_DASHBOARD_API_KEY"
        ...
        self.read_config_data()
        self.write_config_data()

    def populate(self, layers: set):
        assert isinstance(self.query_obj, APIQuery)
        self.network_id = ...
        self.data_layers = dict()
        self.webhook_threshold = 0.35
        for layer in layers:
            if layer not in Model.LAYERS_ALL:
                raise Model.ModelException(
                    "Layer {} not defined. Use internally defined layer (eg Model.LAYER_*)".format(layer))
            ...
        self.webhook_addresses = []

    ### Floorplans

    def pullFloors(self) -> dict:
        "Pull floorplans from the network, construct blank floor layer for each"
        floorplans = self.query_obj.pullFloorPlans()
        self.plans = {id: Floor(fp) for id, fp in floorplans.items()}
        return self.plans

    def getFloorplanSummary(self) -> dict:
        "Get a dict of retrieved floor plan IDs and names"
        return {k: self.plans[k].floorplan.name for k in self.plans}

    def getFloorplanIDFromName(self, name: str):
        "Get a floor plan ID from the floor name"
        for k in self.plans:
            if self.plans[k].floorplan.name == name:
                return k
        return None

    def setBoundsMask(self, floor_id: str, on: bool, blindspots=None,
                      wallthreshold: float = None) -> None:
        "Generate a mask from BoundaryDetector for areas that people cannot possibly be"
        if floor_id not in self.plans:
            raise Model.ModelException("No such floor: ", floor_id)
        floor = self.plans[floor_id]
        if on:
            floor.set_bounds_mask(blindspots, wallthreshold)
        else:
            floor.bm_boxes = ...
        floor.mask_enabled = on
        self.update_layers()

    ### Layers

    def update_layers(self) -> None:
        """
        Must be called when a Floor is added, removed, or altered, including by
        set_bounds_mask. This will update the Overlay objects to reflect this change.
        May throw error if dimensions do not equate and historical data would be
        invalidated
        """
        ...

    ### Access Points

    def getAPs(self) -> None:
        "Get APs and store internally in relevant floor objects"
        aps = self.query_obj.pullAPs()
        for mac, ap in aps.items():
            if ap.floorPlanId in self.plans.keys():
                self.plans[ap.floorPlanId].aps[mac] = ap

    ### Scanning API (SAPI)

    def __validate_scanning(self, SAPI_packet) -> None:
        if not isinstance(SAPI_packet, dict):
            raise Model.BadRequest(
                "Request body is not a dict".format(str(type(SAPI_packet))))
        try:
            source_net_id = SAPI_packet["data"]["networkId"]
            if SAPI_packet[SECRET_K] != self.secret:
                raise Model.BadRequest("Request has bad authentication secret - rejecting data")
        except KeyError as ke:
            raise Model.BadRequest("Request is missing data: " + str(ke))
        if source_net_id != self.network_id:
            raise Model.BadRequest("Request has data from wrong network: expected {} got {}".format(
                self.network_id, source_net_id))

    @staticmethod
    def get_type(SAPI_packet: dict) -> int:
        ...

    def accept_scanning_data(self, SAPI_packet: dict) -> None:
        "Validate and ingest a Scanning API packet; raise a racket if there's something wrong"
        self.__validate_scanning(SAPI_packet)
        dest_layer = Model.get_type(SAPI_packet)
        observations = self.query_obj.extract_SAPI_observations(SAPI_packet)
        self.data_layers[dest_layer].set_observations(observations)

    ### Camera and MVSense

    def setFOVs(self, mac: str, coords: set) -> None:
        """
        Set the FOV coords from given (camera mac). Coords should be iterable of
        shape (n,2). Coords pertain to sqm pixels on internal datamap.
        Pass len(iterable)==0 to unset mask
        """
        # Check if Layer exists
        if Model.LAYER_MVSENSE in self.data_layers.keys():
            # Check camera with mac exists
            if mac in self.query_obj.getCameras().keys():
                # Check coords of correct shape and iterable, or of len 0 to unset
                try:
                    if len(coords) == 0 or False not in [len(coord) == 2 for coord in coords]:
                        cam = self.query_obj.cameras[mac]
                        shape = self.plans[cam.floorPlanId].overlay_dimensions
                        cam.set_FOV(shape, coords)
                    else:
                        raise TypeError
                except TypeError:
                    raise Model.ModelException("Coords must be iterable of shape (n,2)")
            else:
                raise Model.ModelException("Camera with mac {} not found".format(mac))
        else:
            raise Model.ModelException("Model not configured for LAYER_MVSENSE")

    def __generate_person_obs(self) -> dict:
        ...

    def getCameraImage(self, cam):
        ...

    def pull_mvsense_data(self):
        "Pull live MVSense data from cameras and feed into data layer"
        self.query_obj.updateCameraMVSenseData()
        observations = self.__generate_person_obs()
        self.data_layers[Model.LAYER_MVSENSE].set_observations(observations)

    ### Providers

    def poll_layer(self, layer: int, exposure: int) -> dict:
        return self.data_layers[layer].get_full(exposure=exposure)

    def render_delta(self, floorPlanId: str) -> Image:
        "Get the current datamap in terms of absolute delta from mean"
        datamap = self.comp_historical(floorPlanId)
        return self.plans[floorPlanId].render_overlay(datamap)

    def render_abs(self, floorPlanId: str) -> Image:
        "Get latest frame of WiFi layer ..."
        ...
        n = (datetime.datetime.now().second / 60) * len(testarr)
        testarr[:int(n)] = 1
        return dm.render_overlay(testarr.reshape(dims))

    def update(self) -> None:
        "Update non-webhook (non-SAPI) layers, write history"
        self.pull_mvsense_data()
        self.put_historical()
        # spike detect
        POST_data = {}
        for fpid, floor in self.plans.items():
            spikedict = self.spike(..., self.webhook_threshold)
            if spikedict['spike'] == True:
                idealality, cameras = self.nearestCameras(2, floor, spikedict)
                # need to get relevant floor obj
                POST_data[fpid] = {"type": "SnapshotData", "is_ideal": idealality}
                for i, cam in enumerate(cameras):
                    response = self.getCameraImage(cam)
                    POST_data[fpid]["camera_data_" + str(i)] = response
        if POST_data != {}:
            self.snapshotWebhook(POST_data)

    def spike(self, layer, threshhold) -> dict:
        # add floorplan ID into params, camera/wifi/bluetooth/all into params? threshold into params?
        dims = ((len(layer) // 3) + 1, (len(layer[0]) // 3) + 1)
        # splits floorplan into 3m^2 areas
        clusters = np.zeros(dims, dtype="float32")
        for x in range(len(clusters)):
            for y in range(len(clusters[0])):
                clusters[x, y] = layer[3 * x:3 * (x + 1), 3 * y:3 * (y + 1)].sum()
        busiest = 0
        busiest_location = None
        for x in range(len(clusters)):
            for y in range(len(clusters[0])):
                if clusters[x][y] > busiest:
                    busiest = clusters[x][y]
                    busiest_location = (3 * (x + 0.5), 3 * (y + 0.5))
        return {'spike': busiest > threshhold, 'location': busiest_location}

    def nearestCameras(self, n: int, floor: Floor, spikeDict: dict) -> tuple:
        # returns a list of camera objects
        # They call me the comprehension king
        event = spikeDict['location']
        event_root = tuple([int(d) for d in event])
        cameras = {cam for cam in self.query_obj.getCameras().values()
                   if cam.floorPlanId == floor.floorplan.id}
        FOVcams = {cam for cam in cameras if cam.has_FOV()}
        nonFOVcams = cameras - FOVcams
        hasView = {cam for cam in FOVcams if cam.get_FOV()[event_root] == True}
        if len(hasView) > 0:
            return ("Covered", list(hasView))
        distances = dict()
        for cam in FOVcams:
            fov = cam.get_FOV()
            mindist = np.inf
            for x in range(fov.shape[0]):
                for y in range(fov.shape[1]):
                    if fov[x, y]:
                        dist = np.hypot((x + 0.5) - event[0], (y + 0.5) - event[1])
                        if dist < mindist:
                            mindist = dist
            distances[cam] = mindist
        for cam in nonFOVcams:
            ...
        top_n = [cam[0] for cam in sorted(distances.items(), key=lambda x: x[1])[:n]]
        return ("Best Effort", top_n)

    def snapshotWebhook(self, snapshot):
        for address in self.webhook_addresses:
            response = requests.post(address, json=snapshot)
            print(response)

    def addWebhookAddress(self, webhookAddress: str):
        self.webhook_addresses.append(webhookAddress)

    ### Historical

    def update_timeslot(self):
        "Calls the factory if the current TimeSlotAvg object is not current"
        if not (self.timeslot.is_current_time()):
            self.timeslot.write()
            self.timeslot = TimeSlotAvg.factory(self.data_layers, self.plans)
            self.timeslot.verify_and_update_struct(self.data_layers, self.plans)

    def put_historical(self):
        "Updates the average data for the current TimeSlotAvg object"
        self.update_timeslot()  # Get the current timeslot object
        self.timeslot.update_avg_data(self.data_layers)

    def comp_historical(self, floorPlanId: str):
        "Get the relative busyness of a floorplan using all layers"
        self.update_timeslot()  # Get the current timeslot object
        hist_fp_data = self.timeslot.get_flat_average(floorPlanId)
        collective = np.zeros(self.plans[floorPlanId].overlay_dimensions, dtype="float32")
        for lid in self.data_layers.keys():
            mask_enabled = self.plans[floorPlanId].mask_enabled
            current = self.data_layers[lid].overlays[floorPlanId].get_delta(masked=mask_enabled)
            historical = hist_fp_data[lid].get_delta(masked=mask_enabled, exposure=1)
            collective += (current - historical)
        collective /= len(self.data_layers)
        return collective

    ### Configuration

    STORE_WEBHOOK = "webhooklist"
    STORE_SELECTED = "selectednet"
    STORE_LAYERS = "layers"
    STORE_FOVCOORDS = "fov_coords"
    STORE_FOVMASK = "fov_mask"
    STORE_BMBOXES = "bm_boxes"
    STORE_BDENABLED = "bd_enabled"
    STORE_TOKEN = "<PASSWORD>_token"
    STORE_PASSWORD = "<PASSWORD>"
    STORE_WHTHRESHOLD = "webhook_threshold"

    def update_model_config(self, netid, conf_dict):
        layers = conf_dict.get(Model.STORE_LAYERS, set())
        try:
            if netid != self.network_id:
                raise AttributeError
        except AttributeError:
            self.query_obj = APIQuery(netid)
        ...

    def read_config_data(self):
        if os.path.isfile(Model.CONFIG_PATH):
            with open(Model.CONFIG_PATH, 'rb') as f:
                config_data = pickle.load(f)
            selected_id = config_data[Model.STORE_SELECTED]
            select_data = config_data[selected_id]
            self.update_model_config(selected_id, select_data)
        else:
            print("Warning: config file not found")
            try:
                self.update_model_config(None, {Model.STORE_LAYERS: Model.LAYERS_ALL})
            except APIQuery.APIException:
                raise Model.ModelException("Could not get network data")

    def write_config_data(self):
        conf = {}
        ...
        conf[Model.STORE_PASSWORD] = self.password
        conf[Model.STORE_FOVCOORDS] = {cam.mac: cam.get_fov_coords()
                                       for cam in self.query_obj.cameras.values()}
        conf[Model.STORE_BMBOXES] = {fpid: fp.bm_boxes for fpid, fp in self.plans.items()}
        conf[Model.STORE_BDENABLED] = {fpid: fp.mask_enabled for fpid, fp in self.plans.items()}
        ...


class TimeSlotAvg:

    DATA_DIR = ...

    class TimeSlotAvgException(Exception):
        pass

    @staticmethod
    def factory(data_layers: dict, floors: dict, day=None, hour=None):
        "Load a TimeSlotAvg object from a compressed file or create new"
        # TODO remove day hour params - used for unit tests
        if day == None:
            day, hour = TimeSlotAvg.get_time()
        try:
            tsa = bz2.BZ2File(os.path.join(TimeSlotAvg.DATA_DIR,
                                           '{}_{}.pbz2'.format(day, hour)), 'rb')
            tsa = pickle.load(tsa)
        except FileNotFoundError:
            tsa = TimeSlotAvg(data_layers, day, hour)
            tsa.verify_and_update_struct(data_layers, floors)
            tsa.write()
        else:
            if __name__ != "__main__":
                assert isinstance(tsa, TimeSlotAvg)
            tsa.verify_and_update_struct(data_layers, floors)
        return tsa

    def __init__(self, data_layers: dict, day: int, hour: int):
        self.day = day
        self.hour = hour
        self.data_layers = dict()
        self.count = dict()
        for l_id, layer in data_layers.items():
            # Copy the layer structure but clear the transient observation data
            self.data_layers[l_id] = layer.copy(flatten=True)
            self.data_layers[l_id].clear()
            # Set a count for each overlay in each layer stored
            self.count[l_id] = {fpid: 0 for fpid in layer.overlays.keys()}

    def get_flat_average(self, fpid: str) -> dict:
        "Get the flat average Overlay object indexed by each layer stored"
        return {layer_id: layer.overlays[fpid] for layer_id, layer in self.data_layers.items()}

    def is_current_time(self, debug=None) -> bool:
        "Returns True iff timeslot is for current time"
        if debug != None:
            return debug
        curr_day, curr_hour = TimeSlotAvg.get_time()
        return curr_day == self.day and curr_hour == self.hour

    def update_avg_data(self, current_data: dict, debug: bool = None) -> None:
        "Updates an average model for a timeslot using the current model data, if valid time"
        if not self.is_current_time(debug):
            raise TimeSlotAvg.TimeSlotAvgException(
                f"Cannot update with current model as it is not currently day:{self.day}, hour:{self.hour}")
        # Safe to update with the current model
        # For each layer in the new data
        for l_key, layer in current_data.items():
            # Get the respective averaged layer
            avg_layer = self.data_layers[l_key]
            # For each overlay in the respective new data layer
            for o_key, over in layer.overlays.items():
                c = self.count[l_key][o_key]
                new_unfixed_obs, new_m_overlay, new_um_overlay = ...
                avg_unfixed_obs, avg_m_overlay, avg_um_overlay = ...
                # update the average by adding current values to sum total and
                # dividing by new count
                upd_um_overlay = (avg_um_overlay * c + new_um_overlay) / (c + 1)
                upd_m_overlay = (avg_m_overlay * c + new_m_overlay) / (c + 1)
                upd_unfixed_obs = (avg_unfixed_obs * c + new_unfixed_obs) / (c + 1)
                # save the new data overlay to the timeslots model, promoting as
                # exposure of historicals = 1
                avg_layer.overlays[o_key].set(
                    upd_unfixed_obs[None, ], upd_m_overlay[None, ], upd_um_overlay[None, ])
                self.count[l_key][o_key] = c + 1

    @staticmethod
    def get_time() -> tuple:
        "Get the current day and hour, used for reading and writing data files"
        curr_dt = datetime.datetime.now(datetime.timezone(offset=datetime.timedelta(hours=0)))
        curr_day = ...
        curr_hour = curr_dt.hour
        return curr_day, curr_hour

    def verify_and_update_struct(self, data_layers: dict, floors: dict) -> None:
        """
        Verifies that the data in the TimeSlotAvg is compatible with the current
        model. If a layer or overlay is missing it is created, infos are printed.
        Throws ModelException if dimensions do not match
        """
        for l_id in set(data_layers.keys()).difference(self.data_layers.keys()):
            # For layers in data_layers not in self
            self.data_layers[l_id] = data_layers[l_id].copy()
            self.count[l_id] = dict()
            print("Info: Layer implicitly created for Layer ID {}".format(l_id))
        for l_id, layer in self.data_layers.items():
            # Add any missing overlays
            layer.verify_and_update(floors)
        for l_id in data_layers.keys():
            # Add any missing counts
            self.count[l_id] = {ov_id: self.count[l_id].get(ov_id, 1)
                                for ov_id in data_layers[l_id].overlays.keys()}

    def write(self):
        "Save TimeSlotAvg object to a compressed file"
        with bz2.BZ2File(os.path.join(TimeSlotAvg.DATA_DIR,
                                      '{}_{}.pbz2'.format(self.day, self.hour)), 'wb') as f:
            pickle.dump(self, f)


def sha256(inpt: str) -> str:
    return hashlib.sha256(inpt.encode()).hexdigest()
Should be of type np.array, list", "upd_m_overlay[None,], upd_um_overlay[None,] ) # Update the count self.count[l_key][o_key] += 1 #self.write() else: raise", "of data\" self.__masked_dataoverlay = np.roll(self.__masked_dataoverlay, 1, axis=0) self.__unmasked_dataoverlay = np.roll(self.__unmasked_dataoverlay, 1, axis=0) self.__unfixed_observations", "webhookAddress:str): self.webhook_addresses.append(webhookAddress) ### Historical def put_historical(self) -> None: \"Updates the average data for", "Model layer constant for a given SAPI packet\" api_layer_val = APIQuery.get_SAPI_type(SAPI_packet) if api_layer_val", "if placeable.variance < Model.VARIANCE_THRESHOLD: # Calculated minimum reach # Account for change of", "datetime dm = self.plans[fpid] dims = dm.overlay_dimensions testarr = np.zeros(dims).ravel() n = (datetime.datetime.now().second", "range(mx): #Top, bottom it = ix_divs[x] ib = ix_divs[x+1] for y in range(my):", "+ self.margin_px[1], overlay.shape[1] + 1 )).astype(\"int32\") for mx in range(overlay.shape[0]): # Top, bottom", "(len(layer[0])//3)+1 ) #splits floorplan into 3m^2 areas clusters = np.zeros(dims, dtype=\"float32\") for x", "hour self.data_layers = dict() self.count = dict() for l_id, layer in data_layers.items(): #", "blindspots: bd.add_blindspot(*tuple(spot)) bd.run() self.pixelmask = bd.getBoundaryMask() #Downsample from pixel level to mask level", "in self.overlays.items(): overlay.roll() obs = bins.get(fid) if obs != None: overlay.add(obs) def get_deltas(self,", "api_layer_val == \"WiFi\": return Model.LAYER_SNAP_WIFI elif api_layer_val == \"Bluetooth\": return Model.LAYER_SNAP_BT else: raise", "np.zeros(dims).ravel() n = (datetime.datetime.now().second / 60) * len(testarr) testarr[:int(n)] = 1 return dm.render_overlay(testarr.reshape(dims))", "open( Model.CONFIG_PATH, 'rb' ) as f: config_data = pickle.load(f) selected_id = config_data[Model.STORE_SELECTED] select_data", "is compatible with the current Model. 
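`Floor.set_bounds_mask` downsamples the pixel-level boundary mask onto the coarse cell grid by slicing it along `np.linspace` chunk divisions and thresholding the per-chunk mean. A minimal standalone sketch of the same technique (the arrays and sizes here are illustrative, not from the project):

```python
import numpy as np

# Sketch of the pixel-mask downsampling used in Floor.set_bounds_mask:
# chunk the pixel mask with linspace divisions and keep a cell when fewer
# than half of its pixels are boundary (threshold 0.5, as in the source).
pixelmask = np.zeros((8, 8), dtype=bool)
pixelmask[:, 4:] = True          # right half of the image is out of bounds

mx, my = 2, 2                    # cell-grid dimensions
mask = np.ones((mx, my), dtype=bool)
ix_divs = np.floor(np.linspace(0, pixelmask.shape[0], mx + 1)).astype("int32")
iy_divs = np.floor(np.linspace(0, pixelmask.shape[1], my + 1)).astype("int32")
for x in range(mx):
    for y in range(my):
        chunk = pixelmask[ix_divs[x]:ix_divs[x + 1], iy_divs[y]:iy_divs[y + 1]]
        # Binary array, so mean gives the ratio of boundary pixels
        mask[x, y] = chunk.mean() < 0.5

print(mask)   # left column walkable (True), right column masked out (False)
```

Because the divisions come from `np.linspace`, the chunks stay evenly sized even when the pixel dimensions are not an exact multiple of the cell grid.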
If layers or overlays are not represented", "cell in enumerate(row): if cell: dist = np.hypot( (x+0.5)-event[0], (y+0.5)-event[1] ) if dist", "spend 30 minutes fixing it ;) event = spikeDict[\"location\"] event_root = tuple([ int(d)", "mac, coords ) for fpid, boxes in conf_dict.get(Model.STORE_BMBOXES, dict()).items(): on = conf_dict.get(Model.STORE_BDENABLED,{fpid:False})[fpid] self.setBoundsMask(fpid,", "!= None: return debug curr_day, curr_hour = TimeSlotAvg.get_time() return curr_day == self.day and", "# Get count if exists, else set to 1 self.count[l_id] = { ov_id:self.count[l_id].get(ov_id,1)", "all available exposures squashed. See Overlay.get_full \"\"\" # Not covered as not required", "self.__masked_dataoverlay[0][m_possible] += 1.0 / m_possible.sum() def __add_unfixed_locations(self,unfixed:dict) -> None: self.__unfixed_observations[0] += len(unfixed) def", "overlay in each layer stored self.count[l_id] = { fpid:0 for fpid in layer.overlays.keys()", "np.array([255,0,0,180],dtype=np.uint8) NEG = np.array([0,255,0,180],dtype=np.uint8) BLUR_CELLS = 0.35 destination = self.floorplan.get_image().convert(\"RGBA\") # Overlay scaling", "by sign, scale alpha by magnitude pos = val > 0 alpha =", "data overlay, in metres and px self.margin_m = ( Model.CELL_SIZE_M - (self.floorplan_dimensions[0] %", "# Dimentions should default to (height, width) self.floorplan_dimensions = (self.floorplan.height,self.floorplan.width) self.overlay_dimensions = (", "else unmasked data. Set exposure to specify mean smoothing on the first n", "]: raise ValueError(\"Invalid format for blindspots parameter. 
Should be of shape (n,4), got", "type, should be iterable shape (n,2)\") else: raise Model.ModelException(\"Camera with mac {} not", "None: \"Generate a mask from BoundaryDetector for areas that people cannot possibly be", "\"\"\"\" Generate a mask from BoundaryDetector for areas of the floor that people", "AttributeError: self.query_obj = APIQuery(netid) self.populate(layers) self.secret = conf_dict.get(Model.STORE_SECRET) self.validator_token = conf_dict.get(Model.STORE_TOKEN) self.webhook_addresses =", "STORE_FOVCOORDS = \"fov_coords\" STORE_FOVMASK = \"fov_mask\" STORE_BMBOXES = \"bm_boxes\" STORE_BDENABLED = \"bd_enabled\" STORE_SECRET", "ModelException(Exception): pass class BadRequest(Exception): pass def __init__(self,network_id:str=None,layers:set={}): \"Initialise model. API key must be", "self.exposure ) ly.overlays = { _id : over.copy(flatten) for _id, over in self.overlays.items()", "self.__unfixed_observations.mean(axis=0) cp.__masked_dataoverlay[0] = self.__masked_dataoverlay.mean(axis=0) cp.__unmasked_dataoverlay[0] = self.__unmasked_dataoverlay.mean(axis=0) else: cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions,", "the u(n)m(asked)_possible's with the floor mask applied m_possible = np.logical_and( um_possible, self.mask, dtype=np.bool_", "data_layers, day, hour ) tsa.verify_and_update_struct(data_layers, floors) tsa.write() else: if __name__!=\"__main__\": assert isinstance(tsa,TimeSlotAvg) tsa.verify_and_update_struct(data_layers,", "in data_layers not in self self.data_layers[l_id] = data_layers[l_id].copy() self.count[l_id] = dict() print(\"Info: Layer", "dims ix = self.pixelmask.shape[0] iy = self.pixelmask.shape[1] #Image-scale chunk divisions (what coords do", "cam.get_FOV()[event_root]==True } if len(hasView)>0: return (\"Covered\", list(hasView) ) distances = dict() for cam", "as f: config_data = pickle.load(f) selected_id = config_data[Model.STORE_SELECTED] select_data = config_data[selected_id] 
self.update_model_config(selected_id,select_data) else:", "or self.real_dimensions != floor.floorplan_dimensions: raise Model.ModelException(\"Error: Overlay and Floorplan dimension mismatch for FPID={}\".format(floor.floorplan.id))", "layer. Flatten squashes exposures into 1 frame For more info, see Overlay.copy() \"\"\"", "+= self.__unfixed_observations[:window].mean(axis=0) / mask.sum() return data def verify_and_update(self,floor:Floor)->None: \"Verifies overlay is compatible with", "frame (no smoothing). \"\"\" window = self.exposure if exposure == 0 else exposure", "objects # They call me the comprehension king # Maybe after we spend", "unfixed observations evenly across the floorplan (or mask) mask = self.mask if masked", "sorted(distances.items(),key=lambda x: x[1])[:n] ] return (\"Best Effort\", top_n) def getCameraImage(self, camera) -> dict:", "None: \"Update model with SAPI data\" # Raise a racket if theres something", "pixelmask: print(\"Error: Could not filter by pixel mask as no mask exists\") destination.putalpha(255)", "conf_dict.get(Model.STORE_BMBOXES, dict()).items(): on = conf_dict.get(Model.STORE_BDENABLED,{fpid:False})[fpid] self.setBoundsMask(fpid, on, boxes) def serialize(self): conf = dict()", "= ( Model.CELL_SIZE_M - (self.floorplan_dimensions[0] % Model.CELL_SIZE_M), Model.CELL_SIZE_M - (self.floorplan_dimensions[1] % Model.CELL_SIZE_M) )", "self.timeslot = TimeSlotAvg.load(self.data_layers,self.plans) self.webhook_addresses = [] ### Floorplans def pullFloors(self) -> dict: \"Pull", "-> bool: \"Returns True iff timeslot is for current time\" if debug !=", "= np.roll(self.__masked_dataoverlay, 1, axis=0) self.__unmasked_dataoverlay = np.roll(self.__unmasked_dataoverlay, 1, axis=0) self.__unfixed_observations = np.roll(self.__unfixed_observations, 1,", "um_possible = np.zeros(self.overlay_dimensions, dtype=np.bool_) if placeable.has_mask_override: # Even if a floorplan mask is", "expected {} got {}\".format(self.network_id,source_net_id)) 
class Overlay:
    "Accumulated observation data for a single floorplan"

    def __init__(self, floorid: str, overlay_dimensions: tuple, real_dimensions: tuple,
                 floormask: np.ndarray, exposure: int):
        self.floorid = floorid
        self.overlay_dimensions = overlay_dimensions
        self.real_dimensions = real_dimensions
        self.mask = floormask
        assert exposure > 0
        self.exposure = exposure
        # Exposure queue, shape (exp,x,y)
        self.__unfixed_observations = np.zeros(exposure)
        self.__masked_dataoverlay = np.zeros((exposure,) + overlay_dimensions)
        self.__unmasked_dataoverlay = np.zeros((exposure,) + overlay_dimensions)

    def get_delta(self, masked: bool = True, exposure: int = 0) -> np.ndarray:
        """
        Get the accumulated observation data: masked else unmasked data.
        Set exposure to specify mean smoothing on the first n frames of stored
        exposure; 1 gives only the latest frame (no smoothing).
        """
        window = self.exposure if exposure == 0 else exposure
        if masked:
            return self.__masked_dataoverlay[:window].mean(axis=0)
        else:
            return self.__unmasked_dataoverlay[:window].mean(axis=0)

    def get_unfixed_observations(self, exposure: int = 0) -> float:
        """
        Return how many unfixed observations were passed.
        For more details on exposure, see Overlay.get_delta
        """
        window = self.exposure if exposure == 0 else exposure
        return self.__unfixed_observations[:window].mean(axis=0)

    def get_full(self, masked: bool = True, exposure: int = 0) -> np.ndarray:  # pragma: no cover
        """
        Returns a copy of the full client overlay (including distributed
        unfixed observations). Not covered as not required for current scope.
        """
        data = self.get_delta(masked, exposure)
        window = self.exposure if exposure == 0 else exposure
        # Distribute unfixed observations evenly across the floorplan (or mask)
        mask = self.mask if masked else np.ones(self.overlay_dimensions, dtype=np.bool_)
        data[mask] += self.__unfixed_observations[:window].mean(axis=0) / mask.sum()
        return data

    def roll(self) -> None:
        "Roll the exposure, preparing for a new frame of data"
        self.__masked_dataoverlay = np.roll(self.__masked_dataoverlay, 1, axis=0)
        self.__unmasked_dataoverlay = np.roll(self.__unmasked_dataoverlay, 1, axis=0)
        self.__unfixed_observations = np.roll(self.__unfixed_observations, 1, axis=0)
        self.__masked_dataoverlay[0] = 0
        self.__unmasked_dataoverlay[0] = 0
        self.__unfixed_observations[0] = 0

    def clear(self) -> None:
        "Clear accumulated observation data including exposure"
        self.__unfixed_observations[:] = 0
        self.__masked_dataoverlay[:] = 0
        self.__unmasked_dataoverlay[:] = 0

    def copy(self, flatten: bool = False):
        """
        Return a copy of the overlay.
        If flatten, squash (mean) exposure window into 1 frame.
        """
        if flatten:
            cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions,
                         self.mask, 1)
            cp.__unfixed_observations[0] = self.__unfixed_observations.mean(axis=0)
            cp.__masked_dataoverlay[0] = self.__masked_dataoverlay.mean(axis=0)
            cp.__unmasked_dataoverlay[0] = self.__unmasked_dataoverlay.mean(axis=0)
        else:
            cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions,
                         self.mask, self.exposure)
            cp.__unfixed_observations = self.__unfixed_observations.copy()
            cp.__masked_dataoverlay = self.__masked_dataoverlay.copy()
            cp.__unmasked_dataoverlay = self.__unmasked_dataoverlay.copy()
        return cp

    def set_frames(self, unfixed_count: np.ndarray, masked_overlay: np.ndarray,
                   unmasked_overlay: np.ndarray) -> None:
        "Replace the stored exposure arrays wholesale (method name reconstructed)"
        assert len(self.__unfixed_observations.shape) == len(unfixed_count.shape)
        assert len(self.__masked_dataoverlay.shape) == len(masked_overlay.shape)
        assert len(self.__unmasked_dataoverlay.shape) == len(unmasked_overlay.shape)
        self.__unfixed_observations = unfixed_count
        self.__masked_dataoverlay = masked_overlay
        self.__unmasked_dataoverlay = unmasked_overlay

    def add(self, observations: dict) -> None:
        fixed = {}
        unfixed = {}
        for id, placeable in observations.items():
            if (placeable.x != None and placeable.y != None) or placeable.has_mask_override:
                fixed[id] = placeable
            else:
                unfixed[id] = placeable
        self.__add_fixed_locations(fixed)
        self.__add_unfixed_locations(unfixed)

    def __add_fixed_locations(self, fixed: dict) -> None:
        for placeable in fixed.values():
            # Calculate the m(asked)_possible locations and the
            # u(n)m(asked)_possible locations
            um_possible = np.zeros(self.overlay_dimensions, dtype=np.bool_)
            if placeable.has_mask_override:
                # Even if a floorplan mask is in place, mask override takes precedent
                um_possible = placeable.mask_override
                m_possible = placeable.mask_override
            else:
                if placeable.variance < Model.VARIANCE_THRESHOLD:
                    # Calculated minimum reach
                    # Account for change of axis
                    um_possible[-int(placeable.y / Model.CELL_SIZE_M),
                                int(placeable.x / Model.CELL_SIZE_M)] = 1
                else:
                    # Store parsed location in x,y tuple, in m, with change of axis
                    client_loc = np.array([self.real_dimensions[0] - placeable.y,
                                           placeable.x])
                    # Mark every cell within the reported variance of client_loc
                    # as possible (the original search was lost in extraction;
                    # this is a reconstruction)
                    gx, gy = np.ogrid[:self.overlay_dimensions[0], :self.overlay_dimensions[1]]
                    dist = np.hypot((gx + 0.5) * Model.CELL_SIZE_M - client_loc[0],
                                    (gy + 0.5) * Model.CELL_SIZE_M - client_loc[1])
                    um_possible = dist <= placeable.variance
                # m_possible is a copy of um_possible with the floor mask applied
                m_possible = np.logical_and(um_possible, self.mask, dtype=np.bool_)
            # If there's a DIV0 here, the search hasn't found any possible cells
            self.__unmasked_dataoverlay[0][um_possible] += 1.0 / um_possible.sum()
            self.__masked_dataoverlay[0][m_possible] += 1.0 / m_possible.sum()

    def __add_unfixed_locations(self, unfixed: dict) -> None:
        self.__unfixed_observations[0] += len(unfixed)

    def verify_and_update(self, floor: 'Floor') -> None:
        "Verifies overlay is compatible with passed floor, updates mask"
        if self.overlay_dimensions != floor.overlay_dimensions \
                or self.real_dimensions != floor.floorplan_dimensions:
            raise Model.ModelException(
                "Error: Overlay and Floorplan dimension mismatch for FPID={}".format(
                    floor.floorplan.id))
        # If mask is outdated, update - note this does not affect existing
        # data, only new observations
        self.mask = floor.mask  # was a no-op comparison (==) in the original
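`Overlay` keeps its history as a fixed-length exposure queue: `np.roll` pushes every frame one slot deeper, slot 0 is blanked for new data, and smoothing is just a mean over the first `window` frames. A self-contained sketch of that technique (names here are illustrative, not from the project):

```python
import numpy as np

# Minimal sketch of the exposure-window technique used by Overlay:
# a fixed-length queue of frames, rolled so index 0 is always "now".
exposure = 3
frames = np.zeros((exposure, 2, 2))

def roll(frames):
    # Shift every frame one slot deeper and blank the newest slot
    frames = np.roll(frames, 1, axis=0)
    frames[0] = 0
    return frames

def delta(frames, window=0):
    # Mean over the first `window` frames; 0 means the whole queue
    window = frames.shape[0] if window == 0 else window
    return frames[:window].mean(axis=0)

frames[0] += 1          # one observation lands in the current frame
frames = roll(frames)   # start a new frame
frames[0] += 3
print(delta(frames, window=1)[0, 0])  # latest frame only -> 3.0
print(delta(frames)[0, 0])            # smoothed over 3 frames -> (3+1+0)/3
```

Rolling instead of reallocating keeps the queue a single contiguous array, so `get_delta`-style smoothing stays one vectorised `mean` call.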
class Model:
    LAYER_SNAP_WIFI = 1
    LAYER_SNAP_BT = 2
    LAYER_MVSENSE = 3
    LAYERS_ALL = {LAYER_SNAP_WIFI, LAYER_SNAP_BT, LAYER_MVSENSE}
    CONFIG_PATH = "config.pkl"  # placeholder; original value lost in extraction
    CELL_SIZE_M = 1
    DOWNSAMPLE_THRESHOLD = 0.5
    VARIANCE_THRESHOLD = np.hypot(*(2 * (CELL_SIZE_M / 2,)))  # = sqrt(x^2+x^2)
    # I wish I was kidding
    # This is 0.707 iff cell_s_m = 1
    DEFAULT_EXPOSURE = 3
    DEFAULT_PASSWORD = '<PASSWORD>'
    __BAD_LAYER = "Layer {} not defined. Use internally defined layer (eg Model.LAYER_*)"

    class ModelException(Exception):
        pass

    class BadRequest(Exception):
        pass

    def __init__(self, network_id: str = None, layers: set = {}):
        "Initialise model. API key must be available to the APIQuery module."
        self.network_id = network_id
        if network_id is None:
            # No network supplied: fall back to the stored configuration
            self.read_config_data()
        else:
            self.query_obj = APIQuery(network_id)
            self.populate(layers)

    def populate(self, layers: set) -> None:
        "Build floors, APs, cameras and data layers for the selected network"
        self.plans = self.pullFloors()
        self.getAPs()
        self.query_obj.pullCameras()
        self.data_layers = dict()
        self.webhook_threshold = 0.35
        for layer in layers:
            if layer not in Model.LAYERS_ALL:
                raise Model.ModelException(Model.__BAD_LAYER.format(layer))
            self.data_layers[layer] = Layer(self.plans, Model.DEFAULT_EXPOSURE)
        self.timeslot = TimeSlotAvg.load(self.data_layers, self.plans)
        self.webhook_addresses = []

    ### Floorplans
    def pullFloors(self) -> dict:
        "Pull all floorplans from the network, construct blank floor layer for each"
        floorplans = self.query_obj.pullFloorPlans()
        self.plans = {id: Floor(fp) for id, fp in floorplans.items()}
        return self.plans

    def findFloorplanByName(self, name) -> str:
        "Find a floorPlanId from the floor name"
        for k in self.plans:
            if self.plans[k].floorplan.name == name:
                return k
        return None

    def setBoundsMask(self, floor_id: str, on: bool, blindspots=None,
                      wallthreshold: float = None) -> None:
        "Enable/disable and (re)build the bounds mask for a floor"
        if floor_id not in self.plans.keys():
            raise Model.ModelException("No such floor: ", floor_id)
        floor = self.plans[floor_id]
        if on:
            floor.set_bounds_mask(blindspots, wallthreshold)
        else:
            floor.bm_boxes = blindspots
            floor.mask[:] = 1
        floor.mask_enabled = on

    def verify_and_update(self) -> None:
        "Propagate floorplan changes; stale overlays will be invalidated"
        for layer in self.data_layers.values():
            layer.verify_and_update(self.plans)

    ### Access Points
    def getAPs(self) -> None:
        "Pull APs and attach each to the Floor it sits on"
        aps = self.query_obj.pullAPs()  # (call name reconstructed)
        for mac, ap in aps.items():
            if ap.floorPlanId in self.plans.keys():
                self.plans[ap.floorPlanId].aps[mac] = ap

    ### Scanning API (SAPI)
    def __validate_scanning(self, SAPI_packet: dict) -> None:
        if type(SAPI_packet) != dict:
            raise TypeError("JSON parsed a {}, expected a dict".format(str(type(SAPI_packet))))
        try:
            source_net_id = SAPI_packet["data"]["networkId"]
        except KeyError as ke:
            raise Model.BadRequest("Request is missing data: " + str(ke))
        if source_net_id != self.network_id:
            raise Model.BadRequest("Request has data from wrong network: "
                                   "expected {} got {}".format(self.network_id, source_net_id))

    def get_type(SAPI_packet: dict) -> int:
        "Get the Model layer constant for a given SAPI packet"
        api_layer_val = APIQuery.get_SAPI_type(SAPI_packet)
        if api_layer_val == "WiFi":
            return Model.LAYER_SNAP_WIFI
        elif api_layer_val == "Bluetooth":
            return Model.LAYER_SNAP_BT
        else:
            raise Model.ModelException(Model.__BAD_LAYER.format(api_layer_val))

    def put_scanning(self, SAPI_packet: dict) -> None:
        "Update model with SAPI data"
        # Raise a racket if there's something wrong
        self.__validate_scanning(SAPI_packet)
        dest_layer = Model.get_type(SAPI_packet)
        observations = self.query_obj.extract_SAPI_observations(SAPI_packet)
        self.data_layers[dest_layer].set_observations(observations)
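The Scanning-API handler validates each pushed packet before touching any layer: type-check the parsed JSON, then confirm it names the expected network. The shape of those checks can be sketched standalone (the field names come from the source; the exception classes are simplified stand-ins for `Model.BadRequest`):

```python
# Sketch of the validation pattern in Model.__validate_scanning:
# reject non-dict payloads, missing fields, and wrong-network data.
def validate_scanning(packet, network_id):
    if type(packet) != dict:
        raise TypeError("JSON parsed a {}, expected a dict".format(type(packet)))
    try:
        source_net_id = packet["data"]["networkId"]
    except KeyError as ke:
        raise ValueError("Request is missing data: " + str(ke))
    if source_net_id != network_id:
        raise ValueError("Request has data from wrong network: "
                         "expected {} got {}".format(network_id, source_net_id))

validate_scanning({"data": {"networkId": "N_123"}}, "N_123")  # passes silently
try:
    validate_scanning({"data": {}}, "N_123")
except ValueError as e:
    print(e)  # missing-data case is rejected before any layer is touched
```

Failing fast here means a malformed or misrouted webhook push can never roll the exposure queues with bogus observations.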
\"\"\" self.bm_boxes = blindspots if", "day hour params - used for unit tests if day==None or hour==None: day,", "update - note this does not effect existing data, only new observations If", "-> None: fixed = {} unfixed = {} for id, placeable in observations.items():", "config_data[Model.STORE_SELECTED] select_data = config_data[selected_id] self.update_model_config(selected_id,select_data) else: print(\"Warning: config file not found\") try: self.update_model_config(None,", "def write(self): \"Save TimeSlotAvg object to a compressed file\" filepath = os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(self.day, self.hour))", "placeable.has_mask_override: fixed[id] = placeable else: unfixed[id] = placeable self.__add_fixed_locations(fixed) self.__add_unfixed_locations(unfixed) def __add_fixed_locations(self,fixed:dict) ->", "str(ke) ) if source_net_id != self.network_id: raise Model.BadRequest(\"Request has data from wrong network:", "# Add any missing overlays layer.verify_and_update(floors) for l_id in data_layers.keys(): # Get count", "assert exposure > 0 self.exposure = exposure # Exposure queue, shape (exp,x,y) self.__unfixed_observations", "my+1 )).astype(\"int32\") for x in range(mx): #Top, bottom it = ix_divs[x] ib =", "ix_points = np.floor(np.linspace( 0, imarr.shape[0] + self.margin_px[0], overlay.shape[0] + 1 )).astype(\"int32\") iy_points =", "cam in nonFOVcams: distances[cam] = np.hypot( cam.x-event[0], cam.y-event[1] ) top_n = [ cam[0]", "/= len(self.data_layers) return collective def update_timeslot(self): \"Calls the factory if the current TimeSlotAvg", "__validate_scanning(self,SAPI_packet:dict) -> None: if type(SAPI_packet) != dict: raise TypeError(\"JSON parsed a {}, expected", "+ new_um_overlay ) / (c+1) upd_m_overlay = ( avg_m_overlay * c + new_m_overlay", "self.floorplan.px_per_m_h, self.margin_m[1] * self.floorplan.px_per_m_w ) self.mask_enabled = False self.pixelmask = None self.mask =", "latest frame (no smoothing). 
\"\"\" window = self.exposure if exposure == 0 else", "not in Model.LAYERS_ALL: raise Model.ModelException(Model.__BAD_LAYER.format(layer)) self.data_layers[layer] = Layer(self.plans, Model.DEFAULT_EXPOSURE) self.timeslot = TimeSlotAvg.load(self.data_layers,self.plans) self.webhook_addresses", ") def comp_historical(self, floorPlanId:str): \"Get the relative busyness of a floorplan using all", "DOWNSAMPLE_THRESHOLD = 0.5 VARIANCE_THRESHOLD = np.hypot( *( 2*(CELL_SIZE_M / 2,) ) ) #", "current = self.data_layers[lid].overlays[floorPlanId].get_delta(masked=mask_enabled) historical = hist_fp_data[lid].get_delta(masked=mask_enabled,exposure=1) collective += (current - historical) collective /=", ") tsa.verify_and_update_struct(data_layers, floors) tsa.write() else: if __name__!=\"__main__\": assert isinstance(tsa,TimeSlotAvg) tsa.verify_and_update_struct(data_layers, floors) return tsa", "cam.y-event[1] ) top_n = [ cam[0] for cam in sorted(distances.items(),key=lambda x: x[1])[:n] ]", "self.data_layers ) def comp_historical(self, floorPlanId:str): \"Get the relative busyness of a floorplan using", "Should be of shape (n,4), got {}\".format(str(blindspots))) if wallthreshold == None: pass elif", "such floor: \",floor_id) floor = self.plans[floor_id] if on: floor.set_bounds_mask(blindspots,wallthreshold) else: floor.bm_boxes = blindspots", "= 0 self.__unfixed_observations[0] = 0 def clear(self) -> None: \"Clear accumulated observation data", "else: raise Model.ModelException(Model.__BAD_LAYER.format(api_layer_val)) def __generate_person_obs(self) -> dict: \"Indexes observed person objects with an", "mask override takes precident um_possible = placeable.mask_override m_possible = placeable.mask_override else: if placeable.variance", "divisions (what coords do we get laying one mask over the other) #", "data # Also we only need 1 frame to store average so flatten", "= self.pullFloors() self.getAPs() self.query_obj.pullCameras() self.data_layers = dict() 
self.webhook_threshold = 0.35 for layer in", "outside high floors. Blindspots should be a Numpy array, tuple or nested list", "or create new\" #TODO remove day hour params - used for unit tests", "time\" if self.is_current_time(debug): # it is valid to update with the current model", "Layer(self.plans, Model.DEFAULT_EXPOSURE) self.timeslot = TimeSlotAvg.load(self.data_layers,self.plans) self.webhook_addresses = [] ### Floorplans def pullFloors(self) ->", "# avg layers are already flat so exposure of 1 avg_um_overlay = avg_layer.overlays[o_key].get_delta(masked=False,exposure=1)", "NEG imarr[ixs:ixe,iys:iye,3] = ( imarr[ixs:ixe,iys:iye,3].astype(np.float64) * alpha ).astype(np.uint8) imarr = Image.fromarray(imarr,\"RGBA\").filter( ImageFilter.BoxBlur( BLUR_CELLS", "for over in self.overlays.values(): over.clear() def verify_and_update(self, floorplans:dict): \"\"\" Ensure overlays are compatible", "# Update mask in case of change # Note this does not change", "verify_and_update(self, floorplans:dict): \"\"\" Ensure overlays are compatible with current floorplans (Floor objects). 
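`Model.spike` finds the busiest 3x3-cell (3 m^2) cluster of the relative-busyness grid and flags it against a threshold. The clustering can be run standalone (the grid and threshold here are illustrative, not from the project):

```python
import numpy as np

# Sketch of Model.spike: sum the busyness grid into 3x3 clusters and
# report the busiest cluster centre if it beats a threshold.
layer = np.zeros((6, 6))
layer[4, 4] = 5.0                       # a burst of activity bottom-right

dims = ((len(layer) // 3) + 1, (len(layer[0]) // 3) + 1)
clusters = np.zeros(dims, dtype="float32")
for x in range(dims[0]):
    for y in range(dims[1]):
        clusters[x][y] = layer[3 * x:3 * (x + 1), 3 * y:3 * (y + 1)].sum()

busiest, busiest_location = 0, None
for x in range(len(clusters)):
    for y in range(len(clusters[0])):
        if clusters[x][y] > busiest:
            busiest = clusters[x][y]
            busiest_location = (3 * (x + 0.5), 3 * (y + 0.5))

print({'spike': busiest > 2.0, 'location': busiest_location})
```

Out-of-range slices like `layer[6:9]` simply come back empty with sum 0, which is why the `//3 + 1` sizing is safe even when the grid is not a multiple of 3.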
Throws", "get_time()->tuple: \"Get the current time values needed for reading and writing data files\"", "avg_unfixed_obs = over.get_unfixed_observations() # Count c = self.count[l_key][o_key] # update the average by", "= 0 self.__masked_dataoverlay[:] = 0 self.__unmasked_dataoverlay[:] = 0 def copy(self,flatten:bool=False): \"\"\" Return a", "data def verify_and_update(self,floor:Floor)->None: \"Verifies overlay is compatible with passed floor, updates mask\" if", "supplied of incorrect shape or type, should be iterable shape (n,2)\") else: raise", "self.data_layers.keys(): mask_enabled = self.plans[floorPlanId].mask_enabled current = self.data_layers[lid].overlays[floorPlanId].get_delta(masked=mask_enabled) historical = hist_fp_data[lid].get_delta(masked=mask_enabled,exposure=1) collective += (current", "unfixed_count self.__masked_dataoverlay = masked_overlay self.__unmasked_dataoverlay = unmasked_overlay def roll(self) -> None: \"Roll the", "wallthreshold == None: bd = BoundaryDetector( self.floorplan.get_image() ) else: # pragma: no cover", "= { cam for cam in FOVcams if cam.get_FOV()[event_root]==True } if len(hasView)>0: return", "only need 1 frame to store average so flatten self.data_layers[l_id] = layer.copy(flatten=True) self.data_layers[l_id].clear()", "Overlay.get_full \"\"\" # Not covered as not required for current scope return {", "not isinstance(wallthreshold,(int,float)): raise TypeError(\"Invalid type for wallthreshold parameter. Should be of type int", "with given parameters. 
For more info see Floor.set_bounds_mask \"\"\" if wallthreshold == None:", "pickle file or create new\" #TODO remove day hour params - used for", "import bz2 import pickle import datetime import requests import hashlib parentddir = os.path.abspath(os.path.join(os.path.dirname(__file__),", "day==None or hour==None: day, hour = TimeSlotAvg.get_time() try: tsa = bz2.BZ2File(os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(day, hour)), 'rb')", "edge of floorplan and overlay ix_points = np.floor(np.linspace( 0, imarr.shape[0] + self.margin_px[0], overlay.shape[0]", "if day==None or hour==None: day, hour = TimeSlotAvg.get_time() try: tsa = bz2.BZ2File(os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(day, hour)),", "not required for current scope return { _id : over.get_full(masked, exposure) for _id,", "self.plans.keys(): raise Model.ModelException(\"No such floor: \",floor_id) floor = self.plans[floor_id] if on: floor.set_bounds_mask(blindspots,wallthreshold) else:", "self.plans.keys(): self.plans[ap.floorPlanId].aps[mac] = ap ### Scanning API (SAPI) def __validate_scanning(self,SAPI_packet:dict) -> None: if", "avg_um_overlay * c + new_um_overlay ) / (c+1) upd_m_overlay = ( avg_m_overlay *", "alpha = abs(val / absmax) #Left, right iys = iy_points[my] iye = iy_points[my+1]", "cam for cam in FOVcams if cam.get_FOV()[event_root]==True } if len(hasView)>0: return (\"Covered\", list(hasView)", "frame of WiFi layer rendered on the floor plan\" return self.plans[floorPlanId].render_overlay(self.data_layers[Model.LAYER_SNAP_WIFI].overlays[floorPlanId].get_delta(exposure=1)) def debug_render(self,fpid)->Image:", "< Model.DOWNSAMPLE_THRESHOLD def calc_bounds_mask(self,blindspots=None,wallthreshold:float=None) -> Image.Image: \"\"\" Generate a preview of a bounds", "a copy of the overlay. 
If flatten, squash (mean) exposure window into 1", "self.pullFloors() self.getAPs() self.query_obj.pullCameras() self.data_layers = dict() self.webhook_threshold = 0.35 for layer in layers:", "self.webhook_threshold = conf_dict.get(Model.STORE_WHTHRESHOLD,self.webhook_threshold) for mac, coords in conf_dict.get(Model.STORE_FOVCOORDS,dict()).items(): self.setFOVs( mac, coords ) for", "= real_dimensions self.mask = floormask assert exposure > 0 self.exposure = exposure #", "blindspots: bd.add_blindspot(*tuple(spot)) bd.run() return bd.generate_graphic() def render_overlay(self,overlay:np.ndarray,pixelmask:bool=True): \"\"\" Render overlay onto the floorplan", "curr_day = curr_dt.weekday() curr_hour = curr_dt.hour return curr_day, curr_hour def verify_and_update_struct(self, data_layers:dict, floors:dict)->None:", "Layer({}, 1 if flatten else self.exposure ) ly.overlays = { _id : over.copy(flatten)", "!= {}: self.snapshotWebhook(POST_data) ### Configuration STORE_WEBHOOK = \"webhooklist\" STORE_SELECTED = \"selectednet\" STORE_LAYERS =", "self.serialize() with open( self.CONFIG_PATH, 'wb' ) as f: pickle.dump(config_data, f) def read_config_data(self): if", "False not in [ len(coord)==2 for coord in coords ]: cam = self.query_obj.cameras[mac]", "key=abs) #m_max, m_min = absmax, -absmax imarr = np.zeros((destination.size[1],destination.size[0],4),dtype=\"uint8\") # Account for margin", "1 LAYER_SNAP_BT = 2 LAYER_MVSENSE = 3 LAYERS_ALL = {LAYER_SNAP_WIFI, LAYER_SNAP_BT, LAYER_MVSENSE} CONFIG_PATH", "imarr.shape[0] + self.margin_px[0], overlay.shape[0] + 1 )).astype(\"int32\") iy_points = np.floor(np.linspace( 0,imarr.shape[1] + self.margin_px[1],", "Image.fromarray(255*self.pixelmask.astype(np.uint8),\"L\") # Paste alpha on masked region imarr.paste((0,0,0,0),mask) elif pixelmask: print(\"Error: Could not", "fpid in set(floorplans).difference(self.overlays.keys()): # A floorplan not represented in the floorplans but not", "== len(unfixed_count.shape) assert 
len(self.__masked_dataoverlay.shape) == len(masked_overlay.shape)
assert len(self.__unmasked_dataoverlay.shape) == len(unmasked_overlay.shape)
self.__unfixed_observations = unfixed_count
TypeError(\"Invalid type for wallthreshold parameter. Should be of type int or", "populate(self,layers:set): assert isinstance(self.query_obj, APIQuery) self.network_id = self.query_obj.network_id self.plans = self.pullFloors() self.getAPs() self.query_obj.pullCameras() self.data_layers", "mask self.__masked_dataoverlay[0][m_possible] += 1.0 / m_possible.sum() def __add_unfixed_locations(self,unfixed:dict) -> None: self.__unfixed_observations[0] += len(unfixed)", "found\".format(mac)) else: raise Model.ModelException(\"Model not configured for LAYER_MVSENSE\") def pull_mvsense_data(self): \"Pull live MVSense", "= \"layers\" STORE_FOVCOORDS = \"fov_coords\" STORE_FOVMASK = \"fov_mask\" STORE_BMBOXES = \"bm_boxes\" STORE_BDENABLED =", "dtype=\"float32\" ) def set(self, unfixed_count:np.ndarray, masked_overlay:np.ndarray, unmasked_overlay:np.ndarray): \"Sets internal observation data. Not for", "is outdated, update - note this does not effect existing data, only new", "unset mask \"\"\" #Check if Layer exists if Model.LAYER_MVSENSE in self.data_layers.keys(): #Check camera", "exposures squashed. 
For more info, see Overlay.get_delta \"\"\" return { _id : over.get_delta(masked,", "Creating new overlay for FPID:{}\".format(fpid)) fp = floorplans[fpid] self.overlays[fpid] = Overlay(fpid, fp.overlay_dimensions, fp.floorplan_dimensions,", "with open( self.CONFIG_PATH, 'wb' ) as f: pickle.dump(config_data, f) def read_config_data(self): if os.path.isfile(Model.CONFIG_PATH):", "__init__(self, floorid:str, overlay_dimensions:tuple, real_dimensions:tuple, floormask:np.ndarray, exposure:int): self.floorid = floorid #self.observations = dict() self.overlay_dimensions", "idealality} for i, cam in enumerate(cameras): response = self.getCameraImage(cam) POST_data[fpid][\"camera_data_\" + str(i)] =", "busiest: busiest = clusters[x][y] busiest_location = (3*(x+0.5), 3*(y+0.5)) return {'spike':busiest > threshhold, 'location':busiest_location}", "cameras and feed into data layer\" self.query_obj.updateCameraMVSenseData() observations = self.__generate_person_obs() self.data_layers[Model.LAYER_MVSENSE].set_observations(observations) def spike(self,", "unmasked deltas, and unfixed count from new data # Get full available exposure", "pass DATA_DIR = \"historical_data\" def __init__(self, data_layers:dict, day:int, hour:int): self.day = day self.hour", "bz2.BZ2File(filepath, 'wb') as f: pickle.dump(self, f) def get_floor_avgs(self, fpid:str)->dict: \"Return the flat average", "instead use Overlay.add\" assert len(self.__unfixed_observations.shape) == len(unfixed_count.shape) assert len(self.__masked_dataoverlay.shape) == len(masked_overlay.shape) assert len(self.__unmasked_dataoverlay.shape)", "in nonFOVcams: distances[cam] = np.hypot( cam.x-event[0], cam.y-event[1] ) top_n = [ cam[0] for", "except FileNotFoundError: tsa = TimeSlotAvg( data_layers, day, hour ) tsa.verify_and_update_struct(data_layers, floors) tsa.write() else:", "= self.floorplan.get_image().convert(\"RGBA\") # Overlay scaling absmax = max(overlay.max(), overlay.min(), key=abs) #m_max, m_min =", "# Top, bottom 
ixs = ix_points[mx]
ixe = ix_points[mx+1]
for my in range(overlay.shape[1]):
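The rendering loop above walks precomputed breakpoints (`ix_points`/`iy_points`) that divide the padded floorplan-image axis into one pixel span per overlay cell, built with `np.floor(np.linspace(0, size + margin, cells + 1))`. A minimal sketch of that mapping — the helper name and the sample sizes here are hypothetical, not from the module:

```python
import numpy as np

def cell_pixel_bounds(image_px: int, margin_px: int, cells: int) -> np.ndarray:
    # Breakpoints dividing the (image + margin) axis into `cells` equal spans,
    # mirroring ix_points = np.floor(np.linspace(0, size + margin, cells + 1))
    return np.floor(np.linspace(0, image_px + margin_px, cells + 1)).astype("int32")

# Hypothetical numbers: a 100 px axis with 4 px of margin, split into 4 cells
points = cell_pixel_bounds(100, 4, 4)  # -> [0, 26, 52, 78, 104]
spans = [(points[i], points[i + 1]) for i in range(4)]
```

Cell `mx` then paints pixels `points[mx]:points[mx+1]`, which is exactly how `ixs`/`ixe` are used above.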
Model.get_type(SAPI_packet)", "avg_m_overlay * c + new_m_overlay ) / (c+1) upd_unfixed_obs = ( avg_unfixed_obs *", "(\"Covered\", list(hasView) ) distances = dict() for cam in FOVcams: fov = cam.get_FOV()", "(x+0.5)-event[0], (y+0.5)-event[1] ) if dist < mindist: mindist = dist distances[cam] = mindist", "not (self.timeslot.is_current_time()): self.timeslot.write() self.timeslot = TimeSlotAvg.load( self.data_layers, self.plans ) self.timeslot.verify_and_update_struct( self.data_layers, self.plans )", "= self.serialize() with open( self.CONFIG_PATH, 'wb' ) as f: pickle.dump(config_data, f) def read_config_data(self):", "#returns a dictionary containing a link to the image response = self.query_obj.getCameraSnap(camera) return", "exposure) for _id, over in self.overlays.items() } def copy(self,flatten:bool=True): \"\"\" Return a copy", "'wb') as f: pickle.dump(self, f) def get_floor_avgs(self, fpid:str)->dict: \"Return the flat average Overlay", "over.get_unfixed_observations() # Similar from averages # avg layers are already flat so exposure", "image to keep overlay heatmap within bounds. 
\"\"\" POS = np.array([255,0,0,180],dtype=np.uint8) NEG =", "in coords ]: cam = self.query_obj.cameras[mac] shape = self.plans[cam.floorPlanId].overlay_dimensions cam.set_FOV(shape,coords) else: raise ValueError", "c + new_m_overlay ) / (c+1) upd_unfixed_obs = ( avg_unfixed_obs * c +", "reflect this change May throw error if dimensions do not equate and historical", "Update mask in case of change # Note this does not change existing", "- historical) collective /= len(self.data_layers) return collective def update_timeslot(self): \"Calls the factory if", "self.plans[floorPlanId].render_overlay(self.data_layers[Model.LAYER_SNAP_WIFI].overlays[floorPlanId].get_delta(exposure=1)) def debug_render(self,fpid)->Image: import datetime dm = self.plans[fpid] dims = dm.overlay_dimensions testarr =", "curr_day, curr_hour def verify_and_update_struct(self, data_layers:dict, floors:dict)->None: \"\"\" Verifies that the data in the", "exposure:int=0)->np.ndarray: \"\"\" Returns a copy of the delta overlay (only fixed observations). 
If", "if not (self.timeslot.is_current_time()): self.timeslot.write() self.timeslot = TimeSlotAvg.load( self.data_layers, self.plans ) self.timeslot.verify_and_update_struct( self.data_layers, self.plans", "np.floor(np.linspace( 0, iy+self.margin_px[1], my+1 )).astype(\"int32\") for x in range(mx): #Top, bottom it =", "metres and px self.margin_m = ( Model.CELL_SIZE_M - (self.floorplan_dimensions[0] % Model.CELL_SIZE_M), Model.CELL_SIZE_M -", "} return ly def clear(self)->None: \"Clear all member overlays of observation data\" for", "Could not filter by pixel mask as no mask exists\") destination.putalpha(255) return Image.alpha_composite(destination,imarr)", "self.__unfixed_observations = np.zeros(exposure) self.__masked_dataoverlay = np.zeros( (exposure,)+overlay_dimensions, dtype=\"float32\" ) self.__unmasked_dataoverlay = np.zeros( (exposure,)+overlay_dimensions,", "self.__unmasked_dataoverlay[0][um_possible] += 1.0 / um_possible.sum() # Include floor mask self.__masked_dataoverlay[0][m_possible] += 1.0 /", "shape or type, should be iterable shape (n,2)\") else: raise Model.ModelException(\"Camera with mac", "Model.CELL_SIZE_M - client_loc ) um_possible[x,y] = d <= placeable.variance # m(asked)_possible is a", "= conf_dict.get(Model.STORE_WHTHRESHOLD,self.webhook_threshold) for mac, coords in conf_dict.get(Model.STORE_FOVCOORDS,dict()).items(): self.setFOVs( mac, coords ) for fpid,", "#need to get relevant floor obj POST_data[fpid] = {\"type\" : \"SnapshotData\", \"is_ideal\" :", "= np.zeros(dims, dtype=\"float32\") for x in range(len(clusters)): for y in range(len(clusters[0])): clusters[x,y] =", "in case of change # Note this does not change existing data, only", "floorplan using all layers\" self.update_timeslot() # Get the current timeslot object hist_fp_data =", "np from PIL import Image, ImageFilter import bz2 import pickle import datetime import", "passed observations, clearing any previous. 
Pass floor object dictionary\" bins = { id", "overlays covering all floorplans with data from a single source type\" def __init__(self,floorplans:dict,exposure:int):", "of the m(asked)_possible locations and the u(n)m(asked)_possible locations um_possible = np.zeros(self.overlay_dimensions, dtype=np.bool_) if", "dimensions do not match \"\"\" for l_id in set(data_layers.keys()).difference(self.data_layers.keys()): # For layers in", "TimeSlotAvg.load( self.data_layers, self.plans ) self.timeslot.verify_and_update_struct( self.data_layers, self.plans ) ### Providers def poll_layer(self,layer:int,exposure:int) ->", "if layer not in Model.LAYERS_ALL: raise Model.ModelException(Model.__BAD_LAYER.format(layer)) self.data_layers[layer] = Layer(self.plans, Model.DEFAULT_EXPOSURE) self.timeslot =", "magnitude pos = val > 0 alpha = abs(val / absmax) #Left, right", "binary, mean gives ratio of elems 1 to elems total self.mask[x][y] = self.pixelmask[it:ib,il:ir].mean()", "within variance metres d = np.linalg.norm( (np.array([x,y]) + 0.5) * Model.CELL_SIZE_M - client_loc", "Model.LAYER_MVSENSE in self.data_layers.keys(): #Check camera with mac exists if mac in self.query_obj.getCameras().keys(): #Check", "avg_layer.overlays[o_key].get_delta(masked=True,exposure=1) avg_unfixed_obs = over.get_unfixed_observations() # Count c = self.count[l_key][o_key] # update the average", "client_loc ) um_possible[x,y] = d <= placeable.variance # m(asked)_possible is a copy of", "def verify_and_update(self,floor:Floor)->None: \"Verifies overlay is compatible with passed floor, updates mask\" if self.overlay_dimensions", "so flatten self.data_layers[l_id] = layer.copy(flatten=True) self.data_layers[l_id].clear() # Set a count for each overlay", "filepath = os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(self.day, self.hour)) if not os.path.exists(TimeSlotAvg.DATA_DIR): os.makedirs(TimeSlotAvg.DATA_DIR) with bz2.BZ2File(filepath, 'wb') as f:", "data layer for o_key, over in 
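Each fixed observation contributes one unit of probability mass spread evenly over its candidate cells — the `+= 1.0 / um_possible.sum()` pattern in `Overlay`. A minimal sketch of that update with a hypothetical helper name:

```python
import numpy as np

def add_observation(frame: np.ndarray, possible: np.ndarray) -> None:
    # Spread one observation's unit mass evenly over its candidate cells,
    # as in `self.__unmasked_dataoverlay[0][um_possible] += 1.0 / um_possible.sum()`.
    if possible.any():
        frame[possible] += 1.0 / possible.sum()

frame = np.zeros((2, 2))
possible = np.array([[True, True], [False, False]])  # two candidate cells
add_observation(frame, possible)  # each candidate gains 0.5; total mass is 1.0
```

Because the total added mass is always 1.0, cell values stay comparable regardless of how tightly an observation could be localised.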
layer.overlays.items(): # Get masked and unmasked deltas,", "DEFAULT_PASSWORD = '<PASSWORD>' __BAD_LAYER = \"Layer {} not defined. Use internally defined layer", "np.array( [self.real_dimensions[0] - placeable.y, placeable.x ] ) for x in range(um_possible.shape[0]): for y", "is close enough to be within variance metres d = np.linalg.norm( (np.array([x,y]) +", "floor.overlay_dimensions, floor.floorplan_dimensions, floor.mask, exposure) for _id,floor in floorplans.items() } def set_observations(self,observations:dict): \"Set the", "Update the count self.count[l_key][o_key] += 1 #self.write() else: raise TimeSlotAvg.TimeSlotAvgException(f\"Cannot update with current", "= np.array([0,255,0,180],dtype=np.uint8) BLUR_CELLS = 0.35 destination = self.floorplan.get_image().convert(\"RGBA\") # Overlay scaling absmax =", "self.margin_m[1] * self.floorplan.px_per_m_w ) self.mask_enabled = False self.pixelmask = None self.mask = np.ones(self.overlay_dimensions,dtype=np.bool_)", "== name: return k return None def setBoundsMask(self,floor_id:str,on:bool,blindspots=None,wallthreshold:float=None) -> None: \"Generate a mask", "a given SAPI packet\" api_layer_val = APIQuery.get_SAPI_type(SAPI_packet) if api_layer_val == \"WiFi\": return Model.LAYER_SNAP_WIFI", "# Also we only need 1 frame to store average so flatten self.data_layers[l_id]", "to sqm pixels on internal datamap. 
Pass len(iterable)==0 to unset mask \"\"\" #Check", "not effect existing data, only new observations If an overlay missing, print info,", "layer rendered on the floor plan\" return self.plans[floorPlanId].render_overlay(self.data_layers[Model.LAYER_SNAP_WIFI].overlays[floorPlanId].get_delta(exposure=1)) def debug_render(self,fpid)->Image: import datetime dm", "do not match \"\"\" for l_id in set(data_layers.keys()).difference(self.data_layers.keys()): # For layers in data_layers", "type int or float, got {}\".format(str(type(wallthreshold)))) if floor_id not in self.plans.keys(): raise Model.ModelException(\"No", "fpid:0 for fpid in layer.overlays.keys() } @staticmethod def load( data_layers:dict, floors:dict, day=None, hour=None", "- (self.floorplan_dimensions[0] % Model.CELL_SIZE_M), Model.CELL_SIZE_M - (self.floorplan_dimensions[1] % Model.CELL_SIZE_M) ) self.margin_px = (", "overlays layer.verify_and_update(floors) for l_id in data_layers.keys(): # Get count if exists, else set", "verify_and_update_struct(self, data_layers:dict, floors:dict)->None: \"\"\" Verifies that the data in the TimeSlotAvg is compatible", "comprehension king # Maybe after we spend 30 minutes fixing it ;) event", "self.plans[floorPlanId].render_overlay(datamap) def render_abs(self,floorPlanId:str)->Image: \"Get latest frame of WiFi layer rendered on the floor", "compressed pickle file or create new\" #TODO remove day hour params - used", "floorplan not represented in the floorplans but not in the overlays print(\"Info: Creating", "is 0.707 iff cell_s_m = 1 DEFAULT_EXPOSURE = 3 DEFAULT_PASSWORD = '<PASSWORD>' __BAD_LAYER", "= np.zeros( self.plans[floorPlanId].overlay_dimensions ) for lid in self.data_layers.keys(): mask_enabled = self.plans[floorPlanId].mask_enabled current =", "get network from config file\") class TimeSlotAvg: class TimeSlotAvgException( Exception ): pass DATA_DIR", "in self.plans.keys(): self.plans[ap.floorPlanId].aps[mac] = ap ### Scanning API (SAPI) def 
__validate_scanning(self,SAPI_packet:dict) -> None:", "= \"selectednet\" STORE_LAYERS = \"layers\" STORE_FOVCOORDS = \"fov_coords\" STORE_FOVMASK = \"fov_mask\" STORE_BMBOXES =", "def serialize(self): conf = dict() conf[Model.STORE_SECRET] = self.secret conf[Model.STORE_TOKEN] = self.validator_token conf[Model.STORE_LAYERS] =", "Get the respective average layer avg_layer = self.data_layers[l_key] # For each overlay in", "for the floorplan object including model data\" def __init__(self,floorplan:FloorPlan): self.floorplan = floorplan #", "on the floor plan\" return self.plans[floorPlanId].render_overlay(self.data_layers[Model.LAYER_SNAP_WIFI].overlays[floorPlanId].get_delta(exposure=1)) def debug_render(self,fpid)->Image: import datetime dm = self.plans[fpid]", "that the data in the TimeSlotAvg is compatible with the current Model. If", "obs objects, converting to dictionary return dict(zip(range(len(obs)),obs)) def provide_scanning(self,SAPI_packet:dict) -> None: \"Update model", "== \"Bluetooth\": return Model.LAYER_SNAP_BT else: raise Model.ModelException(Model.__BAD_LAYER.format(api_layer_val)) def __generate_person_obs(self) -> dict: \"Indexes observed", "info, see Overlay.get_delta \"\"\" return { _id : over.get_delta(masked, exposure) for _id, over", "def clear(self)->None: \"Clear all member overlays of observation data\" for over in self.overlays.values():", "in self.data_layers.keys(): #Check camera with mac exists if mac in self.query_obj.getCameras().keys(): #Check coords", "val = overlay[mx,my] if val==0: continue #Colour by sign, scale alpha by magnitude", "raise Model.ModelException(\"Error: Overlay and Floorplan dimension mismatch for FPID={}\".format(floor.floorplan.id)) # Update mask in", "metres d = np.linalg.norm( (np.array([x,y]) + 0.5) * Model.CELL_SIZE_M - client_loc ) um_possible[x,y]", "overlay for FPID:{}\".format(fpid)) fp = floorplans[fpid] self.overlays[fpid] = Overlay(fpid, fp.overlay_dimensions, fp.floorplan_dimensions, fp.mask, 
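The spike detector elsewhere in this module sums the datamap into 3x3 "clusters" (`clusters[x,y] = layer[3*x:3*(x+1),3*y:3*(y+1)].sum()`) and reports the busiest one against a threshold. A minimal sketch of the block-sum step — the function name and the sample array are illustrative only:

```python
import numpy as np

def block_sums(layer: np.ndarray, k: int = 3) -> np.ndarray:
    # Sum the datamap into k x k clusters, as the spike detector does with
    # clusters[x, y] = layer[3*x:3*(x+1), 3*y:3*(y+1)].sum()
    cx, cy = layer.shape[0] // k, layer.shape[1] // k
    clusters = np.zeros((cx, cy), dtype="float32")
    for x in range(cx):
        for y in range(cy):
            clusters[x, y] = layer[k*x:k*(x+1), k*y:k*(y+1)].sum()
    return clusters

layer = np.zeros((6, 6))
layer[4, 4] = 2.0            # a burst of activity in the lower-right cluster
clusters = block_sums(layer)  # shape (2, 2); the hot cell lands in clusters[1, 1]
```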
self.exposure)", "!= None and placeable.y != None ) or placeable.has_mask_override: fixed[id] = placeable else:", "get_full(self, masked:bool=True, exposure:int=0) -> dict: #pragma: no cover \"\"\" Return the full data", "elif api_layer_val == \"Bluetooth\": return Model.LAYER_SNAP_BT else: raise Model.ModelException(Model.__BAD_LAYER.format(api_layer_val)) def __generate_person_obs(self) -> dict:", "= { fpid:0 for fpid in layer.overlays.keys() } @staticmethod def load( data_layers:dict, floors:dict,", "in terms of absolute delta from mean\" datamap = self.comp_historical(floorPlanId) return self.plans[floorPlanId].render_overlay(datamap) def", "error if dimensions do not equate and historical data would be invalidated \"\"\"", "= {LAYER_SNAP_WIFI, LAYER_SNAP_BT, LAYER_MVSENSE} CONFIG_PATH = os.path.join('model.conf') CELL_SIZE_M = 1 DOWNSAMPLE_THRESHOLD = 0.5", "api_layer_val = APIQuery.get_SAPI_type(SAPI_packet) if api_layer_val == \"WiFi\": return Model.LAYER_SNAP_WIFI elif api_layer_val == \"Bluetooth\":", "STORE_FOVMASK = \"fov_mask\" STORE_BMBOXES = \"bm_boxes\" STORE_BDENABLED = \"bd_enabled\" STORE_SECRET = \"sapisecret\" STORE_TOKEN", "event ]) cameras = { cam for cam in self.query_obj.getCameras().values() if cam.floorPlanId ==", "1 to elems total self.mask[x][y] = self.pixelmask[it:ib,il:ir].mean() < Model.DOWNSAMPLE_THRESHOLD def calc_bounds_mask(self,blindspots=None,wallthreshold:float=None) -> Image.Image:", "Verifies that the data in the TimeSlotAvg is compatible with the current Model.", "@staticmethod def get_time()->tuple: \"Get the current time values needed for reading and writing", "= self.mask.shape[1] #Image-scale pixel mask (mini-scale, big-dims) dims ix = self.pixelmask.shape[0] iy =", "destination.putalpha(255) return Image.alpha_composite(destination,imarr) class Layer: \"Class representing a data layer: a series of", "{LAYER_SNAP_WIFI, LAYER_SNAP_BT, LAYER_MVSENSE} CONFIG_PATH = os.path.join('model.conf') CELL_SIZE_M = 1 
DOWNSAMPLE_THRESHOLD = 0.5 VARIANCE_THRESHOLD", "near # Ignore floor mask self.__unmasked_dataoverlay[0][um_possible] += 1.0 / um_possible.sum() # Include floor", "in self.data_layers.values(): layer.verify_and_update(self.plans) ### Access Points def getAPs(self) -> None: \"Get APs and", "self.overlay_dimensions = overlay_dimensions self.real_dimensions = real_dimensions self.mask = floormask assert exposure > 0", "and pixelmask: # Tidy the edges # Make the mask an Image, mode=L", "For each square, see if it's centre is close enough to be within", "> threshhold, 'location':busiest_location} def nearestCameras(self, n:int, floor:Floor,spikeDict:dict)->tuple: #returns a list of camera objects", "self.count[l_key][o_key] += 1 #self.write() else: raise TimeSlotAvg.TimeSlotAvgException(f\"Cannot update with current model as it", "Store parsed location in x,y tuple, in m, with change of axis client_loc", "self.plans.items() } def findFloorplanByName(self,name) -> str: \"Find a floorPlanId from the floor name\"", "ke: raise Model.BadRequest(\"Request is missing data: \" + str(ke) ) if source_net_id !=", "'wb' ) as f: pickle.dump(config_data, f) def read_config_data(self): if os.path.isfile(Model.CONFIG_PATH): with open( Model.CONFIG_PATH,", "layer in self.data_layers.items() } @staticmethod def get_time()->tuple: \"Get the current time values needed", "# Get full available exposure by default new_um_overlay = over.get_delta(masked=False) new_m_overlay = over.get_delta(masked=True)", "range(len(clusters)): for y in range(len(clusters[0])): if clusters[x][y] > busiest: busiest = clusters[x][y] busiest_location", "distances = dict() for cam in FOVcams: fov = cam.get_FOV() mindist = 9999999", "= os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(self.day, self.hour)) if not os.path.exists(TimeSlotAvg.DATA_DIR): os.makedirs(TimeSlotAvg.DATA_DIR) with bz2.BZ2File(filepath, 'wb') as f: pickle.dump(self,", "absmax, -absmax imarr = 
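`VARIANCE_THRESHOLD = np.hypot(*(2*(CELL_SIZE_M / 2,)))` unpacks the repeated tuple into `hypot(0.5, 0.5)`: the distance from a cell's centre to its corner, which is about 0.707 when `CELL_SIZE_M` is 1 (as the adjacent comment notes). A quick check of that constant:

```python
import math

CELL_SIZE_M = 1  # from the surrounding Model constants

# Half-diagonal of a square cell: distance from the centre to a corner.
# np.hypot(*(2 * (CELL_SIZE_M / 2,))) unpacks to hypot(0.5, 0.5).
VARIANCE_THRESHOLD = math.hypot(CELL_SIZE_M / 2, CELL_SIZE_M / 2)
```

Any client whose reported variance is within this radius can be pinned to a single cell, which is how the fixed/unfixed split above decides placement.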
np.zeros((destination.size[1],destination.size[0],4),dtype=\"uint8\") # Account for margin between edge of floorplan", "self.password conf[Model.STORE_FOVCOORDS] = { cam.mac: cam.get_fov_coords() for cam in self.query_obj.cameras.values() } conf[Model.STORE_BMBOXES] =", "} def copy(self,flatten:bool=True): \"\"\" Return a copy of the layer. Flatten squashes exposures", "internally in relevent floor objects\" aps = self.query_obj.pullAPs() for mac,ap in aps.items(): if", "over.get_delta(masked, exposure) for _id, over in self.overlays.items() } def get_full(self, masked:bool=True, exposure:int=0) ->", "to reflect this change May throw error if dimensions do not equate and", "cover \"\"\" Return the full data for each overlay in the layer. If", "( placeable.x != None and placeable.y != None ) or placeable.has_mask_override: fixed[id] =", "self.mask, 1) cp.__unfixed_observations[0] = self.__unfixed_observations.mean(axis=0) cp.__masked_dataoverlay[0] = self.__masked_dataoverlay.mean(axis=0) cp.__unmasked_dataoverlay[0] = self.__unmasked_dataoverlay.mean(axis=0) else: cp", "self.bm_boxes = blindspots if wallthreshold == None: bd = BoundaryDetector( self.floorplan.get_image() ) else:", "Add any missing overlays layer.verify_and_update(floors) for l_id in data_layers.keys(): # Get count if", "} conf[Model.STORE_BMBOXES] = { fpid: fp.bm_boxes for fpid,fp in self.plans.items() } conf[Model.STORE_BDENABLED] =", "found\") try: self.update_model_config(None, {Model.STORE_LAYERS: Model.LAYERS_ALL} ) except APIQuery.APIException: raise Model.ModelException(\"Could not get network", "is compatible with passed floor, updates mask\" if self.overlay_dimensions != floor.overlay_dimensions or self.real_dimensions", "self.update_model_config(None, {Model.STORE_LAYERS: Model.LAYERS_ALL} ) except APIQuery.APIException: raise Model.ModelException(\"Could not get network from config", "exposure of 1 avg_um_overlay = avg_layer.overlays[o_key].get_delta(masked=False,exposure=1) 
avg_m_overlay = avg_layer.overlays[o_key].get_delta(masked=True, exposure=1)
avg_unfixed_obs = over.get_unfixed_observations()
#
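The historical averages are maintained incrementally: with `c` prior samples folded in, the update `(avg * c + new) / (c + 1)` yields the mean over `c + 1` samples without storing history. A minimal sketch (the helper name is hypothetical):

```python
import numpy as np

def update_average(avg: np.ndarray, new: np.ndarray, c: int) -> np.ndarray:
    # Incremental mean over c prior samples plus one new observation,
    # as in `( avg_m_overlay * c + new_m_overlay ) / (c+1)`.
    return (avg * c + new) / (c + 1)

avg = np.array([2.0, 4.0])   # running mean over c = 2 samples
new = np.array([5.0, 1.0])   # newly observed overlay delta
updated = update_average(avg, new, 2)  # -> [3.0, 3.0]
```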
For more info see Floor.set_bounds_mask", "= self.__masked_dataoverlay.copy() cp.__unmasked_dataoverlay = self.__unmasked_dataoverlay.copy() return cp def add(self,observations:dict) -> None: fixed =", "import hashlib parentddir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir)) sys.path.append(parentddir) from lib.APIQuery import APIQuery, FloorPlan from", "distance between the end of the floorplan and the data overlay, in metres", "other) # Account for margin between edge of floorplan and overlay ix_divs =", "calc_bounds_mask(self,blindspots=None,wallthreshold:float=None) -> Image.Image: \"\"\" Generate a preview of a bounds mask with given", "for FPID:{}\".format(fpid)) fp = floorplans[fpid] self.overlays[fpid] = Overlay(fpid, fp.overlay_dimensions, fp.floorplan_dimensions, fp.mask, self.exposure) for", "no cover # Not covered as only parameter passing bd = BoundaryDetector( self.floorplan.get_image(),", "#spike detect POST_data = {} for fpid, floor in self.plans.items(): spikedict = self.spike(self.comp_historical(fpid),", "clearing any previous. Pass floor object dictionary\" bins = { id : dict()", "Floor(fp) for id,fp in floorplans.items() } return self.plans def getFloorplanSummary(self) -> dict: \"Get", "self.plans[floor_id] if on: floor.set_bounds_mask(blindspots,wallthreshold) else: floor.bm_boxes = blindspots floor.mask[:] = 1 floor.mask_enabled =", "(no smoothing). 
\"\"\" window = self.exposure if exposure == 0 else exposure if", "invalidated \"\"\" for layer in self.data_layers.values(): layer.verify_and_update(self.plans) ### Access Points def getAPs(self) ->", "len(unfixed_count.shape) assert len(self.__masked_dataoverlay.shape) == len(masked_overlay.shape) assert len(self.__unmasked_dataoverlay.shape) == len(unmasked_overlay.shape) self.__unfixed_observations = unfixed_count self.__masked_dataoverlay", "floor in self.plans.items(): spikedict = self.spike(self.comp_historical(fpid), self.webhook_threshold) if spikedict['spike'] == True: idealality, cameras", "floor layer for each\" floorplans = self.query_obj.pullFloorPlans() self.plans = { id : Floor(fp)", "return collective def update_timeslot(self): \"Calls the factory if the current TimeSlotAvg object is", "= 0 busiest_location=None for x in range(len(clusters)): for y in range(len(clusters[0])): if clusters[x][y]", "len(self.__unfixed_observations.shape) == len(unfixed_count.shape) assert len(self.__masked_dataoverlay.shape) == len(masked_overlay.shape) assert len(self.__unmasked_dataoverlay.shape) == len(unmasked_overlay.shape) self.__unfixed_observations =", "x in range(len(clusters)): for y in range(len(clusters[0])): clusters[x,y] = layer[3*x:3*(x+1),3*y:3*(y+1)].sum() busiest = 0", "possibly be or are to be ignored eg outside high floors. Blindspots should", "if flatten else self.exposure ) ly.overlays = { _id : over.copy(flatten) for _id,", "TimeSlotAvg.TimeSlotAvgException(f\"Cannot update with current model as it is not currently day:{self.day}, hour:{self.hour}\") def", "overlays print(\"Info: Creating new overlay for FPID:{}\".format(fpid)) fp = floorplans[fpid] self.overlays[fpid] = Overlay(fpid,", "( avg_m_overlay * c + new_m_overlay ) / (c+1) upd_unfixed_obs = ( avg_unfixed_obs", "for y in range(len(clusters[0])): if clusters[x][y] > busiest: busiest = clusters[x][y] busiest_location =", "bounds. 
\"\"\" POS = np.array([255,0,0,180],dtype=np.uint8) NEG = np.array([0,255,0,180],dtype=np.uint8) BLUR_CELLS = 0.35 destination =", "def render_delta(self,floorPlanId:str)->Image: \"Get the current datamap in terms of absolute delta from mean\"", "= self.plans[fpid] dims = dm.overlay_dimensions testarr = np.zeros(dims).ravel() n = (datetime.datetime.now().second / 60)", "POST_data = {} for fpid, floor in self.plans.items(): spikedict = self.spike(self.comp_historical(fpid), self.webhook_threshold) if", "# Account for margin between edge of floorplan and overlay ix_divs = np.floor(np.linspace(", "for reading and writing data files\" curr_dt = datetime.datetime.now( datetime.timezone( offset=datetime.timedelta(hours=0) ) )", "self.plans = { id : Floor(fp) for id,fp in floorplans.items() } return self.plans", "{Model.STORE_SELECTED:self.network_id} config_data[self.network_id] = self.serialize() with open( self.CONFIG_PATH, 'wb' ) as f: pickle.dump(config_data, f)", "object indexed by each layer stored\" return { layer_id: layer.overlays[fpid] for layer_id, layer", "window = self.exposure if exposure == 0 else exposure # Distribute unfixed observations", "\"Indexes observed person objects with an arbitrary key\" obs = self.query_obj.get_camera_observations() # Zip", "def render_overlay(self,overlay:np.ndarray,pixelmask:bool=True): \"\"\" Render overlay onto the floorplan image in heatmap form. 
If", "_id, over in self.overlays.items() } return ly def clear(self)->None: \"Clear all member overlays", "dict: #pragma: no cover \"\"\" Return the full data for each overlay in", "ratio of elems 1 to elems total self.mask[x][y] = self.pixelmask[it:ib,il:ir].mean() < Model.DOWNSAMPLE_THRESHOLD def", "in enumerate(row): if cell: dist = np.hypot( (x+0.5)-event[0], (y+0.5)-event[1] ) if dist <", "name: return k return None def setBoundsMask(self,floor_id:str,on:bool,blindspots=None,wallthreshold:float=None) -> None: \"Generate a mask from", "\"<PASSWORD>\" STORE_WHTHRESHOLD = \"webhook_threshold\" def update_model_config(self, netid, conf_dict): layers = conf_dict.get(Model.STORE_LAYERS,set()) try: if", "as no mask exists\") destination.putalpha(255) return Image.alpha_composite(destination,imarr) class Layer: \"Class representing a data", "+= (current - historical) collective /= len(self.data_layers) return collective def update_timeslot(self): \"Calls the", "hour:{self.hour}\") def write(self): \"Save TimeSlotAvg object to a compressed file\" filepath = os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(self.day,", "exposure self.overlays = { _id : Overlay(_id, floor.overlay_dimensions, floor.floorplan_dimensions, floor.mask, exposure) for _id,floor", "-> None: self.__unfixed_observations[0] += len(unfixed) def get_delta(self, masked:bool=True, exposure:int=0)->np.ndarray: \"\"\" Returns a copy", "= exposure # Exposure queue, shape (exp,x,y) self.__unfixed_observations = np.zeros(exposure) self.__masked_dataoverlay = np.zeros(", "len(unmasked_overlay.shape) self.__unfixed_observations = unfixed_count self.__masked_dataoverlay = masked_overlay self.__unmasked_dataoverlay = unmasked_overlay def roll(self) ->", "the layer. If exposure 0 (default), will provide all available exposures squashed. 
See", "for margin between edge of floorplan and overlay ix_divs = np.floor(np.linspace( 0, ix+self.margin_px[0],", "copy of the u(n)m(asked)_possible's with the floor mask applied m_possible = np.logical_and( um_possible,", "avg_m_overlay = avg_layer.overlays[o_key].get_delta(masked=True,exposure=1) avg_unfixed_obs = over.get_unfixed_observations() # Count c = self.count[l_key][o_key] # update", "exposures squashed. See Overlay.get_full \"\"\" # Not covered as not required for current", "0 self.__unmasked_dataoverlay[0] = 0 self.__unfixed_observations[0] = 0 def clear(self) -> None: \"Clear accumulated", "= 1 floor.mask_enabled = on self.update_layers() ### Layers def update_layers(self)->None: \"\"\" Must be", "cell_s_m = 1 DEFAULT_EXPOSURE = 3 DEFAULT_PASSWORD = '<PASSWORD>' __BAD_LAYER = \"Layer {}", "np.roll(self.__masked_dataoverlay, 1, axis=0) self.__unmasked_dataoverlay = np.roll(self.__unmasked_dataoverlay, 1, axis=0) self.__unfixed_observations = np.roll(self.__unfixed_observations, 1, axis=0)", ": Overlay(_id, floor.overlay_dimensions, floor.floorplan_dimensions, floor.mask, exposure) for _id,floor in floorplans.items() } def set_observations(self,observations:dict):", "= Image.fromarray(255*self.pixelmask.astype(np.uint8),\"L\") # Paste alpha on masked region imarr.paste((0,0,0,0),mask) elif pixelmask: print(\"Error: Could", "= 1 DOWNSAMPLE_THRESHOLD = 0.5 VARIANCE_THRESHOLD = np.hypot( *( 2*(CELL_SIZE_M / 2,) )", "distributed unfixed observations) For exposure, see Overlay.get_delta \"\"\" data = self.get_delta(masked,exposure) window =", "fpid in layer.overlays.keys() } @staticmethod def load( data_layers:dict, floors:dict, day=None, hour=None ): \"Static", "n = (datetime.datetime.now().second / 60) * len(testarr) testarr[:int(n)] = 1 return dm.render_overlay(testarr.reshape(dims)) def", "in data_layers.keys(): # Get count if exists, else set to 1 self.count[l_id] =", "= Overlay(fpid, fp.overlay_dimensions, fp.floorplan_dimensions, fp.mask, 
self.exposure) for fpid, floor in floorplans.items(): self.overlays[fpid].verify_and_update(floor) class", "\"\"\" for layer in self.data_layers.values(): layer.verify_and_update(self.plans) ### Access Points def getAPs(self) -> None:", "self.pixelmask.shape[0] iy = self.pixelmask.shape[1] #Image-scale chunk divisions (what coords do we get laying", "= self.query_obj.getCameraSnap(camera) return response def snapshotWebhook(self, snapshot:dict): for address in self.webhook_addresses: response =", "= np.ones(self.overlay_dimensions,dtype=np.bool_) self.aps = {} self.bm_boxes = [] def set_bounds_mask(self,blindspots=None,wallthreshold:float=None) -> None: \"\"\"\"", "writing data files\" curr_dt = datetime.datetime.now( datetime.timezone( offset=datetime.timedelta(hours=0) ) ) curr_day = curr_dt.weekday()", "observations were passed. For more details on exposure, see Overlay.get_delta \"\"\" window =", "of elems 1 to elems total self.mask[x][y] = self.pixelmask[it:ib,il:ir].mean() < Model.DOWNSAMPLE_THRESHOLD def calc_bounds_mask(self,blindspots=None,wallthreshold:float=None)", "exposure == 0 else exposure return self.__unfixed_observations[:window].mean(axis=0) def get_full(self, masked:bool=True, exposure:int=0)->np.ndarray: #pragma: no", "on internal datamap. 
Pass len(iterable)==0 to unset mask \"\"\" #Check if Layer exists", "This will update the Overlay objects to reflect this change May throw error", "Overlay.get_delta \"\"\" window = self.exposure if exposure == 0 else exposure return self.__unfixed_observations[:window].mean(axis=0)", "and writing data files\" curr_dt = datetime.datetime.now( datetime.timezone( offset=datetime.timedelta(hours=0) ) ) curr_day =", "in m, with change of axis client_loc = np.array( [self.real_dimensions[0] - placeable.y, placeable.x", "APIQuery(netid) self.populate(layers) self.secret = conf_dict.get(Model.STORE_SECRET) self.validator_token = conf_dict.get(Model.STORE_TOKEN) self.webhook_addresses = conf_dict.get(Model.STORE_WEBHOOK,list()) self.password =", "Generate a mask from BoundaryDetector for areas of the floor that people cannot", "def is_current_time(self, debug=None) -> bool: \"Returns True iff timeslot is for current time\"", "for the current TimeSlotAvg object\" self.update_timeslot() # Get the current timeslot object self.timeslot.update_avg_data(", "shape (n,2). Coords pertain to sqm pixels on internal datamap. 
Pass len(iterable)==0 to", "Floor is added, removed, or altered, including by set_bounds_mask This will update the", "sys.path.append(parentddir) from lib.APIQuery import APIQuery, FloorPlan from lib.BoundaryDetector import BoundaryDetector # As per", "of observation data\" for over in self.overlays.values(): over.clear() def verify_and_update(self, floorplans:dict): \"\"\" Ensure", "with mac exists if mac in self.query_obj.getCameras().keys(): #Check coords of correct shape and", "# Update the count self.count[l_key][o_key] += 1 #self.write() else: raise TimeSlotAvg.TimeSlotAvgException(f\"Cannot update with", "self.data_layers[lid].overlays[floorPlanId].get_delta(masked=mask_enabled) historical = hist_fp_data[lid].get_delta(masked=mask_enabled,exposure=1) collective += (current - historical) collective /= len(self.data_layers) return", "else: return self.__unmasked_dataoverlay[:window].mean(axis=0) def get_unfixed_observations(self, exposure:int=0)->float: \"\"\" Return how many unfixed observations were", "0 (default), will provide all available exposures squashed. See Overlay.get_full \"\"\" # Not", "no mask exists\") destination.putalpha(255) return Image.alpha_composite(destination,imarr) class Layer: \"Class representing a data layer:", "exists if Model.LAYER_MVSENSE in self.data_layers.keys(): #Check camera with mac exists if mac in", "new observations If an overlay missing, print info, create new If an overlay", "!= None: overlay.add(obs) def get_deltas(self, masked:bool=True, exposure:int=0) -> dict: \"\"\" Return the delta", "input, else unmasked data. 
Set exposure to specify mean smoothing on the first", "from a single source type\" def __init__(self,floorplans:dict,exposure:int): self.exposure = exposure self.overlays = {", "if the current TimeSlotAvg object is not current\" if not (self.timeslot.is_current_time()): self.timeslot.write() self.timeslot", "model # For each layer in the new data for l_key, layer in", "return dict(zip(range(len(obs)),obs)) def provide_scanning(self,SAPI_packet:dict) -> None: \"Update model with SAPI data\" # Raise", "= conf_dict.get(Model.STORE_PASSWORD,Model.DEFAULT_PASSWORD) self.webhook_threshold = conf_dict.get(Model.STORE_WHTHRESHOLD,self.webhook_threshold) for mac, coords in conf_dict.get(Model.STORE_FOVCOORDS,dict()).items(): self.setFOVs( mac, coords", "squashed. See Overlay.get_full \"\"\" # Not covered as not required for current scope", "except (ValueError, TypeError) as err: raise err.__class__(\"Coordinates supplied of incorrect shape or type,", "exposure 0 (default), will provide all available exposures squashed. 
For more info, see", "do not equate and historical data would be invalidated \"\"\" for layer in", "except KeyError as ke: raise Model.BadRequest(\"Request is missing data: \" + str(ke) )", "\"Verifies overlay is compatible with passed floor, updates mask\" if self.overlay_dimensions != floor.overlay_dimensions", "0.5 VARIANCE_THRESHOLD = np.hypot( *( 2*(CELL_SIZE_M / 2,) ) ) # This means", "= self.query_obj.pullAPs() for mac,ap in aps.items(): if ap.floorPlanId in self.plans.keys(): self.plans[ap.floorPlanId].aps[mac] = ap", "is_current_time(self, debug=None) -> bool: \"Returns True iff timeslot is for current time\" if", "not os.path.exists(TimeSlotAvg.DATA_DIR): os.makedirs(TimeSlotAvg.DATA_DIR) with bz2.BZ2File(filepath, 'wb') as f: pickle.dump(self, f) def get_floor_avgs(self, fpid:str)->dict:", "should be a Numpy array, tuple or nested list of the form [[x1,x2,y1,y2],...]", "distances[cam] = np.hypot( cam.x-event[0], cam.y-event[1] ) top_n = [ cam[0] for cam in", "= self.plans[cam.floorPlanId].overlay_dimensions cam.set_FOV(shape,coords) else: raise ValueError except (ValueError, TypeError) as err: raise err.__class__(\"Coordinates", "boxes in conf_dict.get(Model.STORE_BMBOXES, dict()).items(): on = conf_dict.get(Model.STORE_BDENABLED,{fpid:False})[fpid] self.setBoundsMask(fpid, on, boxes) def serialize(self): conf", "that people cannot possibly be or are to be ignored eg outside high", "of a bounds mask with given parameters. 
For more info see Floor.set_bounds_mask \"\"\"", "self.exposure if exposure == 0 else exposure # Distribute unfixed observations evenly across", "edge of floorplan and overlay ix_divs = np.floor(np.linspace( 0, ix+self.margin_px[0], mx+1 )).astype(\"int32\") iy_divs", "{}\".format(self.network_id,source_net_id)) def get_type(SAPI_packet:dict) -> int: \"Get the Model layer constant for a given", "width) self.floorplan_dimensions = (self.floorplan.height,self.floorplan.width) self.overlay_dimensions = ( int(self.floorplan_dimensions[0]//Model.CELL_SIZE_M)+1, int(self.floorplan_dimensions[1]//Model.CELL_SIZE_M)+1 ) # Determine the", "provide all available exposures squashed. See Overlay.get_full \"\"\" # Not covered as not", "mx in range(overlay.shape[0]): # Top, bottom ixs = ix_points[mx] ixe = ix_points[mx+1] for", "= 1 else: # Store parsed location in x,y tuple, in m, with", "self.exposure) cp.__unfixed_observations = self.__unfixed_observations.copy() cp.__masked_dataoverlay = self.__masked_dataoverlay.copy() cp.__unmasked_dataoverlay = self.__unmasked_dataoverlay.copy() return cp def", "near enough to call near # Ignore floor mask self.__unmasked_dataoverlay[0][um_possible] += 1.0 /", "floorplan into 3m^2 areas clusters = np.zeros(dims, dtype=\"float32\") for x in range(len(clusters)): for", "requests.post(address, json = snapshot) print(response) def addWebhookAddress(self, webhookAddress:str): self.webhook_addresses.append(webhookAddress) ### Historical def put_historical(self)", "current\" if not (self.timeslot.is_current_time()): self.timeslot.write() self.timeslot = TimeSlotAvg.load( self.data_layers, self.plans ) self.timeslot.verify_and_update_struct( self.data_layers,", "cam.set_FOV(shape,coords) else: raise ValueError except (ValueError, TypeError) as err: raise err.__class__(\"Coordinates supplied of", "new observations self.mask == floor.mask class Model: LAYER_SNAP_WIFI = 1 LAYER_SNAP_BT = 2", "Not covered as only parameter passing bd = 
BoundaryDetector( self.floorplan.get_image(), threshold=wallthreshold ) if", "If exposure 0 (default), will provide all available exposures squashed. See Overlay.get_full \"\"\"", "None and placeable.y != None ) or placeable.has_mask_override: fixed[id] = placeable else: unfixed[id]", "SAPI_packet[SECRET_K] != self.secret: raise Model.BadRequest(\"Request has bad authentication secret - rejecting data\") except", "raise Model.ModelException(\"Model not configured for LAYER_MVSENSE\") def pull_mvsense_data(self): \"Pull live MVSense data from", "conf = dict() conf[Model.STORE_SECRET] = self.secret conf[Model.STORE_TOKEN] = self.validator_token conf[Model.STORE_LAYERS] = set(self.data_layers.keys()) conf[Model.STORE_WEBHOOK]", "for cam in FOVcams: fov = cam.get_FOV() mindist = 9999999 for x,row in", "in floorplans.items(): self.overlays[fpid].verify_and_update(floor) class Overlay: \"Represents an data overlay of a single floorplan\"", "hour=None ): \"Static factory method; Load TimeSlotAvg object from compressed pickle file or", "if Model.LAYER_MVSENSE in self.data_layers.keys(): #Check camera with mac exists if mac in self.query_obj.getCameras().keys():", "debug:bool=None) -> None: \"Updates an average model for a timeslot using the current", "the timeslots model, promoting as exposure of historicals = 1 self.data_layers[l_key].overlays[o_key].set( upd_unfixed_obs[None,], upd_m_overlay[None,],", "dist = np.hypot( (x+0.5)-event[0], (y+0.5)-event[1] ) if dist < mindist: mindist = dist", "For more info, see Overlay.get_delta \"\"\" return { _id : over.get_delta(masked, exposure) for", "import APIQuery, FloorPlan from lib.BoundaryDetector import BoundaryDetector # As per Scanning API v3", "the floorplans but not in the overlays print(\"Info: Creating new overlay for FPID:{}\".format(fpid))", "% Model.CELL_SIZE_M) ) self.margin_px = ( self.margin_m[0] * self.floorplan.px_per_m_h, self.margin_m[1] * self.floorplan.px_per_m_w )", "isinstance(tsa,TimeSlotAvg) 
tsa.verify_and_update_struct(data_layers, floors) return tsa def is_current_time(self, debug=None) -> bool: \"Returns True iff", "the data in the TimeSlotAvg is compatible with the current Model. If layers", "case of dimention mismatch. If mask is outdated, update - note this does", "day, hour = TimeSlotAvg.get_time() try: tsa = bz2.BZ2File(os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(day, hour)), 'rb') tsa = pickle.load(tsa)", ") self.timeslot.verify_and_update_struct( self.data_layers, self.plans ) ### Providers def poll_layer(self,layer:int,exposure:int) -> dict: return self.data_layers[layer].get_full(exposure=exposure)", "placeable.mask_override m_possible = placeable.mask_override else: if placeable.variance < Model.VARIANCE_THRESHOLD: # Calculated minimum reach", "WiFi layer rendered on the floor plan\" return self.plans[floorPlanId].render_overlay(self.data_layers[Model.LAYER_SNAP_WIFI].overlays[floorPlanId].get_delta(exposure=1)) def debug_render(self,fpid)->Image: import datetime", "location in x,y tuple, in m, with change of axis client_loc = np.array(", "using the current model data, if valid time\" if self.is_current_time(debug): # it is", "params - used for unit tests if day==None or hour==None: day, hour =", "self.webhook_threshold = 0.35 for layer in layers: if layer not in Model.LAYERS_ALL: raise", "np.array([0,255,0,180],dtype=np.uint8) BLUR_CELLS = 0.35 destination = self.floorplan.get_image().convert(\"RGBA\") # Overlay scaling absmax = max(overlay.max(),", "1 frame For more info, see Overlay.copy() \"\"\" ly = Layer({}, 1 if", "FOVcams = { cam for cam in cameras if cam.has_FOV() } nonFOVcams =", "floors) tsa.write() else: if __name__!=\"__main__\": assert isinstance(tsa,TimeSlotAvg) tsa.verify_and_update_struct(data_layers, floors) return tsa def is_current_time(self,", "(ValueError, TypeError) as err: raise err.__class__(\"Coordinates supplied of incorrect shape or type, should", "for layer_id, layer in self.data_layers.items() } 
@staticmethod def get_time()->tuple: \"Get the current time", "VARIANCE_THRESHOLD = np.hypot( *( 2*(CELL_SIZE_M / 2,) ) ) # This means let", "#self.write() else: raise TimeSlotAvg.TimeSlotAvgException(f\"Cannot update with current model as it is not currently", "a {}, expected a dict\".format(str(type(SAPI_packet)))) try: source_net_id = SAPI_packet[\"data\"][\"networkId\"] if SAPI_packet[SECRET_K] != self.secret:", "into 3m^2 areas clusters = np.zeros(dims, dtype=\"float32\") for x in range(len(clusters)): for y", "range(um_possible.shape[0]): for y in range(um_possible.shape[1]): # For each square, see if it's centre", "raise ValueError(\"Invalid format for blindspots parameter. Should be of shape (n,4), got {}\".format(str(blindspots)))", "delta data for each overlay in the layer. If exposure 0 (default), will", "rejecting data\") except KeyError as ke: raise Model.BadRequest(\"Request is missing data: \" +", "for id,fp in floorplans.items() } return self.plans def getFloorplanSummary(self) -> dict: \"Get a", "are compatible with current floorplans (Floor objects). Throws ModelException in case of dimention", "smoothing). 
\"\"\" window = self.exposure if exposure == 0 else exposure if masked:", "updates mask\" if self.overlay_dimensions != floor.overlay_dimensions or self.real_dimensions != floor.floorplan_dimensions: raise Model.ModelException(\"Error: Overlay", "# Overlay scaling absmax = max(overlay.max(), overlay.min(), key=abs) #m_max, m_min = absmax, -absmax", "dm = self.plans[fpid] dims = dm.overlay_dimensions testarr = np.zeros(dims).ravel() n = (datetime.datetime.now().second /", "else: raise TimeSlotAvg.TimeSlotAvgException(f\"Cannot update with current model as it is not currently day:{self.day},", "so exposure of 1 avg_um_overlay = avg_layer.overlays[o_key].get_delta(masked=False,exposure=1) avg_m_overlay = avg_layer.overlays[o_key].get_delta(masked=True,exposure=1) avg_unfixed_obs = over.get_unfixed_observations()", "fpid: fp.bm_boxes for fpid,fp in self.plans.items() } conf[Model.STORE_BDENABLED] = { fpid: fp.mask_enabled for", "# Paste alpha on masked region imarr.paste((0,0,0,0),mask) elif pixelmask: print(\"Error: Could not filter", "placeable in observations.items(): bins[placeable.floorPlanId][id] = placeable for fid, overlay in self.overlays.items(): overlay.roll() obs", "king # Maybe after we spend 30 minutes fixing it ;) event =", "# it is valid to update with the current model # For each", "!= self.network_id: raise AttributeError except AttributeError: self.query_obj = APIQuery(netid) self.populate(layers) self.secret = conf_dict.get(Model.STORE_SECRET)", "response def snapshotWebhook(self, snapshot:dict): for address in self.webhook_addresses: response = requests.post(address, json =", "params? threshold into params? 
dims = ( (len(layer)//3)+1, (len(layer[0])//3)+1 ) #splits floorplan into", "for my in range(overlay.shape[1]): val = overlay[mx,my] if val==0: continue #Colour by sign,", "of change # Note this does not change existing data, only new observations", "mask.sum() return data def verify_and_update(self,floor:Floor)->None: \"Verifies overlay is compatible with passed floor, updates", "not filter by pixel mask as no mask exists\") destination.putalpha(255) return Image.alpha_composite(destination,imarr) class", "masked:bool=True, exposure:int=0) -> dict: #pragma: no cover \"\"\" Return the full data for", "+ 0.5) * Model.CELL_SIZE_M - client_loc ) um_possible[x,y] = d <= placeable.variance #", "the floor plan\" return self.plans[floorPlanId].render_overlay(self.data_layers[Model.LAYER_SNAP_WIFI].overlays[floorPlanId].get_delta(exposure=1)) def debug_render(self,fpid)->Image: import datetime dm = self.plans[fpid] dims", "preparing for a new frame of data\" self.__masked_dataoverlay = np.roll(self.__masked_dataoverlay, 1, axis=0) self.__unmasked_dataoverlay", ") for fpid, boxes in conf_dict.get(Model.STORE_BMBOXES, dict()).items(): on = conf_dict.get(Model.STORE_BDENABLED,{fpid:False})[fpid] self.setBoundsMask(fpid, on, boxes)", "= over.get_delta(masked=True) new_unfixed_obs = over.get_unfixed_observations() # Similar from averages # avg layers are", "floors. Blindspots should be a Numpy array, tuple or nested list of the", "= { cam for cam in self.query_obj.getCameras().values() if cam.floorPlanId == floor.floorplan.id } FOVcams", "(exp,x,y) self.__unfixed_observations = np.zeros(exposure) self.__masked_dataoverlay = np.zeros( (exposure,)+overlay_dimensions, dtype=\"float32\" ) self.__unmasked_dataoverlay = np.zeros(", "new data overlay to the timeslots model, promoting as exposure of historicals =", "Model.CONFIG_PATH, 'rb' ) as f: config_data = pickle.load(f) selected_id = config_data[Model.STORE_SELECTED] select_data =", "full data for each overlay in the layer. 
If exposure 0 (default), will", "the given floor eg outside high floors. Blindspots should be a Numpy array,", "Count c = self.count[l_key][o_key] # update the average by adding current values to", "Model.DOWNSAMPLE_THRESHOLD def calc_bounds_mask(self,blindspots=None,wallthreshold:float=None) -> Image.Image: \"\"\" Generate a preview of a bounds mask", "placeable in observations.items(): if ( placeable.x != None and placeable.y != None )", "\"\"\" Returns a copy of the full client overlay (including distributed unfixed observations)", "floorplan # Dimentions should default to (height, width) self.floorplan_dimensions = (self.floorplan.height,self.floorplan.width) self.overlay_dimensions =", "on exposure, see Overlay.get_delta \"\"\" window = self.exposure if exposure == 0 else", "(self.floorplan_dimensions[1] % Model.CELL_SIZE_M) ) self.margin_px = ( self.margin_m[0] * self.floorplan.px_per_m_h, self.margin_m[1] * self.floorplan.px_per_m_w", "A floorplan not represented in the floorplans but not in the overlays print(\"Info:", "Floorplans def pullFloors(self) -> dict: \"Pull floorplans from the network, construct blank floor", "data\") except KeyError as ke: raise Model.BadRequest(\"Request is missing data: \" + str(ke)", "TypeError(\"JSON parsed a {}, expected a dict\".format(str(type(SAPI_packet)))) try: source_net_id = SAPI_packet[\"data\"][\"networkId\"] if SAPI_packet[SECRET_K]", "Blindspots should be a Numpy array, tuple or nested list of the form", "self.plans.items() } return conf def write_config_data(self): config_data = {Model.STORE_SELECTED:self.network_id} config_data[self.network_id] = self.serialize() with", "for l_id, layer in self.data_layers.items(): # Add any missing overlays layer.verify_and_update(floors) for l_id", "Coords pertain to sqm pixels on internal datamap. 
Pass len(iterable)==0 to unset mask", "curr_dt.weekday() curr_hour = curr_dt.hour return curr_day, curr_hour def verify_and_update_struct(self, data_layers:dict, floors:dict)->None: \"\"\" Verifies", "poll_layer(self,layer:int,exposure:int) -> dict: return self.data_layers[layer].get_full(exposure=exposure) def render_delta(self,floorPlanId:str)->Image: \"Get the current datamap in terms", "0 def clear(self) -> None: \"Clear accumulated observation data including exposure\" self.__unfixed_observations[:] =", "= 0.35 for layer in layers: if layer not in Model.LAYERS_ALL: raise Model.ModelException(Model.__BAD_LAYER.format(layer))", "pass def __init__(self,network_id:str=None,layers:set={}): \"Initialise model. API key must be defined in enviroment variable", "in place, mask override takes precident um_possible = placeable.mask_override m_possible = placeable.mask_override else:", "in conf_dict.get(Model.STORE_FOVCOORDS,dict()).items(): self.setFOVs( mac, coords ) for fpid, boxes in conf_dict.get(Model.STORE_BMBOXES, dict()).items(): on", "BoundaryDetector # As per Scanning API v3 SECRET_K = \"secret\" class Floor: \"Wrapper", "# m(asked)_possible is a copy of the u(n)m(asked)_possible's with the floor mask applied", "respective average layer avg_layer = self.data_layers[l_key] # For each overlay in the respective", "gives only the latest frame (no smoothing). \"\"\" window = self.exposure if exposure", "if spikedict['spike'] == True: idealality, cameras = self.nearestCameras(2, floor, spikedict) #need to get", "array, tuple or nested list of the form [[x1,x2,y1,y2],...]\" if blindspots==None: pass elif", "blindspots==None: pass elif not (isinstance(blindspots,(np.ndarray,list,tuple))): raise TypeError(\"Invalid type for blindspots parameter. Should be", "Set the FOV coords from given camera (by mac). 
Coords should be iterable", "image has bounds mask set, will mask final image to keep overlay heatmap", "as f: pickle.dump(self, f) def get_floor_avgs(self, fpid:str)->dict: \"Return the flat average Overlay object", "real_dimensions self.mask = floormask assert exposure > 0 self.exposure = exposure # Exposure", "to keep overlay heatmap within bounds. \"\"\" POS = np.array([255,0,0,180],dtype=np.uint8) NEG = np.array([0,255,0,180],dtype=np.uint8)", "see if it's centre is close enough to be within variance metres d", "floors:dict, day=None, hour=None ): \"Static factory method; Load TimeSlotAvg object from compressed pickle", "isinstance(self.query_obj, APIQuery) self.network_id = self.query_obj.network_id self.plans = self.pullFloors() self.getAPs() self.query_obj.pullCameras() self.data_layers = dict()", "real_dimensions:tuple, floormask:np.ndarray, exposure:int): self.floorid = floorid #self.observations = dict() self.overlay_dimensions = overlay_dimensions self.real_dimensions", "layer.verify_and_update(self.plans) ### Access Points def getAPs(self) -> None: \"Get APs and store internally", "pragma: no cover # Not covered as only parameter passing bd = BoundaryDetector(", "threshhold)->dict: #add floorplan ID into params, camera/wifi/bluetoothall into params? threshold into params? dims", "{ cam for cam in cameras if cam.has_FOV() } nonFOVcams = cameras -", "exposure, default (0) combines all available frames, 1 gives only the latest frame", "overlay in self.overlays.items(): overlay.roll() obs = bins.get(fid) if obs != None: overlay.add(obs) def", "the full data for each overlay in the layer. 
If exposure 0 (default),", "np.roll(self.__unmasked_dataoverlay, 1, axis=0) self.__unfixed_observations = np.roll(self.__unfixed_observations, 1, axis=0) self.__masked_dataoverlay[0] = 0 self.__unmasked_dataoverlay[0] =", "dict(zip(range(len(obs)),obs)) def provide_scanning(self,SAPI_packet:dict) -> None: \"Update model with SAPI data\" # Raise a", "do we get laying one mask over the other) # Account for margin", "the floorplan (or mask) mask = self.mask if masked else np.ones(self.overlay_dimensions,dtype=np.bool_) data[mask] +=", "= { ov_id:self.count[l_id].get(ov_id,1) for ov_id in data_layers[l_id].overlays.keys() } def sha256(inpt:str) -> str: m", "os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir)) sys.path.append(parentddir) from lib.APIQuery import APIQuery, FloorPlan from lib.BoundaryDetector import BoundaryDetector #", "with mac {} not found\".format(mac)) else: raise Model.ModelException(\"Model not configured for LAYER_MVSENSE\") def", "def set_observations(self,observations:dict): \"Set the layer to contain passed observations, clearing any previous. 
Pass", "+= 1.0 / um_possible.sum() # Include floor mask self.__masked_dataoverlay[0][m_possible] += 1.0 / m_possible.sum()", "getCameraImage(self, camera) -> dict: #returns a dictionary containing a link to the image", "self.overlays[fpid].verify_and_update(floor) class Overlay: \"Represents an data overlay of a single floorplan\" def __init__(self,", "TimeSlotAvg object is not current\" if not (self.timeslot.is_current_time()): self.timeslot.write() self.timeslot = TimeSlotAvg.load( self.data_layers,", "fixing it ;) event = spikeDict[\"location\"] event_root = tuple([ int(d) for d in", "to specify mean smoothing on the first n frames of stored exposure, default", "a timeslot using the current model data, if valid time\" if self.is_current_time(debug): #", "FPID:{}\".format(fpid)) fp = floorplans[fpid] self.overlays[fpid] = Overlay(fpid, fp.overlay_dimensions, fp.floorplan_dimensions, fp.mask, self.exposure) for fpid,", "def comp_historical(self, floorPlanId:str): \"Get the relative busyness of a floorplan using all layers\"", "general use, instead use Overlay.add\" assert len(self.__unfixed_observations.shape) == len(unfixed_count.shape) assert len(self.__masked_dataoverlay.shape) == len(masked_overlay.shape)", "# save the new data overlay to the timeslots model, promoting as exposure", "0 to unset try: if len(coords) == 0 or False not in [", "/ absmax) #Left, right iys = iy_points[my] iye = iy_points[my+1] imarr[ixs:ixe,iys:iye] = POS", "place, mask override takes precident um_possible = placeable.mask_override m_possible = placeable.mask_override else: if", "[ cam[0] for cam in sorted(distances.items(),key=lambda x: x[1])[:n] ] return (\"Best Effort\", top_n)", "TimeSlotAvgException( Exception ): pass DATA_DIR = \"historical_data\" def __init__(self, data_layers:dict, day:int, hour:int): self.day", "= np.roll(self.__unfixed_observations, 1, axis=0) self.__masked_dataoverlay[0] = 0 self.__unmasked_dataoverlay[0] = 0 self.__unfixed_observations[0] = 0", "else: if 
__name__!=\"__main__\": assert isinstance(tsa,TimeSlotAvg) tsa.verify_and_update_struct(data_layers, floors) return tsa def is_current_time(self, debug=None) ->", "(self.floorplan_dimensions[0] % Model.CELL_SIZE_M), Model.CELL_SIZE_M - (self.floorplan_dimensions[1] % Model.CELL_SIZE_M) ) self.margin_px = ( self.margin_m[0]", "if ap.floorPlanId in self.plans.keys(): self.plans[ap.floorPlanId].aps[mac] = ap ### Scanning API (SAPI) def __validate_scanning(self,SAPI_packet:dict)", "clear the transient data # Also we only need 1 frame to store", "Overlay and Floorplan dimension mismatch for FPID={}\".format(floor.floorplan.id)) # Update mask in case of", "Historical def put_historical(self) -> None: \"Updates the average data for the current TimeSlotAvg", "= conf_dict.get(Model.STORE_LAYERS,set()) try: if netid != self.network_id: raise AttributeError except AttributeError: self.query_obj =", "= self.webhook_threshold conf[Model.STORE_PASSWORD] = self.password conf[Model.STORE_FOVCOORDS] = { cam.mac: cam.get_fov_coords() for cam in", "layer for each\" floorplans = self.query_obj.pullFloorPlans() self.plans = { id : Floor(fp) for", "( avg_um_overlay * c + new_um_overlay ) / (c+1) upd_m_overlay = ( avg_m_overlay", "Model.ModelException(\"Camera with mac {} not found\".format(mac)) else: raise Model.ModelException(\"Model not configured for LAYER_MVSENSE\")", "in range(my): #Left, right il = iy_divs[y] ir = iy_divs[y+1] # As array", "as err: raise err.__class__(\"Coordinates supplied of incorrect shape or type, should be iterable", "self.__generate_person_obs() self.data_layers[Model.LAYER_MVSENSE].set_observations(observations) def spike(self, layer, threshhold)->dict: #add floorplan ID into params, camera/wifi/bluetoothall into", "overlay in the respective new data layer for o_key, over in layer.overlays.items(): #", "= ( self.margin_m[0] * self.floorplan.px_per_m_h, self.margin_m[1] * self.floorplan.px_per_m_w ) self.mask_enabled = False self.pixelmask", "overlay[mx,my] 
import os
import sys
import numpy as np
from PIL import Image, ImageFilter
import bz2
import pickle
import datetime
import requests
import hashlib

parentddir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir))
sys.path.append(parentddir)
from lib.APIQuery import APIQuery

# Keys per Scanning API v3
SECRET_K = "secret"

class Floor:
    "Wrapper class for the floorplan object including model data"

    def __init__(self, floorplan):
        self.floorplan = floorplan
        # Dimensions should default to (height, width)
        self.floorplan_dimensions = (self.floorplan.height, self.floorplan.width)
        self.overlay_dimensions = ( int(self.floorplan_dimensions[0]//Model.CELL_SIZE_M)+1,
                                    int(self.floorplan_dimensions[1]//Model.CELL_SIZE_M)+1 )
        # Margin between the edge of the floorplan and the data overlay, in metres and px
        self.margin_m = ( Model.CELL_SIZE_M - (self.floorplan_dimensions[0] % Model.CELL_SIZE_M),
                          Model.CELL_SIZE_M - (self.floorplan_dimensions[1] % Model.CELL_SIZE_M) )
        self.margin_px = ...  # same margin expressed in image pixels (derivation elided)
        self.mask = np.ones(self.overlay_dimensions, dtype=np.bool_)
        self.mask_enabled = False
        self.bm_boxes = None
        self.pixelmask = None

    def set_bounds_mask(self, blindspots=None, wallthreshold:float=None) -> None:
        """ Generate a mask from BoundaryDetector for areas of the floor that people
        cannot possibly be on the given floor, eg outside high floors.
        Blindspots should be a Numpy array, tuple or nested list of the form [[x1,x2,y1,y2],...] """
        self.bm_boxes = blindspots
        if wallthreshold == None:
            bd = BoundaryDetector( self.floorplan.get_image() )
        else: # pragma: no cover
            bd = BoundaryDetector( self.floorplan.get_image(), threshold=wallthreshold )
        if blindspots != None:
            for spot in blindspots:
                bd.add_blindspot(*tuple(spot))
        bd.run()
        self.pixelmask = bd.getBoundaryMask()
        #Downsample from pixel level to cell level
        #Assuming cell >> pixel
        #Mask (1msq-scale, small-dims) dims
        mx = self.mask.shape[0]
        my = self.mask.shape[1]
        #Image-scale pixel mask (mini-scale, big-dims) dims
        ix = self.pixelmask.shape[0]
        iy = self.pixelmask.shape[1]
        #Image-scale chunk divisions (what coords do we get each cell's pixels from)
        ix_divs = np.floor(np.linspace( 0, ix+self.margin_px[0], mx+1 )).astype("int32")
        iy_divs = np.floor(np.linspace( 0, iy+self.margin_px[1], my+1 )).astype("int32")
        for x in range(mx):
            #Top, bottom
            it = ix_divs[x]
            ib = ix_divs[x+1]
            for y in range(my):
                #Left, right
                il = iy_divs[y]
                ir = iy_divs[y+1]
                #mean() of a boolean block gives ratio of elems 1 to elems total
                self.mask[x][y] = self.pixelmask[it:ib,il:ir].mean() < Model.DOWNSAMPLE_THRESHOLD

    def calc_bounds_mask(self, blindspots=None, wallthreshold:float=None) -> Image.Image:
        "Generate a preview of the bounds mask"
        ...

    def render_overlay(self, overlay:np.ndarray, destination:Image.Image, pixelmask:bool=True) -> Image.Image:
        """ Render the overlay onto the floorplan image in heatmap form.
        If pixelmask and image has bounds mask, clip the overlay heatmap within bounds. """
        POS = np.array([255,0,0,180],dtype=np.uint8)
        NEG = np.array([0,255,0,180],dtype=np.uint8)
        BLUR_CELLS = ...
        #Symmetric scaling
        absmax = max(overlay.max(), overlay.min(), key=abs)
        #m_max, m_min = absmax, -absmax
        imarr = np.zeros((destination.size[1],destination.size[0],4),dtype="uint8")
        # Account for margin between edge of floorplan and overlay
        ix_points = np.floor(np.linspace( 0, imarr.shape[0] + self.margin_px[0], overlay.shape[0] + 1 )).astype("int32")
        iy_points = np.floor(np.linspace( 0, imarr.shape[1] + self.margin_px[1], overlay.shape[1] + 1 )).astype("int32")
        for mx in range(overlay.shape[0]):
            # Top, bottom
            ixs = ix_points[mx]
            ixe = ix_points[mx+1]
            for my in range(overlay.shape[1]):
                # Left, right
                iys = iy_points[my]
                iye = iy_points[my+1]
                val = overlay[mx,my]
                if val==0: continue
                #Colour by sign, scale alpha by magnitude
                pos = val > 0
                alpha = abs(val / absmax)
                imarr[ixs:ixe,iys:iye] = POS if pos else NEG
                imarr[ixs:ixe,iys:iye,3] = ( imarr[ixs:ixe,iys:iye,3].astype(np.float64) * alpha ).astype(np.uint8)
        imarr = Image.fromarray(imarr,"RGBA").filter( ImageFilter.BoxBlur(
            BLUR_CELLS * destination.size[0] / overlay.shape[1] ) )
        if isinstance(self.pixelmask, np.ndarray) and pixelmask:
            mask = Image.fromarray(255*self.pixelmask.astype(np.uint8),"L")
            # Paste alpha on masked region
            imarr.paste((0,0,0,0),mask)
        elif pixelmask:
            print("Error: Could not filter by pixel mask as mask is not set")
        destination.paste(imarr, (0,0), imarr)
        return destination

class Layer:
    "Class representing a data layer: a series of overlays covering all floorplans with data from one source"

    def __init__(self, floorplans:dict, exposure:int):
        self.exposure = exposure
        self.overlays = { _id : Overlay(_id, floor.overlay_dimensions, floor.floorplan_dimensions, floor.mask, exposure)
                          for _id, floor in floorplans.items() }

    def set_observations(self, observations:dict):
        "Set the layer to contain passed observations, clearing any previous. Pass floor-keyed placeables."
        bins = { id : dict() for id in self.overlays.keys() }
        for id, placeable in observations.items():
            bins[placeable.floorPlanId][id] = placeable
        for fid, overlay in self.overlays.items():
            overlay.roll()
            obs = bins.get(fid)
            if obs != None:
                overlay.add(obs)

    def get_deltas(self, masked:bool=True, exposure:int=0) -> dict:
        """ Return the delta data for each overlay in the layer. """
        return { _id : over.get_delta(masked, exposure) for _id, over in self.overlays.items() }

    def get_full(self, masked:bool=True, exposure:int=0) -> dict: #pragma: no cover
        """ Return the full data for each overlay in the layer.
        Marked no cover as not required for current scope """
        return { _id : over.get_full(masked, exposure) for _id, over in self.overlays.items() }

    def copy(self, flatten:bool=True):
        """ Return a copy of the layer. Flatten squashes exposures into 1 frame.
        For more info, see Overlay.copy() """
        ly = Layer({}, 1 if flatten else self.exposure)
        ly.overlays = { _id : over.copy(flatten) for _id, over in self.overlays.items() }
        return ly

    def clear(self) -> None:
        "Clear all member overlays of observation data"
        for over in self.overlays.values():
            over.clear()

    def verify_and_update(self, floorplans:dict):
        """ Ensure overlays are compatible with current floorplans (Floor objects).
        Throws ModelException in case of dimension mismatch. If mask is outdated, update it.
        If an overlay is extra, do nothing """
        for fpid in set(floorplans).difference(self.overlays.keys()):
            # A floorplan in the floorplans but not in the overlays
            print("Info: Creating new overlay for FPID:{}".format(fpid))
            fp = floorplans[fpid]
            self.overlays[fpid] = Overlay(fpid, fp.overlay_dimensions, fp.floorplan_dimensions, fp.mask, self.exposure)
        for fpid, floor in floorplans.items():
            self.overlays[fpid].verify_and_update(floor)

class Overlay:
    "Represents an overlay of observation data over a single floorplan"

    def __init__(self, floorid, overlay_dimensions:tuple, real_dimensions:tuple, floormask:np.ndarray, exposure:int):
        self.floorid = floorid
        self.overlay_dimensions = overlay_dimensions
        self.real_dimensions = real_dimensions
        self.mask = floormask
        assert exposure > 0
        self.exposure = exposure
        # Frame queue, shape (exp,x,y)
        self.__unfixed_observations = np.zeros(exposure)
        self.__masked_dataoverlay = np.zeros( (exposure,)+overlay_dimensions, dtype="float32" )
        self.__unmasked_dataoverlay = np.zeros( (exposure,)+overlay_dimensions, dtype="float32" )

    def set(self, unfixed_count:np.ndarray, masked_overlay:np.ndarray, unmasked_overlay:np.ndarray):
        "Sets internal observation data. Not for general use, instead use Overlay.add"
        assert len(self.__unfixed_observations.shape) == len(unfixed_count.shape)
        assert len(self.__masked_dataoverlay.shape) == len(masked_overlay.shape)
        assert len(self.__unmasked_dataoverlay.shape) == len(unmasked_overlay.shape)
        self.__unfixed_observations = unfixed_count
        self.__masked_dataoverlay = masked_overlay
        self.__unmasked_dataoverlay = unmasked_overlay

    def roll(self) -> None:
        "Roll the exposure, preparing for a new frame of data"
        self.__unfixed_observations = np.roll(self.__unfixed_observations, 1, axis=0)
        self.__masked_dataoverlay = np.roll(self.__masked_dataoverlay, 1, axis=0)
        self.__unmasked_dataoverlay = np.roll(self.__unmasked_dataoverlay, 1, axis=0)
        self.__masked_dataoverlay[0] = 0
        self.__unmasked_dataoverlay[0] = 0
        self.__unfixed_observations[0] = 0

    def clear(self) -> None:
        "Clear all data including exposure"
        self.__unfixed_observations[:] = 0
        self.__masked_dataoverlay[:] = 0
        self.__unmasked_dataoverlay[:] = 0

    def copy(self, flatten:bool=False):
        """ Return a copy of the overlay. If flatten, squash (mean) exposure window into 1 frame. """
        if flatten:
            cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, 1)
            cp.__unfixed_observations[0] = self.__unfixed_observations.mean(axis=0)
            cp.__masked_dataoverlay[0] = self.__masked_dataoverlay.mean(axis=0)
            cp.__unmasked_dataoverlay[0] = self.__unmasked_dataoverlay.mean(axis=0)
        else:
            cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, self.exposure)
            cp.__unfixed_observations = self.__unfixed_observations.copy()
            cp.__masked_dataoverlay = self.__masked_dataoverlay.copy()
            cp.__unmasked_dataoverlay = self.__unmasked_dataoverlay.copy()
        return cp

    def add(self, observations:dict) -> None:
        fixed = {}
        unfixed = {}
        for id, placeable in observations.items():
            if ( placeable.variance != None ) or placeable.has_mask_override:
                fixed[id] = placeable
            else:
                unfixed[id] = placeable
        self.__add_fixed_locations(fixed)
        self.__add_unfixed_locations(unfixed)

    def __add_fixed_locations(self, fixed:dict) -> None:
        for placeable in fixed.values():
            # We store both a copy of the m(asked)_possible locations and the u(n)m(asked)_possible locations
            um_possible = np.zeros(self.overlay_dimensions, dtype=np.bool_)
            if placeable.has_mask_override:
                # Even if a floorplan mask is in place, mask override takes precedent
                um_possible = placeable.mask_override
                m_possible = placeable.mask_override
            else:
                if placeable.variance < Model.VARIANCE_THRESHOLD:
                    # Calculated minimum reach
                    # Account for change of axis
                    um_possible[-int(placeable.y/Model.CELL_SIZE_M), int(placeable.x/Model.CELL_SIZE_M)] = True
                else:
                    # Client location tuple, in m, with change of axis
                    client_loc = np.array( [self.real_dimensions[0] - placeable.y, placeable.x ] )
                    for x in range(um_possible.shape[0]):
                        for y in range(um_possible.shape[1]):
                            # For each square, see if its centre is close enough to be within variance metres
                            d = np.hypot( *(client_loc - (np.array([x,y]) + 0.5) * Model.CELL_SIZE_M) )
                            um_possible[x,y] = d <= placeable.variance
                # Make a copy of the u(n)m(asked)_possible's with the floor mask applied
                m_possible = np.logical_and( um_possible, self.mask, dtype=np.bool_ )
            # If there's a DIV0 here, the search hasn't found any near enough to call near
            # Ignore floor mask
            self.__unmasked_dataoverlay[0][um_possible] += 1.0 / um_possible.sum()
            # Respect floor mask
            self.__masked_dataoverlay[0][m_possible] += 1.0 / m_possible.sum()

    def __add_unfixed_locations(self, unfixed:dict) -> None:
        self.__unfixed_observations[0] += len(unfixed)

    def get_delta(self, masked:bool=True, exposure:int=0) -> np.ndarray:
        """ Return the delta overlay (only fixed observations). If masked, will select
        data masked at input, else unmasked data. Set exposure to specify a window within
        the stored exposure; default (0) combines all available frames, 1 gives only the latest. """
        window = self.exposure if exposure == 0 else exposure
        if masked:
            return self.__masked_dataoverlay[:window].mean(axis=0)
        else:
            return self.__unmasked_dataoverlay[:window].mean(axis=0)

    def get_unfixed_observations(self, exposure:int=0) -> float:
        """ Return how many unfixed observations are held, averaged over the window """
        window = self.exposure if exposure == 0 else exposure
        return self.__unfixed_observations[:window].mean(axis=0)

    def get_full(self, masked:bool=True, exposure:int=0) -> np.ndarray: #pragma: no cover
        """ Returns the full overlay: fixed deltas plus unfixed observations. """
        delta = self.get_delta(masked, exposure)
        # Distribute unfixed observations evenly across the floorplan (or mask)
        mask = self.mask if masked else np.ones(self.overlay_dimensions, dtype=np.bool_)
        delta[mask] += self.get_unfixed_observations(exposure) / mask.sum()
        return delta

    def verify_and_update(self, floor) -> None:
        "Verify dimensions against the Floor object, updating the stored mask"
        if self.overlay_dimensions != floor.overlay_dimensions:
            raise Model.ModelException("Dimension mismatch on overlay for FPID={}".format(floor.floorplan.id))
        # Update mask in case of change
        # Note this does not re-mask data already stored
        self.mask = floor.mask

class Model:

    LAYER_SNAP_WIFI = 1
    LAYER_SNAP_BT = 2
    LAYER_MVSENSE = 3
    LAYERS_ALL = {LAYER_SNAP_WIFI, LAYER_SNAP_BT, LAYER_MVSENSE}
    CONFIG_PATH = os.path.join('model.conf')
    CELL_SIZE_M = 1
    DOWNSAMPLE_THRESHOLD = ...
    # This means let x = cell_s_m / 2; V_T = sqrt(x^2+x^2)
    # I wish I was kidding
    VARIANCE_THRESHOLD = np.hypot( *(2*(CELL_SIZE_M / 2,)) )
    DEFAULT_EXPOSURE = 3
    DEFAULT_PASSWORD = '<PASSWORD>'
    __BAD_LAYER = "Layer {} not defined. Use internally defined layer (eg Model.LAYER_*)"

    STORE_SELECTED = ...
    STORE_LAYERS = ...
    STORE_WEBHOOK = ...
    STORE_FOVCOORDS = ...
    STORE_BMBOXES = ...
    STORE_BDENABLED = "bd_enabled"
    STORE_SECRET = "sapisecret"
    STORE_TOKEN = "<PASSWORD>_token"
    STORE_PASSWORD = "<PASSWORD>"
    STORE_WHTHRESHOLD = "webhook_threshold"

    class ModelException(Exception):
        pass

    class BadRequest(Exception):
        pass

    def __init__(self, network_id:str=None, layers:set={}):
        "Initialise model. API key must be defined in environment variable \"MERAKI_DASHBOARD_API_KEY\""
        self.read_config_data()
        self.write_config_data()

    def populate(self, layers:set):
        assert isinstance(self.query_obj, APIQuery)
        self.network_id = self.query_obj.network_id
        self.plans = self.pullFloors()
        self.getAPs()
        self.query_obj.pullCameras()
        self.data_layers = dict()
        self.webhook_threshold = ...
        for l_id in layers:
            self.data_layers[l_id] = Layer(self.plans, Model.DEFAULT_EXPOSURE)
        self.timeslot = TimeSlotAvg.load(self.data_layers, self.plans)

    def pullFloors(self) -> dict:
        "Pull floorplans from the network, construct blank floor layer for each"
        floorplans = ...  # (APIQuery call elided)
        self.plans = { fpid: Floor(fp) for fpid, fp in floorplans.items() }
        return self.plans

    def getAPs(self) -> None:
        "Get APs and store internally in relevant floor objects"
        aps = self.query_obj.pullAPs()
        for mac, ap in aps.items():
            ...  # attach each AP to its Floor (body elided)

    def getFloorplanSummary(self) -> dict:
        "Get a dict of retrieved floor plan IDs and names"
        return { k: v.floorplan.name for k, v in self.plans.items() }

    def findFloorplanByName(self, name) -> str:
        "Find a floorPlanId from the floor name"
        for k in self.plans:
            if self.plans[k].floorplan.name == name:
                return k
        return None

    def provide_scanning(self, SAPI_packet:dict) -> None:
        "Update model with SAPI observation data"
        if type(SAPI_packet) != dict:
            raise TypeError("JSON parsed a {}, expected a dict".format(str(type(SAPI_packet))))
        source_net_id = SAPI_packet["data"]["networkId"]
        if SAPI_packet[SECRET_K] != self.secret:
            raise Model.BadRequest(...)
        if source_net_id != self.network_id:
            raise Model.BadRequest("Request has data from wrong network: expected {} got {}".format(self.network_id, source_net_id))
        dest_layer = Model.get_type(SAPI_packet)
        observations = self.query_obj.extract_SAPI_observations(SAPI_packet)
        self.data_layers[dest_layer].set_observations(observations)

    @staticmethod
    def get_type(SAPI_packet:dict) -> int:
        "Get the Model layer constant for a given SAPI packet"
        api_layer_val = ...  # read from the packet (key elided)
        if api_layer_val == "WiFi":
            return Model.LAYER_SNAP_WIFI
        elif api_layer_val == "Bluetooth":
            return Model.LAYER_SNAP_BT
        else:
            raise Model.ModelException(Model.__BAD_LAYER.format(api_layer_val))

    ### Camera and MVSense
    def setFOVs(self, mac:str, coords:set) -> None:
        """ Set the FOV coords for a camera (by mac). Coords should be iterable of shape (n,2).
        Coords pertain to sqm pixels on internal overlays. """
        #Check if Layer exists
        if Model.LAYER_MVSENSE in self.data_layers.keys():
            #Check camera with mac exists
            if mac in self.query_obj.getCameras().keys():
                #Check coords of correct shape and iterable, or of len 0 to unset
                try:
                    if len(coords) == 0 or False not in [ len(coord)==2 for coord in coords ]:
                        ...  # store coords on the camera object (elided)
                    else:
                        raise TypeError
                except TypeError:
                    raise Model.ModelException("FOV coords of incorrect shape or type, should be iterable shape (n,2)")
            else:
                raise Model.ModelException("Camera with mac {} not found".format(mac))
        else:
            raise Model.ModelException("Model has no layer for LAYER_MVSENSE")

    def __generate_person_obs(self) -> dict:
        "Generate person objects with an arbitrary key"
        obs = self.query_obj.get_camera_observations()
        # Zip [0,n) with the observations, converting to dictionary
        return dict(zip(range(len(obs)), obs))

    def pull_mvsense_data(self):
        "Pull live MVSense data from cameras and feed into the model"
        observations = self.__generate_person_obs()
        self.data_layers[Model.LAYER_MVSENSE].set_observations(observations)

    def spike(self, layer, threshold) -> dict:
        #add floorplan ID into params, camera/wifi/bluetooth all into params? threshold into params?
        dims = ( (len(layer)//3)+1, (len(layer[0])//3)+1 ) #splits floorplan into 3x3 clusters
        clusters = np.zeros(dims)
        ...  # accumulate each 3x3 block into clusters and locate the busiest (x, y) (elided)
        busiest = clusters[x][y]
        busiest_location = (3*(x+0.5), 3*(y+0.5))
        return {'spike': busiest > threshold, 'location': busiest_location}

    def nearestCameras(self, n:int, floor:Floor, spikeDict:dict) -> tuple:
        #returns a list of camera objects
        # They call me the comprehension king
        # Maybe after we spend 30 minutes fixing it ;)
        event = spikeDict["location"]
        event_root = tuple([ int(d) for d in event ])
        cameras = { cam for cam in self.query_obj.getCameras().values() if cam.floorPlanId == floor.floorplan.id }
        FOVcams = { cam for cam in cameras if cam.has_FOV() }
        nonFOVcams = cameras - FOVcams
        hasView = ...  # FOV cams that directly cover the event cell (selection elided)
        distances = dict()
        for cam in FOVcams:
            mindist = np.inf
            for coord in cam.FOV:
                dist = np.hypot( coord[0]-event[0], coord[1]-event[1] )
                if dist < mindist:
                    mindist = dist
            distances[cam] = mindist
        for cam in nonFOVcams:
            distances[cam] = np.hypot( cam.x-event[0], cam.y-event[1] )
        top_n = [ cam[0] for cam in sorted(distances.items(), key=lambda x: x[1])[:n] ]
        return ("Best Effort", top_n)

    def getCameraImage(self, camera) -> dict:
        #returns a dictionary containing a link to a camera snapshot
        response = self.query_obj.getCameraSnap(camera)
        return response

    def snapshotWebhook(self, snapshot:dict):
        for address in self.webhook_addresses:
            response = requests.post(address, json = snapshot)
            print(response)

    def addWebhookAddress(self, webhookAddress:str):
        self.webhook_addresses.append(webhookAddress)

    ### Historical
    def put_historical(self) -> None:
        "Updates the average data for the current TimeSlotAvg object"
        self.update_timeslot()
        # Feed the current layers into the timeslot object
        self.timeslot.update_avg_data( self.data_layers )

    def comp_historical(self, floorPlanId:str):
        "Get the relative busyness of a floorplan using all layers"
        self.update_timeslot()
        # Get the current timeslot object
        hist_fp_data = self.timeslot.get_floor_avgs(floorPlanId)
        collective = np.zeros( self.plans[floorPlanId].overlay_dimensions )
        for lid in self.data_layers.keys():
            mask_enabled = self.plans[floorPlanId].mask_enabled
            current = self.data_layers[lid].overlays[floorPlanId].get_delta(masked=mask_enabled)
            historical = hist_fp_data[lid].get_delta(masked=mask_enabled, exposure=1)
            collective += current - historical
        return collective

    def update_timeslot(self):
        "Calls the factory if the current TimeSlotAvg object is not current"
        if not (self.timeslot.is_current_time()):
            self.timeslot.write()
            self.timeslot = TimeSlotAvg.load( self.data_layers, self.plans )
        self.timeslot.verify_and_update_struct( self.data_layers, self.plans )

    ### Providers
    def poll_layer(self, layer:int, exposure:int) -> dict:
        "Return the delta overlays for a given layer"
        if layer not in self.data_layers:
            raise Model.ModelException(Model.__BAD_LAYER.format(layer))
        return self.data_layers[layer].get_deltas(exposure=exposure)

    def render_abs(self, floorPlanId:str) -> Image:
        "Get latest frame of WiFi layer rendered on the floor plan"
        datamap = self.data_layers[Model.LAYER_SNAP_WIFI].overlays[floorPlanId].get_delta(exposure=1)
        return self.plans[floorPlanId].render_overlay(datamap, self.plans[floorPlanId].floorplan.get_image())

    def setBoundsMask(self, floor_id:str, on:bool, blindspots=None, wallthreshold:float=None) -> None:
        "Generate a mask from BoundaryDetector for areas that people cannot possibly be on the given floor"
        if blindspots == None:
            pass
        elif not isinstance(blindspots, (np.ndarray, list, tuple)):
            raise TypeError("Invalid type for blindspots parameter. Should be of type np.array, list or tuple")
        elif False in [ len(spot) == 4 for spot in blindspots ]:
            raise ValueError("Invalid format for blindspots parameter. Should be of shape (n,4), got {}".format(str(blindspots)))
        if wallthreshold == None:
            pass
        elif not isinstance(wallthreshold, (int, float)):
            raise TypeError("Invalid type for wallthreshold parameter. Should be of type int or float")
        self.plans[floor_id].mask_enabled = on
        if on:
            self.plans[floor_id].set_bounds_mask(blindspots, wallthreshold)

    def debug_render(self, fpid) -> Image:
        import datetime
        dm = self.plans[fpid]
        dims = dm.overlay_dimensions
        testarr = np.zeros(dims).ravel()
        ...  # paint a test pattern and render it (elided)

    def update(self) -> None:
        "Update non-webhook (non-SAPI) layers, write history"
        self.pull_mvsense_data()
        self.put_historical()
        #spike detect
        POST_data = {}
        for fpid, floor in self.plans.items():
            spikedict = self.spike(self.comp_historical(fpid), self.webhook_threshold)
            if spikedict['spike'] == True:
                idealality, cameras = self.nearestCameras(2, floor, spikedict)
                #need to get relevant floor snapshots
                POST_data[fpid] = { ... : "SnapshotData", "is_ideal" : idealality }  # (key elided)
                for i, cam in enumerate(cameras):
                    response = self.getCameraImage(cam)
                    POST_data[fpid]["camera_data_" + str(i)] = response
        if POST_data != {}:
            self.snapshotWebhook(POST_data)

    ### Configuration
    def update_model_config(self, netid, conf_dict):
        layers = conf_dict.get(Model.STORE_LAYERS, set())
        try:
            if netid != self.network_id:
                raise AttributeError
        except AttributeError:
            self.query_obj = APIQuery(netid)
            self.populate(layers)
        self.secret = conf_dict.get(Model.STORE_SECRET)
        self.validator_token = conf_dict.get(Model.STORE_TOKEN)
        self.webhook_addresses = conf_dict.get(Model.STORE_WEBHOOK, list())
        self.password = conf_dict.get(Model.STORE_PASSWORD, Model.DEFAULT_PASSWORD)
        self.webhook_threshold = conf_dict.get(Model.STORE_WHTHRESHOLD, self.webhook_threshold)
        for mac, coords in conf_dict.get(Model.STORE_FOVCOORDS, dict()).items():
            self.setFOVs( mac, coords )
        for fpid, boxes in conf_dict.get(Model.STORE_BMBOXES, dict()).items():
            on = conf_dict.get(Model.STORE_BDENABLED, {fpid:False})[fpid]
            self.setBoundsMask(fpid, on, boxes)

    def serialize(self):
        conf = dict()
        conf[Model.STORE_SECRET] = self.secret
        conf[Model.STORE_TOKEN] = self.validator_token
        conf[Model.STORE_LAYERS] = set(self.data_layers.keys())
        conf[Model.STORE_WEBHOOK] = self.webhook_addresses
        conf[Model.STORE_WHTHRESHOLD] = self.webhook_threshold
        conf[Model.STORE_PASSWORD] = self.password
        conf[Model.STORE_BMBOXES] = { fpid: fp.bm_boxes for fpid, fp in self.plans.items() }
        conf[Model.STORE_BDENABLED] = { fpid: fp.mask_enabled for fpid, fp in self.plans.items() }
        return conf

    def write_config_data(self):
        config_data = {Model.STORE_SELECTED: self.network_id}
        config_data[self.network_id] = self.serialize()
        with open( self.CONFIG_PATH, 'wb' ) as f:
            pickle.dump(config_data, f)

    def read_config_data(self):
        if os.path.isfile(Model.CONFIG_PATH):
            with open( Model.CONFIG_PATH, 'rb' ) as f:
                config_data = pickle.load(f)
            selected_id = config_data[Model.STORE_SELECTED]
            select_data = config_data[selected_id]
            self.update_model_config(selected_id, select_data)
        else:
            print("Warning: config file not found")
            try:
                self.update_model_config(None, {Model.STORE_LAYERS: Model.LAYERS_ALL} )
            except APIQuery.APIException:
                raise Model.ModelException("Could not get network from config file")

class TimeSlotAvg:

    class TimeSlotAvgException( Exception ):
        pass

    DATA_DIR = "historical_data"

    def __init__(self, data_layers:dict, day:int, hour:int):
        self.day = day
        self.hour = hour
        self.data_layers = dict()
        self.count = dict()
        for l_id, layer in data_layers.items():
            # Copy the layer structure but clear the transient data
            # Also we only need 1 frame to store averages
            self.data_layers[l_id] = layer.copy(flatten=True)
            self.data_layers[l_id].clear()
            # Set a count for each overlay in each layer
            self.count[l_id] = { fpid:0 for fpid in layer.overlays.keys() }

    @staticmethod
    def load( data_layers:dict, floors:dict, day=None, hour=None ):
        "Static factory method; Load TimeSlotAvg object from compressed pickle file or create new"
        #TODO day/hour override - used for unit tests
        if day==None or hour==None:
            day, hour = TimeSlotAvg.get_time()
        try:
            tsa = bz2.BZ2File(os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(day, hour)), 'rb')
            tsa = pickle.load(tsa)
        except FileNotFoundError:
            tsa = TimeSlotAvg( data_layers, day, hour )
            tsa.verify_and_update_struct(data_layers, floors)
            tsa.write()
        else:
            if __name__!="__main__":
                assert isinstance(tsa, TimeSlotAvg)
            tsa.verify_and_update_struct(data_layers, floors)
        return tsa

    def is_current_time(self, debug=None) -> bool:
        "Check whether this object represents the current day/hour timeslot"
        if debug != None:
            return debug
        curr_day, curr_hour = TimeSlotAvg.get_time()
        return curr_day == self.day and curr_hour == self.hour

    @staticmethod
    def get_time():
        "Get the current (day, hour) in UTC, used to name the data files"
        curr_dt = datetime.datetime.now( datetime.timezone( offset=datetime.timedelta(hours=0) ) )
        curr_day = curr_dt.weekday()
        curr_hour = curr_dt.hour
        return curr_day, curr_hour

    def update_avg_data(self, current_data:dict, debug:bool=None) -> None:
        "Updates an average model for a timeslot using the current model data, if valid time"
        if self.is_current_time(debug):
            # For each layer in the new data
            for l_key, layer in current_data.items():
                # Get the matching average layer
                avg_layer = self.data_layers[l_key]
                for o_key, over in layer.overlays.items():
                    c = self.count[l_key][o_key]
                    # Get masked and unmasked deltas, and unfixed count from new data
                    # Get full available exposure by default
                    new_um_overlay = over.get_delta(masked=False)
                    new_m_overlay = over.get_delta(masked=True)
                    new_unfixed_obs = over.get_unfixed_observations()
                    # Similar from averages
                    # avg layers are already flat so exposure of 1
                    avg_um_overlay = avg_layer.overlays[o_key].get_delta(masked=False,exposure=1)
                    avg_m_overlay = avg_layer.overlays[o_key].get_delta(masked=True,exposure=1)
                    avg_unfixed_obs = avg_layer.overlays[o_key].get_unfixed_observations(exposure=1)
                    # Update by promoting averaged values to sum total and dividing by new count
                    upd_um_overlay = ( avg_um_overlay * c + new_um_overlay ) / (c+1)
                    upd_m_overlay = ( avg_m_overlay * c + new_m_overlay ) / (c+1)
                    upd_unfixed_obs = ( avg_unfixed_obs * c + new_unfixed_obs ) / (c+1)
                    # save the new data overlay to the timeslots model, promoting as exposure of historicals = 1
                    self.data_layers[l_key].overlays[o_key].set(
                        np.array([upd_unfixed_obs]), upd_m_overlay[np.newaxis], upd_um_overlay[np.newaxis] )
                    self.count[l_key][o_key] = c + 1
        else:
            raise TimeSlotAvg.TimeSlotAvgException(
                f"Cannot update: not the current timeslot, currently day:{self.day}, hour:{self.hour}")

    def write(self):
        "Save TimeSlotAvg object to a compressed file"
        filepath = os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(self.day, self.hour))
        if not os.path.isdir(TimeSlotAvg.DATA_DIR):
            os.makedirs(TimeSlotAvg.DATA_DIR)
        with bz2.BZ2File(filepath, 'wb') as f:
            pickle.dump(self, f)

    def get_floor_avgs(self, fpid:str) -> dict:
        "Return the flat average Overlay object indexed by layer for each layer stored"
        return { layer_id: layer.overlays[fpid] for layer_id, layer in self.data_layers.items() }

    def verify_and_update_struct(self, data_layers:dict, floors:dict) -> None:
        """ Verifies that the data in the TimeSlotAvg is compatible with the current model.
        If layers or overlays are not represented in TSA, those are created, infos are printed.
        Throws ModelException in case of dimension mismatch. """
        for l_id in set(data_layers.keys()).difference(self.data_layers.keys()):
            # For layers in data_layers not in self
            self.data_layers[l_id] = data_layers[l_id].copy()
            self.count[l_id] = dict()
            print("Info: Creating historical layer {}".format(l_id))
        for layer in self.data_layers.values():
            layer.verify_and_update(floors)
        for l_id in data_layers.keys():
            # Get count if exists, else set to 1
            self.count[l_id] = { ov_id:self.count[l_id].get(ov_id,1) for ov_id in data_layers[l_id].overlays.keys() }

def sha256(inpt:str) -> str:
    m = hashlib.sha256()
    m.update(inpt.encode())
    return m.hexdigest()
If exposure 0 (default), will provide all available exposures", "\"sapisecret\" STORE_TOKEN = \"<PASSWORD>_token\" STORE_PASSWORD = \"<PASSWORD>\" STORE_WHTHRESHOLD = \"webhook_threshold\" def update_model_config(self, netid,", "0 self.__unfixed_observations[0] = 0 def clear(self) -> None: \"Clear accumulated observation data including", "floor in floorplans.items(): self.overlays[fpid].verify_and_update(floor) class Overlay: \"Represents an data overlay of a single", "enumerate(row): if cell: dist = np.hypot( (x+0.5)-event[0], (y+0.5)-event[1] ) if dist < mindist:", "Model.ModelException(\"Error: Overlay and Floorplan dimension mismatch for FPID={}\".format(floor.floorplan.id)) # Update mask in case", "change existing data, only new observations self.mask == floor.mask class Model: LAYER_SNAP_WIFI =", "#Colour by sign, scale alpha by magnitude pos = val > 0 alpha", "um_possible.sum() # Include floor mask self.__masked_dataoverlay[0][m_possible] += 1.0 / m_possible.sum() def __add_unfixed_locations(self,unfixed:dict) ->", "floor_id not in self.plans.keys(): raise Model.ModelException(\"No such floor: \",floor_id) floor = self.plans[floor_id] if", "Layer exists if Model.LAYER_MVSENSE in self.data_layers.keys(): #Check camera with mac exists if mac", "= TimeSlotAvg.get_time() try: tsa = bz2.BZ2File(os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(day, hour)), 'rb') tsa = pickle.load(tsa) except FileNotFoundError:", "layer constant for a given SAPI packet\" api_layer_val = APIQuery.get_SAPI_type(SAPI_packet) if api_layer_val ==", "= 0 self.__unmasked_dataoverlay[:] = 0 def copy(self,flatten:bool=False): \"\"\" Return a copy of the", "\"Class representing a data layer: a series of overlays covering all floorplans with", "self.plans: if self.plans[k].floorplan.name == name: return k return None def setBoundsMask(self,floor_id:str,on:bool,blindspots=None,wallthreshold:float=None) -> None:", "cannot possibly be on the given floor eg outside high floors. 
Blindspots should", "self.webhook_threshold conf[Model.STORE_PASSWORD] = self.password conf[Model.STORE_FOVCOORDS] = { cam.mac: cam.get_fov_coords() for cam in self.query_obj.cameras.values()", "f: config_data = pickle.load(f) selected_id = config_data[Model.STORE_SELECTED] select_data = config_data[selected_id] self.update_model_config(selected_id,select_data) else: print(\"Warning:", "return tsa def is_current_time(self, debug=None) -> bool: \"Returns True iff timeslot is for", "to dictionary return dict(zip(range(len(obs)),obs)) def provide_scanning(self,SAPI_packet:dict) -> None: \"Update model with SAPI data\"", "self.floorplan_dimensions = (self.floorplan.height,self.floorplan.width) self.overlay_dimensions = ( int(self.floorplan_dimensions[0]//Model.CELL_SIZE_M)+1, int(self.floorplan_dimensions[1]//Model.CELL_SIZE_M)+1 ) # Determine the distance", "self.update_timeslot() # Get the current timeslot object self.timeslot.update_avg_data( self.data_layers ) def comp_historical(self, floorPlanId:str):", "TimeSlotAvg.get_time() try: tsa = bz2.BZ2File(os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(day, hour)), 'rb') tsa = pickle.load(tsa) except FileNotFoundError: tsa", "array, tuple or nested list of the form [[x1,x2,y1,y2],...] 
\"\"\" self.bm_boxes = blindspots", "#Left, right iys = iy_points[my] iye = iy_points[my+1] imarr[ixs:ixe,iys:iye] = POS if pos", "0 busiest_location=None for x in range(len(clusters)): for y in range(len(clusters[0])): if clusters[x][y] >", "Floor.set_bounds_mask \"\"\" if wallthreshold == None: bd = BoundaryDetector( self.floorplan.get_image() ) else: #", "FloorPlan from lib.BoundaryDetector import BoundaryDetector # As per Scanning API v3 SECRET_K =", "to unset mask \"\"\" #Check if Layer exists if Model.LAYER_MVSENSE in self.data_layers.keys(): #Check", "collective = np.zeros( self.plans[floorPlanId].overlay_dimensions ) for lid in self.data_layers.keys(): mask_enabled = self.plans[floorPlanId].mask_enabled current", "* self.floorplan.px_per_m_h, self.margin_m[1] * self.floorplan.px_per_m_w ) self.mask_enabled = False self.pixelmask = None self.mask", ") / (c+1) upd_m_overlay = ( avg_m_overlay * c + new_m_overlay ) /", "-> dict: \"Indexes observed person objects with an arbitrary key\" obs = self.query_obj.get_camera_observations()", "self.webhook_addresses = conf_dict.get(Model.STORE_WEBHOOK,list()) self.password = conf_dict.get(Model.STORE_PASSWORD,Model.DEFAULT_PASSWORD) self.webhook_threshold = conf_dict.get(Model.STORE_WHTHRESHOLD,self.webhook_threshold) for mac, coords in", "# We store both a copy of the m(asked)_possible locations and the u(n)m(asked)_possible", "layer not in Model.LAYERS_ALL: raise Model.ModelException(Model.__BAD_LAYER.format(layer)) self.data_layers[layer] = Layer(self.plans, Model.DEFAULT_EXPOSURE) self.timeslot = TimeSlotAvg.load(self.data_layers,self.plans)", "for layer in layers: if layer not in Model.LAYERS_ALL: raise Model.ModelException(Model.__BAD_LAYER.format(layer)) self.data_layers[layer] =", "defined. 
Use internally defined layer (eg Model.LAYER_*)\" class ModelException(Exception): pass class BadRequest(Exception): pass", "got {}\".format(self.network_id,source_net_id)) def get_type(SAPI_packet:dict) -> int: \"Get the Model layer constant for a", "Model.LAYERS_ALL: raise Model.ModelException(Model.__BAD_LAYER.format(layer)) self.data_layers[layer] = Layer(self.plans, Model.DEFAULT_EXPOSURE) self.timeslot = TimeSlotAvg.load(self.data_layers,self.plans) self.webhook_addresses = []", ") for lid in self.data_layers.keys(): mask_enabled = self.plans[floorPlanId].mask_enabled current = self.data_layers[lid].overlays[floorPlanId].get_delta(masked=mask_enabled) historical =", "_id,floor in floorplans.items() } def set_observations(self,observations:dict): \"Set the layer to contain passed observations,", "the network, construct blank floor layer for each\" floorplans = self.query_obj.pullFloorPlans() self.plans =", "TimeSlotAvg object to a compressed file\" filepath = os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(self.day, self.hour)) if not os.path.exists(TimeSlotAvg.DATA_DIR):", "(including distributed unfixed observations) For exposure, see Overlay.get_delta \"\"\" data = self.get_delta(masked,exposure) window", "Effort\", top_n) def getCameraImage(self, camera) -> dict: #returns a dictionary containing a link", "the floor mask applied m_possible = np.logical_and( um_possible, self.mask, dtype=np.bool_ ) # If", "Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, 1) cp.__unfixed_observations[0] = self.__unfixed_observations.mean(axis=0) cp.__masked_dataoverlay[0] = self.__masked_dataoverlay.mean(axis=0) cp.__unmasked_dataoverlay[0] =", "dims = ( (len(layer)//3)+1, (len(layer[0])//3)+1 ) #splits floorplan into 3m^2 areas clusters =", "True iff timeslot is for current time\" if debug != None: return debug", "return { k : v.floorplan.name for k,v in self.plans.items() } def findFloorplanByName(self,name) ->", "from BoundaryDetector 
for areas of the floor that people cannot possibly be or", "shape and iterable, or of len 0 to unset try: if len(coords) ==", "data, only new observations self.mask == floor.mask class Model: LAYER_SNAP_WIFI = 1 LAYER_SNAP_BT", "load( data_layers:dict, floors:dict, day=None, hour=None ): \"Static factory method; Load TimeSlotAvg object from", "the average by adding current values to sum total and dividing by new", "close enough to be within variance metres d = np.linalg.norm( (np.array([x,y]) + 0.5)", "Model: LAYER_SNAP_WIFI = 1 LAYER_SNAP_BT = 2 LAYER_MVSENSE = 3 LAYERS_ALL = {LAYER_SNAP_WIFI,", "in the overlays print(\"Info: Creating new overlay for FPID:{}\".format(fpid)) fp = floorplans[fpid] self.overlays[fpid]", "network, construct blank floor layer for each\" floorplans = self.query_obj.pullFloorPlans() self.plans = {", "compatible with the current Model. If layers or overlays are not represented in", "dtype=np.bool_) if placeable.has_mask_override: # Even if a floorplan mask is in place, mask", "-> dict: #pragma: no cover \"\"\" Return the full data for each overlay", "placeable.variance < Model.VARIANCE_THRESHOLD: # Calculated minimum reach # Account for change of axis", "masked:bool=True, exposure:int=0)->np.ndarray: \"\"\" Returns a copy of the delta overlay (only fixed observations).", "np.hypot( (x+0.5)-event[0], (y+0.5)-event[1] ) if dist < mindist: mindist = dist distances[cam] =", "axis=0) self.__unmasked_dataoverlay = np.roll(self.__unmasked_dataoverlay, 1, axis=0) self.__unfixed_observations = np.roll(self.__unfixed_observations, 1, axis=0) self.__masked_dataoverlay[0] =", "right iys = iy_points[my] iye = iy_points[my+1] imarr[ixs:ixe,iys:iye] = POS if pos else", "'rb') tsa = pickle.load(tsa) except FileNotFoundError: tsa = TimeSlotAvg( data_layers, day, hour )", "verify_and_update(self,floor:Floor)->None: \"Verifies overlay is compatible with passed floor, updates mask\" if self.overlay_dimensions !=", "c = self.count[l_key][o_key] # update the 
average by adding current values to sum", "a copy of the delta overlay (only fixed observations). If masked, will select", "snapshot:dict): for address in self.webhook_addresses: response = requests.post(address, json = snapshot) print(response) def", "update_layers(self)->None: \"\"\" Must be called when a Floor is added, removed, or altered,", "required for current scope return { _id : over.get_full(masked, exposure) for _id, over", "pertain to sqm pixels on internal datamap. Pass len(iterable)==0 to unset mask \"\"\"", "self.overlays.items() } def get_full(self, masked:bool=True, exposure:int=0) -> dict: #pragma: no cover \"\"\" Return", "all member overlays of observation data\" for over in self.overlays.values(): over.clear() def verify_and_update(self,", "an Image, mode=L mask = Image.fromarray(255*self.pixelmask.astype(np.uint8),\"L\") # Paste alpha on masked region imarr.paste((0,0,0,0),mask)", "conf_dict.get(Model.STORE_PASSWORD,Model.DEFAULT_PASSWORD) self.webhook_threshold = conf_dict.get(Model.STORE_WHTHRESHOLD,self.webhook_threshold) for mac, coords in conf_dict.get(Model.STORE_FOVCOORDS,dict()).items(): self.setFOVs( mac, coords )", "but clear the transient data # Also we only need 1 frame to", "collective /= len(self.data_layers) return collective def update_timeslot(self): \"Calls the factory if the current", "and px self.margin_m = ( Model.CELL_SIZE_M - (self.floorplan_dimensions[0] % Model.CELL_SIZE_M), Model.CELL_SIZE_M - (self.floorplan_dimensions[1]", "<= placeable.variance # m(asked)_possible is a copy of the u(n)m(asked)_possible's with the floor", "self.network_id: raise Model.BadRequest(\"Request has data from wrong network: expected {} got {}\".format(self.network_id,source_net_id)) def", "offset=datetime.timedelta(hours=0) ) ) curr_day = curr_dt.weekday() curr_hour = curr_dt.hour return curr_day, curr_hour def", "[ len(coord)==2 for coord in coords ]: cam = self.query_obj.cameras[mac] shape = 
self.plans[cam.floorPlanId].overlay_dimensions", "and Floorplan dimension mismatch for FPID={}\".format(floor.floorplan.id)) # Update mask in case of change", "= layer[3*x:3*(x+1),3*y:3*(y+1)].sum() busiest = 0 busiest_location=None for x in range(len(clusters)): for y in", "between the end of the floorplan and the data overlay, in metres and", "a copy of the layer. Flatten squashes exposures into 1 frame For more", "# Exposure queue, shape (exp,x,y) self.__unfixed_observations = np.zeros(exposure) self.__masked_dataoverlay = np.zeros( (exposure,)+overlay_dimensions, dtype=\"float32\"", "the current model # For each layer in the new data for l_key,", "count from new data # Get full available exposure by default new_um_overlay =", "Return the delta data for each overlay in the layer. If exposure 0", "big-dims) dims ix = self.pixelmask.shape[0] iy = self.pixelmask.shape[1] #Image-scale chunk divisions (what coords", "def provide_scanning(self,SAPI_packet:dict) -> None: \"Update model with SAPI data\" # Raise a racket", "float, got {}\".format(str(type(wallthreshold)))) if floor_id not in self.plans.keys(): raise Model.ModelException(\"No such floor: \",floor_id)", "Model.ModelException(\"Could not get network from config file\") class TimeSlotAvg: class TimeSlotAvgException( Exception ):", "avg layers are already flat so exposure of 1 avg_um_overlay = avg_layer.overlays[o_key].get_delta(masked=False,exposure=1) avg_m_overlay", "#add floorplan ID into params, camera/wifi/bluetoothall into params? threshold into params? dims =", "(len(layer)//3)+1, (len(layer[0])//3)+1 ) #splits floorplan into 3m^2 areas clusters = np.zeros(dims, dtype=\"float32\") for", "new data # Get full available exposure by default new_um_overlay = over.get_delta(masked=False) new_m_overlay", "= curr_dt.weekday() curr_hour = curr_dt.hour return curr_day, curr_hour def verify_and_update_struct(self, data_layers:dict, floors:dict)->None: \"\"\"", "Return a copy of the layer. 
Flatten squashes exposures into 1 frame For", "layers\" self.update_timeslot() # Get the current timeslot object hist_fp_data = self.timeslot.get_floor_avgs(floorPlanId) collective =", "return Model.LAYER_SNAP_BT else: raise Model.ModelException(Model.__BAD_LAYER.format(api_layer_val)) def __generate_person_obs(self) -> dict: \"Indexes observed person objects", "= np.hypot( cam.x-event[0], cam.y-event[1] ) top_n = [ cam[0] for cam in sorted(distances.items(),key=lambda", "class Overlay: \"Represents an data overlay of a single floorplan\" def __init__(self, floorid:str,", "Model.ModelException(Model.__BAD_LAYER.format(layer)) self.data_layers[layer] = Layer(self.plans, Model.DEFAULT_EXPOSURE) self.timeslot = TimeSlotAvg.load(self.data_layers,self.plans) self.webhook_addresses = [] ### Floorplans", "store internally in relevent floor objects\" aps = self.query_obj.pullAPs() for mac,ap in aps.items():", "a data layer: a series of overlays covering all floorplans with data from", "def calc_bounds_mask(self,blindspots=None,wallthreshold:float=None) -> Image.Image: \"\"\" Generate a preview of a bounds mask with", "# Not covered as not required for current scope return { _id :", "def snapshotWebhook(self, snapshot:dict): for address in self.webhook_addresses: response = requests.post(address, json = snapshot)", "BoundaryDetector( self.floorplan.get_image(), threshold=wallthreshold ) if blindspots != None: for spot in blindspots: bd.add_blindspot(*tuple(spot))", "Return the full data for each overlay in the layer. 
If exposure 0", "_id, over in self.overlays.items() } def get_full(self, masked:bool=True, exposure:int=0) -> dict: #pragma: no", "passed floor, updates mask\" if self.overlay_dimensions != floor.overlay_dimensions or self.real_dimensions != floor.floorplan_dimensions: raise", "conf def write_config_data(self): config_data = {Model.STORE_SELECTED:self.network_id} config_data[self.network_id] = self.serialize() with open( self.CONFIG_PATH, 'wb'", "dict() self.overlay_dimensions = overlay_dimensions self.real_dimensions = real_dimensions self.mask = floormask assert exposure >", "def set(self, unfixed_count:np.ndarray, masked_overlay:np.ndarray, unmasked_overlay:np.ndarray): \"Sets internal observation data. Not for general use,", "self.floorid = floorid #self.observations = dict() self.overlay_dimensions = overlay_dimensions self.real_dimensions = real_dimensions self.mask", "{} for id, placeable in observations.items(): if ( placeable.x != None and placeable.y", "For exposure, see Overlay.get_delta \"\"\" data = self.get_delta(masked,exposure) window = self.exposure if exposure", "TypeError(\"Invalid type for blindspots parameter. 
Should be of type np.array, list or tuple\")", "containing a link to the image response = self.query_obj.getCameraSnap(camera) return response def snapshotWebhook(self,", "array is binary, mean gives ratio of elems 1 to elems total self.mask[x][y]", "obs = self.query_obj.get_camera_observations() # Zip [0,n) with n obs objects, converting to dictionary", "um_possible[-int(placeable.y/Model.CELL_SIZE_M),int(placeable.x/Model.CELL_SIZE_M)] = 1 else: # Store parsed location in x,y tuple, in m,", "LAYER_MVSENSE = 3 LAYERS_ALL = {LAYER_SNAP_WIFI, LAYER_SNAP_BT, LAYER_MVSENSE} CONFIG_PATH = os.path.join('model.conf') CELL_SIZE_M =", "object to a compressed file\" filepath = os.path.join(TimeSlotAvg.DATA_DIR,'{}_{}.pbz2'.format(self.day, self.hour)) if not os.path.exists(TimeSlotAvg.DATA_DIR): os.makedirs(TimeSlotAvg.DATA_DIR)", "return response def snapshotWebhook(self, snapshot:dict): for address in self.webhook_addresses: response = requests.post(address, json", "type np.array, list or tuple\") elif False in [ len(spot) == 4 for", "= { fpid: fp.bm_boxes for fpid,fp in self.plans.items() } conf[Model.STORE_BDENABLED] = { fpid:", "file\") class TimeSlotAvg: class TimeSlotAvgException( Exception ): pass DATA_DIR = \"historical_data\" def __init__(self,", "in current_data.items(): # Get the respective average layer avg_layer = self.data_layers[l_key] # For", "None: pass elif not isinstance(wallthreshold,(int,float)): raise TypeError(\"Invalid type for wallthreshold parameter. 
Should be", "False self.pixelmask = None self.mask = np.ones(self.overlay_dimensions,dtype=np.bool_) self.aps = {} self.bm_boxes = []", "flatten: cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, 1) cp.__unfixed_observations[0] = self.__unfixed_observations.mean(axis=0) cp.__masked_dataoverlay[0] =", "get relevant floor obj POST_data[fpid] = {\"type\" : \"SnapshotData\", \"is_ideal\" : idealality} for", "copy of the full client overlay (including distributed unfixed observations) For exposure, see", "floor.mask[:] = 1 floor.mask_enabled = on self.update_layers() ### Layers def update_layers(self)->None: \"\"\" Must", "series of overlays covering all floorplans with data from a single source type\"", ") # This means let x = cell_s_m / 2; V_T = sqrt(x^2+x^2)", ": v.floorplan.name for k,v in self.plans.items() } def findFloorplanByName(self,name) -> str: \"Find a", "STORE_TOKEN = \"<PASSWORD>_token\" STORE_PASSWORD = \"<PASSWORD>\" STORE_WHTHRESHOLD = \"webhook_threshold\" def update_model_config(self, netid, conf_dict):", "variable \\\"MERAKI_DASHBOARD_API_KEY\\\"\" self.read_config_data() self.write_config_data() def populate(self,layers:set): assert isinstance(self.query_obj, APIQuery) self.network_id = self.query_obj.network_id self.plans", "self.mask, dtype=np.bool_ ) # If theres a DIV0 here, the search hasn't found", "or float, got {}\".format(str(type(wallthreshold)))) if floor_id not in self.plans.keys(): raise Model.ModelException(\"No such floor:", "x,row in enumerate(fov): for y, cell in enumerate(row): if cell: dist = np.hypot(", "upd_um_overlay = ( avg_um_overlay * c + new_um_overlay ) / (c+1) upd_m_overlay =", "self.nearestCameras(2, floor, spikedict) #need to get relevant floor obj POST_data[fpid] = {\"type\" :", "plan\" return self.plans[floorPlanId].render_overlay(self.data_layers[Model.LAYER_SNAP_WIFI].overlays[floorPlanId].get_delta(exposure=1)) def debug_render(self,fpid)->Image: import datetime dm = 
self.plans[fpid] dims = dm.overlay_dimensions", "We store both a copy of the m(asked)_possible locations and the u(n)m(asked)_possible locations", "fixed observations). If masked, will select data masked at input, else unmasked data.", "i, cam in enumerate(cameras): response = self.getCameraImage(cam) POST_data[fpid][\"camera_data_\" + str(i)] = response if", "parameters. For more info see Floor.set_bounds_mask \"\"\" if wallthreshold == None: bd =", "-> None: \"Clear accumulated observation data including exposure\" self.__unfixed_observations[:] = 0 self.__masked_dataoverlay[:] =", "pixel mask (mini-scale, big-dims) dims ix = self.pixelmask.shape[0] iy = self.pixelmask.shape[1] #Image-scale chunk", "def write_config_data(self): config_data = {Model.STORE_SELECTED:self.network_id} config_data[self.network_id] = self.serialize() with open( self.CONFIG_PATH, 'wb' )", "Model.LAYERS_ALL} ) except APIQuery.APIException: raise Model.ModelException(\"Could not get network from config file\") class", "-> None: \"\"\"\" Generate a mask from BoundaryDetector for areas of the floor", "new_unfixed_obs = over.get_unfixed_observations() # Similar from averages # avg layers are already flat", "conf[Model.STORE_WHTHRESHOLD] = self.webhook_threshold conf[Model.STORE_PASSWORD] = self.password conf[Model.STORE_FOVCOORDS] = { cam.mac: cam.get_fov_coords() for cam", "exposure return self.__unfixed_observations[:window].mean(axis=0) def get_full(self, masked:bool=True, exposure:int=0)->np.ndarray: #pragma: no cover \"\"\" Returns a", "self.data_layers.values(): layer.verify_and_update(self.plans) ### Access Points def getAPs(self) -> None: \"Get APs and store", "conf[Model.STORE_WEBHOOK] = self.webhook_addresses conf[Model.STORE_WHTHRESHOLD] = self.webhook_threshold conf[Model.STORE_PASSWORD] = self.password conf[Model.STORE_FOVCOORDS] = { cam.mac:", "areas clusters = np.zeros(dims, dtype=\"float32\") for x in range(len(clusters)): for y in range(len(clusters[0])):", "-> None: \"Update 
model with SAPI data\" # Raise a racket if theres", "return self.plans[floorPlanId].render_overlay(self.data_layers[Model.LAYER_SNAP_WIFI].overlays[floorPlanId].get_delta(exposure=1)) def debug_render(self,fpid)->Image: import datetime dm = self.plans[fpid] dims = dm.overlay_dimensions testarr", "layer_id, layer in self.data_layers.items() } @staticmethod def get_time()->tuple: \"Get the current time values", "self.mask_enabled = False self.pixelmask = None self.mask = np.ones(self.overlay_dimensions,dtype=np.bool_) self.aps = {} self.bm_boxes", "write history\" self.pull_mvsense_data() self.put_historical() #spike detect POST_data = {} for fpid, floor in", "type for wallthreshold parameter. Should be of type int or float, got {}\".format(str(type(wallthreshold))))", "for cam in self.query_obj.getCameras().values() if cam.floorPlanId == floor.floorplan.id } FOVcams = { cam", "not in the overlays print(\"Info: Creating new overlay for FPID:{}\".format(fpid)) fp = floorplans[fpid]", "range(len(clusters[0])): clusters[x,y] = layer[3*x:3*(x+1),3*y:3*(y+1)].sum() busiest = 0 busiest_location=None for x in range(len(clusters)): for", "For each layer in the new data for l_key, layer in current_data.items(): #", "( avg_unfixed_obs * c + new_unfixed_obs ) / (c+1) # save the new", "np.floor(np.linspace( 0, ix+self.margin_px[0], mx+1 )).astype(\"int32\") iy_divs = np.floor(np.linspace( 0, iy+self.margin_px[1], my+1 )).astype(\"int32\") for", "threshold=wallthreshold ) if blindspots != None: for spot in blindspots: bd.add_blindspot(*tuple(spot)) bd.run() return", "set(data_layers.keys()).difference(self.data_layers.keys()): # For layers in data_layers not in self self.data_layers[l_id] = data_layers[l_id].copy() self.count[l_id]", "= placeable for fid, overlay in self.overlays.items(): overlay.roll() obs = bins.get(fid) if obs", "to elems total self.mask[x][y] = self.pixelmask[it:ib,il:ir].mean() < Model.DOWNSAMPLE_THRESHOLD def 
calc_bounds_mask(self,blindspots=None,wallthreshold:float=None) -> Image.Image: \"\"\"", "relevent floor objects\" aps = self.query_obj.pullAPs() for mac,ap in aps.items(): if ap.floorPlanId in", "serialize(self): conf = dict() conf[Model.STORE_SECRET] = self.secret conf[Model.STORE_TOKEN] = self.validator_token conf[Model.STORE_LAYERS] = set(self.data_layers.keys())", "blindspots ]: raise ValueError(\"Invalid format for blindspots parameter. Should be of shape (n,4),", ": idealality} for i, cam in enumerate(cameras): response = self.getCameraImage(cam) POST_data[fpid][\"camera_data_\" + str(i)]", "the current Model. If layers or overlays are not represented in TSA, those", "blindspots floor.mask[:] = 1 floor.mask_enabled = on self.update_layers() ### Layers def update_layers(self)->None: \"\"\"", "edges # Make the mask an Image, mode=L mask = Image.fromarray(255*self.pixelmask.astype(np.uint8),\"L\") # Paste", "= APIQuery(netid) self.populate(layers) self.secret = conf_dict.get(Model.STORE_SECRET) self.validator_token = conf_dict.get(Model.STORE_TOKEN) self.webhook_addresses = conf_dict.get(Model.STORE_WEBHOOK,list()) self.password", "or are to be ignored eg outside high floors. Blindspots should be a", ") if source_net_id != self.network_id: raise Model.BadRequest(\"Request has data from wrong network: expected", "fixed = {} unfixed = {} for id, placeable in observations.items(): if (", "current floorplans (Floor objects). Throws ModelException in case of dimention mismatch. 
If mask", "camera) -> dict: #returns a dictionary containing a link to the image response", "has data from wrong network: expected {} got {}\".format(self.network_id,source_net_id)) def get_type(SAPI_packet:dict) -> int:", "= np.zeros(dims).ravel() n = (datetime.datetime.now().second / 60) * len(testarr) testarr[:int(n)] = 1 return", "as f: pickle.dump(config_data, f) def read_config_data(self): if os.path.isfile(Model.CONFIG_PATH): with open( Model.CONFIG_PATH, 'rb' )", "= Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, 1) cp.__unfixed_observations[0] = self.__unfixed_observations.mean(axis=0) cp.__masked_dataoverlay[0] = self.__masked_dataoverlay.mean(axis=0) cp.__unmasked_dataoverlay[0]", "cp def add(self,observations:dict) -> None: fixed = {} unfixed = {} for id,", "exposure, see Overlay.get_delta \"\"\" data = self.get_delta(masked,exposure) window = self.exposure if exposure ==", "def spike(self, layer, threshhold)->dict: #add floorplan ID into params, camera/wifi/bluetoothall into params? 
import os
import sys
import datetime
import requests
import hashlib
import bz2
import pickle
import numpy as np
from PIL import Image, ImageFilter

parentddir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir))
sys.path.append(parentddir)
from lib.APIQuery import APIQuery, FloorPlan
from lib.BoundaryDetector import BoundaryDetector

# As per Scanning API v3
SECRET_K = ...


class TimeSlotAvg:

    class TimeSlotAvgException(Exception):
        pass

    DATA_DIR = "historical_data"

    def __init__(self, data_layers: dict, day: int, hour: int):
        self.day = day
        self.hour = hour
        self.data_layers = dict()
        self.count = dict()
        for l_id, layer in data_layers.items():
            # ... we only need 1 frame to store average so flatten
            self.data_layers[l_id] = layer.copy(flatten=True)
            self.count[l_id] = {ov_id: 0 for ov_id in data_layers[l_id].overlays.keys()}

    def is_current_time(self, debug: bool = None) -> bool:
        "Returns True iff timeslot is for current time"
        if debug != None:
            return debug
        curr_day, curr_hour = TimeSlotAvg.get_time()
        return curr_day == self.day and curr_hour == self.hour

    def update_avg_data(self, current_data: dict, debug: bool = None) -> None:
        "Updates an average ... if valid time"
        if self.is_current_time(debug):
            # it is valid to update with the current data
            for l_key, layer in current_data.items():
                # ... average layer
                avg_layer = self.data_layers[l_key]
                # For each overlay in the respective new data layer
                for o_key, over in layer.overlays.items():
                    # ... unfixed count from new data
                    # Get full available exposure by default
                    new_um_overlay = over.get_delta(masked=False)
                    new_m_overlay = over.get_delta(masked=True)
                    new_unfixed_obs = over.get_unfixed_observations()
                    # ... from averages
                    # avg layers are already flat so exposure of 1
                    avg_um_overlay = avg_layer.overlays[o_key].get_delta(masked=False, exposure=1)
                    avg_m_overlay = avg_layer.overlays[o_key].get_delta(masked=True, exposure=1)
                    avg_unfixed_obs = over.get_unfixed_observations()
                    # Count
                    c = self.count[l_key][o_key]
                    # update the average by adding the new frame
                    upd_um_overlay = (avg_um_overlay * c + new_um_overlay) / (c + 1)
                    upd_m_overlay = (avg_m_overlay * c + new_m_overlay) / (c + 1)
                    upd_unfixed_obs = (avg_unfixed_obs * c + new_unfixed_obs) / (c + 1)
                    # save ... as exposure of historicals = 1
                    self.data_layers[l_key].overlays[o_key].set(
                        upd_unfixed_obs[None,], upd_m_overlay[None,], upd_um_overlay[None,]
                    )
                    # Update the count
                    self.count[l_key][o_key] += 1
            #self.write()
        else:
            raise TimeSlotAvg.TimeSlotAvgException(
                f"Cannot update with current model as it is not currently day:{self.day}, hour:{self.hour}"
            )

    def write(self):
        "Save TimeSlotAvg object to ..."
        ...

    @staticmethod
    def load(data_layers: dict, floors: dict, day: int = None, hour: int = None):
        "Load TimeSlotAvg object from compressed pickle file or create new"
        #TODO remove day hour params - used for unit tests
        if day == None or hour == None:
            day, hour = TimeSlotAvg.get_time()
        try:
            tsa = bz2.BZ2File(os.path.join(TimeSlotAvg.DATA_DIR, '{}_{}.pbz2'.format(day, hour)), 'rb')
            tsa = pickle.load(tsa)
        except FileNotFoundError:
            tsa = TimeSlotAvg(data_layers, day, hour)
        ...
        return tsa

    def get_floor_avgs(self, fpid):
        "Return the flat average Overlay object indexed by each layer stored"
        return {layer_id: layer.overlays[fpid] for layer_id, layer in self.data_layers.items()}

    @staticmethod
    def get_time() -> tuple:
        "Get the current time values needed for reading and writing data files"
        curr_dt = datetime.datetime.now(datetime.timezone(...))
        curr_day = ...
        curr_hour = curr_dt.hour
        return curr_day, curr_hour

    def verify_and_update_struct(self, data_layers: dict, floors: dict) -> None:
        """ Verifies that the data in the TimeSlotAvg is compatible with the ...
        If an overlay missing, print info, create new
        If an overlay is extra, ...
        ... in TSA, those are created, infos are printed
        Throws ModelException if dimensions do not ... match """
        for l_id in set(data_layers.keys()).difference(self.data_layers.keys()):
            # For layers in data_layers not in self
            self.data_layers[l_id] = data_layers[l_id].copy()
            self.count[l_id] = dict()
            print("Info: Layer implicitly created for Layer ID {}".format(l_id))
        for l_id, layer in self.data_layers.items():
            ...


def sha256(inpt: str) -> str:
    m = hashlib.sha256()
    m.update(inpt.encode())
    return ...


class Floor:
    "... object including model data"

    def __init__(self, floorplan: FloorPlan):
        self.floorplan = floorplan
        # Dimensions should default to ...
        self.floorplan_dimensions = ...
        self.overlay_dimensions = ...
        # Account for margin between the end of the floorplan and the data overlay, in metres and px
        self.margin_m = (
            Model.CELL_SIZE_M - (self.floorplan_dimensions[0] % Model.CELL_SIZE_M),
            Model.CELL_SIZE_M - (self.floorplan_dimensions[1] % Model.CELL_SIZE_M)
        )
        self.margin_px = (
            self.margin_m[0] * self.floorplan.px_per_m_h,
            self.margin_m[1] * self.floorplan.px_per_m_w
        )
        self.mask_enabled = False
        self.pixelmask = None
        self.mask = np.ones(self.overlay_dimensions, dtype=np.bool_)
        self.aps = {}
        self.bm_boxes = []

    def set_bounds_mask(self, blindspots=None, wallthreshold: float = None) -> None:
        """ Generate a ... mask for areas of the floor that people cannot possibly be or are to be
        ignored eg outside high floors. Blindspots should be a Numpy array, tuple or nested list
        of the form [[x1,x2,y1,y2],...] """
        if wallthreshold == None:
            bd = BoundaryDetector(self.floorplan.get_image())
        else:
            bd = BoundaryDetector(self.floorplan.get_image(), threshold=wallthreshold)
        if blindspots != None:
            for spot in blindspots:
                bd.add_blindspot(*tuple(spot))
        bd.run()
        self.pixelmask = bd.getBoundaryMask()
        #Downsample from pixel level to mask level
        #Assuming cell >> pixel
        #Mask (1msq-scale, small-dims) dims
        mx = self.mask.shape[0]
        my = self.mask.shape[1]
        #Image-scale pixel mask (mini-scale, big-dims) dims
        ix = ...
        iy = ...
        # Account for margin between edge of floorplan and overlay
        ix_divs = np.floor(np.linspace(0, ix + self.margin_px[0], mx + 1)).astype("int32")
        iy_divs = np.floor(np.linspace(0, iy + self.margin_px[1], my + 1)).astype("int32")
        for x in range(mx):
            #Top, bottom
            it = ix_divs[x]
            ib = ix_divs[x + 1]
            for y in range(my):
                #Left, right
                il = iy_divs[y]
                ir = iy_divs[y + 1]
                # As array is binary, mean gives ratio of elems 1 to elems total
                self.mask[x][y] = ...

    def preview_bounds_mask(self, blindspots=None, wallthreshold: float = None) -> Image.Image:  # pragma: no cover
        # Not covered as only parameter passing
        """ Generate a preview of a bounds mask with given parameters.
        For more info see Floor.set_bounds_mask """
        if wallthreshold == None:
            bd = BoundaryDetector(self.floorplan.get_image())
        else:
            bd = BoundaryDetector(self.floorplan.get_image(), threshold=wallthreshold)
        if blindspots != None:
            for spot in blindspots:
                bd.add_blindspot(*tuple(spot))
        bd.run()
        return bd.generate_graphic()

    def render_overlay(self, overlay: np.ndarray, pixelmask: bool = True):
        """ Render overlay onto the floorplan image in heatmap form.
        If pixelmask and image has bounds mask set, will mask final image to keep overlay
        within bounds. """
        POS = np.array([255, 0, 0, 180], dtype=np.uint8)
        NEG = np.array([0, 255, 0, 180], dtype=np.uint8)
        BLUR_CELLS = 0.35
        destination = self.floorplan.get_image()
        absmax = max(overlay.max(), overlay.min(), key=abs)
        #m_max, m_min = absmax, -absmax
        imarr = np.zeros((destination.size[1], destination.size[0], 4), dtype="uint8")
        # Account for margin between edge of floorplan and overlay
        ix_points = np.floor(np.linspace(0, imarr.shape[0] + self.margin_px[0], overlay.shape[0] + 1)).astype("int32")
        iy_points = np.floor(np.linspace(0, imarr.shape[1] + self.margin_px[1], overlay.shape[1] + 1)).astype("int32")
        for mx in range(overlay.shape[0]):
            # Top, bottom
            ixs = ix_points[mx]
            ixe = ix_points[mx + 1]
            for my in range(overlay.shape[1]):
                val = overlay[mx, my]
                if val == 0:
                    continue
                # scale alpha by magnitude
                pos = val > 0
                alpha = abs(val / absmax)
                #Left, right
                iys = iy_points[my]
                iye = iy_points[my + 1]
                imarr[ixs:ixe, iys:iye] = POS if pos else NEG
                imarr[ixs:ixe, iys:iye, 3] = (imarr[ixs:ixe, iys:iye, 3].astype(np.float64) * alpha).astype(np.uint8)
        imarr = Image.fromarray(imarr, "RGBA").filter(ImageFilter.BoxBlur(BLUR_CELLS * destination.size[0] / ...))
        if pixelmask and self.pixelmask is not None:
            # Tidy the edges
            # Make the mask an Image, mode=L
            mask = ...
            imarr.paste((0, 0, 0, 0), mask)
        elif pixelmask:
            print("Error: Could not filter by pixel mask as no mask exists")
        destination.putalpha(255)
        return Image.alpha_composite(destination, imarr)


class Layer:
    "Class representing ... a single source type"

    def __init__(self, floorplans: dict, exposure: int):
        self.exposure = exposure
        self.overlays = {
            _id: Overlay(_id, floor.overlay_dimensions, floor.floorplan_dimensions, floor.mask, exposure)
            for _id, floor in floorplans.items()
        }

    def set_observations(self, observations: dict):
        "Set the layer to contain passed observations, clearing any previous. Pass floor object dictionary"
        bins = {id: dict() for id in self.overlays.keys()}
        for id, placeable in observations.items():
            ...

    def get_deltas(self, masked: bool = True, exposure: int = 0) -> dict:
        """ Return the delta data for each overlay in the layer.
        If exposure 0 (default), will provide all available exposures squashed.
        For more info, see Overlay.get_delta """
        return {_id: over.get_delta(masked, exposure) for _id, over in self.overlays.items()}

    def get_full(self, masked: bool = True, exposure: int = 0) -> dict:  #pragma: no cover
        return {_id: over.get_full(masked, exposure) for _id, over in self.overlays.items()}

    def copy(self, flatten: bool = True):
        """ Return a copy of the layer ... squash exposures into 1 frame
        For more info, see Overlay.copy() """
        ly = Layer({}, 1 if flatten else self.exposure)
        ly.overlays = {_id: over.copy(flatten) for _id, over in self.overlays.items()}
        return ly

    def verify_and_update(self, floorplans: dict) -> None:
        """ Ensure overlays are compatible with current floorplans (Floor objects).
        Throws ModelException in case of dimension mismatch. ... """
        for _id, floor in floorplans.items():
            if _id not in self.overlays:
                # ... in the floorplans but not in the overlays
                print("Info: Creating new overlay for ...")
                ...
            else:
                self.overlays[_id].verify_and_update(floor)


class Overlay:
    "Class representing ... a single floorplan"

    def __init__(self, floorid: str, overlay_dimensions: tuple, real_dimensions: tuple, floormask: np.ndarray, exposure: int):
        self.floorid = floorid
        #self.observations = dict()
        self.overlay_dimensions = overlay_dimensions
        self.real_dimensions = real_dimensions
        self.mask = floormask
        assert ...
        self.exposure = exposure
        # Exposure queue, shape (exp,x,y)
        self.__unfixed_observations = np.zeros(exposure)
        self.__masked_dataoverlay = np.zeros((exposure,) + overlay_dimensions, dtype="float32")
        self.__unmasked_dataoverlay = np.zeros((exposure,) + overlay_dimensions, dtype="float32")

    def set(self, unfixed_count: np.ndarray, masked_overlay: np.ndarray, unmasked_overlay: np.ndarray):
        "Sets internal observation data. Not for general ..."
        self.__unfixed_observations = unfixed_count
        self.__masked_dataoverlay = masked_overlay
        self.__unmasked_dataoverlay = unmasked_overlay

    def roll(self) -> None:
        "Roll the exposure, preparing for a new frame ..."
        ...

    def clear(self) -> None:
        "... accumulated observation data including exposure"
        self.__unfixed_observations[:] = 0
        self.__masked_dataoverlay[:] = 0
        self.__unmasked_dataoverlay[:] = 0

    def copy(self, flatten: bool = False):
        """ Return a copy of the overlay. If flatten, squash (mean) exposure window into 1 frame. """
        if flatten:
            cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, 1)
            cp.__unfixed_observations[0] = ...
            ...
        else:
            cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, self.exposure)
            cp.__unfixed_observations = self.__unfixed_observations.copy()
            cp.__masked_dataoverlay = self.__masked_dataoverlay.copy()
            cp.__unmasked_dataoverlay = self.__unmasked_dataoverlay.copy()
        return cp

    def add(self, observations: dict) -> None:
        fixed = {}
        unfixed = {}
        for id, placeable in observations.items():
            if (placeable.x != None and placeable.y != None) or placeable.has_mask_override:
                fixed[id] = placeable
            else:
                unfixed[id] = placeable
        self.__add_unfixed_locations(unfixed)
        for placeable in fixed.values():
            # We store both a copy of the m(asked)_possible locations
            # and the u(n)m(asked)_possible locations
            um_possible = np.zeros(self.overlay_dimensions, dtype=np.bool_)
            if placeable.has_mask_override:
                # Even if a floorplan mask is in place, ... override takes precedent
                um_possible = placeable.mask_override
                m_possible = placeable.mask_override
            else:
                if placeable.variance < Model.VARIANCE_THRESHOLD:
                    # Calculated minimum reach
                    # Account for change of axis
                    um_possible[-int(placeable.y / Model.CELL_SIZE_M), int(placeable.x / Model.CELL_SIZE_M)] = 1
                else:
                    # Store parsed location in x,y ...
                    for x in range(um_possible.shape[0]):
                        for y in range(um_possible.shape[1]):
                            # For each square, see if its centre is close enough
                            # to be within variance metres
                            d = np.linalg.norm((np.array([x, y]) + 0.5) * Model.CELL_SIZE_M - ...)
                            um_possible[x, y] = d <= placeable.variance
                # m(asked)_possible is a copy of the u(n)m(asked)_possible's with the floor mask applied
                m_possible = ...
            # ... hasn't found any near enough to call near
            ...
            # Ignore floor mask
            self.__unmasked_dataoverlay[0][um_possible] += 1.0 / um_possible.sum()
            # Include floor mask
            self.__masked_dataoverlay[0][m_possible] += 1.0 / m_possible.sum()

    def __add_unfixed_locations(self, unfixed: dict) -> None:
        self.__unfixed_observations[0] += len(unfixed)

    def get_delta(self, masked: bool = True, exposure: int = 0) -> np.ndarray:
        """ Returns a ... select data masked at input, else unmasked data.
        Set exposure to specify mean smoothing on the first n frames of stored exposure,
        default (0) combines all available frames, 1 gives only the latest frame (no smoothing). """
        window = self.exposure if exposure == 0 else exposure
        if masked:
            return self.__masked_dataoverlay[:window].mean(axis=0)
        else:
            return self.__unmasked_dataoverlay[:window].mean(axis=0)

    def get_unfixed_observations(self, exposure: int = 0) -> float:
        """ Return how many unfixed observations were passed. For more ... """
        window = self.exposure if exposure == 0 else exposure
        return self.__unfixed_observations[:window].mean(axis=0)

    def get_full(self, masked: bool = True, exposure: int = 0) -> np.ndarray:  #pragma: no cover
        """ Returns a copy of the full client ... """
        data = self.get_delta(masked, exposure)
        window = self.exposure if exposure == 0 else exposure
        # Distribute unfixed observations evenly across the floorplan (or mask)
        mask = self.mask if masked else np.ones(self.overlay_dimensions, dtype=np.bool_)
        data[mask] += self.__unfixed_observations[:window].mean(axis=0) / mask.sum()
        return data

    def verify_and_update(self, floor: Floor) -> None:
        "Verifies overlay is compatible with passed floor, ..."
        if self.overlay_dimensions != floor.overlay_dimensions or self.real_dimensions != floor.floorplan_dimensions:
            raise Model.ModelException("Error: Overlay and Floorplan dimension mismatch for FPID={}".format(floor.floorplan.id))
        # ... mask in case of change
        # Note this does not change existing data, only new observations
        self.mask = floor.mask


class Model:
    LAYER_SNAP_WIFI = 1
    LAYER_SNAP_BT = 2
    LAYER_MVSENSE = 3
    LAYERS_ALL = {LAYER_SNAP_WIFI, LAYER_SNAP_BT, LAYER_MVSENSE}
    CONFIG_PATH = ...
    CELL_SIZE_M = ...
    VARIANCE_THRESHOLD = np.hypot(*(2 * (CELL_SIZE_M / 2,)))
    # This means let x = cell_s_m / 2; V_T = sqrt(x^2+x^2)
    # I wish I was kidding
    # This is 0.707 iff cell_s_m ...
    DEFAULT_EXPOSURE = 3
    DEFAULT_PASSWORD = '<PASSWORD>'
    __BAD_LAYER = "Layer {} not defined. Use internally defined layer (eg Model.LAYER_*)"

    class ModelException(Exception):
        pass

    class BadRequest(Exception):
        pass

    def __init__(self, network_id: str = None, layers: set = {}):
        "Initialise model. ..."
        ...
        self.network_id = self.query_obj.network_id
        self.plans = self.pullFloors()
        self.getAPs()
        self.query_obj.pullCameras()
        self.data_layers = dict()
        self.webhook_threshold = 0.35
        for layer in layers:
            if ...:
                ...
        self.timeslot = TimeSlotAvg.load(self.data_layers, self.plans)
        self.webhook_addresses = []

    ### Floorplans
    def pullFloors(self) -> dict:
        "Pull floorplans from the network, construct blank floor layer for each"
        ...
        self.plans = {_id: Floor(fp) for _id, fp in floorplans.items()}
        return self.plans

    def getFloorplanSummary(self) -> dict:
        "Get a dict of ... plan IDs and names"
        return {k: v.floorplan.name for k, v in self.plans.items()}

    def findFloorplanByName(self, name) -> str:
        "Find a floorPlanId from the floor name"
        for k in self.plans:
            if self.plans[k].floorplan.name == name:
                return k
        return None

    def setBoundsMask(self, floor_id: str, on: bool, blindspots=None, wallthreshold: float = None) -> None:
        """ Generate a mask from BoundaryDetector for areas that people cannot possibly be on the given floor
        eg outside high floors. Blindspots should be a Numpy array, tuple or nested list
        of the form [[x1,x2,y1,y2],...] """
        if blindspots == None:
            pass
        elif not (isinstance(blindspots, (np.ndarray, list, tuple))):
            raise TypeError("Invalid type for blindspots parameter. Should be of type np.array, list or tuple")
        elif False in [len(spot) == 4 for spot in blindspots]:
            raise ValueError("Invalid ... blindspots parameter. Should be of shape (n,4), got {}".format(str(blindspots)))
        if wallthreshold == None:
            pass
        elif not isinstance(wallthreshold, (int, float)):
            raise TypeError("... got {}".format(str(type(wallthreshold))))
        if floor_id not in self.plans.keys():
            raise Model.ModelException("No such floor: ", floor_id)
        floor = ...

    def getAPs(self) -> None:
        "Get APs and store internally in relevant floor objects"
        aps = self.query_obj.pullAPs()
        for mac, ap in aps.items():
            if ap.floorPlanId in self.plans.keys():
                self.plans[ap.floorPlanId].aps[mac] = ap

    ### Scanning
    def __validate_scanning(self, SAPI_packet) -> None:
        if not isinstance(SAPI_packet, dict):
            raise Model.BadRequest("... a dict".format(str(type(SAPI_packet))))
        try:
            source_net_id = SAPI_packet["data"]["networkId"]
            if SAPI_packet[SECRET_K] != self.secret:
                raise Model.BadRequest("Request has bad authentication secret - rejecting data")
        except KeyError as ke:
            raise Model.BadRequest("Request is missing data: " + str(ke))
        if source_net_id != self.network_id:
            raise Model.BadRequest("Request has data from wrong network: expected {} got {}".format(self.network_id, source_net_id))

    def get_type(SAPI_packet: dict) -> int:
        "Get the Model layer constant for ... packet"
        api_layer_val = APIQuery.get_SAPI_type(SAPI_packet)
        if api_layer_val == "WiFi":
            return Model.LAYER_SNAP_WIFI
        elif api_layer_val == "Bluetooth":
            return Model.LAYER_SNAP_BT
        else:
            raise Model.ModelException(Model.__BAD_LAYER.format(api_layer_val))

    def __generate_person_obs(self) -> dict:
        "Indexes observed person objects with an arbitrary key"
        obs = self.query_obj.get_camera_observations()
        # Zip [0,n) with n obs objects, ... dictionary
        return dict(zip(range(len(obs)), obs))

    def provide_scanning(self, SAPI_packet: dict) -> None:
        "Update model with SAPI data"
        # Raise a racket if there's something wrong
        self.__validate_scanning(SAPI_packet)
        dest_layer = Model.get_type(SAPI_packet)
        observations = self.query_obj.extract_SAPI_observations(SAPI_packet)
        self.data_layers[dest_layer].set_observations(observations)

    ### Camera and MVSense
    def setFOVs(self, mac: str, coords: set) -> None:
        """ Set the FOV coords from given camera (by mac). Coords should be iterable of shape (n,2). ... """
        #Check camera with mac exists
        if mac in self.query_obj.getCameras().keys():
            #Check coords of correct shape and iterable, or of len 0 to unset
            try:
                if len(coords) == 0 or False not in [len(c) == 2 for c in coords]:
                    cam = self.query_obj.cameras[mac]
                    shape = self.plans[cam.floorPlanId].overlay_dimensions
                    cam.set_FOV(shape, coords)
                else:
                    raise ValueError
            except (ValueError, TypeError) as err:
                raise err.__class__("Coordinates supplied of incorrect shape or type, should be iterable shape (n,2)")
        else:
            raise Model.ModelException("Camera with mac {} not found".format(mac))

    def pull_mvsense_data(self):
        "Pull live MVSense data from cameras and feed into data layer"
        self.query_obj.updateCameraMVSenseData()
        observations = self.__generate_person_obs()
        self.data_layers[Model.LAYER_MVSENSE].set_observations(observations)

    def spike(self, layer, threshhold) -> dict:
        #add floorplan ID into ...
        ...
        clusters = np.zeros(..., dtype="float32")
        for x in range(len(clusters)):
            for y in range(len(clusters[0])):
                clusters[x, y] = layer[3 * x:3 * (x + 1), 3 * y:3 * (y + 1)].sum()
        busiest = 0
        busiest_location = None
        for x in range(len(clusters)):
            for y in range(len(clusters[0])):
                if ...:
                    ...

    def nearestCameras(self, n, floor, event):
        # ... a list of camera objects
        # They call me the comprehension king
        # Maybe after we spend 30 minutes ...
        cameras = ...
        event_root = ...
        FOVcams = {cam for cam in cameras if cam.has_FOV()}
        nonFOVcams = cameras - FOVcams
        hasView = {cam for cam in FOVcams if cam.get_FOV()[event_root] == True}
        if len(hasView) > 0:
            return ("Covered", list(hasView))
        distances = dict()
        for cam in FOVcams:
            fov = cam.get_FOV()
            mindist = 9999999
            for x, row in enumerate(fov):
                for y, cell in enumerate(row):
                    ...
            distances[cam] = mindist
        for cam in nonFOVcams:
            distances[cam] = np.hypot(cam.x - event[0], cam.y - event[1])
        top_n = [cam for cam in sorted(distances.items(), key=lambda x: x[1])[:n]]
        return ("Best Effort", top_n)

    def getCameraImage(self, camera) -> dict:
        #returns a dictionary containing a link to the image
        response = self.query_obj.getCameraSnap(camera)
        return response

    def snapshotWebhook(self, snapshot: dict):
        for address in self.webhook_addresses:
            response = requests.post(address, ...)
        ...

    def addWebhook(self, webhookAddress):
        ...
        self.webhook_addresses.append(webhookAddress)

    ### Historical
    def put_historical(self) -> None:
        "Updates the average data for the current TimeSlotAvg object"
        self.update_timeslot()  # Get the current timeslot object
        self.timeslot.update_avg_data(self.data_layers)

    def comp_historical(self, floorPlanId: str):
        "Get the relative busyness of ... using all layers"
        self.update_timeslot()  # Get the current timeslot object
        hist_fp_data = self.timeslot.get_floor_avgs(floorPlanId)
        ...

    def update_timeslot(self) -> None:
        ...
        self.timeslot.verify_and_update_struct(self.data_layers, self.plans)

    ### Providers
    def poll_layer(self, layer: int, exposure: int) -> dict:
        return self.data_layers[layer].get_full(exposure=exposure)

    def render_delta(self, floorPlanId: str) -> Image:
        "Get the current datamap in terms of absolute delta from mean"
        datamap = self.comp_historical(floorPlanId)
        return ...

    def render(self, floorPlanId: str) -> Image:
        "... rendered on the floor plan"
        return self.plans[floorPlanId].render_overlay(
            self.data_layers[Model.LAYER_SNAP_WIFI].overlays[floorPlanId].get_delta(exposure=1)
        )

    def debug_render(self, fpid) -> Image:
        import datetime
        dm = self.plans[fpid]
        dims = dm.overlay_dimensions
        testarr = np.zeros(dims).ravel()
        n = (datetime.datetime.now().second / 60) * len(testarr)
        testarr[:int(n)] = 1
        return dm.render_overlay(testarr.reshape(dims))

    def update(self) -> None:
        "Update non-webhook (non-SAPI) layers, write ... history"
        self.pull_mvsense_data()
        self.put_historical()
        #spike detect
        POST_data = {}
        for fpid, floor in self.plans.items():
            ...
            if ...:
                idealality, cameras = self.nearestCameras(2, floor, spikedict)  #need to get relevant floor obj
                POST_data[fpid] = {"type": ...}
        if POST_data != {}:
            self.snapshotWebhook(POST_data)

    ### Configuration
    STORE_WEBHOOK = "webhooklist"
    STORE_SELECTED = "selectednet"
    STORE_LAYERS = "layers"
    STORE_FOVCOORDS = "fov_coords"
    STORE_FOVMASK = "fov_mask"
    STORE_BMBOXES = "bm_boxes"
    STORE_BDENABLED = ...
    STORE_SECRET = "sapisecret"
    STORE_TOKEN = "<PASSWORD>_token"
    STORE_PASSWORD = "<PASSWORD>"
    STORE_WHTHRESHOLD = "webhook_threshold"

    def update_model_config(self, netid, conf_dict):
        layers = conf_dict.get(Model.STORE_LAYERS, set())
        try:
            if netid != self.network_id:
                raise AttributeError
        except AttributeError:
            self.query_obj = APIQuery(netid)
            self.populate(layers)
        self.secret = conf_dict.get(Model.STORE_SECRET)
        self.validator_token = conf_dict.get(Model.STORE_TOKEN)
        self.webhook_addresses = conf_dict.get(Model.STORE_WEBHOOK, list())
        self.password = conf_dict.get(Model.STORE_PASSWORD, Model.DEFAULT_PASSWORD)
        self.webhook_threshold = conf_dict.get(Model.STORE_WHTHRESHOLD, ...)
        for mac, coords in conf_dict.get(Model.STORE_FOVCOORDS, dict()).items():
            self.setFOVs(mac, coords)
        for fpid, boxes in conf_dict.get(Model.STORE_BMBOXES, dict()).items():
            on = conf_dict.get(Model.STORE_BDENABLED, {fpid: False})[fpid]
            self.setBoundsMask(fpid, on, boxes)

    def serialize(self):
        conf = dict()
        conf[Model.STORE_SECRET] = self.secret
        conf[Model.STORE_TOKEN] = self.validator_token
        conf[Model.STORE_LAYERS] = set(self.data_layers.keys())
        conf[Model.STORE_WEBHOOK] = self.webhook_addresses
        conf[Model.STORE_WHTHRESHOLD] = self.webhook_threshold
        conf[Model.STORE_PASSWORD] = self.password
        conf[Model.STORE_FOVCOORDS] = {...: cam.get_fov_coords() for cam in self.query_obj.cameras.values()}
        conf[Model.STORE_BMBOXES] = {fpid: fp.bm_boxes for fpid, fp in self.plans.items()}
        conf[Model.STORE_BDENABLED] = {fpid: fp.mask_enabled for fpid, fp in self.plans.items()}
        return conf

    def write_config_data(self):
        ...

    def read_config_data(self):
        if ...:
            ...
            config_data = pickle.load(f)
            selected_id = config_data[Model.STORE_SELECTED]
            select_data = config_data[selected_id]
            self.update_model_config(selected_id, select_data)
        else:
            print("Warning: config file not found")
            try:
                self.update_model_config(None, {Model.STORE_LAYERS: ...})
            except Exception:
                ...

    def update_layers(self) -> None:
        """ Must be called when a Floor is added, removed, or altered, including by set_bounds_mask
        This will update the Overlay objects to reflect this change
        May throw error if dimensions do not equate and historical data would be invalidated """
        for layer in self.data_layers.values():
            layer.verify_and_update(self.plans)

    ### Access
    ...
Coords pertain", "self.pixelmask = None self.mask = np.ones(self.overlay_dimensions,dtype=np.bool_) self.aps = {} self.bm_boxes = [] def", "u(n)m(asked)_possible's with the floor mask applied m_possible = np.logical_and( um_possible, self.mask, dtype=np.bool_ )", "idealality, cameras = self.nearestCameras(2, floor, spikedict) #need to get relevant floor obj POST_data[fpid]", "is not current\" if not (self.timeslot.is_current_time()): self.timeslot.write() self.timeslot = TimeSlotAvg.load( self.data_layers, self.plans )", "= cam.get_FOV() mindist = 9999999 for x,row in enumerate(fov): for y, cell in", "stored self.count[l_id] = { fpid:0 for fpid in layer.overlays.keys() } @staticmethod def load(", "debug=None) -> bool: \"Returns True iff timeslot is for current time\" if debug", "for fpid, floor in self.plans.items(): spikedict = self.spike(self.comp_historical(fpid), self.webhook_threshold) if spikedict['spike'] == True:", "# As per Scanning API v3 SECRET_K = \"secret\" class Floor: \"Wrapper class", "return Image.alpha_composite(destination,imarr) class Layer: \"Class representing a data layer: a series of overlays", "floorplan image in heatmap form. 
If pixelmask and image has bounds mask set,", "return {'spike':busiest > threshhold, 'location':busiest_location} def nearestCameras(self, n:int, floor:Floor,spikeDict:dict)->tuple: #returns a list of", ") #splits floorplan into 3m^2 areas clusters = np.zeros(dims, dtype=\"float32\") for x in", "Throws ModelException if dimensions do not match \"\"\" for l_id in set(data_layers.keys()).difference(self.data_layers.keys()): #", "self.plans.items() } conf[Model.STORE_BDENABLED] = { fpid: fp.mask_enabled for fpid,fp in self.plans.items() } return", "return self.__masked_dataoverlay[:window].mean(axis=0) else: return self.__unmasked_dataoverlay[:window].mean(axis=0) def get_unfixed_observations(self, exposure:int=0)->float: \"\"\" Return how many unfixed", "= dist distances[cam] = mindist for cam in nonFOVcams: distances[cam] = np.hypot( cam.x-event[0],", "= np.floor(np.linspace( 0, imarr.shape[0] + self.margin_px[0], overlay.shape[0] + 1 )).astype(\"int32\") iy_points = np.floor(np.linspace(", "cam in self.query_obj.cameras.values() } conf[Model.STORE_BMBOXES] = { fpid: fp.bm_boxes for fpid,fp in self.plans.items()", "for id in self.overlays.keys() } for id, placeable in observations.items(): bins[placeable.floorPlanId][id] = placeable", "if floor_id not in self.plans.keys(): raise Model.ModelException(\"No such floor: \",floor_id) floor = self.plans[floor_id]", "\"\"\" Return a copy of the layer. Flatten squashes exposures into 1 frame", "layers are already flat so exposure of 1 avg_um_overlay = avg_layer.overlays[o_key].get_delta(masked=False,exposure=1) avg_m_overlay =", ") ) if isinstance(self.pixelmask, np.ndarray) and pixelmask: # Tidy the edges # Make", "for x in range(len(clusters)): for y in range(len(clusters[0])): if clusters[x][y] > busiest: busiest", "\"\"\" Returns a copy of the delta overlay (only fixed observations). 
If masked,", "def __init__(self, data_layers:dict, day:int, hour:int): self.day = day self.hour = hour self.data_layers =", "ix_points[mx] ixe = ix_points[mx+1] for my in range(overlay.shape[1]): val = overlay[mx,my] if val==0:", "in self.overlays.items() } def copy(self,flatten:bool=True): \"\"\" Return a copy of the layer. Flatten", ") if blindspots != None: for spot in blindspots: bd.add_blindspot(*tuple(spot)) bd.run() return bd.generate_graphic()", "self.validator_token = conf_dict.get(Model.STORE_TOKEN) self.webhook_addresses = conf_dict.get(Model.STORE_WEBHOOK,list()) self.password = conf_dict.get(Model.STORE_PASSWORD,Model.DEFAULT_PASSWORD) self.webhook_threshold = conf_dict.get(Model.STORE_WHTHRESHOLD,self.webhook_threshold) for", "day:{self.day}, hour:{self.hour}\") def write(self): \"Save TimeSlotAvg object to a compressed file\" filepath =", "Image.fromarray(imarr,\"RGBA\").filter( ImageFilter.BoxBlur( BLUR_CELLS * destination.size[0] / overlay.shape[1] ) ) if isinstance(self.pixelmask, np.ndarray) and", "time\" if debug != None: return debug curr_day, curr_hour = TimeSlotAvg.get_time() return curr_day", "in set(floorplans).difference(self.overlays.keys()): # A floorplan not represented in the floorplans but not in", "for fpid in set(floorplans).difference(self.overlays.keys()): # A floorplan not represented in the floorplans but", "new_m_overlay ) / (c+1) upd_unfixed_obs = ( avg_unfixed_obs * c + new_unfixed_obs )", "if theres something wrong self.__validate_scanning(SAPI_packet) dest_layer = Model.get_type(SAPI_packet) observations = self.query_obj.extract_SAPI_observations(SAPI_packet) self.data_layers[dest_layer].set_observations(observations) ###", "err: raise err.__class__(\"Coordinates supplied of incorrect shape or type, should be iterable shape", "if blindspots != None: for spot in blindspots: bd.add_blindspot(*tuple(spot)) bd.run() self.pixelmask = bd.getBoundaryMask()", "if cell: dist = np.hypot( (x+0.5)-event[0], (y+0.5)-event[1] ) if dist < 
mindist: mindist", "correct shape and iterable, or of len 0 to unset try: if len(coords)", "def __add_fixed_locations(self,fixed:dict) -> None: for placeable in fixed.values(): # We store both a", "def load( data_layers:dict, floors:dict, day=None, hour=None ): \"Static factory method; Load TimeSlotAvg object", "c + new_um_overlay ) / (c+1) upd_m_overlay = ( avg_m_overlay * c +", "(current - historical) collective /= len(self.data_layers) return collective def update_timeslot(self): \"Calls the factory", "c + new_unfixed_obs ) / (c+1) # save the new data overlay to", "day=None, hour=None ): \"Static factory method; Load TimeSlotAvg object from compressed pickle file", "of the u(n)m(asked)_possible's with the floor mask applied m_possible = np.logical_and( um_possible, self.mask,", "else: print(\"Warning: config file not found\") try: self.update_model_config(None, {Model.STORE_LAYERS: Model.LAYERS_ALL} ) except APIQuery.APIException:", "self.mask if masked else np.ones(self.overlay_dimensions,dtype=np.bool_) data[mask] += self.__unfixed_observations[:window].mean(axis=0) / mask.sum() return data def", "mismatch for FPID={}\".format(floor.floorplan.id)) # Update mask in case of change # Note this", "Return how many unfixed observations were passed. For more details on exposure, see", "the floor name\" for k in self.plans: if self.plans[k].floorplan.name == name: return k", "areas that people cannot possibly be on the given floor eg outside high", "floor that people cannot possibly be or are to be ignored eg outside", "pickle import datetime import requests import hashlib parentddir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir)) sys.path.append(parentddir) from", "cannot possibly be or are to be ignored eg outside high floors. 
Blindspots", "= self.data_layers[lid].overlays[floorPlanId].get_delta(masked=mask_enabled) historical = hist_fp_data[lid].get_delta(masked=mask_enabled,exposure=1) collective += (current - historical) collective /= len(self.data_layers)", "[[x1,x2,y1,y2],...] \"\"\" self.bm_boxes = blindspots if wallthreshold == None: bd = BoundaryDetector( self.floorplan.get_image()", "self.__masked_dataoverlay.copy() cp.__unmasked_dataoverlay = self.__unmasked_dataoverlay.copy() return cp def add(self,observations:dict) -> None: fixed = {}", ") ) # This means let x = cell_s_m / 2; V_T =", "floorplan (or mask) mask = self.mask if masked else np.ones(self.overlay_dimensions,dtype=np.bool_) data[mask] += self.__unfixed_observations[:window].mean(axis=0)", "k in self.plans: if self.plans[k].floorplan.name == name: return k return None def setBoundsMask(self,floor_id:str,on:bool,blindspots=None,wallthreshold:float=None)", "self.count[l_key][o_key] # update the average by adding current values to sum total and", "# Even if a floorplan mask is in place, mask override takes precident", "else set to 1 self.count[l_id] = { ov_id:self.count[l_id].get(ov_id,1) for ov_id in data_layers[l_id].overlays.keys() }", "pixel level to mask level #Assuming cell >> pixel #Mask (1msq-scale, small-dims) dims", "with open( Model.CONFIG_PATH, 'rb' ) as f: config_data = pickle.load(f) selected_id = config_data[Model.STORE_SELECTED]", "covering all floorplans with data from a single source type\" def __init__(self,floorplans:dict,exposure:int): self.exposure", ") or placeable.has_mask_override: fixed[id] = placeable else: unfixed[id] = placeable self.__add_fixed_locations(fixed) self.__add_unfixed_locations(unfixed) def", "in metres and px self.margin_m = ( Model.CELL_SIZE_M - (self.floorplan_dimensions[0] % Model.CELL_SIZE_M), Model.CELL_SIZE_M", "for _id, over in self.overlays.items() } return ly def clear(self)->None: \"Clear all member", "# This is 0.707 iff cell_s_m = 1 DEFAULT_EXPOSURE = 3 
DEFAULT_PASSWORD =", "__add_fixed_locations(self,fixed:dict) -> None: for placeable in fixed.values(): # We store both a copy", "netid, conf_dict): layers = conf_dict.get(Model.STORE_LAYERS,set()) try: if netid != self.network_id: raise AttributeError except", "Render overlay onto the floorplan image in heatmap form. If pixelmask and image", "datetime.timezone( offset=datetime.timedelta(hours=0) ) ) curr_day = curr_dt.weekday() curr_hour = curr_dt.hour return curr_day, curr_hour", "previous. Pass floor object dictionary\" bins = { id : dict() for id", "As per Scanning API v3 SECRET_K = \"secret\" class Floor: \"Wrapper class for", "self.validator_token conf[Model.STORE_LAYERS] = set(self.data_layers.keys()) conf[Model.STORE_WEBHOOK] = self.webhook_addresses conf[Model.STORE_WHTHRESHOLD] = self.webhook_threshold conf[Model.STORE_PASSWORD] = self.password", "import pickle import datetime import requests import hashlib parentddir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir)) sys.path.append(parentddir)", "Not for general use, instead use Overlay.add\" assert len(self.__unfixed_observations.shape) == len(unfixed_count.shape) assert len(self.__masked_dataoverlay.shape)", "masked: return self.__masked_dataoverlay[:window].mean(axis=0) else: return self.__unmasked_dataoverlay[:window].mean(axis=0) def get_unfixed_observations(self, exposure:int=0)->float: \"\"\" Return how many", "raise ValueError except (ValueError, TypeError) as err: raise err.__class__(\"Coordinates supplied of incorrect shape", "u(n)m(asked)_possible locations um_possible = np.zeros(self.overlay_dimensions, dtype=np.bool_) if placeable.has_mask_override: # Even if a floorplan", "f) def read_config_data(self): if os.path.isfile(Model.CONFIG_PATH): with open( Model.CONFIG_PATH, 'rb' ) as f: config_data", "should default to (height, width) self.floorplan_dimensions = (self.floorplan.height,self.floorplan.width) self.overlay_dimensions = ( 
int(self.floorplan_dimensions[0]//Model.CELL_SIZE_M)+1, int(self.floorplan_dimensions[1]//Model.CELL_SIZE_M)+1", "transient data # Also we only need 1 frame to store average so", "placeable for fid, overlay in self.overlays.items(): overlay.roll() obs = bins.get(fid) if obs !=", "defined in enviroment variable \\\"MERAKI_DASHBOARD_API_KEY\\\"\" self.read_config_data() self.write_config_data() def populate(self,layers:set): assert isinstance(self.query_obj, APIQuery) self.network_id", "kidding # This is 0.707 iff cell_s_m = 1 DEFAULT_EXPOSURE = 3 DEFAULT_PASSWORD", "self.overlays.keys() } for id, placeable in observations.items(): bins[placeable.floorPlanId][id] = placeable for fid, overlay", "self.__masked_dataoverlay[:window].mean(axis=0) else: return self.__unmasked_dataoverlay[:window].mean(axis=0) def get_unfixed_observations(self, exposure:int=0)->float: \"\"\" Return how many unfixed observations", ") as f: config_data = pickle.load(f) selected_id = config_data[Model.STORE_SELECTED] select_data = config_data[selected_id] self.update_model_config(selected_id,select_data)", "all layers\" self.update_timeslot() # Get the current timeslot object hist_fp_data = self.timeslot.get_floor_avgs(floorPlanId) collective", "for x,row in enumerate(fov): for y, cell in enumerate(row): if cell: dist =", "or nested list of the form [[x1,x2,y1,y2],...] \"\"\" self.bm_boxes = blindspots if wallthreshold", "def render_abs(self,floorPlanId:str)->Image: \"Get latest frame of WiFi layer rendered on the floor plan\"", "for a given SAPI packet\" api_layer_val = APIQuery.get_SAPI_type(SAPI_packet) if api_layer_val == \"WiFi\": return", "get_unfixed_observations(self, exposure:int=0)->float: \"\"\" Return how many unfixed observations were passed. 
For more details", "mask as no mask exists\") destination.putalpha(255) return Image.alpha_composite(destination,imarr) class Layer: \"Class representing a", "from wrong network: expected {} got {}\".format(self.network_id,source_net_id)) def get_type(SAPI_packet:dict) -> int: \"Get the", "mask with given parameters. For more info see Floor.set_bounds_mask \"\"\" if wallthreshold ==", "= {} for id, placeable in observations.items(): if ( placeable.x != None and", "over in layer.overlays.items(): # Get masked and unmasked deltas, and unfixed count from", "if dimensions do not equate and historical data would be invalidated \"\"\" for", "0, ix+self.margin_px[0], mx+1 )).astype(\"int32\") iy_divs = np.floor(np.linspace( 0, iy+self.margin_px[1], my+1 )).astype(\"int32\") for x", "= ( (len(layer)//3)+1, (len(layer[0])//3)+1 ) #splits floorplan into 3m^2 areas clusters = np.zeros(dims,", "are to be ignored eg outside high floors. Blindspots should be a Numpy", "testarr = np.zeros(dims).ravel() n = (datetime.datetime.now().second / 60) * len(testarr) testarr[:int(n)] = 1", "= TimeSlotAvg.get_time() return curr_day == self.day and curr_hour == self.hour def update_avg_data(self, current_data:dict,", "def pullFloors(self) -> dict: \"Pull floorplans from the network, construct blank floor layer", "see Overlay.copy() \"\"\" ly = Layer({}, 1 if flatten else self.exposure ) ly.overlays", "default new_um_overlay = over.get_delta(masked=False) new_m_overlay = over.get_delta(masked=True) new_unfixed_obs = over.get_unfixed_observations() # Similar from", "possibly be on the given floor eg outside high floors. 
Blindspots should be", "into data layer\" self.query_obj.updateCameraMVSenseData() observations = self.__generate_person_obs() self.data_layers[Model.LAYER_MVSENSE].set_observations(observations) def spike(self, layer, threshhold)->dict: #add", "config file\") class TimeSlotAvg: class TimeSlotAvgException( Exception ): pass DATA_DIR = \"historical_data\" def", "put_historical(self) -> None: \"Updates the average data for the current TimeSlotAvg object\" self.update_timeslot()", "tsa = TimeSlotAvg( data_layers, day, hour ) tsa.verify_and_update_struct(data_layers, floors) tsa.write() else: if __name__!=\"__main__\":", "type for blindspots parameter. Should be of type np.array, list or tuple\") elif", "self.query_obj.pullCameras() self.data_layers = dict() self.webhook_threshold = 0.35 for layer in layers: if layer", "self.CONFIG_PATH, 'wb' ) as f: pickle.dump(config_data, f) def read_config_data(self): if os.path.isfile(Model.CONFIG_PATH): with open(", "self.timeslot.write() self.timeslot = TimeSlotAvg.load( self.data_layers, self.plans ) self.timeslot.verify_and_update_struct( self.data_layers, self.plans ) ### Providers", "# Get the respective average layer avg_layer = self.data_layers[l_key] # For each overlay", "= ix_divs[x] ib = ix_divs[x+1] for y in range(my): #Left, right il =", "dictionary containing a link to the image response = self.query_obj.getCameraSnap(camera) return response def", "= np.linalg.norm( (np.array([x,y]) + 0.5) * Model.CELL_SIZE_M - client_loc ) um_possible[x,y] = d", "self.count[l_id] = { fpid:0 for fpid in layer.overlays.keys() } @staticmethod def load( data_layers:dict,", "Model.BadRequest(\"Request is missing data: \" + str(ke) ) if source_net_id != self.network_id: raise", "unfixed_count:np.ndarray, masked_overlay:np.ndarray, unmasked_overlay:np.ndarray): \"Sets internal observation data. 
Not for general use, instead use", "a floorplan mask is in place, mask override takes precident um_possible = placeable.mask_override", "in observations.items(): if ( placeable.x != None and placeable.y != None ) or", "= unmasked_overlay def roll(self) -> None: \"Roll the exposure, preparing for a new", "store average so flatten self.data_layers[l_id] = layer.copy(flatten=True) self.data_layers[l_id].clear() # Set a count for", "from mean\" datamap = self.comp_historical(floorPlanId) return self.plans[floorPlanId].render_overlay(datamap) def render_abs(self,floorPlanId:str)->Image: \"Get latest frame of", "val==0: continue #Colour by sign, scale alpha by magnitude pos = val >", "val > 0 alpha = abs(val / absmax) #Left, right iys = iy_points[my]", "PIL import Image, ImageFilter import bz2 import pickle import datetime import requests import", "IDs and names\" return { k : v.floorplan.name for k,v in self.plans.items() }", "to mask level #Assuming cell >> pixel #Mask (1msq-scale, small-dims) dims mx =", "Image, ImageFilter import bz2 import pickle import datetime import requests import hashlib parentddir", "\"SnapshotData\", \"is_ideal\" : idealality} for i, cam in enumerate(cameras): response = self.getCameraImage(cam) POST_data[fpid][\"camera_data_\"", "floor obj POST_data[fpid] = {\"type\" : \"SnapshotData\", \"is_ideal\" : idealality} for i, cam", "layers = conf_dict.get(Model.STORE_LAYERS,set()) try: if netid != self.network_id: raise AttributeError except AttributeError: self.query_obj", "and feed into data layer\" self.query_obj.updateCameraMVSenseData() observations = self.__generate_person_obs() self.data_layers[Model.LAYER_MVSENSE].set_observations(observations) def spike(self, layer,", "-> dict: \"\"\" Return the delta data for each overlay in the layer.", "means let x = cell_s_m / 2; V_T = sqrt(x^2+x^2) # I wish", "unfixed observations were passed. 
For more details on exposure, see Overlay.get_delta \"\"\" window", "if exposure == 0 else exposure # Distribute unfixed observations evenly across the", "the other) # Account for margin between edge of floorplan and overlay ix_divs", "API key must be defined in enviroment variable \\\"MERAKI_DASHBOARD_API_KEY\\\"\" self.read_config_data() self.write_config_data() def populate(self,layers:set):", "{ cam.mac: cam.get_fov_coords() for cam in self.query_obj.cameras.values() } conf[Model.STORE_BMBOXES] = { fpid: fp.bm_boxes", "# For each square, see if it's centre is close enough to be", "pixel #Mask (1msq-scale, small-dims) dims mx = self.mask.shape[0] my = self.mask.shape[1] #Image-scale pixel", "sign, scale alpha by magnitude pos = val > 0 alpha = abs(val", "are printed Throws ModelException if dimensions do not match \"\"\" for l_id in", "'<PASSWORD>' __BAD_LAYER = \"Layer {} not defined. Use internally defined layer (eg Model.LAYER_*)\"", "il = iy_divs[y] ir = iy_divs[y+1] # As array is binary, mean gives", "frames of stored exposure, default (0) combines all available frames, 1 gives only", ") self.mask_enabled = False self.pixelmask = None self.mask = np.ones(self.overlay_dimensions,dtype=np.bool_) self.aps = {}", "def setFOVs(self,mac:str,coords:set)->None: \"\"\" Set the FOV coords from given camera (by mac). 
Coords", "curr_hour = TimeSlotAvg.get_time() return curr_day == self.day and curr_hour == self.hour def update_avg_data(self,", "= \"secret\" class Floor: \"Wrapper class for the floorplan object including model data\"", "/ um_possible.sum() # Include floor mask self.__masked_dataoverlay[0][m_possible] += 1.0 / m_possible.sum() def __add_unfixed_locations(self,unfixed:dict)", "return self.__unfixed_observations[:window].mean(axis=0) def get_full(self, masked:bool=True, exposure:int=0)->np.ndarray: #pragma: no cover \"\"\" Returns a copy", "the floorplan object including model data\" def __init__(self,floorplan:FloorPlan): self.floorplan = floorplan # Dimentions", "Make the mask an Image, mode=L mask = Image.fromarray(255*self.pixelmask.astype(np.uint8),\"L\") # Paste alpha on", "in layer.overlays.keys() } @staticmethod def load( data_layers:dict, floors:dict, day=None, hour=None ): \"Static factory", "self.__add_fixed_locations(fixed) self.__add_unfixed_locations(unfixed) def __add_fixed_locations(self,fixed:dict) -> None: for placeable in fixed.values(): # We store", "{ cam for cam in FOVcams if cam.get_FOV()[event_root]==True } if len(hasView)>0: return (\"Covered\",", "final image to keep overlay heatmap within bounds. 
\"\"\" POS = np.array([255,0,0,180],dtype=np.uint8) NEG", "1 return dm.render_overlay(testarr.reshape(dims)) def update(self)->None: \"Update non-webhook (non-SAPI) layers, write history\" self.pull_mvsense_data() self.put_historical()", "DATA_DIR = \"historical_data\" def __init__(self, data_layers:dict, day:int, hour:int): self.day = day self.hour =", "Scanning API v3 SECRET_K = \"secret\" class Floor: \"Wrapper class for the floorplan", "theres something wrong self.__validate_scanning(SAPI_packet) dest_layer = Model.get_type(SAPI_packet) observations = self.query_obj.extract_SAPI_observations(SAPI_packet) self.data_layers[dest_layer].set_observations(observations) ### Camera", "MVSense def setFOVs(self,mac:str,coords:set)->None: \"\"\" Set the FOV coords from given camera (by mac).", "If flatten, squash (mean) exposure window into 1 frame. \"\"\" if flatten: cp", "floors) return tsa def is_current_time(self, debug=None) -> bool: \"Returns True iff timeslot is", "ignored eg outside high floors. 
Blindspots should be a Numpy array, tuple or", "LAYER_MVSENSE\") def pull_mvsense_data(self): \"Pull live MVSense data from cameras and feed into data", "current_data.items(): # Get the respective average layer avg_layer = self.data_layers[l_key] # For each", "for l_id in data_layers.keys(): # Get count if exists, else set to 1", "import requests import hashlib parentddir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir)) sys.path.append(parentddir) from lib.APIQuery import APIQuery,", "LAYER_SNAP_WIFI = 1 LAYER_SNAP_BT = 2 LAYER_MVSENSE = 3 LAYERS_ALL = {LAYER_SNAP_WIFI, LAYER_SNAP_BT,", "* c + new_um_overlay ) / (c+1) upd_m_overlay = ( avg_m_overlay * c", "ap ### Scanning API (SAPI) def __validate_scanning(self,SAPI_packet:dict) -> None: if type(SAPI_packet) != dict:", "None: self.__unfixed_observations[0] += len(unfixed) def get_delta(self, masked:bool=True, exposure:int=0)->np.ndarray: \"\"\" Returns a copy of", "-> None: \"Updates an average model for a timeslot using the current model", ") self.margin_px = ( self.margin_m[0] * self.floorplan.px_per_m_h, self.margin_m[1] * self.floorplan.px_per_m_w ) self.mask_enabled =", "avg_um_overlay = avg_layer.overlays[o_key].get_delta(masked=False,exposure=1) avg_m_overlay = avg_layer.overlays[o_key].get_delta(masked=True,exposure=1) avg_unfixed_obs = over.get_unfixed_observations() # Count c =", "self.overlays[fpid] = Overlay(fpid, fp.overlay_dimensions, fp.floorplan_dimensions, fp.mask, self.exposure) for fpid, floor in floorplans.items(): self.overlays[fpid].verify_and_update(floor)", "conf_dict.get(Model.STORE_WHTHRESHOLD,self.webhook_threshold) for mac, coords in conf_dict.get(Model.STORE_FOVCOORDS,dict()).items(): self.setFOVs( mac, coords ) for fpid, boxes", "or tuple\") elif False in [ len(spot) == 4 for spot in blindspots", "in event ]) cameras = { cam for cam in self.query_obj.getCameras().values() if cam.floorPlanId", "else self.exposure ) ly.overlays = { _id : over.copy(flatten) for 
_id, over in", "window into 1 frame. \"\"\" if flatten: cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask,", "in self.plans.keys(): raise Model.ModelException(\"No such floor: \",floor_id) floor = self.plans[floor_id] if on: floor.set_bounds_mask(blindspots,wallthreshold)", "tuple\") elif False in [ len(spot) == 4 for spot in blindspots ]:", "for each overlay in each layer stored self.count[l_id] = { fpid:0 for fpid", ") if isinstance(self.pixelmask, np.ndarray) and pixelmask: # Tidy the edges # Make the", "self.floorplan.get_image().convert(\"RGBA\") # Overlay scaling absmax = max(overlay.max(), overlay.min(), key=abs) #m_max, m_min = absmax,", "str(i)] = response if POST_data != {}: self.snapshotWebhook(POST_data) ### Configuration STORE_WEBHOOK = \"webhooklist\"", "in sorted(distances.items(),key=lambda x: x[1])[:n] ] return (\"Best Effort\", top_n) def getCameraImage(self, camera) ->", "= {\"type\" : \"SnapshotData\", \"is_ideal\" : idealality} for i, cam in enumerate(cameras): response", "+= 1.0 / m_possible.sum() def __add_unfixed_locations(self,unfixed:dict) -> None: self.__unfixed_observations[0] += len(unfixed) def get_delta(self,", "= floorplan # Dimentions should default to (height, width) self.floorplan_dimensions = (self.floorplan.height,self.floorplan.width) self.overlay_dimensions", "\" + str(ke) ) if source_net_id != self.network_id: raise Model.BadRequest(\"Request has data from", "else exposure if masked: return self.__masked_dataoverlay[:window].mean(axis=0) else: return self.__unmasked_dataoverlay[:window].mean(axis=0) def get_unfixed_observations(self, exposure:int=0)->float: \"\"\"", "= ix_points[mx] ixe = ix_points[mx+1] for my in range(overlay.shape[1]): val = overlay[mx,my] if", "fixed[id] = placeable else: unfixed[id] = placeable self.__add_fixed_locations(fixed) self.__add_unfixed_locations(unfixed) def __add_fixed_locations(self,fixed:dict) -> None:", "Distribute unfixed observations evenly across the 
import os
import sys
import requests
import hashlib
import pickle
import bz2
import datetime
import numpy as np
from PIL import Image, ImageFilter

parentddir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir))
sys.path.append(parentddir)
from lib.APIQuery import APIQuery, FloorPlan
from lib.BoundaryDetector import BoundaryDetector

# As per Scanning API v3
SECRET_K = "secret"


class Floor:
    "Wrapper class for the floorplan object including model data"

    def __init__(self, floorplan: FloorPlan):
        self.floorplan = floorplan
        # Dimensions should be (height, width)
        self.floorplan_dimensions = (self.floorplan.height, self.floorplan.width)
        self.overlay_dimensions = (
            int(self.floorplan_dimensions[0] // Model.CELL_SIZE_M) + 1,
            int(self.floorplan_dimensions[1] // Model.CELL_SIZE_M) + 1
        )
        # Determine the distance between the end of the floorplan and the
        # data overlay, in metres and px
        self.margin_m = (
            Model.CELL_SIZE_M - (self.floorplan_dimensions[0] % Model.CELL_SIZE_M),
            Model.CELL_SIZE_M - (self.floorplan_dimensions[1] % Model.CELL_SIZE_M)
        )
        self.margin_px = (
            self.margin_m[0] * self.floorplan.px_per_m_h,
            self.margin_m[1] * self.floorplan.px_per_m_w
        )
        self.mask_enabled = False
        self.mask = np.ones(self.overlay_dimensions, dtype=np.bool_)
        self.pixelmask = None
        self.bm_boxes = []
        self.aps = dict()

    def set_bounds_mask(self, blindspots=None, wallthreshold: float = None) -> None:
        """
        Generate a bounds mask for areas that people cannot possibly be on the
        given floor, eg outside high floors. Blindspots should be a Numpy
        array, tuple or nested list of the form [[x1,x2,y1,y2],...]
        """
        if blindspots == None:
            pass
        elif not isinstance(blindspots, (np.ndarray, list, tuple)):
            raise TypeError("Invalid type for blindspots parameter. Should be of type np.array, list or tuple")
        elif False in [len(spot) == 4 for spot in blindspots]:
            raise ValueError("Invalid format for blindspots parameter. Should be of the form [[x1,x2,y1,y2],...]")
        if wallthreshold == None:
            bd = BoundaryDetector(self.floorplan.get_image())
        elif not isinstance(wallthreshold, (int, float)):
            raise TypeError("Invalid type for wallthreshold parameter. Should be of type int or float, got {}".format(type(wallthreshold)))
        else:  # pragma: no cover
            # Not covered as only parameter passing
            bd = BoundaryDetector(self.floorplan.get_image(), threshold=wallthreshold)
        if blindspots != None:
            for spot in blindspots:
                bd.add_blindspot(*tuple(spot))
        self.bm_boxes = blindspots
        bd.run()
        self.pixelmask = bd.getBoundaryMask()
        # Downsample from pixel level to mask level
        mx = self.mask.shape[0]
        my = self.mask.shape[1]
        # Image-scale pixel mask (mini-scale, big-dims) dims
        ix = self.pixelmask.shape[0]
        iy = self.pixelmask.shape[1]
        # Image-scale chunk divisions (what coords do we get laying one mask
        # over the other?) Account for margin between edge of floorplan and overlay
        ix_divs = np.floor(np.linspace(0, ix + self.margin_px[0], mx + 1)).astype("int32")
        iy_divs = np.floor(np.linspace(0, iy + self.margin_px[1], my + 1)).astype("int32")
        for x in range(mx):
            # Top, bottom
            it = ix_divs[x]
            ib = ix_divs[x + 1]
            for y in range(my):
                # Left, right
                il = iy_divs[y]
                ir = iy_divs[y + 1]
                self.mask[x, y] = self.pixelmask[it:ib, il:ir].mean() > Model.DOWNSAMPLE_THRESHOLD

    def render_bounds(self, blindspots=None, wallthreshold: float = None):
        "Preview the BoundaryDetector result for the given parameters"
        if wallthreshold == None:
            bd = BoundaryDetector(self.floorplan.get_image())
        else:  # pragma: no cover
            # Not covered as only parameter passing
            bd = BoundaryDetector(self.floorplan.get_image(), threshold=wallthreshold)
        if blindspots != None:
            for spot in blindspots:
                bd.add_blindspot(*tuple(spot))
        bd.run()
        return bd.generate_graphic()

    def render_overlay(self, overlay: np.ndarray, pixelmask: bool = True):
        """
        Render overlay onto the floorplan image in heatmap form.
        If pixelmask and image has bounds mask set, will mask final image to
        keep overlay heatmap within bounds.
        """
        POS = (255, 0, 0, 255)
        NEG = (0, 0, 255, 255)
        BLUR_CELLS = 0.35
        destination = self.floorplan.get_image().convert("RGBA")
        imarr = np.zeros((destination.size[1], destination.size[0], 4), dtype=np.uint8)
        # Overlay scaling
        absmax = max(overlay.max(), overlay.min(), key=abs)
        ix_points = np.floor(np.linspace(0, imarr.shape[0] + self.margin_px[0], overlay.shape[0] + 1)).astype("int32")
        iy_points = np.floor(np.linspace(0, imarr.shape[1] + self.margin_px[1], overlay.shape[1] + 1)).astype("int32")
        for mx in range(overlay.shape[0]):
            # Top, bottom
            ixs = ix_points[mx]
            ixe = ix_points[mx + 1]
            for my in range(overlay.shape[1]):
                val = overlay[mx, my]
                pos = val > 0
                alpha = abs(val / absmax)
                # Left, right
                iys = iy_points[my]
                iye = iy_points[my + 1]
                imarr[ixs:ixe, iys:iye] = POS if pos else NEG
                imarr[ixs:ixe, iys:iye, 3] = (imarr[ixs:ixe, iys:iye, 3].astype(np.float64) * alpha).astype(np.uint8)
        imarr = Image.fromarray(imarr, "RGBA").filter(ImageFilter.BoxBlur(BLUR_CELLS * destination.size[0] / overlay.shape[1]))
        if isinstance(self.pixelmask, np.ndarray) and pixelmask:
            # Make the mask an Image, mode=L
            mask = Image.fromarray(255 * self.pixelmask.astype(np.uint8), "L")
            # Paste alpha on masked region
            imarr.paste((0, 0, 0, 0), mask)
        elif pixelmask:
            print("Error: Could not filter by pixel mask as no mask exists")
        destination.putalpha(255)
        return Image.alpha_composite(destination, imarr)


class Layer:
    "Class representing a series of overlays covering all floorplans with data from a single source type"

    def __init__(self, floorplans: dict, exposure: int):
        self.exposure = exposure
        self.overlays = {
            _id: Overlay(_id, floor.overlay_dimensions, floor.floorplan_dimensions, floor.mask, exposure)
            for _id, floor in floorplans.items()
        }

    def set_observations(self, observations: dict):
        "Set the layer to contain the given observations. Pass floor object dictionary"
        bins = {id: dict() for id in self.overlays.keys()}
        for id, placeable in observations.items():
            bins[placeable.floorPlanId][id] = placeable
        for fid, obs in bins.items():
            self.overlays[fid].roll()
            self.overlays[fid].add(obs)

    def copy(self, flatten: bool = True):
        """
        Return a copy of the layer.
        Flatten squashes exposures into 1 frame.
        For more info, see Overlay.copy()
        """
        ly = Layer({}, 1 if flatten else self.exposure)
        ly.overlays = {_id: over.copy(flatten) for _id, over in self.overlays.items()}
        return ly

    def clear(self) -> None:
        "Clear all member overlays of observation data"
        for over in self.overlays.values():
            over.clear()

    def get_delta(self, masked: bool = True, exposure: int = 0) -> dict:
        """
        Return the delta data for each overlay in the layer.
        For more info, see Overlay.get_delta
        """
        return {_id: over.get_delta(masked, exposure) for _id, over in self.overlays.items()}

    def get_full(self, masked: bool = True, exposure: int = 0) -> dict:  # pragma: no cover
        # Not covered as not required for current scope
        """
        Return the full data for each overlay in the layer.
        If exposure 0 (default), will provide all available exposures
        squashed. See Overlay.get_full
        """
        return {_id: over.get_full(masked, exposure) for _id, over in self.overlays.items()}

    def verify_and_update(self, floorplans: dict) -> None:
        "Create overlays for any new floorplans and verify the rest"
        for fpid in floorplans.keys():
            if fpid not in self.overlays.keys():
                print("Info: Creating new overlay for FPID:{}".format(fpid))
                fp = floorplans[fpid]
                self.overlays[fpid] = Overlay(fpid, fp.overlay_dimensions, fp.floorplan_dimensions, fp.mask, self.exposure)
        for fpid, floor in floorplans.items():
            self.overlays[fpid].verify_and_update(floor)


class Overlay:
    "Represents an data overlay of a single floorplan"

    def __init__(self, floorid: str, overlay_dimensions: tuple, real_dimensions: tuple, floormask: np.ndarray, exposure: int):
        self.floorid = floorid
        #self.observations = dict()
        self.overlay_dimensions = overlay_dimensions
        self.real_dimensions = real_dimensions
        self.mask = floormask
        self.exposure = exposure
        # Exposure queue, shape (exp,x,y)
        self.__unfixed_observations = np.zeros(exposure)
        self.__masked_dataoverlay = np.zeros((exposure,) + overlay_dimensions)
        self.__unmasked_dataoverlay = np.zeros((exposure,) + overlay_dimensions)

    def add(self, observations: dict) -> None:
        fixed = {}
        unfixed = {}
        for id, placeable in observations.items():
            if (placeable.x != None and placeable.y != None):
                fixed[id] = placeable
            else:
                unfixed[id] = placeable
        self.__add_fixed_locations(fixed)
        self.__add_unfixed_locations(unfixed)

    def __add_fixed_locations(self, fixed: dict) -> None:
        for placeable in fixed.values():
            # We store both a copy of the m(asked)_possible locations and the
            # u(n)m(asked)_possible locations
            um_possible = np.zeros(self.overlay_dimensions, dtype=np.bool_)
            if placeable.has_mask_override:
                # Account for change of axis
                um_possible[-int(placeable.y / Model.CELL_SIZE_M), int(placeable.x / Model.CELL_SIZE_M)] = 1
            else:
                # Store parsed location in x,y tuple, in m, with change of axis
                client_loc = np.array([self.real_dimensions[0] - placeable.y, placeable.x])
                for x in range(um_possible.shape[0]):
                    for y in range(um_possible.shape[1]):
                        # For each square, see if its centre is close enough
                        # to be within variance metres
                        d = np.linalg.norm((np.array([x, y]) + 0.5) * Model.CELL_SIZE_M - client_loc)
                        if d < Model.VARIANCE_THRESHOLD:
                            um_possible[x, y] = 1
            m_possible = np.logical_and(um_possible, self.mask, dtype=np.bool_)
            # If theres a DIV0 here, the search hasn't found any near enough
            # squares
            # Ignore floor mask
            self.__unmasked_dataoverlay[0][um_possible] += 1.0 / um_possible.sum()
            # Include floor mask
            self.__masked_dataoverlay[0][m_possible] += 1.0 / m_possible.sum()

    def __add_unfixed_locations(self, unfixed: dict) -> None:
        self.__unfixed_observations[0] += len(unfixed)

    def get_delta(self, masked: bool = True, exposure: int = 0) -> np.ndarray:
        """
        Returns a copy of the delta overlay (only fixed observations).
        If masked, will select data masked at input, else unmasked data.
        Set exposure=1 for the latest frame (no smoothing).
        """
        window = self.exposure if exposure == 0 else exposure
        if masked:
            return self.__masked_dataoverlay[:window].mean(axis=0)
        else:
            return self.__unmasked_dataoverlay[:window].mean(axis=0)

    def get_unfixed_observations(self, exposure: int = 0):
        "Mean count of unfixed observations over the exposure window"
        window = self.exposure if exposure == 0 else exposure
        return self.__unfixed_observations[:window].mean(axis=0)

    def get_full(self, masked: bool = True, exposure: int = 0) -> np.ndarray:  # pragma: no cover
        """
        Returns a copy of the full client overlay (including distributed
        unfixed observations)
        For exposure, see Overlay.get_delta
        """
        data = self.get_delta(masked, exposure)
        window = self.exposure if exposure == 0 else exposure
        # Distribute unfixed observations evenly across the floorplan (or mask)
        mask = self.mask if masked else np.ones(self.overlay_dimensions, dtype=np.bool_)
        data[mask] += self.__unfixed_observations[:window].mean(axis=0) / mask.sum()
        return data

    def set(self, unfixed_count: np.ndarray, masked_overlay: np.ndarray, unmasked_overlay: np.ndarray) -> None:
        assert len(self.__unmasked_dataoverlay.shape) == len(unmasked_overlay.shape)
        self.__unfixed_observations = unfixed_count
        self.__masked_dataoverlay = masked_overlay
        self.__unmasked_dataoverlay = unmasked_overlay

    def roll(self) -> None:
        "Roll the exposure, preparing for a new frame of data"
        self.__masked_dataoverlay = np.roll(self.__masked_dataoverlay, 1, axis=0)
        self.__unmasked_dataoverlay = np.roll(self.__unmasked_dataoverlay, 1, axis=0)
        self.__unfixed_observations = np.roll(self.__unfixed_observations, 1, axis=0)
        self.__masked_dataoverlay[0] = 0
        self.__unmasked_dataoverlay[0] = 0
        self.__unfixed_observations[0] = 0

    def clear(self) -> None:
        "Clear accumulated observation data including exposure"
        self.__unfixed_observations[:] = 0
        self.__masked_dataoverlay[:] = 0
        self.__unmasked_dataoverlay[:] = 0

    def copy(self, flatten: bool = False):
        """
        Return a copy of the overlay.
        If flatten, squash (mean) exposure window into 1 frame.
        """
        if flatten:
            cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, 1)
            cp.__unfixed_observations[0] = self.__unfixed_observations.mean(axis=0)
            cp.__masked_dataoverlay[0] = self.__masked_dataoverlay.mean(axis=0)
            cp.__unmasked_dataoverlay[0] = self.__unmasked_dataoverlay.mean(axis=0)
        else:
            cp = Overlay(self.floorid, self.overlay_dimensions, self.real_dimensions, self.mask, self.exposure)
            cp.set(self.__unfixed_observations.copy(), self.__masked_dataoverlay.copy(), self.__unmasked_dataoverlay.copy())
        return cp

    def verify_and_update(self, floor: Floor) -> None:
        """
        Verifies overlay is compatible with its Floor; throws ModelException
        on dimention mismatch.
        If mask is outdated, update - note this does not effect existing
        data, only new
        """
        if self.overlay_dimensions != floor.overlay_dimensions or self.real_dimensions != floor.floorplan_dimensions:
            raise Model.ModelException("Error: Overlay and Floorplan dimension mismatch")
        self.mask = floor.mask


class Model:
    LAYER_SNAP_WIFI = 1
    LAYER_SNAP_BT = 2
    LAYER_MVSENSE = 3
    LAYERS_ALL = {LAYER_SNAP_WIFI, LAYER_SNAP_BT, LAYER_MVSENSE}
    CONFIG_PATH = os.path.join('model.conf')
    CELL_SIZE_M = 1
    DOWNSAMPLE_THRESHOLD = 0.5
    VARIANCE_THRESHOLD = np.hypot(*(2 * (CELL_SIZE_M / 2,)))
    # This means let x = cell_s_m / 2; V_T = sqrt(x^2+x^2)
    # I wish I was kidding
    # This is 0.707 iff cell_s_m = 1
    DEFAULT_EXPOSURE = 3
    DEFAULT_PASSWORD = '<PASSWORD>'
    __BAD_LAYER = "Layer {} not defined."

    ### Configuration
    STORE_WEBHOOK = "webhooklist"
    STORE_SELECTED = "selectednet"
    STORE_LAYERS = "layers"
    STORE_FOVCOORDS = "fov_coords"
    STORE_FOVMASK = "fov_mask"
    STORE_BMBOXES = "bm_boxes"
    STORE_BDENABLED = "bd_enabled"
    STORE_SECRET = "sapisecret"
    STORE_TOKEN = ...
    STORE_PASSWORD = "<PASSWORD>"
    STORE_WHTHRESHOLD = "webhook_threshold"

    class ModelException(Exception):
        pass

    class BadRequest(Exception):
        pass

    def __init__(self, network_id: str = None, layers: set = {}):
        "Initialise model. API key must be defined in enviroment variable \"MERAKI_DASHBOARD_API_KEY\""
        self.read_config_data()
        self.write_config_data()

    def populate(self, layers: set):
        assert isinstance(self.query_obj, APIQuery)
        self.network_id = self.query_obj.network_id
        self.plans = self.pullFloors()
        self.getAPs()
        self.query_obj.pullCameras()
        self.data_layers = dict()
        for layer in layers:
            if layer not in Model.LAYERS_ALL:
                raise Model.ModelException(Model.__BAD_LAYER.format(layer))
            self.data_layers[layer] = Layer(self.plans, Model.DEFAULT_EXPOSURE)
        self.timeslot = TimeSlotAvg.load(self.data_layers, self.plans)
        self.webhook_addresses = []

    ### Floorplans
    def pullFloors(self) -> dict:
        "Pull floorplans from the network, construct blank floor layer for each"
        floorplans = self.query_obj.pullFloorPlans()
        self.plans = {id: Floor(fp) for id, fp in floorplans.items()}
        return self.plans

    def getFloorplanSummary(self) -> dict:
        "Get a dict of retreived floor plan IDs and names"
        return {k: v.floorplan.name for k, v in self.plans.items()}

    def getFloorplanIDByName(self, name: str):
        "Return the ID of the floor plan with the given name, or None"
        for k, v in self.plans.items():
            if v.floorplan.name == name:
                return k
        return None

    def getAPs(self) -> None:
        "Get APs and store internally in relevent floor objects"
        aps = self.query_obj.pullAPs()
        for mac, ap in aps.items():
            if ap.floorPlanId in self.plans.keys():
                self.plans[ap.floorPlanId].aps[mac] = ap

    ### Scanning API (SAPI)
    def __validate_scanning(self, SAPI_packet: dict) -> None:
        if type(SAPI_packet) != dict:
            raise TypeError("JSON parsed a {}, expected a dict".format(str(type(SAPI_packet))))
        try:
            source_net_id = SAPI_packet["data"]["networkId"]
            if SAPI_packet[SECRET_K] != self.secret:
                raise Model.BadRequest("Request has bad authentication secret - rejecting data")
        except KeyError as ke:
            raise Model.BadRequest("Request is missing data: " + str(ke))
        if source_net_id != self.network_id:
            raise Model.BadRequest("Received data from wrong network: expected {} got {}".format(self.network_id, source_net_id))

    def get_type(SAPI_packet: dict) -> int:
        "Get the destination data layer for a SAPI packet"
        api_layer_val = SAPI_packet["type"]
        if api_layer_val == "WiFi":
            return Model.LAYER_SNAP_WIFI
        elif api_layer_val == "Bluetooth":
            return Model.LAYER_SNAP_BT
        else:
            raise Model.ModelException(Model.__BAD_LAYER.format(api_layer_val))

    def __generate_person_obs(self) -> dict:
        "Indexes observed person objects by key"
        obs = self.query_obj.get_camera_observations()
        # Zip [0,n) with n obs objects, converting to dictionary
        return dict(zip(range(len(obs)), obs))

    def provide_scanning(self, SAPI_packet: dict) -> None:
        "Update model with SAPI data"
        # Raise a racket if theres something wrong
        self.__validate_scanning(SAPI_packet)
        dest_layer = Model.get_type(SAPI_packet)
        observations = ...
        self.data_layers[dest_layer].set_observations(observations)

    def pull_mvsense_data(self) -> None:
        "Update the MVSense data layer"
        self.query_obj.updateCameraMVSenseData()
        observations = self.__generate_person_obs()
        self.data_layers[Model.LAYER_MVSENSE].set_observations(observations)

    def spike(self, layer, threshhold) -> dict:
        #add floorplan ID into params, camera/wifi/bluetooth/all into params? threshold into params?
        dims = ((len(layer) // 3) + 1, (len(layer[0]) // 3) + 1)  #splits floorplan into 3m^2 areas
        clusters = np.zeros(dims)
        for x in range(len(clusters)):
            for y in range(len(clusters[0])):
                clusters[x, y] = layer[3 * x:3 * (x + 1), 3 * y:3 * (y + 1)].sum()
        busiest = 0
        busiest_location = None
        for x in range(len(clusters)):
            for y in range(len(clusters[0])):
                if clusters[x][y] > busiest:
                    busiest = clusters[x][y]
                    busiest_location = (3 * (x + 0.5), 3 * (y + 0.5))
        return {'spike': busiest > threshhold, 'location': busiest_location}

    def nearestCameras(self, n: int, floor: Floor, spikeDict: dict) -> tuple:
        #returns a list of up to n cameras best placed to view the spike location
        event = tuple([int(d) for d in spikeDict['location']])
        event_root = event
        cameras = {cam for cam in self.query_obj.getCameras().values() if cam.floorPlanId == floor.floorplan.id}
        FOVcams = {cam for cam in cameras if cam.has_FOV()}
        nonFOVcams = cameras - FOVcams
        hasView = {cam for cam in FOVcams if cam.get_FOV()[event_root] == True}
        if len(hasView) > 0:
            return ("Covered", list(hasView))
        distances = dict()
        for cam in FOVcams:
            fov = cam.get_FOV()
            mindist = 9999999
            for x, row in enumerate(fov):
                for y, cell in enumerate(row):
                    if cell:
                        dist = np.hypot((x + 0.5) - event[0], (y + 0.5) - event[1])
                        if dist < mindist:
                            mindist = dist
            distances[cam] = mindist
        top_n = sorted(distances, key=distances.get)[:n]
        return ("Best Effort", top_n)

    def getCameraImage(self, camera) -> dict:
        #returns a dictionary containing the image
        response = self.query_obj.getCameraSnap(camera)
        return response

    def snapshotWebhook(self, snapshot: dict):
        for address in self.webhook_addresses:
            response = requests.post(address, json=snapshot)
            print(response)

    def addWebhookAddress(self, webhookAddress: str):
        self.webhook_addresses.append(webhookAddress)

    ### Historical
    def put_historical(self) -> None:
        "Updates the average data for the current TimeSlotAvg object"
        self.update_timeslot()  # Get the current timeslot object
        self.timeslot.update_avg_data(self.data_layers)

    def comp_historical(self, floorPlanId: str):
        "Get the relative busyness of a floorplan using all layers"
        self.update_timeslot()
        hist_fp_data = self.timeslot.get_floor_avgs(floorPlanId)
        collective = np.zeros(self.plans[floorPlanId].overlay_dimensions)
        for lid in self.data_layers.keys():
            mask_enabled = self.plans[floorPlanId].mask_enabled
            current = self.data_layers[lid].overlays[floorPlanId].get_delta(masked=mask_enabled)
            historical = hist_fp_data[lid].get_delta(masked=mask_enabled, exposure=1)
            collective += (current - historical)
        collective /= len(self.data_layers)
        return collective

    def update_timeslot(self):
        "Calls the factory if the current TimeSlotAvg object is not current"
        if not self.timeslot.is_current():
            self.timeslot = TimeSlotAvg.load(self.data_layers, self.plans)

    def update(self) -> None:
        "Update non-webhook (non-SAPI) layers, write history"
        self.pull_mvsense_data()
        self.put_historical()
        #spike detection
        POST_data = {}
        for fpid, floor in self.plans.items():
            spikedict = self.spike(self.comp_historical(fpid), self.webhook_threshold)
            if spikedict['spike'] == True:
                idealality, cameras = self.nearestCameras(2, floor, spikedict)  #need to get relevant floor obj
                POST_data[fpid] = ...
        if POST_data != {}:
            self.snapshotWebhook(POST_data)

    ### Providers
    def poll_layer(self, layer: int, exposure: int) -> dict:
        return self.data_layers[layer].get_full(exposure=exposure)

    def render_delta(self, floorPlanId: str) -> Image:
        "Get the current datamap in terms of delta from mean"
        datamap = self.comp_historical(floorPlanId)
        return self.plans[floorPlanId].render_overlay(datamap)

    def render_abs(self, floorPlanId: str) -> Image:
        "Get latest frame of WiFi layer rendered on the floorplan"
        dm = self.plans[floorPlanId]
        dims = dm.overlay_dimensions
        testarr = np.zeros(dims).ravel()
        n = (datetime.datetime.now().second / 60) * len(testarr)
        testarr[:int(n)] = 1
        return dm.render_overlay(testarr.reshape(dims))

    def setFOVs(self, mac: str, coords) -> None:
        """
        Set the field of view from given camera (by mac). Coords should be
        iterable of shape (n,2). Coords pertain to sqm pixels on internal
        datamap. Pass len(iterable)==0 to unset mask
        """
        #Check if Layer exists
        if Model.LAYER_MVSENSE in self.data_layers.keys():
            #Check camera with mac exists
            if mac in self.query_obj.getCameras().keys():
                #Check coords of correct shape and iterable, or of len 0 to unset
                try:
                    if len(coords) == 0 or False not in [len(coord) == 2 for coord in coords]:
                        cam = self.query_obj.cameras[mac]
                        shape = self.plans[cam.floorPlanId].overlay_dimensions
                        cam.set_FOV(shape, coords)
                    else:
                        raise ValueError
                except (ValueError, TypeError) as err:
                    raise err.__class__("Coordinates supplied of incorrect shape or type, should be iterable shape (n,2)")
            else:
                raise Model.ModelException("Camera with mac {} not found".format(mac))

    def setBoundsMask(self, floor_id: str, on: bool, blindspots=None, wallthreshold: float = None) -> None:
        """
        Generate a mask from BoundaryDetector, or clear a bounds mask, with
        given parameters. For more info see Floor.set_bounds_mask
        """
        if floor_id not in self.plans.keys():
            raise Model.ModelException("No such floor: ", floor_id)
        floor = self.plans[floor_id]
        if on:
            floor.set_bounds_mask(blindspots, wallthreshold)
        else:
            floor.bm_boxes = blindspots
            floor.mask[:] = 1
        floor.mask_enabled = on
        self.update_layers()

    def update_layers(self):
        """
        Must be called when a Floor is added, removed, or altered, including
        mask changes
        """
        for layer in self.data_layers.values():
            layer.verify_and_update(self.plans)

    def update_model_config(self, netid, conf_dict):
        layers = conf_dict.get(Model.STORE_LAYERS, set())
        try:
            if netid != self.network_id:
                raise AttributeError
        except AttributeError:
            self.query_obj = APIQuery(netid)
            self.populate(layers)
        self.secret = conf_dict.get(Model.STORE_SECRET)
        self.validator_token = conf_dict.get(Model.STORE_TOKEN)
        self.webhook_addresses = conf_dict.get(Model.STORE_WEBHOOK, list())
        self.password = conf_dict.get(Model.STORE_PASSWORD, Model.DEFAULT_PASSWORD)
        self.webhook_threshold = conf_dict.get(Model.STORE_WHTHRESHOLD, self.webhook_threshold)
        for mac, coords in conf_dict.get(Model.STORE_FOVCOORDS, dict()).items():
            self.setFOVs(mac, coords)
        for fpid, boxes in conf_dict.get(Model.STORE_BMBOXES, dict()).items():
            on = conf_dict.get(Model.STORE_BDENABLED, dict()).get(fpid, False)
            self.setBoundsMask(fpid, on, blindspots=boxes)

    def serialize(self) -> dict:
        conf = dict()
        conf[Model.STORE_LAYERS] = set(self.data_layers.keys())
        conf[Model.STORE_WEBHOOK] = self.webhook_addresses
        conf[Model.STORE_WHTHRESHOLD] = self.webhook_threshold
        conf[Model.STORE_PASSWORD] = self.password
        conf[Model.STORE_FOVCOORDS] = {cam.mac: cam.get_fov_coords() for cam in self.query_obj.cameras.values()}
        conf[Model.STORE_BMBOXES] = {fpid: fp.bm_boxes for fpid, fp in self.plans.items()}
        conf[Model.STORE_BDENABLED] = {fpid: fp.mask_enabled for fpid, fp in self.plans.items()}
        return conf

    def write_config_data(self):
        config_data = {Model.STORE_SELECTED: self.network_id}
        config_data[self.network_id] = self.serialize()
        with open(self.CONFIG_PATH, 'wb') as f:
            pickle.dump(config_data, f)

    def read_config_data(self):
        if os.path.isfile(Model.CONFIG_PATH):
            with open(Model.CONFIG_PATH, 'rb') as f:
                config_data = pickle.load(f)
            netid = config_data[Model.STORE_SELECTED]
            self.update_model_config(netid, config_data.get(netid, dict()))
        else:
            print("Warning: config file not found")
            try:
                self.update_model_config(None, {Model.STORE_LAYERS: Model.LAYERS_ALL})
            except APIQuery.APIException:
                raise Model.ModelException("Could not get network from config file")


class TimeSlotAvg:
    class TimeSlotAvgException(Exception):
        pass

    DATA_DIR = "historical_data"

    def __init__(self, data_layers: dict, day: int, hour: int):
        self.day = day
        self.hour = hour
        self.data_layers = dict()
        self.count = dict()
        for l_id, layer in data_layers.items():
            # Copy the layer structure but clear the transient data
            # Also we only need 1 frame to store average so flatten
            self.data_layers[l_id] = layer.copy(flatten=True)
            self.data_layers[l_id].clear()
            # Set a count of contributing frames per overlay
            self.count[l_id] = {fpid: 1 for fpid in layer.overlays.keys()}

    @staticmethod
    def load(data_layers: dict, floors: dict, day=None, hour=None):
        "Static factory method; load the stored TimeSlotAvg for the given (or current) timeslot"
        if day == None or hour == None:
            day, hour = TimeSlotAvg.get_time()
        try:
            with bz2.BZ2File(os.path.join(TimeSlotAvg.DATA_DIR, '{}_{}.pbz2'.format(day, hour)), 'rb') as tsa:
                tsa = pickle.load(tsa)
        except FileNotFoundError:
            tsa = TimeSlotAvg(data_layers, day, hour)
        tsa.verify_and_update_struct(data_layers, floors)
        tsa.write()
        return tsa

    @staticmethod
    def get_time():
        "Get the current time values needed for reading and writing data files"
        curr_dt = datetime.datetime.now(datetime.timezone(offset=datetime.timedelta(hours=0)))
        curr_day = curr_dt.weekday()
        curr_hour = curr_dt.hour
        return curr_day, curr_hour

    def is_current(self, debug: bool = None) -> bool:
        if debug != None:
            return debug
        curr_day, curr_hour = TimeSlotAvg.get_time()
        return curr_day == self.day and curr_hour == self.hour

    def update_avg_data(self, current_data: dict, debug: bool = None) -> None:
        "Updates the stored averages with the current data"
        if self.is_current(debug):
            # Valid to update with the current model
            # For each layer in the current data
            for l_key, layer in current_data.items():
                # Get the respective average layer
                avg_layer = self.data_layers[l_key]
                # For each overlay in the respective new data layer
                for o_key, over in layer.overlays.items():
                    # Get masked and unmasked deltas, and unfixed count from
                    # new data - get full exposure by default
                    new_um_overlay = over.get_delta(masked=False)
                    new_m_overlay = over.get_delta(masked=True)
                    new_unfixed_obs = over.get_unfixed_observations()
                    # Similar from averages
                    # avg layers are already flat so exposure of 1
                    avg_um_overlay = avg_layer.overlays[o_key].get_delta(masked=False, exposure=1)
                    avg_m_overlay = avg_layer.overlays[o_key].get_delta(masked=True, exposure=1)
                    avg_unfixed_obs = avg_layer.overlays[o_key].get_unfixed_observations(exposure=1)
                    c = self.count[l_key][o_key]
                    # Update the average by adding current values to sum total
                    # and dividing by new count
                    upd_um_overlay = (avg_um_overlay * c + new_um_overlay) / (c + 1)
                    upd_m_overlay = (avg_m_overlay * c + new_m_overlay) / (c + 1)
                    upd_unfixed_obs = (avg_unfixed_obs * c + new_unfixed_obs) / (c + 1)
                    # Write the new data overlay to the timeslots model,
                    # promoting as exposure of historicals = 1
                    self.data_layers[l_key].overlays[o_key].set(upd_unfixed_obs[None,], upd_m_overlay[None,], upd_um_overlay[None,])
                    # Update the count
                    self.count[l_key][o_key] += 1
            #self.write()
        else:
            raise TimeSlotAvg.TimeSlotAvgException(
                f"Cannot update TimeSlotAvg from the model as it is not currently day:{self.day}, hour:{self.hour}")

    def write(self):
        "Save TimeSlotAvg object to a compressed file"
        filepath = os.path.join(TimeSlotAvg.DATA_DIR, '{}_{}.pbz2'.format(self.day, self.hour))
        if not os.path.exists(TimeSlotAvg.DATA_DIR):
            os.makedirs(TimeSlotAvg.DATA_DIR)
        with bz2.BZ2File(filepath, 'wb') as f:
            pickle.dump(self, f)

    def get_floor_avgs(self, fpid: str) -> dict:
        "Return the flat average Overlay object indexed by each layer stored"
        return {l_id: layer.overlays[fpid] for l_id, layer in self.data_layers.items()}

    def verify_and_update_struct(self, data_layers: dict, floors: dict) -> None:
        """
        Ensure overlays are compatible with current floorplans (Floor objects)
        and the data in the TimeSlotAvg is compatible with the current Model.
        If layers or overlays are not represented in TSA, those are created,
        infos are printed.
        Throws ModelException if dimensions do not equate and historical data
        would be invalidated.
        """
        for l_id in data_layers.keys():
            if l_id not in self.data_layers.keys():
                # For layers in data_layers not in self
                self.data_layers[l_id] = data_layers[l_id].copy()
                self.count[l_id] = dict()
                print("Info: Layer implicitly created for Layer ID {}".format(l_id))
        for l_id, layer in self.data_layers.items():
            # Add any missing overlays
            layer.verify_and_update(floors)
            # Keep count if exists, else set to 1
            self.count[l_id] = {ov_id: self.count[l_id].get(ov_id, 1) for ov_id in data_layers[l_id].overlays.keys()}


def sha256(inpt: str) -> str:
    m = hashlib.sha256()
    m.update(inpt.encode())
    return m.hexdigest()
Pass len(iterable)==0 to unset mask \"\"\" #Check if Layer", "exists\") destination.putalpha(255) return Image.alpha_composite(destination,imarr) class Layer: \"Class representing a data layer: a series", "{ _id : over.get_delta(masked, exposure) for _id, over in self.overlays.items() } def get_full(self,", "!= floor.floorplan_dimensions: raise Model.ModelException(\"Error: Overlay and Floorplan dimension mismatch for FPID={}\".format(floor.floorplan.id)) # Update", "\"\"\" # Not covered as not required for current scope return { _id", "layers, write history\" self.pull_mvsense_data() self.put_historical() #spike detect POST_data = {} for fpid, floor", "hasView = { cam for cam in FOVcams if cam.get_FOV()[event_root]==True } if len(hasView)>0:", "\"WiFi\": return Model.LAYER_SNAP_WIFI elif api_layer_val == \"Bluetooth\": return Model.LAYER_SNAP_BT else: raise Model.ModelException(Model.__BAD_LAYER.format(api_layer_val)) def", ") / (c+1) # save the new data overlay to the timeslots model,", "import Image, ImageFilter import bz2 import pickle import datetime import requests import hashlib", "0, iy+self.margin_px[1], my+1 )).astype(\"int32\") for x in range(mx): #Top, bottom it = ix_divs[x]", "create new\" #TODO remove day hour params - used for unit tests if", "for l_id, layer in data_layers.items(): # Copy the layer structure but clear the", "hour params - used for unit tests if day==None or hour==None: day, hour", "= iy_divs[y] ir = iy_divs[y+1] # As array is binary, mean gives ratio", "self.data_layers.items(): # Add any missing overlays layer.verify_and_update(floors) for l_id in data_layers.keys(): # Get", "floorplans.items() } def set_observations(self,observations:dict): \"Set the layer to contain passed observations, clearing any", "import sys import numpy as np from PIL import Image, ImageFilter import bz2", "update the average by adding current values to sum total and dividing by", ") curr_day = curr_dt.weekday() curr_hour = curr_dt.hour return curr_day, 
curr_hour def verify_and_update_struct(self, data_layers:dict,", "self.network_id = self.query_obj.network_id self.plans = self.pullFloors() self.getAPs() self.query_obj.pullCameras() self.data_layers = dict() self.webhook_threshold =", "- (self.floorplan_dimensions[1] % Model.CELL_SIZE_M) ) self.margin_px = ( self.margin_m[0] * self.floorplan.px_per_m_h, self.margin_m[1] *", "let x = cell_s_m / 2; V_T = sqrt(x^2+x^2) # I wish I" ]
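The fragments above describe TimeSlotAvg folding a new observation grid into its stored average as `avg_layer = (avg_layer * c + new_data) / (c + 1)` followed by `self.count[l_key][o_key] += 1`. A minimal sketch of that incremental-mean update, with hypothetical names (`update_running_average` is not from the source):

```python
import numpy as np

def update_running_average(avg, count, new_data):
    """Fold one new observation grid into a running average.

    avg:      current mean as an ndarray
    count:    number of observations already folded into `avg`
    new_data: new observation grid, same shape as `avg`

    Returns the updated (mean, count) pair; no history is stored.
    """
    updated = (avg * count + new_data) / (count + 1)
    return updated, count + 1

# Two observations of a 2x2 floorplan grid average to 4.0 per cell.
avg, count = np.zeros((2, 2)), 0
for obs in (np.full((2, 2), 3.0), np.full((2, 2), 5.0)):
    avg, count = update_running_average(avg, count, obs)
```

This keeps memory constant per timeslot: only the mean and the observation count survive, which matches the way the fragments divide the accumulated sum by `(c+1)` on each update.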
53. Maximum Subarray/main.py (reconstructed from the overlapping fragments of this array; indentation inferred):

class Solution:
    def maxSubArray(self, A):
        if not A:
            return 0
        curSum = maxSum = A[0]
        for num in A[1:]:
            curSum = max(num, curSum + num)
            maxSum = max(maxSum, curSum)
        return maxSum
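The maxSubArray fragments in the preceding array are the classic Kadane's algorithm (LeetCode 53, Maximum Subarray): at each element, either extend the running subarray or restart at the current element. A self-contained sketch under the standard problem statement (function name and the empty-input convention of returning 0 follow the fragments):

```python
def max_subarray(nums):
    """Largest sum of any contiguous subarray, in O(n) time and O(1) space."""
    if not nums:
        return 0
    cur_sum = max_sum = nums[0]
    for num in nums[1:]:
        # Either extend the previous subarray or start fresh at `num`.
        cur_sum = max(num, cur_sum + num)
        max_sum = max(max_sum, cur_sum)
    return max_sum

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6, from [4, -1, 2, 1]
```

Note that for all-negative input the algorithm correctly returns the largest (least negative) single element rather than an empty sum, because `cur_sum` restarts at `num` whenever extending would be worse.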
[ "directory (os.getcwd()): # esociallib # import sys import re as re_ import base64", "name_='idQuota') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='idQuota',", "(msg, node.tag, node.sourceline, ) raise GDSParseError(msg) class MixedContainer: # Constants for category: CategoryNone", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dscLograd'): pass def exportChildren(self,", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TEmprPJ) if subclass is not None:", "nmCid_ = child_.text nmCid_ = self.gds_validate_string(nmCid_, node, 'nmCid') self.nmCid = nmCid_ elif nodeName_", "child_.text codPostal_ = self.gds_validate_string(codPostal_, node, 'codPostal') self.codPostal = codPostal_ # end class TEnderecoExterior", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'tpPlanRP': sval_ = child_.text", "codMunic): self.codMunic = codMunic def get_uf(self): return self.uf def set_uf(self, uf): self.uf =", "None ): return True else: return False def export(self, outfile, level, namespace_='', name_='TEmprPJ',", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoBeneficio') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "nodeName_ == 'infoPenMorte': obj_ = infoPenMorte.factory() obj_.build(child_) self.infoPenMorte = obj_ obj_.original_tagname_ = 'infoPenMorte'", "exterior def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_,", "== 'verProc': verProc_ = child_.text verProc_ = self.gds_validate_string(verProc_, node, 'verProc') self.verProc = verProc_", "'exterior' # end class endereco class TEnderecoBrasil(GeneratedsSuper): \"\"\"Informações do Endereço no Brasil\"\"\" subclass", "= GenerateDSNamespaceDefs_.get('nrLograd') if imported_ns_def_ is not None: 
namespacedef_ = imported_ns_def_ if pretty_print: eol_", "dtFimBenef.subclass: return dtFimBenef.subclass(*args_, **kwargs_) else: return dtFimBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "'nrLograd': nrLograd_ = child_.text nrLograd_ = self.gds_validate_string(nrLograd_, node, 'nrLograd') self.nrLograd = nrLograd_ elif", "else: return False def export(self, outfile, level, namespace_='', name_='cep', namespacedef_='', pretty_print=True): imported_ns_def_ =", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nrBenefic) if subclass", "self.bairro is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sbairro>%s</%sbairro>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.bairro), input_name='bairro')), namespace_,", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='complemento'): pass def exportChildren(self,", "GenerateDSNamespaceDefs_.get('nmPai') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "subclass is not None: return subclass(*args_, **kwargs_) if mtvFim.subclass: return mtvFim.subclass(*args_, **kwargs_) else:", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='endereco'): pass def exportChildren(self, outfile, level,", "benefício previdenciário concedido ao servidor\"\"\" subclass = None superclass = None def __init__(self,", "BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtFimBenef, '%Y-%m-%d').date() else: initvalue_ = dtFimBenef self.dtFimBenef = initvalue_ self.mtvFim", "s1 else: if s1.find('\"') != -1: s1 = s1.replace('\"', '\\\\\"') if s1.find('\\n') ==", "nodeName_ == 'dadosNasc': obj_ = dadosNasc.factory() obj_.build(child_) self.dadosNasc = obj_ obj_.original_tagname_ = 'dadosNasc'", "subclass(*args_, **kwargs_) if eSocial.subclass: return eSocial.subclass(*args_, **kwargs_) 
else: return eSocial(*args_, **kwargs_) factory =", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nmBenefic', pretty_print=pretty_print) outfile.write('</%s%s>%s' %", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtNascto'): pass", "end class complemento class bairro(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "instring.encode(ExternalEncoding) else: return instring @staticmethod def convert_unicode(instring): if isinstance(instring, str): result = quote_xml(instring)", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nrRecibo class tpAmb(GeneratedsSuper):", "subclass(*args_, **kwargs_) if infoPenMorte.subclass: return infoPenMorte.subclass(*args_, **kwargs_) else: return infoPenMorte(*args_, **kwargs_) factory =", "s1 else: s1 = '\"%s\"' % s1 return s1 def quote_python(inStr): s1 =", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoBeneficio') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "end class nrBenefic class dtFimBenef(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "not None: return subclass(*args_, **kwargs_) if nrInsc.subclass: return nrInsc.subclass(*args_, **kwargs_) else: return nrInsc(*args_,", "dtIniBenef.subclass: return dtIniBenef.subclass(*args_, **kwargs_) else: return dtIniBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='verProc', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "procEmi) if subclass is not None: return subclass(*args_, **kwargs_) if procEmi.subclass: return procEmi.subclass(*args_,", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtNascto'):", 
"input_data, node=None, input_name=''): return input_data def gds_format_double_list(self, input_data, input_name=''): return '%s' % '", "outfile, level, namespace_='', name_='infoBeneficio', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoBeneficio') if imported_ns_def_ is not", "False def export(self, outfile, level, namespace_='', name_='TEnderecoBrasil', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TEnderecoBrasil') if", "= cep_ elif nodeName_ == 'codMunic': sval_ = child_.text try: ival_ = int(sval_)", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='nmMae',", "'endereco': obj_ = endereco.factory() obj_.build(child_) self.endereco = obj_ obj_.original_tagname_ = 'endereco' # end", "not None: return subclass(*args_, **kwargs_) if nmCid.subclass: return nmCid.subclass(*args_, **kwargs_) else: return nmCid(*args_,", "= choice self.optional = optional def set_name(self, name): self.name = name def get_name(self):", "eol_ = '\\n' else: eol_ = '' if self.tpBenef is not None: showIndent(outfile,", "found2 = False for patterns2 in patterns1: if re_.search(patterns2, target) is not None:", "'%s' % self.name) subelement.text = self.to_etree_simple() else: # category == MixedContainer.CategoryComplex self.value.to_etree(element) def", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='dscLograd',", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='paisNascto') if self.hasContent_(): outfile.write('>%s' %", "child_, node, nodeName_, fromsubclass_=False): pass # end class codMunic class uf(GeneratedsSuper): subclass =", "# File: generatedsnamespaces.py # # GenerateDSNamespaceDefs = { # \"ElementtypeA\": \"http://www.xxx.com/namespaceA\", # \"ElementtypeB\":", "if endereco.subclass: return endereco.subclass(*args_, **kwargs_) else: return 
endereco(*args_, **kwargs_) factory = staticmethod(factory) def", "level, already_processed, namespace_='', name_='dscLograd'): pass def exportChildren(self, outfile, level, namespace_='', name_='dscLograd', fromsubclass_=False, pretty_print=True):", "eol_)) if self.nmBenefic is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmBenefic>%s</%snmBenefic>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmBenefic),", "end class tpInsc class nrInsc(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='codMunic') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "TIdeEveTrab) if subclass is not None: return subclass(*args_, **kwargs_) if TIdeEveTrab.subclass: return TIdeEveTrab.subclass(*args_,", "return s2 def quote_xml_aux(inStr): s1 = inStr.replace('&', '&amp;') s1 = s1.replace('<', '&lt;') s1", "not None: return subclass(*args_, **kwargs_) if complemento.subclass: return complemento.subclass(*args_, **kwargs_) else: return complemento(*args_,", "cpfBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "self.gds_validate_string(verProc_, node, 'verProc') self.verProc = verProc_ # end class TIdeEveTrab class indRetif(GeneratedsSuper): subclass", "pass def exportChildren(self, outfile, level, namespace_='', name_='infoBeneficio', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ =", "!= -1: s1 = s1.replace('\"', '\\\\\"') if s1.find('\\n') == -1: return '\"%s\"' %", "(namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.complemento), input_name='complemento')), namespace_, eol_)) if self.bairro is not None: showIndent(outfile, level, pretty_print)", "set_verProc(self, verProc): self.verProc = verProc def hasContent_(self): if ( self.indRetif is not None", "as exp: raise_parse_error(child_, 'requires integer: 
%s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'tpAmb')", "== MixedContainer.CategoryText: # Prevent exporting empty content as empty lines. if self.value.strip(): outfile.write(self.value)", "'\\n' else: eol_ = '' if self.tpInsc is not None: showIndent(outfile, level, pretty_print)", "self.gds_format_integer(self.tpBenef, input_name='tpBenef'), namespace_, eol_)) if self.nrBenefic is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrBenefic>%s</%snrBenefic>%s'", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dscLograd')", "= getSubclassFromModule_( CurrentSubclassModule_, nrLograd) if subclass is not None: return subclass(*args_, **kwargs_) if", "specific subclass module. CurrentSubclassModule_ = None # # Support/utility functions. # def showIndent(outfile,", "None ): return True else: return False def export(self, outfile, level, namespace_='', name_='evtCdBenPrRP',", "return False def export(self, outfile, level, namespace_='', name_='evtCdBenPrRP', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('evtCdBenPrRP')", "nodeName_ == 'nrBenefic': nrBenefic_ = child_.text nrBenefic_ = self.gds_validate_string(nrBenefic_, node, 'nrBenefic') self.nrBenefic =", "'nmMae') self.nmMae = nmMae_ elif nodeName_ == 'nmPai': nmPai_ = child_.text nmPai_ =", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class paisResid class nmCid(GeneratedsSuper):", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpInsc') if self.hasContent_():", "( self.tpLograd is not None or self.dscLograd is not None or self.nrLograd is", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%scpfBenef>%s</%scpfBenef>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfBenef), input_name='cpfBenef')), namespace_, eol_))", 
"] <in_xml_file> \"\"\" def usage(): print(USAGE_TEXT) sys.exit(1) def get_root_tag(node): tag = Tag_pattern_.match(node.tag).groups()[-1] rootClass", "= getSubclassFromModule_( CurrentSubclassModule_, paisResid) if subclass is not None: return subclass(*args_, **kwargs_) if", "if pretty_print: eol_ = '\\n' else: eol_ = '' if self.evtCdBenPrRP is not", "+ 1, namespace_='', name_='uf', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "return value return typ(value) # # Data representation classes. # class eSocial(GeneratedsSuper): subclass", "staticmethod(factory) def get_evtCdBenPrRP(self): return self.evtCdBenPrRP def set_evtCdBenPrRP(self, evtCdBenPrRP): self.evtCdBenPrRP = evtCdBenPrRP def get_Signature(self):", "if nrBenefic.subclass: return nrBenefic.subclass(*args_, **kwargs_) else: return nrBenefic(*args_, **kwargs_) factory = staticmethod(factory) def", "% (namespace_, self.gds_format_integer(self.mtvFim, input_name='mtvFim'), namespace_, eol_)) def build(self, node): already_processed = set() self.buildAttributes(node,", "= getSubclassFromModule_( CurrentSubclassModule_, dtNascto) if subclass is not None: return subclass(*args_, **kwargs_) if", "**kwargs_) else: return bairro(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ):", "elif self.content_type == MixedContainer.TypeFloat or \\ self.content_type == MixedContainer.TypeDecimal: outfile.write('<%s>%f</%s>' % ( self.name,", "name_='evtCdBenPrRP', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "outfile.write('<%s>%g</%s>' % ( self.name, self.value, self.name)) elif self.content_type == MixedContainer.TypeBase64: outfile.write('<%s>%s</%s>' % (", "eol_)) if self.nrBenefic is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrBenefic>%s</%snrBenefic>%s' % (namespace_, 
self.gds_encode(self.gds_format_string(quote_xml(self.nrBenefic),", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='uf', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "node=None, input_name=''): return input_data def gds_format_float_list(self, input_data, input_name=''): return '%s' % ' '.join(input_data)", "name_='nmPai', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmPai') if imported_ns_def_ is not None: namespacedef_ =", "level + 1, namespace_='', name_='tpInsc', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "return subclass(*args_, **kwargs_) if nmPai.subclass: return nmPai.subclass(*args_, **kwargs_) else: return nmPai(*args_, **kwargs_) factory", "subclass(*args_, **kwargs_) if endereco.subclass: return endereco.subclass(*args_, **kwargs_) else: return endereco(*args_, **kwargs_) factory =", "'' else: return input_data def gds_format_base64(self, input_data, input_name=''): return base64.b64encode(input_data) def gds_validate_base64(self, input_data,", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TIdeEveTrab') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpAmb'):", "def set_dtIniBenef(self, dtIniBenef): self.dtIniBenef = dtIniBenef def get_vrBenef(self): return self.vrBenef def set_vrBenef(self, vrBenef):", "outfile.write(' Id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.Id), input_name='Id')), )) def exportChildren(self, outfile, level, namespace_='', name_='evtCdBenPrRP', fromsubclass_=False,", "CurrentSubclassModule_, nmBenefic) if subclass is not None: return subclass(*args_, **kwargs_) if nmBenefic.subclass: return", "dscLograd_ = child_.text dscLograd_ = self.gds_validate_string(dscLograd_, node, 
'dscLograd') self.dscLograd = dscLograd_ elif nodeName_", "in values: if value not in ('true', '1', 'false', '0', ): raise_parse_error( node,", "if nrInsc.subclass: return nrInsc.subclass(*args_, **kwargs_) else: return nrInsc(*args_, **kwargs_) factory = staticmethod(factory) def", "return False def export(self, outfile, level, namespace_='', name_='TDadosBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TDadosBenef')", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TEmprPJ) if", "CategoryComplex = 3 # Constants for content_type: TypeNone = 0 TypeText = 1", "class evtCdBenPrRP(GeneratedsSuper): \"\"\"Evento de cadastro de benefícios previdenciários de Regimes Próprios\"\"\" subclass =", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpPlanRP') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "True else: return False def export(self, outfile, level, namespace_='', name_='tpLograd', namespacedef_='', pretty_print=True): imported_ns_def_", "return False def export(self, outfile, level, namespace_='', name_='nrLograd', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrLograd')", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, verProc) if subclass is not None:", "'%H:%M:%S.%f') else: dt = datetime_.datetime.strptime(input_data, '%H:%M:%S') dt = dt.replace(tzinfo=tz) return dt.time() def gds_str_lower(self,", "self.nrRecibo = nrRecibo_ elif nodeName_ == 'tpAmb': sval_ = child_.text try: ival_ =", "namespace_='', name_='TEnderecoExterior', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TEnderecoExterior') if imported_ns_def_ is not None: namespacedef_", "= parsexml_(inFileName, parser) rootNode = doc.getroot() rootTag, rootClass = get_root_tag(rootNode) if rootClass is", "**kwargs_) if cep.subclass: return 
cep.subclass(*args_, **kwargs_) else: return cep(*args_, **kwargs_) factory = staticmethod(factory)", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='idQuota')", "def set_tpBenef(self, tpBenef): self.tpBenef = tpBenef def get_nrBenefic(self): return self.nrBenefic def set_nrBenefic(self, nrBenefic):", "'ascii' Tag_pattern_ = re_.compile(r'({.*})?(.*)') String_cleanup_pat_ = re_.compile(r\"[\\n\\r\\s]+\") Namespace_extract_pat_ = re_.compile(r'{(.*)}(.*)') CDATA_pattern_ = re_.compile(r\"<!\\[CDATA\\[.*?\\]\\]>\",", "not None: return subclass(*args_, **kwargs_) if tpAmb.subclass: return tpAmb.subclass(*args_, **kwargs_) else: return tpAmb(*args_,", "def __init__(self, Id=None, ideEvento=None, ideEmpregador=None, ideBenef=None, infoBeneficio=None): self.original_tagname_ = None self.Id = _cast(None,", "end class nrRecibo class tpAmb(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "name_='nmCid', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "else: return False def export(self, outfile, level, namespace_='', name_='dtFimBenef', namespacedef_='', pretty_print=True): imported_ns_def_ =", "showIndent(outfile, level, pretty_print) outfile.write('<%sbairro>%s</%sbairro>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.bairro), input_name='bairro')), namespace_, eol_)) if self.cep is", "def get_exterior(self): return self.exterior def set_exterior(self, exterior): self.exterior = exterior def hasContent_(self): if", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TDadosBeneficio'): pass def exportChildren(self, outfile,", "raise_parse_error(node, 'Requires sequence of doubles') return values def gds_format_boolean(self, input_data, input_name=''): return ('%s'", "imported_ns_def_ = 
GenerateDSNamespaceDefs_.get('tpBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "--no-process-includes -o \"esociallib/v2_04/evtCdBenPrRP.py\" schemas/v2_04/evtCdBenPrRP.xsd # # Current working directory (os.getcwd()): # esociallib #", "name_, namespacedef_ and ' ' + namespacedef_ or '', )) already_processed = set()", "== '-': tzoff *= -1 tz = GeneratedsSuper._FixedOffsetTZ( tzoff, results.group(0)) input_data = input_data[:-6]", "name_='ideEmpregador', pretty_print=pretty_print) if self.ideBenef is not None: self.ideBenef.export(outfile, level, namespace_, name_='ideBenef', pretty_print=pretty_print) if", "'ideEvento': TIdeEveTrab, 'iniBeneficio': TDadosBeneficio, } USAGE_TEXT = \"\"\" Usage: python <Parser>.py [ -s", "nrLograd): self.nrLograd = nrLograd def get_complemento(self): return self.complemento def set_complemento(self, complemento): self.complemento =", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='codPostal', pretty_print=pretty_print) outfile.write('</%s%s>%s' %", "def get_complemento(self): return self.complemento def set_complemento(self, complemento): self.complemento = complemento def get_bairro(self): return", "uf_ # end class TEnderecoBrasil class tpLograd(GeneratedsSuper): subclass = None superclass = None", "sys.stdout, 0, name_=rootTag, namespacedef_='', pretty_print=True) return rootObj def parseEtree(inFileName, silence=False): parser = None", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nrBenefic class dtFimBenef(GeneratedsSuper):", "= '' if self.dadosNasc is not None: self.dadosNasc.export(outfile, level, namespace_, name_='dadosNasc', pretty_print=pretty_print) if", "= '/'.join(path_list) return path Tag_strip_pattern_ = re_.compile(r'\\{.*\\}') def get_path_list_(self, node, path_list): if node", "): return True else: return False def export(self, outfile, level, namespace_='', name_='nrInsc', 
namespacedef_='',", "= paisNac_ elif nodeName_ == 'nmMae': nmMae_ = child_.text nmMae_ = self.gds_validate_string(nmMae_, node,", "gds_str_lower(self, instring): return instring.lower() def get_path_(self, node): path_list = [] self.get_path_list_(node, path_list) path_list.reverse()", "level, namespace_, name_='ideBenef', pretty_print=pretty_print) if self.infoBeneficio is not None: self.infoBeneficio.export(outfile, level, namespace_, name_='infoBeneficio',", "= dadosNasc.factory() obj_.build(child_) self.dadosNasc = obj_ obj_.original_tagname_ = 'dadosNasc' elif nodeName_ == 'endereco':", "None or self.cep is not None or self.codMunic is not None or self.uf", "em {ideBenef} e para o qual não tenha havido ainda informação de término", "if isinstance(instring, str): result = quote_xml(instring) elif sys.version_info.major == 2 and isinstance(instring, unicode):", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='codPostal'): pass", "pass def exportChildren(self, outfile, level, namespace_='', name_='nmPai', fromsubclass_=False, pretty_print=True): pass def build(self, node):", "**kwargs_) if cpfInst.subclass: return cpfInst.subclass(*args_, **kwargs_) else: return cpfInst(*args_, **kwargs_) factory = staticmethod(factory)", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%scpfInst>%s</%scpfInst>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfInst), input_name='cpfInst')), namespace_, eol_)) def", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='endereco'): pass", "MemberSpec_(object): def __init__(self, name='', data_type='', container=0, optional=0, child_attrs=None, choice=None): self.name = name self.data_type", "'infoPenMorte': obj_ = infoPenMorte.factory() obj_.build(child_) self.infoPenMorte = obj_ obj_.original_tagname_ = 'infoPenMorte' # end", "uf def 
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Generated by generateDS.py.
#
# Python bindings for the eSocial event "evtCdBenPrRP" (registration of
# pension benefits granted under an RPPS regime).  Every XSD element is
# mapped to a class with the same generated factory/get_/set_/hasContent_/
# export*/build* boilerplate; that pattern is shown in full once, on
# infoPenMorte, and summarized for the remaining classes.

import sys
import re as re_
import datetime as datetime_

try:
    from lxml import etree as etree_
except ImportError:
    from xml.etree import ElementTree as etree_

if sys.version_info.major == 2:
    BaseStrType_ = basestring
else:
    BaseStrType_ = str

ExternalEncoding = 'utf-8'
Tag_pattern_ = re_.compile(r'{(.*)}(.*)')
CDATA_pattern_ = re_.compile(r"<!\[CDATA\[.*?\]\]>", re_.DOTALL)

# Change this to point at a module that provides <ClassName>Sub
# subclasses of the generated classes.
CurrentSubclassModule_ = None


def parsexml_(infile, parser=None, **kwargs):
    if parser is None:
        # Use the lxml ElementTree compatible parser so that, e.g.,
        # comments are ignored.
        try:
            parser = etree_.ETCompatXMLParser()
        except AttributeError:
            # fallback to xml.etree
            parser = etree_.XMLParser()
    doc = etree_.parse(infile, parser=parser, **kwargs)
    return doc

#
# Namespace prefix definition table.
#
# The module generatedsnamespaces, if importable, must contain
# a dictionary named GeneratedsNamespaceDefs.  This Python dictionary
# should map element type names (XSD names) to the namespace prefix
# definitions to be written on the exported elements.
try:
    from generatedsnamespaces import GenerateDSNamespaceDefs as GenerateDSNamespaceDefs_
except ImportError:
    GenerateDSNamespaceDefs_ = {}

#
# User methods
#
# Calls to the methods in these classes are generated by generateDS.py.
# You can replace these methods by re-implementing the following class
# in a module named generatedssuper.py.
try:
    from generatedssuper import GeneratedsSuper
except ImportError as exp:

    class GeneratedsSuper(object):

        class _FixedOffsetTZ(datetime_.tzinfo):
            def __init__(self, offset, name):
                self.__offset = datetime_.timedelta(minutes=offset)
                self.__name = name

            def utcoffset(self, dt):
                return self.__offset

            def tzname(self, dt):
                return self.__name

            def dst(self, dt):
                return None

        def gds_format_string(self, input_data, input_name=''):
            return input_data

        def gds_validate_string(self, input_data, node=None, input_name=''):
            if not input_data:
                return ''
            return input_data

        def gds_format_integer(self, input_data, input_name=''):
            return '%d' % input_data

        def gds_validate_integer(self, input_data, node=None, input_name=''):
            return input_data

        def gds_format_boolean(self, input_data, input_name=''):
            return ('%s' % input_data).lower()

        def gds_format_date(self, input_data, input_name=''):
            return '%04d-%02d-%02d' % (
                input_data.year, input_data.month, input_data.day)

        @classmethod
        def gds_parse_date(cls, input_data):
            return datetime_.datetime.strptime(input_data, '%Y-%m-%d').date()

        @staticmethod
        def gds_encode(instring):
            if sys.version_info.major == 2:
                return instring.encode(ExternalEncoding)
            else:
                return instring

        # The remaining gds_format_*/gds_validate_*/gds_parse_* helpers
        # (floats, integer/float/boolean lists, datetime and time with
        # 'Z'/offset handling via _FixedOffsetTZ) follow the same pattern.

#
# If you have installed IPython you can uncomment and use the following.
# IPython is available from http://ipython.scipy.org/.
#
# from IPython.Shell import IPShellEmbed
# args = ''
# ipshell = IPShellEmbed(args,
#     banner='Dropping into IPython',
#     exit_msg='Leaving Interpreter, back to program.')
#
# Then use the following line where and when you want to drop into the
# IPython shell:
#    ipshell('<some message> -- Entering ipshell.\nHit Ctrl-D to exit')


def getSubclassFromModule_(module, class_):
    '''Get the subclass of a class from a specified module.'''
    name = class_.__name__ + 'Sub'
    if hasattr(module, name):
        return getattr(module, name)
    else:
        return None


def showIndent(outfile, level, pretty_print=True):
    if pretty_print:
        for idx in range(level):
            outfile.write('    ')


def quote_xml(inStr):
    "Escape markup chars, but do not modify CDATA sections."
    if not inStr:
        return ''
    s1 = (isinstance(inStr, BaseStrType_) and inStr or '%s' % inStr)
    s2 = ''
    pos = 0
    matchobjects = CDATA_pattern_.finditer(s1)
    for mo in matchobjects:
        s3 = s1[pos:mo.start()]
        s2 += quote_xml_aux(s3)
        s2 += s1[mo.start():mo.end()]
        pos = mo.end()
    s3 = s1[pos:]
    s2 += quote_xml_aux(s3)
    return s2


def quote_xml_aux(inStr):
    s1 = inStr.replace('&', '&amp;')
    s1 = s1.replace('<', '&lt;')
    s1 = s1.replace('>', '&gt;')
    return s1


class GDSParseError(Exception):
    pass


def raise_parse_error(node, msg):
    msg = '%s (element %s)' % (msg, node.tag, )
    raise GDSParseError(msg)


class MixedContainer:
    # Constants for category:
    CategoryNone = 0
    CategoryText = 1
    CategorySimple = 2
    CategoryComplex = 3
    # Constants for content_type:
    TypeNone = 0
    TypeText = 1
    TypeString = 2
    TypeInteger = 3
    TypeFloat = 4
    TypeDecimal = 5
    TypeDouble = 6
    TypeBoolean = 7
    TypeBase64 = 8

    def __init__(self, category, content_type, name, value):
        self.category = category
        self.content_type = content_type
        self.name = name
        self.value = value

    def getCategory(self):
        return self.category

    def getContenttype(self, content_type):
        return self.content_type

    def getValue(self):
        return self.value

    def getName(self):
        return self.name

    # The export/exportSimple/to_etree methods follow the standard
    # generateDS pattern and are elided here.


#
# Data representation classes.
#
# Each generated class follows the pattern shown below on infoPenMorte;
# only the fields (and docstrings) differ:
#
#   eSocial           -> evtCdBenPrRP, Signature (exported with the "ds:" prefix)
#   evtCdBenPrRP      -> Id attribute; ideEvento, ideEmpregador, ideBenef,
#                        infoBeneficio.  "Registration event for pension
#                        benefits granted under an RPPS regime."
#   TIdeEveTrab       -> indRetif, nrRecibo, tpAmb, procEmi, verProc
#                        (event identification)
#   TEmprPJ           -> tpInsc, nrInsc (employer identification)
#   ideBenef          -> cpfBenef, nmBenefic, dadosBenef (beneficiary
#                        identification)
#   TDadosBenef       -> dadosNasc, endereco (beneficiary data)
#   dadosNasc         -> dtNascto, codMunic, uf, paisNascto, paisNac,
#                        nmMae, nmPai (birth information)
#   endereco          -> brasil, exterior (address information group)
#   TEnderecoBrasil   -> tpLograd, dscLograd, nrLograd, complemento,
#                        bairro, cep, codMunic, uf
#   TEnderecoExterior -> paisResid, dscLograd, nrLograd, complemento,
#                        nmCid, codPostal
#   infoBeneficio     -> tpPlanRP, iniBeneficio, altBeneficio, fimBeneficio
#   TDadosBeneficio   -> tpBenef, nrBenefic, dtIniBenef, vrBenef,
#                        infoPenMorte
#   fimBeneficio      -> tpBenef, nrBenefic, dtFimBenef, mtvFim (benefit
#                        termination; validation: may only be reported
#                        while no termination has been reported yet)
#   infoPenMorte      -> idQuota, cpfInst (death-pension information)
#
# plus one trivial wrapper class per simple element (tpLograd, dscLograd,
# nrLograd, complemento, bairro, cep, codMunic, uf, nmCid, codPostal,
# paisResid, paisNac, paisNascto, nmMae, nmPai, dtNascto, cpfBenef,
# nmBenefic, tpAmb, procEmi, verProc, nrRecibo, indRetif, mtvFim, ...).


class infoPenMorte(GeneratedsSuper):
    """Death-pension information."""
    subclass = None
    superclass = None

    def __init__(self, idQuota=None, cpfInst=None):
        self.original_tagname_ = None
        self.idQuota = idQuota
        self.cpfInst = cpfInst

    def factory(*args_, **kwargs_):
        if CurrentSubclassModule_ is not None:
            subclass = getSubclassFromModule_(
                CurrentSubclassModule_, infoPenMorte)
            if subclass is not None:
                return subclass(*args_, **kwargs_)
        if infoPenMorte.subclass:
            return infoPenMorte.subclass(*args_, **kwargs_)
        else:
            return infoPenMorte(*args_, **kwargs_)
    factory = staticmethod(factory)

    def get_idQuota(self):
        return self.idQuota

    def set_idQuota(self, idQuota):
        self.idQuota = idQuota

    def get_cpfInst(self):
        return self.cpfInst

    def set_cpfInst(self, cpfInst):
        self.cpfInst = cpfInst

    def hasContent_(self):
        if (
            self.idQuota is not None or
            self.cpfInst is not None
        ):
            return True
        else:
            return False

    def export(self, outfile, level, namespace_='', name_='infoPenMorte',
               namespacedef_='', pretty_print=True):
        imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoPenMorte')
        if imported_ns_def_ is not None:
            namespacedef_ = imported_ns_def_
        if pretty_print:
            eol_ = '\n'
        else:
            eol_ = ''
        showIndent(outfile, level, pretty_print)
        outfile.write('<%s%s%s' % (
            namespace_, name_,
            namespacedef_ and ' ' + namespacedef_ or '', ))
        already_processed = set()
        self.exportAttributes(
            outfile, level, already_processed, namespace_,
            name_='infoPenMorte')
        if self.hasContent_():
            outfile.write('>%s' % (eol_, ))
            self.exportChildren(
                outfile, level + 1, namespace_='', name_='infoPenMorte',
                pretty_print=pretty_print)
            showIndent(outfile, level, pretty_print)
            outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
        else:
            outfile.write('/>%s' % (eol_, ))

    def exportAttributes(self, outfile, level, already_processed,
                         namespace_='', name_='infoPenMorte'):
        pass

    def exportChildren(self, outfile, level, namespace_='',
                       name_='infoPenMorte', fromsubclass_=False,
                       pretty_print=True):
        if pretty_print:
            eol_ = '\n'
        else:
            eol_ = ''
        if self.idQuota is not None:
            showIndent(outfile, level, pretty_print)
            outfile.write('<%sidQuota>%s</%sidQuota>%s' % (
                namespace_,
                self.gds_encode(self.gds_format_string(
                    quote_xml(self.idQuota), input_name='idQuota')),
                namespace_, eol_))
        if self.cpfInst is not None:
            showIndent(outfile, level, pretty_print)
            outfile.write('<%scpfInst>%s</%scpfInst>%s' % (
                namespace_,
                self.gds_encode(self.gds_format_string(
                    quote_xml(self.cpfInst), input_name='cpfInst')),
                namespace_, eol_))

    def build(self, node):
        already_processed = set()
        self.buildAttributes(node, node.attrib, already_processed)
        for child in node:
            nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
            self.buildChildren(child, node, nodeName_)
        return self

    def buildAttributes(self, node, attrs, already_processed):
        pass

    def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
        if nodeName_ == 'idQuota':
            idQuota_ = child_.text
            idQuota_ = self.gds_validate_string(idQuota_, node, 'idQuota')
            self.idQuota = idQuota_
        elif nodeName_ == 'cpfInst':
            cpfInst_ = child_.text
            cpfInst_ = self.gds_validate_string(cpfInst_, node, 'cpfInst')
            self.cpfInst = cpfInst_
# end class infoPenMorte


# Maps XML child-element tags to the generated classes above.
GDSClassesMapping = {
    'dadosBenef': TDadosBenef,
    'exterior': TEnderecoExterior,
    'ideEmpregador': TEmprPJ,
    'ideEvento': TIdeEveTrab,
    'iniBeneficio': TDadosBeneficio,
}

USAGE_TEXT = """
Usage: python <Parser>.py [ -s ] <in_xml_file>
"""


def usage():
    print(USAGE_TEXT)
    sys.exit(1)


def get_root_tag(node):
    tag = Tag_pattern_.match(node.tag).groups()[-1]
    rootClass = GDSClassesMapping.get(tag)
    if rootClass is None:
        rootClass = globals().get(tag)
    return tag, rootClass


def parse(inFileName, silence=False):
    parser = None
    doc = parsexml_(inFileName, parser)
    rootNode = doc.getroot()
    rootTag, rootClass = get_root_tag(rootNode)
    if rootClass is None:
        rootTag = 'eSocial'
        rootClass = eSocial
    rootObj = rootClass.factory()
    rootObj.build(rootNode)
    # Enable Python to collect the space used by the DOM.
    doc = None
    if not silence:
        sys.stdout.write('<?xml version="1.0" ?>\n')
        rootObj.export(sys.stdout, 0, name_=rootTag, namespacedef_='')
    return rootObj

# parseEtree, parseString and parseLiteral follow the same shape as
# parse(); parseLiteral additionally writes
#     import evtCdBenPrRP as model_
#     rootObj = model_.rootClass(...)
# so that the parsed tree can be reproduced as Python source.


def main():
    args = sys.argv[1:]
    if len(args) == 1:
        parse(args[0])
    else:
        usage()


if __name__ == '__main__':
    # import pdb; pdb.set_trace()
    main()

__all__ = [
    "TDadosBenef",
    "TDadosBeneficio",
    # ... one entry per generated class above ...
]
return TEnderecoExterior.subclass(*args_,", "is not None ): return True else: return False def export(self, outfile, level,", "pretty_print) outfile.write('<%sidQuota>%s</%sidQuota>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.idQuota), input_name='idQuota')), namespace_, eol_)) if self.cpfInst is not None:", "= obj_ obj_.original_tagname_ = 'iniBeneficio' elif nodeName_ == 'altBeneficio': obj_ = TDadosBeneficio.factory() obj_.build(child_)", "None or self.nmBenefic is not None or self.dadosBenef is not None ): return", "nodeName_ == 'indRetif': sval_ = child_.text try: ival_ = int(sval_) except (TypeError, ValueError)", "will export that definition in the # XML representation of that element. See", "obj_ = TDadosBeneficio.factory() obj_.build(child_) self.iniBeneficio = obj_ obj_.original_tagname_ = 'iniBeneficio' elif nodeName_ ==", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrInsc') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "self.fimBeneficio def set_fimBeneficio(self, fimBeneficio): self.fimBeneficio = fimBeneficio def hasContent_(self): if ( self.tpPlanRP is", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='codMunic', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dtIniBenef) if subclass is not None:", "0: if element[-1].tail is None: element[-1].tail = self.value else: element[-1].tail += self.value else:", "node=None, input_name=''): values = input_data.split() for value in values: try: float(value) except (TypeError,", "self.value.exportLiteral(outfile, level + 1) showIndent(outfile, level) outfile.write(')\\n') class MemberSpec_(object): def __init__(self, name='', data_type='',", "datetime_.timedelta(minutes=offset) self.__name = name def utcoffset(self, dt): return self.__offset def tzname(self, dt): return", "self.dscLograd 
= dscLograd_ elif nodeName_ == 'nrLograd': nrLograd_ = child_.text nrLograd_ = self.gds_validate_string(nrLograd_,", "return paisResid.subclass(*args_, **kwargs_) else: return paisResid(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if", "or self.iniBeneficio is not None or self.altBeneficio is not None or self.fimBeneficio is", "def gds_build_any(self, node, type_name=None): return None @classmethod def gds_reverse_node_mapping(cls, mapping): return dict(((v, k)", "self.verProc = verProc def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass =", "level + 1, namespace_='', name_='cpfInst', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "== 2: from StringIO import StringIO as IOBuffer else: from io import BytesIO", "name_='nrLograd', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrLograd') if imported_ns_def_ is not None: namespacedef_ =", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='mtvFim'): pass def exportChildren(self,", "outfile, level, namespace_='', name_='verProc', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "self.nmMae is not None or self.nmPai is not None ): return True else:", "is not None: return subclass(*args_, **kwargs_) if vrBenef.subclass: return vrBenef.subclass(*args_, **kwargs_) else: return", "nodeName_, fromsubclass_=False): if nodeName_ == 'idQuota': idQuota_ = child_.text idQuota_ = self.gds_validate_string(idQuota_, node,", "# end class nmBenefic class infoBeneficio(GeneratedsSuper): \"\"\"Informações relacionadas ao benefício previdenciário concedido ao", "subclass module. CurrentSubclassModule_ = None # # Support/utility functions. 
# def showIndent(outfile, level,", "**kwargs_) if tpAmb.subclass: return tpAmb.subclass(*args_, **kwargs_) else: return tpAmb(*args_, **kwargs_) factory = staticmethod(factory)", "codPostal_ = child_.text codPostal_ = self.gds_validate_string(codPostal_, node, 'codPostal') self.codPostal = codPostal_ # end", "self.tpLograd is not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpLograd>%s</%stpLograd>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.tpLograd), input_name='tpLograd')), namespace_,", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmMae') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "self.paisNac = paisNac self.nmMae = nmMae self.nmPai = nmPai def factory(*args_, **kwargs_): if", "node, 'paisNascto') self.paisNascto = paisNascto_ elif nodeName_ == 'paisNac': paisNac_ = child_.text paisNac_", "self.nrLograd = nrLograd self.complemento = complemento self.bairro = bairro self.nmCid = nmCid self.codPostal", "self.original_tagname_ = None self.tpBenef = tpBenef self.nrBenefic = nrBenefic if isinstance(dtIniBenef, BaseStrType_): initvalue_", "not silence: sys.stdout.write('#from evtCdBenPrRP import *\\n\\n') sys.stdout.write('import evtCdBenPrRP as model_\\n\\n') sys.stdout.write('rootObj = model_.rootClass(\\n')", "self.nrInsc = nrInsc_ # end class TEmprPJ class tpInsc(GeneratedsSuper): subclass = None superclass", "infoPenMorte.subclass(*args_, **kwargs_) else: return infoPenMorte(*args_, **kwargs_) factory = staticmethod(factory) def get_idQuota(self): return self.idQuota", "3 # Constants for content_type: TypeNone = 0 TypeText = 1 TypeString =", "subclass(*args_, **kwargs_) if paisNac.subclass: return paisNac.subclass(*args_, **kwargs_) else: return paisNac(*args_, **kwargs_) factory =", ")) def exportChildren(self, outfile, level, namespace_='', name_='evtCdBenPrRP', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ =", 
"name_='cpfBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "mtvFim def hasContent_(self): if ( self.tpBenef is not None or self.nrBenefic is not", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrRecibo'): pass", "= verProc_ # end class TIdeEveTrab class indRetif(GeneratedsSuper): subclass = None superclass =", "namespacedef_ = imported_ns_def_ if pretty_print: eol_ = '\\n' else: eol_ = '' if", "): return True else: return False def export(self, outfile, level, namespace_='', name_='dtNascto', namespacedef_='',", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='endereco', pretty_print=pretty_print) showIndent(outfile, level,", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='infoBeneficio') if", "node, 'verProc') self.verProc = verProc_ # end class TIdeEveTrab class indRetif(GeneratedsSuper): subclass =", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sindRetif>%s</%sindRetif>%s' % (namespace_, self.gds_format_integer(self.indRetif, input_name='indRetif'), namespace_, eol_))", "subclass = getSubclassFromModule_( CurrentSubclassModule_, TDadosBenef) if subclass is not None: return subclass(*args_, **kwargs_)", "level, namespace_='', name_='nmBenefic', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "level, namespace_='', name_='nmCid', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "name_='eSocial', namespacedef_=' xmlns:ds=\"http://www.w3.org/2000/09/xmldsig#\" ', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('eSocial') if imported_ns_def_ is not None:", "% (eol_, )) def 
exportAttributes(self, outfile, level, already_processed, namespace_='', name_='codMunic'): pass def exportChildren(self,", "None: return subclass(*args_, **kwargs_) if dtNascto.subclass: return dtNascto.subclass(*args_, **kwargs_) else: return dtNascto(*args_, **kwargs_)", "= None superclass = None def __init__(self, tpBenef=None, nrBenefic=None, dtFimBenef=None, mtvFim=None): self.original_tagname_ =", "outfile, level, namespace_='', name_='procEmi', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "vrBenef=None, infoPenMorte=None): self.original_tagname_ = None self.tpBenef = tpBenef self.nrBenefic = nrBenefic if isinstance(dtIniBenef,", "% self.name) subelement.text = self.to_etree_simple() else: # category == MixedContainer.CategoryComplex self.value.to_etree(element) def to_etree_simple(self):", "nodeName_, fromsubclass_=False): if nodeName_ == 'tpLograd': tpLograd_ = child_.text tpLograd_ = self.gds_validate_string(tpLograd_, node,", "self.nmMae = nmMae_ elif nodeName_ == 'nmPai': nmPai_ = child_.text nmPai_ = self.gds_validate_string(nmPai_,", "subclass(*args_, **kwargs_) if cpfInst.subclass: return cpfInst.subclass(*args_, **kwargs_) else: return cpfInst(*args_, **kwargs_) factory =", "parser) rootNode = doc.getroot() rootTag, rootClass = get_root_tag(rootNode) if rootClass is None: rootTag", "self.dtIniBenef is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sdtIniBenef>%s</%sdtIniBenef>%s' % (namespace_, self.gds_format_date(self.dtIniBenef, input_name='dtIniBenef'), namespace_,", "get_exterior(self): return self.exterior def set_exterior(self, exterior): self.exterior = exterior def hasContent_(self): if (", "level, namespace_='', name_='nmCid', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmCid') if imported_ns_def_ is not None:", "outfile.write('<%scomplemento>%s</%scomplemento>%s' % (namespace_, 
self.gds_encode(self.gds_format_string(quote_xml(self.complemento), input_name='complemento')), namespace_, eol_)) if self.bairro is not None: showIndent(outfile,", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='dtFimBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "self.__eq__(other) def getSubclassFromModule_(module, class_): '''Get the subclass of a class from a specific", "subclass is not None: return subclass(*args_, **kwargs_) if infoPenMorte.subclass: return infoPenMorte.subclass(*args_, **kwargs_) else:", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%spaisNac>%s</%spaisNac>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisNac), input_name='paisNac')), namespace_, eol_)) if", "nrLograd self.complemento = complemento self.bairro = bairro self.cep = cep self.codMunic = codMunic", "'vrBenef': sval_ = child_.text try: fval_ = float(sval_) except (TypeError, ValueError) as exp:", "name_='endereco', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if", "None: showIndent(outfile, level, pretty_print) outfile.write('<%stpBenef>%s</%stpBenef>%s' % (namespace_, self.gds_format_integer(self.tpBenef, input_name='tpBenef'), namespace_, eol_)) if self.nrBenefic", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='paisNascto') if self.hasContent_():", "'' if self.tpInsc is not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpInsc>%s</%stpInsc>%s' % (namespace_, self.gds_format_integer(self.tpInsc,", "None: showIndent(outfile, level, pretty_print) outfile.write('<%scomplemento>%s</%scomplemento>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.complemento), input_name='complemento')), namespace_, eol_)) if self.bairro", "TEnderecoBrasil class tpLograd(GeneratedsSuper): subclass = None superclass = None def 
__init__(self): self.original_tagname_ =", "outfile, level, already_processed, namespace_='', name_='codPostal'): pass def exportChildren(self, outfile, level, namespace_='', name_='codPostal', fromsubclass_=False,", "Término. Validação: Só pode ser informado se já houver informação anterior de benefícios", "dval_ elif nodeName_ == 'codMunic': sval_ = child_.text try: ival_ = int(sval_) except", "self.nrInsc def set_nrInsc(self, nrInsc): self.nrInsc = nrInsc def hasContent_(self): if ( self.tpInsc is", "or self.codPostal is not None ): return True else: return False def export(self,", "TEnderecoBrasil, 'dadosBenef': TDadosBenef, 'exterior': TEnderecoExterior, 'ideEmpregador': TEmprPJ, 'ideEvento': TIdeEveTrab, 'iniBeneficio': TDadosBeneficio, } USAGE_TEXT", "Tag_pattern_.match(node.tag).groups()[-1] rootClass = GDSClassesMapping.get(tag) if rootClass is None: rootClass = globals().get(tag) return tag,", "None or self.paisNac is not None or self.nmMae is not None or self.nmPai", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='cpfBenef') if self.hasContent_(): outfile.write('>%s' %", "level, already_processed, namespace_, name_='codMunic') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "level, pretty_print) outfile.write('<%scomplemento>%s</%scomplemento>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.complemento), input_name='complemento')), namespace_, eol_)) if self.bairro is not", "else: eol_ = '' if self.dadosNasc is not None: self.dadosNasc.export(outfile, level, namespace_, name_='dadosNasc',", "rootClass.factory() rootObj.build(rootNode) # Enable Python to collect the space used by the DOM.", "elif self.content_type == MixedContainer.TypeBase64: text = '%s' % base64.b64encode(self.value) return text def exportLiteral(self,", "1: value = attrs.get(attr_name) elif len(attr_parts) == 2: prefix, name = attr_parts namespace", "is not 
None: return subclass(*args_, **kwargs_) if uf.subclass: return uf.subclass(*args_, **kwargs_) else: return", "**kwargs): if parser is None: # Use the lxml ElementTree compatible parser so", "endereco): self.endereco = endereco def hasContent_(self): if ( self.dadosNasc is not None or", "cep.subclass: return cep.subclass(*args_, **kwargs_) else: return cep(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "# \"ElementtypeA\": \"http://www.xxx.com/namespaceA\", # \"ElementtypeB\": \"http://www.xxx.com/namespaceB\", # } # try: from generatedsnamespaces import", "else: return False def export(self, outfile, level, namespace_='', name_='complemento', namespacedef_='', pretty_print=True): imported_ns_def_ =", "name_='TIdeEveTrab', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "namespace_, name_='nmMae') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "input_name=''): return ('%s' % input_data).lower() def gds_validate_boolean(self, input_data, node=None, input_name=''): return input_data def", "pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed) for child in", "- AND the outer elements # - OR the inner elements found1 =", "except ImportError: from xml.etree import ElementTree as etree_ Validate_simpletypes_ = True if sys.version_info.major", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dtFimBenef) if", "% (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.uf), input_name='uf')), namespace_, eol_)) def build(self, node): already_processed = set() self.buildAttributes(node,", "name_='endereco'): pass def exportChildren(self, outfile, level, namespace_='', name_='endereco', fromsubclass_=False, pretty_print=True): if 
pretty_print: eol_", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='nmPai',", "= getSubclassFromModule_( CurrentSubclassModule_, paisNac) if subclass is not None: return subclass(*args_, **kwargs_) if", "CurrentSubclassModule_, uf) if subclass is not None: return subclass(*args_, **kwargs_) if uf.subclass: return", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpAmb', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "exportChildren(self, outfile, level, namespace_='', name_='TIdeEveTrab', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else:", "export method for any class for which there is # a namespace prefix", "# # The root super-class for element type classes # # Calls to", "\"%s\", \"%s\"),\\n' % ( self.category, self.content_type, self.name, self.value)) else: # category == MixedContainer.CategoryComplex", "return False def export(self, outfile, level, namespace_='', name_='dtFimBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dtFimBenef')", "namespace_, eol_)) if self.paisNascto is not None: showIndent(outfile, level, pretty_print) outfile.write('<%spaisNascto>%s</%spaisNascto>%s' % (namespace_,", "paisNac.subclass: return paisNac.subclass(*args_, **kwargs_) else: return paisNac(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "infoPenMorte=None): self.original_tagname_ = None self.tpBenef = tpBenef self.nrBenefic = nrBenefic if isinstance(dtIniBenef, BaseStrType_):", "nodeName_, fromsubclass_=False): pass # end class dtIniBenef class vrBenef(GeneratedsSuper): subclass = None superclass", "None # # Support/utility functions. 
# def showIndent(outfile, level, pretty_print=True): if pretty_print: for", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='fimBeneficio') if self.hasContent_(): outfile.write('>%s'", "is not None: self.altBeneficio.export(outfile, level, namespace_, name_='altBeneficio', pretty_print=pretty_print) if self.fimBeneficio is not None:", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='bairro', pretty_print=pretty_print)", "return True else: return False def export(self, outfile, level, namespace_='', name_='dtIniBenef', namespacedef_='', pretty_print=True):", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrInsc>%s</%snrInsc>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrInsc), input_name='nrInsc')), namespace_, eol_)) def", "else: return None # # If you have installed IPython you can uncomment", "= nrBenefic def get_dtFimBenef(self): return self.dtFimBenef def set_dtFimBenef(self, dtFimBenef): self.dtFimBenef = dtFimBenef def", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpAmb) if subclass is", "= GeneratedsSuper._FixedOffsetTZ( tzoff, results.group(0)) input_data = input_data[:-6] if len(input_data.split('.')) > 1: dt =", "level, namespace_='', name_='TEnderecoExterior', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ =", "getSubclassFromModule_( CurrentSubclassModule_, nrRecibo) if subclass is not None: return subclass(*args_, **kwargs_) if nrRecibo.subclass:", "= GenerateDSNamespaceDefs_.get('tpBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "namespace_='', name_='dtNascto', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "node, nodeName_, fromsubclass_=False): 
if nodeName_ == 'ideEvento': obj_ = TIdeEveTrab.factory() obj_.build(child_) self.ideEvento =", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TIdeEveTrab') if self.hasContent_(): outfile.write('>%s' % (eol_,", "attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'dtNascto':", "def gds_format_date(self, input_data, input_name=''): _svalue = '%04d-%02d-%02d' % ( input_data.year, input_data.month, input_data.day, )", "tz = GeneratedsSuper._FixedOffsetTZ( tzoff, results.group(0)) input_data = input_data[:-6] if len(input_data.split('.')) > 1: dt", "altBeneficio=None, fimBeneficio=None): self.original_tagname_ = None self.tpPlanRP = tpPlanRP self.iniBeneficio = iniBeneficio self.altBeneficio =", "getSubclassFromModule_( CurrentSubclassModule_, vrBenef) if subclass is not None: return subclass(*args_, **kwargs_) if vrBenef.subclass:", "paisResid self.dscLograd = dscLograd self.nrLograd = nrLograd self.complemento = complemento self.bairro = bairro", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nmCid class codPostal(GeneratedsSuper): subclass", "== MixedContainer.TypeDouble: text = '%g' % self.value elif self.content_type == MixedContainer.TypeBase64: text =", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nrRecibo) if subclass is not None: return", "node, 'nrInsc') self.nrInsc = nrInsc_ # end class TEmprPJ class tpInsc(GeneratedsSuper): subclass =", "exportLiteral(self, outfile, level, name): if self.category == MixedContainer.CategoryText: showIndent(outfile, level) outfile.write( 'model_.MixedContainer(%d, %d,", "False def export(self, outfile, level, namespace_='', name_='uf', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('uf') if", "return False def export(self, outfile, level, namespace_='', name_='infoPenMorte', namespacedef_='', 
pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoPenMorte')", "return None # # If you have installed IPython you can uncomment and", "tpLograd(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "if tzoff is not None: total_seconds = tzoff.seconds + (86400 * tzoff.days) if", "= 0 CategoryText = 1 CategorySimple = 2 CategoryComplex = 3 # Constants", "self.exportChildren(outfile, level + 1, namespace_='', name_='idQuota', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "level, already_processed, namespace_='', name_='indRetif'): pass def exportChildren(self, outfile, level, namespace_='', name_='indRetif', fromsubclass_=False, pretty_print=True):", "level, already_processed, namespace_, name_='TDadosBeneficio') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "def export(self, outfile, level, namespace_='', name_='bairro', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('bairro') if imported_ns_def_", "not None: return subclass(*args_, **kwargs_) if idQuota.subclass: return idQuota.subclass(*args_, **kwargs_) else: return idQuota(*args_,", "True else: return False def export(self, outfile, level, namespace_='', name_='nrInsc', namespacedef_='', pretty_print=True): imported_ns_def_", "getSubclassFromModule_( CurrentSubclassModule_, uf) if subclass is not None: return subclass(*args_, **kwargs_) if uf.subclass:", "self.gds_parse_date(sval_) self.dtFimBenef = dval_ elif nodeName_ == 'mtvFim': sval_ = child_.text try: ival_", "subclass is not None: return subclass(*args_, **kwargs_) if TEmprPJ.subclass: return TEmprPJ.subclass(*args_, **kwargs_) else:", "input_name='dscLograd')), namespace_, eol_)) if self.nrLograd is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrLograd>%s</%snrLograd>%s' %", "'', )) already_processed = set() 
self.exportAttributes(outfile, level, already_processed, namespace_, name_='endereco') if self.hasContent_(): outfile.write('>%s'", "infoPenMorte def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_,", "hasContent_(self): if ( self.cpfBenef is not None or self.nmBenefic is not None or", "self.nmMae = nmMae self.nmPai = nmPai def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not", "namespace_='', name_='paisNac', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('paisNac') if imported_ns_def_ is not None: namespacedef_", "# Calls to the methods in these classes are generated by generateDS.py. #", "== 2: BaseStrType_ = basestring else: BaseStrType_ = str def parsexml_(infile, parser=None, **kwargs):", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TDadosBeneficio) if subclass is not None: return", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='cep')", "end class infoBeneficio class tpPlanRP(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "# end class fimBeneficio class tpBenef(GeneratedsSuper): subclass = None superclass = None def", "= None self.idQuota = idQuota self.cpfInst = cpfInst def factory(*args_, **kwargs_): if CurrentSubclassModule_", "**kwargs_) if vrBenef.subclass: return vrBenef.subclass(*args_, **kwargs_) else: return vrBenef(*args_, **kwargs_) factory = staticmethod(factory)", "== 'nrBenefic': nrBenefic_ = child_.text nrBenefic_ = self.gds_validate_string(nrBenefic_, node, 'nrBenefic') self.nrBenefic = nrBenefic_", "= TDadosBenef.factory() obj_.build(child_) self.dadosBenef = obj_ obj_.original_tagname_ = 'dadosBenef' # end class ideBenef", "not None or self.bairro is not None or self.cep is not None or", "'infoPenMorte' # end class TDadosBeneficio class dtIniBenef(GeneratedsSuper): subclass = None 
[n-gram fragment dump of an auto-generated Python module: bindings produced by generateDS (`/usr/local/bin/generateDS --no-process-includes -o "esociallib/v2_04/evtCdBenPrRP.py" schemas/v2_04/evtCdBenPrRP.xsd`) for the Brazilian eSocial event evtCdBenPrRP, schema version v2.04. The shredded fragments cover the generated binding classes — including eSocial, TIdeEveTrab, TEmprPJ, TDadosBenef, TDadosBeneficio, TEnderecoBrasil, TEnderecoExterior, ideBenef, cpfBenef, dadosNasc, infoPenMorte, iniBeneficio, altBeneficio, fimBeneficio — and the standard generateDS scaffolding (factory(), export(), exportAttributes(), exportChildren(), hasContent_(), build(), buildAttributes(), buildChildren(), MixedContainer, MemberSpec_, gds_format_*/gds_validate_* helpers, and the lxml/xml.etree parser fallback). The original source file is not reconstructable from these fragments.]
imported_ns_def_", "( input_data.year, input_data.month, input_data.day, input_data.hour, input_data.minute, input_data.second, ('%f' % (float(input_data.microsecond) / 1000000))[2:], )", "name_='TEmprPJ'): pass def exportChildren(self, outfile, level, namespace_='', name_='TEmprPJ', fromsubclass_=False, pretty_print=True): if pretty_print: eol_", "self.to_etree_simple() else: # category == MixedContainer.CategoryComplex self.value.to_etree(element) def to_etree_simple(self): if self.content_type == MixedContainer.TypeString:", "0: return self.data_type[-1] else: return 'xs:string' else: return self.data_type def set_container(self, container): self.container", "is not None: return subclass(*args_, **kwargs_) if nmBenefic.subclass: return nmBenefic.subclass(*args_, **kwargs_) else: return", "name_='tpAmb', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpAmb') if imported_ns_def_ is not None: namespacedef_ =", "pretty_print) outfile.write('<%snrRecibo>%s</%snrRecibo>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrRecibo), input_name='nrRecibo')), namespace_, eol_)) if self.tpAmb is not None:", "tpBenef(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='mtvFim') if self.hasContent_(): outfile.write('>%s' %", "already_processed, namespace_='', name_='tpPlanRP'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpPlanRP', fromsubclass_=False, pretty_print=True): pass", "nmPai.subclass(*args_, **kwargs_) else: return nmPai(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if (", "parseString(inString, silence=False): if sys.version_info.major == 2: from StringIO import StringIO as IOBuffer else:", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, 
name_='infoBeneficio') if self.hasContent_(): outfile.write('>%s' %", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='verProc') if self.hasContent_(): outfile.write('>%s' %", "level, pretty_print) outfile.write('<%snrInsc>%s</%snrInsc>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrInsc), input_name='nrInsc')), namespace_, eol_)) def build(self, node): already_processed", "name_='tpBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "__init__(self, dadosNasc=None, endereco=None): self.original_tagname_ = None self.dadosNasc = dadosNasc self.endereco = endereco def", "outfile.write('<%snmBenefic>%s</%snmBenefic>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmBenefic), input_name='nmBenefic')), namespace_, eol_)) if self.dadosBenef is not None: self.dadosBenef.export(outfile,", "self.category == MixedContainer.CategoryText: # Prevent exporting empty content as empty lines. 
if self.value.strip():", "TIdeEveTrab.subclass(*args_, **kwargs_) else: return TIdeEveTrab(*args_, **kwargs_) factory = staticmethod(factory) def get_indRetif(self): return self.indRetif", "def dst(self, dt): return None def gds_format_string(self, input_data, input_name=''): return input_data def gds_validate_string(self,", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEnderecoExterior') if self.hasContent_(): outfile.write('>%s'", "= 'fimBeneficio' # end class infoBeneficio class tpPlanRP(GeneratedsSuper): subclass = None superclass =", "'%s (element %s/line %d)' % (msg, node.tag, node.sourceline, ) raise GDSParseError(msg) class MixedContainer:", "def convert_unicode(instring): if isinstance(instring, str): result = quote_xml(instring) elif sys.version_info.major == 2 and", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrInsc'): pass def", "self.bairro def set_bairro(self, bairro): self.bairro = bairro def get_nmCid(self): return self.nmCid def set_nmCid(self,", "# Change this to redirect the generated superclass module to use a #", "ival_ elif nodeName_ == 'nrInsc': nrInsc_ = child_.text nrInsc_ = self.gds_validate_string(nrInsc_, node, 'nrInsc')", "return text def exportLiteral(self, outfile, level, name): if self.category == MixedContainer.CategoryText: showIndent(outfile, level)", "self.Id is not None and 'Id' not in already_processed: already_processed.add('Id') outfile.write(' Id=%s' %", "= None superclass = None def __init__(self, tpInsc=None, nrInsc=None): self.original_tagname_ = None self.tpInsc", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class tpBenef class nrBenefic(GeneratedsSuper): subclass", "return True else: return False def export(self, outfile, level, namespace_='', name_='codPostal', namespacedef_='', pretty_print=True):", "6 TypeBoolean = 7 TypeBase64 = 8 def 
__init__(self, category, content_type, name, value):", "(namespace_, self.gds_format_integer(self.procEmi, input_name='procEmi'), namespace_, eol_)) if self.verProc is not None: showIndent(outfile, level, pretty_print)", "return False def export(self, outfile, level, namespace_='', name_='TIdeEveTrab', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TIdeEveTrab')", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TDadosBeneficio') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "self.gds_validate_integer(ival_, node, 'tpAmb') self.tpAmb = ival_ elif nodeName_ == 'procEmi': sval_ = child_.text", "True else: return False def export(self, outfile, level, namespace_='', name_='nmMae', namespacedef_='', pretty_print=True): imported_ns_def_", "name_='indRetif', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "input_data.hour, input_data.minute, input_data.second, ) else: _svalue = '%04d-%02d-%02dT%02d:%02d:%02d.%s' % ( input_data.year, input_data.month, input_data.day,", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmCid') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "namespace_, eol_)) if self.nrInsc is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrInsc>%s</%snrInsc>%s' % (namespace_,", "vrBenef): self.vrBenef = vrBenef def get_infoPenMorte(self): return self.infoPenMorte def set_infoPenMorte(self, infoPenMorte): self.infoPenMorte =", "pretty_print=pretty_print) if self.exterior is not None: self.exterior.export(outfile, level, namespace_, name_='exterior', pretty_print=pretty_print) def build(self,", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='bairro', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "class cep class TEnderecoExterior(GeneratedsSuper): 
\"\"\"Informações do Endereço no Exterior\"\"\" subclass = None superclass", "pass def exportChildren(self, outfile, level, namespace_='', name_='dadosNasc', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ =", "rootObj = rootClass.factory() rootObj.build(rootNode) # Enable Python to collect the space used by", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmPai') if self.hasContent_():", "quote_xml(instring).encode('utf8') else: result = GeneratedsSuper.gds_encode(str(instring)) return result def __eq__(self, other): if type(self) !=", "def get_uf(self): return self.uf def set_uf(self, uf): self.uf = uf def get_paisNascto(self): return", "namespace_='', name_='TDadosBeneficio', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = ''", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nrInsc) if subclass is", "showIndent(outfile, level, pretty_print) outfile.write('<%spaisNascto>%s</%spaisNascto>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisNascto), input_name='paisNascto')), namespace_, eol_)) if self.paisNac is", "subclass(*args_, **kwargs_) if bairro.subclass: return bairro.subclass(*args_, **kwargs_) else: return bairro(*args_, **kwargs_) factory =", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='vrBenef'): pass def exportChildren(self, outfile, level, namespace_='',", "level + 1, namespace_='', name_='bairro', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "'paisNac': paisNac_ = child_.text paisNac_ = self.gds_validate_string(paisNac_, node, 'paisNac') self.paisNac = paisNac_ elif", "for mo in matchobjects: s3 = s1[pos:mo.start()] s2 += quote_xml_aux(s3) s2 += s1[mo.start():mo.end()]", "DOM. 
doc = None mapping = {} rootElement = rootObj.to_etree(None, name_=rootTag, mapping_=mapping) reverse_mapping", "pass def exportChildren(self, outfile, level, namespace_='', name_='tpPlanRP', fromsubclass_=False, pretty_print=True): pass def build(self, node):", "node, nodeName_, fromsubclass_=False): pass # end class verProc class TEmprPJ(GeneratedsSuper): \"\"\"Informações do Empregador", "integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'mtvFim') self.mtvFim = ival_ #", "= None if input_data[-1] == 'Z': tz = GeneratedsSuper._FixedOffsetTZ(0, 'UTC') input_data = input_data[:-1]", "MixedContainer.CategoryComplex self.value.export( outfile, level, namespace, name, pretty_print=pretty_print) def exportSimple(self, outfile, level, name): if", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, paisResid) if", "already_processed, namespace_, name_='cep') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", "input_data def gds_format_boolean_list(self, input_data, input_name=''): return '%s' % ' '.join(input_data) def gds_validate_boolean_list( self,", "subclass = getSubclassFromModule_( CurrentSubclassModule_, ideBenef) if subclass is not None: return subclass(*args_, **kwargs_)", "child_, node, nodeName_, fromsubclass_=False): pass # end class paisNac class nmMae(GeneratedsSuper): subclass =", "else: return False def export(self, outfile, level, namespace_='', name_='paisNascto', namespacedef_='', pretty_print=True): imported_ns_def_ =", "self.nrLograd is not None or self.complemento is not None or self.bairro is not", "name_='evtCdBenPrRP', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if", "**kwargs_) else: return tpAmb(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ):", "'Signature': Signature_ = child_.text Signature_ = self.gds_validate_string(Signature_, 
node, 'Signature') self.Signature = Signature_ #", "not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ = '\\n' else: eol_ =", "def exportChildren(self, outfile, level, namespace_='', name_='TEmprPJ', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n'", "%s' % exp) fval_ = self.gds_validate_float(fval_, node, 'vrBenef') self.vrBenef = fval_ elif nodeName_", "level, namespace_='', name_='dtFimBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dtFimBenef') if imported_ns_def_ is not None:", "False def export(self, outfile, level, namespace_='', name_='nrBenefic', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrBenefic') if", "or '%s' % inStr) s2 = '' pos = 0 matchobjects = CDATA_pattern_.finditer(s1)", "node, nodeName_, fromsubclass_=False): pass # end class dscLograd class nrLograd(GeneratedsSuper): subclass = None", "__init__(self, evtCdBenPrRP=None, Signature=None): self.original_tagname_ = None self.evtCdBenPrRP = evtCdBenPrRP self.Signature = Signature def", "superclass = None def __init__(self, tpInsc=None, nrInsc=None): self.original_tagname_ = None self.tpInsc = tpInsc", "= vrBenef def get_infoPenMorte(self): return self.infoPenMorte def set_infoPenMorte(self, infoPenMorte): self.infoPenMorte = infoPenMorte def", "self.dadosNasc = dadosNasc self.endereco = endereco def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not", "self.codMunic = codMunic def get_uf(self): return self.uf def set_uf(self, uf): self.uf = uf", "name_='tpInsc', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='idQuota'): pass def exportChildren(self,", "name self.value = value def getCategory(self): return self.category def getContenttype(self, content_type): return self.content_type", 
"self.dtIniBenef = dval_ elif nodeName_ == 'vrBenef': sval_ = child_.text try: fval_ =", "is not None or self.bairro is not None or self.nmCid is not None", "def export(self, outfile, level, namespace_='', name_='TDadosBeneficio', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TDadosBeneficio') if imported_ns_def_", "level, namespace_='', name_='indRetif', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('indRetif') if imported_ns_def_ is not None:", "values = input_data.split() for value in values: try: int(value) except (TypeError, ValueError): raise_parse_error(node,", "# # Globals # ExternalEncoding = 'ascii' Tag_pattern_ = re_.compile(r'({.*})?(.*)') String_cleanup_pat_ = re_.compile(r\"[\\n\\r\\s]+\")", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='cpfBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "input_name=''): values = input_data.split() for value in values: try: int(value) except (TypeError, ValueError):", "prefix, name = attr_parts namespace = node.nsmap.get(prefix) if namespace is not None: value", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpBenef>%s</%stpBenef>%s' % (namespace_, self.gds_format_integer(self.tpBenef, input_name='tpBenef'), namespace_, eol_)) if", "False def export(self, outfile, level, namespace_='', name_='TDadosBeneficio', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TDadosBeneficio') if", "'dtNascto': sval_ = child_.text dval_ = self.gds_parse_date(sval_) self.dtNascto = dval_ elif nodeName_ ==", "not None or self.dadosBenef is not None ): return True else: return False", "level, already_processed, namespace_='', name_='nrBenefic'): pass def exportChildren(self, outfile, level, namespace_='', name_='nrBenefic', fromsubclass_=False, pretty_print=True):", ")) self.exportChildren(outfile, level + 1, namespace_='', 
name_='nrLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "elif nodeName_ == 'altBeneficio': obj_ = TDadosBeneficio.factory() obj_.build(child_) self.altBeneficio = obj_ obj_.original_tagname_ =", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='procEmi') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "evento\"\"\" subclass = None superclass = None def __init__(self, indRetif=None, nrRecibo=None, tpAmb=None, procEmi=None,", "codPostal def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_,", "child_.text nmBenefic_ = self.gds_validate_string(nmBenefic_, node, 'nmBenefic') self.nmBenefic = nmBenefic_ elif nodeName_ == 'dadosBenef':", "except (TypeError, ValueError): raise_parse_error(node, 'Requires sequence of integers') return values def gds_format_float(self, input_data,", "class indRetif class nrRecibo(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_", "input_data.minute, input_data.second, ) else: _svalue = '%04d-%02d-%02dT%02d:%02d:%02d.%s' % ( input_data.year, input_data.month, input_data.day, input_data.hour,", "'' if self.evtCdBenPrRP is not None: self.evtCdBenPrRP.export(outfile, level, namespace_, name_='evtCdBenPrRP', pretty_print=pretty_print) if self.Signature", "evtCdBenPrRP(GeneratedsSuper): \"\"\"Evento de cadastro de benefícios previdenciários de Regimes Próprios\"\"\" subclass = None", "self.gds_validate_string(nmMae_, node, 'nmMae') self.nmMae = nmMae_ elif nodeName_ == 'nmPai': nmPai_ = child_.text", "% (namespace_, self.gds_format_integer(self.codMunic, input_name='codMunic'), namespace_, eol_)) if self.uf is not None: showIndent(outfile, level,", "GeneratedsSuper._FixedOffsetTZ( tzoff, results.group(0)) input_data = input_data[:-6] time_parts = input_data.split('.') if len(time_parts) > 1:", 
"TEnderecoExterior.factory() obj_.build(child_) self.exterior = obj_ obj_.original_tagname_ = 'exterior' # end class endereco class", "= globals().get(tag) return tag, rootClass def parse(inFileName, silence=False): parser = None doc =", "self.paisNascto is not None or self.paisNac is not None or self.nmMae is not", "level, pretty_print) outfile.write('<%sbairro>%s</%sbairro>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.bairro), input_name='bairro')), namespace_, eol_)) if self.cep is not", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrRecibo') if self.hasContent_(): outfile.write('>%s'", "self.dadosNasc = obj_ obj_.original_tagname_ = 'dadosNasc' elif nodeName_ == 'endereco': obj_ = endereco.factory()", "is not None: return subclass(*args_, **kwargs_) if procEmi.subclass: return procEmi.subclass(*args_, **kwargs_) else: return", "= datetime_.datetime.strptime( input_data, '%Y-%m-%dT%H:%M:%S.%f') else: dt = datetime_.datetime.strptime( input_data, '%Y-%m-%dT%H:%M:%S') dt = dt.replace(tzinfo=tz)", "name, )) return value class GDSParseError(Exception): pass def raise_parse_error(node, msg): msg = '%s", "level, namespace_='', name_='dtNascto', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "= None doc = parsexml_(inFileName, parser) rootNode = doc.getroot() rootTag, rootClass = get_root_tag(rootNode)", "nrInsc_ = child_.text nrInsc_ = self.gds_validate_string(nrInsc_, node, 'nrInsc') self.nrInsc = nrInsc_ # end", "None: return subclass(*args_, **kwargs_) if codMunic.subclass: return codMunic.subclass(*args_, **kwargs_) else: return codMunic(*args_, **kwargs_)", "% (namespace_, self.gds_format_date(self.dtNascto, input_name='dtNascto'), namespace_, eol_)) if self.codMunic is not None: showIndent(outfile, level,", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( 
CurrentSubclassModule_, tpPlanRP) if subclass is", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'indRetif': sval_", "s1 else: return '\"\"\"%s\"\"\"' % s1 def get_all_text_(node): if node.text is not None:", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dtIniBenef') if self.hasContent_(): outfile.write('>%s' % (eol_,", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TIdeEveTrab'): pass", "% ( self.name, base64.b64encode(self.value), self.name)) def to_etree(self, element): if self.category == MixedContainer.CategoryText: #", "concedido ao servidor\"\"\" subclass = None superclass = None def __init__(self, tpPlanRP=None, iniBeneficio=None,", "return mtvFim(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "or self.nrInsc is not None ): return True else: return False def export(self,", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='cpfBenef') if self.hasContent_(): outfile.write('>%s' % (eol_,", "self.exportChildren(outfile, level + 1, namespace_='', name_='nrRecibo', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "node=None, input_name=''): return input_data def gds_format_integer_list(self, input_data, input_name=''): return '%s' % ' '.join(input_data)", "fromsubclass_=False): if nodeName_ == 'evtCdBenPrRP': obj_ = evtCdBenPrRP.factory() obj_.build(child_) self.evtCdBenPrRP = obj_ obj_.original_tagname_", "nodeName_, fromsubclass_=False): pass # end class cep class TEnderecoExterior(GeneratedsSuper): \"\"\"Informações do Endereço no", "'' s1 = (isinstance(inStr, BaseStrType_) and inStr or '%s' % inStr) s2 =", "idQuota_ elif nodeName_ == 'cpfInst': cpfInst_ = child_.text cpfInst_ = self.gds_validate_string(cpfInst_, node, 
'cpfInst')", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='cep') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "subclass = getSubclassFromModule_( CurrentSubclassModule_, infoPenMorte) if subclass is not None: return subclass(*args_, **kwargs_)", "% exp) ival_ = self.gds_validate_integer(ival_, node, 'tpAmb') self.tpAmb = ival_ elif nodeName_ ==", "outfile, level, namespace_='', name_='ideBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('ideBenef') if imported_ns_def_ is not", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrLograd') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='indRetif', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "class fimBeneficio(GeneratedsSuper): \"\"\"Informações relativas a benefícios previdenciários - Término. 
Validação: Só pode ser", "namespace_, name_='TEnderecoBrasil') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "minutes = (total_seconds - (hours * 3600)) // 60 _svalue += '{0:02d}:{1:02d}'.format(hours, minutes)", "= ideEmpregador self.ideBenef = ideBenef self.infoBeneficio = infoBeneficio def factory(*args_, **kwargs_): if CurrentSubclassModule_", "None: names = classname.split(':') if len(names) == 2: classname = names[1] class_obj2 =", "return nrInsc(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "sys.stdout.write(content) sys.stdout.write('\\n') return rootObj, rootElement, mapping, reverse_mapping def parseString(inString, silence=False): if sys.version_info.major ==", "'altBeneficio': TDadosBeneficio, 'brasil': TEnderecoBrasil, 'dadosBenef': TDadosBenef, 'exterior': TEnderecoExterior, 'ideEmpregador': TEmprPJ, 'ideEvento': TIdeEveTrab, 'iniBeneficio':", "altBeneficio self.fimBeneficio = fimBeneficio def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass", "subclass(*args_, **kwargs_) if dtNascto.subclass: return dtNascto.subclass(*args_, **kwargs_) else: return dtNascto(*args_, **kwargs_) factory =", "tpPlanRP) if subclass is not None: return subclass(*args_, **kwargs_) if tpPlanRP.subclass: return tpPlanRP.subclass(*args_,", "GenerateDSNamespaceDefs_.get('nrBenefic') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmCid') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "self.paisResid is not None or self.dscLograd is not None or self.nrLograd is not", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='verProc', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "## ipshell = IPShellEmbed(args, ## banner = 'Dropping into IPython', 
## exit_msg =", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEnderecoBrasil') if self.hasContent_(): outfile.write('>%s' %", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='indRetif') if self.hasContent_(): outfile.write('>%s' %", "= 'eSocial' rootClass = eSocial rootObj = rootClass.factory() rootObj.build(rootNode) # Enable Python to", "name_='tpInsc'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpInsc', fromsubclass_=False, pretty_print=True): pass def build(self,", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='complemento'): pass def exportChildren(self, outfile, level,", "== 'paisResid': paisResid_ = child_.text paisResid_ = self.gds_validate_string(paisResid_, node, 'paisResid') self.paisResid = paisResid_", "level, namespace_='', name_='ideBenef', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ =", "set_infoBeneficio(self, infoBeneficio): self.infoBeneficio = infoBeneficio def get_Id(self): return self.Id def set_Id(self, Id): self.Id", "if ( self.paisResid is not None or self.dscLograd is not None or self.nrLograd", "= GenerateDSNamespaceDefs_.get('nmCid') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "ideBenef): self.ideBenef = ideBenef def get_infoBeneficio(self): return self.infoBeneficio def set_infoBeneficio(self, infoBeneficio): self.infoBeneficio =", "'infoBeneficio' # end class evtCdBenPrRP class ideBenef(GeneratedsSuper): \"\"\"Identificação do beneficiário\"\"\" subclass = None", "= GenerateDSNamespaceDefs_.get('TDadosBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "namespace, pretty_print=True): if self.category == MixedContainer.CategoryText: # Prevent exporting empty content as empty", "get_container(self): return self.container def 
# [This span of the extracted file contained only shuffled out-of-order
#  fragments of the generated module; the placeholder below keeps the
#  information recoverable from those fragments.]
#
# esociallib/v2_04/evtCdBenPrRP.py -- generated by generateDS from the
# eSocial schema, working directory `esociallib`:
#
#   /usr/local/bin/generateDS --no-process-includes \
#       -o "esociallib/v2_04/evtCdBenPrRP.py" schemas/v2_04/evtCdBenPrRP.xsd
#
# The module defines one GeneratedsSuper subclass per schema type, including:
#
#   eSocial, evtCdBenPrRP
#   TIdeEveTrab        -- event identification ("Identificação do evento")
#   TEmprPJ            -- employer (PJ) information: tpInsc, nrInsc
#   TDadosBenef        -- beneficiary data (dadosNasc, endereco)
#   dadosNasc          -- beneficiary birth information
#   TEnderecoBrasil, TEnderecoExterior -- address in Brazil / abroad
#   TDadosBeneficio, infoBeneficio     -- pension benefit granted to the worker
#   infoPenMorte, fimBeneficio         -- death pension / benefit termination
#
# plus simple-element wrapper classes such as cpfBenef, nmBenefic, nrBenefic,
# dtIniBenef, dtFimBenef, mtvFim, tpPlanRP, vrBenef, cpfInst, idQuota,
# tpLograd, dscLograd, nrLograd, complemento, bairro, nmCid, cep, codMunic,
# codPostal, uf, paisResid, paisNascto, paisNac, nmMae, nmPai, nrRecibo,
# indRetif, procEmi, verProc, tpInsc, nrInsc.
#
# Every class follows the same generateDS pattern: a static factory(), a
# build()/buildAttributes()/buildChildren() trio that populates the object
# from an lxml/ElementTree node, an export()/exportAttributes()/
# exportChildren() trio that serializes it back to XML, and hasContent_()
# to choose between <tag>...</tag> and the self-closing <tag/> form.
# Shared infrastructure includes the MixedContainer and MemberSpec_ helper
# classes, the GDSClassesMapping tag-to-class dict, the gds_format_* /
# gds_validate_* formatting helpers with timezone support (_FixedOffsetTZ),
# and the module-level parse(), parseString() and parseLiteral() entry points.
None: namespacedef_ = imported_ns_def_ if", "def hasContent_(self): if ( self.brasil is not None or self.exterior is not None", "self.container = container def get_container(self): return self.container def set_child_attrs(self, child_attrs): self.child_attrs = child_attrs", "level, namespace_='', name_='uf', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "already_processed, namespace_='', name_='endereco'): pass def exportChildren(self, outfile, level, namespace_='', name_='endereco', fromsubclass_=False, pretty_print=True): if", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nrBenefic) if subclass is not None: return", "# end class codMunic class uf(GeneratedsSuper): subclass = None superclass = None def", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, vrBenef) if subclass is not None: return subclass(*args_,", "= '\\n' else: eol_ = '' if self.tpBenef is not None: showIndent(outfile, level,", "input_name=''): return input_data def gds_validate_string(self, input_data, node=None, input_name=''): if not input_data: return ''", "name_='dtIniBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='dtIniBenef', fromsubclass_=False, pretty_print=True): pass def build(self,", "self.category = category self.content_type = content_type self.name = name self.value = value def", "not None or self.nrRecibo is not None or self.tpAmb is not None or", "input_name=''): return input_data def gds_format_time(self, input_data, input_name=''): if input_data.microsecond == 0: _svalue =", "paisNac=None, nmMae=None, nmPai=None): self.original_tagname_ = None if isinstance(dtNascto, BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtNascto, '%Y-%m-%d').date()", "return False def export(self, outfile, level, namespace_='', name_='procEmi', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('procEmi')", 
"mapping, reverse_mapping def parseString(inString, silence=False): if sys.version_info.major == 2: from StringIO import StringIO", "level, already_processed, namespace_='', name_='TIdeEveTrab'): pass def exportChildren(self, outfile, level, namespace_='', name_='TIdeEveTrab', fromsubclass_=False, pretty_print=True):", "level, pretty_print) outfile.write('<%svrBenef>%s</%svrBenef>%s' % (namespace_, self.gds_format_float(self.vrBenef, input_name='vrBenef'), namespace_, eol_)) if self.infoPenMorte is not", "self.dtFimBenef is not None or self.mtvFim is not None ): return True else:", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class vrBenef class infoPenMorte(GeneratedsSuper):", "outfile, level, already_processed, namespace_='', name_='dtIniBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='dtIniBenef', fromsubclass_=False,", "**kwargs_) else: return eSocial(*args_, **kwargs_) factory = staticmethod(factory) def get_evtCdBenPrRP(self): return self.evtCdBenPrRP def", "outfile, level, namespace_='', name_='tpInsc', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dtFimBenef) if subclass is not None:", "self.dtIniBenef is not None or self.vrBenef is not None or self.infoPenMorte is not", "self.bairro = bairro self.nmCid = nmCid self.codPostal = codPostal def factory(*args_, **kwargs_): if", "subclass = getSubclassFromModule_( CurrentSubclassModule_, paisResid) if subclass is not None: return subclass(*args_, **kwargs_)", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='codMunic', pretty_print=pretty_print)", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TEnderecoBrasil'): pass def exportChildren(self, outfile, level, namespace_='', name_='TEnderecoBrasil',", "namespace_='', 
name_='tpBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpBenef', fromsubclass_=False, pretty_print=True): pass def", "input_data.second, ) else: _svalue = '%04d-%02d-%02dT%02d:%02d:%02d.%s' % ( input_data.year, input_data.month, input_data.day, input_data.hour, input_data.minute,", "= container self.child_attrs = child_attrs self.choice = choice self.optional = optional def set_name(self,", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, indRetif) if subclass is not None:", "= input_data[:-6] time_parts = input_data.split('.') if len(time_parts) > 1: micro_seconds = int(float('0.' +", "by re-implementing the following class # in a module named generatedssuper.py. try: from", "from lxml import etree as etree_ except ImportError: from xml.etree import ElementTree as", "endereco def hasContent_(self): if ( self.dadosNasc is not None or self.endereco is not", "else: return False def export(self, outfile, level, namespace_='', name_='cpfInst', namespacedef_='', pretty_print=True): imported_ns_def_ =", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmMae) if subclass is not None: return", "namespace_='', name_='dtFimBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, ))", "self.nmPai def set_nmPai(self, nmPai): self.nmPai = nmPai def hasContent_(self): if ( self.dtNascto is", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='uf', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "sequence of floats') return values def gds_format_double(self, input_data, input_name=''): return '%e' % input_data", "previdenciários - Término. 
Validação: Só pode ser informado se já houver informação anterior", "module generatedsnamespaces, if it is importable, must contain # a dictionary named GeneratedsNamespaceDefs.", "(86400 * tzoff.days) if total_seconds == 0: _svalue += 'Z' else: if total_seconds", "is None: element[-1].tail = self.value else: element[-1].tail += self.value else: if element.text is", "if indRetif.subclass: return indRetif.subclass(*args_, **kwargs_) else: return indRetif(*args_, **kwargs_) factory = staticmethod(factory) def", "# end class tpLograd class dscLograd(GeneratedsSuper): subclass = None superclass = None def", "nmMae(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='eSocial', pretty_print=pretty_print)", "not None: self.altBeneficio.export(outfile, level, namespace_, name_='altBeneficio', pretty_print=pretty_print) if self.fimBeneficio is not None: self.fimBeneficio.export(outfile,", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, codMunic) if subclass is", "not None or self.fimBeneficio is not None ): return True else: return False", "**kwargs_) factory = staticmethod(factory) def get_evtCdBenPrRP(self): return self.evtCdBenPrRP def set_evtCdBenPrRP(self, evtCdBenPrRP): self.evtCdBenPrRP =", "= input_data[:-6] dt = datetime_.datetime.strptime(input_data, '%Y-%m-%d') dt = dt.replace(tzinfo=tz) return dt.date() def gds_validate_time(self,", "% (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', )) already_processed", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nrLograd class complemento(GeneratedsSuper):", "getSubclassFromModule_( CurrentSubclassModule_, complemento) if subclass is not None: return subclass(*args_, **kwargs_) if complemento.subclass:", "dscLograd class 
nrLograd(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ =", "False def export(self, outfile, level, namespace_='', name_='tpInsc', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpInsc') if", "level, pretty_print) outfile.write('<%snrLograd>%s</%snrLograd>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrLograd), input_name='nrLograd')), namespace_, eol_)) if self.complemento is not", "TDadosBenef(*args_, **kwargs_) factory = staticmethod(factory) def get_dadosNasc(self): return self.dadosNasc def set_dadosNasc(self, dadosNasc): self.dadosNasc", "# should map element type names (strings) to XML schema namespace prefix #", "category self.content_type = content_type self.name = name self.value = value def getCategory(self): return", "module to use a # specific subclass module. CurrentSubclassModule_ = None # #", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('vrBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='verProc') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "-1 else: _svalue += '+' hours = total_seconds // 3600 minutes = (total_seconds", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='paisNascto') if", "s1 = inStr.replace('&', '&amp;') s1 = s1.replace('<', '&lt;') s1 = s1.replace('>', '&gt;') return", "getSubclassFromModule_( CurrentSubclassModule_, nmCid) if subclass is not None: return subclass(*args_, **kwargs_) if nmCid.subclass:", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dadosNasc'): pass", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='verProc'):", 
"mtvFim.subclass(*args_, **kwargs_) else: return mtvFim(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if (", "node, nodeName_, fromsubclass_=False): pass # end class complemento class bairro(GeneratedsSuper): subclass = None", "name_='infoBeneficio', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "as exp: raise_parse_error(child_, 'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'indRetif')", "'codMunic': sval_ = child_.text try: ival_ = int(sval_) except (TypeError, ValueError) as exp:", "exportChildren(self, outfile, level, namespace_='', name_='vrBenef', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "# # Command line: # /usr/local/bin/generateDS --no-process-includes -o \"esociallib/v2_04/evtCdBenPrRP.py\" schemas/v2_04/evtCdBenPrRP.xsd # # Current", "obj_ obj_.original_tagname_ = 'iniBeneficio' elif nodeName_ == 'altBeneficio': obj_ = TDadosBeneficio.factory() obj_.build(child_) self.altBeneficio", "None or self.complemento is not None or self.bairro is not None or self.cep", "= mtvFim def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_(", "class verProc(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmMae>%s</%snmMae>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmMae), input_name='nmMae')), namespace_, eol_))", "self.cep = cep_ elif nodeName_ == 'codMunic': sval_ = child_.text try: ival_ =", "ipshell.\\nHit Ctrl-D to exit') # # Globals # ExternalEncoding = 'ascii' Tag_pattern_ =", "of this # table. 
# A sample table is: # # # File:", "tpLograd class dscLograd(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ =", "**kwargs_) if eSocial.subclass: return eSocial.subclass(*args_, **kwargs_) else: return eSocial(*args_, **kwargs_) factory = staticmethod(factory)", "def exportChildren(self, outfile, level, namespace_='', name_='TIdeEveTrab', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n'", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nrInsc class TDadosBenef(GeneratedsSuper):", "def gds_format_string(self, input_data, input_name=''): return input_data def gds_validate_string(self, input_data, node=None, input_name=''): if not", "return False def export(self, outfile, level, namespace_='', name_='dtNascto', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dtNascto')", "double: %s' % exp) fval_ = self.gds_validate_float(fval_, node, 'vrBenef') self.vrBenef = fval_ elif", "# end class evtCdBenPrRP class ideBenef(GeneratedsSuper): \"\"\"Identificação do beneficiário\"\"\" subclass = None superclass", "if self.value.strip(): outfile.write(self.value) elif self.category == MixedContainer.CategorySimple: self.exportSimple(outfile, level, name) else: # category", "return False def export(self, outfile, level, namespace_='', name_='uf', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('uf')", "= GeneratedsSuper._FixedOffsetTZ( tzoff, results.group(0)) input_data = input_data[:-6] dt = datetime_.datetime.strptime(input_data, '%Y-%m-%d') dt =", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='paisNac') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEnderecoExterior')", "if self.fimBeneficio is 
not None: self.fimBeneficio.export(outfile, level, namespace_, name_='fimBeneficio', pretty_print=pretty_print) def build(self, node):", "self.bairro def set_bairro(self, bairro): self.bairro = bairro def get_cep(self): return self.cep def set_cep(self,", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='infoPenMorte', pretty_print=pretty_print)", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='indRetif'):", "namespace_, name_='codMunic') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "outfile.write('<%sprocEmi>%s</%sprocEmi>%s' % (namespace_, self.gds_format_integer(self.procEmi, input_name='procEmi'), namespace_, eol_)) if self.verProc is not None: showIndent(outfile,", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='verProc') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, cpfBenef) if subclass", "% s1 def get_all_text_(node): if node.text is not None: text = node.text else:", "def get_vrBenef(self): return self.vrBenef def set_vrBenef(self, vrBenef): self.vrBenef = vrBenef def get_infoPenMorte(self): return", "set_container(self, container): self.container = container def get_container(self): return self.container def set_child_attrs(self, child_attrs): self.child_attrs", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TIdeEveTrab') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TDadosBeneficio'):", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='cpfInst') if 
self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "exportChildren(self, outfile, level, namespace_='', name_='verProc', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmPai') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpPlanRP')", "attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'indRetif':", "for which there is # a namespace prefix definition, will export that definition", "outfile, level, namespace_='', name_='paisResid', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('paisResid') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "namespace_='', name_='nmPai'): pass def exportChildren(self, outfile, level, namespace_='', name_='nmPai', fromsubclass_=False, pretty_print=True): pass def", "): return True else: return False def export(self, outfile, level, namespace_='', name_='tpBenef', namespacedef_='',", "return False def export(self, outfile, level, namespace_='', name_='TEnderecoExterior', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TEnderecoExterior')", "outfile, level, namespace_='', name_='tpAmb', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class paisNascto class paisNac(GeneratedsSuper): subclass", "TDadosBeneficio.subclass(*args_, **kwargs_) else: return TDadosBeneficio(*args_, **kwargs_) factory = staticmethod(factory) def get_tpBenef(self): return 
self.tpBenef", "None: return subclass(*args_, **kwargs_) if tpAmb.subclass: return tpAmb.subclass(*args_, **kwargs_) else: return tpAmb(*args_, **kwargs_)", "pass def exportChildren(self, outfile, level, namespace_='', name_='cpfInst', fromsubclass_=False, pretty_print=True): pass def build(self, node):", "or self.content_type == MixedContainer.TypeBoolean): text = '%d' % self.value elif (self.content_type == MixedContainer.TypeFloat", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='indRetif')", "relacionadas ao benefício previdenciário concedido ao servidor\"\"\" subclass = None superclass = None", "= None superclass = None def __init__(self, indRetif=None, nrRecibo=None, tpAmb=None, procEmi=None, verProc=None): self.original_tagname_", "getSubclassFromModule_( CurrentSubclassModule_, fimBeneficio) if subclass is not None: return subclass(*args_, **kwargs_) if fimBeneficio.subclass:", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='codMunic') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "namespace_='', name_='dtFimBenef', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "subclass is not None: return subclass(*args_, **kwargs_) if bairro.subclass: return bairro.subclass(*args_, **kwargs_) else:", "nodeName_ == 'ideEmpregador': obj_ = TEmprPJ.factory() obj_.build(child_) self.ideEmpregador = obj_ obj_.original_tagname_ = 'ideEmpregador'", "subclass(*args_, **kwargs_) if nrInsc.subclass: return nrInsc.subclass(*args_, **kwargs_) else: return nrInsc(*args_, **kwargs_) factory =", "-1: if s1.find('\\n') == -1: return \"'%s'\" % s1 else: return \"'''%s'''\" %", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class procEmi class verProc(GeneratedsSuper):", "(namespace_, 
self.gds_format_integer(self.codMunic, input_name='codMunic'), namespace_, eol_)) if self.uf is not None: showIndent(outfile, level, pretty_print)", "def export(self, outfile, level, namespace_='', name_='codMunic', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('codMunic') if imported_ns_def_", "obj_ obj_.original_tagname_ = 'brasil' elif nodeName_ == 'exterior': obj_ = TEnderecoExterior.factory() obj_.build(child_) self.exterior", "+ 1, namespace_='', name_='vrBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "self.buildChildren(child, node, nodeName_) return self def buildAttributes(self, node, attrs, already_processed): value = find_attr_value_('Id',", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dtFimBenef') if self.hasContent_():", "def export(self, outfile, level, namespace_='', name_='mtvFim', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('mtvFim') if imported_ns_def_", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrLograd') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "+ 1, namespace_='', name_='codPostal', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "indRetif(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "%d, \"%s\", \"%s\"),\\n' % ( self.category, self.content_type, self.name, self.value)) else: # category ==", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dscLograd') if self.hasContent_():", "= 1 TypeString = 2 TypeInteger = 3 TypeFloat = 4 TypeDecimal =", "value class GDSParseError(Exception): pass def raise_parse_error(node, msg): msg = '%s (element %s/line %d)'", "endereco.subclass: return 
endereco.subclass(*args_, **kwargs_) else: return endereco(*args_, **kwargs_) factory = staticmethod(factory) def get_brasil(self):", "None superclass = None def __init__(self, tpBenef=None, nrBenefic=None, dtFimBenef=None, mtvFim=None): self.original_tagname_ = None", "name_='nmPai') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nmPai',", "as IOBuffer parser = None doc = parsexml_(IOBuffer(inString), parser) rootNode = doc.getroot() rootTag,", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sprocEmi>%s</%sprocEmi>%s' % (namespace_, self.gds_format_integer(self.procEmi, input_name='procEmi'), namespace_, eol_))", "else: eol_ = '' if self.original_tagname_ is not None: name_ = self.original_tagname_ showIndent(outfile,", "if type(self) != type(other): return False return self.__dict__ == other.__dict__ def __ne__(self, other):", "obj_ = dadosNasc.factory() obj_.build(child_) self.dadosNasc = obj_ obj_.original_tagname_ = 'dadosNasc' elif nodeName_ ==", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TIdeEveTrab'): pass def exportChildren(self,", "= uf def hasContent_(self): if ( self.tpLograd is not None or self.dscLograd is", "level, already_processed, namespace_, name_='uf') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "name_='codPostal', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "TIdeEveTrab class indRetif(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ =", "getSubclassFromModule_( CurrentSubclassModule_, nrLograd) if subclass is not None: return subclass(*args_, **kwargs_) if nrLograd.subclass:", "self.paisResid = paisResid self.dscLograd = dscLograd self.nrLograd = nrLograd self.complemento = complemento 
self.bairro", "node, 'nmCid') self.nmCid = nmCid_ elif nodeName_ == 'codPostal': codPostal_ = child_.text codPostal_", "0: _svalue += 'Z' else: if total_seconds < 0: _svalue += '-' total_seconds", "= obj_ obj_.original_tagname_ = 'exterior' # end class endereco class TEnderecoBrasil(GeneratedsSuper): \"\"\"Informações do", "# end class nmCid class codPostal(GeneratedsSuper): subclass = None superclass = None def", "self.mtvFim is not None ): return True else: return False def export(self, outfile,", "self.gds_validate_string(cpfInst_, node, 'cpfInst') self.cpfInst = cpfInst_ # end class infoPenMorte class idQuota(GeneratedsSuper): subclass", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'brasil': obj_ = TEnderecoBrasil.factory()", "input_data, input_name=''): return '%s' % ' '.join(input_data) def gds_validate_float_list( self, input_data, node=None, input_name=''):", "namespace_='', name_='paisResid', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "node, type_name=None): return None @classmethod def gds_reverse_node_mapping(cls, mapping): return dict(((v, k) for k,", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nmMae'): pass def exportChildren(self,", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='infoBeneficio') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "return subclass(*args_, **kwargs_) if TEnderecoBrasil.subclass: return TEnderecoBrasil.subclass(*args_, **kwargs_) else: return TEnderecoBrasil(*args_, **kwargs_) factory", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpAmb'): pass def exportChildren(self, outfile, level, namespace_='',", "None: return subclass(*args_, **kwargs_) if paisNascto.subclass: return paisNascto.subclass(*args_, **kwargs_) else: return 
# Generated by generateDS.py.
#
# Command line arguments:
#     schemas/v2_04/evtCdBenPrRP.xsd
#
# Command line:
#     /usr/local/bin/generateDS --no-process-includes -o "esociallib/v2_04/evtCdBenPrRP.py" ...
#
# This module holds the generated bindings for the eSocial event
# evtCdBenPrRP ("... pension benefits - Regimes Próprios"). Every generated
# class derives from GeneratedsSuper and follows the same protocol:
#
#     factory(*args_, **kwargs_)  -- honours CurrentSubclassModule_ overrides
#     build(node) / buildAttributes(...) / buildChildren(...)
#     export(outfile, level, namespace_='', name_=..., pretty_print=True)
#     exportAttributes(...) / exportChildren(...) / hasContent_()
#     get_<field>() / set_<field>() accessors for each child element
#
# Module-level helpers: parse(), parseEtree() and parseLiteral() read an XML
# file into an object tree; parsexml_() prefers lxml's ETCompatXMLParser and
# falls back to xml.etree; quote_xml() / quote_attrib() / quote_python()
# escape values; showIndent() handles pretty-printing; MixedContainer
# represents mixed-content elements; the gds_format_* / gds_validate_*
# methods on GeneratedsSuper convert and check simple values (integers,
# floats, booleans, dates, times, base64).
#
# Generated classes (docstrings translated from Portuguese):
#
#     eSocial(evtCdBenPrRP, Signature)
#     evtCdBenPrRP(Id, ..., ideEmpregador, ideBenef, infoBeneficio)
#     TIdeEveTrab(indRetif, nrRecibo, tpAmb, procEmi, verProc)
#     TEmprPJ(tpInsc, nrInsc)
#     ideBenef(cpfBenef, nmBenefic, dadosBenef)
#     TDadosBenef(dadosNasc, endereco)    -- "... of the beneficiary"
#     dadosNasc(dtNascto, codMunic, uf, paisNascto, paisNac, nmMae, nmPai)
#     TEnderecoBrasil(tpLograd, dscLograd, nrLograd, complemento, bairro,
#                     cep, codMunic, uf)  -- "Address information in Brazil"
#     TEnderecoExterior(paisResid, ..., bairro, nmCid, codPostal)
#                                         -- "Address information abroad"
#     infoBeneficio(tpPlanRP, iniBeneficio, altBeneficio, fimBeneficio)
#     TDadosBeneficio(tpBenef, nrBenefic, dtIniBenef, vrBenef, infoPenMorte)
#                                         -- "Pension benefit data"
#     fimBeneficio(tpBenef, nrBenefic, dtFimBenef, mtvFim)
#         -- "Information on pension benefits - Termination. Validation: may
#            only be reported if there is prior information on ..."
#     infoPenMorte(idQuota, cpfInst)
#
#     Simple text elements: indRetif, nrRecibo, tpAmb, procEmi, verProc,
#     tpInsc, nrInsc, cpfBenef, nmBenefic, tpBenef, nrBenefic, dtIniBenef,
#     dtFimBenef, mtvFim, vrBenef, tpPlanRP, idQuota, cpfInst, dtNascto,
#     codMunic, uf, paisNascto, paisNac, nmMae, nmPai, tpLograd, dscLograd,
#     nrLograd, complemento, bairro, cep, nmCid, codPostal, paisResid.
"if isinstance(dtIniBenef, BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtIniBenef, '%Y-%m-%d').date() else: initvalue_ = dtIniBenef self.dtIniBenef =", "superclass module to use a # specific subclass module. CurrentSubclassModule_ = None #", "= complemento self.bairro = bairro self.cep = cep self.codMunic = codMunic self.uf =", "= child_.text nrLograd_ = self.gds_validate_string(nrLograd_, node, 'nrLograd') self.nrLograd = nrLograd_ elif nodeName_ ==", "None def __init__(self, indRetif=None, nrRecibo=None, tpAmb=None, procEmi=None, verProc=None): self.original_tagname_ = None self.indRetif =", "type names (strings) to XML schema namespace prefix # definitions. The export method", "level, already_processed, namespace_='', name_='paisResid'): pass def exportChildren(self, outfile, level, namespace_='', name_='paisResid', fromsubclass_=False, pretty_print=True):", "paisNac def get_nmMae(self): return self.nmMae def set_nmMae(self, nmMae): self.nmMae = nmMae def get_nmPai(self):", "== MixedContainer.CategorySimple: showIndent(outfile, level) outfile.write( 'model_.MixedContainer(%d, %d, \"%s\", \"%s\"),\\n' % ( self.category, self.content_type,", "tpLograd): self.tpLograd = tpLograd def get_dscLograd(self): return self.dscLograd def set_dscLograd(self, dscLograd): self.dscLograd =", "if self.Id is not None and 'Id' not in already_processed: already_processed.add('Id') outfile.write(' Id=%s'", "level, namespace_='', name_='nmPai', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmPai') if imported_ns_def_ is not None:", "def exportChildren(self, outfile, level, namespace_='', name_='tpLograd', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "# any generated element type class for a example of the use of", "1, namespace_='', name_='codPostal', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_,", "if 
self.evtCdBenPrRP is not None: self.evtCdBenPrRP.export(outfile, level, namespace_, name_='evtCdBenPrRP', pretty_print=pretty_print) if self.Signature is", "name_='procEmi', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "import warnings as warnings_ try: from lxml import etree as etree_ except ImportError:", "level, namespace_='', name_='tpBenef', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "name_='tpLograd'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpLograd', fromsubclass_=False, pretty_print=True): pass def build(self,", "node=None, input_name=''): if not input_data: return '' else: return input_data def gds_format_base64(self, input_data,", "ipshell = IPShellEmbed(args, ## banner = 'Dropping into IPython', ## exit_msg = 'Leaving", "level, already_processed, namespace_='', name_='nrInsc'): pass def exportChildren(self, outfile, level, namespace_='', name_='nrInsc', fromsubclass_=False, pretty_print=True):", "if self.tpAmb is not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpAmb>%s</%stpAmb>%s' % (namespace_, self.gds_format_integer(self.tpAmb, input_name='tpAmb'),", "Id=None, ideEvento=None, ideEmpregador=None, ideBenef=None, infoBeneficio=None): self.original_tagname_ = None self.Id = _cast(None, Id) self.ideEvento", "self.brasil.export(outfile, level, namespace_, name_='brasil', pretty_print=pretty_print) if self.exterior is not None: self.exterior.export(outfile, level, namespace_,", "version=\"1.0\" ?>\\n') rootObj.export( sys.stdout, 0, name_=rootTag, namespacedef_='', pretty_print=True) return rootObj def parseEtree(inFileName, silence=False):", "\"%s\",\\n' % ( self.category, self.content_type, self.name,)) self.value.exportLiteral(outfile, level + 1) showIndent(outfile, level) outfile.write(')\\n')", "name_='tpAmb', 
fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, infoBeneficio) if subclass is not None:", "self.tpBenef is not None or self.nrBenefic is not None or self.dtFimBenef is not", "**kwargs_) factory = staticmethod(factory) def get_paisResid(self): return self.paisResid def set_paisResid(self, paisResid): self.paisResid =", "elif nodeName_ == 'infoPenMorte': obj_ = infoPenMorte.factory() obj_.build(child_) self.infoPenMorte = obj_ obj_.original_tagname_ =", "line options: # ('--no-process-includes', '') # ('-o', 'esociallib/v2_04/evtCdBenPrRP.py') # # Command line arguments:", "export(self, outfile, level, namespace_='', name_='dscLograd', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dscLograd') if imported_ns_def_ is", "codMunic self.uf = uf def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass", "level, already_processed, namespace_, name_='cep') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "list of lists of strings/patterns. 
We should: # - AND the outer elements", "pass def exportChildren(self, outfile, level, namespace_='', name_='nmMae', fromsubclass_=False, pretty_print=True): pass def build(self, node):", "exp: raise_parse_error(child_, 'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'codMunic') self.codMunic", "def get_data_type_chain(self): return self.data_type def get_data_type(self): if isinstance(self.data_type, list): if len(self.data_type) > 0:", "CurrentSubclassModule_, codPostal) if subclass is not None: return subclass(*args_, **kwargs_) if codPostal.subclass: return", "= int(tzoff_parts[0]) * 60 + int(tzoff_parts[1]) if results.group(1) == '-': tzoff *= -1", "to drop into the # IPython shell: # ipshell('<some message> -- Entering ipshell.\\nHit", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='procEmi') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "name_ = self.original_tagname_ showIndent(outfile, level, pretty_print) outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and '", "fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if self.dadosNasc", "is not None or self.altBeneficio is not None or self.fimBeneficio is not None", "or self.codMunic is not None or self.uf is not None or self.paisNascto is", "level, namespace_='', name_='TEnderecoBrasil', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TEnderecoBrasil') if imported_ns_def_ is not None:", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='idQuota', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "hasContent_(self): if ( self.brasil is not None or self.exterior is not None ):", "input_data.tzinfo.utcoffset(input_data) if tzoff is not None: total_seconds = tzoff.seconds + (86400 * tzoff.days)", "rootObj.gds_reverse_node_mapping(mapping) if not silence: content = etree_.tostring( rootElement, 
pretty_print=True, xml_declaration=True, encoding=\"utf-8\") sys.stdout.write(content) sys.stdout.write('\\n')", "self.exportChildren(outfile, level + 1, namespace_='', name_='nmBenefic', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "return False def export(self, outfile, level, namespace_='', name_='fimBeneficio', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('fimBeneficio')", "the space used by the DOM. doc = None if not silence: sys.stdout.write('#from", "set_mtvFim(self, mtvFim): self.mtvFim = mtvFim def hasContent_(self): if ( self.tpBenef is not None", "self.infoBeneficio is not None: self.infoBeneficio.export(outfile, level, namespace_, name_='infoBeneficio', pretty_print=pretty_print) def build(self, node): already_processed", "tpInsc(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "else: return False def export(self, outfile, level, namespace_='', name_='dtNascto', namespacedef_='', pretty_print=True): imported_ns_def_ =", "subclass = getSubclassFromModule_( CurrentSubclassModule_, nmMae) if subclass is not None: return subclass(*args_, **kwargs_)", "pass def exportChildren(self, outfile, level, namespace_='', name_='endereco', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ =", "subclass is not None: return subclass(*args_, **kwargs_) if endereco.subclass: return endereco.subclass(*args_, **kwargs_) else:", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpPlanRP') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "a pensão por morte\"\"\" subclass = None superclass = None def __init__(self, idQuota=None,", "outfile.write('<%snmMae>%s</%snmMae>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmMae), input_name='nmMae')), namespace_, eol_)) if self.nmPai is not None: showIndent(outfile,", "level + 1, namespace_='', name_='evtCdBenPrRP', 
pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "**kwargs_) if TEnderecoExterior.subclass: return TEnderecoExterior.subclass(*args_, **kwargs_) else: return TEnderecoExterior(*args_, **kwargs_) factory = staticmethod(factory)", "level, already_processed, namespace_, name_='TDadosBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "True else: return False def export(self, outfile, level, namespace_='', name_='dtIniBenef', namespacedef_='', pretty_print=True): imported_ns_def_", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class bairro class cep(GeneratedsSuper):", "= infoPenMorte def hasContent_(self): if ( self.tpBenef is not None or self.nrBenefic is", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='verProc') if self.hasContent_(): outfile.write('>%s' % (eol_,", "not None: return subclass(*args_, **kwargs_) if cpfBenef.subclass: return cpfBenef.subclass(*args_, **kwargs_) else: return cpfBenef(*args_,", "get_path_(self, node): path_list = [] self.get_path_list_(node, path_list) path_list.reverse() path = '/'.join(path_list) return path", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class indRetif class nrRecibo(GeneratedsSuper): subclass", "set_tpInsc(self, tpInsc): self.tpInsc = tpInsc def get_nrInsc(self): return self.nrInsc def set_nrInsc(self, nrInsc): self.nrInsc", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='uf',", "self.nrBenefic = nrBenefic if isinstance(dtIniBenef, BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtIniBenef, '%Y-%m-%d').date() else: initvalue_ =", "level, already_processed, namespace_, name_='tpLograd') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "return False def export(self, outfile, level, 
namespace_='', name_='ideBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('ideBenef')", "else: return False def export(self, outfile, level, namespace_='', name_='dadosNasc', namespacedef_='', pretty_print=True): imported_ns_def_ =", "name_='infoBeneficio', pretty_print=pretty_print) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed) for child", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='infoBeneficio'): pass def exportChildren(self, outfile, level, namespace_='', name_='infoBeneficio',", "name_='tpBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpBenef',", "name_='tpAmb', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "**kwargs_) else: return infoBeneficio(*args_, **kwargs_) factory = staticmethod(factory) def get_tpPlanRP(self): return self.tpPlanRP def", "eol_)) if self.nmCid is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmCid>%s</%snmCid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmCid),", "factory = staticmethod(factory) def get_tpPlanRP(self): return self.tpPlanRP def set_tpPlanRP(self, tpPlanRP): self.tpPlanRP = tpPlanRP", "outfile, level, namespace_='', name_='dscLograd', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "(namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.tpLograd), input_name='tpLograd')), namespace_, eol_)) if self.dscLograd is not None: showIndent(outfile, level, pretty_print)", "already_processed, namespace_='', name_='ideBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='ideBenef', fromsubclass_=False, pretty_print=True): if", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass 
= getSubclassFromModule_( CurrentSubclassModule_, dtIniBenef) if", "None: return subclass(*args_, **kwargs_) if nmPai.subclass: return nmPai.subclass(*args_, **kwargs_) else: return nmPai(*args_, **kwargs_)", "not None: self.dadosNasc.export(outfile, level, namespace_, name_='dadosNasc', pretty_print=pretty_print) if self.endereco is not None: self.endereco.export(outfile,", "self.original_tagname_ = None def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass =", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='cpfInst'): pass def exportChildren(self, outfile, level,", "get_procEmi(self): return self.procEmi def set_procEmi(self, procEmi): self.procEmi = procEmi def get_verProc(self): return self.verProc", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='evtCdBenPrRP'): if self.Id is", "cpfBenef_ elif nodeName_ == 'nmBenefic': nmBenefic_ = child_.text nmBenefic_ = self.gds_validate_string(nmBenefic_, node, 'nmBenefic')", "text def find_attr_value_(attr_name, node): attrs = node.attrib attr_parts = attr_name.split(':') value = None", "name_='nmPai', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "morte\"\"\" subclass = None superclass = None def __init__(self, idQuota=None, cpfInst=None): self.original_tagname_ =", "child_.text nrBenefic_ = self.gds_validate_string(nrBenefic_, node, 'nrBenefic') self.nrBenefic = nrBenefic_ elif nodeName_ == 'dtFimBenef':", "( self.name, self.value, self.name)) elif self.content_type == MixedContainer.TypeFloat or \\ self.content_type == MixedContainer.TypeDecimal:", "nrBenefic(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", "pretty_print) outfile.write('<%spaisNac>%s</%spaisNac>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisNac), input_name='paisNac')), 
namespace_, eol_)) if self.nmMae is not None:", "return tpBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "= getSubclassFromModule_( CurrentSubclassModule_, dtIniBenef) if subclass is not None: return subclass(*args_, **kwargs_) if", "= name self.data_type = data_type self.container = container self.child_attrs = child_attrs self.choice =", "return subclass(*args_, **kwargs_) if tpLograd.subclass: return tpLograd.subclass(*args_, **kwargs_) else: return tpLograd(*args_, **kwargs_) factory", "input_data def gds_validate_string(self, input_data, node=None, input_name=''): if not input_data: return '' else: return", "if self.endereco is not None: self.endereco.export(outfile, level, namespace_, name_='endereco', pretty_print=pretty_print) def build(self, node):", "in matchobjects: s3 = s1[pos:mo.start()] s2 += quote_xml_aux(s3) s2 += s1[mo.start():mo.end()] pos =", "servidor\"\"\" subclass = None superclass = None def __init__(self, tpPlanRP=None, iniBeneficio=None, altBeneficio=None, fimBeneficio=None):", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='codPostal')", "% (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmMae), input_name='nmMae')), namespace_, eol_)) if self.nmPai is not None: showIndent(outfile, level,", "outfile, level, namespace_='', name_='cep', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('cep') if imported_ns_def_ is not", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'tpPlanRP': sval_", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='TIdeEveTrab', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' %", "self.paisResid = paisResid_ elif nodeName_ == 'dscLograd': dscLograd_ = child_.text dscLograd_ = 
self.gds_validate_string(dscLograd_,", "namespace_='', name_='idQuota', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, ))", "quote_xml(inStr): \"Escape markup chars, but do not modify CDATA sections.\" if not inStr:", "beneficiário identificado em {ideBenef} e para o qual não tenha havido ainda informação", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpBenef') if self.hasContent_(): outfile.write('>%s' %", "% ( self.name, self.value, self.name)) elif self.content_type == MixedContainer.TypeDouble: outfile.write('<%s>%g</%s>' % ( self.name,", "ideEmpregador def get_ideBenef(self): return self.ideBenef def set_ideBenef(self, ideBenef): self.ideBenef = ideBenef def get_infoBeneficio(self):", "namespace_='', name_='codMunic', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('codMunic') if imported_ns_def_ is not None: namespacedef_", "or self.nmMae is not None or self.nmPai is not None ): return True", "GenerateDSNamespaceDefs = { # \"ElementtypeA\": \"http://www.xxx.com/namespaceA\", # \"ElementtypeB\": \"http://www.xxx.com/namespaceB\", # } # try:", "outfile, level, namespace_='', name_='nrInsc', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrInsc') if imported_ns_def_ is not", "subclass = getSubclassFromModule_( CurrentSubclassModule_, dscLograd) if subclass is not None: return subclass(*args_, **kwargs_)", "return bairro.subclass(*args_, **kwargs_) else: return bairro(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if", "eol_ = '\\n' else: eol_ = '' if self.original_tagname_ is not None: name_", "Id) self.ideEvento = ideEvento self.ideEmpregador = ideEmpregador self.ideBenef = ideBenef self.infoBeneficio = infoBeneficio", "def set_infoPenMorte(self, infoPenMorte): self.infoPenMorte = infoPenMorte def hasContent_(self): if ( 
self.tpBenef is not", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='idQuota'): pass def", "self.gds_encode(self.gds_format_string(quote_xml(self.bairro), input_name='bairro')), namespace_, eol_)) if self.nmCid is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmCid>%s</%snmCid>%s'", "name_='complemento'): pass def exportChildren(self, outfile, level, namespace_='', name_='complemento', fromsubclass_=False, pretty_print=True): pass def build(self,", "tpPlanRP.subclass(*args_, **kwargs_) else: return tpPlanRP(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if (", "= cpfInst def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_(", "- (hours * 3600)) // 60 _svalue += '{0:02d}:{1:02d}'.format( hours, minutes) except AttributeError:", "and use the following. # IPython is available from http://ipython.scipy.org/. # ## from", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='dtFimBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "namespace_='', name_='nrLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, ))", "outfile, level, namespace_='', name_='dadosNasc', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dadosNasc') if imported_ns_def_ is not", "namespace_, name_='fimBeneficio', pretty_print=pretty_print) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed) for", "= getSubclassFromModule_( CurrentSubclassModule_, tpInsc) if subclass is not None: return subclass(*args_, **kwargs_) if", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TDadosBeneficio') if self.hasContent_(): outfile.write('>%s' % (eol_,", "ival_ = 
self.gds_validate_integer(ival_, node, 'codMunic') self.codMunic = ival_ elif nodeName_ == 'uf': uf_", "gds_validate_string(self, input_data, node=None, input_name=''): if not input_data: return '' else: return input_data def", "class codMunic(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "level + 1, namespace_='', name_='infoPenMorte', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "+ 1, namespace_='', name_='dtIniBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "is not None: found2 = True break if not found2: found1 = False", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TEmprPJ'): pass def exportChildren(self, outfile,", "class tpLograd(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "True else: return False def export(self, outfile, level, namespace_='', name_='dtFimBenef', namespacedef_='', pretty_print=True): imported_ns_def_", "if self.dtFimBenef is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sdtFimBenef>%s</%sdtFimBenef>%s' % (namespace_, self.gds_format_date(self.dtFimBenef, input_name='dtFimBenef'),", "set_complemento(self, complemento): self.complemento = complemento def get_bairro(self): return self.bairro def set_bairro(self, bairro): self.bairro", "# end class TEnderecoBrasil class tpLograd(GeneratedsSuper): subclass = None superclass = None def", "get_class_obj_(self, node, default_class=None): class_obj1 = default_class if 'xsi' in node.nsmap: classname = node.get('{%s}type'", "dt = dt.replace(tzinfo=tz) return dt.time() def gds_str_lower(self, instring): return instring.lower() def get_path_(self, node):", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%scodMunic>%s</%scodMunic>%s' % (namespace_, 
self.gds_format_integer(self.codMunic, input_name='codMunic'), namespace_, eol_)) if", "level, namespace_='', name_='nrLograd', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "already_processed, namespace_, name_='vrBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='cpfInst', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "+ 1, namespace_='', name_='nmPai', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "def export(self, outfile, level, namespace_='', name_='TEnderecoExterior', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TEnderecoExterior') if imported_ns_def_", "else: dt = datetime_.datetime.strptime( input_data, '%Y-%m-%dT%H:%M:%S') dt = dt.replace(tzinfo=tz) return dt def gds_validate_date(self,", "= dscLograd self.nrLograd = nrLograd self.complemento = complemento self.bairro = bairro self.cep =", "outfile, level, already_processed, namespace_='', name_='dtFimBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='dtFimBenef', fromsubclass_=False,", "if subclass is not None: return subclass(*args_, **kwargs_) if codMunic.subclass: return codMunic.subclass(*args_, **kwargs_)", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TDadosBeneficio'): pass def exportChildren(self,", "node, 'tpInsc') self.tpInsc = ival_ elif nodeName_ == 'nrInsc': nrInsc_ = child_.text nrInsc_", "self.choice = choice self.optional = optional def set_name(self, name): self.name = name def", "nmMae(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "nmCid class codPostal(GeneratedsSuper): subclass = None superclass = None def 
__init__(self): self.original_tagname_ =", "is not None: return subclass(*args_, **kwargs_) if tpPlanRP.subclass: return tpPlanRP.subclass(*args_, **kwargs_) else: return", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='codMunic'):", "'Requires sequence of doubles') return values def gds_format_boolean(self, input_data, input_name=''): return ('%s' %", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nrBenefic) if subclass is not", "subclass = getSubclassFromModule_( CurrentSubclassModule_, nrBenefic) if subclass is not None: return subclass(*args_, **kwargs_)", "level, already_processed, namespace_, name_='paisResid') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "= '\"%s\"' % s1 return s1 def quote_python(inStr): s1 = inStr if s1.find(\"'\")", "the use of this # table. # A sample table is: # #", "namespace_='', name_='nrLograd'): pass def exportChildren(self, outfile, level, namespace_='', name_='nrLograd', fromsubclass_=False, pretty_print=True): pass def", "= datetime_.datetime.strptime(input_data, '%H:%M:%S.%f') else: dt = datetime_.datetime.strptime(input_data, '%H:%M:%S') dt = dt.replace(tzinfo=tz) return dt.time()", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='fimBeneficio') if", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='nrInsc', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "return input_data def gds_format_boolean_list(self, input_data, input_name=''): return '%s' % ' '.join(input_data) def gds_validate_boolean_list(", "name_='nmMae'): pass def exportChildren(self, outfile, level, namespace_='', name_='nmMae', fromsubclass_=False, pretty_print=True): pass def build(self,", "self.hasContent_(): outfile.write('>%s' % (eol_, )) 
#!/usr/bin/env python
# -*- coding: utf-8 -*-

#
# Generated by generateDS.py.
#
# Command line options:
#   ('--no-process-includes', '')
#   ('-o', 'esociallib/v2_04/evtCdBenPrRP.py')
#
# Command line arguments:
#   schemas/v2_04/evtCdBenPrRP.xsd
#
# Command line:
#   /usr/local/bin/generateDS --no-process-includes -o "esociallib/v2_04/evtCdBenPrRP.py" schemas/v2_04/evtCdBenPrRP.xsd
#
# Current working directory (os.getcwd()):
#   esociallib
#

import sys
import re as re_
import base64
import datetime as datetime_

#
# If you have installed IPython you can uncomment and use the following.
# IPython is available from http://ipython.scipy.org/.
#
# ipshell('<some message> -- Entering ipshell.\nHit Ctrl-D to exit')

#
# Globals
#
ExternalEncoding = ''

#
# Namespace prefix definition table (and other attributes, too)
#
# The module generatedsnamespaces, if it is importable, must contain
# a dictionary named GeneratedsNamespaceDefs.  This Python dictionary
# should map element type names to namespace prefix definitions, e.g.:
#
# GenerateDSNamespaceDefs = {
#     "ElementtypeA": "http://www.xxx.com/namespaceA",
#     "ElementtypeB": "http://www.xxx.com/namespaceB",
# }

#
# Support/abstract classes.
#
# Calls to the methods in these classes are generated by generateDS.py.
# You can replace these methods by re-implementing the following class
# in a module named generatedssuper.py.
#
# try: from generatedssuper import GeneratedsSuper ...
#
# Change this variable to redirect the generated superclass module to use a
# specific subclass module.
CurrentSubclassModule_ = None

#
# Data representation classes.
#
# (The GeneratedsSuper support class, MemberSpec_, MixedContainer, the parse
# helpers -- parse, parseString, parseLiteral -- and the generated data
# representation classes are elided here.  The generated classes include:
#   eSocial
#   evtCdBenPrRP      -- "Evento de cadastro ..."
#   TIdeEveTrab       -- "Identificacao do evento"
#   TEmprPJ           -- "Informacoes do Empregador PJ"
#   ideBenef          -- identification of the "beneficiario"
#   dadosNasc, endereco -- "Grupo de informacoes do endereco do Trabalhador"
#   TEnderecoBrasil, TEnderecoExterior
#   TDadosBenef, TDadosBeneficio
#   infoBeneficio, infoPenMorte, idQuota, fimBeneficio
# plus the leaf element classes such as tpBenef, nrBenefic, dtIniBenef,
# dtFimBenef, mtvFim, vrBenef, cpfInst, uf, paisNascto, paisNac, nmMae,
# nmPai, tpLograd, dscLograd, nrLograd, complemento, bairro, cep, codMunic,
# nmCid, codPostal, paisResid, indRetif, nrRecibo, tpAmb, procEmi, verProc,
# tpInsc, nrInsc, dtNascto, cpfBenef, nmBenefic and tpPlanRP.)


def main():
    args = sys.argv[1:]
    if len(args) == 1:
        parse(args[0])
    else:
        usage()


if __name__ == '__main__':
    #import pdb; pdb.set_trace()
    main()

__all__ = [
    "TDadosBenef",
    "TDadosBeneficio",
    "TEmprPJ",
    "TEnderecoBrasil",
    "TEnderecoExterior",
    "TIdeEveTrab",
]
# ##", "% ' '.join(input_data) def gds_validate_double_list( self, input_data, node=None, input_name=''): values = input_data.split() for", "'\\n' else: eol_ = '' if self.dadosNasc is not None: self.dadosNasc.export(outfile, level, namespace_,", "): return True else: return False def export(self, outfile, level, namespace_='', name_='eSocial', namespacedef_='", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='infoPenMorte', pretty_print=pretty_print) showIndent(outfile, level,", "return self def buildAttributes(self, node, attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_,", "def export(self, outfile, level, namespace_='', name_='tpBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpBenef') if imported_ns_def_", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='codPostal') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "cpfInst GDSClassesMapping = { 'altBeneficio': TDadosBeneficio, 'brasil': TEnderecoBrasil, 'dadosBenef': TDadosBenef, 'exterior': TEnderecoExterior, 'ideEmpregador':", "return False def export(self, outfile, level, namespace_='', name_='nrBenefic', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrBenefic')", "level, already_processed, namespace_, name_='procEmi') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "else: # category == MixedContainer.CategoryComplex showIndent(outfile, level) outfile.write( 'model_.MixedContainer(%d, %d, \"%s\",\\n' % (", "def parseString(inString, silence=False): if sys.version_info.major == 2: from StringIO import StringIO as IOBuffer", "nodeName_, fromsubclass_=False): pass # end class nmBenefic class infoBeneficio(GeneratedsSuper): \"\"\"Informações relacionadas ao benefício", "getSubclassFromModule_( CurrentSubclassModule_, dscLograd) if 
subclass is not None: return subclass(*args_, **kwargs_) if dscLograd.subclass:", "def hasContent_(self): if ( self.dadosNasc is not None or self.endereco is not None", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpInsc', pretty_print=pretty_print)", "input_data, input_name=''): return '%e' % input_data def gds_validate_double(self, input_data, node=None, input_name=''): return input_data", "in ('true', '1', 'false', '0', ): raise_parse_error( node, 'Requires sequence of booleans '", "GenerateDSNamespaceDefs_.get('nmBenefic') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, paisNascto) if subclass is not None: return subclass(*args_,", "outfile, level, namespace_='', name_='evtCdBenPrRP', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('evtCdBenPrRP') if imported_ns_def_ is not", "end class dscLograd class nrLograd(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "child_, node, nodeName_, fromsubclass_=False): pass # end class tpBenef class nrBenefic(GeneratedsSuper): subclass =", "staticmethod(factory) def get_tpPlanRP(self): return self.tpPlanRP def set_tpPlanRP(self, tpPlanRP): self.tpPlanRP = tpPlanRP def get_iniBeneficio(self):", "def find_attr_value_(attr_name, node): attrs = node.attrib attr_parts = attr_name.split(':') value = None if", "nodeName_) return self def buildAttributes(self, node, attrs, already_processed): pass def buildChildren(self, child_, node,", "= codMunic self.uf = uf def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None:", "= default_class if 'xsi' in node.nsmap: classname = node.get('{%s}type' % node.nsmap['xsi']) if classname", "bairro class cep(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ =", "**kwargs_) if tpBenef.subclass: 
return tpBenef.subclass(*args_, **kwargs_) else: return tpBenef(*args_, **kwargs_) factory = staticmethod(factory)", "name_='dtFimBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='dtFimBenef', fromsubclass_=False, pretty_print=True): pass def build(self,", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='paisResid'): pass def exportChildren(self, outfile, level, namespace_='', name_='paisResid',", "xml.etree parser = etree_.XMLParser() doc = etree_.parse(infile, parser=parser, **kwargs) return doc # #", "namespace_='', name_='cpfBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('cpfBenef') if imported_ns_def_ is not None: namespacedef_", "else: return False def export(self, outfile, level, namespace_='', name_='fimBeneficio', namespacedef_='', pretty_print=True): imported_ns_def_ =", "nodeName_ == 'fimBeneficio': obj_ = fimBeneficio.factory() obj_.build(child_) self.fimBeneficio = obj_ obj_.original_tagname_ = 'fimBeneficio'", "node, 'bairro') self.bairro = bairro_ elif nodeName_ == 'cep': cep_ = child_.text cep_", "exterior): self.exterior = exterior def hasContent_(self): if ( self.brasil is not None or", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='complemento', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "from http://ipython.scipy.org/. 
# ## from IPython.Shell import IPShellEmbed ## args = '' ##", "namespace_='', name_='nmBenefic', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "exportChildren(self, outfile, level, namespace_='', name_='evtCdBenPrRP', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else:", "namespace_, name_='TEnderecoExterior') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "set_tpBenef(self, tpBenef): self.tpBenef = tpBenef def get_nrBenefic(self): return self.nrBenefic def set_nrBenefic(self, nrBenefic): self.nrBenefic", "IPython is available from http://ipython.scipy.org/. # ## from IPython.Shell import IPShellEmbed ## args", "container self.child_attrs = child_attrs self.choice = choice self.optional = optional def set_name(self, name):", "False for patterns2 in patterns1: if re_.search(patterns2, target) is not None: found2 =", "level + 1, namespace_='', name_='TEnderecoBrasil', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "nodeName_ == 'paisNac': paisNac_ = child_.text paisNac_ = self.gds_validate_string(paisNac_, node, 'paisNac') self.paisNac =", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, endereco) if subclass is not None: return", "self.dscLograd = dscLograd self.nrLograd = nrLograd self.complemento = complemento self.bairro = bairro self.cep", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, indRetif) if", "child_, node, nodeName_, fromsubclass_=False): pass # end class dtIniBenef class vrBenef(GeneratedsSuper): subclass =", "return base64.b64encode(input_data) def gds_validate_base64(self, input_data, node=None, input_name=''): return input_data def gds_format_integer(self, input_data, input_name=''):", "def 
factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, endereco)", "return False def export(self, outfile, level, namespace_='', name_='tpAmb', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpAmb')", "level, pretty_print=True): if pretty_print: for idx in range(level): outfile.write(' ') def quote_xml(inStr): \"Escape", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='paisNascto',", "hasattr(module, name): return getattr(module, name) else: return None # # If you have", "namespace_, eol_)) if self.vrBenef is not None: showIndent(outfile, level, pretty_print) outfile.write('<%svrBenef>%s</%svrBenef>%s' % (namespace_,", "set_tpAmb(self, tpAmb): self.tpAmb = tpAmb def get_procEmi(self): return self.procEmi def set_procEmi(self, procEmi): self.procEmi", "dval_ elif nodeName_ == 'vrBenef': sval_ = child_.text try: fval_ = float(sval_) except", "return tpLograd.subclass(*args_, **kwargs_) else: return tpLograd(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if", "'%e' % input_data def gds_validate_double(self, input_data, node=None, input_name=''): return input_data def gds_format_double_list(self, input_data,", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='infoPenMorte') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class tpPlanRP class", "return nmPai(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "return evtCdBenPrRP(*args_, **kwargs_) factory = staticmethod(factory) def get_ideEvento(self): return self.ideEvento def set_ideEvento(self, ideEvento):", "Id): self.Id = Id def hasContent_(self): if ( self.ideEvento is not None or", "nrInsc(*args_, **kwargs_) factory = staticmethod(factory) def 
hasContent_(self): if ( ): return True else:", "level, namespace_, name_='infoBeneficio', pretty_print=pretty_print) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "+ 1, namespace_='', name_='tpLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "outfile, level, namespace_='', name_='nmCid', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "eol_ = '\\n' else: eol_ = '' if self.paisResid is not None: showIndent(outfile,", "= name self.value = value def getCategory(self): return self.category def getContenttype(self, content_type): return", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmMae) if subclass is not None:", "def get_nmCid(self): return self.nmCid def set_nmCid(self, nmCid): self.nmCid = nmCid def get_codPostal(self): return", "return TIdeEveTrab.subclass(*args_, **kwargs_) else: return TIdeEveTrab(*args_, **kwargs_) factory = staticmethod(factory) def get_indRetif(self): return", "level, namespace_='', name_='fimBeneficio', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('fimBeneficio') if imported_ns_def_ is not None:", "self.tpAmb def set_tpAmb(self, tpAmb): self.tpAmb = tpAmb def get_procEmi(self): return self.procEmi def set_procEmi(self,", "namespace_='', name_='fimBeneficio', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = ''", "level + 1, namespace_='', name_='dtIniBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "( self.dtNascto is not None or self.codMunic is not None or self.uf is", "outfile.write('<%sSignature>%s</%sSignature>%s' % ('ds:', self.gds_encode(self.gds_format_string(quote_xml(self.Signature), input_name='Signature')), 'ds:', eol_)) def build(self, node): already_processed = set()", 
"input_name=''): return base64.b64encode(input_data) def gds_validate_base64(self, input_data, node=None, input_name=''): return input_data def gds_format_integer(self, input_data,", "already_processed, namespace_, name_='TEmprPJ') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", "end class cpfBenef class nmBenefic(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "None: showIndent(outfile, level, pretty_print) outfile.write('<%sdscLograd>%s</%sdscLograd>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.dscLograd), input_name='dscLograd')), namespace_, eol_)) if self.nrLograd", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('verProc') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "= { 'altBeneficio': TDadosBeneficio, 'brasil': TEnderecoBrasil, 'dadosBenef': TDadosBenef, 'exterior': TEnderecoExterior, 'ideEmpregador': TEmprPJ, 'ideEvento':", "already_processed, namespace_='', name_='cpfInst'): pass def exportChildren(self, outfile, level, namespace_='', name_='cpfInst', fromsubclass_=False, pretty_print=True): pass", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TDadosBeneficio') if self.hasContent_(): outfile.write('>%s'", "None self.dadosNasc = dadosNasc self.endereco = endereco def factory(*args_, **kwargs_): if CurrentSubclassModule_ is", "): return True else: return False def export(self, outfile, level, namespace_='', name_='dadosNasc', namespacedef_='',", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='fimBeneficio', pretty_print=pretty_print) showIndent(outfile, level,", "class tpLograd class dscLograd(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, 
name_='complemento') if self.hasContent_(): outfile.write('>%s' % (eol_,", "level, already_processed, namespace_='', name_='dtNascto'): pass def exportChildren(self, outfile, level, namespace_='', name_='dtNascto', fromsubclass_=False, pretty_print=True):", "end class bairro class cep(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "dtNascto.subclass: return dtNascto.subclass(*args_, **kwargs_) else: return dtNascto(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "ser informado se já houver informação anterior de benefícios para o beneficiário identificado", "'verProc': verProc_ = child_.text verProc_ = self.gds_validate_string(verProc_, node, 'verProc') self.verProc = verProc_ #", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='TEnderecoBrasil', pretty_print=pretty_print) showIndent(outfile, level,", "nrRecibo) if subclass is not None: return subclass(*args_, **kwargs_) if nrRecibo.subclass: return nrRecibo.subclass(*args_,", "Signature=None): self.original_tagname_ = None self.evtCdBenPrRP = evtCdBenPrRP self.Signature = Signature def factory(*args_, **kwargs_):", "nodeName_) return self def buildAttributes(self, node, attrs, already_processed): value = find_attr_value_('Id', node) if", "eol_ = '' if self.tpInsc is not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpInsc>%s</%stpInsc>%s' %", "True else: return False def export(self, outfile, level, namespace_='', name_='cpfBenef', namespacedef_='', pretty_print=True): imported_ns_def_", "nmPai=None): self.original_tagname_ = None if isinstance(dtNascto, BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtNascto, '%Y-%m-%d').date() else: initvalue_", "cep def get_codMunic(self): return self.codMunic def set_codMunic(self, codMunic): self.codMunic = codMunic def get_uf(self):", "if rootClass is None: rootClass = globals().get(tag) return tag, rootClass def parse(inFileName, silence=False):", 
"datetime_.datetime.strptime(input_data, '%H:%M:%S.%f') else: dt = datetime_.datetime.strptime(input_data, '%H:%M:%S') dt = dt.replace(tzinfo=tz) return dt.time() def", "True else: return False def export(self, outfile, level, namespace_='', name_='verProc', namespacedef_='', pretty_print=True): imported_ns_def_", "self.bairro = bairro_ elif nodeName_ == 'cep': cep_ = child_.text cep_ = self.gds_validate_string(cep_,", "= indRetif def get_nrRecibo(self): return self.nrRecibo def set_nrRecibo(self, nrRecibo): self.nrRecibo = nrRecibo def", "'dadosBenef' # end class ideBenef class cpfBenef(GeneratedsSuper): subclass = None superclass = None", "obj_.build(child_) self.endereco = obj_ obj_.original_tagname_ = 'endereco' # end class TDadosBenef class dadosNasc(GeneratedsSuper):", "outfile, level, namespace_='', name_='vrBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('vrBenef') if imported_ns_def_ is not", "raise_parse_error(child_, 'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'mtvFim') self.mtvFim =", "bairro_ elif nodeName_ == 'nmCid': nmCid_ = child_.text nmCid_ = self.gds_validate_string(nmCid_, node, 'nmCid')", "'%s' % base64.b64encode(self.value) return text def exportLiteral(self, outfile, level, name): if self.category ==", "else: return False def export(self, outfile, level, namespace_='', name_='TDadosBeneficio', namespacedef_='', pretty_print=True): imported_ns_def_ =", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmPai) if subclass is not None: return subclass(*args_,", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='cpfInst', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "# # Command line arguments: # schemas/v2_04/evtCdBenPrRP.xsd # # Command line: # /usr/local/bin/generateDS", "text = '' for child in node: if child.tail is not None: text", "(eol_, )) def exportAttributes(self, 
outfile, level, already_processed, namespace_='', name_='TIdeEveTrab'): pass def exportChildren(self, outfile,", "self.cpfInst is not None: showIndent(outfile, level, pretty_print) outfile.write('<%scpfInst>%s</%scpfInst>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfInst), input_name='cpfInst')), namespace_,", "class dadosNasc(GeneratedsSuper): \"\"\"Informações de nascimento do beneficiário\"\"\" subclass = None superclass = None", "tpBenef def get_nrBenefic(self): return self.nrBenefic def set_nrBenefic(self, nrBenefic): self.nrBenefic = nrBenefic def get_dtFimBenef(self):", "path Tag_strip_pattern_ = re_.compile(r'\\{.*\\}') def get_path_list_(self, node, path_list): if node is None: return", "level + 1, namespace_='', name_='vrBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "name_=rootTag) sys.stdout.write(')\\n') return rootObj def main(): args = sys.argv[1:] if len(args) == 1:", "raise_parse_error(child_, 'requires float or double: %s' % exp) fval_ = self.gds_validate_float(fval_, node, 'vrBenef')", "fromsubclass_=False): pass # end class nrLograd class complemento(GeneratedsSuper): subclass = None superclass =", "tpAmb.subclass(*args_, **kwargs_) else: return tpAmb(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if (", "# import sys import re as re_ import base64 import datetime as datetime_", "None: element.text = self.value else: element.text += self.value elif self.category == MixedContainer.CategorySimple: subelement", "return self.nmBenefic def set_nmBenefic(self, nmBenefic): self.nmBenefic = nmBenefic def get_dadosBenef(self): return self.dadosBenef def", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class paisNascto class paisNac(GeneratedsSuper):", "'%d' % input_data def gds_validate_integer(self, input_data, node=None, input_name=''): return input_data def gds_format_integer_list(self, 
input_data,", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='paisResid') if self.hasContent_(): outfile.write('>%s' % (eol_,", "level, already_processed, namespace_, name_='nrRecibo') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "\"\"\"Dados de beneficiário\"\"\" subclass = None superclass = None def __init__(self, dadosNasc=None, endereco=None):", "'\\n' else: eol_ = '' if self.tpLograd is not None: showIndent(outfile, level, pretty_print)", "tpLograd(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", "= (isinstance(inStr, BaseStrType_) and inStr or '%s' % inStr) s1 = s1.replace('&', '&amp;')", "def set_uf(self, uf): self.uf = uf def hasContent_(self): if ( self.tpLograd is not", "codMunic def get_uf(self): return self.uf def set_uf(self, uf): self.uf = uf def get_paisNascto(self):", "None superclass = None def __init__(self, idQuota=None, cpfInst=None): self.original_tagname_ = None self.idQuota =", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='TDadosBeneficio', pretty_print=pretty_print) showIndent(outfile, level,", "level + 1, namespace_='', name_='tpLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "else: return nmPai(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return", "if nrRecibo.subclass: return nrRecibo.subclass(*args_, **kwargs_) else: return nrRecibo(*args_, **kwargs_) factory = staticmethod(factory) def", "# end class nrBenefic class dtFimBenef(GeneratedsSuper): subclass = None superclass = None def", "else: return nrInsc(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return", "integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'tpInsc') self.tpInsc = ival_ elif", "= (isinstance(inStr, 
BaseStrType_) and inStr or '%s' % inStr) s2 = '' pos", "nrLograd_ = self.gds_validate_string(nrLograd_, node, 'nrLograd') self.nrLograd = nrLograd_ elif nodeName_ == 'complemento': complemento_", "class TEmprPJ class tpInsc(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nmPai'): pass def", "return self.nrRecibo def set_nrRecibo(self, nrRecibo): self.nrRecibo = nrRecibo def get_tpAmb(self): return self.tpAmb def", "exp: class GeneratedsSuper(object): tzoff_pattern = re_.compile(r'(\\+|-)((0\\d|1[0-3]):[0-5]\\d|14:00)$') class _FixedOffsetTZ(datetime_.tzinfo): def __init__(self, offset, name): self.__offset", "level, namespace_='', name_='nrRecibo', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "name_='paisResid') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='paisResid',", "return paisResid(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='indRetif') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "GenerateDSNamespaceDefs_.get('dadosNasc') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "nodeName_ == 'dscLograd': dscLograd_ = child_.text dscLograd_ = self.gds_validate_string(dscLograd_, node, 'dscLograd') self.dscLograd =", "== 'tpInsc': sval_ = child_.text try: ival_ = int(sval_) except (TypeError, ValueError) as", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nmMae class", "already_processed, namespace_, name_='paisNac') if self.hasContent_(): outfile.write('>%s' % (eol_, )) 
self.exportChildren(outfile, level + 1,", "node, nodeName_, fromsubclass_=False): pass # end class codMunic class uf(GeneratedsSuper): subclass = None", "def get_bairro(self): return self.bairro def set_bairro(self, bairro): self.bairro = bairro def get_cep(self): return", "= '%s (element %s/line %d)' % (msg, node.tag, node.sourceline, ) raise GDSParseError(msg) class", "= re_.compile(r\"<!\\[CDATA\\[.*?\\]\\]>\", re_.DOTALL) # Change this to redirect the generated superclass module to", "class from a specific module.''' name = class_.__name__ + 'Sub' if hasattr(module, name):", "of doubles') return values def gds_format_boolean(self, input_data, input_name=''): return ('%s' % input_data).lower() def", "namespace_='', name_='dadosNasc', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "None: showIndent(outfile, level, pretty_print) outfile.write('<%snmCid>%s</%snmCid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmCid), input_name='nmCid')), namespace_, eol_)) if self.codPostal", "= nmMae_ elif nodeName_ == 'nmPai': nmPai_ = child_.text nmPai_ = self.gds_validate_string(nmPai_, node,", "@staticmethod def gds_encode(instring): if sys.version_info.major == 2: return instring.encode(ExternalEncoding) else: return instring @staticmethod", "return subclass(*args_, **kwargs_) if paisNascto.subclass: return paisNascto.subclass(*args_, **kwargs_) else: return paisNascto(*args_, **kwargs_) factory", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='dscLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "== 'tpPlanRP': sval_ = child_.text try: ival_ = int(sval_) except (TypeError, ValueError) as", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='ideBenef'): pass def exportChildren(self, outfile, level, namespace_='',", "'&lt;') s1 = s1.replace('>', '&gt;') 
if '\"' in s1: if \"'\" in s1:", "'\\n' else: eol_ = '' if self.paisResid is not None: showIndent(outfile, level, pretty_print)", "nodeName_, fromsubclass_=False): if nodeName_ == 'tpBenef': sval_ = child_.text try: ival_ = int(sval_)", "tzoff.seconds + (86400 * tzoff.days) if total_seconds == 0: _svalue += 'Z' else:", "'idQuota') self.idQuota = idQuota_ elif nodeName_ == 'cpfInst': cpfInst_ = child_.text cpfInst_ =", "self.dadosBenef is not None: self.dadosBenef.export(outfile, level, namespace_, name_='dadosBenef', pretty_print=pretty_print) def build(self, node): already_processed", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='eSocial'): pass", "showIndent(outfile, level, pretty_print) outfile.write('<%stpPlanRP>%s</%stpPlanRP>%s' % (namespace_, self.gds_format_integer(self.tpPlanRP, input_name='tpPlanRP'), namespace_, eol_)) if self.iniBeneficio is", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dtFimBenef) if subclass is", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'tpLograd': tpLograd_ = child_.text tpLograd_", "paisResid_ elif nodeName_ == 'dscLograd': dscLograd_ = child_.text dscLograd_ = self.gds_validate_string(dscLograd_, node, 'dscLograd')", "the export method of # any generated element type class for a example", "return True else: return False def export(self, outfile, level, namespace_='', name_='dscLograd', namespacedef_='', pretty_print=True):", "self.content_type def getValue(self): return self.value def getName(self): return self.name def export(self, outfile, level,", "self.indRetif def set_indRetif(self, indRetif): self.indRetif = indRetif def get_nrRecibo(self): return self.nrRecibo def set_nrRecibo(self,", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='endereco', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) 
#!/usr/bin/env python
# -*- coding: utf-8 -*-

#
# Generated by generateDS.py.
#
# Command line:
#   /usr/local/bin/generateDS --no-process-includes -o "esociallib/v2_04/evtCdBenPrRP.py" schemas/v2_04/evtCdBenPrRP.xsd
#
# Current working directory (os.getcwd()):
#

import sys
import re as re_
import base64
import datetime as datetime_

try:
    from lxml import etree as etree_
except ImportError:
    from xml.etree import ElementTree as etree_

Validate_simpletypes_ = True
if sys.version_info.major == 2:
    BaseStrType_ = basestring
else:
    BaseStrType_ = str

ExternalEncoding = 'utf-8'
Tag_pattern_ = re_.compile(r'({.*})?(.*)')


def parsexml_(infile, parser=None, **kwargs):
    if parser is None:
        try:
            parser = etree_.ETCompatXMLParser()
        except AttributeError:
            parser = etree_.XMLParser()
    doc = etree_.parse(infile, parser=parser, **kwargs)
    return doc

#
# Namespace prefix definition table (and other attributes, too)
#
# The module generatedsnamespaces, if it is importable, must contain
# a dictionary named GenerateDSNamespaceDefs.  See the export method
# of any generated element type class for an example of the use of
# this table.  A sample table is:
#
#     # File: generatedsnamespaces.py
#
#     GenerateDSNamespaceDefs = {
#         "ElementtypeA": "http://www.xxx.com/namespaceA",
#         "ElementtypeB": "http://www.xxx.com/namespaceB",
#     }
#

try:
    from generatedsnamespaces import GenerateDSNamespaceDefs as GenerateDSNamespaceDefs_
except ImportError:
    GenerateDSNamespaceDefs_ = {}

#
# Support/utility functions.
#


def showIndent(outfile, level, pretty_print=True):
    if pretty_print:
        for idx in range(level):
            outfile.write('    ')


def quote_xml(inStr):
    "Escape markup chars, but do not modify CDATA sections."
    if not inStr:
        return ''
    s1 = (isinstance(inStr, BaseStrType_) and inStr or '%s' % inStr)
    s1 = s1.replace('&', '&amp;')
    s1 = s1.replace('<', '&lt;')
    s1 = s1.replace('>', '&gt;')
    return s1


class GDSParseError(Exception):
    pass


def raise_parse_error(node, msg):
    msg = '%s (element %s/line %d)' % (msg, node.tag, node.sourceline, )
    raise GDSParseError(msg)


#
# User methods for element type classes
#
# Calls to the methods in these classes are generated by generateDS.py.
# You can replace these methods by re-implementing the following class
# in a module named generatedssuper.py.
#


class GeneratedsSuper(object):
    tzoff_pattern = re_.compile(r'(\+|-)((0\d|1[0-3]):[0-5]\d|14:00)$')

    class _FixedOffsetTZ(datetime_.tzinfo):
        def __init__(self, offset, name):
            self.__offset = datetime_.timedelta(minutes=offset)
            self.__name = name

        def utcoffset(self, dt):
            return self.__offset

        def tzname(self, dt):
            return self.__name

        def dst(self, dt):
            return None

    def gds_format_string(self, input_data, input_name=''):
        return input_data

    def gds_validate_string(self, input_data, node=None, input_name=''):
        if not input_data:
            return ''
        else:
            return input_data

    def gds_format_integer(self, input_data, input_name=''):
        return '%d' % input_data

    def gds_validate_integer(self, input_data, node=None, input_name=''):
        return input_data

    def gds_format_date(self, input_data, input_name=''):
        _svalue = '%04d-%02d-%02d' % (
            input_data.year,
            input_data.month,
            input_data.day,
        )
        return _svalue

    def gds_validate_date(self, input_data, node=None, input_name=''):
        return input_data

    @classmethod
    def gds_parse_date(cls, input_data):
        dt = datetime_.datetime.strptime(input_data, '%Y-%m-%d')
        return dt.date()

    @classmethod
    def gds_reverse_node_mapping(cls, mapping):
        return dict(((v, k) for k, v in mapping.iteritems()))

    @staticmethod
    def gds_encode(instring):
        if sys.version_info.major == 2:
            return instring.encode(ExternalEncoding)
        else:
            return instring

    # (further gds_* helpers for booleans, floats, doubles, datetimes
    # and list types are generated here in the full module)


#
# Globals
#
# You can use the following variable to redirect the generated
# superclass module to use a specific subclass module.
#

CurrentSubclassModule_ = None


def getSubclassFromModule_(module, class_):
    '''Get the subclass of a class from a specified subclass module.'''
    name = class_.__name__ + 'Sub'
    if hasattr(module, name):
        return getattr(module, name)
    else:
        return None


def _cast(typ, value):
    if typ is None or value is None:
        return value
    return typ(value)

#
# Data representation classes.
#
# (The full module defines one class per schema element: evtCdBenPrRP,
# eSocial, TIdeEveTrab (indRetif, nrRecibo, tpAmb, procEmi, verProc),
# TEmprPJ (tpInsc, nrInsc), ideBenef (cpfBenef, nmBenefic), TDadosBenef,
# dadosNasc (dtNascto, codMunic, uf, paisNascto, paisNac, nmMae, nmPai),
# endereco with TEnderecoBrasil (tpLograd, dscLograd, nrLograd,
# complemento, bairro, cep, codMunic, uf) and TEnderecoExterior
# (paisResid, dscLograd, nrLograd, complemento, bairro, nmCid,
# codPostal), infoBeneficio (tpPlanRP, iniBeneficio, altBeneficio,
# fimBeneficio as TDadosBeneficio), fimBeneficio (tpBenef, nrBenefic,
# dtFimBenef, mtvFim), infoPenMorte, vrBenef, dtIniBenef and Signature,
# plus the MixedContainer and MemberSpec_ support classes; they all
# follow the same pattern as infoPenMorte below.)
#


class infoPenMorte(GeneratedsSuper):
    subclass = None
    superclass = None

    def __init__(self, idQuota=None, cpfInst=None):
        self.original_tagname_ = None
        self.idQuota = idQuota
        self.cpfInst = cpfInst

    def factory(*args_, **kwargs_):
        if CurrentSubclassModule_ is not None:
            subclass = getSubclassFromModule_(
                CurrentSubclassModule_, infoPenMorte)
            if subclass is not None:
                return subclass(*args_, **kwargs_)
        if infoPenMorte.subclass:
            return infoPenMorte.subclass(*args_, **kwargs_)
        else:
            return infoPenMorte(*args_, **kwargs_)
    factory = staticmethod(factory)

    def get_idQuota(self):
        return self.idQuota

    def set_idQuota(self, idQuota):
        self.idQuota = idQuota

    def get_cpfInst(self):
        return self.cpfInst

    def set_cpfInst(self, cpfInst):
        self.cpfInst = cpfInst

    def hasContent_(self):
        if (
            self.idQuota is not None or
            self.cpfInst is not None
        ):
            return True
        else:
            return False

    def export(self, outfile, level, namespace_='', name_='infoPenMorte',
               namespacedef_='', pretty_print=True):
        imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoPenMorte')
        if imported_ns_def_ is not None:
            namespacedef_ = imported_ns_def_
        if pretty_print:
            eol_ = '\n'
        else:
            eol_ = ''
        if self.original_tagname_ is not None:
            name_ = self.original_tagname_
        showIndent(outfile, level, pretty_print)
        outfile.write('<%s%s%s' % (
            namespace_, name_,
            namespacedef_ and ' ' + namespacedef_ or '', ))
        already_processed = set()
        self.exportAttributes(
            outfile, level, already_processed, namespace_,
            name_='infoPenMorte')
        if self.hasContent_():
            outfile.write('>%s' % (eol_, ))
            self.exportChildren(
                outfile, level + 1, namespace_='', name_='infoPenMorte',
                pretty_print=pretty_print)
            showIndent(outfile, level, pretty_print)
            outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
        else:
            outfile.write('/>%s' % (eol_, ))

    def exportAttributes(self, outfile, level, already_processed,
                         namespace_='', name_='infoPenMorte'):
        pass

    def exportChildren(self, outfile, level, namespace_='',
                       name_='infoPenMorte', fromsubclass_=False,
                       pretty_print=True):
        if pretty_print:
            eol_ = '\n'
        else:
            eol_ = ''
        if self.idQuota is not None:
            showIndent(outfile, level, pretty_print)
            outfile.write('<%sidQuota>%s</%sidQuota>%s' % (
                namespace_,
                self.gds_encode(self.gds_format_string(
                    quote_xml(self.idQuota), input_name='idQuota')),
                namespace_, eol_))
        if self.cpfInst is not None:
            showIndent(outfile, level, pretty_print)
            outfile.write('<%scpfInst>%s</%scpfInst>%s' % (
                namespace_,
                self.gds_encode(self.gds_format_string(
                    quote_xml(self.cpfInst), input_name='cpfInst')),
                namespace_, eol_))

    def build(self, node):
        already_processed = set()
        self.buildAttributes(node, node.attrib, already_processed)
        for child in node:
            nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
            self.buildChildren(child, node, nodeName_)
        return self

    def buildAttributes(self, node, attrs, already_processed):
        pass

    def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
        if nodeName_ == 'idQuota':
            idQuota_ = child_.text
            idQuota_ = self.gds_validate_string(idQuota_, node, 'idQuota')
            self.idQuota = idQuota_
        elif nodeName_ == 'cpfInst':
            cpfInst_ = child_.text
            cpfInst_ = self.gds_validate_string(cpfInst_, node, 'cpfInst')
            self.cpfInst = cpfInst_
# end class infoPenMorte


GDSClassesMapping = {
    'altBeneficio': TDadosBeneficio,
    'brasil': TEnderecoBrasil,
    'dadosBenef': TDadosBenef,
    'exterior': TEnderecoExterior,
    'ideEmpregador': TEmprPJ,
    'ideEvento': TIdeEveTrab,
    'iniBeneficio': TDadosBeneficio,
}

USAGE_TEXT = """
Usage: python <Parser>.py [ -s ] <in_xml_file>
"""


def usage():
    print(USAGE_TEXT)
    sys.exit(1)


def get_root_tag(node):
    tag = Tag_pattern_.match(node.tag).groups()[-1]
    rootClass = GDSClassesMapping.get(tag)
    if rootClass is None:
        rootClass = globals().get(tag)
    return tag, rootClass


def parse(inFileName, silence=False):
    parser = None
    doc = parsexml_(inFileName, parser)
    rootNode = doc.getroot()
    rootTag, rootClass = get_root_tag(rootNode)
    if rootClass is None:
        rootTag = 'eSocial'
        rootClass = eSocial
    rootObj = rootClass.factory()
    rootObj.build(rootNode)
    # Enable Python to collect the space used by the DOM.
    doc = None
    if not silence:
        sys.stdout.write('<?xml version="1.0" ?>\n')
        rootObj.export(
            sys.stdout, 0, name_=rootTag,
            namespacedef_='',
            pretty_print=True)
    return rootObj


def parseEtree(inFileName, silence=False):
    parser = None
    doc = parsexml_(inFileName, parser)
    rootNode = doc.getroot()
    rootTag, rootClass = get_root_tag(rootNode)
    if rootClass is None:
        rootTag = 'eSocial'
        rootClass = eSocial
    rootObj = rootClass.factory()
    rootObj.build(rootNode)
    # Enable Python to collect the space used by the DOM.
    doc = None
    mapping = {}
    rootElement = rootObj.to_etree(None, name_=rootTag, mapping_=mapping)
    reverse_mapping = rootObj.gds_reverse_node_mapping(mapping)
    if not silence:
        content = etree_.tostring(
            rootElement, pretty_print=True,
            xml_declaration=True, encoding="utf-8")
        sys.stdout.write(content)
        sys.stdout.write('\n')
    return rootObj, rootElement, mapping, reverse_mapping


def parseString(inString, silence=False):
    if sys.version_info.major == 2:
        from StringIO import StringIO as IOBuffer
    else:
        from io import BytesIO as IOBuffer
    parser = None
    doc = parsexml_(IOBuffer(inString), parser)
    rootNode = doc.getroot()
    rootTag, rootClass = get_root_tag(rootNode)
    if rootClass is None:
        rootTag = 'eSocial'
        rootClass = eSocial
    rootObj = rootClass.factory()
    rootObj.build(rootNode)
    # Enable Python to collect the space used by the DOM.
    doc = None
    if not silence:
        sys.stdout.write('<?xml version="1.0" ?>\n')
        rootObj.export(
            sys.stdout, 0, name_=rootTag,
            namespacedef_='')
    return rootObj


def parseLiteral(inFileName, silence=False):
    parser = None
    doc = parsexml_(inFileName, parser)
    rootNode = doc.getroot()
    rootTag, rootClass = get_root_tag(rootNode)
    if rootClass is None:
        rootTag = 'eSocial'
        rootClass = eSocial
    rootObj = rootClass.factory()
    rootObj.build(rootNode)
    # Enable Python to collect the space used by the DOM.
    doc = None
    if not silence:
        sys.stdout.write('#from evtCdBenPrRP import *\n\n')
        sys.stdout.write('import evtCdBenPrRP as model_\n\n')
        sys.stdout.write('rootObj = model_.rootClass(\n')
        rootObj.exportLiteral(sys.stdout, 0, name_=rootTag)
        sys.stdout.write(')\n')
    return rootObj


def main():
    args = sys.argv[1:]
    if len(args) == 1:
        parse(args[0])
    else:
        usage()


if __name__ == '__main__':
    # import pdb; pdb.set_trace()
    main()
'%Y-%m-%d').date() else: initvalue_ = dtIniBenef self.dtIniBenef = initvalue_", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='evtCdBenPrRP', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s'", "and inStr or '%s' % inStr) s1 = s1.replace('&', '&amp;') s1 = s1.replace('<',", "'uf') self.uf = uf_ elif nodeName_ == 'paisNascto': paisNascto_ = child_.text paisNascto_ =", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrBenefic'): pass def exportChildren(self, outfile, level,", "namespace_='', name_='dscLograd', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "input_data, input_name=''): if input_data.microsecond == 0: _svalue = '%02d:%02d:%02d' % ( input_data.hour, input_data.minute,", "def get_dtNascto(self): return self.dtNascto def set_dtNascto(self, dtNascto): self.dtNascto = dtNascto def get_codMunic(self): return", "TEmprPJ class tpInsc(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ =", "eol_ = '\\n' else: eol_ = '' if self.ideEvento is not None: self.ideEvento.export(outfile,", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='infoBeneficio') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "datetime_.datetime.strptime(dtFimBenef, '%Y-%m-%d').date() else: initvalue_ = dtFimBenef self.dtFimBenef = initvalue_ self.mtvFim = mtvFim def", "== 'endereco': obj_ = endereco.factory() obj_.build(child_) self.endereco = obj_ obj_.original_tagname_ = 'endereco' #", "pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if self.tpInsc is", "self.ideBenef def set_ideBenef(self, ideBenef): self.ideBenef = ideBenef def get_infoBeneficio(self): return self.infoBeneficio def set_infoBeneficio(self,", "pretty_print) outfile.write('<%stpPlanRP>%s</%stpPlanRP>%s' % (namespace_, 
self.gds_format_integer(self.tpPlanRP, input_name='tpPlanRP'), namespace_, eol_)) if self.iniBeneficio is not None:", "if subclass is not None: return subclass(*args_, **kwargs_) if fimBeneficio.subclass: return fimBeneficio.subclass(*args_, **kwargs_)", "name_='fimBeneficio', pretty_print=pretty_print) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed) for child", "= '' if self.brasil is not None: self.brasil.export(outfile, level, namespace_, name_='brasil', pretty_print=pretty_print) if", "set_brasil(self, brasil): self.brasil = brasil def get_exterior(self): return self.exterior def set_exterior(self, exterior): self.exterior", "# Constants for content_type: TypeNone = 0 TypeText = 1 TypeString = 2", "generatedssuper import GeneratedsSuper except ImportError as exp: class GeneratedsSuper(object): tzoff_pattern = re_.compile(r'(\\+|-)((0\\d|1[0-3]):[0-5]\\d|14:00)$') class", "def __init__(self, idQuota=None, cpfInst=None): self.original_tagname_ = None self.idQuota = idQuota self.cpfInst = cpfInst", "= mtvFim def hasContent_(self): if ( self.tpBenef is not None or self.nrBenefic is", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmMae) if subclass is not None: return subclass(*args_,", "child_.text dval_ = self.gds_parse_date(sval_) self.dtIniBenef = dval_ elif nodeName_ == 'vrBenef': sval_ =", "input_data.split() for value in values: try: int(value) except (TypeError, ValueError): raise_parse_error(node, 'Requires sequence", "category: CategoryNone = 0 CategoryText = 1 CategorySimple = 2 CategoryComplex = 3", "isinstance(dtFimBenef, BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtFimBenef, '%Y-%m-%d').date() else: initvalue_ = dtFimBenef self.dtFimBenef = initvalue_", "== MixedContainer.TypeInteger or \\ self.content_type == MixedContainer.TypeBoolean: outfile.write('<%s>%d</%s>' % ( self.name, self.value, self.name))", "else: return False def export(self, outfile, level, 
namespace_='', name_='codMunic', namespacedef_='', pretty_print=True): imported_ns_def_ =", "else: return vrBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpAmb'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpAmb',", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='dscLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "end class ideBenef class cpfBenef(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "self.gds_validate_string(paisResid_, node, 'paisResid') self.paisResid = paisResid_ elif nodeName_ == 'dscLograd': dscLograd_ = child_.text", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('vrBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "self.gds_validate_integer(ival_, node, 'mtvFim') self.mtvFim = ival_ # end class fimBeneficio class tpBenef(GeneratedsSuper): subclass", "choice self.optional = optional def set_name(self, name): self.name = name def get_name(self): return", "= child_.text cpfInst_ = self.gds_validate_string(cpfInst_, node, 'cpfInst') self.cpfInst = cpfInst_ # end class", "level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "= True if sys.version_info.major == 2: BaseStrType_ = basestring else: BaseStrType_ = str", "outfile, level, already_processed, namespace_='', name_='tpPlanRP'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpPlanRP', fromsubclass_=False,", "self.dadosNasc.export(outfile, level, namespace_, name_='dadosNasc', pretty_print=pretty_print) if self.endereco is not None: self.endereco.export(outfile, level, namespace_,", "None if not silence: sys.stdout.write('#from evtCdBenPrRP import *\\n\\n') sys.stdout.write('import evtCdBenPrRP as model_\\n\\n') 
sys.stdout.write('rootObj", "factory = staticmethod(factory) def get_evtCdBenPrRP(self): return self.evtCdBenPrRP def set_evtCdBenPrRP(self, evtCdBenPrRP): self.evtCdBenPrRP = evtCdBenPrRP", "None: return subclass(*args_, **kwargs_) if fimBeneficio.subclass: return fimBeneficio.subclass(*args_, **kwargs_) else: return fimBeneficio(*args_, **kwargs_)", "input_data.day, input_data.hour, input_data.minute, input_data.second, ('%f' % (float(input_data.microsecond) / 1000000))[2:], ) if input_data.tzinfo is", "'\\\\\"') if s1.find('\\n') == -1: return '\"%s\"' % s1 else: return '\"\"\"%s\"\"\"' %", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dadosNasc'): pass def exportChildren(self, outfile, level,", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='bairro'): pass def exportChildren(self, outfile,", "pretty_print) outfile.write('<%snmPai>%s</%snmPai>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmPai), input_name='nmPai')), namespace_, eol_)) def build(self, node): already_processed =", "name_='paisNac', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('paisNac') if imported_ns_def_ is not None: namespacedef_ =", "list): if len(self.data_type) > 0: return self.data_type[-1] else: return 'xs:string' else: return self.data_type", "outfile, level, namespace_='', name_='infoPenMorte', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoPenMorte') if imported_ns_def_ is not", "'%s' % ' '.join(input_data) def gds_validate_boolean_list( self, input_data, node=None, input_name=''): values = input_data.split()", "= None self.tpBenef = tpBenef self.nrBenefic = nrBenefic if isinstance(dtIniBenef, BaseStrType_): initvalue_ =", "subclass is not None: return subclass(*args_, **kwargs_) if codPostal.subclass: return codPostal.subclass(*args_, **kwargs_) else:", "infoPenMorte.factory() 
obj_.build(child_) self.infoPenMorte = obj_ obj_.original_tagname_ = 'infoPenMorte' # end class TDadosBeneficio class", "self.Signature = Signature_ # end class eSocial class evtCdBenPrRP(GeneratedsSuper): \"\"\"Evento de cadastro de", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpPlanRP') if self.hasContent_():", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='uf', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TEnderecoBrasil) if subclass is not", "input_data = input_data[:-6] dt = datetime_.datetime.strptime(input_data, '%Y-%m-%d') dt = dt.replace(tzinfo=tz) return dt.date() def", "codPostal.subclass(*args_, **kwargs_) else: return codPostal(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if (", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='tpLograd',", "fromsubclass_=False): pass # end class uf class paisNascto(GeneratedsSuper): subclass = None superclass =", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TEnderecoBrasil) if", "outfile, level, namespace_='', name_='TEnderecoBrasil', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TEnderecoBrasil') if imported_ns_def_ is not", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='cep') if self.hasContent_(): outfile.write('>%s' % (eol_,", "nmCid(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='bairro') if self.hasContent_(): outfile.write('>%s' % (eol_,", "cpfInst_ = self.gds_validate_string(cpfInst_, node, 
'cpfInst') self.cpfInst = cpfInst_ # end class infoPenMorte class", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dtIniBenef')", "# end class idQuota class cpfInst(GeneratedsSuper): subclass = None superclass = None def", "= obj_ obj_.original_tagname_ = 'dadosNasc' elif nodeName_ == 'endereco': obj_ = endereco.factory() obj_.build(child_)", "return subclass(*args_, **kwargs_) if idQuota.subclass: return idQuota.subclass(*args_, **kwargs_) else: return idQuota(*args_, **kwargs_) factory", "self.gds_parse_date(sval_) self.dtNascto = dval_ elif nodeName_ == 'codMunic': sval_ = child_.text try: ival_", "isinstance(instring, str): result = quote_xml(instring) elif sys.version_info.major == 2 and isinstance(instring, unicode): result", "self.ideEvento is not None: self.ideEvento.export(outfile, level, namespace_, name_='ideEvento', pretty_print=pretty_print) if self.ideEmpregador is not", "self.nrBenefic is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrBenefic>%s</%snrBenefic>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrBenefic), input_name='nrBenefic')), namespace_,", "or self.cep is not None or self.codMunic is not None or self.uf is", "return self.ideBenef def set_ideBenef(self, ideBenef): self.ideBenef = ideBenef def get_infoBeneficio(self): return self.infoBeneficio def", "attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class", "= staticmethod(factory) def get_tpPlanRP(self): return self.tpPlanRP def set_tpPlanRP(self, tpPlanRP): self.tpPlanRP = tpPlanRP def", "pass def exportChildren(self, outfile, level, namespace_='', name_='paisNac', fromsubclass_=False, pretty_print=True): pass def build(self, node):", "in values: try: float(value) except (TypeError, ValueError): raise_parse_error(node, 'Requires sequence of doubles') return", "export(self, outfile, level, 
namespace_='', name_='codMunic', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('codMunic') if imported_ns_def_ is", "showIndent(outfile, level, pretty_print) outfile.write('<%scpfInst>%s</%scpfInst>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfInst), input_name='cpfInst')), namespace_, eol_)) def build(self, node):", "is: # # # File: generatedsnamespaces.py # # GenerateDSNamespaceDefs = { # \"ElementtypeA\":", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, evtCdBenPrRP) if subclass is not None: return", "empty content as empty lines. if self.value.strip(): if len(element) > 0: if element[-1].tail", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TDadosBenef) if subclass is not", "exportChildren(self, outfile, level, namespace_='', name_='tpBenef', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "for value in values: if value not in ('true', '1', 'false', '0', ):", "obj_ = evtCdBenPrRP.factory() obj_.build(child_) self.evtCdBenPrRP = obj_ obj_.original_tagname_ = 'evtCdBenPrRP' elif nodeName_ ==", "gds_format_time(self, input_data, input_name=''): if input_data.microsecond == 0: _svalue = '%02d:%02d:%02d' % ( input_data.hour,", "if self.nrBenefic is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrBenefic>%s</%snrBenefic>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrBenefic), input_name='nrBenefic')),", "rootNode = doc.getroot() rootTag, rootClass = get_root_tag(rootNode) if rootClass is None: rootTag =", "= GenerateDSNamespaceDefs_.get('nmPai') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "self.evtCdBenPrRP is not None: self.evtCdBenPrRP.export(outfile, level, namespace_, name_='evtCdBenPrRP', pretty_print=pretty_print) if self.Signature is not", "= 
GenerateDSNamespaceDefs_.get('dtFimBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "' ' + namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed,", "def get_dtFimBenef(self): return self.dtFimBenef def set_dtFimBenef(self, dtFimBenef): self.dtFimBenef = dtFimBenef def get_mtvFim(self): return", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='paisNac', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "self.exportChildren(outfile, level + 1, namespace_='', name_='codPostal', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='idQuota') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "obj_.original_tagname_ = 'exterior' # end class endereco class TEnderecoBrasil(GeneratedsSuper): \"\"\"Informações do Endereço no", "level, namespace_='', name_='codPostal', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('codPostal') if imported_ns_def_ is not None:", "self.paisResid is not None: showIndent(outfile, level, pretty_print) outfile.write('<%spaisResid>%s</%spaisResid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisResid), input_name='paisResid')), namespace_,", "compatible parser so that, e.g., # we ignore comments. 
try: parser = etree_.ETCompatXMLParser()", "outfile, level, already_processed, namespace_='', name_='TEnderecoExterior'): pass def exportChildren(self, outfile, level, namespace_='', name_='TEnderecoExterior', fromsubclass_=False,", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, vrBenef) if subclass is not", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='bairro') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='fimBeneficio'): pass def", "self.complemento is not None: showIndent(outfile, level, pretty_print) outfile.write('<%scomplemento>%s</%scomplemento>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.complemento), input_name='complemento')), namespace_,", "result def __eq__(self, other): if type(self) != type(other): return False return self.__dict__ ==", "outfile, level, namespace_='', name_='complemento', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "name_='dtIniBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dtIniBenef') if imported_ns_def_ is not None: namespacedef_ =", "is not None: return subclass(*args_, **kwargs_) if cpfBenef.subclass: return cpfBenef.subclass(*args_, **kwargs_) else: return", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%sdtNascto>%s</%sdtNascto>%s' % (namespace_, self.gds_format_date(self.dtNascto, input_name='dtNascto'), namespace_, eol_)) if", "name_='dtIniBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "= input_data.split() for value in values: try: float(value) except (TypeError, ValueError): raise_parse_error(node, 'Requires", "def buildChildren(self, child_, node, 
nodeName_, fromsubclass_=False): if nodeName_ == 'indRetif': sval_ = child_.text", "return self.nmMae def set_nmMae(self, nmMae): self.nmMae = nmMae def get_nmPai(self): return self.nmPai def", "subclass = None superclass = None def __init__(self, Id=None, ideEvento=None, ideEmpregador=None, ideBenef=None, infoBeneficio=None):", "obj_ obj_.original_tagname_ = 'endereco' # end class TDadosBenef class dadosNasc(GeneratedsSuper): \"\"\"Informações de nascimento", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='endereco') if self.hasContent_(): outfile.write('>%s' %", "return procEmi(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TDadosBenef) if subclass is not None: return subclass(*args_,", "# # Namespace prefix definition table (and other attributes, too) # # The", "'.join(input_data) def gds_validate_double_list( self, input_data, node=None, input_name=''): values = input_data.split() for value in", "namespace_, eol_)) if self.dscLograd is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sdscLograd>%s</%sdscLograd>%s' % (namespace_,", "% (time_parts[0], micro_seconds, ) dt = datetime_.datetime.strptime( input_data, '%Y-%m-%dT%H:%M:%S.%f') else: dt = datetime_.datetime.strptime(", "pretty_print: eol_ = '\\n' else: eol_ = '' if self.ideEvento is not None:", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sdtFimBenef>%s</%sdtFimBenef>%s' % (namespace_, self.gds_format_date(self.dtFimBenef, input_name='dtFimBenef'), namespace_, eol_))", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpBenef')", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='uf'): pass def exportChildren(self, outfile, level, namespace_='',", "namespace_='', 
name_='vrBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('vrBenef') if imported_ns_def_ is not None: namespacedef_", "= None def __init__(self, evtCdBenPrRP=None, Signature=None): self.original_tagname_ = None self.evtCdBenPrRP = evtCdBenPrRP self.Signature", "already_processed, namespace_, name_='dscLograd') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", "s1.replace('<', '&lt;') s1 = s1.replace('>', '&gt;') return s1 def quote_attrib(inStr): s1 = (isinstance(inStr,", "import GeneratedsSuper except ImportError as exp: class GeneratedsSuper(object): tzoff_pattern = re_.compile(r'(\\+|-)((0\\d|1[0-3]):[0-5]\\d|14:00)$') class _FixedOffsetTZ(datetime_.tzinfo):", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='paisResid', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "child_, node, nodeName_, fromsubclass_=False): pass # end class cep class TEnderecoExterior(GeneratedsSuper): \"\"\"Informações do", "eol_ = '' if self.evtCdBenPrRP is not None: self.evtCdBenPrRP.export(outfile, level, namespace_, name_='evtCdBenPrRP', pretty_print=pretty_print)", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%scpfBenef>%s</%scpfBenef>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfBenef), input_name='cpfBenef')), namespace_, eol_)) if", "showIndent(outfile, level, pretty_print) outfile.write('<%stpLograd>%s</%stpLograd>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.tpLograd), input_name='tpLograd')), namespace_, eol_)) if self.dscLograd is", "None or self.codMunic is not None or self.uf is not None ): return", "level + 1, namespace_='', name_='nrLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "return TDadosBeneficio.subclass(*args_, **kwargs_) else: return TDadosBeneficio(*args_, **kwargs_) 
factory = staticmethod(factory) def get_tpBenef(self): return", "CurrentSubclassModule_, paisNac) if subclass is not None: return subclass(*args_, **kwargs_) if paisNac.subclass: return", "level + 1, namespace_='', name_='dadosNasc', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sSignature>%s</%sSignature>%s' % ('ds:', self.gds_encode(self.gds_format_string(quote_xml(self.Signature), input_name='Signature')), 'ds:', eol_))", "False def export(self, outfile, level, namespace_='', name_='TEnderecoExterior', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TEnderecoExterior') if", "1, namespace_='', name_='nrInsc', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_,", "not None: return subclass(*args_, **kwargs_) if TDadosBeneficio.subclass: return TDadosBeneficio.subclass(*args_, **kwargs_) else: return TDadosBeneficio(*args_,", "node=None, input_name=''): return input_data def gds_format_integer(self, input_data, input_name=''): return '%d' % input_data def", "pass # end class dscLograd class nrLograd(GeneratedsSuper): subclass = None superclass = None", "self.iniBeneficio def set_iniBeneficio(self, iniBeneficio): self.iniBeneficio = iniBeneficio def get_altBeneficio(self): return self.altBeneficio def set_altBeneficio(self,", "if value not in ('true', '1', 'false', '0', ): raise_parse_error( node, 'Requires sequence", "namespace_, eol_)) if self.uf is not None: showIndent(outfile, level, pretty_print) outfile.write('<%suf>%s</%suf>%s' % (namespace_,", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrLograd') if self.hasContent_(): outfile.write('>%s'", "'cpfInst': cpfInst_ = child_.text cpfInst_ = self.gds_validate_string(cpfInst_, node, 'cpfInst') 
self.cpfInst = cpfInst_ #", "is not None or self.complemento is not None or self.bairro is not None", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='cpfInst') if self.hasContent_():", "GeneratedsSuper._FixedOffsetTZ( tzoff, results.group(0)) input_data = input_data[:-6] dt = datetime_.datetime.strptime(input_data, '%Y-%m-%d') dt = dt.replace(tzinfo=tz)", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TDadosBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "-1: return \"'%s'\" % s1 else: return \"'''%s'''\" % s1 else: if s1.find('\"')", "self.name def export(self, outfile, level, name, namespace, pretty_print=True): if self.category == MixedContainer.CategoryText: #", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='evtCdBenPrRP', pretty_print=pretty_print) showIndent(outfile,", "endereco class TEnderecoBrasil(GeneratedsSuper): \"\"\"Informações do Endereço no Brasil\"\"\" subclass = None superclass =", "nodeName_, fromsubclass_=False): if nodeName_ == 'dtNascto': sval_ = child_.text dval_ = self.gds_parse_date(sval_) self.dtNascto", "not None: return subclass(*args_, **kwargs_) if TEmprPJ.subclass: return TEmprPJ.subclass(*args_, **kwargs_) else: return TEmprPJ(*args_,", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('paisNascto') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "fromsubclass_=False): if nodeName_ == 'idQuota': idQuota_ = child_.text idQuota_ = self.gds_validate_string(idQuota_, node, 'idQuota')", "'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'mtvFim') self.mtvFim = ival_", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%scpfInst>%s</%scpfInst>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfInst), input_name='cpfInst')), namespace_, eol_))", 
"namespace_='', name_='cpfInst', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('cpfInst') if imported_ns_def_ is not None: namespacedef_", "pass def exportChildren(self, outfile, level, namespace_='', name_='TEmprPJ', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ =", "+ 1, namespace_='', name_='infoBeneficio', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "'complemento': complemento_ = child_.text complemento_ = self.gds_validate_string(complemento_, node, 'complemento') self.complemento = complemento_ elif", "get_nrBenefic(self): return self.nrBenefic def set_nrBenefic(self, nrBenefic): self.nrBenefic = nrBenefic def get_dtIniBenef(self): return self.dtIniBenef", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, indRetif)", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrLograd')", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'tpBenef': sval_", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nmMae'):", "GenerateDSNamespaceDefs_.get('cpfInst') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "rootTag = 'eSocial' rootClass = eSocial rootObj = rootClass.factory() rootObj.build(rootNode) # Enable Python", "end class procEmi class verProc(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "lines. 
[Fragmented n-gram residue of a generateDS.py-generated Python module: XML data-binding classes for the eSocial event `evtCdBenPrRP` (including `TDadosBeneficio`, `TDadosBenef`, `infoBeneficio`, `infoPenMorte`, `TEnderecoBrasil`, `TEnderecoExterior`, `dadosNasc`, `TEmprPJ`, `TIdeEveTrab`, and per-element classes such as `tpInsc`, `nrInsc`, `cep`, `uf`, `paisNascto`) together with the shared `GeneratedsSuper` helpers for date/time and number formatting, validation, and XML escaping (`quote_xml`, `quote_attrib`), the `MixedContainer` support class, and the `parse`/`parseEtree`/`parseLiteral` entry points. The sequential source is not recoverable from these fragments.]
MixedContainer.CategorySimple: showIndent(outfile,", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TDadosBenef'): pass def", "rootClass = GDSClassesMapping.get(tag) if rootClass is None: rootClass = globals().get(tag) return tag, rootClass", "return True else: return False def export(self, outfile, level, namespace_='', name_='nrRecibo', namespacedef_='', pretty_print=True):", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dadosNasc'): pass def exportChildren(self, outfile, level, namespace_='', name_='dadosNasc',", "= GenerateDSNamespaceDefs_.get('cpfInst') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "cpfInst def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_,", "s1 def quote_python(inStr): s1 = inStr if s1.find(\"'\") == -1: if s1.find('\\n') ==", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TDadosBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='TDadosBenef',", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEnderecoBrasil') if self.hasContent_(): outfile.write('>%s'", "1, namespace_='', name_='evtCdBenPrRP', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "class nrBenefic class dtFimBenef(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_", "False def export(self, outfile, level, namespace_='', name_='TIdeEveTrab', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TIdeEveTrab') if", "def exportChildren(self, outfile, level, namespace_='', name_='cep', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "self.nrInsc 
= nrInsc def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass =", "self.nrBenefic = nrBenefic def get_dtFimBenef(self): return self.dtFimBenef def set_dtFimBenef(self, dtFimBenef): self.dtFimBenef = dtFimBenef", "except ImportError: GenerateDSNamespaceDefs_ = {} # # The root super-class for element type", "return subclass(*args_, **kwargs_) if vrBenef.subclass: return vrBenef.subclass(*args_, **kwargs_) else: return vrBenef(*args_, **kwargs_) factory", "level, namespace_='', name_='idQuota', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('idQuota') if imported_ns_def_ is not None:", "def exportChildren(self, outfile, level, namespace_='', name_='tpBenef', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "class idQuota class cpfInst(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_", "obj_.original_tagname_ = 'infoBeneficio' # end class evtCdBenPrRP class ideBenef(GeneratedsSuper): \"\"\"Identificação do beneficiário\"\"\" subclass", "outfile.write( 'model_.MixedContainer(%d, %d, \"%s\", \"%s\"),\\n' % ( self.category, self.content_type, self.name, self.value)) elif self.category", "else: return False def export(self, outfile, level, namespace_='', name_='TIdeEveTrab', namespacedef_='', pretty_print=True): imported_ns_def_ =", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpAmb') if self.hasContent_(): outfile.write('>%s' % (eol_,", "# end class infoPenMorte class idQuota(GeneratedsSuper): subclass = None superclass = None def", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpInsc'):", "level + 1, namespace_='', name_='TEnderecoExterior', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "eol_ = '\\n' else: eol_ 
= '' if self.idQuota is not None: showIndent(outfile,", "return subclass(*args_, **kwargs_) if bairro.subclass: return bairro.subclass(*args_, **kwargs_) else: return bairro(*args_, **kwargs_) factory", "self.gds_encode(self.gds_format_string(quote_xml(self.nrRecibo), input_name='nrRecibo')), namespace_, eol_)) if self.tpAmb is not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpAmb>%s</%stpAmb>%s'", "namespace_='', name_='TEnderecoExterior', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "get_verProc(self): return self.verProc def set_verProc(self, verProc): self.verProc = verProc def hasContent_(self): if (", "namespace_='', name_='TEnderecoBrasil', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = ''", "input_name='cep')), namespace_, eol_)) if self.codMunic is not None: showIndent(outfile, level, pretty_print) outfile.write('<%scodMunic>%s</%scodMunic>%s' %", "hours = total_seconds // 3600 minutes = (total_seconds - (hours * 3600)) //", "== 'ideBenef': obj_ = ideBenef.factory() obj_.build(child_) self.ideBenef = obj_ obj_.original_tagname_ = 'ideBenef' elif", "= quote_xml(instring) elif sys.version_info.major == 2 and isinstance(instring, unicode): result = quote_xml(instring).encode('utf8') else:", "= nmPai def hasContent_(self): if ( self.dtNascto is not None or self.codMunic is", "endereco(*args_, **kwargs_) factory = staticmethod(factory) def get_brasil(self): return self.brasil def set_brasil(self, brasil): self.brasil", "dtNascto(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", "get_cpfInst(self): return self.cpfInst def set_cpfInst(self, cpfInst): self.cpfInst = cpfInst def hasContent_(self): if (", "self.indRetif = ival_ elif nodeName_ == 'nrRecibo': nrRecibo_ = child_.text nrRecibo_ = self.gds_validate_string(nrRecibo_,", "return 
dtIniBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "not in already_processed: already_processed.add('Id') outfile.write(' Id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.Id), input_name='Id')), )) def exportChildren(self, outfile,", "# end class dadosNasc class dtNascto(GeneratedsSuper): subclass = None superclass = None def", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmPai) if subclass", "name_='tpPlanRP', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "float(value) except (TypeError, ValueError): raise_parse_error(node, 'Requires sequence of floats') return values def gds_format_double(self,", "Trabalhador\"\"\" subclass = None superclass = None def __init__(self, brasil=None, exterior=None): self.original_tagname_ =", "def hasContent_(self): if ( self.evtCdBenPrRP is not None or self.Signature is not None", "None def gds_format_string(self, input_data, input_name=''): return input_data def gds_validate_string(self, input_data, node=None, input_name=''): if", "houver informação anterior de benefícios para o beneficiário identificado em {ideBenef} e para", "exp: raise_parse_error(child_, 'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'tpAmb') self.tpAmb", "else: return False def export(self, outfile, level, namespace_='', name_='evtCdBenPrRP', namespacedef_='', pretty_print=True): imported_ns_def_ =", "= initvalue_ self.codMunic = codMunic self.uf = uf self.paisNascto = paisNascto self.paisNac =", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='paisNac'): pass def exportChildren(self,", "if ( self.dadosNasc is not None or self.endereco is not None ): return", "child_.text uf_ = self.gds_validate_string(uf_, node, 'uf') self.uf = uf_ # end 
class TEnderecoBrasil", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='mtvFim'):", "= bairro self.nmCid = nmCid self.codPostal = codPostal def factory(*args_, **kwargs_): if CurrentSubclassModule_", "try: parser = etree_.ETCompatXMLParser() except AttributeError: # fallback to xml.etree parser = etree_.XMLParser()", "vrBenef(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", "name_='ideBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='ideBenef', fromsubclass_=False, pretty_print=True): if pretty_print: eol_", "def __eq__(self, other): if type(self) != type(other): return False return self.__dict__ == other.__dict__", "None self.evtCdBenPrRP = evtCdBenPrRP self.Signature = Signature def factory(*args_, **kwargs_): if CurrentSubclassModule_ is", "not in ('true', '1', 'false', '0', ): raise_parse_error( node, 'Requires sequence of booleans", "return '%s' % ' '.join(input_data) def gds_validate_float_list( self, input_data, node=None, input_name=''): values =", "name_='dadosBenef', pretty_print=pretty_print) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed) for child", "namespace, name, pretty_print=pretty_print) def exportSimple(self, outfile, level, name): if self.content_type == MixedContainer.TypeString: outfile.write('<%s>%s</%s>'", "level) outfile.write( 'model_.MixedContainer(%d, %d, \"%s\", \"%s\"),\\n' % ( self.category, self.content_type, self.name, self.value)) else:", "name_='infoBeneficio', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoBeneficio') if imported_ns_def_ is not None: namespacedef_ =", "): return True else: return False def export(self, outfile, level, namespace_='', name_='tpAmb', namespacedef_='',", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # 
end class tpBenef class nrBenefic(GeneratedsSuper):", "= self.gds_validate_integer(ival_, node, 'procEmi') self.procEmi = ival_ elif nodeName_ == 'verProc': verProc_ =", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nrRecibo) if subclass is not", "not None: return subclass(*args_, **kwargs_) if bairro.subclass: return bairro.subclass(*args_, **kwargs_) else: return bairro(*args_,", "namespace_='', name_='codPostal', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'tpInsc': sval_ = child_.text try: ival_", "): return True else: return False def export(self, outfile, level, namespace_='', name_='TDadosBeneficio', namespacedef_='',", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpBenef>%s</%stpBenef>%s' % (namespace_, self.gds_format_integer(self.tpBenef, input_name='tpBenef'), namespace_, eol_))", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, eSocial) if subclass is not None: return", "'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'procEmi') self.procEmi = ival_", "name_='uf'): pass def exportChildren(self, outfile, level, namespace_='', name_='uf', fromsubclass_=False, pretty_print=True): pass def build(self,", "'vrBenef') self.vrBenef = fval_ elif nodeName_ == 'infoPenMorte': obj_ = infoPenMorte.factory() obj_.build(child_) self.infoPenMorte", "= staticmethod(factory) def get_tpBenef(self): return self.tpBenef def set_tpBenef(self, tpBenef): self.tpBenef = tpBenef def", "= self.gds_validate_integer(ival_, node, 'tpAmb') self.tpAmb = ival_ elif nodeName_ == 'procEmi': sval_ =", "CurrentSubclassModule_, tpLograd) if subclass is not None: return subclass(*args_, **kwargs_) if tpLograd.subclass: return", "dt = datetime_.datetime.strptime(input_data, '%H:%M:%S.%f') else: dt = 
datetime_.datetime.strptime(input_data, '%H:%M:%S') dt = dt.replace(tzinfo=tz) return", "level, namespace_='', name_='infoPenMorte', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoPenMorte') if imported_ns_def_ is not None:", "already_processed, namespace_, name_='tpLograd') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", "return rootObj, rootElement, mapping, reverse_mapping def parseString(inString, silence=False): if sys.version_info.major == 2: from", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='evtCdBenPrRP') if self.hasContent_(): outfile.write('>%s' %", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='tpPlanRP',", "name_='nmCid') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nmCid',", "outfile, level, namespace_='', name_='cep', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "% ' '.join(input_data) def gds_validate_boolean_list( self, input_data, node=None, input_name=''): values = input_data.split() for", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, vrBenef) if subclass is not None:", "= None def __init__(self, indRetif=None, nrRecibo=None, tpAmb=None, procEmi=None, verProc=None): self.original_tagname_ = None self.indRetif", "bairro): self.bairro = bairro def get_nmCid(self): return self.nmCid def set_nmCid(self, nmCid): self.nmCid =", "= dtNascto def get_codMunic(self): return self.codMunic def set_codMunic(self, codMunic): self.codMunic = codMunic def", "None: showIndent(outfile, level, pretty_print) outfile.write('<%scodPostal>%s</%scodPostal>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.codPostal), input_name='codPostal')), namespace_, eol_)) def build(self,", "self.name, 
base64.b64encode(self.value), self.name)) def to_etree(self, element): if self.category == MixedContainer.CategoryText: # Prevent exporting", "= None def __init__(self, tpInsc=None, nrInsc=None): self.original_tagname_ = None self.tpInsc = tpInsc self.nrInsc", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nmPai', pretty_print=pretty_print)", "return self.endereco def set_endereco(self, endereco): self.endereco = endereco def hasContent_(self): if ( self.dadosNasc", "namespace_='', name_='nrLograd', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "self.tpLograd is not None or self.dscLograd is not None or self.nrLograd is not", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='cpfInst'):", "cpfBenef.subclass: return cpfBenef.subclass(*args_, **kwargs_) else: return cpfBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "namespace_='', name_='endereco', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "name_='TDadosBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='TDadosBenef', fromsubclass_=False, pretty_print=True): if pretty_print: eol_", "self.dscLograd def set_dscLograd(self, dscLograd): self.dscLograd = dscLograd def get_nrLograd(self): return self.nrLograd def set_nrLograd(self,", "hasContent_(self): if ( self.tpBenef is not None or self.nrBenefic is not None or", "Brasil\"\"\" subclass = None superclass = None def __init__(self, tpLograd=None, dscLograd=None, nrLograd=None, complemento=None,", "if sys.version_info.major == 2: return instring.encode(ExternalEncoding) else: return instring @staticmethod def convert_unicode(instring): if", "buildChildren(self, child_, 
node, nodeName_, fromsubclass_=False): pass # end class nmBenefic class infoBeneficio(GeneratedsSuper): \"\"\"Informações", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('indRetif') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "subclass(*args_, **kwargs_) if fimBeneficio.subclass: return fimBeneficio.subclass(*args_, **kwargs_) else: return fimBeneficio(*args_, **kwargs_) factory =", "def export(self, outfile, level, namespace_='', name_='paisNac', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('paisNac') if imported_ns_def_", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrRecibo') if self.hasContent_(): outfile.write('>%s' % (eol_,", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpInsc') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "class tpPlanRP(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "hasContent_(self): if ( self.dtNascto is not None or self.codMunic is not None or", "if sys.version_info.major == 2: from StringIO import StringIO as IOBuffer else: from io", "None: tzoff = input_data.tzinfo.utcoffset(input_data) if tzoff is not None: total_seconds = tzoff.seconds +", "mo.end() s3 = s1[pos:] s2 += quote_xml_aux(s3) return s2 def quote_xml_aux(inStr): s1 =", "mtvFim): self.mtvFim = mtvFim def hasContent_(self): if ( self.tpBenef is not None or", "self.tpAmb is not None or self.procEmi is not None or self.verProc is not", "return subclass(*args_, **kwargs_) if nrLograd.subclass: return nrLograd.subclass(*args_, **kwargs_) else: return nrLograd(*args_, **kwargs_) factory", "'' if self.idQuota is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sidQuota>%s</%sidQuota>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.idQuota),", "get_ideEvento(self): return self.ideEvento 
def set_ideEvento(self, ideEvento): self.ideEvento = ideEvento def get_ideEmpregador(self): return self.ideEmpregador", "else: # category == MixedContainer.CategoryComplex self.value.to_etree(element) def to_etree_simple(self): if self.content_type == MixedContainer.TypeString: text", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TIdeEveTrab'):", "unicode): result = quote_xml(instring).encode('utf8') else: result = GeneratedsSuper.gds_encode(str(instring)) return result def __eq__(self, other):", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmCid') if", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'paisResid': paisResid_ = child_.text", "return TEmprPJ.subclass(*args_, **kwargs_) else: return TEmprPJ(*args_, **kwargs_) factory = staticmethod(factory) def get_tpInsc(self): return", "raise_parse_error(node, 'Requires sequence of integers') return values def gds_format_float(self, input_data, input_name=''): return ('%.15f'", "value is not None and 'Id' not in already_processed: already_processed.add('Id') self.Id = value", "end class tpLograd class dscLograd(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "level, namespace_='', name_='bairro', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "def get_idQuota(self): return self.idQuota def set_idQuota(self, idQuota): self.idQuota = idQuota def get_cpfInst(self): return", "else: return False def export(self, outfile, level, namespace_='', name_='uf', namespacedef_='', pretty_print=True): imported_ns_def_ =", "self.nmMae = nmMae def get_nmPai(self): return self.nmPai def set_nmPai(self, nmPai): self.nmPai = nmPai", "paisNascto def get_paisNac(self): return self.paisNac def set_paisNac(self, paisNac): self.paisNac = 
paisNac def get_nmMae(self):", "= GeneratedsSuper._FixedOffsetTZ( tzoff, results.group(0)) input_data = input_data[:-6] time_parts = input_data.split('.') if len(time_parts) >", "node, nodeName_, fromsubclass_=False): pass # end class cep class TEnderecoExterior(GeneratedsSuper): \"\"\"Informações do Endereço", "self.gds_format_date(self.dtIniBenef, input_name='dtIniBenef'), namespace_, eol_)) if self.vrBenef is not None: showIndent(outfile, level, pretty_print) outfile.write('<%svrBenef>%s</%svrBenef>%s'", "% ( input_data.hour, input_data.minute, input_data.second, ) else: _svalue = '%02d:%02d:%02d.%s' % ( input_data.hour,", "self.ideEmpregador def set_ideEmpregador(self, ideEmpregador): self.ideEmpregador = ideEmpregador def get_ideBenef(self): return self.ideBenef def set_ideBenef(self,", "'paisNascto': paisNascto_ = child_.text paisNascto_ = self.gds_validate_string(paisNascto_, node, 'paisNascto') self.paisNascto = paisNascto_ elif", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class mtvFim class TIdeEveTrab(GeneratedsSuper):", "**kwargs_) if nrInsc.subclass: return nrInsc.subclass(*args_, **kwargs_) else: return nrInsc(*args_, **kwargs_) factory = staticmethod(factory)", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='idQuota', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "= ival_ # end class fimBeneficio class tpBenef(GeneratedsSuper): subclass = None superclass =", "pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if self.ideEvento is", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='tpInsc', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "else: return False def export(self, outfile, level, namespace_='', name_='eSocial', namespacedef_=' xmlns:ds=\"http://www.w3.org/2000/09/xmldsig#\" ', pretty_print=True):", "None or self.uf is not None or self.paisNascto is not None or self.paisNac", "if 
nmCid.subclass: return nmCid.subclass(*args_, **kwargs_) else: return nmCid(*args_, **kwargs_) factory = staticmethod(factory) def", "values: try: float(value) except (TypeError, ValueError): raise_parse_error(node, 'Requires sequence of floats') return values", "'' if self.brasil is not None: self.brasil.export(outfile, level, namespace_, name_='brasil', pretty_print=pretty_print) if self.exterior", "ValueError): raise_parse_error(node, 'Requires sequence of floats') return values def gds_format_double(self, input_data, input_name=''): return", "= getSubclassFromModule_( CurrentSubclassModule_, mtvFim) if subclass is not None: return subclass(*args_, **kwargs_) if", "name_='fimBeneficio'): pass def exportChildren(self, outfile, level, namespace_='', name_='fimBeneficio', fromsubclass_=False, pretty_print=True): if pretty_print: eol_", "pretty_print: eol_ = '\\n' else: eol_ = '' if self.paisResid is not None:", "= Signature_ # end class eSocial class evtCdBenPrRP(GeneratedsSuper): \"\"\"Evento de cadastro de benefícios", "nmPai(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", "já houver informação anterior de benefícios para o beneficiário identificado em {ideBenef} e", "1, namespace_='', name_='endereco', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "input_name=''): return input_data def gds_format_integer_list(self, input_data, input_name=''): return '%s' % ' '.join(input_data) def", "procEmi): self.procEmi = procEmi def get_verProc(self): return self.verProc def set_verProc(self, verProc): self.verProc =", "'%s' % ' '.join(input_data) def gds_validate_double_list( self, input_data, node=None, input_name=''): values = input_data.split()", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrRecibo') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", 
"outfile.write('<%scodMunic>%s</%scodMunic>%s' % (namespace_, self.gds_format_integer(self.codMunic, input_name='codMunic'), namespace_, eol_)) if self.uf is not None: showIndent(outfile,", "'iniBeneficio': TDadosBeneficio, } USAGE_TEXT = \"\"\" Usage: python <Parser>.py [ -s ] <in_xml_file>", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nmCid', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, complemento) if", "if nmBenefic.subclass: return nmBenefic.subclass(*args_, **kwargs_) else: return nmBenefic(*args_, **kwargs_) factory = staticmethod(factory) def", "else: return cep(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return", "input_data, node=None, input_name=''): values = input_data.split() for value in values: if value not", "outfile, level, already_processed, namespace_='', name_='nrRecibo'): pass def exportChildren(self, outfile, level, namespace_='', name_='nrRecibo', fromsubclass_=False,", "namespace_='', name_='evtCdBenPrRP', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='codMunic') if", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, uf) if subclass is not None: return", "def exportChildren(self, outfile, level, namespace_='', name_='nrBenefic', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='verProc'): pass", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpInsc) if subclass is not 
# -*- coding: utf-8 -*-
#
# esociallib/v2_04/evtCdBenPrRP.py
#
# Generated Tue Oct 10 00:42:21 2017 by generateDS.py version 2.28b.
# Python 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]
#
# Command line arguments:
#   schemas/v2_04/evtCdBenPrRP.xsd
#
# Python bindings for the eSocial event evtCdBenPrRP ("Evento de cadastro
# de benefícios previdenciários de Regimes Próprios" -- registration event
# for social-security benefits under civil-servant pension regimes).
#
# Each XSD complex type is mapped to a class deriving from GeneratedsSuper,
# following the standard generateDS pattern: a static factory(), getters and
# setters per field, hasContent_(), export()/exportAttributes()/
# exportChildren() for serialization, and build()/buildAttributes()/
# buildChildren() for parsing.  Classes defined here include:
#
#   eSocial, evtCdBenPrRP, TIdeEveTrab, TEmprPJ, tpInsc, nrInsc,
#   TDadosBenef, cpfBenef, nmBenefic, dadosNasc, endereco,
#   TEnderecoBrasil, TEnderecoExterior, tpLograd, dscLograd, nrLograd,
#   complemento, bairro, cep, codMunic, uf, nmCid, codPostal, paisResid,
#   paisNascto, paisNac, nmMae, nmPai, ideBenef, infoBeneficio,
#   TDadosBeneficio, tpBenef, nrBenefic, dtIniBenef, dtFimBenef, mtvFim,
#   vrBenef, infoPenMorte, idQuota, cpfInst, fimBeneficio, tpPlanRP,
#   indRetif, nrRecibo, tpAmb, procEmi, verProc
#
# GDSClassesMapping maps element tags to their classes, e.g.:
#
#   GDSClassesMapping = {
#       'altBeneficio': TDadosBeneficio,
#       'dadosBenef': TDadosBenef,
#       'exterior': TEnderecoExterior,
#       'ideEmpregador': TEmprPJ,
#       'ideEvento': TIdeEveTrab,
#       'iniBeneficio': TDadosBeneficio,
#   }
#
# Module entry points: parse(), parseEtree(), parseString(), parseLiteral(),
# each of which builds the object tree from an XML document via parsexml_()
# and can re-export it with rootObj.export(sys.stdout, 0, name_=rootTag).
self.exportChildren(outfile, level + 1, namespace_='', name_='nmMae', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "namespace_, name_='bairro') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "= rootObj.to_etree(None, name_=rootTag, mapping_=mapping) reverse_mapping = rootObj.gds_reverse_node_mapping(mapping) if not silence: content = etree_.tostring(", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'cpfBenef': cpfBenef_ =", "= ideBenef.factory() obj_.build(child_) self.ideBenef = obj_ obj_.original_tagname_ = 'ideBenef' elif nodeName_ == 'infoBeneficio':", "s1.replace('<', '&lt;') s1 = s1.replace('>', '&gt;') if '\"' in s1: if \"'\" in", "name_='bairro'): pass def exportChildren(self, outfile, level, namespace_='', name_='bairro', fromsubclass_=False, pretty_print=True): pass def build(self,", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrBenefic') if", "self.vrBenef = vrBenef self.infoPenMorte = infoPenMorte def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class tpInsc class nrInsc(GeneratedsSuper): subclass", "def getValue(self): return self.value def getName(self): return self.name def export(self, outfile, level, name,", "paisResid_ = child_.text paisResid_ = self.gds_validate_string(paisResid_, node, 'paisResid') self.paisResid = paisResid_ elif nodeName_", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, idQuota) if", "set_nmCid(self, nmCid): self.nmCid = nmCid def get_codPostal(self): return self.codPostal def set_codPostal(self, codPostal): self.codPostal", "if self.paisNascto is not None: showIndent(outfile, level, pretty_print) 
outfile.write('<%spaisNascto>%s</%spaisNascto>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisNascto), input_name='paisNascto')),", "self.tpAmb = tpAmb self.procEmi = procEmi self.verProc = verProc def factory(*args_, **kwargs_): if", "CurrentSubclassModule_, complemento) if subclass is not None: return subclass(*args_, **kwargs_) if complemento.subclass: return", "' '(\"true\", \"1\", \"false\", \"0\")') return values def gds_validate_datetime(self, input_data, node=None, input_name=''): return", "(namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed,", "level, already_processed, namespace_, name_='paisNac') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "namespace_='', name_='codPostal', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, ))", "None if input_data[-1] == 'Z': tz = GeneratedsSuper._FixedOffsetTZ(0, 'UTC') input_data = input_data[:-1] else:", "subclass(*args_, **kwargs_) if procEmi.subclass: return procEmi.subclass(*args_, **kwargs_) else: return procEmi(*args_, **kwargs_) factory =", "factory = staticmethod(factory) def get_tpLograd(self): return self.tpLograd def set_tpLograd(self, tpLograd): self.tpLograd = tpLograd", "nrInsc=None): self.original_tagname_ = None self.tpInsc = tpInsc self.nrInsc = nrInsc def factory(*args_, **kwargs_):", "= GenerateDSNamespaceDefs_.get('vrBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "else: initvalue_ = dtIniBenef self.dtIniBenef = initvalue_ self.vrBenef = vrBenef self.infoPenMorte = infoPenMorte", "idQuota self.cpfInst = cpfInst def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass", "self.original_tagname_ = None self.tpBenef = tpBenef self.nrBenefic = nrBenefic if isinstance(dtFimBenef, BaseStrType_): initvalue_", 
"CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpBenef) if subclass is not", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='infoBeneficio'):", "namespace_='', name_='indRetif', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, ))", "+ (86400 * tzoff.days) if total_seconds == 0: _svalue += 'Z' else: if", "= None def __init__(self, Id=None, ideEvento=None, ideEmpregador=None, ideBenef=None, infoBeneficio=None): self.original_tagname_ = None self.Id", "None: return subclass(*args_, **kwargs_) if paisNac.subclass: return paisNac.subclass(*args_, **kwargs_) else: return paisNac(*args_, **kwargs_)", "sequence of doubles') return values def gds_format_boolean(self, input_data, input_name=''): return ('%s' % input_data).lower()", "self.data_type def set_container(self, container): self.container = container def get_container(self): return self.container def set_child_attrs(self,", "(namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisNascto), input_name='paisNascto')), namespace_, eol_)) if self.paisNac is not None: showIndent(outfile, level, pretty_print)", "pass def exportChildren(self, outfile, level, namespace_='', name_='ideBenef', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ =", "(namespace_, self.gds_format_date(self.dtIniBenef, input_name='dtIniBenef'), namespace_, eol_)) if self.vrBenef is not None: showIndent(outfile, level, pretty_print)", "eol_ = '\\n' else: eol_ = '' if self.dtNascto is not None: showIndent(outfile,", "= getSubclassFromModule_( CurrentSubclassModule_, nrRecibo) if subclass is not None: return subclass(*args_, **kwargs_) if", "%s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'tpBenef') self.tpBenef = ival_ elif nodeName_", "= iniBeneficio def get_altBeneficio(self): return self.altBeneficio def 
set_altBeneficio(self, altBeneficio): self.altBeneficio = altBeneficio def", "level, pretty_print) outfile.write('<%sprocEmi>%s</%sprocEmi>%s' % (namespace_, self.gds_format_integer(self.procEmi, input_name='procEmi'), namespace_, eol_)) if self.verProc is not", "('%s' % input_data).lower() def gds_validate_boolean(self, input_data, node=None, input_name=''): return input_data def gds_format_boolean_list(self, input_data,", "self.nrBenefic = nrBenefic def get_dtIniBenef(self): return self.dtIniBenef def set_dtIniBenef(self, dtIniBenef): self.dtIniBenef = dtIniBenef", "'\\n' else: eol_ = '' if self.idQuota is not None: showIndent(outfile, level, pretty_print)", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='fimBeneficio') if self.hasContent_(): outfile.write('>%s' %", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='infoPenMorte') if self.hasContent_(): outfile.write('>%s'", "or self.nrLograd is not None or self.complemento is not None or self.bairro is", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpPlanRP', pretty_print=pretty_print)", "set_cpfBenef(self, cpfBenef): self.cpfBenef = cpfBenef def get_nmBenefic(self): return self.nmBenefic def set_nmBenefic(self, nmBenefic): self.nmBenefic", "already_processed): value = find_attr_value_('Id', node) if value is not None and 'Id' not", "def exportChildren(self, outfile, level, namespace_='', name_='codPostal', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, endereco) if subclass is", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, ideBenef) if", "elif nodeName_ == 'dtIniBenef': sval_ = child_.text dval_ = 
self.gds_parse_date(sval_) self.dtIniBenef = dval_", "% (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrBenefic), input_name='nrBenefic')), namespace_, eol_)) if self.dtFimBenef is not None: showIndent(outfile, level,", "else: return False def export(self, outfile, level, namespace_='', name_='TEnderecoExterior', namespacedef_='', pretty_print=True): imported_ns_def_ =", "): return True else: return False def export(self, outfile, level, namespace_='', name_='TEnderecoBrasil', namespacedef_='',", "subclass = getSubclassFromModule_( CurrentSubclassModule_, dtFimBenef) if subclass is not None: return subclass(*args_, **kwargs_)", "already_processed, namespace_='', name_='tpBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpBenef', fromsubclass_=False, pretty_print=True): pass", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%suf>%s</%suf>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.uf), input_name='uf')), namespace_, eol_)) if", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class dtFimBenef", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('cep') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "namespace_='', name_='eSocial'): pass def exportChildren(self, outfile, level, namespace_='', name_='eSocial', fromsubclass_=False, pretty_print=True): if pretty_print:", "python # -*- coding: utf-8 -*- # # Generated Tue Oct 10 00:42:21", "TypeBase64 = 8 def __init__(self, category, content_type, name, value): self.category = category self.content_type", "eol_)) if self.cpfInst is not None: showIndent(outfile, level, pretty_print) outfile.write('<%scpfInst>%s</%scpfInst>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfInst),", "BaseStrType_ = basestring else: BaseStrType_ = str def parsexml_(infile, parser=None, **kwargs): if parser", "namespace_='', 
name_='TDadosBenef', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = ''", "return True else: return False def export(self, outfile, level, namespace_='', name_='dadosNasc', namespacedef_='', pretty_print=True):", "Validação: Só pode ser informado se já houver informação anterior de benefícios para", "input_name=''): if input_data.microsecond == 0: _svalue = '%02d:%02d:%02d' % ( input_data.hour, input_data.minute, input_data.second,", "= child_.text nrInsc_ = self.gds_validate_string(nrInsc_, node, 'nrInsc') self.nrInsc = nrInsc_ # end class", "rootElement = rootObj.to_etree(None, name_=rootTag, mapping_=mapping) reverse_mapping = rootObj.gds_reverse_node_mapping(mapping) if not silence: content =", "TDadosBenef.subclass(*args_, **kwargs_) else: return TDadosBenef(*args_, **kwargs_) factory = staticmethod(factory) def get_dadosNasc(self): return self.dadosNasc", "): return True else: return False def export(self, outfile, level, namespace_='', name_='nmBenefic', namespacedef_='',", "namespace_='', name_='tpAmb'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpAmb', fromsubclass_=False, pretty_print=True): pass def", "pass def exportChildren(self, outfile, level, namespace_='', name_='uf', fromsubclass_=False, pretty_print=True): pass def build(self, node):", "level, namespace_, name_='infoPenMorte', pretty_print=pretty_print) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "outfile.write('<%scpfBenef>%s</%scpfBenef>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfBenef), input_name='cpfBenef')), namespace_, eol_)) if self.nmBenefic is not None: showIndent(outfile,", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='dscLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' %", "outfile.write('<%spaisResid>%s</%spaisResid>%s' % (namespace_, 
self.gds_encode(self.gds_format_string(quote_xml(self.paisResid), input_name='paisResid')), namespace_, eol_)) if self.dscLograd is not None: showIndent(outfile,", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%spaisNascto>%s</%spaisNascto>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisNascto), input_name='paisNascto')), namespace_, eol_)) if", "% ( self.name, self.value, self.name)) elif self.content_type == MixedContainer.TypeBase64: outfile.write('<%s>%s</%s>' % ( self.name,", "class TEmprPJ(GeneratedsSuper): \"\"\"Informações do Empregador PJ\"\"\" subclass = None superclass = None def", "**kwargs_) else: return tpBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ):", "nodeName_ == 'tpPlanRP': sval_ = child_.text try: ival_ = int(sval_) except (TypeError, ValueError)", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtNascto'): pass def exportChildren(self, outfile, level, namespace_='',", "attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'paisResid':", "self.nrBenefic def set_nrBenefic(self, nrBenefic): self.nrBenefic = nrBenefic def get_dtFimBenef(self): return self.dtFimBenef def set_dtFimBenef(self,", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, bairro)", "which there is # a namespace prefix definition, will export that definition in", "set_codPostal(self, codPostal): self.codPostal = codPostal def hasContent_(self): if ( self.paisResid is not None", "def exportChildren(self, outfile, level, namespace_='', name_='dadosNasc', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n'", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'dtNascto': sval_", "TypeDouble = 6 TypeBoolean = 7 TypeBase64 = 8 def 
__init__(self, category, content_type,", "get_tpPlanRP(self): return self.tpPlanRP def set_tpPlanRP(self, tpPlanRP): self.tpPlanRP = tpPlanRP def get_iniBeneficio(self): return self.iniBeneficio", "outfile, level, namespace_='', name_='TEnderecoExterior', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TEnderecoExterior') if imported_ns_def_ is not", "generatedsnamespaces, if it is importable, must contain # a dictionary named GeneratedsNamespaceDefs. This", "*= -1 tz = GeneratedsSuper._FixedOffsetTZ( tzoff, results.group(0)) input_data = input_data[:-6] time_parts = input_data.split('.')", "= TEnderecoExterior.factory() obj_.build(child_) self.exterior = obj_ obj_.original_tagname_ = 'exterior' # end class endereco", "pass # end class dtIniBenef class vrBenef(GeneratedsSuper): subclass = None superclass = None", "eol_)) if self.iniBeneficio is not None: self.iniBeneficio.export(outfile, level, namespace_, name_='iniBeneficio', pretty_print=pretty_print) if self.altBeneficio", "set_nmPai(self, nmPai): self.nmPai = nmPai def hasContent_(self): if ( self.dtNascto is not None", "level, already_processed, namespace_='', name_='complemento'): pass def exportChildren(self, outfile, level, namespace_='', name_='complemento', fromsubclass_=False, pretty_print=True):", "to exit') # # Globals # ExternalEncoding = 'ascii' Tag_pattern_ = re_.compile(r'({.*})?(.*)') String_cleanup_pat_", "if namespace is not None: value = attrs.get('{%s}%s' % (namespace, name, )) return", "input_data def gds_format_date(self, input_data, input_name=''): _svalue = '%04d-%02d-%02d' % ( input_data.year, input_data.month, input_data.day,", "= Tag_pattern_.match(child.tag).groups()[-1] self.buildChildren(child, node, nodeName_) return self def buildAttributes(self, node, attrs, already_processed): pass", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEmprPJ') if self.hasContent_(): 
outfile.write('>%s'", "if TDadosBenef.subclass: return TDadosBenef.subclass(*args_, **kwargs_) else: return TDadosBenef(*args_, **kwargs_) factory = staticmethod(factory) def", "= '\\n' else: eol_ = '' if self.ideEvento is not None: self.ideEvento.export(outfile, level,", "name_='codPostal'): pass def exportChildren(self, outfile, level, namespace_='', name_='codPostal', fromsubclass_=False, pretty_print=True): pass def build(self,", "self.gds_validate_integer(ival_, node, 'indRetif') self.indRetif = ival_ elif nodeName_ == 'nrRecibo': nrRecibo_ = child_.text", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nrInsc', pretty_print=pretty_print)", "class tpInsc class nrInsc(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_", "None or self.uf is not None ): return True else: return False def", "return value class GDSParseError(Exception): pass def raise_parse_error(node, msg): msg = '%s (element %s/line", "import BytesIO as IOBuffer parser = None doc = parsexml_(IOBuffer(inString), parser) rootNode =", "set_name(self, name): self.name = name def get_name(self): return self.name def set_data_type(self, data_type): self.data_type", "pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if self.tpPlanRP is", "= getSubclassFromModule_( CurrentSubclassModule_, tpLograd) if subclass is not None: return subclass(*args_, **kwargs_) if", "( input_data.hour, input_data.minute, input_data.second, ) else: _svalue = '%02d:%02d:%02d.%s' % ( input_data.hour, input_data.minute,", "self.original_tagname_ = None self.brasil = brasil self.exterior = exterior def factory(*args_, **kwargs_): if", "= None if isinstance(dtNascto, BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtNascto, '%Y-%m-%d').date() else: initvalue_ = dtNascto", "factory = staticmethod(factory) def get_brasil(self): return self.brasil def set_brasil(self, brasil): self.brasil = 
brasil", "'\"' in s1: if \"'\" in s1: s1 = '\"%s\"' % s1.replace('\"', \"&quot;\")", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class cpfInst", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='evtCdBenPrRP'): if self.Id is not None and 'Id'", ")) return value class GDSParseError(Exception): pass def raise_parse_error(node, msg): msg = '%s (element", "None or self.vrBenef is not None or self.infoPenMorte is not None ): return", "is not None or self.dadosBenef is not None ): return True else: return", "= obj_ obj_.original_tagname_ = 'infoBeneficio' # end class evtCdBenPrRP class ideBenef(GeneratedsSuper): \"\"\"Identificação do", "% ( self.category, self.content_type, self.name, self.value)) elif self.category == MixedContainer.CategorySimple: showIndent(outfile, level) outfile.write(", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpLograd') if self.hasContent_(): outfile.write('>%s' % (eol_,", "+ 1, namespace_='', name_='nrInsc', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "self.value.strip(): outfile.write(self.value) elif self.category == MixedContainer.CategorySimple: self.exportSimple(outfile, level, name) else: # category ==", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='vrBenef'):", "name_='ideBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='ideBenef',", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='vrBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='infoBeneficio'): pass def exportChildren(self, outfile,", 
"export(self, outfile, level, namespace_='', name_='nmPai', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmPai') if imported_ns_def_ is", "complemento_ = child_.text complemento_ = self.gds_validate_string(complemento_, node, 'complemento') self.complemento = complemento_ elif nodeName_", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='paisResid'): pass def exportChildren(self,", "{} rootElement = rootObj.to_etree(None, name_=rootTag, mapping_=mapping) reverse_mapping = rootObj.gds_reverse_node_mapping(mapping) if not silence: content", "name_='procEmi'): pass def exportChildren(self, outfile, level, namespace_='', name_='procEmi', fromsubclass_=False, pretty_print=True): pass def build(self,", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dtFimBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "True else: return False def export(self, outfile, level, namespace_='', name_='nrRecibo', namespacedef_='', pretty_print=True): imported_ns_def_", "return True else: return False def export(self, outfile, level, namespace_='', name_='nmMae', namespacedef_='', pretty_print=True):", "return True else: return False def export(self, outfile, level, namespace_='', name_='TEnderecoExterior', namespacedef_='', pretty_print=True):", "except (TypeError, ValueError) as exp: raise_parse_error(child_, 'requires float or double: %s' % exp)", "value = None if len(attr_parts) == 1: value = attrs.get(attr_name) elif len(attr_parts) ==", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='infoBeneficio') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "showIndent(outfile, level, pretty_print) outfile.write('<%smtvFim>%s</%smtvFim>%s' % (namespace_, self.gds_format_integer(self.mtvFim, input_name='mtvFim'), namespace_, eol_)) def build(self, node):", "= category 
self.content_type = content_type self.name = name self.value = value def getCategory(self):", "the DOM. doc = None if not silence: sys.stdout.write('#from evtCdBenPrRP import *\\n\\n') sys.stdout.write('import", "subclass = getSubclassFromModule_( CurrentSubclassModule_, evtCdBenPrRP) if subclass is not None: return subclass(*args_, **kwargs_)", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='vrBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TEnderecoExterior'): pass def exportChildren(self, outfile,", "namespace_='', name_='nrBenefic', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "= paisNascto_ elif nodeName_ == 'paisNac': paisNac_ = child_.text paisNac_ = self.gds_validate_string(paisNac_, node,", "is not None or self.cpfInst is not None ): return True else: return", "if input_data.microsecond == 0: _svalue = '%02d:%02d:%02d' % ( input_data.hour, input_data.minute, input_data.second, )", "elif self.content_type == MixedContainer.TypeDouble: text = '%g' % self.value elif self.content_type == MixedContainer.TypeBase64:", "name_='dadosNasc'): pass def exportChildren(self, outfile, level, namespace_='', name_='dadosNasc', fromsubclass_=False, pretty_print=True): if pretty_print: eol_", "level + 1, namespace_='', name_='nrRecibo', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "'eSocial' rootClass = eSocial rootObj = rootClass.factory() rootObj.build(rootNode) # Enable Python to collect", "sample table is: # # # File: generatedsnamespaces.py # # GenerateDSNamespaceDefs = {", "dval_ = self.gds_parse_date(sval_) self.dtIniBenef = dval_ elif nodeName_ == 'vrBenef': sval_ = child_.text", "is not None: subclass = getSubclassFromModule_( 
CurrentSubclassModule_, nrInsc) if subclass is not None:", "def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed) for child in node:", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nrInsc class TDadosBenef(GeneratedsSuper): \"\"\"Dados", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='cep') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "pass # end class uf class paisNascto(GeneratedsSuper): subclass = None superclass = None", "subclass(*args_, **kwargs_) if tpPlanRP.subclass: return tpPlanRP.subclass(*args_, **kwargs_) else: return tpPlanRP(*args_, **kwargs_) factory =", "namespacedef_='', pretty_print=True) return rootObj def parseEtree(inFileName, silence=False): parser = None doc = parsexml_(inFileName,", "1, namespace_='', name_='TEmprPJ', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "mapping): return dict(((v, k) for k, v in mapping.iteritems())) @staticmethod def gds_encode(instring): if", "pass # end class cep class TEnderecoExterior(GeneratedsSuper): \"\"\"Informações do Endereço no Exterior\"\"\" subclass", "self.evtCdBenPrRP = evtCdBenPrRP self.Signature = Signature def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not", "eol_ = '\\n' else: eol_ = '' if self.indRetif is not None: showIndent(outfile,", "in values: try: int(value) except (TypeError, ValueError): raise_parse_error(node, 'Requires sequence of integers') return", "not None: return subclass(*args_, **kwargs_) if cep.subclass: return cep.subclass(*args_, **kwargs_) else: return cep(*args_,", "elif nodeName_ == 'endereco': obj_ = endereco.factory() obj_.build(child_) self.endereco = obj_ obj_.original_tagname_ =", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='TEmprPJ', pretty_print=pretty_print) 
showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s'", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='codPostal', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "nmBenefic): self.nmBenefic = nmBenefic def get_dadosBenef(self): return self.dadosBenef def set_dadosBenef(self, dadosBenef): self.dadosBenef =", "outfile.write('<%sbairro>%s</%sbairro>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.bairro), input_name='bairro')), namespace_, eol_)) if self.cep is not None: showIndent(outfile,", "@classmethod def gds_parse_datetime(cls, input_data): tz = None if input_data[-1] == 'Z': tz =", "else: if s1.find('\"') != -1: s1 = s1.replace('\"', '\\\\\"') if s1.find('\\n') == -1:", "not None: return subclass(*args_, **kwargs_) if nrLograd.subclass: return nrLograd.subclass(*args_, **kwargs_) else: return nrLograd(*args_,", "self.complemento is not None or self.bairro is not None or self.cep is not", "paisNac.subclass(*args_, **kwargs_) else: return paisNac(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if (", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='TEnderecoExterior', pretty_print=pretty_print) showIndent(outfile, level,", "= parsexml_(IOBuffer(inString), parser) rootNode = doc.getroot() rootTag, rootClass = get_root_tag(rootNode) if rootClass is", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='indRetif') if self.hasContent_(): outfile.write('>%s' % (eol_,", "return False def export(self, outfile, level, namespace_='', name_='codMunic', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('codMunic')", "def set_paisNascto(self, paisNascto): self.paisNascto = paisNascto def get_paisNac(self): return self.paisNac def set_paisNac(self, paisNac):", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('evtCdBenPrRP') if 
imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "self.ideBenef = ideBenef self.infoBeneficio = infoBeneficio def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not", "+= '{0:02d}:{1:02d}'.format(hours, minutes) return _svalue def gds_validate_simple_patterns(self, patterns, target): # pat is a", "False def export(self, outfile, level, namespace_='', name_='eSocial', namespacedef_=' xmlns:ds=\"http://www.w3.org/2000/09/xmldsig#\" ', pretty_print=True): imported_ns_def_ =", "dadosBenef): self.dadosBenef = dadosBenef def hasContent_(self): if ( self.cpfBenef is not None or", "**kwargs_) if tpInsc.subclass: return tpInsc.subclass(*args_, **kwargs_) else: return tpInsc(*args_, **kwargs_) factory = staticmethod(factory)", "target): # pat is a list of lists of strings/patterns. We should: #", "value = attrs.get('{%s}%s' % (namespace, name, )) return value class GDSParseError(Exception): pass def", "1000000))[2:], ) if input_data.tzinfo is not None: tzoff = input_data.tzinfo.utcoffset(input_data) if tzoff is", "cpfInst.subclass(*args_, **kwargs_) else: return cpfInst(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if (", "exp) ival_ = self.gds_validate_integer(ival_, node, 'indRetif') self.indRetif = ival_ elif nodeName_ == 'nrRecibo':", "input_name='tpPlanRP'), namespace_, eol_)) if self.iniBeneficio is not None: self.iniBeneficio.export(outfile, level, namespace_, name_='iniBeneficio', pretty_print=pretty_print)", "(namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cep), input_name='cep')), namespace_, eol_)) if self.codMunic is not None: showIndent(outfile, level, pretty_print)", "else: return nmCid(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return", "or self.cpfInst is not None ): return True else: return False def export(self,", "('%.15f' % input_data).rstrip('0') def gds_validate_float(self, input_data, node=None, input_name=''): return 
input_data def gds_format_float_list(self, input_data,", "class endereco class TEnderecoBrasil(GeneratedsSuper): \"\"\"Informações do Endereço no Brasil\"\"\" subclass = None superclass", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='mtvFim'): pass def exportChildren(self, outfile, level, namespace_='', name_='mtvFim',", "input_data, input_name=''): return input_data def gds_validate_string(self, input_data, node=None, input_name=''): if not input_data: return", "dictionary named GeneratedsNamespaceDefs. This Python dictionary # should map element type names (strings)", "import sys import re as re_ import base64 import datetime as datetime_ import", "% exp) ival_ = self.gds_validate_integer(ival_, node, 'indRetif') self.indRetif = ival_ elif nodeName_ ==", "self, input_data, node=None, input_name=''): values = input_data.split() for value in values: try: int(value)", "None or self.exterior is not None ): return True else: return False def", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%scomplemento>%s</%scomplemento>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.complemento), input_name='complemento')), namespace_, eol_))", "not None: tzoff_parts = results.group(2).split(':') tzoff = int(tzoff_parts[0]) * 60 + int(tzoff_parts[1]) if", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, ideBenef) if subclass is not None:", "self.endereco def set_endereco(self, endereco): self.endereco = endereco def hasContent_(self): if ( self.dadosNasc is", "**kwargs_) else: return nrBenefic(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ):", "uf.subclass: return uf.subclass(*args_, **kwargs_) else: return uf(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nrBenefic)", "available from 
http://ipython.scipy.org/. # ## from IPython.Shell import IPShellEmbed ## args = ''", "IOBuffer parser = None doc = parsexml_(IOBuffer(inString), parser) rootNode = doc.getroot() rootTag, rootClass", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, ideBenef)", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('mtvFim') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "not None: return subclass(*args_, **kwargs_) if tpInsc.subclass: return tpInsc.subclass(*args_, **kwargs_) else: return tpInsc(*args_,", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nrRecibo class tpAmb(GeneratedsSuper): subclass", "content as empty lines. if self.value.strip(): outfile.write(self.value) elif self.category == MixedContainer.CategorySimple: self.exportSimple(outfile, level,", "return nrRecibo.subclass(*args_, **kwargs_) else: return nrRecibo(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if", "export(self, outfile, level, namespace_='', name_='TEnderecoExterior', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TEnderecoExterior') if imported_ns_def_ is", "def get_ideBenef(self): return self.ideBenef def set_ideBenef(self, ideBenef): self.ideBenef = ideBenef def get_infoBeneficio(self): return", "eol_ = '\\n' else: eol_ = '' if self.brasil is not None: self.brasil.export(outfile,", "tzname(self, dt): return self.__name def dst(self, dt): return None def gds_format_string(self, input_data, input_name=''):", "## args = '' ## ipshell = IPShellEmbed(args, ## banner = 'Dropping into", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='TIdeEveTrab', pretty_print=pretty_print) showIndent(outfile, level,", "if ( self.idQuota is not None or self.cpfInst is not None ): return", "return instring @staticmethod def 
convert_unicode(instring): if isinstance(instring, str): result = quote_xml(instring) elif sys.version_info.major", "== 'dadosBenef': obj_ = TDadosBenef.factory() obj_.build(child_) self.dadosBenef = obj_ obj_.original_tagname_ = 'dadosBenef' #", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='paisNac') if", "rootObj.exportLiteral(sys.stdout, 0, name_=rootTag) sys.stdout.write(')\\n') return rootObj def main(): args = sys.argv[1:] if len(args)", "} USAGE_TEXT = \"\"\" Usage: python <Parser>.py [ -s ] <in_xml_file> \"\"\" def", "cpfInst def hasContent_(self): if ( self.idQuota is not None or self.cpfInst is not", "+= '+' hours = total_seconds // 3600 minutes = (total_seconds - (hours *", "set_nrLograd(self, nrLograd): self.nrLograd = nrLograd def get_complemento(self): return self.complemento def set_complemento(self, complemento): self.complemento", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, paisResid)", "= nrBenefic if isinstance(dtFimBenef, BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtFimBenef, '%Y-%m-%d').date() else: initvalue_ = dtFimBenef", "tpBenef=None, nrBenefic=None, dtFimBenef=None, mtvFim=None): self.original_tagname_ = None self.tpBenef = tpBenef self.nrBenefic = nrBenefic", "gds_format_float(self, input_data, input_name=''): return ('%.15f' % input_data).rstrip('0') def gds_validate_float(self, input_data, node=None, input_name=''): return", "'codPostal') self.codPostal = codPostal_ # end class TEnderecoExterior class paisResid(GeneratedsSuper): subclass = None", "prefix # definitions. 
The export method for any class for which there is", "None: showIndent(outfile, level, pretty_print) outfile.write('<%spaisResid>%s</%spaisResid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisResid), input_name='paisResid')), namespace_, eol_)) if self.dscLograd", "outfile, level, namespace_='', name_='nmPai', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "= dadosBenef def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_(", "namespace_, name_='vrBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'evtCdBenPrRP': obj_ = evtCdBenPrRP.factory()", "node, 'idQuota') self.idQuota = idQuota_ elif nodeName_ == 'cpfInst': cpfInst_ = child_.text cpfInst_", "Change this to redirect the generated superclass module to use a # specific", "int(sval_) except (TypeError, ValueError) as exp: raise_parse_error(child_, 'requires integer: %s' % exp) ival_", "XML representation of that element. 
See the export method of # any generated", "nodeName_ == 'endereco': obj_ = endereco.factory() obj_.build(child_) self.endereco = obj_ obj_.original_tagname_ = 'endereco'", "level + 1, namespace_='', name_='procEmi', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "name_='nrBenefic', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrBenefic') if imported_ns_def_ is not None: namespacedef_ =", "self.content_type == MixedContainer.TypeDouble: outfile.write('<%s>%g</%s>' % ( self.name, self.value, self.name)) elif self.content_type == MixedContainer.TypeBase64:", "None ): return True else: return False def export(self, outfile, level, namespace_='', name_='TDadosBenef',", "return '' else: return input_data def gds_format_base64(self, input_data, input_name=''): return base64.b64encode(input_data) def gds_validate_base64(self,", "MixedContainer.TypeFloat or \\ self.content_type == MixedContainer.TypeDecimal: outfile.write('<%s>%f</%s>' % ( self.name, self.value, self.name)) elif", "# end class nrInsc class TDadosBenef(GeneratedsSuper): \"\"\"Dados de beneficiário\"\"\" subclass = None superclass", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dtIniBenef') if self.hasContent_():", "else: element.text += self.value elif self.category == MixedContainer.CategorySimple: subelement = etree_.SubElement( element, '%s'", "s1 return s1 def quote_python(inStr): s1 = inStr if s1.find(\"'\") == -1: if", "= infoPenMorte def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_(", "name_='ideBenef', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='evtCdBenPrRP', pretty_print=pretty_print) showIndent(outfile, level, pretty_print)", "+ namespacedef_ or '', 
)) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmMae')", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpLograd') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "MixedContainer.TypeDecimal): text = '%f' % self.value elif self.content_type == MixedContainer.TypeDouble: text = '%g'", "pretty_print: eol_ = '\\n' else: eol_ = '' if self.brasil is not None:", "return False def export(self, outfile, level, namespace_='', name_='idQuota', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('idQuota')", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='procEmi') if self.hasContent_(): outfile.write('>%s' %", "node, 'Requires sequence of booleans ' '(\"true\", \"1\", \"false\", \"0\")') return values def", "self.gds_format_integer(self.tpInsc, input_name='tpInsc'), namespace_, eol_)) if self.nrInsc is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrInsc>%s</%snrInsc>%s'", "the following line where and when you want to drop into the #", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='indRetif',", "nmMae class nmPai(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ =", "self.complemento is not None or self.bairro is not None or self.nmCid is not", "namespace_, eol_)) if self.paisNac is not None: showIndent(outfile, level, pretty_print) outfile.write('<%spaisNac>%s</%spaisNac>%s' % (namespace_,", "outfile.write(self.value) elif self.category == MixedContainer.CategorySimple: self.exportSimple(outfile, level, name) else: # category == MixedContainer.CategoryComplex", "exportChildren(self, outfile, level, namespace_='', name_='nrRecibo', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "self.nrBenefic is not None or 
self.dtFimBenef is not None or self.mtvFim is not", "**kwargs_) else: return evtCdBenPrRP(*args_, **kwargs_) factory = staticmethod(factory) def get_ideEvento(self): return self.ideEvento def", "try: float(value) except (TypeError, ValueError): raise_parse_error(node, 'Requires sequence of floats') return values def", "%d)' % (msg, node.tag, node.sourceline, ) raise GDSParseError(msg) class MixedContainer: # Constants for", "See the export method of # any generated element type class for a", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpPlanRP) if subclass is not None: return subclass(*args_,", "fromsubclass_=False): pass # end class paisNascto class paisNac(GeneratedsSuper): subclass = None superclass =", "and inStr or '%s' % inStr) s2 = '' pos = 0 matchobjects", "tag = GeneratedsSuper.Tag_strip_pattern_.sub('', node.tag) if tag: path_list.append(tag) self.get_path_list_(node.getparent(), path_list) def get_class_obj_(self, node, default_class=None):", "= GenerateDSNamespaceDefs_.get('cep') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "None: showIndent(outfile, level, pretty_print) outfile.write('<%sSignature>%s</%sSignature>%s' % ('ds:', self.gds_encode(self.gds_format_string(quote_xml(self.Signature), input_name='Signature')), 'ds:', eol_)) def build(self,", "# IPython shell: # ipshell('<some message> -- Entering ipshell.\\nHit Ctrl-D to exit') #", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='codPostal'): pass def exportChildren(self, outfile, level, namespace_='', name_='codPostal',", "def get_dscLograd(self): return self.dscLograd def set_dscLograd(self, dscLograd): self.dscLograd = dscLograd def get_nrLograd(self): return", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='vrBenef') if", "'%Y-%m-%d').date() else: initvalue_ = dtFimBenef self.dtFimBenef = initvalue_ self.mtvFim = 
mtvFim def factory(*args_,", "= '\\n' else: eol_ = '' if self.tpLograd is not None: showIndent(outfile, level,", "informação de término de benefícios.\"\"\" subclass = None superclass = None def __init__(self,", "outfile.write('<%sdtNascto>%s</%sdtNascto>%s' % (namespace_, self.gds_format_date(self.dtNascto, input_name='dtNascto'), namespace_, eol_)) if self.codMunic is not None: showIndent(outfile,", "CurrentSubclassModule_, nmCid) if subclass is not None: return subclass(*args_, **kwargs_) if nmCid.subclass: return", "'-': tzoff *= -1 tz = GeneratedsSuper._FixedOffsetTZ( tzoff, results.group(0)) input_data = input_data[:-6] time_parts", "schema namespace prefix # definitions. The export method for any class for which", "markup chars, but do not modify CDATA sections.\" if not inStr: return ''", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class vrBenef class infoPenMorte(GeneratedsSuper): \"\"\"Informações", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmPai>%s</%snmPai>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmPai), input_name='nmPai')), namespace_, eol_))", "else: return endereco(*args_, **kwargs_) factory = staticmethod(factory) def get_brasil(self): return self.brasil def set_brasil(self,", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='idQuota', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "get_infoPenMorte(self): return self.infoPenMorte def set_infoPenMorte(self, infoPenMorte): self.infoPenMorte = infoPenMorte def hasContent_(self): if (", "self.tpInsc def set_tpInsc(self, tpInsc): self.tpInsc = tpInsc def get_nrInsc(self): return self.nrInsc def set_nrInsc(self,", "class nmMae(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "name_='dscLograd'): pass def exportChildren(self, outfile, level, namespace_='', name_='dscLograd', fromsubclass_=False, 
pretty_print=True): pass def build(self,", "not None or self.iniBeneficio is not None or self.altBeneficio is not None or", "elif self.category == MixedContainer.CategorySimple: showIndent(outfile, level) outfile.write( 'model_.MixedContainer(%d, %d, \"%s\", \"%s\"),\\n' % (", "node, 'tpLograd') self.tpLograd = tpLograd_ elif nodeName_ == 'dscLograd': dscLograd_ = child_.text dscLograd_", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='dtIniBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "end class TDadosBenef class dadosNasc(GeneratedsSuper): \"\"\"Informações de nascimento do beneficiário\"\"\" subclass = None", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, paisResid) if subclass", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, eSocial) if subclass is not None:", "Signature def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_,", "if self.nmCid is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmCid>%s</%snmCid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmCid), input_name='nmCid')),", "self.cep = cep self.codMunic = codMunic self.uf = uf def factory(*args_, **kwargs_): if", "CurrentSubclassModule_, TIdeEveTrab) if subclass is not None: return subclass(*args_, **kwargs_) if TIdeEveTrab.subclass: return", "tpAmb class procEmi(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ =", "level, pretty_print) outfile.write('<%snmCid>%s</%snmCid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmCid), input_name='nmCid')), namespace_, eol_)) if self.codPostal is not", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtIniBenef'): pass", 
"'ideEvento': obj_ = TIdeEveTrab.factory() obj_.build(child_) self.ideEvento = obj_ obj_.original_tagname_ = 'ideEvento' elif nodeName_", "else: return nrBenefic(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return", "def exportChildren(self, outfile, level, namespace_='', name_='paisNac', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "modify CDATA sections.\" if not inStr: return '' s1 = (isinstance(inStr, BaseStrType_) and", "or self.vrBenef is not None or self.infoPenMorte is not None ): return True", "+ 1, namespace_='', name_='TEnderecoBrasil', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "nrLograd(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", "level, already_processed, namespace_='', name_='cpfBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='cpfBenef', fromsubclass_=False, pretty_print=True):", "return self.cpfBenef def set_cpfBenef(self, cpfBenef): self.cpfBenef = cpfBenef def get_nmBenefic(self): return self.nmBenefic def", "name_='nrRecibo', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "== 'bairro': bairro_ = child_.text bairro_ = self.gds_validate_string(bairro_, node, 'bairro') self.bairro = bairro_", "export(self, outfile, level, namespace_='', name_='nrRecibo', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrRecibo') if imported_ns_def_ is", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'dtNascto': sval_ = child_.text dval_", "return dtFimBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "is not None or self.nrBenefic is not None or self.dtFimBenef is not None", "value = attrs.get(attr_name) elif 
len(attr_parts) == 2: prefix, name = attr_parts namespace =", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('idQuota') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "= GenerateDSNamespaceDefs_.get('paisResid') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "else: return nmMae(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return", "gds_validate_date(self, input_data, node=None, input_name=''): return input_data def gds_format_date(self, input_data, input_name=''): _svalue = '%04d-%02d-%02d'", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpInsc'): pass def exportChildren(self,", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='verProc'): pass def exportChildren(self,", "level + 1, namespace_='', name_='ideBenef', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class tpInsc class nrInsc(GeneratedsSuper):", "not None: self.iniBeneficio.export(outfile, level, namespace_, name_='iniBeneficio', pretty_print=pretty_print) if self.altBeneficio is not None: self.altBeneficio.export(outfile,", "0 TypeText = 1 TypeString = 2 TypeInteger = 3 TypeFloat = 4", "input_data, input_name=''): if input_data.microsecond == 0: _svalue = '%04d-%02d-%02dT%02d:%02d:%02d' % ( input_data.year, input_data.month,", "return subclass(*args_, **kwargs_) if codPostal.subclass: return codPostal.subclass(*args_, **kwargs_) else: return codPostal(*args_, **kwargs_) factory", "nodeName_, fromsubclass_=False): pass # end class complemento class bairro(GeneratedsSuper): subclass = None superclass", "k, v in mapping.iteritems())) @staticmethod def gds_encode(instring): if sys.version_info.major == 2: return 
instring.encode(ExternalEncoding)", "= '\\n' else: eol_ = '' if self.tpInsc is not None: showIndent(outfile, level,", "is not None: return subclass(*args_, **kwargs_) if evtCdBenPrRP.subclass: return evtCdBenPrRP.subclass(*args_, **kwargs_) else: return", "return uf(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dtNascto) if subclass is not None: return", "dt def gds_validate_date(self, input_data, node=None, input_name=''): return input_data def gds_format_date(self, input_data, input_name=''): _svalue", "= exterior def hasContent_(self): if ( self.brasil is not None or self.exterior is", "pass # end class nrBenefic class dtFimBenef(GeneratedsSuper): subclass = None superclass = None", "return subclass(*args_, **kwargs_) if endereco.subclass: return endereco.subclass(*args_, **kwargs_) else: return endereco(*args_, **kwargs_) factory", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='dtIniBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' %", "node.nsmap: classname = node.get('{%s}type' % node.nsmap['xsi']) if classname is not None: names =", "name_='tpBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpBenef') if imported_ns_def_ is not None: namespacedef_ =", "namespace_='', name_='tpPlanRP', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "type_name=None): return None @classmethod def gds_reverse_node_mapping(cls, mapping): return dict(((v, k) for k, v", "def get_infoBeneficio(self): return self.infoBeneficio def set_infoBeneficio(self, infoBeneficio): self.infoBeneficio = infoBeneficio def get_Id(self): return", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='complemento'): pass def", 
"False def export(self, outfile, level, namespace_='', name_='vrBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('vrBenef') if", "outfile.write( 'model_.MixedContainer(%d, %d, \"%s\", \"%s\"),\\n' % ( self.category, self.content_type, self.name, self.value)) else: #", "def gds_validate_integer_list( self, input_data, node=None, input_name=''): values = input_data.split() for value in values:", "already_processed, namespace_='', name_='dadosNasc'): pass def exportChildren(self, outfile, level, namespace_='', name_='dadosNasc', fromsubclass_=False, pretty_print=True): if", "self, input_data, node=None, input_name=''): values = input_data.split() for value in values: if value", "# Command line arguments: # schemas/v2_04/evtCdBenPrRP.xsd # # Command line: # /usr/local/bin/generateDS --no-process-includes", "or self.nrBenefic is not None or self.dtIniBenef is not None or self.vrBenef is", "'.join(input_data) def gds_validate_boolean_list( self, input_data, node=None, input_name=''): values = input_data.split() for value in", "level, pretty_print) outfile.write('<%spaisNac>%s</%spaisNac>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisNac), input_name='paisNac')), namespace_, eol_)) if self.nmMae is not", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='verProc'): pass def", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, paisNascto) if subclass is", "exportChildren(self, outfile, level, namespace_='', name_='tpLograd', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='cpfInst', pretty_print=pretty_print) outfile.write('</%s%s>%s' %", "GDSClassesMapping = { 'altBeneficio': TDadosBeneficio, 'brasil': TEnderecoBrasil, 'dadosBenef': TDadosBenef, 
'exterior': TEnderecoExterior, 'ideEmpregador': TEmprPJ,", "range(level): outfile.write(' ') def quote_xml(inStr): \"Escape markup chars, but do not modify CDATA", "GenerateDSNamespaceDefs_.get('tpAmb') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "outfile, level, namespace_='', name_='dtIniBenef', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "elif self.content_type == MixedContainer.TypeBase64: outfile.write('<%s>%s</%s>' % ( self.name, base64.b64encode(self.value), self.name)) def to_etree(self, element):", "return instring.lower() def get_path_(self, node): path_list = [] self.get_path_list_(node, path_list) path_list.reverse() path =", "eol_ = '' if self.dtNascto is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sdtNascto>%s</%sdtNascto>%s' %", "def gds_reverse_node_mapping(cls, mapping): return dict(((v, k) for k, v in mapping.iteritems())) @staticmethod def", "or self.nmPai is not None ): return True else: return False def export(self,", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='infoPenMorte'): pass def", "name_='nrInsc'): pass def exportChildren(self, outfile, level, namespace_='', name_='nrInsc', fromsubclass_=False, pretty_print=True): pass def build(self,", "level, namespace_='', name_='nmMae', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "level, already_processed, namespace_='', name_='eSocial'): pass def exportChildren(self, outfile, level, namespace_='', name_='eSocial', fromsubclass_=False, pretty_print=True):", "not None or self.paisNac is not None or self.nmMae is not None or", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('TDadosBeneficio') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "outfile, level, already_processed, namespace_='', 
name_='paisNascto'): pass def exportChildren(self, outfile, level, namespace_='', name_='paisNascto', fromsubclass_=False,", "subclass = getSubclassFromModule_( CurrentSubclassModule_, vrBenef) if subclass is not None: return subclass(*args_, **kwargs_)", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='evtCdBenPrRP'):", "already_processed, namespace_='', name_='dtFimBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='dtFimBenef', fromsubclass_=False, pretty_print=True): pass", "def export(self, outfile, level, namespace_='', name_='infoPenMorte', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoPenMorte') if imported_ns_def_", "name_='tpPlanRP', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpPlanRP') if imported_ns_def_ is not None: namespacedef_ =", "): return True else: return False def export(self, outfile, level, namespace_='', name_='nmPai', namespacedef_='',", "verProc_ = self.gds_validate_string(verProc_, node, 'verProc') self.verProc = verProc_ # end class TIdeEveTrab class", "self.exterior = obj_ obj_.original_tagname_ = 'exterior' # end class endereco class TEnderecoBrasil(GeneratedsSuper): \"\"\"Informações", "name_='paisResid'): pass def exportChildren(self, outfile, level, namespace_='', name_='paisResid', fromsubclass_=False, pretty_print=True): pass def build(self,", "tpBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('codPostal') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self,", "get_tpInsc(self): return self.tpInsc def set_tpInsc(self, tpInsc): self.tpInsc = tpInsc def 
get_nrInsc(self): return self.nrInsc", "= ival_ elif nodeName_ == 'uf': uf_ = child_.text uf_ = self.gds_validate_string(uf_, node,", "node, 'cpfInst') self.cpfInst = cpfInst_ # end class infoPenMorte class idQuota(GeneratedsSuper): subclass =", "Ctrl-D to exit') # # Globals # ExternalEncoding = 'ascii' Tag_pattern_ = re_.compile(r'({.*})?(.*)')", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'ideEvento': obj_ = TIdeEveTrab.factory() obj_.build(child_)", "level, already_processed, namespace_, name_='cpfBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "'' if self.ideEvento is not None: self.ideEvento.export(outfile, level, namespace_, name_='ideEvento', pretty_print=pretty_print) if self.ideEmpregador", "level, namespace_='', name_='tpBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpBenef') if imported_ns_def_ is not None:", "subclass = getSubclassFromModule_( CurrentSubclassModule_, dtNascto) if subclass is not None: return subclass(*args_, **kwargs_)", "else: return False def export(self, outfile, level, namespace_='', name_='paisNac', namespacedef_='', pretty_print=True): imported_ns_def_ =", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEnderecoExterior') if self.hasContent_(): outfile.write('>%s' % (eol_,", "level + 1, namespace_='', name_='verProc', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "fromsubclass_=False): if nodeName_ == 'cpfBenef': cpfBenef_ = child_.text cpfBenef_ = self.gds_validate_string(cpfBenef_, node, 'cpfBenef')", "pretty_print=pretty_print) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed) for child in", "= paisResid_ elif nodeName_ == 'dscLograd': dscLograd_ = child_.text dscLograd_ = self.gds_validate_string(dscLograd_, 
node,", "is not None: return subclass(*args_, **kwargs_) if fimBeneficio.subclass: return fimBeneficio.subclass(*args_, **kwargs_) else: return", "program.') # Then use the following line where and when you want to", "None or self.dtFimBenef is not None or self.mtvFim is not None ): return", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TDadosBenef) if", "# end class nrRecibo class tpAmb(GeneratedsSuper): subclass = None superclass = None def", "return self.data_type[-1] else: return 'xs:string' else: return self.data_type def set_container(self, container): self.container =", "back to program.') # Then use the following line where and when you", "% (namespace_, self.gds_format_integer(self.tpPlanRP, input_name='tpPlanRP'), namespace_, eol_)) if self.iniBeneficio is not None: self.iniBeneficio.export(outfile, level,", "def export(self, outfile, level, namespace_='', name_='tpInsc', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpInsc') if imported_ns_def_", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nrRecibo', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "self.gds_encode(self.gds_format_string(quote_xml(self.uf), input_name='uf')), namespace_, eol_)) if self.paisNascto is not None: showIndent(outfile, level, pretty_print) outfile.write('<%spaisNascto>%s</%spaisNascto>%s'", "namespace_='', name_='bairro', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "('--no-process-includes', '') # ('-o', 'esociallib/v2_04/evtCdBenPrRP.py') # # Command line arguments: # schemas/v2_04/evtCdBenPrRP.xsd #", "level, already_processed, namespace_='', name_='infoPenMorte'): pass def exportChildren(self, outfile, level, namespace_='', name_='infoPenMorte', fromsubclass_=False, pretty_print=True):", "= 
self.gds_validate_integer(ival_, node, 'mtvFim') self.mtvFim = ival_ # end class fimBeneficio class tpBenef(GeneratedsSuper):", "else: return idQuota(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return", "category == MixedContainer.CategoryComplex showIndent(outfile, level) outfile.write( 'model_.MixedContainer(%d, %d, \"%s\",\\n' % ( self.category, self.content_type,", "showIndent(outfile, level, pretty_print) outfile.write('<%sverProc>%s</%sverProc>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.verProc), input_name='verProc')), namespace_, eol_)) def build(self, node):", "**kwargs_) if TIdeEveTrab.subclass: return TIdeEveTrab.subclass(*args_, **kwargs_) else: return TIdeEveTrab(*args_, **kwargs_) factory = staticmethod(factory)", "uf_ = self.gds_validate_string(uf_, node, 'uf') self.uf = uf_ elif nodeName_ == 'paisNascto': paisNascto_", "exportChildren(self, outfile, level, namespace_='', name_='codMunic', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TIdeEveTrab'): pass def exportChildren(self, outfile, level, namespace_='', name_='TIdeEveTrab',", "is not None: tzoff_parts = results.group(2).split(':') tzoff = int(tzoff_parts[0]) * 60 + int(tzoff_parts[1])", "return self.brasil def set_brasil(self, brasil): self.brasil = brasil def get_exterior(self): return self.exterior def", "# # Support/utility functions. 
# def showIndent(outfile, level, pretty_print=True): if pretty_print: for idx", "outfile, level, already_processed, namespace_='', name_='fimBeneficio'): pass def exportChildren(self, outfile, level, namespace_='', name_='fimBeneficio', fromsubclass_=False,", "def exportChildren(self, outfile, level, namespace_='', name_='paisResid', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpBenef) if subclass is not None: return subclass(*args_,", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class codMunic class uf(GeneratedsSuper):", "not None or self.nmBenefic is not None or self.dadosBenef is not None ):", "try: from lxml import etree as etree_ except ImportError: from xml.etree import ElementTree", "s1 = s1.replace('<', '&lt;') s1 = s1.replace('>', '&gt;') if '\"' in s1: if", "% self.value elif self.content_type == MixedContainer.TypeDouble: text = '%g' % self.value elif self.content_type", "is not None: return subclass(*args_, **kwargs_) if ideBenef.subclass: return ideBenef.subclass(*args_, **kwargs_) else: return", "False def export(self, outfile, level, namespace_='', name_='bairro', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('bairro') if", "class paisNac(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "name_='cpfInst'): pass def exportChildren(self, outfile, level, namespace_='', name_='cpfInst', fromsubclass_=False, pretty_print=True): pass def build(self,", "TEnderecoBrasil.subclass: return TEnderecoBrasil.subclass(*args_, **kwargs_) else: return TEnderecoBrasil(*args_, **kwargs_) factory = staticmethod(factory) def get_tpLograd(self):", "20160609] # # Command line options: # ('--no-process-includes', '') # ('-o', 'esociallib/v2_04/evtCdBenPrRP.py') #", "already_processed, namespace_, name_='cpfBenef') if 
self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='vrBenef') if self.hasContent_(): outfile.write('>%s' %", "level, already_processed, namespace_='', name_='nmCid'): pass def exportChildren(self, outfile, level, namespace_='', name_='nmCid', fromsubclass_=False, pretty_print=True):", "**kwargs) return doc # # Namespace prefix definition table (and other attributes, too)", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nrInsc", "outfile, level, namespace_='', name_='indRetif', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "= self.gds_validate_string(Signature_, node, 'Signature') self.Signature = Signature_ # end class eSocial class evtCdBenPrRP(GeneratedsSuper):", "def exportChildren(self, outfile, level, namespace_='', name_='bairro', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "(hours * 3600)) // 60 _svalue += '{0:02d}:{1:02d}'.format(hours, minutes) return _svalue @classmethod def", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='cpfInst',", "= None def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_(", "level + 1, namespace_='', name_='indRetif', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nmCid'): pass def exportChildren(self, outfile, level, namespace_='',", "= nmCid def get_codPostal(self): return self.codPostal def set_codPostal(self, codPostal): self.codPostal = codPostal def", "doc = parsexml_(inFileName, parser) rootNode = doc.getroot() rootTag, rootClass = get_root_tag(rootNode) if rootClass", "elif 
nodeName_ == 'verProc': verProc_ = child_.text verProc_ = self.gds_validate_string(verProc_, node, 'verProc') self.verProc", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEmprPJ') if", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dtFimBenef) if subclass is not None: return", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TDadosBenef) if subclass", "return found1 @classmethod def gds_parse_time(cls, input_data): tz = None if input_data[-1] == 'Z':", "node, nodeName_, fromsubclass_=False): pass # end class idQuota class cpfInst(GeneratedsSuper): subclass = None", "this # table. # A sample table is: # # # File: generatedsnamespaces.py", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmMae') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "name_='TDadosBeneficio', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "sval_ = child_.text dval_ = self.gds_parse_date(sval_) self.dtFimBenef = dval_ elif nodeName_ == 'mtvFim':", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='fimBeneficio'):", "self.gds_encode(self.gds_format_string(quote_xml(self.bairro), input_name='bairro')), namespace_, eol_)) if self.cep is not None: showIndent(outfile, level, pretty_print) outfile.write('<%scep>%s</%scep>%s'", "class evtCdBenPrRP class ideBenef(GeneratedsSuper): \"\"\"Identificação do beneficiário\"\"\" subclass = None superclass = None", "input_name=''): values = input_data.split() for value in values: try: float(value) except (TypeError, ValueError):", "def set_exterior(self, exterior): self.exterior = exterior def hasContent_(self): if ( self.brasil is not", "level, already_processed, 
namespace_, name_='bairro') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "+ 1, namespace_='', name_='paisResid', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmBenefic) if subclass is not None: return subclass(*args_,", "return True else: return False def export(self, outfile, level, namespace_='', name_='nrBenefic', namespacedef_='', pretty_print=True):", "exportChildren(self, outfile, level, namespace_='', name_='fimBeneficio', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else:", "% (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfBenef), input_name='cpfBenef')), namespace_, eol_)) if self.nmBenefic is not None: showIndent(outfile, level,", "element type classes # # Calls to the methods in these classes are", "**kwargs_) if mtvFim.subclass: return mtvFim.subclass(*args_, **kwargs_) else: return mtvFim(*args_, **kwargs_) factory = staticmethod(factory)", "return rootObj def main(): args = sys.argv[1:] if len(args) == 1: parse(args[0]) else:", "= None if len(attr_parts) == 1: value = attrs.get(attr_name) elif len(attr_parts) == 2:", "exportChildren(self, outfile, level, namespace_='', name_='mtvFim', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "already_processed, namespace_, name_='codMunic') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", "None: return tag = GeneratedsSuper.Tag_strip_pattern_.sub('', node.tag) if tag: path_list.append(tag) self.get_path_list_(node.getparent(), path_list) def get_class_obj_(self,", "namespace_='', name_='cep'): pass def exportChildren(self, outfile, level, namespace_='', name_='cep', fromsubclass_=False, pretty_print=True): pass def", "attrs = node.attrib attr_parts = attr_name.split(':') value = None if 
len(attr_parts) == 1:", "tpBenef.subclass: return tpBenef.subclass(*args_, **kwargs_) else: return tpBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "nrInsc def hasContent_(self): if ( self.tpInsc is not None or self.nrInsc is not", "if not silence: sys.stdout.write('<?xml version=\"1.0\" ?>\\n') rootObj.export( sys.stdout, 0, name_=rootTag, namespacedef_='') return rootObj", "2017 by generateDS.py version 2.28b. # Python 2.7.12 (default, Nov 19 2016, 06:48:10)", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='paisNac') if self.hasContent_(): outfile.write('>%s' %", "Tag_pattern_.match(child.tag).groups()[-1] self.buildChildren(child, node, nodeName_) return self def buildAttributes(self, node, attrs, already_processed): pass def", "ival_ elif nodeName_ == 'procEmi': sval_ = child_.text try: ival_ = int(sval_) except", "return vrBenef.subclass(*args_, **kwargs_) else: return vrBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('eSocial') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "None ): return True else: return False def export(self, outfile, level, namespace_='', name_='endereco',", "dt): return self.__offset def tzname(self, dt): return self.__name def dst(self, dt): return None", "if subclass is not None: return subclass(*args_, **kwargs_) if nmCid.subclass: return nmCid.subclass(*args_, **kwargs_)", "namespace_='', name_='infoBeneficio', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = ''", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('verProc') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "outfile, level, already_processed, namespace_='', name_='dscLograd'): pass def exportChildren(self, outfile, level, namespace_='', name_='dscLograd', 
fromsubclass_=False,", "that, e.g., # we ignore comments. try: parser = etree_.ETCompatXMLParser() except AttributeError: #", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmCid') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "self.cep is not None or self.codMunic is not None or self.uf is not", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='evtCdBenPrRP')", "nodeName_, fromsubclass_=False): pass # end class tpPlanRP class fimBeneficio(GeneratedsSuper): \"\"\"Informações relativas a benefícios", "= tpAmb self.procEmi = procEmi self.verProc = verProc def factory(*args_, **kwargs_): if CurrentSubclassModule_", "verProc_ = child_.text verProc_ = self.gds_validate_string(verProc_, node, 'verProc') self.verProc = verProc_ # end", "= obj_ obj_.original_tagname_ = 'endereco' # end class TDadosBenef class dadosNasc(GeneratedsSuper): \"\"\"Informações de", "already_processed.add('Id') outfile.write(' Id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.Id), input_name='Id')), )) def exportChildren(self, outfile, level, namespace_='', name_='evtCdBenPrRP',", "tzoff, results.group(0)) input_data = input_data[:-6] if len(input_data.split('.')) > 1: dt = datetime_.datetime.strptime(input_data, '%H:%M:%S.%f')", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrInsc>%s</%snrInsc>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrInsc), input_name='nrInsc')), namespace_, eol_))", "**kwargs_) else: return complemento(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ):", "= paisNac def get_nmMae(self): return self.nmMae def set_nmMae(self, nmMae): self.nmMae = nmMae def", "== MixedContainer.CategoryComplex self.value.to_etree(element) def to_etree_simple(self): if self.content_type == MixedContainer.TypeString: text = self.value elif", 
"to the methods in these classes are generated by generateDS.py. # You can", "'%s' % inStr) s1 = s1.replace('&', '&amp;') s1 = s1.replace('<', '&lt;') s1 =", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class indRetif class nrRecibo(GeneratedsSuper):", "a specific module.''' name = class_.__name__ + 'Sub' if hasattr(module, name): return getattr(module,", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('codPostal') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "namespacedef_='') return rootObj def parseLiteral(inFileName, silence=False): parser = None doc = parsexml_(inFileName, parser)", "class TIdeEveTrab(GeneratedsSuper): \"\"\"Identificação do evento\"\"\" subclass = None superclass = None def __init__(self,", "True break if not found2: found1 = False break return found1 @classmethod def", "def gds_format_integer_list(self, input_data, input_name=''): return '%s' % ' '.join(input_data) def gds_validate_integer_list( self, input_data,", "obj_ obj_.original_tagname_ = 'infoPenMorte' # end class TDadosBeneficio class dtIniBenef(GeneratedsSuper): subclass = None", "elif nodeName_ == 'fimBeneficio': obj_ = fimBeneficio.factory() obj_.build(child_) self.fimBeneficio = obj_ obj_.original_tagname_ =", "values def gds_format_double(self, input_data, input_name=''): return '%e' % input_data def gds_validate_double(self, input_data, node=None,", "gds_format_boolean(self, input_data, input_name=''): return ('%s' % input_data).lower() def gds_validate_boolean(self, input_data, node=None, input_name=''): return", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='eSocial') if self.hasContent_(): outfile.write('>%s' %", "outfile, level, namespace_='', name_='TIdeEveTrab', 
fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_", "namespace_='', name_='procEmi', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('procEmi') if imported_ns_def_ is not None: namespacedef_", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, paisNac) if subclass is not", "by the DOM. doc = None mapping = {} rootElement = rootObj.to_etree(None, name_=rootTag,", "or self.mtvFim is not None ): return True else: return False def export(self,", "tpBenef) if subclass is not None: return subclass(*args_, **kwargs_) if tpBenef.subclass: return tpBenef.subclass(*args_,", "= nmBenefic_ elif nodeName_ == 'dadosBenef': obj_ = TDadosBenef.factory() obj_.build(child_) self.dadosBenef = obj_", "namespace_='', name_='infoPenMorte', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "= None superclass = None def __init__(self): self.original_tagname_ = None def factory(*args_, **kwargs_):", "eSocial.subclass: return eSocial.subclass(*args_, **kwargs_) else: return eSocial(*args_, **kwargs_) factory = staticmethod(factory) def get_evtCdBenPrRP(self):", "self.exportChildren(outfile, level + 1, namespace_='', name_='complemento', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "node, 'nmBenefic') self.nmBenefic = nmBenefic_ elif nodeName_ == 'dadosBenef': obj_ = TDadosBenef.factory() obj_.build(child_)", "= etree_.parse(infile, parser=parser, **kwargs) return doc # # Namespace prefix definition table (and", "+ 1, namespace_='', name_='nmCid', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "self.mtvFim = mtvFim def hasContent_(self): if ( self.tpBenef is not None or self.nrBenefic", "nodeName_ == 'cpfInst': cpfInst_ = child_.text cpfInst_ = 
self.gds_validate_string(cpfInst_, node, 'cpfInst') self.cpfInst =", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='paisResid', pretty_print=pretty_print) outfile.write('</%s%s>%s' %", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='complemento', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='TDadosBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='paisNac') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "**kwargs_) else: return codPostal(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ):", "dadosNasc self.endereco = endereco def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class dtNascto class codMunic(GeneratedsSuper):", "node=None, input_name=''): return input_data def gds_format_datetime(self, input_data, input_name=''): if input_data.microsecond == 0: _svalue", "subclass = None superclass = None def __init__(self, tpBenef=None, nrBenefic=None, dtFimBenef=None, mtvFim=None): self.original_tagname_", "results is not None: tzoff_parts = results.group(2).split(':') tzoff = int(tzoff_parts[0]) * 60 +", "return input_data def gds_format_float_list(self, input_data, input_name=''): return '%s' % ' '.join(input_data) def gds_validate_float_list(", "else: dt = datetime_.datetime.strptime(input_data, '%H:%M:%S') dt = dt.replace(tzinfo=tz) return dt.time() def gds_str_lower(self, instring):", "nmMae def get_nmPai(self): return self.nmPai def set_nmPai(self, nmPai): self.nmPai = nmPai def hasContent_(self):", "gds_validate_boolean(self, input_data, node=None, input_name=''): return input_data def 
gds_format_boolean_list(self, input_data, input_name=''): return '%s' %", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmMae) if subclass is", "evtCdBenPrRP class ideBenef(GeneratedsSuper): \"\"\"Identificação do beneficiário\"\"\" subclass = None superclass = None def", "in already_processed: already_processed.add('Id') outfile.write(' Id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.Id), input_name='Id')), )) def exportChildren(self, outfile, level,", "getSubclassFromModule_( CurrentSubclassModule_, cpfBenef) if subclass is not None: return subclass(*args_, **kwargs_) if cpfBenef.subclass:", "def export(self, outfile, level, namespace_='', name_='eSocial', namespacedef_=' xmlns:ds=\"http://www.w3.org/2000/09/xmldsig#\" ', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('eSocial')", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrBenefic'): pass", "name_='nrBenefic') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nrBenefic',", "\"http://www.xxx.com/namespaceB\", # } # try: from generatedsnamespaces import GenerateDSNamespaceDefs as GenerateDSNamespaceDefs_ except ImportError:", "is not None or self.nrLograd is not None or self.complemento is not None", "e para o qual não tenha havido ainda informação de término de benefícios.\"\"\"", "exportChildren(self, outfile, level, namespace_='', name_='nrBenefic', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "True else: return False def export(self, outfile, level, namespace_='', name_='tpInsc', namespacedef_='', pretty_print=True): imported_ns_def_", "return True else: return False def export(self, outfile, level, namespace_='', name_='tpPlanRP', namespacedef_='', pretty_print=True):", "= set() self.exportAttributes(outfile, level, 
already_processed, namespace_, name_='TEnderecoBrasil') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "% (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmBenefic), input_name='nmBenefic')), namespace_, eol_)) if self.dadosBenef is not None: self.dadosBenef.export(outfile, level,", "outfile, level, namespace_='', name_='cpfInst', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "def get_ideEvento(self): return self.ideEvento def set_ideEvento(self, ideEvento): self.ideEvento = ideEvento def get_ideEmpregador(self): return", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class paisNac class nmMae(GeneratedsSuper): subclass", "= ival_ elif nodeName_ == 'nrInsc': nrInsc_ = child_.text nrInsc_ = self.gds_validate_string(nrInsc_, node,", "tpInsc class nrInsc(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ =", "== MixedContainer.TypeDecimal): text = '%f' % self.value elif self.content_type == MixedContainer.TypeDouble: text =", "def get_optional(self): return self.optional def _cast(typ, value): if typ is None or value", "class mtvFim class TIdeEveTrab(GeneratedsSuper): \"\"\"Identificação do evento\"\"\" subclass = None superclass = None", "nmBenefic_ = self.gds_validate_string(nmBenefic_, node, 'nmBenefic') self.nmBenefic = nmBenefic_ elif nodeName_ == 'dadosBenef': obj_", "outfile, level, already_processed, namespace_='', name_='nmCid'): pass def exportChildren(self, outfile, level, namespace_='', name_='nmCid', fromsubclass_=False,", "nodeName_ == 'paisResid': paisResid_ = child_.text paisResid_ = self.gds_validate_string(paisResid_, node, 'paisResid') self.paisResid =", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='mtvFim', pretty_print=pretty_print)", "None: return subclass(*args_, **kwargs_) if nmBenefic.subclass: return 
nmBenefic.subclass(*args_, **kwargs_) else: return nmBenefic(*args_, **kwargs_)", "outfile, level, namespace_='', name_='mtvFim', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('mtvFim') if imported_ns_def_ is not", "fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if self.tpInsc", "): return True else: return False def export(self, outfile, level, namespace_='', name_='mtvFim', namespacedef_='',", "TEnderecoBrasil(GeneratedsSuper): \"\"\"Informações do Endereço no Brasil\"\"\" subclass = None superclass = None def", "level, already_processed, namespace_='', name_='fimBeneficio'): pass def exportChildren(self, outfile, level, namespace_='', name_='fimBeneficio', fromsubclass_=False, pretty_print=True):", "\"'''%s'''\" % s1 else: if s1.find('\"') != -1: s1 = s1.replace('\"', '\\\\\"') if", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='indRetif') if self.hasContent_(): outfile.write('>%s'", "else: eol_ = '' if self.tpBenef is not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpBenef>%s</%stpBenef>%s'", "relativas a pensão por morte\"\"\" subclass = None superclass = None def __init__(self,", "hours, minutes) except AttributeError: pass return _svalue @classmethod def gds_parse_date(cls, input_data): tz =", "'Requires sequence of booleans ' '(\"true\", \"1\", \"false\", \"0\")') return values def gds_validate_datetime(self,", "def hasContent_(self): if ( self.tpPlanRP is not None or self.iniBeneficio is not None", "True else: return False def export(self, outfile, level, namespace_='', name_='indRetif', namespacedef_='', pretty_print=True): imported_ns_def_", "def set_endereco(self, endereco): self.endereco = endereco def hasContent_(self): if ( self.dadosNasc is not", "self.dtNascto def set_dtNascto(self, dtNascto): self.dtNascto = dtNascto def get_codMunic(self): return self.codMunic def set_codMunic(self,", 
"fromsubclass_=False): if nodeName_ == 'tpLograd': tpLograd_ = child_.text tpLograd_ = self.gds_validate_string(tpLograd_, node, 'tpLograd')", "export(self, outfile, level, namespace_='', name_='TEnderecoBrasil', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TEnderecoBrasil') if imported_ns_def_ is", "CurrentSubclassModule_, bairro) if subclass is not None: return subclass(*args_, **kwargs_) if bairro.subclass: return", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoPenMorte') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "self.nmCid = nmCid_ elif nodeName_ == 'codPostal': codPostal_ = child_.text codPostal_ = self.gds_validate_string(codPostal_,", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('dtFimBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "ValueError) as exp: raise_parse_error(child_, 'requires float or double: %s' % exp) fval_ =", "**kwargs_) if dtIniBenef.subclass: return dtIniBenef.subclass(*args_, **kwargs_) else: return dtIniBenef(*args_, **kwargs_) factory = staticmethod(factory)", "if TEnderecoExterior.subclass: return TEnderecoExterior.subclass(*args_, **kwargs_) else: return TEnderecoExterior(*args_, **kwargs_) factory = staticmethod(factory) def", "if self.original_tagname_ is not None: name_ = self.original_tagname_ showIndent(outfile, level, pretty_print) outfile.write('<%s%s%s' %", "None or self.procEmi is not None or self.verProc is not None ): return", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('endereco') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "self.exportChildren(outfile, level + 1, namespace_='', name_='tpLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "level, pretty_print) outfile.write('<%suf>%s</%suf>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.uf), 
input_name='uf')), namespace_, eol_)) def build(self, node): already_processed", "inStr) s1 = s1.replace('&', '&amp;') s1 = s1.replace('<', '&lt;') s1 = s1.replace('>', '&gt;')", "= (total_seconds - (hours * 3600)) // 60 _svalue += '{0:02d}:{1:02d}'.format( hours, minutes)", "= self.gds_parse_date(sval_) self.dtFimBenef = dval_ elif nodeName_ == 'mtvFim': sval_ = child_.text try:", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrLograd') if self.hasContent_(): outfile.write('>%s' %", "pass def exportChildren(self, outfile, level, namespace_='', name_='tpBenef', fromsubclass_=False, pretty_print=True): pass def build(self, node):", "): return True else: return False def export(self, outfile, level, namespace_='', name_='nrRecibo', namespacedef_='',", "subclass = getSubclassFromModule_( CurrentSubclassModule_, nrRecibo) if subclass is not None: return subclass(*args_, **kwargs_)", "else: return TDadosBenef(*args_, **kwargs_) factory = staticmethod(factory) def get_dadosNasc(self): return self.dadosNasc def set_dadosNasc(self,", "if ( self.brasil is not None or self.exterior is not None ): return", "into the # IPython shell: # ipshell('<some message> -- Entering ipshell.\\nHit Ctrl-D to", "Endereço no Brasil\"\"\" subclass = None superclass = None def __init__(self, tpLograd=None, dscLograd=None,", "self.tpInsc = tpInsc def get_nrInsc(self): return self.nrInsc def set_nrInsc(self, nrInsc): self.nrInsc = nrInsc", "1 TypeString = 2 TypeInteger = 3 TypeFloat = 4 TypeDecimal = 5", "\"http://www.xxx.com/namespaceA\", # \"ElementtypeB\": \"http://www.xxx.com/namespaceB\", # } # try: from generatedsnamespaces import GenerateDSNamespaceDefs as", "level, already_processed, namespace_, name_='evtCdBenPrRP') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "bairro self.cep = cep self.codMunic = codMunic self.uf = uf def factory(*args_, **kwargs_):", "else: return False def 
[Unrecoverable n-gram fragments of a Python module generated by generateDS.py version 2.28b (Python 2.7.12, GCC 5.4.0): XML data-binding classes for the eSocial event evtCdBenPrRP — "event for registering pension benefits under Regimes Próprios" (Brazilian government employees' own pension regimes). The surviving fragments name the generated element classes eSocial, evtCdBenPrRP, TIdeEveTrab, TEmprPJ, ideBenef, TDadosBenef, dadosNasc, endereco, TEnderecoBrasil, TEnderecoExterior, infoBeneficio, TDadosBeneficio, fimBeneficio, and tpBenef, together with the standard generateDS support machinery (GeneratedsSuper, MixedContainer, factory(), build(), buildChildren(), export(), exportAttributes(), exportChildren(), hasContent_(), parse()).]
eol_)) else: outfile.write('/>%s'", "**kwargs_) else: return cpfInst(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ):", "'tpPlanRP') self.tpPlanRP = ival_ elif nodeName_ == 'iniBeneficio': obj_ = TDadosBeneficio.factory() obj_.build(child_) self.iniBeneficio", "self.original_tagname_ = None self.dadosNasc = dadosNasc self.endereco = endereco def factory(*args_, **kwargs_): if", "if dtIniBenef.subclass: return dtIniBenef.subclass(*args_, **kwargs_) else: return dtIniBenef(*args_, **kwargs_) factory = staticmethod(factory) def", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpLograd'): pass def exportChildren(self,", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dadosNasc'): pass def", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TDadosBeneficio') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='mtvFim',", "'' if self.tpBenef is not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpBenef>%s</%stpBenef>%s' % (namespace_, self.gds_format_integer(self.tpBenef,", "(namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrLograd), input_name='nrLograd')), namespace_, eol_)) if self.complemento is not None: showIndent(outfile, level, pretty_print)", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%scomplemento>%s</%scomplemento>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.complemento), input_name='complemento')), namespace_, eol_)) if", "node, nodeName_, fromsubclass_=False): pass # end class indRetif class nrRecibo(GeneratedsSuper): subclass = None", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtNascto'): 
pass def", "node, attrs, already_processed): value = find_attr_value_('Id', node) if value is not None and", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dscLograd') if", "= getSubclassFromModule_( CurrentSubclassModule_, bairro) if subclass is not None: return subclass(*args_, **kwargs_) if", "or self.nrRecibo is not None or self.tpAmb is not None or self.procEmi is", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, cpfInst) if subclass is not None: return subclass(*args_,", "if self.nmPai is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmPai>%s</%snmPai>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmPai), input_name='nmPai')),", "self.ideEvento = ideEvento self.ideEmpregador = ideEmpregador self.ideBenef = ideBenef self.infoBeneficio = infoBeneficio def", "integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'tpPlanRP') self.tpPlanRP = ival_ elif", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='paisResid', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "element type names (strings) to XML schema namespace prefix # definitions. 
The export", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='paisNac') if self.hasContent_(): outfile.write('>%s'", "# end class paisNascto class paisNac(GeneratedsSuper): subclass = None superclass = None def", "else: return False def export(self, outfile, level, namespace_='', name_='TDadosBenef', namespacedef_='', pretty_print=True): imported_ns_def_ =", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrInsc') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "name_='nrInsc', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "level, pretty_print) outfile.write('<%sdtNascto>%s</%sdtNascto>%s' % (namespace_, self.gds_format_date(self.dtNascto, input_name='dtNascto'), namespace_, eol_)) if self.codMunic is not", "self.name def set_data_type(self, data_type): self.data_type = data_type def get_data_type_chain(self): return self.data_type def get_data_type(self):", "self.gds_validate_string(paisNac_, node, 'paisNac') self.paisNac = paisNac_ elif nodeName_ == 'nmMae': nmMae_ = child_.text", "get_root_tag(node): tag = Tag_pattern_.match(node.tag).groups()[-1] rootClass = GDSClassesMapping.get(tag) if rootClass is None: rootClass =", "in s1: if \"'\" in s1: s1 = '\"%s\"' % s1.replace('\"', \"&quot;\") else:", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('vrBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "= None if not silence: sys.stdout.write('<?xml version=\"1.0\" ?>\\n') rootObj.export( sys.stdout, 0, name_=rootTag, namespacedef_='')", "not None: return subclass(*args_, **kwargs_) if cpfInst.subclass: return cpfInst.subclass(*args_, **kwargs_) else: return cpfInst(*args_,", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='eSocial'): pass def 
exportChildren(self,", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('uf') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "cpfInst(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "elif nodeName_ == 'paisNascto': paisNascto_ = child_.text paisNascto_ = self.gds_validate_string(paisNascto_, node, 'paisNascto') self.paisNascto", "def set_child_attrs(self, child_attrs): self.child_attrs = child_attrs def get_child_attrs(self): return self.child_attrs def set_choice(self, choice):", "self.original_tagname_ = None self.paisResid = paisResid self.dscLograd = dscLograd self.nrLograd = nrLograd self.complemento", "(total_seconds - (hours * 3600)) // 60 _svalue += '{0:02d}:{1:02d}'.format( hours, minutes) except", "= nmBenefic self.dadosBenef = dadosBenef def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None:", "is not None: class_obj1 = class_obj2 return class_obj1 def gds_build_any(self, node, type_name=None): return", "self.container = container self.child_attrs = child_attrs self.choice = choice self.optional = optional def", "def __init__(self, tpPlanRP=None, iniBeneficio=None, altBeneficio=None, fimBeneficio=None): self.original_tagname_ = None self.tpPlanRP = tpPlanRP self.iniBeneficio", "% exp) ival_ = self.gds_validate_integer(ival_, node, 'codMunic') self.codMunic = ival_ elif nodeName_ ==", "self.dtFimBenef = initvalue_ self.mtvFim = mtvFim def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not", "return self.choice def set_optional(self, optional): self.optional = optional def get_optional(self): return self.optional def", "warnings_ try: from lxml import etree as etree_ except ImportError: from xml.etree import", "= 'iniBeneficio' elif nodeName_ == 'altBeneficio': obj_ = TDadosBeneficio.factory() obj_.build(child_) self.altBeneficio = obj_", "name_='tpBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpBenef', 
fromsubclass_=False, pretty_print=True): pass def build(self,", "return TEnderecoExterior(*args_, **kwargs_) factory = staticmethod(factory) def get_paisResid(self): return self.paisResid def set_paisResid(self, paisResid):", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='eSocial') if self.hasContent_():", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, verProc)", "node=None, input_name=''): return input_data def gds_format_double_list(self, input_data, input_name=''): return '%s' % ' '.join(input_data)", "get_uf(self): return self.uf def set_uf(self, uf): self.uf = uf def get_paisNascto(self): return self.paisNascto", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nmMae', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "nrInsc_ = self.gds_validate_string(nrInsc_, node, 'nrInsc') self.nrInsc = nrInsc_ # end class TEmprPJ class", "paisResid): self.paisResid = paisResid def get_dscLograd(self): return self.dscLograd def set_dscLograd(self, dscLograd): self.dscLograd =", "input_data def gds_format_time(self, input_data, input_name=''): if input_data.microsecond == 0: _svalue = '%02d:%02d:%02d' %", "nrBenefic_ = self.gds_validate_string(nrBenefic_, node, 'nrBenefic') self.nrBenefic = nrBenefic_ elif nodeName_ == 'dtIniBenef': sval_", "eSocial.subclass(*args_, **kwargs_) else: return eSocial(*args_, **kwargs_) factory = staticmethod(factory) def get_evtCdBenPrRP(self): return self.evtCdBenPrRP", "pretty_print) outfile.write('<%snmBenefic>%s</%snmBenefic>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmBenefic), input_name='nmBenefic')), namespace_, eol_)) if self.dadosBenef is not None:", "1, namespace_='', name_='TDadosBenef', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % 
(namespace_, name_, eol_)) else:", "if self.nrInsc is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrInsc>%s</%snrInsc>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrInsc), input_name='nrInsc')),", "nrRecibo(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "get_Id(self): return self.Id def set_Id(self, Id): self.Id = Id def hasContent_(self): if (", "class dscLograd(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, idQuota) if subclass is not None: return subclass(*args_,", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, uf) if subclass", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TEnderecoBrasil) if subclass", "% s1 else: return '\"\"\"%s\"\"\"' % s1 def get_all_text_(node): if node.text is not", "else: initvalue_ = dtFimBenef self.dtFimBenef = initvalue_ self.mtvFim = mtvFim def factory(*args_, **kwargs_):", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrInsc'): pass def exportChildren(self, outfile,", "else: eol_ = '' if self.brasil is not None: self.brasil.export(outfile, level, namespace_, name_='brasil',", "False def export(self, outfile, level, namespace_='', name_='tpPlanRP', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpPlanRP') if", "doc = None if not silence: sys.stdout.write('#from evtCdBenPrRP import *\\n\\n') sys.stdout.write('import evtCdBenPrRP as", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nmPai'): pass def exportChildren(self, outfile,", "= bairro_ elif nodeName_ == 'nmCid': nmCid_ = child_.text nmCid_ = self.gds_validate_string(nmCid_, node,", "= 
endereco.factory() obj_.build(child_) self.endereco = obj_ obj_.original_tagname_ = 'endereco' # end class TDadosBenef", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='procEmi'): pass def", "is None: # Use the lxml ElementTree compatible parser so that, e.g., #", "obj_ = fimBeneficio.factory() obj_.build(child_) self.fimBeneficio = obj_ obj_.original_tagname_ = 'fimBeneficio' # end class", "set_data_type(self, data_type): self.data_type = data_type def get_data_type_chain(self): return self.data_type def get_data_type(self): if isinstance(self.data_type,", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpPlanRP'): pass def exportChildren(self, outfile,", "dtFimBenef): self.dtFimBenef = dtFimBenef def get_mtvFim(self): return self.mtvFim def set_mtvFim(self, mtvFim): self.mtvFim =", "name_='dadosNasc') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='dadosNasc',", "is not None or self.uf is not None ): return True else: return", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='ideBenef', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' %", "outfile, level, namespace_='', name_='endereco', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_", "2 and isinstance(instring, unicode): result = quote_xml(instring).encode('utf8') else: result = GeneratedsSuper.gds_encode(str(instring)) return result", "return '' s1 = (isinstance(inStr, BaseStrType_) and inStr or '%s' % inStr) s2", "TEnderecoBrasil.factory() obj_.build(child_) self.brasil = obj_ obj_.original_tagname_ = 'brasil' elif nodeName_ == 'exterior': obj_", "None def __init__(self, cpfBenef=None, nmBenefic=None, dadosBenef=None): self.original_tagname_ = None self.cpfBenef = cpfBenef self.nmBenefic", "buildChildren(self, child_, node, 
nodeName_, fromsubclass_=False): pass # end class uf class paisNascto(GeneratedsSuper): subclass", "else: return False def export(self, outfile, level, namespace_='', name_='verProc', namespacedef_='', pretty_print=True): imported_ns_def_ =", "False def export(self, outfile, level, namespace_='', name_='nmMae', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmMae') if", "else: return mtvFim(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return", "idQuota_ = self.gds_validate_string(idQuota_, node, 'idQuota') self.idQuota = idQuota_ elif nodeName_ == 'cpfInst': cpfInst_", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nmMae'): pass def exportChildren(self, outfile, level, namespace_='',", "self.nrLograd is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrLograd>%s</%snrLograd>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrLograd), input_name='nrLograd')), namespace_,", "s1.find('\\n') == -1: return '\"%s\"' % s1 else: return '\"\"\"%s\"\"\"' % s1 def", "node.attrib attr_parts = attr_name.split(':') value = None if len(attr_parts) == 1: value =", "= altBeneficio def get_fimBeneficio(self): return self.fimBeneficio def set_fimBeneficio(self, fimBeneficio): self.fimBeneficio = fimBeneficio def", "name_='TDadosBeneficio') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='TDadosBeneficio',", "hasContent_(self): if ( self.tpLograd is not None or self.dscLograd is not None or", "return False def export(self, outfile, level, namespace_='', name_='TDadosBeneficio', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TDadosBeneficio')", "dscLograd def get_nrLograd(self): return self.nrLograd def set_nrLograd(self, nrLograd): self.nrLograd = nrLograd def get_complemento(self):", "(TypeError, ValueError): 
raise_parse_error(node, 'Requires sequence of integers') return values def gds_format_float(self, input_data, input_name=''):", "endereco.factory() obj_.build(child_) self.endereco = obj_ obj_.original_tagname_ = 'endereco' # end class TDadosBenef class", "build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed) for child in node: nodeName_", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpAmb') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "not None or self.tpAmb is not None or self.procEmi is not None or", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrInsc'): pass def exportChildren(self, outfile, level,", "as GenerateDSNamespaceDefs_ except ImportError: GenerateDSNamespaceDefs_ = {} # # The root super-class for", "19 2016, 06:48:10) [GCC 5.4.0 20160609] # # Command line options: # ('--no-process-includes',", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'evtCdBenPrRP': obj_", "== MixedContainer.TypeFloat or \\ self.content_type == MixedContainer.TypeDecimal: outfile.write('<%s>%f</%s>' % ( self.name, self.value, self.name))", "nmPai_ = self.gds_validate_string(nmPai_, node, 'nmPai') self.nmPai = nmPai_ # end class dadosNasc class", "else: return procEmi(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return", "if tag: path_list.append(tag) self.get_path_list_(node.getparent(), path_list) def get_class_obj_(self, node, default_class=None): class_obj1 = default_class if", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "int(float('0.' 
+ time_parts[1]) * 1000000) input_data = '%s.%s' % (time_parts[0], micro_seconds, ) dt", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nrLograd) if", "set_paisResid(self, paisResid): self.paisResid = paisResid def get_dscLograd(self): return self.dscLograd def set_dscLograd(self, dscLograd): self.dscLograd", "return self.nmCid def set_nmCid(self, nmCid): self.nmCid = nmCid def get_codPostal(self): return self.codPostal def", "Tag_strip_pattern_ = re_.compile(r'\\{.*\\}') def get_path_list_(self, node, path_list): if node is None: return tag", "namespace_='', name_='dtNascto', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dtNascto') if imported_ns_def_ is not None: namespacedef_", "if self.infoBeneficio is not None: self.infoBeneficio.export(outfile, level, namespace_, name_='infoBeneficio', pretty_print=pretty_print) def build(self, node):", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='indRetif') if self.hasContent_():", "if nmPai.subclass: return nmPai.subclass(*args_, **kwargs_) else: return nmPai(*args_, **kwargs_) factory = staticmethod(factory) def", "== 'dscLograd': dscLograd_ = child_.text dscLograd_ = self.gds_validate_string(dscLograd_, node, 'dscLograd') self.dscLograd = dscLograd_", "Constants for content_type: TypeNone = 0 TypeText = 1 TypeString = 2 TypeInteger", "dtFimBenef(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TDadosBenef) if subclass is", "== 'Z': tz = GeneratedsSuper._FixedOffsetTZ(0, 'UTC') input_data = input_data[:-1] else: results = GeneratedsSuper.tzoff_pattern.search(input_data)", "None def __init__(self, tpBenef=None, nrBenefic=None, dtFimBenef=None, mtvFim=None): self.original_tagname_ = 
None self.tpBenef = tpBenef", "self.original_tagname_ = None self.tpLograd = tpLograd self.dscLograd = dscLograd self.nrLograd = nrLograd self.complemento", "dt.date() def gds_validate_time(self, input_data, node=None, input_name=''): return input_data def gds_format_time(self, input_data, input_name=''): if", "name_='vrBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "TDadosBeneficio, } USAGE_TEXT = \"\"\" Usage: python <Parser>.py [ -s ] <in_xml_file> \"\"\"", "= GenerateDSNamespaceDefs_.get('nrInsc') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "pass def exportChildren(self, outfile, level, namespace_='', name_='bairro', fromsubclass_=False, pretty_print=True): pass def build(self, node):", "# end class TEnderecoExterior class paisResid(GeneratedsSuper): subclass = None superclass = None def", "inner elements found1 = True for patterns1 in patterns: found2 = False for", "class complemento class bairro(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_", "not None or self.Signature is not None ): return True else: return False", "level, pretty_print) outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or", "5.4.0 20160609] # # Command line options: # ('--no-process-includes', '') # ('-o', 'esociallib/v2_04/evtCdBenPrRP.py')", "typ(value) # # Data representation classes. 
# class eSocial(GeneratedsSuper): subclass = None superclass", "not None: return subclass(*args_, **kwargs_) if paisNac.subclass: return paisNac.subclass(*args_, **kwargs_) else: return paisNac(*args_,", "def __init__(self): self.original_tagname_ = None def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None:", "already_processed, namespace_='', name_='procEmi'): pass def exportChildren(self, outfile, level, namespace_='', name_='procEmi', fromsubclass_=False, pretty_print=True): pass", "AttributeError: pass return _svalue @classmethod def gds_parse_date(cls, input_data): tz = None if input_data[-1]", "self.category, self.content_type, self.name,)) self.value.exportLiteral(outfile, level + 1) showIndent(outfile, level) outfile.write(')\\n') class MemberSpec_(object): def", "getSubclassFromModule_( CurrentSubclassModule_, dtFimBenef) if subclass is not None: return subclass(*args_, **kwargs_) if dtFimBenef.subclass:", "def get_fimBeneficio(self): return self.fimBeneficio def set_fimBeneficio(self, fimBeneficio): self.fimBeneficio = fimBeneficio def hasContent_(self): if", "indRetif.subclass: return indRetif.subclass(*args_, **kwargs_) else: return indRetif(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "obj_.build(child_) self.exterior = obj_ obj_.original_tagname_ = 'exterior' # end class endereco class TEnderecoBrasil(GeneratedsSuper):", "ideEmpregador self.ideBenef = ideBenef self.infoBeneficio = infoBeneficio def factory(*args_, **kwargs_): if CurrentSubclassModule_ is", "False def export(self, outfile, level, namespace_='', name_='nrLograd', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrLograd') if", "name_='vrBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='vrBenef',", "export(self, outfile, level, namespace_='', name_='TDadosBeneficio', namespacedef_='', pretty_print=True): imported_ns_def_ = 
GenerateDSNamespaceDefs_.get('TDadosBeneficio') if imported_ns_def_ is", "self.exportChildren(outfile, level + 1, namespace_='', name_='dtNascto', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "= globals().get(classname) if class_obj2 is not None: class_obj1 = class_obj2 return class_obj1 def", "__init__(self, paisResid=None, dscLograd=None, nrLograd=None, complemento=None, bairro=None, nmCid=None, codPostal=None): self.original_tagname_ = None self.paisResid =", "(os.getcwd()): # esociallib # import sys import re as re_ import base64 import", "paisNac self.nmMae = nmMae self.nmPai = nmPai def factory(*args_, **kwargs_): if CurrentSubclassModule_ is", "input_data.hour, input_data.minute, input_data.second, ('%f' % (float(input_data.microsecond) / 1000000))[2:], ) if input_data.tzinfo is not", "**kwargs_) else: return TEnderecoBrasil(*args_, **kwargs_) factory = staticmethod(factory) def get_tpLograd(self): return self.tpLograd def", "level, already_processed, namespace_='', name_='nmMae'): pass def exportChildren(self, outfile, level, namespace_='', name_='nmMae', fromsubclass_=False, pretty_print=True):", "# end class endereco class TEnderecoBrasil(GeneratedsSuper): \"\"\"Informações do Endereço no Brasil\"\"\" subclass =", "namespace_='', name_='tpLograd', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "!= type(other): return False return self.__dict__ == other.__dict__ def __ne__(self, other): return not", "subclass(*args_, **kwargs_) if dadosNasc.subclass: return dadosNasc.subclass(*args_, **kwargs_) else: return dadosNasc(*args_, **kwargs_) factory =", "-o \"esociallib/v2_04/evtCdBenPrRP.py\" schemas/v2_04/evtCdBenPrRP.xsd # # Current working directory (os.getcwd()): # esociallib # import", "element.text is None: element.text = self.value else: element.text += self.value elif self.category ==", "(eol_, )) def 
exportAttributes(self, outfile, level, already_processed, namespace_='', name_='endereco'): pass def exportChildren(self, outfile,", "self.evtCdBenPrRP is not None or self.Signature is not None ): return True else:", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TDadosBenef) if subclass is not None:", "subclass is not None: return subclass(*args_, **kwargs_) if TDadosBeneficio.subclass: return TDadosBeneficio.subclass(*args_, **kwargs_) else:", "self.bairro = bairro_ elif nodeName_ == 'nmCid': nmCid_ = child_.text nmCid_ = self.gds_validate_string(nmCid_,", "already_processed, namespace_='', name_='paisNascto'): pass def exportChildren(self, outfile, level, namespace_='', name_='paisNascto', fromsubclass_=False, pretty_print=True): pass", "1, namespace_='', name_='tpBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_,", "superclass = None def __init__(self, evtCdBenPrRP=None, Signature=None): self.original_tagname_ = None self.evtCdBenPrRP = evtCdBenPrRP", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TEmprPJ'): pass", "'{0:02d}:{1:02d}'.format( hours, minutes) except AttributeError: pass return _svalue @classmethod def gds_parse_date(cls, input_data): tz", "self.exportChildren(outfile, level + 1, namespace_='', name_='paisNascto', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "== MixedContainer.TypeFloat or self.content_type == MixedContainer.TypeDecimal): text = '%f' % self.value elif self.content_type", "+ 1, namespace_='', name_='mtvFim', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "def export(self, outfile, level, namespace_='', name_='cpfInst', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('cpfInst') if imported_ns_def_", "def 
[Extraction residue: this span originally held overlapping n-gram fragments of a Python module generated by generateDS. Recoverable details: the module is `esociallib/v2_04/evtCdBenPrRP.py`, produced with the generateDS options `--no-process-includes -o esociallib/v2_04/evtCdBenPrRP.py`, and implements the eSocial `evtCdBenPrRP` benefit event layout. It defines GeneratedsSuper-based element classes (eSocial, evtCdBenPrRP, ideBenef, infoBeneficio, TDadosBeneficio, tpPlanRP, fimBeneficio, infoPenMorte, TEnderecoBrasil, TEnderecoExterior, TEmprPJ, MixedContainer, and simple-content wrappers such as cpfBenef, nmBenefic, tpAmb, nrRecibo), each exposing factory(), hasContent_(), export()/exportAttributes()/exportChildren(), and build()/buildChildren() methods, alongside parse()/parseString()/parseEtree() entry points and formatting helpers (quote_xml, gds_format_*, gds_validate_*).]
exportLiteral(self, outfile, level, name): if self.category == MixedContainer.CategoryText: showIndent(outfile, level) outfile.write( 'model_.MixedContainer(%d,", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, paisNac) if subclass", "**kwargs_) else: return codMunic(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ):", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, codMunic) if", "inStr: return '' s1 = (isinstance(inStr, BaseStrType_) and inStr or '%s' % inStr)", "else: return False def export(self, outfile, level, namespace_='', name_='nmBenefic', namespacedef_='', pretty_print=True): imported_ns_def_ =", "eol_)) if self.infoPenMorte is not None: self.infoPenMorte.export(outfile, level, namespace_, name_='infoPenMorte', pretty_print=pretty_print) def build(self,", "TEmprPJ(GeneratedsSuper): \"\"\"Informações do Empregador PJ\"\"\" subclass = None superclass = None def __init__(self,", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'evtCdBenPrRP': obj_ =", "\"%s\"),\\n' % ( self.category, self.content_type, self.name, self.value)) else: # category == MixedContainer.CategoryComplex showIndent(outfile,", "getSubclassFromModule_( CurrentSubclassModule_, mtvFim) if subclass is not None: return subclass(*args_, **kwargs_) if mtvFim.subclass:", "if subclass is not None: return subclass(*args_, **kwargs_) if verProc.subclass: return verProc.subclass(*args_, **kwargs_)", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmMae') if self.hasContent_(): outfile.write('>%s' % (eol_,", "return cpfInst.subclass(*args_, **kwargs_) else: return cpfInst(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if", "name_='uf', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, 
name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "return self.dtNascto def set_dtNascto(self, dtNascto): self.dtNascto = dtNascto def get_codMunic(self): return self.codMunic def", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='idQuota', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "outfile, level, namespace_='', name_='tpLograd', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpLograd') if imported_ns_def_ is not", "rootObj.to_etree(None, name_=rootTag, mapping_=mapping) reverse_mapping = rootObj.gds_reverse_node_mapping(mapping) if not silence: content = etree_.tostring( rootElement,", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='infoPenMorte'): pass def exportChildren(self,", "end class dtFimBenef class mtvFim(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "+= quote_xml_aux(s3) return s2 def quote_xml_aux(inStr): s1 = inStr.replace('&', '&amp;') s1 = s1.replace('<',", "class nrLograd(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "= (total_seconds - (hours * 3600)) // 60 _svalue += '{0:02d}:{1:02d}'.format(hours, minutes) return", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TDadosBenef') if", "namespace_='', name_='codMunic', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if self.tpLograd is", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='uf'): pass def exportChildren(self,", "evtCdBenPrRP.subclass(*args_, **kwargs_) else: return evtCdBenPrRP(*args_, **kwargs_) factory = staticmethod(factory) def get_ideEvento(self): 
return self.ideEvento", "ExternalEncoding = 'ascii' Tag_pattern_ = re_.compile(r'({.*})?(.*)') String_cleanup_pat_ = re_.compile(r\"[\\n\\r\\s]+\") Namespace_extract_pat_ = re_.compile(r'{(.*)}(.*)') CDATA_pattern_", "'' if self.dtNascto is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sdtNascto>%s</%sdtNascto>%s' % (namespace_, self.gds_format_date(self.dtNascto,", "already_processed, namespace_, name_='paisResid') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", "CurrentSubclassModule_, eSocial) if subclass is not None: return subclass(*args_, **kwargs_) if eSocial.subclass: return", "'dadosBenef': obj_ = TDadosBenef.factory() obj_.build(child_) self.dadosBenef = obj_ obj_.original_tagname_ = 'dadosBenef' # end", "nodeName_ == 'procEmi': sval_ = child_.text try: ival_ = int(sval_) except (TypeError, ValueError)", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nmCid', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "verProc.subclass(*args_, **kwargs_) else: return verProc(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if (", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class indRetif class", "= 'Leaving Interpreter, back to program.') # Then use the following line where", "level, namespace_='', name_='procEmi', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "== MixedContainer.TypeDouble: outfile.write('<%s>%g</%s>' % ( self.name, self.value, self.name)) elif self.content_type == MixedContainer.TypeBase64: outfile.write('<%s>%s</%s>'", "return subclass(*args_, **kwargs_) if nrInsc.subclass: return nrInsc.subclass(*args_, **kwargs_) else: return nrInsc(*args_, **kwargs_) factory", "+ 1, namespace_='', name_='TIdeEveTrab', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) 
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "None or self.codPostal is not None ): return True else: return False def", "nrBenefic_ elif nodeName_ == 'dtIniBenef': sval_ = child_.text dval_ = self.gds_parse_date(sval_) self.dtIniBenef =", "IOBuffer else: from io import BytesIO as IOBuffer parser = None doc =", "set_dtNascto(self, dtNascto): self.dtNascto = dtNascto def get_codMunic(self): return self.codMunic def set_codMunic(self, codMunic): self.codMunic", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, cpfInst) if", "GenerateDSNamespaceDefs_.get('evtCdBenPrRP') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "base64 import datetime as datetime_ import warnings as warnings_ try: from lxml import", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmPai') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "def set_dscLograd(self, dscLograd): self.dscLograd = dscLograd def get_nrLograd(self): return self.nrLograd def set_nrLograd(self, nrLograd):", "dt.replace(tzinfo=tz) return dt.date() def gds_validate_time(self, input_data, node=None, input_name=''): return input_data def gds_format_time(self, input_data,", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='uf') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='codPostal') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "True else: return False def export(self, outfile, level, namespace_='', name_='cpfInst', namespacedef_='', pretty_print=True): imported_ns_def_", "self.content_type == MixedContainer.TypeDecimal): text = '%f' % self.value elif self.content_type == MixedContainer.TypeDouble: text", "and 'Id' not in already_processed: 
already_processed.add('Id') self.Id = value def buildChildren(self, child_, node,", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nrBenefic', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='TDadosBenef', pretty_print=pretty_print) showIndent(outfile, level, pretty_print)", "showIndent(outfile, level, pretty_print) outfile.write('<%spaisResid>%s</%spaisResid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisResid), input_name='paisResid')), namespace_, eol_)) if self.dscLograd is", "attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'dadosNasc':", "exporting empty content as empty lines. if self.value.strip(): if len(element) > 0: if", "if self.dadosBenef is not None: self.dadosBenef.export(outfile, level, namespace_, name_='dadosBenef', pretty_print=pretty_print) def build(self, node):", "get_dadosNasc(self): return self.dadosNasc def set_dadosNasc(self, dadosNasc): self.dadosNasc = dadosNasc def get_endereco(self): return self.endereco", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpInsc', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmCid) if", "infoPenMorte) if subclass is not None: return subclass(*args_, **kwargs_) if infoPenMorte.subclass: return infoPenMorte.subclass(*args_,", "not None and 'Id' not in already_processed: already_processed.add('Id') self.Id = value def buildChildren(self,", "CurrentSubclassModule_, evtCdBenPrRP) if subclass is not None: return subclass(*args_, **kwargs_) if evtCdBenPrRP.subclass: return", "= obj_ obj_.original_tagname_ = 'altBeneficio' elif nodeName_ == 'fimBeneficio': obj_ = fimBeneficio.factory() obj_.build(child_)", "= uf 
def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_(", "child_.text paisNascto_ = self.gds_validate_string(paisNascto_, node, 'paisNascto') self.paisNascto = paisNascto_ elif nodeName_ == 'paisNac':", "class TDadosBeneficio class dtIniBenef(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_", "Próprios\"\"\" subclass = None superclass = None def __init__(self, Id=None, ideEvento=None, ideEmpregador=None, ideBenef=None,", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='codMunic', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "or self.tpAmb is not None or self.procEmi is not None or self.verProc is", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrBenefic') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "+= child.tail return text def find_attr_value_(attr_name, node): attrs = node.attrib attr_parts = attr_name.split(':')", "anterior de benefícios para o beneficiário identificado em {ideBenef} e para o qual", "namespace_, name_='nmPai') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='uf') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "namespace_='', name_='vrBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='vrBenef', fromsubclass_=False, pretty_print=True): pass def", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, evtCdBenPrRP) if subclass", "= dval_ elif nodeName_ == 'vrBenef': sval_ = child_.text try: fval_ = float(sval_)", "if self.vrBenef is not None: showIndent(outfile, level, pretty_print) outfile.write('<%svrBenef>%s</%svrBenef>%s' % (namespace_, self.gds_format_float(self.vrBenef, 
input_name='vrBenef'),", "basestring else: BaseStrType_ = str def parsexml_(infile, parser=None, **kwargs): if parser is None:", "child_.text cpfInst_ = self.gds_validate_string(cpfInst_, node, 'cpfInst') self.cpfInst = cpfInst_ # end class infoPenMorte", "CurrentSubclassModule_, fimBeneficio) if subclass is not None: return subclass(*args_, **kwargs_) if fimBeneficio.subclass: return", "self.ideEvento def set_ideEvento(self, ideEvento): self.ideEvento = ideEvento def get_ideEmpregador(self): return self.ideEmpregador def set_ideEmpregador(self,", "namespace_='', name_='tpInsc', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpInsc') if imported_ns_def_ is not None: namespacedef_", "tenha havido ainda informação de término de benefícios.\"\"\" subclass = None superclass =", "optional def get_optional(self): return self.optional def _cast(typ, value): if typ is None or", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpPlanRP') if self.hasContent_(): outfile.write('>%s' %", "True else: return False def export(self, outfile, level, namespace_='', name_='tpPlanRP', namespacedef_='', pretty_print=True): imported_ns_def_", "dict(((v, k) for k, v in mapping.iteritems())) @staticmethod def gds_encode(instring): if sys.version_info.major ==", "= tpPlanRP self.iniBeneficio = iniBeneficio self.altBeneficio = altBeneficio self.fimBeneficio = fimBeneficio def factory(*args_,", "already_processed, namespace_='', name_='TDadosBeneficio'): pass def exportChildren(self, outfile, level, namespace_='', name_='TDadosBeneficio', fromsubclass_=False, pretty_print=True): if", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TEnderecoBrasil'):", "not None: text = node.text else: text = '' for child in node:", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='procEmi') if 
self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "node.sourceline, ) raise GDSParseError(msg) class MixedContainer: # Constants for category: CategoryNone = 0", "raise_parse_error(child_, 'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'indRetif') self.indRetif =", "nmBenefic self.dadosBenef = dadosBenef def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass", "None: self.infoPenMorte.export(outfile, level, namespace_, name_='infoPenMorte', pretty_print=pretty_print) def build(self, node): already_processed = set() self.buildAttributes(node,", "TDadosBenef.factory() obj_.build(child_) self.dadosBenef = obj_ obj_.original_tagname_ = 'dadosBenef' # end class ideBenef class", "Só pode ser informado se já houver informação anterior de benefícios para o", "namespace_='', name_='procEmi'): pass def exportChildren(self, outfile, level, namespace_='', name_='procEmi', fromsubclass_=False, pretty_print=True): pass def", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, codMunic) if subclass", "if subclass is not None: return subclass(*args_, **kwargs_) if vrBenef.subclass: return vrBenef.subclass(*args_, **kwargs_)", "level, namespace_='', name_='dadosNasc', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ =", "self.complemento def set_complemento(self, complemento): self.complemento = complemento def get_bairro(self): return self.bairro def set_bairro(self,", "complemento self.bairro = bairro self.nmCid = nmCid self.codPostal = codPostal def factory(*args_, **kwargs_):", "'%H:%M:%S') dt = dt.replace(tzinfo=tz) return dt.time() def gds_str_lower(self, instring): return instring.lower() def get_path_(self,", "return subclass(*args_, **kwargs_) if paisResid.subclass: return paisResid.subclass(*args_, **kwargs_) else: return paisResid(*args_, **kwargs_) factory", "self.cpfBenef = cpfBenef 
self.nmBenefic = nmBenefic self.dadosBenef = dadosBenef def factory(*args_, **kwargs_): if", "'exterior': TEnderecoExterior, 'ideEmpregador': TEmprPJ, 'ideEvento': TIdeEveTrab, 'iniBeneficio': TDadosBeneficio, } USAGE_TEXT = \"\"\" Usage:", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, fimBeneficio) if subclass", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoBeneficio') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "child_.text nmMae_ = self.gds_validate_string(nmMae_, node, 'nmMae') self.nmMae = nmMae_ elif nodeName_ == 'nmPai':", "export(self, outfile, level, namespace_='', name_='idQuota', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('idQuota') if imported_ns_def_ is", "nmMae=None, nmPai=None): self.original_tagname_ = None if isinstance(dtNascto, BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtNascto, '%Y-%m-%d').date() else:", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='vrBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='vrBenef',", "pass def exportChildren(self, outfile, level, namespace_='', name_='TEnderecoExterior', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ =", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nmBenefic'): pass def exportChildren(self, outfile,", "ival_ = self.gds_validate_integer(ival_, node, 'tpBenef') self.tpBenef = ival_ elif nodeName_ == 'nrBenefic': nrBenefic_", "self.cpfBenef = cpfBenef def get_nmBenefic(self): return self.nmBenefic def set_nmBenefic(self, nmBenefic): self.nmBenefic = nmBenefic", "pretty_print) outfile.write('<%scpfBenef>%s</%scpfBenef>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfBenef), input_name='cpfBenef')), namespace_, eol_)) if self.nmBenefic is not None:", "name_='codPostal', 
namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('codPostal') if imported_ns_def_ is not None: namespacedef_ =", "paisNac_ elif nodeName_ == 'nmMae': nmMae_ = child_.text nmMae_ = self.gds_validate_string(nmMae_, node, 'nmMae')", "child_, node, nodeName_, fromsubclass_=False): pass # end class nmBenefic class infoBeneficio(GeneratedsSuper): \"\"\"Informações relacionadas", "= self.gds_validate_integer(ival_, node, 'tpInsc') self.tpInsc = ival_ elif nodeName_ == 'nrInsc': nrInsc_ =", "return subclass(*args_, **kwargs_) if eSocial.subclass: return eSocial.subclass(*args_, **kwargs_) else: return eSocial(*args_, **kwargs_) factory", "= self.original_tagname_ showIndent(outfile, level, pretty_print) outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' '", "subclass is not None: return subclass(*args_, **kwargs_) if tpBenef.subclass: return tpBenef.subclass(*args_, **kwargs_) else:", "nrLograd_ elif nodeName_ == 'complemento': complemento_ = child_.text complemento_ = self.gds_validate_string(complemento_, node, 'complemento')", "nodeName_, fromsubclass_=False): pass # end class tpInsc class nrInsc(GeneratedsSuper): subclass = None superclass", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='fimBeneficio', pretty_print=pretty_print)", "export(self, outfile, level, namespace_='', name_='codPostal', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('codPostal') if imported_ns_def_ is", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class dtIniBenef", "% (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.bairro), input_name='bairro')), namespace_, eol_)) if self.cep is not None: showIndent(outfile, level,", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('cpfBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if 
pretty_print:", "tpPlanRP self.iniBeneficio = iniBeneficio self.altBeneficio = altBeneficio self.fimBeneficio = fimBeneficio def factory(*args_, **kwargs_):", "if subclass is not None: return subclass(*args_, **kwargs_) if bairro.subclass: return bairro.subclass(*args_, **kwargs_)", "= codMunic self.uf = uf self.paisNascto = paisNascto self.paisNac = paisNac self.nmMae =", "(namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', )) already_processed =", "= 1 CategorySimple = 2 CategoryComplex = 3 # Constants for content_type: TypeNone", "do benefício previdenciário\"\"\" subclass = None superclass = None def __init__(self, tpBenef=None, nrBenefic=None,", "exp: raise_parse_error(child_, 'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'tpBenef') self.tpBenef", "indRetif=None, nrRecibo=None, tpAmb=None, procEmi=None, verProc=None): self.original_tagname_ = None self.indRetif = indRetif self.nrRecibo =", "not None: return subclass(*args_, **kwargs_) if TEnderecoExterior.subclass: return TEnderecoExterior.subclass(*args_, **kwargs_) else: return TEnderecoExterior(*args_,", "= codMunic def get_uf(self): return self.uf def set_uf(self, uf): self.uf = uf def", "relativas a benefícios previdenciários - Término. 
Validação: Só pode ser informado se já", "get_dtIniBenef(self): return self.dtIniBenef def set_dtIniBenef(self, dtIniBenef): self.dtIniBenef = dtIniBenef def get_vrBenef(self): return self.vrBenef", "not None: return subclass(*args_, **kwargs_) if nmPai.subclass: return nmPai.subclass(*args_, **kwargs_) else: return nmPai(*args_,", "exportChildren(self, outfile, level, namespace_='', name_='ideBenef', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else:", "namespace_='', name_='paisNac'): pass def exportChildren(self, outfile, level, namespace_='', name_='paisNac', fromsubclass_=False, pretty_print=True): pass def", "None: element[-1].tail = self.value else: element[-1].tail += self.value else: if element.text is None:", "paisResid=None, dscLograd=None, nrLograd=None, complemento=None, bairro=None, nmCid=None, codPostal=None): self.original_tagname_ = None self.paisResid = paisResid", "node, 'nrBenefic') self.nrBenefic = nrBenefic_ elif nodeName_ == 'dtIniBenef': sval_ = child_.text dval_", "= GenerateDSNamespaceDefs_.get('TEmprPJ') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='ideBenef') if self.hasContent_(): outfile.write('>%s' %", "== 1: value = attrs.get(attr_name) elif len(attr_parts) == 2: prefix, name = attr_parts", "level, namespace_, name_='iniBeneficio', pretty_print=pretty_print) if self.altBeneficio is not None: self.altBeneficio.export(outfile, level, namespace_, name_='altBeneficio',", "== MixedContainer.TypeBoolean: outfile.write('<%s>%d</%s>' % ( self.name, self.value, self.name)) elif self.content_type == MixedContainer.TypeFloat or", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'indRetif': sval_ = child_.text try:", "pass def exportChildren(self, outfile, level, namespace_='', name_='infoPenMorte', fromsubclass_=False, 
pretty_print=True): if pretty_print: eol_ =", "ival_ = self.gds_validate_integer(ival_, node, 'procEmi') self.procEmi = ival_ elif nodeName_ == 'verProc': verProc_", "outfile, level, already_processed, namespace_='', name_='tpInsc'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpInsc', fromsubclass_=False,", "tzoff.days) if total_seconds == 0: _svalue += 'Z' else: if total_seconds < 0:", "== 'ideEmpregador': obj_ = TEmprPJ.factory() obj_.build(child_) self.ideEmpregador = obj_ obj_.original_tagname_ = 'ideEmpregador' elif", "fromsubclass_=False): pass # end class cpfBenef class nmBenefic(GeneratedsSuper): subclass = None superclass =", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='mtvFim') if self.hasContent_(): outfile.write('>%s' % (eol_,", "): return True else: return False def export(self, outfile, level, namespace_='', name_='verProc', namespacedef_='',", "node, 'vrBenef') self.vrBenef = fval_ elif nodeName_ == 'infoPenMorte': obj_ = infoPenMorte.factory() obj_.build(child_)", "nodeName_, fromsubclass_=False): pass # end class nmCid class codPostal(GeneratedsSuper): subclass = None superclass", "obj_.original_tagname_ = 'iniBeneficio' elif nodeName_ == 'altBeneficio': obj_ = TDadosBeneficio.factory() obj_.build(child_) self.altBeneficio =", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='codMunic') if self.hasContent_(): outfile.write('>%s' % (eol_,", "not None: return subclass(*args_, **kwargs_) if codPostal.subclass: return codPostal.subclass(*args_, **kwargs_) else: return codPostal(*args_,", "as IOBuffer else: from io import BytesIO as IOBuffer parser = None doc", "not None: return subclass(*args_, **kwargs_) if infoBeneficio.subclass: return infoBeneficio.subclass(*args_, **kwargs_) else: return infoBeneficio(*args_,", "CurrentSubclassModule_, nrInsc) if subclass is not None: return subclass(*args_, **kwargs_) if 
[Fragment: generateDS-generated Python XML data-binding code for the eSocial event evtCdBenPrRP ("Benefícios Previdenciários - Regimes Próprios"). The extract was shredded into out-of-order shards and the original line ordering could not be recovered. The identifiable content is the standard generateDS output: binding classes such as TDadosBenef, ideBenef, cpfBenef, nmBenefic, dadosNasc, endereco, TEnderecoBrasil, TEnderecoExterior, infoBeneficio, infoPenMorte, TDadosBeneficio, fimBeneficio, tpAmb, procEmi and nrInsc, each with the generated factory(), hasContent_(), export()/exportAttributes()/exportChildren() and build()/buildAttributes()/buildChildren() methods; the GeneratedsSuper helper mixin (gds_format_integer, gds_format_datetime, gds_parse_datetime, gds_validate_string, quote_xml, showIndent); and the module-level MixedContainer, raise_parse_error and parseString/parseLiteral utilities built on lxml.]
None", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nrInsc class", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='complemento') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "not None or self.complemento is not None or self.bairro is not None or", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, infoPenMorte) if subclass is not None:", "namespace_='', name_='mtvFim', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "def exportChildren(self, outfile, level, namespace_='', name_='TEnderecoBrasil', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n'", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='paisNac',", "self.value else: element.text += self.value elif self.category == MixedContainer.CategorySimple: subelement = etree_.SubElement( element,", "namespace_='', name_='TDadosBeneficio', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "outfile, level, name): if self.content_type == MixedContainer.TypeString: outfile.write('<%s>%s</%s>' % ( self.name, self.value, self.name))", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'tpInsc': sval_ =", "# # Current working directory (os.getcwd()): # esociallib # import sys import re", "return subclass(*args_, **kwargs_) if fimBeneficio.subclass: return fimBeneficio.subclass(*args_, **kwargs_) else: return fimBeneficio(*args_, **kwargs_) factory", "IPython', ## exit_msg = 'Leaving Interpreter, back to program.') # Then use the", "nodeName_ == 'brasil': obj_ = TEnderecoBrasil.factory() obj_.build(child_) self.brasil = obj_ obj_.original_tagname_ = 'brasil'", "already_processed, namespace_, name_='tpBenef') if 
self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", "name_='nmBenefic', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "= getSubclassFromModule_( CurrentSubclassModule_, idQuota) if subclass is not None: return subclass(*args_, **kwargs_) if", "showIndent(outfile, level, pretty_print) outfile.write('<%scpfBenef>%s</%scpfBenef>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfBenef), input_name='cpfBenef')), namespace_, eol_)) if self.nmBenefic is", "namespace_='', name_='ideBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='ideBenef', fromsubclass_=False, pretty_print=True): if pretty_print:", "procEmi(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "name_='nrRecibo') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nrRecibo',", "input_name='dtNascto'), namespace_, eol_)) if self.codMunic is not None: showIndent(outfile, level, pretty_print) outfile.write('<%scodMunic>%s</%scodMunic>%s' %", "not None: names = classname.split(':') if len(names) == 2: classname = names[1] class_obj2", "return None @classmethod def gds_reverse_node_mapping(cls, mapping): return dict(((v, k) for k, v in", "level + 1, namespace_='', name_='complemento', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "= etree_.ETCompatXMLParser() except AttributeError: # fallback to xml.etree parser = etree_.XMLParser() doc =", "level, pretty_print) outfile.write('<%suf>%s</%suf>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.uf), input_name='uf')), namespace_, eol_)) if self.paisNascto is not", "return self.Signature def set_Signature(self, Signature): self.Signature = Signature def hasContent_(self): if ( self.evtCdBenPrRP", 
"CurrentSubclassModule_, paisNascto) if subclass is not None: return subclass(*args_, **kwargs_) if paisNascto.subclass: return", "} # try: from generatedsnamespaces import GenerateDSNamespaceDefs as GenerateDSNamespaceDefs_ except ImportError: GenerateDSNamespaceDefs_ =", "fromsubclass_=False): pass # end class cpfInst GDSClassesMapping = { 'altBeneficio': TDadosBeneficio, 'brasil': TEnderecoBrasil,", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TIdeEveTrab) if subclass is", "( self.name, base64.b64encode(self.value), self.name)) def to_etree(self, element): if self.category == MixedContainer.CategoryText: # Prevent", "= brasil def get_exterior(self): return self.exterior def set_exterior(self, exterior): self.exterior = exterior def", "len(self.data_type) > 0: return self.data_type[-1] else: return 'xs:string' else: return self.data_type def set_container(self,", "node.attrib, already_processed) for child in node: nodeName_ = Tag_pattern_.match(child.tag).groups()[-1] self.buildChildren(child, node, nodeName_) return", "return dt.date() def gds_validate_time(self, input_data, node=None, input_name=''): return input_data def gds_format_time(self, input_data, input_name=''):", "name_='nmMae', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmMae') if imported_ns_def_ is not None: namespacedef_ =", "\"'%s'\" % s1 else: return \"'''%s'''\" % s1 else: if s1.find('\"') != -1:", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='infoBeneficio', pretty_print=pretty_print) showIndent(outfile, level, pretty_print)", "subclass is not None: return subclass(*args_, **kwargs_) if nrLograd.subclass: return nrLograd.subclass(*args_, **kwargs_) else:", "TEnderecoExterior(*args_, **kwargs_) factory = staticmethod(factory) def get_paisResid(self): return self.paisResid def set_paisResid(self, paisResid): self.paisResid", "xml_declaration=True, encoding=\"utf-8\") 
sys.stdout.write(content) sys.stdout.write('\\n') return rootObj, rootElement, mapping, reverse_mapping def parseString(inString, silence=False): if", "return self.tpPlanRP def set_tpPlanRP(self, tpPlanRP): self.tpPlanRP = tpPlanRP def get_iniBeneficio(self): return self.iniBeneficio def", "try: if input_data.tzinfo is not None: tzoff = input_data.tzinfo.utcoffset(input_data) if tzoff is not", "if procEmi.subclass: return procEmi.subclass(*args_, **kwargs_) else: return procEmi(*args_, **kwargs_) factory = staticmethod(factory) def", "TEnderecoExterior class paisResid(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ =", "namespace_, eol_)) if self.nmMae is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmMae>%s</%snmMae>%s' % (namespace_,", "if len(args) == 1: parse(args[0]) else: usage() if __name__ == '__main__': #import pdb;", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpInsc') if self.hasContent_(): outfile.write('>%s' %", "= self.gds_validate_string(nmMae_, node, 'nmMae') self.nmMae = nmMae_ elif nodeName_ == 'nmPai': nmPai_ =", "**kwargs_) else: return dscLograd(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ):", "= None superclass = None def __init__(self, tpPlanRP=None, iniBeneficio=None, altBeneficio=None, fimBeneficio=None): self.original_tagname_ =", "\"ElementtypeB\": \"http://www.xxx.com/namespaceB\", # } # try: from generatedsnamespaces import GenerateDSNamespaceDefs as GenerateDSNamespaceDefs_ except", "not None: return subclass(*args_, **kwargs_) if tpLograd.subclass: return tpLograd.subclass(*args_, **kwargs_) else: return tpLograd(*args_,", "= self.gds_validate_string(cpfInst_, node, 'cpfInst') self.cpfInst = cpfInst_ # end class infoPenMorte class idQuota(GeneratedsSuper):", "if results.group(1) == '-': tzoff *= -1 tz = GeneratedsSuper._FixedOffsetTZ( tzoff, results.group(0)) 
input_data", "s1[pos:] s2 += quote_xml_aux(s3) return s2 def quote_xml_aux(inStr): s1 = inStr.replace('&', '&amp;') s1", "dt): return self.__name def dst(self, dt): return None def gds_format_string(self, input_data, input_name=''): return", "def exportChildren(self, outfile, level, namespace_='', name_='nrRecibo', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "def get_nrRecibo(self): return self.nrRecibo def set_nrRecibo(self, nrRecibo): self.nrRecibo = nrRecibo def get_tpAmb(self): return", "def gds_format_boolean(self, input_data, input_name=''): return ('%s' % input_data).lower() def gds_validate_boolean(self, input_data, node=None, input_name=''):", "def export(self, outfile, level, namespace_='', name_='nrLograd', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrLograd') if imported_ns_def_", "if infoBeneficio.subclass: return infoBeneficio.subclass(*args_, **kwargs_) else: return infoBeneficio(*args_, **kwargs_) factory = staticmethod(factory) def", "None if isinstance(dtNascto, BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtNascto, '%Y-%m-%d').date() else: initvalue_ = dtNascto self.dtNascto", "None: showIndent(outfile, level, pretty_print) outfile.write('<%snmPai>%s</%snmPai>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmPai), input_name='nmPai')), namespace_, eol_)) def build(self,", "namespace_='', name_='endereco'): pass def exportChildren(self, outfile, level, namespace_='', name_='endereco', fromsubclass_=False, pretty_print=True): if pretty_print:", "return tag = GeneratedsSuper.Tag_strip_pattern_.sub('', node.tag) if tag: path_list.append(tag) self.get_path_list_(node.getparent(), path_list) def get_class_obj_(self, node,", "exportChildren(self, outfile, level, namespace_='', name_='procEmi', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "( ): return True else: return False def 
export(self, outfile, level, namespace_='', name_='procEmi',", "None: return subclass(*args_, **kwargs_) if nrRecibo.subclass: return nrRecibo.subclass(*args_, **kwargs_) else: return nrRecibo(*args_, **kwargs_)", "outfile.write('<%snrBenefic>%s</%snrBenefic>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrBenefic), input_name='nrBenefic')), namespace_, eol_)) if self.dtFimBenef is not None: showIndent(outfile,", "None superclass = None def __init__(self, tpPlanRP=None, iniBeneficio=None, altBeneficio=None, fimBeneficio=None): self.original_tagname_ = None", "node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed) for child in node: nodeName_ =", "'dscLograd') self.dscLograd = dscLograd_ elif nodeName_ == 'nrLograd': nrLograd_ = child_.text nrLograd_ =", "= TDadosBeneficio.factory() obj_.build(child_) self.altBeneficio = obj_ obj_.original_tagname_ = 'altBeneficio' elif nodeName_ == 'fimBeneficio':", "TDadosBeneficio.subclass: return TDadosBeneficio.subclass(*args_, **kwargs_) else: return TDadosBeneficio(*args_, **kwargs_) factory = staticmethod(factory) def get_tpBenef(self):", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='codMunic'): pass def exportChildren(self, outfile, level,", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpInsc'): pass def exportChildren(self, outfile, level,", "self.nrLograd = nrLograd_ elif nodeName_ == 'complemento': complemento_ = child_.text complemento_ = self.gds_validate_string(complemento_,", "not None: total_seconds = tzoff.seconds + (86400 * tzoff.days) if total_seconds == 0:", "if subclass is not None: return subclass(*args_, **kwargs_) if evtCdBenPrRP.subclass: return evtCdBenPrRP.subclass(*args_, **kwargs_)", "node, nodeName_, fromsubclass_=False): pass # end class uf class paisNascto(GeneratedsSuper): subclass = None", "eol_)) if self.bairro is not None: showIndent(outfile, level, 
pretty_print) outfile.write('<%sbairro>%s</%sbairro>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.bairro),", "level, already_processed, namespace_, name_='vrBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "None or self.nmPai is not None ): return True else: return False def", "attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'brasil':", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='uf') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "): return True else: return False def export(self, outfile, level, namespace_='', name_='cep', namespacedef_='',", "namespace_='', name_='TIdeEveTrab', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TIdeEveTrab') if imported_ns_def_ is not None: namespacedef_", "def exportChildren(self, outfile, level, namespace_='', name_='endereco', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n'", "outfile, level, namespace_='', name_='tpLograd', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "ideBenef def get_infoBeneficio(self): return self.infoBeneficio def set_infoBeneficio(self, infoBeneficio): self.infoBeneficio = infoBeneficio def get_Id(self):", "self.__offset def tzname(self, dt): return self.__name def dst(self, dt): return None def gds_format_string(self,", "node: nodeName_ = Tag_pattern_.match(child.tag).groups()[-1] self.buildChildren(child, node, nodeName_) return self def buildAttributes(self, node, attrs,", "import StringIO as IOBuffer else: from io import BytesIO as IOBuffer parser =", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpInsc) if subclass is not None: return subclass(*args_,", "pretty_print=True) return rootObj def parseEtree(inFileName, silence=False): parser = None doc = parsexml_(inFileName, 
parser)", "**kwargs_) if paisNac.subclass: return paisNac.subclass(*args_, **kwargs_) else: return paisNac(*args_, **kwargs_) factory = staticmethod(factory)", "= tpLograd_ elif nodeName_ == 'dscLograd': dscLograd_ = child_.text dscLograd_ = self.gds_validate_string(dscLograd_, node,", "def get_indRetif(self): return self.indRetif def set_indRetif(self, indRetif): self.indRetif = indRetif def get_nrRecibo(self): return", "input_data.month, input_data.day, input_data.hour, input_data.minute, input_data.second, ('%f' % (float(input_data.microsecond) / 1000000))[2:], ) if input_data.tzinfo", "de benefícios para o beneficiário identificado em {ideBenef} e para o qual não", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, verProc) if", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpLograd'): pass def", "dtFimBenef def get_mtvFim(self): return self.mtvFim def set_mtvFim(self, mtvFim): self.mtvFim = mtvFim def hasContent_(self):", "classes # # Calls to the methods in these classes are generated by", "node=None, input_name=''): return input_data def gds_format_boolean_list(self, input_data, input_name=''): return '%s' % ' '.join(input_data)", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, evtCdBenPrRP)", "name_='paisNascto', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "not None and 'Id' not in already_processed: already_processed.add('Id') outfile.write(' Id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.Id), input_name='Id')),", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpAmb) if subclass is not None: return subclass(*args_,", "set_dtIniBenef(self, dtIniBenef): self.dtIniBenef = dtIniBenef def get_vrBenef(self): return 
self.vrBenef def set_vrBenef(self, vrBenef): self.vrBenef", "namespace_='', name_='bairro', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('bairro') if imported_ns_def_ is not None: namespacedef_", "element[-1].tail += self.value else: if element.text is None: element.text = self.value else: element.text", "child_.text dval_ = self.gds_parse_date(sval_) self.dtFimBenef = dval_ elif nodeName_ == 'mtvFim': sval_ =", "def set_indRetif(self, indRetif): self.indRetif = indRetif def get_nrRecibo(self): return self.nrRecibo def set_nrRecibo(self, nrRecibo):", "return eSocial.subclass(*args_, **kwargs_) else: return eSocial(*args_, **kwargs_) factory = staticmethod(factory) def get_evtCdBenPrRP(self): return", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='infoBeneficio'): pass", "input_data def gds_validate_double(self, input_data, node=None, input_name=''): return input_data def gds_format_double_list(self, input_data, input_name=''): return", "node, nodeName_, fromsubclass_=False): pass # end class vrBenef class infoPenMorte(GeneratedsSuper): \"\"\"Informações relativas a", "namespace_, name_='idQuota') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TDadosBenef'): pass", "name_='dadosNasc', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dadosNasc') if imported_ns_def_ is not None: namespacedef_ =", "do not modify CDATA sections.\" if not inStr: return '' s1 = (isinstance(inStr,", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TIdeEveTrab'): pass def exportChildren(self, outfile, level,", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, 
namespace_='', name_='tpAmb', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "input_data.month, input_data.day, input_data.hour, input_data.minute, input_data.second, ) else: _svalue = '%04d-%02d-%02dT%02d:%02d:%02d.%s' % ( input_data.year,", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='complemento'): pass def exportChildren(self, outfile, level, namespace_='', name_='complemento',", "outfile, level, namespace_='', name_='codPostal', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('codPostal') if imported_ns_def_ is not", "if pretty_print: eol_ = '\\n' else: eol_ = '' if self.cpfBenef is not", "+ 1, namespace_='', name_='endereco', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TEnderecoBrasil) if subclass is not None: return", "class complemento(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "fromsubclass_=False): pass # end class tpPlanRP class fimBeneficio(GeneratedsSuper): \"\"\"Informações relativas a benefícios previdenciários", "def gds_validate_double_list( self, input_data, node=None, input_name=''): values = input_data.split() for value in values:", "= tpBenef self.nrBenefic = nrBenefic if isinstance(dtIniBenef, BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtIniBenef, '%Y-%m-%d').date() else:", "MixedContainer.CategoryComplex self.value.to_etree(element) def to_etree_simple(self): if self.content_type == MixedContainer.TypeString: text = self.value elif (self.content_type", "fromsubclass_=False): if nodeName_ == 'ideEvento': obj_ = TIdeEveTrab.factory() obj_.build(child_) self.ideEvento = obj_ obj_.original_tagname_", "else: return TIdeEveTrab(*args_, **kwargs_) factory = staticmethod(factory) def get_indRetif(self): return self.indRetif def set_indRetif(self,", 
"return subclass(*args_, **kwargs_) if uf.subclass: return uf.subclass(*args_, **kwargs_) else: return uf(*args_, **kwargs_) factory", "cep.subclass(*args_, **kwargs_) else: return cep(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if (", "node, nodeName_, fromsubclass_=False): pass # end class tpInsc class nrInsc(GeneratedsSuper): subclass = None", "pretty_print) outfile.write('<%suf>%s</%suf>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.uf), input_name='uf')), namespace_, eol_)) if self.paisNascto is not None:", "else: return codMunic(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return", "tpAmb self.procEmi = procEmi self.verProc = verProc def factory(*args_, **kwargs_): if CurrentSubclassModule_ is", "MixedContainer.TypeBase64: outfile.write('<%s>%s</%s>' % ( self.name, base64.b64encode(self.value), self.name)) def to_etree(self, element): if self.category ==", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('TIdeEveTrab') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEmprPJ') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "identificado em {ideBenef} e para o qual não tenha havido ainda informação de", "sval_ = child_.text try: fval_ = float(sval_) except (TypeError, ValueError) as exp: raise_parse_error(child_,", "outfile, level, already_processed, namespace_='', name_='nmBenefic'): pass def exportChildren(self, outfile, level, namespace_='', name_='nmBenefic', fromsubclass_=False,", "# esociallib # import sys import re as re_ import base64 import datetime", "name_='tpPlanRP') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpPlanRP',", "self.ideEmpregador is not None: self.ideEmpregador.export(outfile, level, namespace_, name_='ideEmpregador', pretty_print=pretty_print) 
if self.ideBenef is not", "1, namespace_='', name_='nrLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_,", "= \"\"\" Usage: python <Parser>.py [ -s ] <in_xml_file> \"\"\" def usage(): print(USAGE_TEXT)", "nmPai.subclass: return nmPai.subclass(*args_, **kwargs_) else: return nmPai(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "if self.tpPlanRP is not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpPlanRP>%s</%stpPlanRP>%s' % (namespace_, self.gds_format_integer(self.tpPlanRP, input_name='tpPlanRP'),", "self.value elif (self.content_type == MixedContainer.TypeFloat or self.content_type == MixedContainer.TypeDecimal): text = '%f' %", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('codMunic') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, procEmi) if subclass is not None: return subclass(*args_,", "if isinstance(dtFimBenef, BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtFimBenef, '%Y-%m-%d').date() else: initvalue_ = dtFimBenef self.dtFimBenef =", "return True else: return False def export(self, outfile, level, namespace_='', name_='nrLograd', namespacedef_='', pretty_print=True):", "pass # end class idQuota class cpfInst(GeneratedsSuper): subclass = None superclass = None", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='fimBeneficio', pretty_print=pretty_print) showIndent(outfile,", "nrBenefic if isinstance(dtFimBenef, BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtFimBenef, '%Y-%m-%d').date() else: initvalue_ = dtFimBenef self.dtFimBenef", "factory = staticmethod(factory) def get_indRetif(self): return self.indRetif def set_indRetif(self, indRetif): self.indRetif = indRetif", "def exportAttributes(self, outfile, level, 
already_processed, namespace_='', name_='idQuota'): pass def exportChildren(self, outfile, level, namespace_='',", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%svrBenef>%s</%svrBenef>%s' % (namespace_, self.gds_format_float(self.vrBenef, input_name='vrBenef'), namespace_, eol_)) if", "= ival_ elif nodeName_ == 'iniBeneficio': obj_ = TDadosBeneficio.factory() obj_.build(child_) self.iniBeneficio = obj_", "dt = dt.replace(tzinfo=tz) return dt.date() def gds_validate_time(self, input_data, node=None, input_name=''): return input_data def", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrBenefic') if self.hasContent_(): outfile.write('>%s' % (eol_,", "namespace_, eol_)) if self.nmPai is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmPai>%s</%snmPai>%s' % (namespace_,", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dtNascto')", "namespace_, eol_)) if self.complemento is not None: showIndent(outfile, level, pretty_print) outfile.write('<%scomplemento>%s</%scomplemento>%s' % (namespace_,", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEnderecoBrasil') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "elif nodeName_ == 'cpfInst': cpfInst_ = child_.text cpfInst_ = self.gds_validate_string(cpfInst_, node, 'cpfInst') self.cpfInst", "Usage: python <Parser>.py [ -s ] <in_xml_file> \"\"\" def usage(): print(USAGE_TEXT) sys.exit(1) def", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrBenefic') if self.hasContent_():", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TEmprPJ'): pass def exportChildren(self, outfile, level, namespace_='', name_='TEmprPJ',", "= child_.text try: fval_ = float(sval_) except (TypeError, 
# -*- coding: utf-8 -*-

#
# Generated Tue Oct 10 00:42:21 2017 by generateDS.py version 2.28b.
#
# Command line arguments:
#     schemas/v2_04/evtCdBenPrRP.xsd
#
# Command line:
#     /usr/local/bin/generateDS --no-process-includes -o "esociallib/v2_04/evtCdBenPrRP.py" schemas/v2_04/evtCdBenPrRP.xsd
#


def get_root_tag(node):
    tag = Tag_pattern_.match(node.tag).groups()[-1]
    rootClass = GDSClassesMapping.get(tag)
    if rootClass is None:
        rootClass = globals().get(tag)
    return tag, rootClass


def parse(inFileName, silence=False):
    parser = None
    doc = parsexml_(inFileName, parser)
    rootNode = doc.getroot()
    rootTag, rootClass = get_root_tag(rootNode)
    if rootClass is None:
        rootTag = 'eSocial'
        rootClass = eSocial
    rootObj = rootClass.factory()
    rootObj.build(rootNode)
    # Enable Python to collect the space used by the DOM.
    doc = None
    if not silence:
        sys.stdout.write('<?xml version="1.0" ?>\n')
        rootObj.export(
            sys.stdout, 0, name_=rootTag,
            namespacedef_='',
            pretty_print=True)
    return rootObj


def parseEtree(inFileName, silence=False):
    parser = None
    doc = parsexml_(inFileName, parser)
    rootNode = doc.getroot()
    rootTag, rootClass = get_root_tag(rootNode)
    if rootClass is None:
        rootTag = 'eSocial'
        rootClass = eSocial
    rootObj = rootClass.factory()
    rootObj.build(rootNode)
    # Enable Python to collect the space used by the DOM.
    doc = None
    mapping = {}
    rootElement = rootObj.to_etree(None, name_=rootTag, mapping_=mapping)
    reverse_mapping = rootObj.gds_reverse_node_mapping(mapping)
    if not silence:
        content = etree_.tostring(
            rootElement, pretty_print=True,
            xml_declaration=True, encoding="utf-8")
        sys.stdout.write(content)
        sys.stdout.write('\n')
    return rootObj, rootElement, mapping, reverse_mapping


def parseString(inString, silence=False):
    '''Parse a string, create the object tree, and export it.

    Arguments:
    - inString -- A string.  This XML fragment should not start
      with an XML declaration containing an encoding.
    - silence -- A boolean.  If False, export the object.

    Returns -- The root object in the tree.
    '''
    if sys.version_info.major == 2:
        from StringIO import StringIO as IOBuffer
    else:
        from io import BytesIO as IOBuffer
    parser = None
    rootNode = parsexmlstring_(inString, parser)
    rootTag, rootClass = get_root_tag(rootNode)
    if rootClass is None:
        rootTag = 'eSocial'
        rootClass = eSocial
    rootObj = rootClass.factory()
    rootObj.build(rootNode)
    if not silence:
        sys.stdout.write('<?xml version="1.0" ?>\n')
        rootObj.export(
            sys.stdout, 0, name_=rootTag,
            namespacedef_='')
    return rootObj


USAGE_TEXT = """
Usage: python <Parser>.py [ -s ] <in_xml_file>
"""


def usage():
    print(USAGE_TEXT)
    sys.exit(1)


def main():
    args = sys.argv[1:]
    if len(args) == 1:
        parse(args[0])
    else:
        usage()


if __name__ == '__main__':
    #import pdb; pdb.set_trace()
    main()

__all__ = [
    "TDadosBenef",
]
MixedContainer.TypeBase64: text = '%s'", "parsexml_(infile, parser=None, **kwargs): if parser is None: # Use the lxml ElementTree compatible", "[GCC 5.4.0 20160609] # # Command line options: # ('--no-process-includes', '') # ('-o',", "int(value) except (TypeError, ValueError): raise_parse_error(node, 'Requires sequence of integers') return values def gds_format_float(self,", "= self.gds_validate_string(nrBenefic_, node, 'nrBenefic') self.nrBenefic = nrBenefic_ elif nodeName_ == 'dtIniBenef': sval_ =", "or self.complemento is not None or self.bairro is not None or self.nmCid is", "input_data, node=None, input_name=''): values = input_data.split() for value in values: try: float(value) except", "end class TEnderecoExterior class paisResid(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "None: showIndent(outfile, level, pretty_print) outfile.write('<%sindRetif>%s</%sindRetif>%s' % (namespace_, self.gds_format_integer(self.indRetif, input_name='indRetif'), namespace_, eol_)) if self.nrRecibo", "getSubclassFromModule_( CurrentSubclassModule_, codMunic) if subclass is not None: return subclass(*args_, **kwargs_) if codMunic.subclass:", "subclass is not None: return subclass(*args_, **kwargs_) if tpLograd.subclass: return tpLograd.subclass(*args_, **kwargs_) else:", "( input_data.year, input_data.month, input_data.day, ) try: if input_data.tzinfo is not None: tzoff =", "self.tpInsc = ival_ elif nodeName_ == 'nrInsc': nrInsc_ = child_.text nrInsc_ = self.gds_validate_string(nrInsc_,", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='mtvFim'): pass def exportChildren(self, outfile, level,", "self.bairro = bairro self.cep = cep self.codMunic = codMunic self.uf = uf def", "= Signature def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_(", "end class paisResid class nmCid(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "pensão por 
morte\"\"\" subclass = None superclass = None def __init__(self, idQuota=None, cpfInst=None):", "get_ideBenef(self): return self.ideBenef def set_ideBenef(self, ideBenef): self.ideBenef = ideBenef def get_infoBeneficio(self): return self.infoBeneficio", "self.gds_format_integer(self.codMunic, input_name='codMunic'), namespace_, eol_)) if self.uf is not None: showIndent(outfile, level, pretty_print) outfile.write('<%suf>%s</%suf>%s'", "outfile, level, namespace_='', name_='endereco', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('endereco') if imported_ns_def_ is not", "obj_.build(child_) self.ideBenef = obj_ obj_.original_tagname_ = 'ideBenef' elif nodeName_ == 'infoBeneficio': obj_ =", "None or self.dadosBenef is not None ): return True else: return False def", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TIdeEveTrab) if subclass is not", "subclass = None superclass = None def __init__(self, cpfBenef=None, nmBenefic=None, dadosBenef=None): self.original_tagname_ =", "if paisNac.subclass: return paisNac.subclass(*args_, **kwargs_) else: return paisNac(*args_, **kwargs_) factory = staticmethod(factory) def", "else: if element.text is None: element.text = self.value else: element.text += self.value elif", "+ 1, namespace_='', name_='tpAmb', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "getSubclassFromModule_( CurrentSubclassModule_, eSocial) if subclass is not None: return subclass(*args_, **kwargs_) if eSocial.subclass:", "self.nrRecibo = nrRecibo self.tpAmb = tpAmb self.procEmi = procEmi self.verProc = verProc def", "not None: text += child.tail return text def find_attr_value_(attr_name, node): attrs = node.attrib", "namespace_, eol_)) if self.nmBenefic is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmBenefic>%s</%snmBenefic>%s' % (namespace_,", "Namespace prefix definition table 
(and other attributes, too) # # The module generatedsnamespaces,", "= altBeneficio self.fimBeneficio = fimBeneficio def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None:", "): return True else: return False def export(self, outfile, level, namespace_='', name_='cpfBenef', namespacedef_='',", "= GenerateDSNamespaceDefs_.get('nrBenefic') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "return codMunic.subclass(*args_, **kwargs_) else: return codMunic(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if", "is not None or self.dscLograd is not None or self.nrLograd is not None", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpInsc', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "already_processed, namespace_, name_='dtFimBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", "obj_.build(child_) self.ideEvento = obj_ obj_.original_tagname_ = 'ideEvento' elif nodeName_ == 'ideEmpregador': obj_ =", "already_processed, namespace_='', name_='dtIniBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='dtIniBenef', fromsubclass_=False, pretty_print=True): pass", "procEmi.subclass: return procEmi.subclass(*args_, **kwargs_) else: return procEmi(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmBenefic) if subclass is not", "name_='verProc'): pass def exportChildren(self, outfile, level, namespace_='', name_='verProc', fromsubclass_=False, pretty_print=True): pass def build(self,", "subclass = None superclass = None def __init__(self, brasil=None, exterior=None): self.original_tagname_ = None", "node=None, input_name=''): values = input_data.split() for value in values: if value not in", "nmBenefic class infoBeneficio(GeneratedsSuper): 
\"\"\"Informações relacionadas ao benefício previdenciário concedido ao servidor\"\"\" subclass =", "bairro_ = self.gds_validate_string(bairro_, node, 'bairro') self.bairro = bairro_ elif nodeName_ == 'cep': cep_", "name_='infoPenMorte', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoPenMorte') if imported_ns_def_ is not None: namespacedef_ =", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='idQuota') if self.hasContent_():", "'%s.%s' % (time_parts[0], micro_seconds, ) dt = datetime_.datetime.strptime( input_data, '%Y-%m-%dT%H:%M:%S.%f') else: dt =", "level, namespace_='', name_='paisResid', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('paisResid') if imported_ns_def_ is not None:", "dadosNasc def get_endereco(self): return self.endereco def set_endereco(self, endereco): self.endereco = endereco def hasContent_(self):", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TDadosBeneficio'): pass", "is not None: return subclass(*args_, **kwargs_) if TEnderecoExterior.subclass: return TEnderecoExterior.subclass(*args_, **kwargs_) else: return", "or self.Signature is not None ): return True else: return False def export(self,", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='indRetif', pretty_print=pretty_print)", "'__main__': #import pdb; pdb.set_trace() main() __all__ = [ \"TDadosBenef\", \"TDadosBeneficio\", \"TEmprPJ\", \"TEnderecoBrasil\", \"TEnderecoExterior\",", "if ( self.ideEvento is not None or self.ideEmpregador is not None or self.ideBenef", "self.exterior is not None ): return True else: return False def export(self, outfile,", "def get_nmMae(self): return self.nmMae def set_nmMae(self, nmMae): self.nmMae = nmMae def get_nmPai(self): return", "(hours * 3600)) // 60 
_svalue += '{0:02d}:{1:02d}'.format( hours, minutes) except AttributeError: pass", "TIdeEveTrab.subclass: return TIdeEveTrab.subclass(*args_, **kwargs_) else: return TIdeEveTrab(*args_, **kwargs_) factory = staticmethod(factory) def get_indRetif(self):", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='nrRecibo', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "ignore comments. try: parser = etree_.ETCompatXMLParser() except AttributeError: # fallback to xml.etree parser", "= GenerateDSNamespaceDefs_.get('paisNascto') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "% (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.codPostal), input_name='codPostal')), namespace_, eol_)) def build(self, node): already_processed = set() self.buildAttributes(node,", "vrBenef class infoPenMorte(GeneratedsSuper): \"\"\"Informações relativas a pensão por morte\"\"\" subclass = None superclass", "= getSubclassFromModule_( CurrentSubclassModule_, tpPlanRP) if subclass is not None: return subclass(*args_, **kwargs_) if", "pass def exportChildren(self, outfile, level, namespace_='', name_='codMunic', fromsubclass_=False, pretty_print=True): pass def build(self, node):", "if cep.subclass: return cep.subclass(*args_, **kwargs_) else: return cep(*args_, **kwargs_) factory = staticmethod(factory) def", "self.ideEvento = obj_ obj_.original_tagname_ = 'ideEvento' elif nodeName_ == 'ideEmpregador': obj_ = TEmprPJ.factory()", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmBenefic')", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpAmb'): pass", "if subclass is not None: return subclass(*args_, **kwargs_) if tpInsc.subclass: return tpInsc.subclass(*args_, **kwargs_)", "is importable, must contain # a dictionary named 
GeneratedsNamespaceDefs. This Python dictionary #", "The export method for any class for which there is # a namespace", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrInsc') if self.hasContent_():", "verProc.subclass: return verProc.subclass(*args_, **kwargs_) else: return verProc(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('uf') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "datetime_ import warnings as warnings_ try: from lxml import etree as etree_ except", "return cpfInst(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class tpAmb class procEmi(GeneratedsSuper):", "level, namespace_='', name_='nrBenefic', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrBenefic') if imported_ns_def_ is not None:", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='evtCdBenPrRP') if self.hasContent_(): outfile.write('>%s' % (eol_,", "nmBenefic def get_dadosBenef(self): return self.dadosBenef def set_dadosBenef(self, dadosBenef): self.dadosBenef = dadosBenef def hasContent_(self):", "nrBenefic def get_dtFimBenef(self): return self.dtFimBenef def set_dtFimBenef(self, dtFimBenef): self.dtFimBenef = dtFimBenef def get_mtvFim(self):", "outfile.write('<%sdscLograd>%s</%sdscLograd>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.dscLograd), input_name='dscLograd')), namespace_, eol_)) if self.nrLograd is not None: showIndent(outfile,", "def set_tpInsc(self, tpInsc): self.tpInsc = tpInsc def get_nrInsc(self): return self.nrInsc def set_nrInsc(self, nrInsc):", "= 'dadosNasc' elif nodeName_ == 'endereco': obj_ = endereco.factory() 
obj_.build(child_) self.endereco = obj_", "namespace_='', name_='complemento', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpPlanRP', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "subclass(*args_, **kwargs_) if nmMae.subclass: return nmMae.subclass(*args_, **kwargs_) else: return nmMae(*args_, **kwargs_) factory =", "beneficiário\"\"\" subclass = None superclass = None def __init__(self, cpfBenef=None, nmBenefic=None, dadosBenef=None): self.original_tagname_", "node.text else: text = '' for child in node: if child.tail is not", "nodeName_ == 'nrLograd': nrLograd_ = child_.text nrLograd_ = self.gds_validate_string(nrLograd_, node, 'nrLograd') self.nrLograd =", "node, nodeName_, fromsubclass_=False): pass # end class nrBenefic class dtFimBenef(GeneratedsSuper): subclass = None", "nodeName_, fromsubclass_=False): pass # end class nrLograd class complemento(GeneratedsSuper): subclass = None superclass", "def get_class_obj_(self, node, default_class=None): class_obj1 = default_class if 'xsi' in node.nsmap: classname =", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='TDadosBeneficio', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' %", "codMunic.subclass: return codMunic.subclass(*args_, **kwargs_) else: return codMunic(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "altBeneficio def get_fimBeneficio(self): return self.fimBeneficio def set_fimBeneficio(self, fimBeneficio): self.fimBeneficio = fimBeneficio def hasContent_(self):", "set_nrBenefic(self, nrBenefic): self.nrBenefic = nrBenefic def get_dtFimBenef(self): return self.dtFimBenef def set_dtFimBenef(self, dtFimBenef): self.dtFimBenef", "**kwargs_) if infoPenMorte.subclass: return infoPenMorte.subclass(*args_, **kwargs_) else: 
return infoPenMorte(*args_, **kwargs_) factory = staticmethod(factory)", "superclass = None def __init__(self): self.original_tagname_ = None def factory(*args_, **kwargs_): if CurrentSubclassModule_", "as exp: raise_parse_error(child_, 'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'tpPlanRP')", "== 2: classname = names[1] class_obj2 = globals().get(classname) if class_obj2 is not None:", "if self.exterior is not None: self.exterior.export(outfile, level, namespace_, name_='exterior', pretty_print=pretty_print) def build(self, node):", "initvalue_ = dtIniBenef self.dtIniBenef = initvalue_ self.vrBenef = vrBenef self.infoPenMorte = infoPenMorte def", "fimBeneficio class tpBenef(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ =", "== 0: _svalue = '%04d-%02d-%02dT%02d:%02d:%02d' % ( input_data.year, input_data.month, input_data.day, input_data.hour, input_data.minute, input_data.second,", "= name def get_name(self): return self.name def set_data_type(self, data_type): self.data_type = data_type def", "child in node: nodeName_ = Tag_pattern_.match(child.tag).groups()[-1] self.buildChildren(child, node, nodeName_) return self def buildAttributes(self,", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtIniBenef'): pass def exportChildren(self, outfile, level,", "(namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrBenefic), input_name='nrBenefic')), namespace_, eol_)) if self.dtFimBenef is not None: showIndent(outfile, level, pretty_print)", "'brasil': TEnderecoBrasil, 'dadosBenef': TDadosBenef, 'exterior': TEnderecoExterior, 'ideEmpregador': TEmprPJ, 'ideEvento': TIdeEveTrab, 'iniBeneficio': TDadosBeneficio, }", "outfile, level, already_processed, namespace_='', name_='tpAmb'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpAmb', fromsubclass_=False,", "namespace_, eol_)) if self.cep is not None: 
showIndent(outfile, level, pretty_print) outfile.write('<%scep>%s</%scep>%s' % (namespace_,", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='verProc') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "self.procEmi = procEmi self.verProc = verProc def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrLograd'): pass", "% input_data def gds_validate_double(self, input_data, node=None, input_name=''): return input_data def gds_format_double_list(self, input_data, input_name=''):", "# category == MixedContainer.CategoryComplex self.value.to_etree(element) def to_etree_simple(self): if self.content_type == MixedContainer.TypeString: text =", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nmCid'): pass def exportChildren(self, outfile, level,", "s1[pos:mo.start()] s2 += quote_xml_aux(s3) s2 += s1[mo.start():mo.end()] pos = mo.end() s3 = s1[pos:]", "nrLograd_ = child_.text nrLograd_ = self.gds_validate_string(nrLograd_, node, 'nrLograd') self.nrLograd = nrLograd_ elif nodeName_", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class tpAmb class", "self.exportChildren(outfile, level + 1, namespace_='', name_='evtCdBenPrRP', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "gds_format_datetime(self, input_data, input_name=''): if input_data.microsecond == 0: _svalue = '%04d-%02d-%02dT%02d:%02d:%02d' % ( input_data.year,", "results.group(0)) input_data = input_data[:-6] time_parts = input_data.split('.') if len(time_parts) > 1: micro_seconds =", "other attributes, too) # # The module generatedsnamespaces, if it is importable, must", "None: showIndent(outfile, level, pretty_print) outfile.write('<%stpLograd>%s</%stpLograd>%s' % 
(namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.tpLograd), input_name='tpLograd')), namespace_, eol_)) if self.dscLograd", "level + 1, namespace_='', name_='nmCid', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "<in_xml_file> \"\"\" def usage(): print(USAGE_TEXT) sys.exit(1) def get_root_tag(node): tag = Tag_pattern_.match(node.tag).groups()[-1] rootClass =", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('endereco') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "already_processed.add('Id') self.Id = value def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ ==", "o beneficiário identificado em {ideBenef} e para o qual não tenha havido ainda", "name_='ideBenef', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "return False def export(self, outfile, level, namespace_='', name_='cep', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('cep')", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='dadosNasc', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s'", "): return True else: return False def export(self, outfile, level, namespace_='', name_='procEmi', namespacedef_='',", "if TEmprPJ.subclass: return TEmprPJ.subclass(*args_, **kwargs_) else: return TEmprPJ(*args_, **kwargs_) factory = staticmethod(factory) def", "return self.category def getContenttype(self, content_type): return self.content_type def getValue(self): return self.value def getName(self):", "input_name='codMunic'), namespace_, eol_)) if self.uf is not None: showIndent(outfile, level, pretty_print) outfile.write('<%suf>%s</%suf>%s' %", "elif nodeName_ == 'dscLograd': dscLograd_ = child_.text dscLograd_ = 
self.gds_validate_string(dscLograd_, node, 'dscLograd') self.dscLograd", "Command line: # /usr/local/bin/generateDS --no-process-includes -o \"esociallib/v2_04/evtCdBenPrRP.py\" schemas/v2_04/evtCdBenPrRP.xsd # # Current working directory", "'ds:', eol_)) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed) for child", "): return True else: return False def export(self, outfile, level, namespace_='', name_='codMunic', namespacedef_='',", "level, already_processed, namespace_='', name_='tpAmb'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpAmb', fromsubclass_=False, pretty_print=True):", "self.dtNascto = initvalue_ self.codMunic = codMunic self.uf = uf self.paisNascto = paisNascto self.paisNac", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='procEmi', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nmPai class", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='vrBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "return nrRecibo(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "ideEvento self.ideEmpregador = ideEmpregador self.ideBenef = ideBenef self.infoBeneficio = infoBeneficio def factory(*args_, **kwargs_):", "node, nodeName_, fromsubclass_=False): pass # end class cpfInst GDSClassesMapping = { 'altBeneficio': TDadosBeneficio,", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpAmb') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "end class paisNascto class paisNac(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "representation of that element. 
See the export method of # any generated element", "obj_ obj_.original_tagname_ = 'altBeneficio' elif nodeName_ == 'fimBeneficio': obj_ = fimBeneficio.factory() obj_.build(child_) self.fimBeneficio", "set_dscLograd(self, dscLograd): self.dscLograd = dscLograd def get_nrLograd(self): return self.nrLograd def set_nrLograd(self, nrLograd): self.nrLograd", "class nmBenefic(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class cpfBenef", "if self.nrRecibo is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrRecibo>%s</%snrRecibo>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrRecibo), input_name='nrRecibo')),", "subclass = None superclass = None def __init__(self, tpPlanRP=None, iniBeneficio=None, altBeneficio=None, fimBeneficio=None): self.original_tagname_", "TypeString = 2 TypeInteger = 3 TypeFloat = 4 TypeDecimal = 5 TypeDouble", "def set_codPostal(self, codPostal): self.codPostal = codPostal def hasContent_(self): if ( self.paisResid is not", "self.data_type[-1] else: return 'xs:string' else: return self.data_type def set_container(self, container): self.container = container", "self.name)) elif self.content_type == MixedContainer.TypeBase64: outfile.write('<%s>%s</%s>' % ( self.name, base64.b64encode(self.value), self.name)) def to_etree(self,", "class nmMae class nmPai(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_", "= False for patterns2 in patterns1: if re_.search(patterns2, target) is not None: found2", "subclass(*args_, **kwargs_) if indRetif.subclass: return indRetif.subclass(*args_, **kwargs_) else: return indRetif(*args_, **kwargs_) factory =", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dadosNasc) if subclass is not None:", "name_='indRetif', fromsubclass_=False, 
pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "getContenttype(self, content_type): return self.content_type def getValue(self): return self.value def getName(self): return self.name def", "level, already_processed, namespace_='', name_='infoBeneficio'): pass def exportChildren(self, outfile, level, namespace_='', name_='infoBeneficio', fromsubclass_=False, pretty_print=True):", "gds_format_date(self, input_data, input_name=''): _svalue = '%04d-%02d-%02d' % ( input_data.year, input_data.month, input_data.day, ) try:", "def get_path_(self, node): path_list = [] self.get_path_list_(node, path_list) path_list.reverse() path = '/'.join(path_list) return", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='paisNascto'): pass def exportChildren(self,", "None ): return True else: return False def export(self, outfile, level, namespace_='', name_='infoBeneficio',", "exterior def hasContent_(self): if ( self.brasil is not None or self.exterior is not", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nmBenefic'): pass def exportChildren(self, outfile, level,", "outfile, level, namespace_='', name_='tpPlanRP', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpPlanRP') if imported_ns_def_ is not", "level + 1, namespace_='', name_='cep', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "is not None: return subclass(*args_, **kwargs_) if nrRecibo.subclass: return nrRecibo.subclass(*args_, **kwargs_) else: return", "def get_cpfBenef(self): return self.cpfBenef def set_cpfBenef(self, cpfBenef): self.cpfBenef = cpfBenef def get_nmBenefic(self): return", "subclass is not None: return subclass(*args_, **kwargs_) if procEmi.subclass: return procEmi.subclass(*args_, **kwargs_) else:", "child_, node, nodeName_, 
[Unrecoverable extraction residue: shuffled n-gram fragments of the auto-generated XML-binding module `esociallib/v2_04/evtCdBenPrRP.py`, produced by `/usr/local/bin/generateDS --no-process-includes -o "esociallib/v2_04/evtCdBenPrRP.py" schemas/v2_04/evtCdBenPrRP.xsd`. The fragments cover the standard generateDS scaffolding (GeneratedsSuper helpers, MixedContainer, and the schema's generated classes such as eSocial, TDadosBeneficio, TEnderecoBrasil, TEnderecoExterior, ideBenef, infoBeneficio, and infoPenMorte, each with the usual `factory`, `export`, `exportChildren`, `build`, and `buildChildren` methods), but no contiguous source code is recoverable from this span.]
self.tpAmb = tpAmb def get_procEmi(self): return self.procEmi", "child_.text cep_ = self.gds_validate_string(cep_, node, 'cep') self.cep = cep_ elif nodeName_ == 'codMunic':", "if len(attr_parts) == 1: value = attrs.get(attr_name) elif len(attr_parts) == 2: prefix, name", "the following class # in a module named generatedssuper.py. try: from generatedssuper import", "obj_ obj_.original_tagname_ = 'dadosNasc' elif nodeName_ == 'endereco': obj_ = endereco.factory() obj_.build(child_) self.endereco", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dscLograd) if subclass is not", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrBenefic'): pass def exportChildren(self, outfile,", "pretty_print) outfile.write('<%scpfInst>%s</%scpfInst>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfInst), input_name='cpfInst')), namespace_, eol_)) def build(self, node): already_processed =", "name_='complemento', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "= attr_parts namespace = node.nsmap.get(prefix) if namespace is not None: value = attrs.get('{%s}%s'", "already_processed, namespace_='', name_='TEnderecoExterior'): pass def exportChildren(self, outfile, level, namespace_='', name_='TEnderecoExterior', fromsubclass_=False, pretty_print=True): if", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='ideBenef') if", "def get_tpAmb(self): return self.tpAmb def set_tpAmb(self, tpAmb): self.tpAmb = tpAmb def get_procEmi(self): return", "namespace_='', name_='paisNac', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, ))", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( 
CurrentSubclassModule_, dtIniBenef)", "= '\\n' else: eol_ = '' if self.tpPlanRP is not None: showIndent(outfile, level,", "None: showIndent(outfile, level, pretty_print) outfile.write('<%scodMunic>%s</%scodMunic>%s' % (namespace_, self.gds_format_integer(self.codMunic, input_name='codMunic'), namespace_, eol_)) if self.uf", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='codMunic') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "nascimento do beneficiário\"\"\" subclass = None superclass = None def __init__(self, dtNascto=None, codMunic=None,", "if subclass is not None: return subclass(*args_, **kwargs_) if paisNascto.subclass: return paisNascto.subclass(*args_, **kwargs_)", "class paisNascto(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrRecibo>%s</%snrRecibo>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrRecibo), input_name='nrRecibo')), namespace_, eol_))", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, infoPenMorte) if subclass is", "getSubclassFromModule_( CurrentSubclassModule_, dtIniBenef) if subclass is not None: return subclass(*args_, **kwargs_) if dtIniBenef.subclass:", "class nmPai class endereco(GeneratedsSuper): \"\"\"Grupo de informações do endereço do Trabalhador\"\"\" subclass =", "float(sval_) except (TypeError, ValueError) as exp: raise_parse_error(child_, 'requires float or double: %s' %", "def get_uf(self): return self.uf def set_uf(self, uf): self.uf = uf def hasContent_(self): if", "level, already_processed, namespace_, name_='TEnderecoExterior') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "= ideEmpregador def get_ideBenef(self): return self.ideBenef def set_ideBenef(self, ideBenef): self.ideBenef = ideBenef def", "tpLograd.subclass: 
return tpLograd.subclass(*args_, **kwargs_) else: return tpLograd(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "subclass = getSubclassFromModule_( CurrentSubclassModule_, TEnderecoExterior) if subclass is not None: return subclass(*args_, **kwargs_)", "nodeName_ == 'tpBenef': sval_ = child_.text try: ival_ = int(sval_) except (TypeError, ValueError)", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='mtvFim') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "ival_ = self.gds_validate_integer(ival_, node, 'mtvFim') self.mtvFim = ival_ # end class fimBeneficio class", "return input_data def gds_format_double_list(self, input_data, input_name=''): return '%s' % ' '.join(input_data) def gds_validate_double_list(", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='procEmi') if", "level, already_processed, namespace_, name_='verProc') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='endereco'): pass def exportChildren(self,", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpBenef'): pass def exportChildren(self,", "input_name='uf')), namespace_, eol_)) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed) for", "if pretty_print: eol_ = '\\n' else: eol_ = '' if self.tpBenef is not", "None: return subclass(*args_, **kwargs_) if nmCid.subclass: return nmCid.subclass(*args_, **kwargs_) else: return nmCid(*args_, **kwargs_)", "None: return subclass(*args_, **kwargs_) if bairro.subclass: return bairro.subclass(*args_, **kwargs_) else: return bairro(*args_, **kwargs_)", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='dtFimBenef', pretty_print=pretty_print) 
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='procEmi') if self.hasContent_(): outfile.write('>%s' % (eol_,", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='fimBeneficio') if self.hasContent_(): outfile.write('>%s' % (eol_,", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dadosNasc')", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='endereco', pretty_print=pretty_print) showIndent(outfile,", "input_name=''): return input_data def gds_format_integer(self, input_data, input_name=''): return '%d' % input_data def gds_validate_integer(self,", "initvalue_ self.codMunic = codMunic self.uf = uf self.paisNascto = paisNascto self.paisNac = paisNac", "if sys.version_info.major == 2: BaseStrType_ = basestring else: BaseStrType_ = str def parsexml_(infile,", "if tpAmb.subclass: return tpAmb.subclass(*args_, **kwargs_) else: return tpAmb(*args_, **kwargs_) factory = staticmethod(factory) def", "def gds_validate_string(self, input_data, node=None, input_name=''): if not input_data: return '' else: return input_data", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class paisNac", "name_='complemento') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='complemento',", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='complemento') if self.hasContent_():", "ideEvento=None, ideEmpregador=None, ideBenef=None, infoBeneficio=None): self.original_tagname_ = None self.Id = _cast(None, Id) self.ideEvento =", "if subclass is not None: return subclass(*args_, **kwargs_) if ideBenef.subclass: 
return ideBenef.subclass(*args_, **kwargs_)", "para o qual não tenha havido ainda informação de término de benefícios.\"\"\" subclass", "input_data): tz = None if input_data[-1] == 'Z': tz = GeneratedsSuper._FixedOffsetTZ(0, 'UTC') input_data", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, eSocial)", ") else: _svalue = '%02d:%02d:%02d.%s' % ( input_data.hour, input_data.minute, input_data.second, ('%f' % (float(input_data.microsecond)", "= nrBenefic_ elif nodeName_ == 'dtIniBenef': sval_ = child_.text dval_ = self.gds_parse_date(sval_) self.dtIniBenef", "namespace_='', name_='nmCid', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, ))", "'%Y-%m-%dT%H:%M:%S') dt = dt.replace(tzinfo=tz) return dt def gds_validate_date(self, input_data, node=None, input_name=''): return input_data", "namespace_='', name_='idQuota', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('idQuota') if imported_ns_def_ is not None: namespacedef_", "self.tpPlanRP = tpPlanRP def get_iniBeneficio(self): return self.iniBeneficio def set_iniBeneficio(self, iniBeneficio): self.iniBeneficio = iniBeneficio", "PJ\"\"\" subclass = None superclass = None def __init__(self, tpInsc=None, nrInsc=None): self.original_tagname_ =", "raise_parse_error(node, 'Requires sequence of floats') return values def gds_format_double(self, input_data, input_name=''): return '%e'", "'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'indRetif') self.indRetif = ival_", "not None: self.fimBeneficio.export(outfile, level, namespace_, name_='fimBeneficio', pretty_print=pretty_print) def build(self, node): already_processed = set()", "else: eol_ = '' if self.indRetif is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sindRetif>%s</%sindRetif>%s'", "tpLograd_ elif nodeName_ == 'dscLograd': 
dscLograd_ = child_.text dscLograd_ = self.gds_validate_string(dscLograd_, node, 'dscLograd')", "None def __init__(self, paisResid=None, dscLograd=None, nrLograd=None, complemento=None, bairro=None, nmCid=None, codPostal=None): self.original_tagname_ = None", "ideBenef=None, infoBeneficio=None): self.original_tagname_ = None self.Id = _cast(None, Id) self.ideEvento = ideEvento self.ideEmpregador", "= attr_name.split(':') value = None if len(attr_parts) == 1: value = attrs.get(attr_name) elif", "Tag_pattern_ = re_.compile(r'({.*})?(.*)') String_cleanup_pat_ = re_.compile(r\"[\\n\\r\\s]+\") Namespace_extract_pat_ = re_.compile(r'{(.*)}(.*)') CDATA_pattern_ = re_.compile(r\"<!\\[CDATA\\[.*?\\]\\]>\", re_.DOTALL)", "set_idQuota(self, idQuota): self.idQuota = idQuota def get_cpfInst(self): return self.cpfInst def set_cpfInst(self, cpfInst): self.cpfInst", "already_processed, namespace_='', name_='TEmprPJ'): pass def exportChildren(self, outfile, level, namespace_='', name_='TEmprPJ', fromsubclass_=False, pretty_print=True): if", "node, nodeName_, fromsubclass_=False): pass # end class paisResid class nmCid(GeneratedsSuper): subclass = None", "self.dtNascto = dval_ elif nodeName_ == 'codMunic': sval_ = child_.text try: ival_ =", "level, namespace_='', name_='endereco', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('endereco') if imported_ns_def_ is not None:", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nrRecibo class", "'infoBeneficio': obj_ = infoBeneficio.factory() obj_.build(child_) self.infoBeneficio = obj_ obj_.original_tagname_ = 'infoBeneficio' # end", "already_processed, namespace_='', name_='dtNascto'): pass def exportChildren(self, outfile, level, namespace_='', name_='dtNascto', fromsubclass_=False, pretty_print=True): pass", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, verProc) if subclass is not", "namespacedef_ or '', 
)) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='cep') if", "def __init__(self, category, content_type, name, value): self.category = category self.content_type = content_type self.name", "== 'nrInsc': nrInsc_ = child_.text nrInsc_ = self.gds_validate_string(nrInsc_, node, 'nrInsc') self.nrInsc = nrInsc_", "return '\"%s\"' % s1 else: return '\"\"\"%s\"\"\"' % s1 def get_all_text_(node): if node.text", "self.name = name self.value = value def getCategory(self): return self.category def getContenttype(self, content_type):", "return ideBenef(*args_, **kwargs_) factory = staticmethod(factory) def get_cpfBenef(self): return self.cpfBenef def set_cpfBenef(self, cpfBenef):", "name_='TDadosBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TDadosBenef') if imported_ns_def_ is not None: namespacedef_ =", "and when you want to drop into the # IPython shell: # ipshell('<some", "'\\n' else: eol_ = '' if self.ideEvento is not None: self.ideEvento.export(outfile, level, namespace_,", "get_path_list_(self, node, path_list): if node is None: return tag = GeneratedsSuper.Tag_strip_pattern_.sub('', node.tag) if", "def get_ideEmpregador(self): return self.ideEmpregador def set_ideEmpregador(self, ideEmpregador): self.ideEmpregador = ideEmpregador def get_ideBenef(self): return", "class MemberSpec_(object): def __init__(self, name='', data_type='', container=0, optional=0, child_attrs=None, choice=None): self.name = name", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, vrBenef)", "name_='cep', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def", "getSubclassFromModule_( CurrentSubclassModule_, tpBenef) if subclass is not None: return subclass(*args_, **kwargs_) if tpBenef.subclass:", "): return True else: return False def 
export(self, outfile, level, namespace_='', name_='infoPenMorte', namespacedef_='',", "is not None: return subclass(*args_, **kwargs_) if mtvFim.subclass: return mtvFim.subclass(*args_, **kwargs_) else: return", "return False def export(self, outfile, level, namespace_='', name_='dtIniBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dtIniBenef')", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dtIniBenef') if self.hasContent_(): outfile.write('>%s' %", "by generateDS.py. # You can replace these methods by re-implementing the following class", "else: _svalue += '+' hours = total_seconds // 3600 minutes = (total_seconds -", "re_ import base64 import datetime as datetime_ import warnings as warnings_ try: from", "getSubclassFromModule_( CurrentSubclassModule_, nrBenefic) if subclass is not None: return subclass(*args_, **kwargs_) if nrBenefic.subclass:", "utf-8 -*- # # Generated Tue Oct 10 00:42:21 2017 by generateDS.py version", "ao benefício previdenciário concedido ao servidor\"\"\" subclass = None superclass = None def", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='cpfBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "pass return _svalue @classmethod def gds_parse_date(cls, input_data): tz = None if input_data[-1] ==", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='tpBenef',", "% (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.verProc), input_name='verProc')), namespace_, eol_)) def build(self, node): already_processed = set() self.buildAttributes(node,", "== 'tpBenef': sval_ = child_.text try: ival_ = int(sval_) except (TypeError, ValueError) as", "tzoff_pattern = re_.compile(r'(\\+|-)((0\\d|1[0-3]):[0-5]\\d|14:00)$') class _FixedOffsetTZ(datetime_.tzinfo): def __init__(self, offset, name): self.__offset = 
datetime_.timedelta(minutes=offset) self.__name", "name_='verProc') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='verProc',", "return self.bairro def set_bairro(self, bairro): self.bairro = bairro def get_cep(self): return self.cep def", "self.gds_encode(self.gds_format_string(quote_xml(self.cpfBenef), input_name='cpfBenef')), namespace_, eol_)) if self.nmBenefic is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmBenefic>%s</%snmBenefic>%s'", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TDadosBeneficio) if subclass is", "namespace prefix definition, will export that definition in the # XML representation of", "from generatedssuper import GeneratedsSuper except ImportError as exp: class GeneratedsSuper(object): tzoff_pattern = re_.compile(r'(\\+|-)((0\\d|1[0-3]):[0-5]\\d|14:00)$')", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nmCid class codPostal(GeneratedsSuper):", "+ 1, namespace_='', name_='TDadosBeneficio', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='verProc') if self.hasContent_(): outfile.write('>%s'", "None if len(attr_parts) == 1: value = attrs.get(attr_name) elif len(attr_parts) == 2: prefix,", "showIndent(outfile, level) outfile.write(')\\n') class MemberSpec_(object): def __init__(self, name='', data_type='', container=0, optional=0, child_attrs=None, choice=None):", "subclass(*args_, **kwargs_) if tpLograd.subclass: return tpLograd.subclass(*args_, **kwargs_) else: return tpLograd(*args_, **kwargs_) factory =", "GenerateDSNamespaceDefs_.get('infoBeneficio') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "level, already_processed, 
namespace_='', name_='TEnderecoExterior'): pass def exportChildren(self, outfile, level, namespace_='', name_='TEnderecoExterior', fromsubclass_=False, pretty_print=True):", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='paisResid', pretty_print=pretty_print)", "that element. See the export method of # any generated element type class", "'requires float or double: %s' % exp) fval_ = self.gds_validate_float(fval_, node, 'vrBenef') self.vrBenef", "def get_verProc(self): return self.verProc def set_verProc(self, verProc): self.verProc = verProc def hasContent_(self): if", "paisNascto) if subclass is not None: return subclass(*args_, **kwargs_) if paisNascto.subclass: return paisNascto.subclass(*args_,", "idQuota(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", "None or self.mtvFim is not None ): return True else: return False def", "export(self, outfile, level, namespace_='', name_='paisNac', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('paisNac') if imported_ns_def_ is", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='TEnderecoExterior', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' %", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrBenefic') if self.hasContent_(): outfile.write('>%s' %", "= Tag_pattern_.match(child.tag).groups()[-1] self.buildChildren(child, node, nodeName_) return self def buildAttributes(self, node, attrs, already_processed): value", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dtIniBenef') if self.hasContent_(): outfile.write('>%s'", "Python to collect the space used by the DOM. 
doc = None mapping", "% ( self.category, self.content_type, self.name,)) self.value.exportLiteral(outfile, level + 1) showIndent(outfile, level) outfile.write(')\\n') class", "subclass = None superclass = None def __init__(self, dadosNasc=None, endereco=None): self.original_tagname_ = None", "# # The module generatedsnamespaces, if it is importable, must contain # a", "generated superclass module to use a # specific subclass module. CurrentSubclassModule_ = None", "nodeName_ == 'dtIniBenef': sval_ = child_.text dval_ = self.gds_parse_date(sval_) self.dtIniBenef = dval_ elif", "% ( input_data.year, input_data.month, input_data.day, ) try: if input_data.tzinfo is not None: tzoff", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, idQuota) if subclass is not None:", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%spaisResid>%s</%spaisResid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisResid), input_name='paisResid')), namespace_, eol_)) if", "exp) ival_ = self.gds_validate_integer(ival_, node, 'tpInsc') self.tpInsc = ival_ elif nodeName_ == 'nrInsc':", "namespace_='', name_='TDadosBenef', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "subclass is not None: return subclass(*args_, **kwargs_) if cep.subclass: return cep.subclass(*args_, **kwargs_) else:", "end class cep class TEnderecoExterior(GeneratedsSuper): \"\"\"Informações do Endereço no Exterior\"\"\" subclass = None", "**kwargs_) else: return fimBeneficio(*args_, **kwargs_) factory = staticmethod(factory) def get_tpBenef(self): return self.tpBenef def", "return False def export(self, outfile, level, namespace_='', name_='dscLograd', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dscLograd')", "== MixedContainer.CategoryComplex showIndent(outfile, level) outfile.write( 'model_.MixedContainer(%d, 
%d, \"%s\",\\n' % ( self.category, self.content_type, self.name,))", "def hasContent_(self): if ( self.indRetif is not None or self.nrRecibo is not None", "namespace_='', name_='nrLograd', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrLograd') if imported_ns_def_ is not None: namespacedef_", "self.codPostal = codPostal def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass =", "name_='nmPai'): pass def exportChildren(self, outfile, level, namespace_='', name_='nmPai', fromsubclass_=False, pretty_print=True): pass def build(self,", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='complemento', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "s2 def quote_xml_aux(inStr): s1 = inStr.replace('&', '&amp;') s1 = s1.replace('<', '&lt;') s1 =", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='infoPenMorte'): pass def exportChildren(self, outfile, level, namespace_='',", "pass def exportChildren(self, outfile, level, namespace_='', name_='complemento', fromsubclass_=False, pretty_print=True): pass def build(self, node):", "level, already_processed, namespace_='', name_='dtFimBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='dtFimBenef', fromsubclass_=False, pretty_print=True):", "level, already_processed, namespace_='', name_='cpfInst'): pass def exportChildren(self, outfile, level, namespace_='', name_='cpfInst', fromsubclass_=False, pretty_print=True):", "get_infoBeneficio(self): return self.infoBeneficio def set_infoBeneficio(self, infoBeneficio): self.infoBeneficio = infoBeneficio def get_Id(self): return self.Id", "= 'Dropping into IPython', ## exit_msg = 'Leaving Interpreter, back to program.') #", "'&amp;') s1 = s1.replace('<', '&lt;') s1 = s1.replace('>', '&gt;') return s1 def quote_attrib(inStr):", "pretty_print: eol_ = '\\n' else: eol_ = '' if self.cpfBenef is not None:", "outfile, 
level, namespace_='', name_='eSocial', namespacedef_=' xmlns:ds=\"http://www.w3.org/2000/09/xmldsig#\" ', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('eSocial') if imported_ns_def_", "None: showIndent(outfile, level, pretty_print) outfile.write('<%sprocEmi>%s</%sprocEmi>%s' % (namespace_, self.gds_format_integer(self.procEmi, input_name='procEmi'), namespace_, eol_)) if self.verProc", "contain # a dictionary named GeneratedsNamespaceDefs. This Python dictionary # should map element", "'' pos = 0 matchobjects = CDATA_pattern_.finditer(s1) for mo in matchobjects: s3 =", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TEmprPJ'): pass def exportChildren(self, outfile, level,", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmCid>%s</%snmCid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmCid), input_name='nmCid')), namespace_, eol_))", "1, namespace_='', name_='tpAmb', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_,", "for patterns2 in patterns1: if re_.search(patterns2, target) is not None: found2 = True", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class codPostal class TDadosBeneficio(GeneratedsSuper):", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dtNascto) if", "None or self.dscLograd is not None or self.nrLograd is not None or self.complemento", "s1.replace('\"', \"&quot;\") else: s1 = \"'%s'\" % s1 else: s1 = '\"%s\"' %", "Tue Oct 10 00:42:21 2017 by generateDS.py version 2.28b. 
#!/usr/bin/env python

#
# Generated by generateDS.py version 2.28b.
# Python 2.7.12 (default, Nov 19 2016, 06:48:10)
# [GCC 5.4.0 20160609]
#
# Command line options:
#   ('--no-process-includes', '')
#
# Command line arguments:
#   schemas/v2_04/evtCdBenPrRP.xsd
#
# Current working directory (os.getcwd()):
#   esociallib
#

import sys
import re as re_
import base64
import datetime as datetime_
import warnings as warnings_
try:
    from lxml import etree as etree_
except ImportError:
    from xml.etree import ElementTree as etree_


Validate_simpletypes_ = True
if sys.version_info.major == 2:
    BaseStrType_ = basestring
else:
    BaseStrType_ = str


def parsexml_(infile, parser=None, **kwargs):
    if parser is None:
        # Use the lxml ElementTree compatible parser so that, e.g.,
        # we ignore comments.
        try:
            parser = etree_.ETCompatXMLParser()
        except AttributeError:
            # fallback to xml.etree
            parser = etree_.XMLParser()
    doc = etree_.parse(infile, parser=parser, **kwargs)
    return doc


#
# Namespace prefix definition dictionary (and method for any class for
# which there is a namespace prefix definition, a dictionary named
# GeneratedsNamespaceDefs).  This Python dictionary should map element
# type names (strings) to XML schema namespace prefix definitions.
#
GenerateDSNamespaceDefs_ = {}


#
# The root super-class for element type classes.
#
class GeneratedsSuper(object):

    class _FixedOffsetTZ(datetime_.tzinfo):
        def __init__(self, offset, name):
            self.__offset = datetime_.timedelta(minutes=offset)
            self.__name = name

        def utcoffset(self, dt):
            return self.__offset

        def tzname(self, dt):
            return self.__name

        def dst(self, dt):
            return None

# The generated module continues with the remaining GeneratedsSuper
# formatting/validation helpers (gds_format_*, gds_validate_*,
# gds_parse_datetime, quote_xml/quote_attrib, get_path_), the
# MixedContainer helper class, the parse()/parseEtree() entry points,
# and one element class per schema type, each with the standard
# factory/export/exportAttributes/exportChildren/build/buildAttributes/
# buildChildren methods: evtCdBenPrRP, TIdeEveTrab, TEmprPJ, TDadosBenef,
# dadosNasc, endereco, TEnderecoBrasil, TEnderecoExterior, TDadosBeneficio,
# infoBeneficio, infoPenMorte, and the leaf simple-element classes
# (tpAmb, nrRecibo, cpfBenef, nmBenefic, dtIniBenef, vrBenef, idQuota,
# cpfInst, mtvFim, tpLograd, dscLograd, nrLograd, complemento, bairro,
# cep, codMunic, uf, paisNascto, paisNac, nmMae, nmPai, nmCid, codPostal,
# and related types).
"outfile.write('<%stpLograd>%s</%stpLograd>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.tpLograd), input_name='tpLograd')), namespace_, eol_)) if self.dscLograd is not None: showIndent(outfile,", "self.content_type == MixedContainer.TypeDecimal: outfile.write('<%s>%f</%s>' % ( self.name, self.value, self.name)) elif self.content_type == MixedContainer.TypeDouble:", "input_name='cpfBenef')), namespace_, eol_)) if self.nmBenefic is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmBenefic>%s</%snmBenefic>%s' %", "for any class for which there is # a namespace prefix definition, will", "can replace these methods by re-implementing the following class # in a module", "'Id' not in already_processed: already_processed.add('Id') self.Id = value def buildChildren(self, child_, node, nodeName_,", "namespace_, name_='eSocial') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "getSubclassFromModule_( CurrentSubclassModule_, paisNac) if subclass is not None: return subclass(*args_, **kwargs_) if paisNac.subclass:", "CurrentSubclassModule_, paisResid) if subclass is not None: return subclass(*args_, **kwargs_) if paisResid.subclass: return", "input_data.month, input_data.day, ) try: if input_data.tzinfo is not None: tzoff = input_data.tzinfo.utcoffset(input_data) if", "input_name='verProc')), namespace_, eol_)) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed) for", "1, namespace_='', name_='tpLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_,", "pdb.set_trace() main() __all__ = [ \"TDadosBenef\", \"TDadosBeneficio\", \"TEmprPJ\", \"TEnderecoBrasil\", \"TEnderecoExterior\", \"TIdeEveTrab\", \"eSocial\" ]", "or \\ self.content_type == MixedContainer.TypeDecimal: outfile.write('<%s>%f</%s>' % ( self.name, self.value, self.name)) elif 
self.content_type", "namespace_, name_='tpPlanRP') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('paisNascto') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "self.value, self.name)) elif self.content_type == MixedContainer.TypeFloat or \\ self.content_type == MixedContainer.TypeDecimal: outfile.write('<%s>%f</%s>' %", "'nrLograd') self.nrLograd = nrLograd_ elif nodeName_ == 'complemento': complemento_ = child_.text complemento_ =", "node, nodeName_, fromsubclass_=False): pass # end class mtvFim class TIdeEveTrab(GeneratedsSuper): \"\"\"Identificação do evento\"\"\"", "value): self.category = category self.content_type = content_type self.name = name self.value = value", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nmCid'):", "if ( self.tpInsc is not None or self.nrInsc is not None ): return", "__init__(self, category, content_type, name, value): self.category = category self.content_type = content_type self.name =", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='paisNascto') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "get_nmMae(self): return self.nmMae def set_nmMae(self, nmMae): self.nmMae = nmMae def get_nmPai(self): return self.nmPai", "TEnderecoBrasil(*args_, **kwargs_) factory = staticmethod(factory) def get_tpLograd(self): return self.tpLograd def set_tpLograd(self, tpLograd): self.tpLograd", "level, namespace_='', name_='infoBeneficio', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ =", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtFimBenef'): pass", "globals().get(tag) return tag, rootClass def 
parse(inFileName, silence=False): parser = None doc = parsexml_(inFileName,", "return subclass(*args_, **kwargs_) if TIdeEveTrab.subclass: return TIdeEveTrab.subclass(*args_, **kwargs_) else: return TIdeEveTrab(*args_, **kwargs_) factory", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, verProc) if subclass is not None: return subclass(*args_,", "def export(self, outfile, level, namespace_='', name_='dtIniBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dtIniBenef') if imported_ns_def_", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nrLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "None doc = parsexml_(IOBuffer(inString), parser) rootNode = doc.getroot() rootTag, rootClass = get_root_tag(rootNode) if", "fromsubclass_=False): pass # end class verProc class TEmprPJ(GeneratedsSuper): \"\"\"Informações do Empregador PJ\"\"\" subclass", "outfile, level, namespace_='', name_='paisNac', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('paisNac') if imported_ns_def_ is not", "# Constants for category: CategoryNone = 0 CategoryText = 1 CategorySimple = 2", "brasil): self.brasil = brasil def get_exterior(self): return self.exterior def set_exterior(self, exterior): self.exterior =", "pretty_print) outfile.write('<%spaisNascto>%s</%spaisNascto>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisNascto), input_name='paisNascto')), namespace_, eol_)) if self.paisNac is not None:", "cadastro de benefícios previdenciários de Regimes Próprios\"\"\" subclass = None superclass = None", "export(self, outfile, level, namespace_='', name_='procEmi', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('procEmi') if imported_ns_def_ is", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nmCid', pretty_print=pretty_print) 
outfile.write('</%s%s>%s' %", "for patterns1 in patterns: found2 = False for patterns2 in patterns1: if re_.search(patterns2,", "name_='tpInsc', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "MixedContainer.CategoryText: showIndent(outfile, level) outfile.write( 'model_.MixedContainer(%d, %d, \"%s\", \"%s\"),\\n' % ( self.category, self.content_type, self.name,", "= GenerateDSNamespaceDefs_.get('evtCdBenPrRP') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "def set_nrRecibo(self, nrRecibo): self.nrRecibo = nrRecibo def get_tpAmb(self): return self.tpAmb def set_tpAmb(self, tpAmb):", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nrRecibo) if subclass", "cpfBenef def get_nmBenefic(self): return self.nmBenefic def set_nmBenefic(self, nmBenefic): self.nmBenefic = nmBenefic def get_dadosBenef(self):", "= None superclass = None def __init__(self, dtNascto=None, codMunic=None, uf=None, paisNascto=None, paisNac=None, nmMae=None,", "level, already_processed, namespace_='', name_='TDadosBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='TDadosBenef', fromsubclass_=False, pretty_print=True):", "type class for a example of the use of this # table. 
#", "def get_infoPenMorte(self): return self.infoPenMorte def set_infoPenMorte(self, infoPenMorte): self.infoPenMorte = infoPenMorte def hasContent_(self): if", "MixedContainer.TypeInteger or \\ self.content_type == MixedContainer.TypeBoolean: outfile.write('<%s>%d</%s>' % ( self.name, self.value, self.name)) elif", "node, 'cep') self.cep = cep_ elif nodeName_ == 'codMunic': sval_ = child_.text try:", "bairro_ elif nodeName_ == 'cep': cep_ = child_.text cep_ = self.gds_validate_string(cep_, node, 'cep')", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, indRetif) if subclass is not", "get_fimBeneficio(self): return self.fimBeneficio def set_fimBeneficio(self, fimBeneficio): self.fimBeneficio = fimBeneficio def hasContent_(self): if (", "= fval_ elif nodeName_ == 'infoPenMorte': obj_ = infoPenMorte.factory() obj_.build(child_) self.infoPenMorte = obj_", "class_obj2 = globals().get(classname) if class_obj2 is not None: class_obj1 = class_obj2 return class_obj1", "level, namespace_='', name_='evtCdBenPrRP', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ =", "'%Y-%m-%d') dt = dt.replace(tzinfo=tz) return dt.date() def gds_validate_time(self, input_data, node=None, input_name=''): return input_data", "TEmprPJ.subclass(*args_, **kwargs_) else: return TEmprPJ(*args_, **kwargs_) factory = staticmethod(factory) def get_tpInsc(self): return self.tpInsc", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nmMae'): pass", "None: return subclass(*args_, **kwargs_) if vrBenef.subclass: return vrBenef.subclass(*args_, **kwargs_) else: return vrBenef(*args_, **kwargs_)", "optional=0, child_attrs=None, choice=None): self.name = name self.data_type = data_type self.container = container self.child_attrs", "= set() self.buildAttributes(node, node.attrib, already_processed) for child in node: nodeName_ = 
Tag_pattern_.match(child.tag).groups()[-1] self.buildChildren(child,", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrLograd'): pass def exportChildren(self,", "node, 'bairro') self.bairro = bairro_ elif nodeName_ == 'nmCid': nmCid_ = child_.text nmCid_", "GenerateDSNamespaceDefs_.get('TEnderecoExterior') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "= codPostal_ # end class TEnderecoExterior class paisResid(GeneratedsSuper): subclass = None superclass =", "obj_.original_tagname_ = 'ideEmpregador' elif nodeName_ == 'ideBenef': obj_ = ideBenef.factory() obj_.build(child_) self.ideBenef =", "( input_data.hour, input_data.minute, input_data.second, ('%f' % (float(input_data.microsecond) / 1000000))[2:], ) if input_data.tzinfo is", "parse(inFileName, silence=False): parser = None doc = parsexml_(inFileName, parser) rootNode = doc.getroot() rootTag,", "elements # - OR the inner elements found1 = True for patterns1 in", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='cep', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "name_='mtvFim', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('mtvFim') if imported_ns_def_ is not None: namespacedef_ =", "None: rootClass = globals().get(tag) return tag, rootClass def parse(inFileName, silence=False): parser = None", "class nrBenefic(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "= child_.text dscLograd_ = self.gds_validate_string(dscLograd_, node, 'dscLograd') self.dscLograd = dscLograd_ elif nodeName_ ==", "def export(self, outfile, level, namespace_='', name_='nrRecibo', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrRecibo') if imported_ns_def_", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, 
namespace_='', name_='TIdeEveTrab', pretty_print=pretty_print)", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='nrRecibo',", "outfile, level, namespace_='', name_='evtCdBenPrRP', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nrInsc) if subclass is not", "int(tzoff_parts[0]) * 60 + int(tzoff_parts[1]) if results.group(1) == '-': tzoff *= -1 tz", "outfile.write('<%svrBenef>%s</%svrBenef>%s' % (namespace_, self.gds_format_float(self.vrBenef, input_name='vrBenef'), namespace_, eol_)) if self.infoPenMorte is not None: self.infoPenMorte.export(outfile,", "dtFimBenef) if subclass is not None: return subclass(*args_, **kwargs_) if dtFimBenef.subclass: return dtFimBenef.subclass(*args_,", "namespace_='', name_='TIdeEveTrab'): pass def exportChildren(self, outfile, level, namespace_='', name_='TIdeEveTrab', fromsubclass_=False, pretty_print=True): if pretty_print:", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmCid') if self.hasContent_(): outfile.write('>%s' %", "self.value, self.name)) elif self.content_type == MixedContainer.TypeInteger or \\ self.content_type == MixedContainer.TypeBoolean: outfile.write('<%s>%d</%s>' %", "pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if self.dtNascto is", "= getSubclassFromModule_( CurrentSubclassModule_, tpAmb) if subclass is not None: return subclass(*args_, **kwargs_) if", "showIndent(outfile, level, pretty_print) outfile.write('<%snmMae>%s</%snmMae>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmMae), input_name='nmMae')), namespace_, eol_)) if self.nmPai is", "doc = etree_.parse(infile, parser=parser, **kwargs) return doc # # Namespace prefix definition table", "'&gt;') return s1 def quote_attrib(inStr): s1 = (isinstance(inStr, BaseStrType_) and 
inStr or '%s'", "level + 1, namespace_='', name_='nmBenefic', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "return subclass(*args_, **kwargs_) if nrRecibo.subclass: return nrRecibo.subclass(*args_, **kwargs_) else: return nrRecibo(*args_, **kwargs_) factory", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dtIniBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "< 0: _svalue += '-' total_seconds *= -1 else: _svalue += '+' hours", "= None superclass = None def __init__(self, tpBenef=None, nrBenefic=None, dtIniBenef=None, vrBenef=None, infoPenMorte=None): self.original_tagname_", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='cpfInst') if self.hasContent_(): outfile.write('>%s' %", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('bairro') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "end class evtCdBenPrRP class ideBenef(GeneratedsSuper): \"\"\"Identificação do beneficiário\"\"\" subclass = None superclass =", "1: dt = datetime_.datetime.strptime(input_data, '%H:%M:%S.%f') else: dt = datetime_.datetime.strptime(input_data, '%H:%M:%S') dt = dt.replace(tzinfo=tz)", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' %", "== MixedContainer.CategorySimple: subelement = etree_.SubElement( element, '%s' % self.name) subelement.text = self.to_etree_simple() else:", "not None: self.ideEvento.export(outfile, level, namespace_, name_='ideEvento', pretty_print=pretty_print) if self.ideEmpregador is not None: self.ideEmpregador.export(outfile,", "nmMae_ elif nodeName_ == 'nmPai': nmPai_ = child_.text nmPai_ = self.gds_validate_string(nmPai_, node, 'nmPai')", "if subclass is not None: return subclass(*args_, **kwargs_) if 
cep.subclass: return cep.subclass(*args_, **kwargs_)", "export(self, outfile, level, namespace_='', name_='infoPenMorte', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoPenMorte') if imported_ns_def_ is", "verProc): self.verProc = verProc def hasContent_(self): if ( self.indRetif is not None or", "= classname.split(':') if len(names) == 2: classname = names[1] class_obj2 = globals().get(classname) if", "set_codMunic(self, codMunic): self.codMunic = codMunic def get_uf(self): return self.uf def set_uf(self, uf): self.uf", "exportChildren(self, outfile, level, namespace_='', name_='tpInsc', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "Entering ipshell.\\nHit Ctrl-D to exit') # # Globals # ExternalEncoding = 'ascii' Tag_pattern_", "'endereco' # end class TDadosBenef class dadosNasc(GeneratedsSuper): \"\"\"Informações de nascimento do beneficiário\"\"\" subclass", "(TypeError, ValueError) as exp: raise_parse_error(child_, 'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_,", "# end class nmMae class nmPai(GeneratedsSuper): subclass = None superclass = None def", "have installed IPython you can uncomment and use the following. 
# IPython is", "infoBeneficio class tpPlanRP(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ =", "= getSubclassFromModule_( CurrentSubclassModule_, nmBenefic) if subclass is not None: return subclass(*args_, **kwargs_) if", "return False def export(self, outfile, level, namespace_='', name_='paisResid', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('paisResid')", "def hasContent_(self): if ( ): return True else: return False def export(self, outfile,", "= None self.tpInsc = tpInsc self.nrInsc = nrInsc def factory(*args_, **kwargs_): if CurrentSubclassModule_", "# Globals # ExternalEncoding = 'ascii' Tag_pattern_ = re_.compile(r'({.*})?(.*)') String_cleanup_pat_ = re_.compile(r\"[\\n\\r\\s]+\") Namespace_extract_pat_", "def set_Id(self, Id): self.Id = Id def hasContent_(self): if ( self.ideEvento is not", "showIndent(outfile, level, pretty_print) outfile.write('<%sbairro>%s</%sbairro>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.bairro), input_name='bairro')), namespace_, eol_)) if self.nmCid is", "namespace_, name_='tpBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dadosNasc') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "name_='codMunic'): pass def exportChildren(self, outfile, level, namespace_='', name_='codMunic', fromsubclass_=False, pretty_print=True): pass def build(self,", "end class dtNascto class codMunic(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "s2 = '' pos = 0 matchobjects = CDATA_pattern_.finditer(s1) for mo in matchobjects:", "namespace_='', name_='cpfInst', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, ))", "self.hasContent_(): outfile.write('>%s' % (eol_, 
)) self.exportChildren(outfile, level + 1, namespace_='', name_='paisNascto', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "child_, node, nodeName_, fromsubclass_=False): pass # end class bairro class cep(GeneratedsSuper): subclass =", "factory = staticmethod(factory) def get_tpInsc(self): return self.tpInsc def set_tpInsc(self, tpInsc): self.tpInsc = tpInsc", "is not None: return subclass(*args_, **kwargs_) if TIdeEveTrab.subclass: return TIdeEveTrab.subclass(*args_, **kwargs_) else: return", "def hasContent_(self): if ( self.paisResid is not None or self.dscLograd is not None", "nodeName_ == 'Signature': Signature_ = child_.text Signature_ = self.gds_validate_string(Signature_, node, 'Signature') self.Signature =", "vrBenef.subclass: return vrBenef.subclass(*args_, **kwargs_) else: return vrBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='cpfInst') if self.hasContent_(): outfile.write('>%s' % (eol_,", "class tpAmb class procEmi(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_", "else: return fimBeneficio(*args_, **kwargs_) factory = staticmethod(factory) def get_tpBenef(self): return self.tpBenef def set_tpBenef(self,", "name): return getattr(module, name) else: return None # # If you have installed", "self.dadosBenef.export(outfile, level, namespace_, name_='dadosBenef', pretty_print=pretty_print) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "# end class verProc class TEmprPJ(GeneratedsSuper): \"\"\"Informações do Empregador PJ\"\"\" subclass = None", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEnderecoExterior') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "nrInsc(GeneratedsSuper): subclass = None superclass = None def __init__(self): 
self.original_tagname_ = None def", "return self.paisNascto def set_paisNascto(self, paisNascto): self.paisNascto = paisNascto def get_paisNac(self): return self.paisNac def", "## banner = 'Dropping into IPython', ## exit_msg = 'Leaving Interpreter, back to", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmPai) if subclass is", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class tpPlanRP", "gds_parse_time(cls, input_data): tz = None if input_data[-1] == 'Z': tz = GeneratedsSuper._FixedOffsetTZ(0, 'UTC')", "subclass is not None: return subclass(*args_, **kwargs_) if infoBeneficio.subclass: return infoBeneficio.subclass(*args_, **kwargs_) else:", "or value is None: return value return typ(value) # # Data representation classes.", "TDadosBenef(GeneratedsSuper): \"\"\"Dados de beneficiário\"\"\" subclass = None superclass = None def __init__(self, dadosNasc=None,", "input_data, input_name=''): return '%s' % ' '.join(input_data) def gds_validate_double_list( self, input_data, node=None, input_name=''):", "ideBenef(GeneratedsSuper): \"\"\"Identificação do beneficiário\"\"\" subclass = None superclass = None def __init__(self, cpfBenef=None,", "CurrentSubclassModule_, tpPlanRP) if subclass is not None: return subclass(*args_, **kwargs_) if tpPlanRP.subclass: return", "input_name=''): return input_data def gds_format_datetime(self, input_data, input_name=''): if input_data.microsecond == 0: _svalue =", "namespace_, name_='evtCdBenPrRP') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "set_paisNac(self, paisNac): self.paisNac = paisNac def get_nmMae(self): return self.nmMae def set_nmMae(self, nmMae): self.nmMae", "qual não tenha havido ainda informação de término de benefícios.\"\"\" subclass = None", "child_, node, nodeName_, fromsubclass_=False): pass # end class tpPlanRP class 
fimBeneficio(GeneratedsSuper): \"\"\"Informações relativas", "level, already_processed, namespace_, name_='codPostal') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "elif nodeName_ == 'dtFimBenef': sval_ = child_.text dval_ = self.gds_parse_date(sval_) self.dtFimBenef = dval_", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrBenefic'): pass def exportChildren(self, outfile, level, namespace_='',", "None or self.endereco is not None ): return True else: return False def", "rootClass = globals().get(tag) return tag, rootClass def parse(inFileName, silence=False): parser = None doc", "True else: return False def export(self, outfile, level, namespace_='', name_='TEnderecoBrasil', namespacedef_='', pretty_print=True): imported_ns_def_", "nrBenefic=None, dtIniBenef=None, vrBenef=None, infoPenMorte=None): self.original_tagname_ = None self.tpBenef = tpBenef self.nrBenefic = nrBenefic", "already_processed: already_processed.add('Id') outfile.write(' Id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.Id), input_name='Id')), )) def exportChildren(self, outfile, level, namespace_='',", "False return self.__dict__ == other.__dict__ def __ne__(self, other): return not self.__eq__(other) def getSubclassFromModule_(module,", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='dtNascto', pretty_print=pretty_print)", "outfile, level, already_processed, namespace_='', name_='indRetif'): pass def exportChildren(self, outfile, level, namespace_='', name_='indRetif', fromsubclass_=False,", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, paisNascto) if subclass is not None: return", "namespace_='', name_='nmCid', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmCid') if imported_ns_def_ is not None: namespacedef_", "namespace_, name_='nmBenefic') if 
self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "return TDadosBeneficio(*args_, **kwargs_) factory = staticmethod(factory) def get_tpBenef(self): return self.tpBenef def set_tpBenef(self, tpBenef):", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dscLograd') if self.hasContent_(): outfile.write('>%s' % (eol_,", "return codPostal(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "= getSubclassFromModule_( CurrentSubclassModule_, fimBeneficio) if subclass is not None: return subclass(*args_, **kwargs_) if", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrInsc') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "if codMunic.subclass: return codMunic.subclass(*args_, **kwargs_) else: return codMunic(*args_, **kwargs_) factory = staticmethod(factory) def", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='infoPenMorte'): pass def exportChildren(self, outfile,", "def set_dadosNasc(self, dadosNasc): self.dadosNasc = dadosNasc def get_endereco(self): return self.endereco def set_endereco(self, endereco):", "by generateDS.py version 2.28b. 
# -*- coding: utf-8 -*-

#
# Generated Tue Oct 10 00:42:21 2017 by generateDS.py version 2.28b.
#
# Python 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC ...]
#

import sys
import re as re_
import base64
import datetime as datetime_
import warnings as warnings_
try:
    from lxml import etree as etree_
except ImportError:
    from xml.etree import ElementTree as etree_


def parsexml_(infile, parser=None, **kwargs):
    if parser is None:
        # Use the lxml ElementTree compatible parser so that, e.g.,
        # we ignore comments.
        try:
            parser = etree_.ETCompatXMLParser()
        except AttributeError:
            # fallback to xml.etree
            parser = etree_.XMLParser()
    doc = etree_.parse(infile, parser=parser, **kwargs)
    return doc

#
# Namespace prefix definition table (and other attributes, too)
#
# The module generatedsnamespaces, if it is importable, must contain
# a dictionary named GenerateDSNamespaceDefs.  This Python dictionary
# should map element type names (strings) to XML schema namespace prefix
# definitions.  The export method for any class for which there is
# a namespace prefix definition, will export that definition in the
# XML representation of that element.  See the export method of any
# generated element type class for an example of the use of this
# table.
# A sample table is:
#
#     # File: generatedsnamespaces.py
#
#     GenerateDSNamespaceDefs = {
#         "ElementtypeA": "http://www.xxx.com/namespaceA",
#         "ElementtypeB": "http://www.xxx.com/namespaceB",
#     }
#

try:
    from generatedsnamespaces import GenerateDSNamespaceDefs as GenerateDSNamespaceDefs_
except ImportError:
    GenerateDSNamespaceDefs_ = {}

Tag_pattern_ = re_.compile(r'({.*})?(.*)')

# Generated element type classes (definitions omitted here): eSocial,
# evtCdBenPrRP ("Evento de cadastro de benefícios previdenciários de
# Regimes Próprios" — registration event for RPPS pension benefits),
# TIdeEveTrab, TEmprPJ, TDadosBenef, TDadosBeneficio, infoBeneficio,
# infoPenMorte, iniBeneficio, altBeneficio, fimBeneficio, ideBenef,
# dadosNasc, endereco, TEnderecoBrasil, TEnderecoExterior, and the
# simple-type wrappers (tpBenef, nrBenefic, dtIniBenef, vrBenef, mtvFim,
# cpfBenef, nmBenefic, codMunic, uf, cep, ...).

GDSClassesMapping = {
    'ideEmpregador': TEmprPJ,
    'ideEvento': TIdeEveTrab,
    'iniBeneficio': TDadosBeneficio,
}

USAGE_TEXT = """
Usage: python <Parser>.py [ -s ] <in_xml_file>
"""


def usage():
    print(USAGE_TEXT)
    sys.exit(1)


def get_root_tag(node):
    tag = Tag_pattern_.match(node.tag).groups()[-1]
    rootClass = GDSClassesMapping.get(tag)
    if rootClass is None:
        rootClass = globals().get(tag)
    return tag, rootClass


def parse(inFileName, silence=False):
    parser = None
    doc = parsexml_(inFileName, parser)
    rootNode = doc.getroot()
    rootTag, rootClass = get_root_tag(rootNode)
    if rootClass is None:
        rootTag = 'eSocial'
        rootClass = eSocial
    rootObj = rootClass.factory()
    rootObj.build(rootNode)
    # Enable Python to collect the space used by the DOM.
    doc = None
    if not silence:
        sys.stdout.write('<?xml version="1.0" ?>\n')
        rootObj.export(
            sys.stdout, 0, name_=rootTag,
            namespacedef_='')
    return rootObj


def main():
    args = sys.argv[1:]
    if len(args) == 1:
        parse(args[0])
    else:
        usage()


if __name__ == '__main__':
    main()
try: parser = etree_.ETCompatXMLParser() except AttributeError: # fallback to", "end class tpAmb class procEmi(GeneratedsSuper): subclass = None superclass = None def __init__(self):", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, bairro) if subclass is not None: return", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpLograd', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "nmCid) if subclass is not None: return subclass(*args_, **kwargs_) if nmCid.subclass: return nmCid.subclass(*args_,", "= uf_ # end class TEnderecoBrasil class tpLograd(GeneratedsSuper): subclass = None superclass =", "export(self, outfile, level, namespace_='', name_='nrLograd', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrLograd') if imported_ns_def_ is", "def exportChildren(self, outfile, level, namespace_='', name_='verProc', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='dtIniBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "**kwargs_) else: return paisResid(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ):", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='cpfInst')", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmMae') if", "True else: return False def export(self, outfile, level, namespace_='', name_='dscLograd', namespacedef_='', pretty_print=True): imported_ns_def_", "name_='uf', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "get_idQuota(self): return self.idQuota def set_idQuota(self, idQuota): self.idQuota = idQuota def 
get_cpfInst(self): return self.cpfInst", "None: return subclass(*args_, **kwargs_) if idQuota.subclass: return idQuota.subclass(*args_, **kwargs_) else: return idQuota(*args_, **kwargs_)", "is not None: self.evtCdBenPrRP.export(outfile, level, namespace_, name_='evtCdBenPrRP', pretty_print=pretty_print) if self.Signature is not None:", "= getSubclassFromModule_( CurrentSubclassModule_, procEmi) if subclass is not None: return subclass(*args_, **kwargs_) if", "name_='idQuota', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "nodeName_, fromsubclass_=False): pass # end class nrRecibo class tpAmb(GeneratedsSuper): subclass = None superclass", "if not found2: found1 = False break return found1 @classmethod def gds_parse_time(cls, input_data):", "= 0 TypeText = 1 TypeString = 2 TypeInteger = 3 TypeFloat =", "= ival_ elif nodeName_ == 'nrBenefic': nrBenefic_ = child_.text nrBenefic_ = self.gds_validate_string(nrBenefic_, node,", "= dscLograd self.nrLograd = nrLograd self.complemento = complemento self.bairro = bairro self.nmCid =", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrInsc'): pass def exportChildren(self, outfile, level, namespace_='', name_='nrInsc',", "return False def export(self, outfile, level, namespace_='', name_='tpInsc', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpInsc')", "self.name) subelement.text = self.to_etree_simple() else: # category == MixedContainer.CategoryComplex self.value.to_etree(element) def to_etree_simple(self): if", "level, already_processed, namespace_='', name_='cep'): pass def exportChildren(self, outfile, level, namespace_='', name_='cep', fromsubclass_=False, pretty_print=True):", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpLograd) if subclass is not None:", "eSocial rootObj = rootClass.factory() 
rootObj.build(rootNode) # Enable Python to collect the space used", "ValueError): raise_parse_error(node, 'Requires sequence of doubles') return values def gds_format_boolean(self, input_data, input_name=''): return", "level, already_processed, namespace_, name_='TEmprPJ') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "**kwargs_) if procEmi.subclass: return procEmi.subclass(*args_, **kwargs_) else: return procEmi(*args_, **kwargs_) factory = staticmethod(factory)", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='bairro', pretty_print=pretty_print) outfile.write('</%s%s>%s' %", "dadosNasc) if subclass is not None: return subclass(*args_, **kwargs_) if dadosNasc.subclass: return dadosNasc.subclass(*args_,", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='cep'):", "pretty_print) outfile.write('<%snrBenefic>%s</%snrBenefic>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrBenefic), input_name='nrBenefic')), namespace_, eol_)) if self.dtFimBenef is not None:", "self.content_type == MixedContainer.TypeBase64: text = '%s' % base64.b64encode(self.value) return text def exportLiteral(self, outfile,", "None: return subclass(*args_, **kwargs_) if dtIniBenef.subclass: return dtIniBenef.subclass(*args_, **kwargs_) else: return dtIniBenef(*args_, **kwargs_)", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='codPostal', pretty_print=pretty_print)", "return self.container def set_child_attrs(self, child_attrs): self.child_attrs = child_attrs def get_child_attrs(self): return self.child_attrs def", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='fimBeneficio'): pass def exportChildren(self, outfile, level,", "level, namespace_='', name_='dscLograd', namespacedef_='', 
pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dscLograd') if imported_ns_def_ is not None:", "elif self.category == MixedContainer.CategorySimple: self.exportSimple(outfile, level, name) else: # category == MixedContainer.CategoryComplex self.value.export(", "text = '%s' % base64.b64encode(self.value) return text def exportLiteral(self, outfile, level, name): if", "is not None: return subclass(*args_, **kwargs_) if TEnderecoBrasil.subclass: return TEnderecoBrasil.subclass(*args_, **kwargs_) else: return", "return not self.__eq__(other) def getSubclassFromModule_(module, class_): '''Get the subclass of a class from", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class dscLograd class nrLograd(GeneratedsSuper):", "= 'brasil' elif nodeName_ == 'exterior': obj_ = TEnderecoExterior.factory() obj_.build(child_) self.exterior = obj_", "name_='TEmprPJ', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TEmprPJ') if imported_ns_def_ is not None: namespacedef_ =", "showIndent(outfile, level, pretty_print) outfile.write('<%sdtFimBenef>%s</%sdtFimBenef>%s' % (namespace_, self.gds_format_date(self.dtFimBenef, input_name='dtFimBenef'), namespace_, eol_)) if self.mtvFim is", "generateDS.py. 
# You can replace these methods by re-implementing the following class #", "return self.dtIniBenef def set_dtIniBenef(self, dtIniBenef): self.dtIniBenef = dtIniBenef def get_vrBenef(self): return self.vrBenef def", "- (hours * 3600)) // 60 _svalue += '{0:02d}:{1:02d}'.format(hours, minutes) return _svalue def", "def set_paisNac(self, paisNac): self.paisNac = paisNac def get_nmMae(self): return self.nmMae def set_nmMae(self, nmMae):", "getSubclassFromModule_( CurrentSubclassModule_, evtCdBenPrRP) if subclass is not None: return subclass(*args_, **kwargs_) if evtCdBenPrRP.subclass:", "input_data.minute, input_data.second, ('%f' % (float(input_data.microsecond) / 1000000))[2:], ) if input_data.tzinfo is not None:", "elif nodeName_ == 'exterior': obj_ = TEnderecoExterior.factory() obj_.build(child_) self.exterior = obj_ obj_.original_tagname_ =", "outfile.write('<%scep>%s</%scep>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cep), input_name='cep')), namespace_, eol_)) if self.codMunic is not None: showIndent(outfile,", "('-o', 'esociallib/v2_04/evtCdBenPrRP.py') # # Command line arguments: # schemas/v2_04/evtCdBenPrRP.xsd # # Command line:", "# # Generated Tue Oct 10 00:42:21 2017 by generateDS.py version 2.28b. #", "None: return subclass(*args_, **kwargs_) if endereco.subclass: return endereco.subclass(*args_, **kwargs_) else: return endereco(*args_, **kwargs_)", "node, nodeName_, fromsubclass_=False): pass # end class tpBenef class nrBenefic(GeneratedsSuper): subclass = None", "def parsexml_(infile, parser=None, **kwargs): if parser is None: # Use the lxml ElementTree", "pdb; pdb.set_trace() main() __all__ = [ \"TDadosBenef\", \"TDadosBeneficio\", \"TEmprPJ\", \"TEnderecoBrasil\", \"TEnderecoExterior\", \"TIdeEveTrab\", \"eSocial\"", "by the DOM. 
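The generated `parsexml_` helper prefers lxml's comment-skipping `ETCompatXMLParser` and falls back to `xml.etree` when lxml is absent. The following standalone sketch (not part of the generated module; `parsexml_sketch` is an illustrative name) mimics that selection using only the standard library, so the `AttributeError` fallback branch is the one actually taken:

```python
import io
from xml.etree import ElementTree as etree_


def parsexml_sketch(infile, parser=None, **kwargs):
    # Mimics the generated parsexml_: prefer lxml's comment-ignoring
    # parser, fall back to xml.etree's XMLParser when it is missing.
    if parser is None:
        try:
            parser = etree_.ETCompatXMLParser()  # exists in lxml.etree only
        except AttributeError:
            parser = etree_.XMLParser()          # stdlib fallback path
    return etree_.parse(infile, parser=parser, **kwargs)


doc = parsexml_sketch(io.StringIO('<eSocial><evtCdBenPrRP/></eSocial>'))
print(doc.getroot().tag)  # eSocial
```

Because `xml.etree` has no `ETCompatXMLParser` attribute, the lookup raises `AttributeError` and the sketch silently degrades to the stdlib parser, which is exactly the behavior the generated code relies on.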
#
# You can replace the following class definition by defining an
# importable module named "generatedssuper" and defining a class
# named "GeneratedsSuper" in that module.
#

try:
    from generatedssuper import GeneratedsSuper
except ImportError as exp:

    class GeneratedsSuper(object):

        def gds_format_string(self, input_data, input_name=''):
            return input_data

        def gds_validate_string(self, input_data, node=None, input_name=''):
            if not input_data:
                return ''
            else:
                return input_data

        def gds_format_integer(self, input_data, input_name=''):
            return '%d' % input_data

        def gds_validate_integer(self, input_data, node=None, input_name=''):
            return input_data

        def gds_format_date(self, input_data, input_name=''):
            _svalue = '%04d-%02d-%02d' % (
                input_data.year,
                input_data.month,
                input_data.day,
            )
            return _svalue

        @classmethod
        def gds_parse_date(cls, input_data):
            return datetime_.datetime.strptime(input_data, '%Y-%m-%d').date()

        def gds_encode(self, instring):
            if sys.version_info.major == 2:
                return instring.encode('utf8')
            else:
                return instring


def showIndent(outfile, level, pretty_print=True):
    if pretty_print:
        for idx in range(level):
            outfile.write('    ')


def quote_xml(inStr):
    "Escape markup chars, but do not modify CDATA sections."
    if not inStr:
        return ''
    s1 = (isinstance(inStr, BaseStrType_) and inStr or '%s' % inStr)
    s2 = ''
    pos = 0
    for mo in CDATA_pattern_.finditer(s1):
        s2 += quote_xml_aux(s1[pos:mo.start()])
        s2 += s1[mo.start():mo.end()]
        pos = mo.end()
    s2 += quote_xml_aux(s1[pos:])
    return s2


def quote_xml_aux(inStr):
    s1 = inStr.replace('&', '&amp;')
    s1 = s1.replace('<', '&lt;')
    s1 = s1.replace('>', '&gt;')
    return s1


class infoPenMorte(GeneratedsSuper):
    """Information related to the death pension ("pensao por morte") benefit"""
    subclass = None
    superclass = None

    def __init__(self, idQuota=None, cpfInst=None):
        self.original_tagname_ = None
        self.idQuota = idQuota
        self.cpfInst = cpfInst

    def factory(*args_, **kwargs_):
        if CurrentSubclassModule_ is not None:
            subclass = getSubclassFromModule_(
                CurrentSubclassModule_, infoPenMorte)
            if subclass is not None:
                return subclass(*args_, **kwargs_)
        if infoPenMorte.subclass:
            return infoPenMorte.subclass(*args_, **kwargs_)
        else:
            return infoPenMorte(*args_, **kwargs_)
    factory = staticmethod(factory)

    def get_idQuota(self): return self.idQuota
    def set_idQuota(self, idQuota): self.idQuota = idQuota
    def get_cpfInst(self): return self.cpfInst
    def set_cpfInst(self, cpfInst): self.cpfInst = cpfInst

    def hasContent_(self):
        if (
            self.idQuota is not None or
            self.cpfInst is not None
        ):
            return True
        else:
            return False

    def export(self, outfile, level, namespace_='', name_='infoPenMorte', namespacedef_='', pretty_print=True):
        imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoPenMorte')
        if imported_ns_def_ is not None:
            namespacedef_ = imported_ns_def_
        if pretty_print:
            eol_ = '\n'
        else:
            eol_ = ''
        if self.original_tagname_ is not None:
            name_ = self.original_tagname_
        showIndent(outfile, level, pretty_print)
        outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
        already_processed = set()
        self.exportAttributes(outfile, level, already_processed, namespace_, name_='infoPenMorte')
        if self.hasContent_():
            outfile.write('>%s' % (eol_, ))
            self.exportChildren(outfile, level + 1, namespace_='', name_='infoPenMorte', pretty_print=pretty_print)
            showIndent(outfile, level, pretty_print)
            outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
        else:
            outfile.write('/>%s' % (eol_, ))

    def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='infoPenMorte'):
        pass

    def exportChildren(self, outfile, level, namespace_='', name_='infoPenMorte', fromsubclass_=False, pretty_print=True):
        if pretty_print:
            eol_ = '\n'
        else:
            eol_ = ''
        if self.idQuota is not None:
            showIndent(outfile, level, pretty_print)
            outfile.write('<%sidQuota>%s</%sidQuota>%s' % (namespace_, self.gds_format_integer(self.idQuota, input_name='idQuota'), namespace_, eol_))
        if self.cpfInst is not None:
            showIndent(outfile, level, pretty_print)
            outfile.write('<%scpfInst>%s</%scpfInst>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfInst), input_name='cpfInst')), namespace_, eol_))

    def build(self, node):
        already_processed = set()
        self.buildAttributes(node, node.attrib, already_processed)
        for child in node:
            nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
            self.buildChildren(child, node, nodeName_)
        return self

    def buildAttributes(self, node, attrs, already_processed):
        pass

    def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
        if nodeName_ == 'idQuota':
            sval_ = child_.text
            try:
                ival_ = int(sval_)
            except (TypeError, ValueError) as exp:
                raise_parse_error(child_, 'requires integer: %s' % exp)
            ival_ = self.gds_validate_integer(ival_, node, 'idQuota')
            self.idQuota = ival_
        elif nodeName_ == 'cpfInst':
            cpfInst_ = child_.text
            cpfInst_ = self.gds_validate_string(cpfInst_, node, 'cpfInst')
            self.cpfInst = cpfInst_
# end class infoPenMorte

# (The remaining generated classes follow this same template: eSocial,
# evtCdBenPrRP, TIdeEveTrab, ideBenef, TDadosBenef, dadosNasc,
# endereco, TEnderecoBrasil, TEnderecoExterior, TEmprPJ, infoBeneficio,
# TDadosBeneficio, fimBeneficio, and the simple-element classes
# nrRecibo, tpAmb, procEmi, verProc, tpInsc, nrInsc, cpfBenef,
# nmBenefic, dtNascto, codMunic, uf, paisNascto, paisNac, nmMae,
# nmPai, tpLograd, dscLograd, nrLograd, complemento, bairro, nmCid,
# cep, codPostal, paisResid, tpPlanRP, tpBenef, nrBenefic, dtIniBenef,
# vrBenef, dtFimBenef, mtvFim, idQuota, cpfInst, plus the
# MixedContainer helper class.)
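Every generated class follows the same template: keyword construction, `get_`/`set_` accessors, a `hasContent_` test, and an indentation-aware `export`. The following self-contained sketch (the class name `InfoPenMorteSketch` and its simplified `export` are illustrative, not the generated code itself) condenses that pattern:

```python
import io


class InfoPenMorteSketch(object):
    # Stripped-down imitation of a generateDS-generated class.
    def __init__(self, idQuota=None, cpfInst=None):
        self.idQuota = idQuota
        self.cpfInst = cpfInst

    def get_idQuota(self): return self.idQuota
    def set_idQuota(self, idQuota): self.idQuota = idQuota
    def get_cpfInst(self): return self.cpfInst
    def set_cpfInst(self, cpfInst): self.cpfInst = cpfInst

    def hasContent_(self):
        # True when any child element is populated.
        return self.idQuota is not None or self.cpfInst is not None

    def export(self, outfile, level, name_='infoPenMorte'):
        # Writes the element with one indented line per populated
        # child, or a self-closing tag when there is no content.
        indent = '    ' * level
        if not self.hasContent_():
            outfile.write('%s<%s/>\n' % (indent, name_))
            return
        outfile.write('%s<%s>\n' % (indent, name_))
        if self.idQuota is not None:
            outfile.write('%s    <idQuota>%d</idQuota>\n' % (indent, self.idQuota))
        if self.cpfInst is not None:
            outfile.write('%s    <cpfInst>%s</cpfInst>\n' % (indent, self.cpfInst))
        outfile.write('%s</%s>\n' % (indent, name_))


buf = io.StringIO()
InfoPenMorteSketch(idQuota=1, cpfInst='12345678901').export(buf, 0)
print(buf.getvalue())
```

The real generated classes additionally route child values through the `gds_format_*`/`gds_validate_*` hooks on `GeneratedsSuper` and escape strings with `quote_xml`, which is what lets a subclass module override formatting globally.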
GDSClassesMapping = {
    'altBeneficio': TDadosBeneficio,
    'brasil': TEnderecoBrasil,
    'exterior': TEnderecoExterior,
}


USAGE_TEXT = """
Usage: python <Parser>.py [ -s ] <in_xml_file>
"""


def usage():
    print(USAGE_TEXT)
    sys.exit(1)


def get_root_tag(node):
    tag = Tag_pattern_.match(node.tag).groups()[-1]
    rootClass = GDSClassesMapping.get(tag)
    if rootClass is None:
        rootClass = globals().get(tag)
    return tag, rootClass


def parse(inFileName, silence=False):
    parser = None
    doc = parsexml_(inFileName, parser)
    rootNode = doc.getroot()
    rootTag, rootClass = get_root_tag(rootNode)
    if rootClass is None:
        rootTag = 'eSocial'
        rootClass = eSocial
    rootObj = rootClass.factory()
    rootObj.build(rootNode)
    # Enable Python to collect the space used by the DOM.
    doc = None
    if not silence:
        sys.stdout.write('<?xml version="1.0" ?>\n')
        rootObj.export(
            sys.stdout, 0, name_=rootTag,
            namespacedef_='',
            pretty_print=True)
    return rootObj

# (The parseEtree, parseString and parseLiteral entry points follow the
# same pattern as parse above.)


def main():
    args = sys.argv[1:]
    if len(args) == 1:
        parse(args[0])
    else:
        usage()


if __name__ == '__main__':
    #import pdb; pdb.set_trace()
    main()

__all__ = [
    "TDadosBenef",
    "TDadosBeneficio",
    "TEmprPJ",
    "TEnderecoBrasil",
    "TEnderecoExterior",
    "TIdeEveTrab",
    "eSocial"
]
or self.complemento is not None or self.bairro is not None or self.nmCid", "child.tail is not None: text += child.tail return text def find_attr_value_(attr_name, node): attrs", "level, namespace_='', name_='cep', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('cep') if imported_ns_def_ is not None:", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class procEmi class verProc(GeneratedsSuper): subclass", "the DOM. doc = None if not silence: sys.stdout.write('<?xml version=\"1.0\" ?>\\n') rootObj.export( sys.stdout,", "node, 'tpBenef') self.tpBenef = ival_ elif nodeName_ == 'nrBenefic': nrBenefic_ = child_.text nrBenefic_", "'%s' % ' '.join(input_data) def gds_validate_integer_list( self, input_data, node=None, input_name=''): values = input_data.split()", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpBenef'):", "method of # any generated element type class for a example of the", "= getSubclassFromModule_( CurrentSubclassModule_, nmPai) if subclass is not None: return subclass(*args_, **kwargs_) if", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'tpInsc': sval_ = child_.text try:", "name, pretty_print=pretty_print) def exportSimple(self, outfile, level, name): if self.content_type == MixedContainer.TypeString: outfile.write('<%s>%s</%s>' %", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='procEmi', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='codPostal',", "working directory (os.getcwd()): # esociallib # import sys import re as re_ import", "is not None: return subclass(*args_, **kwargs_) if verProc.subclass: return verProc.subclass(*args_, **kwargs_) else: return", "sval_ = child_.text dval_ = 
self.gds_parse_date(sval_) self.dtNascto = dval_ elif nodeName_ == 'codMunic':", "return class_obj1 def gds_build_any(self, node, type_name=None): return None @classmethod def gds_reverse_node_mapping(cls, mapping): return", "where and when you want to drop into the # IPython shell: #", "self.infoBeneficio = infoBeneficio def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass =", "nodeName_, fromsubclass_=False): if nodeName_ == 'evtCdBenPrRP': obj_ = evtCdBenPrRP.factory() obj_.build(child_) self.evtCdBenPrRP = obj_", "% ( input_data.hour, input_data.minute, input_data.second, ('%f' % (float(input_data.microsecond) / 1000000))[2:], ) if input_data.tzinfo", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='cpfInst'): pass def exportChildren(self,", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TEnderecoExterior'): pass def exportChildren(self, outfile, level,", "def set_bairro(self, bairro): self.bairro = bairro def get_nmCid(self): return self.nmCid def set_nmCid(self, nmCid):", "= child_.text cpfBenef_ = self.gds_validate_string(cpfBenef_, node, 'cpfBenef') self.cpfBenef = cpfBenef_ elif nodeName_ ==", "subclass = getSubclassFromModule_( CurrentSubclassModule_, tpInsc) if subclass is not None: return subclass(*args_, **kwargs_)", "namespace_, eol_)) if self.cpfInst is not None: showIndent(outfile, level, pretty_print) outfile.write('<%scpfInst>%s</%scpfInst>%s' % (namespace_,", "str def parsexml_(infile, parser=None, **kwargs): if parser is None: # Use the lxml", "name_='infoBeneficio', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if", "namespace_, name_='iniBeneficio', pretty_print=pretty_print) if self.altBeneficio is not None: self.altBeneficio.export(outfile, level, namespace_, name_='altBeneficio', pretty_print=pretty_print)", "return self.dadosBenef def set_dadosBenef(self, dadosBenef): self.dadosBenef = 
dadosBenef def hasContent_(self): if ( self.cpfBenef", "get_Signature(self): return self.Signature def set_Signature(self, Signature): self.Signature = Signature def hasContent_(self): if (", "None: return subclass(*args_, **kwargs_) if TDadosBeneficio.subclass: return TDadosBeneficio.subclass(*args_, **kwargs_) else: return TDadosBeneficio(*args_, **kwargs_)", "outfile, level, already_processed, namespace_='', name_='paisNac'): pass def exportChildren(self, outfile, level, namespace_='', name_='paisNac', fromsubclass_=False,", "# end class vrBenef class infoPenMorte(GeneratedsSuper): \"\"\"Informações relativas a pensão por morte\"\"\" subclass", "None: return subclass(*args_, **kwargs_) if dadosNasc.subclass: return dadosNasc.subclass(*args_, **kwargs_) else: return dadosNasc(*args_, **kwargs_)", "**kwargs_) else: return TIdeEveTrab(*args_, **kwargs_) factory = staticmethod(factory) def get_indRetif(self): return self.indRetif def", "= None self.tpLograd = tpLograd self.dscLograd = dscLograd self.nrLograd = nrLograd self.complemento =", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TEnderecoBrasil'): pass def exportChildren(self, outfile,", "'\\n' else: eol_ = '' if self.tpBenef is not None: showIndent(outfile, level, pretty_print)", "+ 1, namespace_='', name_='cpfBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "self.nmMae def set_nmMae(self, nmMae): self.nmMae = nmMae def get_nmPai(self): return self.nmPai def set_nmPai(self,", "doc = None if not silence: sys.stdout.write('<?xml version=\"1.0\" ?>\\n') rootObj.export( sys.stdout, 0, name_=rootTag,", "level, namespace_='', name_='TIdeEveTrab', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TIdeEveTrab') if imported_ns_def_ is not None:", "paisNac): self.paisNac = paisNac def get_nmMae(self): return self.nmMae def set_nmMae(self, nmMae): self.nmMae =", 
"lxml ElementTree compatible parser so that, e.g., # we ignore comments. try: parser", "showIndent(outfile, level, pretty_print) outfile.write('<%snrBenefic>%s</%snrBenefic>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrBenefic), input_name='nrBenefic')), namespace_, eol_)) if self.dtFimBenef is", "1, namespace_='', name_='dtIniBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_,", "MixedContainer.CategoryText: # Prevent exporting empty content as empty lines. if self.value.strip(): if len(element)", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrRecibo>%s</%snrRecibo>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrRecibo), input_name='nrRecibo')), namespace_, eol_)) if", "codMunic class uf(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ =", "name_='nrLograd', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "@staticmethod def convert_unicode(instring): if isinstance(instring, str): result = quote_xml(instring) elif sys.version_info.major == 2", "GenerateDSNamespaceDefs_.get('ideBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "level, pretty_print) outfile.write('<%sidQuota>%s</%sidQuota>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.idQuota), input_name='idQuota')), namespace_, eol_)) if self.cpfInst is not", "tpInsc=None, nrInsc=None): self.original_tagname_ = None self.tpInsc = tpInsc self.nrInsc = nrInsc def factory(*args_,", "is not None: return subclass(*args_, **kwargs_) if codMunic.subclass: return codMunic.subclass(*args_, **kwargs_) else: return", "def set_tpAmb(self, tpAmb): self.tpAmb = tpAmb def get_procEmi(self): return self.procEmi def set_procEmi(self, procEmi):", 
"'nmPai': nmPai_ = child_.text nmPai_ = self.gds_validate_string(nmPai_, node, 'nmPai') self.nmPai = nmPai_ #", "def to_etree(self, element): if self.category == MixedContainer.CategoryText: # Prevent exporting empty content as", "if len(names) == 2: classname = names[1] class_obj2 = globals().get(classname) if class_obj2 is", "dt.replace(tzinfo=tz) return dt.time() def gds_str_lower(self, instring): return instring.lower() def get_path_(self, node): path_list =", "reverse_mapping def parseString(inString, silence=False): if sys.version_info.major == 2: from StringIO import StringIO as", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dscLograd) if subclass is not None:", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtFimBenef'): pass def", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nrLograd class", "rootObj, rootElement, mapping, reverse_mapping def parseString(inString, silence=False): if sys.version_info.major == 2: from StringIO", "initvalue_ = dtFimBenef self.dtFimBenef = initvalue_ self.mtvFim = mtvFim def factory(*args_, **kwargs_): if", "%s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'mtvFim') self.mtvFim = ival_ # end", "def gds_validate_double(self, input_data, node=None, input_name=''): return input_data def gds_format_double_list(self, input_data, input_name=''): return '%s'", "namespace_, name_='codPostal') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "return input_data def gds_format_time(self, input_data, input_name=''): if input_data.microsecond == 0: _svalue = '%02d:%02d:%02d'", "namespace_='', name_='dtIniBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dtIniBenef') if imported_ns_def_ is not None: namespacedef_", "def __init__(self, tpInsc=None, nrInsc=None): 
self.original_tagname_ = None self.tpInsc = tpInsc self.nrInsc = nrInsc", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, paisResid) if subclass is", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nrInsc) if subclass is not None: return subclass(*args_,", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dadosNasc) if subclass is", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='uf', pretty_print=pretty_print)", "subclass(*args_, **kwargs_) if cpfBenef.subclass: return cpfBenef.subclass(*args_, **kwargs_) else: return cpfBenef(*args_, **kwargs_) factory =", "as etree_ Validate_simpletypes_ = True if sys.version_info.major == 2: BaseStrType_ = basestring else:", "self.Id def set_Id(self, Id): self.Id = Id def hasContent_(self): if ( self.ideEvento is", "'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'tpAmb') self.tpAmb = ival_", "None: showIndent(outfile, level, pretty_print) outfile.write('<%snrInsc>%s</%snrInsc>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrInsc), input_name='nrInsc')), namespace_, eol_)) def build(self,", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpAmb') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "elif nodeName_ == 'nmMae': nmMae_ = child_.text nmMae_ = self.gds_validate_string(nmMae_, node, 'nmMae') self.nmMae", "fromsubclass_=False): pass # end class dtIniBenef class vrBenef(GeneratedsSuper): subclass = None superclass =", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class indRetif", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmMae) if", "def __init__(self, tpLograd=None, 
dscLograd=None, nrLograd=None, complemento=None, bairro=None, cep=None, codMunic=None, uf=None): self.original_tagname_ = None", "TDadosBeneficio.factory() obj_.build(child_) self.iniBeneficio = obj_ obj_.original_tagname_ = 'iniBeneficio' elif nodeName_ == 'altBeneficio': obj_", "initvalue_ = datetime_.datetime.strptime(dtIniBenef, '%Y-%m-%d').date() else: initvalue_ = dtIniBenef self.dtIniBenef = initvalue_ self.vrBenef =", "False def export(self, outfile, level, namespace_='', name_='nmCid', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmCid') if", "+ 1, namespace_='', name_='paisNascto', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpBenef'): pass def", "inStr if s1.find(\"'\") == -1: if s1.find('\\n') == -1: return \"'%s'\" % s1", "complemento=None, bairro=None, cep=None, codMunic=None, uf=None): self.original_tagname_ = None self.tpLograd = tpLograd self.dscLograd =", "= inStr if s1.find(\"'\") == -1: if s1.find('\\n') == -1: return \"'%s'\" %", "namespace_, name_='paisNac') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "if nodeName_ == 'indRetif': sval_ = child_.text try: ival_ = int(sval_) except (TypeError,", "if subclass is not None: return subclass(*args_, **kwargs_) if cpfBenef.subclass: return cpfBenef.subclass(*args_, **kwargs_)", "factory = staticmethod(factory) def get_dtNascto(self): return self.dtNascto def set_dtNascto(self, dtNascto): self.dtNascto = dtNascto", "eol_ = '' if self.idQuota is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sidQuota>%s</%sidQuota>%s' %", "obj_ = TDadosBenef.factory() obj_.build(child_) self.dadosBenef = obj_ obj_.original_tagname_ = 'dadosBenef' # end class", "if CurrentSubclassModule_ is not None: subclass 
= getSubclassFromModule_( CurrentSubclassModule_, tpInsc) if subclass is", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpLograd) if", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nrBenefic) if subclass is not None: return subclass(*args_,", "if self.dtIniBenef is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sdtIniBenef>%s</%sdtIniBenef>%s' % (namespace_, self.gds_format_date(self.dtIniBenef, input_name='dtIniBenef'),", "return dadosNasc(*args_, **kwargs_) factory = staticmethod(factory) def get_dtNascto(self): return self.dtNascto def set_dtNascto(self, dtNascto):", "input_data.year, input_data.month, input_data.day, input_data.hour, input_data.minute, input_data.second, ('%f' % (float(input_data.microsecond) / 1000000))[2:], ) if", "not inStr: return '' s1 = (isinstance(inStr, BaseStrType_) and inStr or '%s' %", "nrRecibo_ elif nodeName_ == 'tpAmb': sval_ = child_.text try: ival_ = int(sval_) except", "child_, node, nodeName_, fromsubclass_=False): pass # end class indRetif class nrRecibo(GeneratedsSuper): subclass =", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nmCid', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='fimBeneficio', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' %", "name_='tpInsc') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpInsc',", "self.idQuota = idQuota def get_cpfInst(self): return self.cpfInst def set_cpfInst(self, cpfInst): self.cpfInst = cpfInst", "= endereco def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_(", "namespace_='', name_='TEmprPJ', namespacedef_='', pretty_print=True): imported_ns_def_ = 
GenerateDSNamespaceDefs_.get('TEmprPJ') if imported_ns_def_ is not None: namespacedef_", "nrLograd def get_complemento(self): return self.complemento def set_complemento(self, complemento): self.complemento = complemento def get_bairro(self):", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpBenef) if", "nrBenefic def get_dtIniBenef(self): return self.dtIniBenef def set_dtIniBenef(self, dtIniBenef): self.dtIniBenef = dtIniBenef def get_vrBenef(self):", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='cpfInst'): pass def exportChildren(self, outfile, level, namespace_='',", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='procEmi'): pass def exportChildren(self, outfile, level, namespace_='', name_='procEmi',", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%scep>%s</%scep>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cep), input_name='cep')), namespace_, eol_)) if", "child_attrs def get_child_attrs(self): return self.child_attrs def set_choice(self, choice): self.choice = choice def get_choice(self):", "micro_seconds = int(float('0.' 
+ time_parts[1]) * 1000000) input_data = '%s.%s' % (time_parts[0], micro_seconds,", "elif nodeName_ == 'nrRecibo': nrRecibo_ = child_.text nrRecibo_ = self.gds_validate_string(nrRecibo_, node, 'nrRecibo') self.nrRecibo", "CDATA_pattern_.finditer(s1) for mo in matchobjects: s3 = s1[pos:mo.start()] s2 += quote_xml_aux(s3) s2 +=", "None or self.altBeneficio is not None or self.fimBeneficio is not None ): return", "dscLograd.subclass: return dscLograd.subclass(*args_, **kwargs_) else: return dscLograd(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "def set_nrBenefic(self, nrBenefic): self.nrBenefic = nrBenefic def get_dtIniBenef(self): return self.dtIniBenef def set_dtIniBenef(self, dtIniBenef):", "+ 1, namespace_='', name_='TEnderecoExterior', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "bairro def get_nmCid(self): return self.nmCid def set_nmCid(self, nmCid): self.nmCid = nmCid def get_codPostal(self):", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class bairro class cep(GeneratedsSuper): subclass", "level, already_processed, namespace_, name_='dtIniBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "not modify CDATA sections.\" if not inStr: return '' s1 = (isinstance(inStr, BaseStrType_)", "mtvFim(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", "any class for which there is # a namespace prefix definition, will export", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='cpfInst', pretty_print=pretty_print)", "if cpfBenef.subclass: return cpfBenef.subclass(*args_, **kwargs_) else: return cpfBenef(*args_, **kwargs_) factory = staticmethod(factory) def", "= obj_ obj_.original_tagname_ = 'ideEvento' elif nodeName_ == 'ideEmpregador': obj_ = 
TEmprPJ.factory() obj_.build(child_)", "elif len(attr_parts) == 2: prefix, name = attr_parts namespace = node.nsmap.get(prefix) if namespace", "is not None or self.nmBenefic is not None or self.dadosBenef is not None", "return True else: return False def export(self, outfile, level, namespace_='', name_='ideBenef', namespacedef_='', pretty_print=True):", "sys import re as re_ import base64 import datetime as datetime_ import warnings", "tzoff is not None: total_seconds = tzoff.seconds + (86400 * tzoff.days) if total_seconds", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='cpfInst') if self.hasContent_(): outfile.write('>%s'", "GeneratedsNamespaceDefs. This Python dictionary # should map element type names (strings) to XML", "child_attrs=None, choice=None): self.name = name self.data_type = data_type self.container = container self.child_attrs =", "return True else: return False def export(self, outfile, level, namespace_='', name_='eSocial', namespacedef_=' xmlns:ds=\"http://www.w3.org/2000/09/xmldsig#\"", "if TIdeEveTrab.subclass: return TIdeEveTrab.subclass(*args_, **kwargs_) else: return TIdeEveTrab(*args_, **kwargs_) factory = staticmethod(factory) def", "None def __init__(self, dadosNasc=None, endereco=None): self.original_tagname_ = None self.dadosNasc = dadosNasc self.endereco =", "the inner elements found1 = True for patterns1 in patterns: found2 = False", "# end class paisNac class nmMae(GeneratedsSuper): subclass = None superclass = None def", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtNascto'): pass def exportChildren(self, outfile, level,", "subclass = None superclass = None def __init__(self, dtNascto=None, codMunic=None, uf=None, paisNascto=None, paisNac=None,", "'nmBenefic': nmBenefic_ = child_.text nmBenefic_ = self.gds_validate_string(nmBenefic_, node, 'nmBenefic') self.nmBenefic = nmBenefic_ elif", "self.infoBeneficio is not None ): return 
True else: return False def export(self, outfile,", "## from IPython.Shell import IPShellEmbed ## args = '' ## ipshell = IPShellEmbed(args,", "class tpBenef(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "subclass is not None: return subclass(*args_, **kwargs_) if nrInsc.subclass: return nrInsc.subclass(*args_, **kwargs_) else:", "self.value def getName(self): return self.name def export(self, outfile, level, name, namespace, pretty_print=True): if", "namespacedef_=' xmlns:ds=\"http://www.w3.org/2000/09/xmldsig#\" ', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('eSocial') if imported_ns_def_ is not None: namespacedef_", "verProc(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "'cpfBenef': cpfBenef_ = child_.text cpfBenef_ = self.gds_validate_string(cpfBenef_, node, 'cpfBenef') self.cpfBenef = cpfBenef_ elif", "nodeName_, fromsubclass_=False): pass # end class tpBenef class nrBenefic(GeneratedsSuper): subclass = None superclass", "return False def export(self, outfile, level, namespace_='', name_='tpLograd', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpLograd')", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class nrLograd class complemento(GeneratedsSuper): subclass", "dtFimBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "self.gds_format_integer(self.indRetif, input_name='indRetif'), namespace_, eol_)) if self.nrRecibo is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrRecibo>%s</%snrRecibo>%s'", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'idQuota': idQuota_ =", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'evtCdBenPrRP': obj_ = evtCdBenPrRP.factory() obj_.build(child_)", "self.dadosBenef 
= dadosBenef def hasContent_(self): if ( self.cpfBenef is not None or self.nmBenefic", "eol_ = '' if self.tpPlanRP is not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpPlanRP>%s</%stpPlanRP>%s' %", "re_.DOTALL) # Change this to redirect the generated superclass module to use a", "outfile, level, already_processed, namespace_='', name_='evtCdBenPrRP'): if self.Id is not None and 'Id' not", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='evtCdBenPrRP') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "= \"'%s'\" % s1 else: s1 = '\"%s\"' % s1 return s1 def", "outfile, level, already_processed, namespace_='', name_='TIdeEveTrab'): pass def exportChildren(self, outfile, level, namespace_='', name_='TIdeEveTrab', fromsubclass_=False,", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpInsc', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "else: _svalue = '%02d:%02d:%02d.%s' % ( input_data.hour, input_data.minute, input_data.second, ('%f' % (float(input_data.microsecond) /", "MixedContainer.CategorySimple: self.exportSimple(outfile, level, name) else: # category == MixedContainer.CategoryComplex self.value.export( outfile, level, namespace,", "1, namespace_='', name_='infoBeneficio', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "set_paisNascto(self, paisNascto): self.paisNascto = paisNascto def get_paisNac(self): return self.paisNac def set_paisNac(self, paisNac): self.paisNac", "subclass is not None: return subclass(*args_, **kwargs_) if cpfBenef.subclass: return cpfBenef.subclass(*args_, **kwargs_) else:", "not None or self.codPostal is not None ): return True else: return False", "True else: return False def export(self, outfile, level, namespace_='', name_='codPostal', namespacedef_='', pretty_print=True): imported_ns_def_", "name_='nmBenefic', 
namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmBenefic') if imported_ns_def_ is not None: namespacedef_ =", "= '' if self.tpBenef is not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpBenef>%s</%stpBenef>%s' % (namespace_,", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='vrBenef'): pass def exportChildren(self,", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='evtCdBenPrRP') if self.hasContent_(): outfile.write('>%s'", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpInsc) if", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpInsc') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "exportChildren(self, outfile, level, namespace_='', name_='TDadosBenef', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else:", "CurrentSubclassModule_, infoPenMorte) if subclass is not None: return subclass(*args_, **kwargs_) if infoPenMorte.subclass: return", "get_iniBeneficio(self): return self.iniBeneficio def set_iniBeneficio(self, iniBeneficio): self.iniBeneficio = iniBeneficio def get_altBeneficio(self): return self.altBeneficio", "1, namespace_='', name_='vrBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_,", "== 'exterior': obj_ = TEnderecoExterior.factory() obj_.build(child_) self.exterior = obj_ obj_.original_tagname_ = 'exterior' #", "+ 1, namespace_='', name_='TDadosBenef', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "end class dadosNasc class dtNascto(GeneratedsSuper): subclass = None superclass = None def __init__(self):", ")) already_processed = set() self.exportAttributes(outfile, 
[Unrecoverable fragment: shuffled n-gram residue of a generateDS-generated Python XML-binding module for the eSocial event `evtCdBenPrRP` (classes such as `eSocial`, `ideBenef`, `TDadosBeneficio`, `TDadosBenef`, `TEnderecoBrasil`, `TEnderecoExterior`, `infoBeneficio`, and their `factory`/`export`/`build` boilerplate), interleaved with fragments of an unrelated `printDict()` exercise prompt. The original line ordering cannot be reconstructed from this span.]
model_\\n\\n') sys.stdout.write('rootObj = model_.rootClass(\\n') rootObj.exportLiteral(sys.stdout, 0, name_=rootTag) sys.stdout.write(')\\n')", "altBeneficio): self.altBeneficio = altBeneficio def get_fimBeneficio(self): return self.fimBeneficio def set_fimBeneficio(self, fimBeneficio): self.fimBeneficio =", "quote_python(inStr): s1 = inStr if s1.find(\"'\") == -1: if s1.find('\\n') == -1: return", "subclass = getSubclassFromModule_( CurrentSubclassModule_, verProc) if subclass is not None: return subclass(*args_, **kwargs_)", "def get_tpPlanRP(self): return self.tpPlanRP def set_tpPlanRP(self, tpPlanRP): self.tpPlanRP = tpPlanRP def get_iniBeneficio(self): return", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpInsc'): pass def exportChildren(self, outfile,", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='bairro'): pass", "fval_ = self.gds_validate_float(fval_, node, 'vrBenef') self.vrBenef = fval_ elif nodeName_ == 'infoPenMorte': obj_", "if self.mtvFim is not None: showIndent(outfile, level, pretty_print) outfile.write('<%smtvFim>%s</%smtvFim>%s' % (namespace_, self.gds_format_integer(self.mtvFim, input_name='mtvFim'),", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='dscLograd', pretty_print=pretty_print)", "text = '%g' % self.value elif self.content_type == MixedContainer.TypeBase64: text = '%s' %", "export(self, outfile, level, namespace_='', name_='tpInsc', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpInsc') if imported_ns_def_ is", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrInsc'):", "'nrBenefic') self.nrBenefic = nrBenefic_ elif nodeName_ == 'dtFimBenef': sval_ = child_.text dval_ =", "GenerateDSNamespaceDefs_.get('tpInsc') if 
imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "None self.indRetif = indRetif self.nrRecibo = nrRecibo self.tpAmb = tpAmb self.procEmi = procEmi", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtIniBenef'): pass def exportChildren(self, outfile, level, namespace_='',", "= None superclass = None def __init__(self, idQuota=None, cpfInst=None): self.original_tagname_ = None self.idQuota", "(namespace_, self.gds_format_integer(self.tpBenef, input_name='tpBenef'), namespace_, eol_)) if self.nrBenefic is not None: showIndent(outfile, level, pretty_print)", "== 0: _svalue += 'Z' else: if total_seconds < 0: _svalue += '-'", "pass # end class indRetif class nrRecibo(GeneratedsSuper): subclass = None superclass = None", "subclass is not None: return subclass(*args_, **kwargs_) if TIdeEveTrab.subclass: return TIdeEveTrab.subclass(*args_, **kwargs_) else:", "(time_parts[0], micro_seconds, ) dt = datetime_.datetime.strptime( input_data, '%Y-%m-%dT%H:%M:%S.%f') else: dt = datetime_.datetime.strptime( input_data,", "= find_attr_value_('Id', node) if value is not None and 'Id' not in already_processed:", "(namespace_, self.gds_format_date(self.dtNascto, input_name='dtNascto'), namespace_, eol_)) if self.codMunic is not None: showIndent(outfile, level, pretty_print)", "False def export(self, outfile, level, namespace_='', name_='endereco', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('endereco') if", "GenerateDSNamespaceDefs_.get('cpfBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "TypeInteger = 3 TypeFloat = 4 TypeDecimal = 5 TypeDouble = 6 TypeBoolean", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class procEmi class", "True else: return False def export(self, outfile, level, namespace_='', name_='nrLograd', namespacedef_='', pretty_print=True): imported_ns_def_", 
"namespace_, eol_)) if self.tpAmb is not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpAmb>%s</%stpAmb>%s' % (namespace_,", "child in node: if child.tail is not None: text += child.tail return text", "if eSocial.subclass: return eSocial.subclass(*args_, **kwargs_) else: return eSocial(*args_, **kwargs_) factory = staticmethod(factory) def", "\"\"\"Grupo de informações do endereço do Trabalhador\"\"\" subclass = None superclass = None", "datetime_.datetime.strptime( input_data, '%Y-%m-%dT%H:%M:%S') dt = dt.replace(tzinfo=tz) return dt def gds_validate_date(self, input_data, node=None, input_name=''):", "Signature_ # end class eSocial class evtCdBenPrRP(GeneratedsSuper): \"\"\"Evento de cadastro de benefícios previdenciários", "self.gds_encode(self.gds_format_string(quote_xml(self.cep), input_name='cep')), namespace_, eol_)) if self.codMunic is not None: showIndent(outfile, level, pretty_print) outfile.write('<%scodMunic>%s</%scodMunic>%s'", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEnderecoExterior') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "try: from generatedssuper import GeneratedsSuper except ImportError as exp: class GeneratedsSuper(object): tzoff_pattern =", "class TDadosBeneficio(GeneratedsSuper): \"\"\"Dados do benefício previdenciário\"\"\" subclass = None superclass = None def", "is not None or self.uf is not None or self.paisNascto is not None", "*= -1 else: _svalue += '+' hours = total_seconds // 3600 minutes =", "'verProc') self.verProc = verProc_ # end class TIdeEveTrab class indRetif(GeneratedsSuper): subclass = None", "( self.tpInsc is not None or self.nrInsc is not None ): return True", "factory = staticmethod(factory) def get_dadosNasc(self): return self.dadosNasc def set_dadosNasc(self, dadosNasc): self.dadosNasc = dadosNasc", "pretty_print) outfile.write('<%snrLograd>%s</%snrLograd>%s' % (namespace_, 
self.gds_encode(self.gds_format_string(quote_xml(self.nrLograd), input_name='nrLograd')), namespace_, eol_)) if self.complemento is not None:", "get_evtCdBenPrRP(self): return self.evtCdBenPrRP def set_evtCdBenPrRP(self, evtCdBenPrRP): self.evtCdBenPrRP = evtCdBenPrRP def get_Signature(self): return self.Signature", "level, namespace_='', name_='complemento', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('complemento') if imported_ns_def_ is not None:", "input_name='paisResid')), namespace_, eol_)) if self.dscLograd is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sdscLograd>%s</%sdscLograd>%s' %", "= GenerateDSNamespaceDefs_.get('procEmi') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "not None or self.codMunic is not None or self.uf is not None ):", "for k, v in mapping.iteritems())) @staticmethod def gds_encode(instring): if sys.version_info.major == 2: return", "if ( ): return True else: return False def export(self, outfile, level, namespace_='',", "not None or self.dscLograd is not None or self.nrLograd is not None or", "previdenciário\"\"\" subclass = None superclass = None def __init__(self, tpBenef=None, nrBenefic=None, dtIniBenef=None, vrBenef=None,", "CategorySimple = 2 CategoryComplex = 3 # Constants for content_type: TypeNone = 0", "not None or self.nrBenefic is not None or self.dtFimBenef is not None or", "getName(self): return self.name def export(self, outfile, level, name, namespace, pretty_print=True): if self.category ==", "return False def export(self, outfile, level, namespace_='', name_='dadosNasc', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dadosNasc')", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpLograd)", "class dtNascto class codMunic(GeneratedsSuper): subclass = None superclass = None def __init__(self): 
self.original_tagname_", "fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if self.tpPlanRP", "not None: self.exterior.export(outfile, level, namespace_, name_='exterior', pretty_print=pretty_print) def build(self, node): already_processed = set()", "\"\"\"Informações do Endereço no Brasil\"\"\" subclass = None superclass = None def __init__(self,", "self.tpBenef = ival_ elif nodeName_ == 'nrBenefic': nrBenefic_ = child_.text nrBenefic_ = self.gds_validate_string(nrBenefic_,", "def gds_validate_float_list( self, input_data, node=None, input_name=''): values = input_data.split() for value in values:", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, mtvFim) if subclass is not None: return", "already_processed, namespace_='', name_='vrBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='vrBenef', fromsubclass_=False, pretty_print=True): pass", "subclass = getSubclassFromModule_( CurrentSubclassModule_, cep) if subclass is not None: return subclass(*args_, **kwargs_)", "self.value.strip(): if len(element) > 0: if element[-1].tail is None: element[-1].tail = self.value else:", "pass def exportChildren(self, outfile, level, namespace_='', name_='paisResid', fromsubclass_=False, pretty_print=True): pass def build(self, node):", "pretty_print: eol_ = '\\n' else: eol_ = '' if self.idQuota is not None:", "attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'evtCdBenPrRP':", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('fimBeneficio') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "= None self.paisResid = paisResid self.dscLograd = dscLograd self.nrLograd = nrLograd self.complemento =", "# Then use the following line where and when you want to drop", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='paisNac'): pass def 
exportChildren(self, outfile, level, namespace_='',", "# end class mtvFim class TIdeEveTrab(GeneratedsSuper): \"\"\"Identificação do evento\"\"\" subclass = None superclass", "0 matchobjects = CDATA_pattern_.finditer(s1) for mo in matchobjects: s3 = s1[pos:mo.start()] s2 +=", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, eSocial) if subclass is not", "exit_msg = 'Leaving Interpreter, back to program.') # Then use the following line", "None if not silence: sys.stdout.write('<?xml version=\"1.0\" ?>\\n') rootObj.export( sys.stdout, 0, name_=rootTag, namespacedef_='', pretty_print=True)", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, fimBeneficio) if", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('mtvFim') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "level + 1, namespace_='', name_='nrBenefic', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "child_, node, nodeName_, fromsubclass_=False): pass # end class verProc class TEmprPJ(GeneratedsSuper): \"\"\"Informações do", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmMae') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "for child in node: nodeName_ = Tag_pattern_.match(child.tag).groups()[-1] self.buildChildren(child, node, nodeName_) return self def", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='infoBeneficio') if self.hasContent_(): outfile.write('>%s'", "= 2 CategoryComplex = 3 # Constants for content_type: TypeNone = 0 TypeText", "Signature_ = child_.text Signature_ = self.gds_validate_string(Signature_, node, 'Signature') self.Signature = Signature_ # end", "outfile, level, namespace_='', name_='nrLograd', 
fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmBenefic') if self.hasContent_():", "name_='indRetif'): pass def exportChildren(self, outfile, level, namespace_='', name_='indRetif', fromsubclass_=False, pretty_print=True): pass def build(self,", "= '' if self.original_tagname_ is not None: name_ = self.original_tagname_ showIndent(outfile, level, pretty_print)", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='infoBeneficio'): pass def exportChildren(self, outfile, level,", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmPai') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "( self.category, self.content_type, self.name, self.value)) else: # category == MixedContainer.CategoryComplex showIndent(outfile, level) outfile.write(", "level, namespace_='', name_='TIdeEveTrab', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ =", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='paisNac') if self.hasContent_(): outfile.write('>%s' % (eol_,", "): return True else: return False def export(self, outfile, level, namespace_='', name_='nmMae', namespacedef_='',", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='procEmi'): pass", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpInsc)", "namespace_='', name_='TDadosBeneficio', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TDadosBeneficio') if imported_ns_def_ is not None: namespacedef_", "as datetime_ import warnings as warnings_ try: from lxml import etree as etree_", "None 
and 'Id' not in already_processed: already_processed.add('Id') self.Id = value def buildChildren(self, child_,", "CurrentSubclassModule_, mtvFim) if subclass is not None: return subclass(*args_, **kwargs_) if mtvFim.subclass: return", "s1 = s1.replace('&', '&amp;') s1 = s1.replace('<', '&lt;') s1 = s1.replace('>', '&gt;') if", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nmCid'): pass def", "is not None: return subclass(*args_, **kwargs_) if nrLograd.subclass: return nrLograd.subclass(*args_, **kwargs_) else: return", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='idQuota'): pass def exportChildren(self, outfile, level, namespace_='', name_='idQuota',", "export(self, outfile, level, name, namespace, pretty_print=True): if self.category == MixedContainer.CategoryText: # Prevent exporting", "namespace_, name_='endereco') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "'codPostal': codPostal_ = child_.text codPostal_ = self.gds_validate_string(codPostal_, node, 'codPostal') self.codPostal = codPostal_ #", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpBenef) if subclass", "StringIO as IOBuffer else: from io import BytesIO as IOBuffer parser = None", "OR the inner elements found1 = True for patterns1 in patterns: found2 =", "quote_xml_aux(s3) return s2 def quote_xml_aux(inStr): s1 = inStr.replace('&', '&amp;') s1 = s1.replace('<', '&lt;')", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmBenefic>%s</%snmBenefic>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmBenefic), input_name='nmBenefic')), namespace_, eol_)) if", "staticmethod(factory) def get_tpInsc(self): return self.tpInsc def set_tpInsc(self, tpInsc): self.tpInsc = tpInsc def get_nrInsc(self):", "self.tpPlanRP = ival_ elif 
nodeName_ == 'iniBeneficio': obj_ = TDadosBeneficio.factory() obj_.build(child_) self.iniBeneficio =", "2 TypeInteger = 3 TypeFloat = 4 TypeDecimal = 5 TypeDouble = 6", "infoBeneficio) if subclass is not None: return subclass(*args_, **kwargs_) if infoBeneficio.subclass: return infoBeneficio.subclass(*args_,", "GenerateDSNamespaceDefs_.get('TEmprPJ') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'tpInsc': sval_", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%suf>%s</%suf>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.uf), input_name='uf')), namespace_, eol_)) def", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, codPostal) if subclass is not", "self.buildChildren(child, node, nodeName_) return self def buildAttributes(self, node, attrs, already_processed): pass def buildChildren(self,", "namespace_='', name_='paisResid'): pass def exportChildren(self, outfile, level, namespace_='', name_='paisResid', fromsubclass_=False, pretty_print=True): pass def", "exportChildren(self, outfile, level, namespace_='', name_='cep', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpBenef') if", "namespace_, name_='TDadosBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "name_='paisNascto') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='paisNascto',", "child_, node, nodeName_, fromsubclass_=False): pass # end class nrBenefic class dtFimBenef(GeneratedsSuper): subclass =", "**kwargs_): if CurrentSubclassModule_ is not None: 
subclass = getSubclassFromModule_( CurrentSubclassModule_, verProc) if subclass", "self.altBeneficio is not None: self.altBeneficio.export(outfile, level, namespace_, name_='altBeneficio', pretty_print=pretty_print) if self.fimBeneficio is not", "is not None or self.nmMae is not None or self.nmPai is not None", "def set_mtvFim(self, mtvFim): self.mtvFim = mtvFim def hasContent_(self): if ( self.tpBenef is not", "level, namespace_='', name_='nrRecibo', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrRecibo') if imported_ns_def_ is not None:", "name_='nrLograd') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nrLograd',", "return typ(value) # # Data representation classes. # class eSocial(GeneratedsSuper): subclass = None", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtIniBenef'): pass def", "= 4 TypeDecimal = 5 TypeDouble = 6 TypeBoolean = 7 TypeBase64 =", "'%02d:%02d:%02d.%s' % ( input_data.hour, input_data.minute, input_data.second, ('%f' % (float(input_data.microsecond) / 1000000))[2:], ) if", "paisNascto.subclass(*args_, **kwargs_) else: return paisNascto(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if (", "if subclass is not None: return subclass(*args_, **kwargs_) if cpfInst.subclass: return cpfInst.subclass(*args_, **kwargs_)", "paisResid) if subclass is not None: return subclass(*args_, **kwargs_) if paisResid.subclass: return paisResid.subclass(*args_,", "input_data.day, ) try: if input_data.tzinfo is not None: tzoff = input_data.tzinfo.utcoffset(input_data) if tzoff", "raise_parse_error(child_, 'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'tpPlanRP') self.tpPlanRP =", "= nmBenefic def get_dadosBenef(self): return self.dadosBenef def set_dadosBenef(self, dadosBenef): self.dadosBenef = dadosBenef def", "namespacedef_ 
or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='codPostal') if", "pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if self.indRetif is", "= s1.replace('<', '&lt;') s1 = s1.replace('>', '&gt;') return s1 def quote_attrib(inStr): s1 =", "exp) ival_ = self.gds_validate_integer(ival_, node, 'tpPlanRP') self.tpPlanRP = ival_ elif nodeName_ == 'iniBeneficio':", "exportChildren(self, outfile, level, namespace_='', name_='nrInsc', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "return subclass(*args_, **kwargs_) if complemento.subclass: return complemento.subclass(*args_, **kwargs_) else: return complemento(*args_, **kwargs_) factory", "tpPlanRP def get_iniBeneficio(self): return self.iniBeneficio def set_iniBeneficio(self, iniBeneficio): self.iniBeneficio = iniBeneficio def get_altBeneficio(self):", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='verProc', pretty_print=pretty_print) outfile.write('</%s%s>%s' %", "__init__(self, indRetif=None, nrRecibo=None, tpAmb=None, procEmi=None, verProc=None): self.original_tagname_ = None self.indRetif = indRetif self.nrRecibo", "= tpAmb def get_procEmi(self): return self.procEmi def set_procEmi(self, procEmi): self.procEmi = procEmi def", "type classes # # Calls to the methods in these classes are generated", "**kwargs_) if tpLograd.subclass: return tpLograd.subclass(*args_, **kwargs_) else: return tpLograd(*args_, **kwargs_) factory = staticmethod(factory)", "def get_Signature(self): return self.Signature def set_Signature(self, Signature): self.Signature = Signature def hasContent_(self): if", "evtCdBenPrRP(*args_, **kwargs_) factory = staticmethod(factory) def get_ideEvento(self): return self.ideEvento def set_ideEvento(self, ideEvento): self.ideEvento", "def __init__(self, indRetif=None, nrRecibo=None, tpAmb=None, procEmi=None, verProc=None): 
self.original_tagname_ = None self.indRetif = indRetif", "**kwargs_) if dadosNasc.subclass: return dadosNasc.subclass(*args_, **kwargs_) else: return dadosNasc(*args_, **kwargs_) factory = staticmethod(factory)", "(float(input_data.microsecond) / 1000000))[2:], ) if input_data.tzinfo is not None: tzoff = input_data.tzinfo.utcoffset(input_data) if", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('fimBeneficio') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "MixedContainer: # Constants for category: CategoryNone = 0 CategoryText = 1 CategorySimple =", "self.name, self.value, self.name)) elif self.content_type == MixedContainer.TypeInteger or \\ self.content_type == MixedContainer.TypeBoolean: outfile.write('<%s>%d</%s>'", "self.nmBenefic is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmBenefic>%s</%snmBenefic>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmBenefic), input_name='nmBenefic')), namespace_,", "\"\"\"Evento de cadastro de benefícios previdenciários de Regimes Próprios\"\"\" subclass = None superclass", "# pat is a list of lists of strings/patterns. We should: # -", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='uf') if self.hasContent_():", "return subclass(*args_, **kwargs_) if tpAmb.subclass: return tpAmb.subclass(*args_, **kwargs_) else: return tpAmb(*args_, **kwargs_) factory", "benefícios.\"\"\" subclass = None superclass = None def __init__(self, tpBenef=None, nrBenefic=None, dtFimBenef=None, mtvFim=None):", "namespace_='', name_='cpfInst', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "use of this # table. # A sample table is: # # #", "to use a # specific subclass module. 
CurrentSubclassModule_ = None # # Support/utility", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpInsc') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TEmprPJ'): pass def exportChildren(self, outfile, level, namespace_='',", "is not None: value = attrs.get('{%s}%s' % (namespace, name, )) return value class", "# end class TDadosBeneficio class dtIniBenef(GeneratedsSuper): subclass = None superclass = None def", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='ideBenef'): pass def", "pass def exportChildren(self, outfile, level, namespace_='', name_='TEnderecoBrasil', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ =", "level, namespace_, name_='endereco', pretty_print=pretty_print) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "etree_.XMLParser() doc = etree_.parse(infile, parser=parser, **kwargs) return doc # # Namespace prefix definition", "is not None and 'Id' not in already_processed: already_processed.add('Id') outfile.write(' Id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.Id),", "self.codMunic = codMunic self.uf = uf def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not", "None: showIndent(outfile, level, pretty_print) outfile.write('<%snrLograd>%s</%snrLograd>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrLograd), input_name='nrLograd')), namespace_, eol_)) if self.complemento", "is not None: return subclass(*args_, **kwargs_) if nrInsc.subclass: return nrInsc.subclass(*args_, **kwargs_) else: return", "namespace prefix # definitions. 
# [This span of the module survived only as shredded fragments of the
#  generateDS.py-generated eSocial binding for the event evtCdBenPrRP.
#  Recoverable structure: a GeneratedsSuper base class (gds_format_* /
#  gds_validate_* helpers, date/time parsing, quote_xml/quote_attrib),
#  a MixedContainer helper, complex-type classes (eSocial, evtCdBenPrRP,
#  TIdeEveTrab, TEmprPJ, TEnderecoBrasil, TEnderecoExterior, TDadosBenef,
#  TDadosBeneficio, dadosNasc, endereco, ideBenef, infoBeneficio,
#  infoPenMorte, fimBeneficio), simple-element classes (tpAmb, nrRecibo,
#  indRetif, verProc, tpInsc, nrInsc, cpfBenef, nmBenefic, dtNascto,
#  codMunic, uf, paisNascto, paisNac, nmMae, nmPai, paisResid, dscLograd,
#  nrLograd, complemento, bairro, cep, nmCid, codPostal, tpBenef,
#  nrBenefic, dtIniBenef, vrBenef, dtFimBenef, mtvFim, tpPlanRP, idQuota,
#  cpfInst), each exposing factory() / export() / exportChildren() /
#  build() / buildChildren() methods, plus a GDSClassesMapping dict and
#  the parse()/parseEtree() entry points.]
nodeName_ == 'infoPenMorte': obj_ = infoPenMorte.factory() obj_.build(child_) self.infoPenMorte =", "export(self, outfile, level, namespace_='', name_='nmMae', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmMae') if imported_ns_def_ is", "2: return instring.encode(ExternalEncoding) else: return instring @staticmethod def convert_unicode(instring): if isinstance(instring, str): result", "def set_nmPai(self, nmPai): self.nmPai = nmPai def hasContent_(self): if ( self.dtNascto is not", "class idQuota(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "node, nodeName_, fromsubclass_=False): pass # end class nmBenefic class infoBeneficio(GeneratedsSuper): \"\"\"Informações relacionadas ao", "'' if self.dadosNasc is not None: self.dadosNasc.export(outfile, level, namespace_, name_='dadosNasc', pretty_print=pretty_print) if self.endereco", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='paisNascto'): pass def exportChildren(self, outfile,", "self.dtFimBenef = dtFimBenef def get_mtvFim(self): return self.mtvFim def set_mtvFim(self, mtvFim): self.mtvFim = mtvFim", "False def export(self, outfile, level, namespace_='', name_='tpAmb', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpAmb') if", "( self.evtCdBenPrRP is not None or self.Signature is not None ): return True", "namespace_, name_='ideEvento', pretty_print=pretty_print) if self.ideEmpregador is not None: self.ideEmpregador.export(outfile, level, namespace_, name_='ideEmpregador', pretty_print=pretty_print)", "**kwargs_) if TDadosBenef.subclass: return TDadosBenef.subclass(*args_, **kwargs_) else: return TDadosBenef(*args_, **kwargs_) factory = staticmethod(factory)", "input_name='bairro')), namespace_, eol_)) if self.nmCid is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmCid>%s</%snmCid>%s' %", "if 
CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, vrBenef) if subclass is", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='idQuota', pretty_print=pretty_print)", "self.exportChildren(outfile, level + 1, namespace_='', name_='TEmprPJ', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "= GenerateDSNamespaceDefs_.get('TEnderecoBrasil') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, idQuota) if subclass is not", "self.exportChildren(outfile, level + 1, namespace_='', name_='tpAmb', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "(namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.idQuota), input_name='idQuota')), namespace_, eol_)) if self.cpfInst is not None: showIndent(outfile, level, pretty_print)", "sys.stdout, 0, name_=rootTag, namespacedef_='') return rootObj def parseLiteral(inFileName, silence=False): parser = None doc", "= infoBeneficio.factory() obj_.build(child_) self.infoBeneficio = obj_ obj_.original_tagname_ = 'infoBeneficio' # end class evtCdBenPrRP", "not None: return subclass(*args_, **kwargs_) if TDadosBenef.subclass: return TDadosBenef.subclass(*args_, **kwargs_) else: return TDadosBenef(*args_,", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='procEmi'):", "outfile, level, namespace_='', name_='idQuota', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "bairro) if subclass is not None: return subclass(*args_, **kwargs_) if bairro.subclass: return bairro.subclass(*args_,", "or self.dtIniBenef is not None or self.vrBenef is not None or self.infoPenMorte 
is", "node, 'dscLograd') self.dscLograd = dscLograd_ elif nodeName_ == 'nrLograd': nrLograd_ = child_.text nrLograd_", "(namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.bairro), input_name='bairro')), namespace_, eol_)) if self.cep is not None: showIndent(outfile, level, pretty_print)", "name_='dadosNasc', pretty_print=pretty_print) if self.endereco is not None: self.endereco.export(outfile, level, namespace_, name_='endereco', pretty_print=pretty_print) def", "self.mtvFim is not None: showIndent(outfile, level, pretty_print) outfile.write('<%smtvFim>%s</%smtvFim>%s' % (namespace_, self.gds_format_integer(self.mtvFim, input_name='mtvFim'), namespace_,", "node, nodeName_, fromsubclass_=False): pass # end class paisNac class nmMae(GeneratedsSuper): subclass = None", "name_='idQuota', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('idQuota') if imported_ns_def_ is not None: namespacedef_ =", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrRecibo'): pass def exportChildren(self,", "attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'cpfBenef':", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='TEnderecoExterior', pretty_print=pretty_print)", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='mtvFim'): pass def exportChildren(self, outfile,", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%suf>%s</%suf>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.uf), input_name='uf')), namespace_, eol_))", "raise GDSParseError(msg) class MixedContainer: # Constants for category: CategoryNone = 0 CategoryText =", "level, pretty_print) outfile.write('<%sbairro>%s</%sbairro>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.bairro), 
input_name='bairro')), namespace_, eol_)) if self.nmCid is not", "def getContenttype(self, content_type): return self.content_type def getValue(self): return self.value def getName(self): return self.name", "namespace_='', name_='eSocial', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "def parse(inFileName, silence=False): parser = None doc = parsexml_(inFileName, parser) rootNode = doc.getroot()", "level, already_processed, namespace_, name_='TIdeEveTrab') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "eol_)) if self.nrInsc is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrInsc>%s</%snrInsc>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrInsc),", "nodeName_ == 'uf': uf_ = child_.text uf_ = self.gds_validate_string(uf_, node, 'uf') self.uf =", "gds_validate_double(self, input_data, node=None, input_name=''): return input_data def gds_format_double_list(self, input_data, input_name=''): return '%s' %", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='vrBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "dscLograd=None, nrLograd=None, complemento=None, bairro=None, nmCid=None, codPostal=None): self.original_tagname_ = None self.paisResid = paisResid self.dscLograd", "self.exportChildren(outfile, level + 1, namespace_='', name_='procEmi', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "# Enable Python to collect the space used by the DOM. 
doc =", "subelement.text = self.to_etree_simple() else: # category == MixedContainer.CategoryComplex self.value.to_etree(element) def to_etree_simple(self): if self.content_type", "pretty_print) outfile.write('<%stpInsc>%s</%stpInsc>%s' % (namespace_, self.gds_format_integer(self.tpInsc, input_name='tpInsc'), namespace_, eol_)) if self.nrInsc is not None:", "self.tpBenef def set_tpBenef(self, tpBenef): self.tpBenef = tpBenef def get_nrBenefic(self): return self.nrBenefic def set_nrBenefic(self,", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nmBenefic'):", "get_bairro(self): return self.bairro def set_bairro(self, bairro): self.bairro = bairro def get_nmCid(self): return self.nmCid", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmPai) if", "dscLograd self.nrLograd = nrLograd self.complemento = complemento self.bairro = bairro self.cep = cep", "**kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else: return", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TEnderecoBrasil') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpLograd') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "def exportChildren(self, outfile, level, namespace_='', name_='TDadosBeneficio', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n'", "self.iniBeneficio = iniBeneficio self.altBeneficio = altBeneficio self.fimBeneficio = fimBeneficio def factory(*args_, **kwargs_): if", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpPlanRP)", "namespace_='', name_='bairro', pretty_print=pretty_print) outfile.write('</%s%s>%s' % 
(namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, ))", "= None def __init__(self, idQuota=None, cpfInst=None): self.original_tagname_ = None self.idQuota = idQuota self.cpfInst", "return path Tag_strip_pattern_ = re_.compile(r'\\{.*\\}') def get_path_list_(self, node, path_list): if node is None:", "superclass = None def __init__(self, indRetif=None, nrRecibo=None, tpAmb=None, procEmi=None, verProc=None): self.original_tagname_ = None", "'complemento') self.complemento = complemento_ elif nodeName_ == 'bairro': bairro_ = child_.text bairro_ =", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, cpfInst) if subclass is not None:", "return True else: return False def export(self, outfile, level, namespace_='', name_='TEmprPJ', namespacedef_='', pretty_print=True):", "return subclass(*args_, **kwargs_) if TDadosBenef.subclass: return TDadosBenef.subclass(*args_, **kwargs_) else: return TDadosBenef(*args_, **kwargs_) factory", "# end class dtNascto class codMunic(GeneratedsSuper): subclass = None superclass = None def", "cep self.codMunic = codMunic self.uf = uf def factory(*args_, **kwargs_): if CurrentSubclassModule_ is", "or double: %s' % exp) fval_ = self.gds_validate_float(fval_, node, 'vrBenef') self.vrBenef = fval_", "return self.evtCdBenPrRP def set_evtCdBenPrRP(self, evtCdBenPrRP): self.evtCdBenPrRP = evtCdBenPrRP def get_Signature(self): return self.Signature def", "tpPlanRP(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "@classmethod def gds_parse_time(cls, input_data): tz = None if input_data[-1] == 'Z': tz =", "s2 += quote_xml_aux(s3) s2 += s1[mo.start():mo.end()] pos = mo.end() s3 = s1[pos:] s2", "return subclass(*args_, **kwargs_) if TEnderecoExterior.subclass: return TEnderecoExterior.subclass(*args_, **kwargs_) else: return TEnderecoExterior(*args_, **kwargs_) factory", "attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, 
fromsubclass_=False): if nodeName_ == 'idQuota':", "etree as etree_ except ImportError: from xml.etree import ElementTree as etree_ Validate_simpletypes_ =", "named GeneratedsNamespaceDefs. This Python dictionary # should map element type names (strings) to", "level, pretty_print) outfile.write('<%scep>%s</%scep>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cep), input_name='cep')), namespace_, eol_)) if self.codMunic is not", "def gds_validate_boolean(self, input_data, node=None, input_name=''): return input_data def gds_format_boolean_list(self, input_data, input_name=''): return '%s'", "if self.content_type == MixedContainer.TypeString: outfile.write('<%s>%s</%s>' % ( self.name, self.value, self.name)) elif self.content_type ==", "path_list): if node is None: return tag = GeneratedsSuper.Tag_strip_pattern_.sub('', node.tag) if tag: path_list.append(tag)", "export(self, outfile, level, namespace_='', name_='verProc', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('verProc') if imported_ns_def_ is", "cep(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, complemento) if subclass is", "2: from StringIO import StringIO as IOBuffer else: from io import BytesIO as", "self.exportSimple(outfile, level, name) else: # category == MixedContainer.CategoryComplex self.value.export( outfile, level, namespace, name,", "'mtvFim') self.mtvFim = ival_ # end class fimBeneficio class tpBenef(GeneratedsSuper): subclass = None", "None or self.bairro is not None or self.cep is not None or self.codMunic", "is not None or self.paisNascto is not None or self.paisNac is not None", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('paisNac') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "node, nodeName_, 
fromsubclass_=False): if nodeName_ == 'paisResid': paisResid_ = child_.text paisResid_ = self.gds_validate_string(paisResid_,", "name_='fimBeneficio', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "None ): return True else: return False def export(self, outfile, level, namespace_='', name_='TIdeEveTrab',", "return True else: return False def export(self, outfile, level, namespace_='', name_='complemento', namespacedef_='', pretty_print=True):", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmCid>%s</%snmCid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmCid), input_name='nmCid')), namespace_, eol_)) if", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='complemento', pretty_print=pretty_print) outfile.write('</%s%s>%s' %", "def gds_encode(instring): if sys.version_info.major == 2: return instring.encode(ExternalEncoding) else: return instring @staticmethod def", "**kwargs_) else: return infoPenMorte(*args_, **kwargs_) factory = staticmethod(factory) def get_idQuota(self): return self.idQuota def", "factory = staticmethod(factory) def get_idQuota(self): return self.idQuota def set_idQuota(self, idQuota): self.idQuota = idQuota", "(namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.uf), input_name='uf')), namespace_, eol_)) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "self.gds_encode(self.gds_format_string(quote_xml(self.paisNac), input_name='paisNac')), namespace_, eol_)) if self.nmMae is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmMae>%s</%snmMae>%s'", "ival_ = self.gds_validate_integer(ival_, node, 'indRetif') self.indRetif = ival_ elif nodeName_ == 'nrRecibo': nrRecibo_", "class cep(GeneratedsSuper): subclass = None superclass = None def __init__(self): 
self.original_tagname_ = None", "return self.mtvFim def set_mtvFim(self, mtvFim): self.mtvFim = mtvFim def hasContent_(self): if ( self.tpBenef", "level) outfile.write( 'model_.MixedContainer(%d, %d, \"%s\", \"%s\"),\\n' % ( self.category, self.content_type, self.name, self.value)) elif", "not None or self.bairro is not None or self.nmCid is not None or", "getSubclassFromModule_( CurrentSubclassModule_, cep) if subclass is not None: return subclass(*args_, **kwargs_) if cep.subclass:", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrBenefic>%s</%snrBenefic>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrBenefic), input_name='nrBenefic')), namespace_, eol_))", "= datetime_.datetime.strptime(input_data, '%H:%M:%S') dt = dt.replace(tzinfo=tz) return dt.time() def gds_str_lower(self, instring): return instring.lower()", "= child_.text dval_ = self.gds_parse_date(sval_) self.dtNascto = dval_ elif nodeName_ == 'codMunic': sval_", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpPlanRP'): pass def", "self.codPostal = codPostal_ # end class TEnderecoExterior class paisResid(GeneratedsSuper): subclass = None superclass", "'paisNascto') self.paisNascto = paisNascto_ elif nodeName_ == 'paisNac': paisNac_ = child_.text paisNac_ =", "indRetif self.nrRecibo = nrRecibo self.tpAmb = tpAmb self.procEmi = procEmi self.verProc = verProc", "subclass(*args_, **kwargs_) if mtvFim.subclass: return mtvFim.subclass(*args_, **kwargs_) else: return mtvFim(*args_, **kwargs_) factory =", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, infoBeneficio) if subclass is not None: return", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='infoBeneficio') if self.hasContent_(): outfile.write('>%s' % (eol_,", "self.codPostal is not None ): return True else: return False def export(self, outfile,", 
"(total_seconds - (hours * 3600)) // 60 _svalue += '{0:02d}:{1:02d}'.format(hours, minutes) return _svalue", "getCategory(self): return self.category def getContenttype(self, content_type): return self.content_type def getValue(self): return self.value def", "name_='nmBenefic'): pass def exportChildren(self, outfile, level, namespace_='', name_='nmBenefic', fromsubclass_=False, pretty_print=True): pass def build(self,", "**kwargs_) factory = staticmethod(factory) def get_ideEvento(self): return self.ideEvento def set_ideEvento(self, ideEvento): self.ideEvento =", "(namespace_, self.gds_format_integer(self.indRetif, input_name='indRetif'), namespace_, eol_)) if self.nrRecibo is not None: showIndent(outfile, level, pretty_print)", "= '\\n' else: eol_ = '' if self.dadosNasc is not None: self.dadosNasc.export(outfile, level,", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrInsc') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "1, namespace_='', name_='nmCid', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_,", "export(self, outfile, level, namespace_='', name_='uf', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('uf') if imported_ns_def_ is", "== 'ideEvento': obj_ = TIdeEveTrab.factory() obj_.build(child_) self.ideEvento = obj_ obj_.original_tagname_ = 'ideEvento' elif", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'cpfBenef': cpfBenef_", "self.complemento = complemento self.bairro = bairro self.nmCid = nmCid self.codPostal = codPostal def", "= '\\n' else: eol_ = '' if self.indRetif is not None: showIndent(outfile, level,", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='paisResid', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "exp) ival_ = self.gds_validate_integer(ival_, node, 'tpAmb') self.tpAmb 
= ival_ elif nodeName_ == 'procEmi':", "def export(self, outfile, level, namespace_='', name_='nmCid', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nmCid') if imported_ns_def_", "subclass is not None: return subclass(*args_, **kwargs_) if nrRecibo.subclass: return nrRecibo.subclass(*args_, **kwargs_) else:", "iniBeneficio self.altBeneficio = altBeneficio self.fimBeneficio = fimBeneficio def factory(*args_, **kwargs_): if CurrentSubclassModule_ is", "# # GenerateDSNamespaceDefs = { # \"ElementtypeA\": \"http://www.xxx.com/namespaceA\", # \"ElementtypeB\": \"http://www.xxx.com/namespaceB\", # }", "= CDATA_pattern_.finditer(s1) for mo in matchobjects: s3 = s1[pos:mo.start()] s2 += quote_xml_aux(s3) s2", "is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sbairro>%s</%sbairro>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.bairro), input_name='bairro')), namespace_, eol_))", "args = '' ## ipshell = IPShellEmbed(args, ## banner = 'Dropping into IPython',", "exportChildren(self, outfile, level, namespace_='', name_='dadosNasc', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else:", "nodeName_ == 'verProc': verProc_ = child_.text verProc_ = self.gds_validate_string(verProc_, node, 'verProc') self.verProc =", "= None self.brasil = brasil self.exterior = exterior def factory(*args_, **kwargs_): if CurrentSubclassModule_", "input_name=''): return '%s' % ' '.join(input_data) def gds_validate_double_list( self, input_data, node=None, input_name=''): values", "fimBeneficio.subclass: return fimBeneficio.subclass(*args_, **kwargs_) else: return fimBeneficio(*args_, **kwargs_) factory = staticmethod(factory) def get_tpBenef(self):", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='verProc') if self.hasContent_():", "= get_root_tag(rootNode) if rootClass is None: rootTag = 'eSocial' rootClass = eSocial 
rootObj", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='fimBeneficio') if self.hasContent_():", "def exportChildren(self, outfile, level, namespace_='', name_='dscLograd', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "subclass is not None: return subclass(*args_, **kwargs_) if nmBenefic.subclass: return nmBenefic.subclass(*args_, **kwargs_) else:", "node, 'cpfBenef') self.cpfBenef = cpfBenef_ elif nodeName_ == 'nmBenefic': nmBenefic_ = child_.text nmBenefic_", "fromsubclass_=False): pass # end class codMunic class uf(GeneratedsSuper): subclass = None superclass =", "complemento) if subclass is not None: return subclass(*args_, **kwargs_) if complemento.subclass: return complemento.subclass(*args_,", "(namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisNac), input_name='paisNac')), namespace_, eol_)) if self.nmMae is not None: showIndent(outfile, level, pretty_print)", "def get_child_attrs(self): return self.child_attrs def set_choice(self, choice): self.choice = choice def get_choice(self): return", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dtFimBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "self.brasil is not None or self.exterior is not None ): return True else:", "if self.dscLograd is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sdscLograd>%s</%sdscLograd>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.dscLograd), input_name='dscLograd')),", "pass # end class tpLograd class dscLograd(GeneratedsSuper): subclass = None superclass = None", "k) for k, v in mapping.iteritems())) @staticmethod def gds_encode(instring): if sys.version_info.major == 2:", "= dtFimBenef self.dtFimBenef = initvalue_ self.mtvFim = mtvFim def factory(*args_, **kwargs_): if CurrentSubclassModule_", "level + 1, namespace_='', name_='codMunic', 
pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "return TDadosBenef.subclass(*args_, **kwargs_) else: return TDadosBenef(*args_, **kwargs_) factory = staticmethod(factory) def get_dadosNasc(self): return", "% node.nsmap['xsi']) if classname is not None: names = classname.split(':') if len(names) ==", "level, already_processed, namespace_, name_='complemento') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level +", "subclass is not None: return subclass(*args_, **kwargs_) if tpPlanRP.subclass: return tpPlanRP.subclass(*args_, **kwargs_) else:", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class bairro class", "= content_type self.name = name self.value = value def getCategory(self): return self.category def", "= self.gds_validate_float(fval_, node, 'vrBenef') self.vrBenef = fval_ elif nodeName_ == 'infoPenMorte': obj_ =", "== 'brasil': obj_ = TEnderecoBrasil.factory() obj_.build(child_) self.brasil = obj_ obj_.original_tagname_ = 'brasil' elif", "GenerateDSNamespaceDefs_.get('nrLograd') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "self.gds_validate_string(Signature_, node, 'Signature') self.Signature = Signature_ # end class eSocial class evtCdBenPrRP(GeneratedsSuper): \"\"\"Evento", "tzoff *= -1 tz = GeneratedsSuper._FixedOffsetTZ( tzoff, results.group(0)) input_data = input_data[:-6] if len(input_data.split('.'))", "if codPostal.subclass: return codPostal.subclass(*args_, **kwargs_) else: return codPostal(*args_, **kwargs_) factory = staticmethod(factory) def", "self.idQuota = idQuota self.cpfInst = cpfInst def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not", "s3 = s1[pos:] s2 += quote_xml_aux(s3) return s2 def quote_xml_aux(inStr): s1 = inStr.replace('&',", "paisNac_ = child_.text paisNac_ = self.gds_validate_string(paisNac_, node, 'paisNac') self.paisNac = 
# -*- coding: utf-8 -*-

#
# Generated Tue Oct 10 00:42:21 2017 by generateDS.py version 2.28b.
#
# Python 2.7.12 (default, Nov 19 2016)
#

import sys
import re as re_
import datetime as datetime_

try:
    from lxml import etree as etree_
except ImportError:
    from xml.etree import ElementTree as etree_

Validate_simpletypes_ = True
if sys.version_info.major == 2:
    BaseStrType_ = basestring
else:
    BaseStrType_ = str

Tag_pattern_ = re_.compile(r'{(.*)}(.*)')
CDATA_pattern_ = re_.compile(r"<!\[CDATA\[.*?\]\]>", re_.DOTALL)

# Change this to redirect the generated superclass module to use a
# specific subclass module (None means "do not look for subclasses").
CurrentSubclassModule_ = None

# Namespace prefix definitions can be supplied per element through an
# importable module named generatedsnamespaces.  A sample table is:
#
#     # File: generatedsnamespaces.py
#
#     GenerateDSNamespaceDefs = {
#         "ElementtypeA": "http://www.xxx.com/namespaceA",
#         "ElementtypeB": "http://www.xxx.com/namespaceB",
#     }
#
try:
    from generatedsnamespaces import GenerateDSNamespaceDefs as GenerateDSNamespaceDefs_
except ImportError:
    GenerateDSNamespaceDefs_ = {}


def getSubclassFromModule_(module, class_):
    '''Get the subclass of a class from a specific module.'''
    name = class_.__name__ + 'Sub'
    if hasattr(module, name):
        return getattr(module, name)
    else:
        return None


def parsexml_(infile, parser=None, **kwargs):
    if parser is None:
        # Use the lxml ElementTree compatible parser so that, e.g.,
        # we ignore comments.
        try:
            parser = etree_.ETCompatXMLParser()
        except AttributeError:
            # fallback to xml.etree
            parser = etree_.XMLParser()
    doc = etree_.parse(infile, parser=parser, **kwargs)
    return doc


def showIndent(outfile, level, pretty_print=True):
    if pretty_print:
        for idx in range(level):
            outfile.write('    ')


# The remaining standard generateDS 2.28b support code follows unchanged:
# quote_xml()/quote_attrib() ("Escape markup chars, but do not modify CDATA
# sections."), raise_parse_error(), find_attr_value_(), _cast(), the
# MixedContainer helper, and the GeneratedsSuper base class with its
# gds_format_* / gds_validate_* / gds_parse_date / gds_parse_time helpers
# and the _FixedOffsetTZ timezone class.


#
# Data representation classes.
#


class infoPenMorte(GeneratedsSuper):
    """Informações relativas a pensão por morte"""
    subclass = None
    superclass = None
    def __init__(self, idQuota=None, cpfInst=None):
        self.original_tagname_ = None
        self.idQuota = idQuota
        self.cpfInst = cpfInst
    def factory(*args_, **kwargs_):
        if CurrentSubclassModule_ is not None:
            subclass = getSubclassFromModule_(
                CurrentSubclassModule_, infoPenMorte)
            if subclass is not None:
                return subclass(*args_, **kwargs_)
        if infoPenMorte.subclass:
            return infoPenMorte.subclass(*args_, **kwargs_)
        else:
            return infoPenMorte(*args_, **kwargs_)
    factory = staticmethod(factory)
    def get_idQuota(self): return self.idQuota
    def set_idQuota(self, idQuota): self.idQuota = idQuota
    def get_cpfInst(self): return self.cpfInst
    def set_cpfInst(self, cpfInst): self.cpfInst = cpfInst
    def hasContent_(self):
        if (
            self.idQuota is not None or
            self.cpfInst is not None
        ):
            return True
        else:
            return False
    def export(self, outfile, level, namespace_='', name_='infoPenMorte', namespacedef_='', pretty_print=True):
        imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoPenMorte')
        if imported_ns_def_ is not None:
            namespacedef_ = imported_ns_def_
        if pretty_print:
            eol_ = '\n'
        else:
            eol_ = ''
        showIndent(outfile, level, pretty_print)
        outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
        already_processed = set()
        self.exportAttributes(outfile, level, already_processed, namespace_, name_='infoPenMorte')
        if self.hasContent_():
            outfile.write('>%s' % (eol_, ))
            self.exportChildren(outfile, level + 1, namespace_='', name_='infoPenMorte', pretty_print=pretty_print)
            showIndent(outfile, level, pretty_print)
            outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
        else:
            outfile.write('/>%s' % (eol_, ))
    def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='infoPenMorte'):
        pass
    def exportChildren(self, outfile, level, namespace_='', name_='infoPenMorte', fromsubclass_=False, pretty_print=True):
        if pretty_print:
            eol_ = '\n'
        else:
            eol_ = ''
        if self.idQuota is not None:
            showIndent(outfile, level, pretty_print)
            outfile.write('<%sidQuota>%s</%sidQuota>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.idQuota), input_name='idQuota')), namespace_, eol_))
        if self.cpfInst is not None:
            showIndent(outfile, level, pretty_print)
            outfile.write('<%scpfInst>%s</%scpfInst>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfInst), input_name='cpfInst')), namespace_, eol_))
    def build(self, node):
        already_processed = set()
        self.buildAttributes(node, node.attrib, already_processed)
        for child in node:
            nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
            self.buildChildren(child, node, nodeName_)
        return self
    def buildAttributes(self, node, attrs, already_processed):
        pass
    def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
        if nodeName_ == 'idQuota':
            idQuota_ = child_.text
            idQuota_ = self.gds_validate_string(idQuota_, node, 'idQuota')
            self.idQuota = idQuota_
        elif nodeName_ == 'cpfInst':
            cpfInst_ = child_.text
            cpfInst_ = self.gds_validate_string(cpfInst_, node, 'cpfInst')
            self.cpfInst = cpfInst_
# end class infoPenMorte


# One generated class of the same shape (factory / get_* / set_* /
# hasContent_ / export* / build*) exists per schema element:
#
#   eSocial(evtCdBenPrRP=None, Signature=None)
#       root element; Signature is exported with the "ds:" prefix
#   evtCdBenPrRP(Id=None, ideEvento=None, ideEmpregador=None,
#                ideBenef=None, infoBeneficio=None)
#       "Evento de cadastro de benefícios previdenciários de Regimes
#       Próprios" (event registering social-security benefits of
#       government pension regimes); Id is an XML attribute
#   TIdeEveTrab(indRetif=None, nrRecibo=None, tpAmb=None, procEmi=None,
#               verProc=None)
#       type of ideEvento
#   TEmprPJ(tpInsc=None, nrInsc=None)
#       "Informações do Empregador PJ" (corporate-employer information)
#   ideBenef(cpfBenef=None, nmBenefic=None, dadosBenef=None)
#       "Identificação do beneficiário" (beneficiary identification)
#   TDadosBenef(dadosNasc=None, endereco=None)
#       "Dados de beneficiário" (beneficiary data)
#   dadosNasc(dtNascto=None, codMunic=None, uf=None, paisNascto=None,
#             paisNac=None, nmMae=None, nmPai=None)
#       "Informações de nascimento do beneficiário" (birth information)
#   endereco(brasil=None, exterior=None)
#       "Grupo de informações do endereço do Trabalhador" (address group)
#   TEnderecoBrasil(tpLograd=None, dscLograd=None, nrLograd=None,
#                   complemento=None, bairro=None, cep=None,
#                   codMunic=None, uf=None)
#   TEnderecoExterior(paisResid=None, dscLograd=None, nrLograd=None,
#                     complemento=None, bairro=None, nmCid=None,
#                     codPostal=None)
#   infoBeneficio(tpPlanRP=None, iniBeneficio=None, altBeneficio=None,
#                 fimBeneficio=None)
#   TDadosBeneficio(tpBenef=None, nrBenefic=None, dtIniBenef=None,
#                   vrBenef=None, infoPenMorte=None)
#       "Dados do benefício previdenciário" (benefit data); used for
#       iniBeneficio and altBeneficio
#   fimBeneficio(tpBenef=None, nrBenefic=None, dtFimBenef=None,
#                mtvFim=None)
#
# plus empty leaf-element classes (tpBenef, nrBenefic, dtIniBenef,
# dtFimBenef, vrBenef, mtvFim, uf, paisNascto, paisNac, nmMae, nmPai,
# tpLograd, dscLograd, nrLograd, complemento, bairro, cep, codMunic,
# nmCid, codPostal, indRetif, nrRecibo, tpAmb, procEmi, verProc, tpInsc,
# nrInsc, paisResid, idQuota, cpfInst, cpfBenef, nmBenefic, dtNascto).


GDSClassesMapping = {
    'eSocial': eSocial,
}


USAGE_TEXT = """
Usage: python <Parser>.py [ -s ] <in_xml_file>
"""


def usage():
    print(USAGE_TEXT)
    sys.exit(1)


def get_root_tag(node):
    tag = Tag_pattern_.match(node.tag).groups()[-1]
    rootClass = GDSClassesMapping.get(tag)
    if rootClass is None:
        rootClass = globals().get(tag)
    return tag, rootClass


def parse(inFileName, silence=False):
    parser = None
    doc = parsexml_(inFileName, parser)
    rootNode = doc.getroot()
    rootTag, rootClass = get_root_tag(rootNode)
    if rootClass is None:
        rootTag = 'eSocial'
        rootClass = eSocial
    rootObj = rootClass.factory()
    rootObj.build(rootNode)
    # Enable Python to collect the space used by the DOM.
    doc = None
    if not silence:
        sys.stdout.write('<?xml version="1.0" ?>\n')
        rootObj.export(
            sys.stdout, 0, name_=rootTag,
            namespacedef_='')
    return rootObj


# The parseEtree(), parseString() and parseLiteral() variants are
# generated alongside parse() with the same structure.


# If you want to debug, you can uncomment and use the following.
# IPython is available from http://ipython.scipy.org/.
##from IPython.Shell import IPShellEmbed
##args = ''
##ipshell = IPShellEmbed(args,
##    banner = 'Dropping into IPython',
##    exit_msg = 'Leaving Interpreter, back to program.')


def main():
    args = sys.argv[1:]
    if len(args) == 1:
        parse(args[0])
    else:
        usage()


if __name__ == '__main__':
    #import pdb; pdb.set_trace()
    main()


__all__ = [
    "TDadosBenef",
    "TDadosBeneficio",
    "TEmprPJ",
    "TEnderecoBrasil",
    "TEnderecoExterior",
    "TIdeEveTrab",
    "bairro",
    "cep",
    "codMunic",
    "codPostal",
    "complemento",
    "cpfBenef",
    "cpfInst",
    "dadosNasc",
    "dscLograd",
    "dtFimBenef",
    "dtIniBenef",
    "dtNascto",
    "eSocial",
    "endereco",
    "evtCdBenPrRP",
    "fimBeneficio",
    "idQuota",
    "ideBenef",
    "indRetif",
    "infoBeneficio",
    "infoPenMorte",
    "mtvFim",
    "nmBenefic",
    "nmCid",
    "nmMae",
    "nmPai",
    "nrBenefic",
    "nrInsc",
    "nrLograd",
    "nrRecibo",
    "paisNac",
    "paisNascto",
    "paisResid",
    "procEmi",
    "tpAmb",
    "tpBenef",
    "tpInsc",
    "tpLograd",
    "uf",
    "verProc",
]
imported_ns_def_ = GenerateDSNamespaceDefs_.get('ideBenef') if", "None self.brasil = brasil self.exterior = exterior def factory(*args_, **kwargs_): if CurrentSubclassModule_ is", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='bairro') if self.hasContent_(): outfile.write('>%s'", "None def __init__(self, idQuota=None, cpfInst=None): self.original_tagname_ = None self.idQuota = idQuota self.cpfInst =", "container def get_container(self): return self.container def set_child_attrs(self, child_attrs): self.child_attrs = child_attrs def get_child_attrs(self):", "return \"'%s'\" % s1 else: return \"'''%s'''\" % s1 else: if s1.find('\"') !=", "= GenerateDSNamespaceDefs_.get('infoBeneficio') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'ideEvento': obj_ = TIdeEveTrab.factory()", "the outer elements # - OR the inner elements found1 = True for", "== 'nmPai': nmPai_ = child_.text nmPai_ = self.gds_validate_string(nmPai_, node, 'nmPai') self.nmPai = nmPai_", "empty content as empty lines. 
if self.value.strip(): outfile.write(self.value) elif self.category == MixedContainer.CategorySimple: self.exportSimple(outfile,", "name_='dscLograd', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dscLograd') if imported_ns_def_ is not None: namespacedef_ =", "self.exportChildren(outfile, level + 1, namespace_='', name_='TEnderecoExterior', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "fromsubclass_=False): pass # end class paisResid class nmCid(GeneratedsSuper): subclass = None superclass =", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='codPostal', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "is not None: self.brasil.export(outfile, level, namespace_, name_='brasil', pretty_print=pretty_print) if self.exterior is not None:", "level, already_processed, namespace_='', name_='evtCdBenPrRP'): if self.Id is not None and 'Id' not in", "get_bairro(self): return self.bairro def set_bairro(self, bairro): self.bairro = bairro def get_cep(self): return self.cep", "input_data, '%Y-%m-%dT%H:%M:%S') dt = dt.replace(tzinfo=tz) return dt def gds_validate_date(self, input_data, node=None, input_name=''): return", "if subclass is not None: return subclass(*args_, **kwargs_) if TIdeEveTrab.subclass: return TIdeEveTrab.subclass(*args_, **kwargs_)", "complemento.subclass: return complemento.subclass(*args_, **kwargs_) else: return complemento(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "return self.uf def set_uf(self, uf): self.uf = uf def get_paisNascto(self): return self.paisNascto def", "self def buildAttributes(self, node, attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):", "self.indRetif is not None or self.nrRecibo is not None or self.tpAmb is not", "already_processed, namespace_, name_='mtvFim') if self.hasContent_(): 
outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", "level, name, namespace, pretty_print=True): if self.category == MixedContainer.CategoryText: # Prevent exporting empty content", "dscLograd(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", "name_='fimBeneficio', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if", "The module generatedsnamespaces, if it is importable, must contain # a dictionary named", "exportChildren(self, outfile, level, namespace_='', name_='indRetif', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "name_='eSocial', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "= '' if self.ideEvento is not None: self.ideEvento.export(outfile, level, namespace_, name_='ideEvento', pretty_print=pretty_print) if", "value def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'ideEvento': obj_ =", "True else: return False def export(self, outfile, level, namespace_='', name_='tpBenef', namespacedef_='', pretty_print=True): imported_ns_def_", "= choice def get_choice(self): return self.choice def set_optional(self, optional): self.optional = optional def", "already_processed, namespace_, name_='nrBenefic') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", "(strings) to XML schema namespace prefix # definitions. 
The export method for any", "Id def hasContent_(self): if ( self.ideEvento is not None or self.ideEmpregador is not", "node, 'paisResid') self.paisResid = paisResid_ elif nodeName_ == 'dscLograd': dscLograd_ = child_.text dscLograd_", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpBenef') if self.hasContent_():", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TIdeEveTrab')", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TDadosBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "rootElement, mapping, reverse_mapping def parseString(inString, silence=False): if sys.version_info.major == 2: from StringIO import", "not None or self.nmCid is not None or self.codPostal is not None ):", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%sSignature>%s</%sSignature>%s' % ('ds:', self.gds_encode(self.gds_format_string(quote_xml(self.Signature), input_name='Signature')), 'ds:', eol_)) def", "StringIO import StringIO as IOBuffer else: from io import BytesIO as IOBuffer parser", "level, namespace_, name_='evtCdBenPrRP', pretty_print=pretty_print) if self.Signature is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sSignature>%s</%sSignature>%s'", "level, namespace_='', name_='procEmi', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('procEmi') if imported_ns_def_ is not None:", "dtNascto def get_codMunic(self): return self.codMunic def set_codMunic(self, codMunic): self.codMunic = codMunic def get_uf(self):", "def set_tpPlanRP(self, tpPlanRP): self.tpPlanRP = tpPlanRP def get_iniBeneficio(self): return self.iniBeneficio def set_iniBeneficio(self, iniBeneficio):", "node.nsmap['xsi']) if classname is not None: names = classname.split(':') if len(names) == 2:", 
"get_codPostal(self): return self.codPostal def set_codPostal(self, codPostal): self.codPostal = codPostal def hasContent_(self): if (", "GenerateDSNamespaceDefs_.get('uf') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "namespace_='', name_='tpAmb', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "False def export(self, outfile, level, namespace_='', name_='cpfBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('cpfBenef') if", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, procEmi) if subclass", "% input_data).rstrip('0') def gds_validate_float(self, input_data, node=None, input_name=''): return input_data def gds_format_float_list(self, input_data, input_name=''):", "node=None, input_name=''): values = input_data.split() for value in values: try: int(value) except (TypeError,", "**kwargs_) else: return TEnderecoExterior(*args_, **kwargs_) factory = staticmethod(factory) def get_paisResid(self): return self.paisResid def", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='infoBeneficio', pretty_print=pretty_print) showIndent(outfile,", "infoBeneficio def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_,", "return indRetif(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='bairro', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "subclass = getSubclassFromModule_( CurrentSubclassModule_, fimBeneficio) if subclass is not None: return subclass(*args_, **kwargs_)", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, 
level, already_processed, namespace_, name_='infoBeneficio')", "MixedContainer.TypeBoolean: outfile.write('<%s>%d</%s>' % ( self.name, self.value, self.name)) elif self.content_type == MixedContainer.TypeFloat or \\", "input_name=''): _svalue = '%04d-%02d-%02d' % ( input_data.year, input_data.month, input_data.day, ) try: if input_data.tzinfo", "outfile, level, namespace_='', name_='nmMae', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'evtCdBenPrRP': obj_ = evtCdBenPrRP.factory() obj_.build(child_) self.evtCdBenPrRP", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmPai') if self.hasContent_(): outfile.write('>%s' % (eol_,", "self.altBeneficio def set_altBeneficio(self, altBeneficio): self.altBeneficio = altBeneficio def get_fimBeneficio(self): return self.fimBeneficio def set_fimBeneficio(self,", "showIndent(outfile, level, pretty_print) outfile.write('<%snmBenefic>%s</%snmBenefic>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmBenefic), input_name='nmBenefic')), namespace_, eol_)) if self.dadosBenef is", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='uf')", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmPai) if subclass is not None: return", "'nrBenefic') self.nrBenefic = nrBenefic_ elif nodeName_ == 'dtIniBenef': sval_ = child_.text dval_ =", "== MixedContainer.TypeBoolean): text = '%d' % self.value elif (self.content_type == MixedContainer.TypeFloat or self.content_type", "subclass = getSubclassFromModule_( CurrentSubclassModule_, tpBenef) if subclass is not None: return subclass(*args_, **kwargs_)", "False def export(self, outfile, level, namespace_='', name_='paisNac', namespacedef_='', pretty_print=True): imported_ns_def_ = 
GenerateDSNamespaceDefs_.get('paisNac') if", "% (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.idQuota), input_name='idQuota')), namespace_, eol_)) if self.cpfInst is not None: showIndent(outfile, level,", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TEnderecoBrasil'): pass def exportChildren(self, outfile, level,", "-1: s1 = s1.replace('\"', '\\\\\"') if s1.find('\\n') == -1: return '\"%s\"' % s1", "level, already_processed, namespace_='', name_='TEmprPJ'): pass def exportChildren(self, outfile, level, namespace_='', name_='TEmprPJ', fromsubclass_=False, pretty_print=True):", "len(attr_parts) == 1: value = attrs.get(attr_name) elif len(attr_parts) == 2: prefix, name =", "1) showIndent(outfile, level) outfile.write(')\\n') class MemberSpec_(object): def __init__(self, name='', data_type='', container=0, optional=0, child_attrs=None,", "self.Id = _cast(None, Id) self.ideEvento = ideEvento self.ideEmpregador = ideEmpregador self.ideBenef = ideBenef", "): return True else: return False def export(self, outfile, level, namespace_='', name_='nrBenefic', namespacedef_='',", "self.exportChildren(outfile, level + 1, namespace_='', name_='vrBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "gds_validate_boolean_list( self, input_data, node=None, input_name=''): values = input_data.split() for value in values: if", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='bairro') if self.hasContent_():", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='paisNascto'): pass def exportChildren(self, outfile, level, namespace_='',", "class_): '''Get the subclass of a class from a specific module.''' name =", "rootObj def parseEtree(inFileName, silence=False): parser = None doc = parsexml_(inFileName, parser) rootNode =", "dtIniBenef self.dtIniBenef = initvalue_ self.vrBenef = 
vrBenef self.infoPenMorte = infoPenMorte def factory(*args_, **kwargs_):", "= re_.compile(r\"[\\n\\r\\s]+\") Namespace_extract_pat_ = re_.compile(r'{(.*)}(.*)') CDATA_pattern_ = re_.compile(r\"<!\\[CDATA\\[.*?\\]\\]>\", re_.DOTALL) # Change this to", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, complemento) if subclass is not None: return subclass(*args_,", "class GeneratedsSuper(object): tzoff_pattern = re_.compile(r'(\\+|-)((0\\d|1[0-3]):[0-5]\\d|14:00)$') class _FixedOffsetTZ(datetime_.tzinfo): def __init__(self, offset, name): self.__offset =", "Globals # ExternalEncoding = 'ascii' Tag_pattern_ = re_.compile(r'({.*})?(.*)') String_cleanup_pat_ = re_.compile(r\"[\\n\\r\\s]+\") Namespace_extract_pat_ =", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='dtFimBenef') if self.hasContent_(): outfile.write('>%s'", "= basestring else: BaseStrType_ = str def parsexml_(infile, parser=None, **kwargs): if parser is", "try: ival_ = int(sval_) except (TypeError, ValueError) as exp: raise_parse_error(child_, 'requires integer: %s'", "namespace_='', name_='tpLograd'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpLograd', fromsubclass_=False, pretty_print=True): pass def", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'paisResid': paisResid_", "indRetif def get_nrRecibo(self): return self.nrRecibo def set_nrRecibo(self, nrRecibo): self.nrRecibo = nrRecibo def get_tpAmb(self):", "+ 'Sub' if hasattr(module, name): return getattr(module, name) else: return None # #", "return infoPenMorte(*args_, **kwargs_) factory = staticmethod(factory) def get_idQuota(self): return self.idQuota def set_idQuota(self, idQuota):", "def set_cpfBenef(self, cpfBenef): self.cpfBenef = cpfBenef def get_nmBenefic(self): return self.nmBenefic def set_nmBenefic(self, nmBenefic):", "_svalue @classmethod def gds_parse_datetime(cls, 
input_data): tz = None if input_data[-1] == 'Z': tz", "**kwargs_) else: return verProc(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ):", "input_data def gds_format_integer(self, input_data, input_name=''): return '%d' % input_data def gds_validate_integer(self, input_data, node=None,", "showIndent(outfile, level, pretty_print) outfile.write('<%suf>%s</%suf>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.uf), input_name='uf')), namespace_, eol_)) def build(self, node):", "outfile.write(')\\n') class MemberSpec_(object): def __init__(self, name='', data_type='', container=0, optional=0, child_attrs=None, choice=None): self.name =", "== 'iniBeneficio': obj_ = TDadosBeneficio.factory() obj_.build(child_) self.iniBeneficio = obj_ obj_.original_tagname_ = 'iniBeneficio' elif", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='idQuota'): pass", "base64.b64encode(self.value), self.name)) def to_etree(self, element): if self.category == MixedContainer.CategoryText: # Prevent exporting empty", "self.nrRecibo def set_nrRecibo(self, nrRecibo): self.nrRecibo = nrRecibo def get_tpAmb(self): return self.tpAmb def set_tpAmb(self,", "is not None: self.iniBeneficio.export(outfile, level, namespace_, name_='iniBeneficio', pretty_print=pretty_print) if self.altBeneficio is not None:", "set_cpfInst(self, cpfInst): self.cpfInst = cpfInst def hasContent_(self): if ( self.idQuota is not None", "nodeName_ == 'complemento': complemento_ = child_.text complemento_ = self.gds_validate_string(complemento_, node, 'complemento') self.complemento =", "level, namespace_='', name_='tpAmb', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "def quote_xml(inStr): \"Escape markup chars, but do not modify CDATA sections.\" if not", "outfile, level, already_processed, namespace_='', 
name_='paisResid'): pass def exportChildren(self, outfile, level, namespace_='', name_='paisResid', fromsubclass_=False,", "if len(element) > 0: if element[-1].tail is None: element[-1].tail = self.value else: element[-1].tail", "if self.altBeneficio is not None: self.altBeneficio.export(outfile, level, namespace_, name_='altBeneficio', pretty_print=pretty_print) if self.fimBeneficio is", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtFimBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='dtFimBenef',", "input_data def gds_format_integer_list(self, input_data, input_name=''): return '%s' % ' '.join(input_data) def gds_validate_integer_list( self,", "def gds_validate_date(self, input_data, node=None, input_name=''): return input_data def gds_format_date(self, input_data, input_name=''): _svalue =", "subclass = getSubclassFromModule_( CurrentSubclassModule_, tpPlanRP) if subclass is not None: return subclass(*args_, **kwargs_)", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class dtNascto class", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='cpfBenef') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "self.exportChildren(outfile, level + 1, namespace_='', name_='dtFimBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "def hasContent_(self): if ( self.dtNascto is not None or self.codMunic is not None", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='cep', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dadosNasc) if subclass is not None: return subclass(*args_,", "dval_ = self.gds_parse_date(sval_) self.dtFimBenef = dval_ elif nodeName_ == 'mtvFim': sval_ = child_.text", "%s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'codMunic') self.codMunic = 
ival_ elif nodeName_", "= '\\n' else: eol_ = '' if self.idQuota is not None: showIndent(outfile, level,", "level + 1, namespace_='', name_='cpfBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "TDadosBenef) if subclass is not None: return subclass(*args_, **kwargs_) if TDadosBenef.subclass: return TDadosBenef.subclass(*args_,", "= staticmethod(factory) def get_cpfBenef(self): return self.cpfBenef def set_cpfBenef(self, cpfBenef): self.cpfBenef = cpfBenef def", "value def getCategory(self): return self.category def getContenttype(self, content_type): return self.content_type def getValue(self): return", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='procEmi')", "TEmprPJ.subclass: return TEmprPJ.subclass(*args_, **kwargs_) else: return TEmprPJ(*args_, **kwargs_) factory = staticmethod(factory) def get_tpInsc(self):", "tz = None if input_data[-1] == 'Z': tz = GeneratedsSuper._FixedOffsetTZ(0, 'UTC') input_data =", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='nmBenefic', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpAmb) if subclass is not", "else: return dtIniBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return", "= input_data.split() for value in values: try: int(value) except (TypeError, ValueError): raise_parse_error(node, 'Requires", "if subclass is not None: return subclass(*args_, **kwargs_) if TEmprPJ.subclass: return TEmprPJ.subclass(*args_, **kwargs_)", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpInsc'): pass", "line where and when you want to drop into the # IPython shell:", "'nmCid': nmCid_ = child_.text nmCid_ = 
self.gds_validate_string(nmCid_, node, 'nmCid') self.nmCid = nmCid_ elif", "== 'nmBenefic': nmBenefic_ = child_.text nmBenefic_ = self.gds_validate_string(nmBenefic_, node, 'nmBenefic') self.nmBenefic = nmBenefic_", "return text def find_attr_value_(attr_name, node): attrs = node.attrib attr_parts = attr_name.split(':') value =", "\"\"\"Dados do benefício previdenciário\"\"\" subclass = None superclass = None def __init__(self, tpBenef=None,", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, cep) if subclass is not None: return subclass(*args_,", "fromsubclass_=False): pass # end class dtNascto class codMunic(GeneratedsSuper): subclass = None superclass =", "etree_.SubElement( element, '%s' % self.name) subelement.text = self.to_etree_simple() else: # category == MixedContainer.CategoryComplex", "namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEnderecoBrasil') if", "already_processed, namespace_='', name_='codPostal'): pass def exportChildren(self, outfile, level, namespace_='', name_='codPostal', fromsubclass_=False, pretty_print=True): pass", "is not None: names = classname.split(':') if len(names) == 2: classname = names[1]", "**kwargs_) if indRetif.subclass: return indRetif.subclass(*args_, **kwargs_) else: return indRetif(*args_, **kwargs_) factory = staticmethod(factory)", "True else: return False def export(self, outfile, level, namespace_='', name_='procEmi', namespacedef_='', pretty_print=True): imported_ns_def_", "'brasil': obj_ = TEnderecoBrasil.factory() obj_.build(child_) self.brasil = obj_ obj_.original_tagname_ = 'brasil' elif nodeName_", "map element type names (strings) to XML schema namespace prefix # definitions. 
The", "input_data, node=None, input_name=''): return input_data def gds_format_date(self, input_data, input_name=''): _svalue = '%04d-%02d-%02d' %", "obj_ = TIdeEveTrab.factory() obj_.build(child_) self.ideEvento = obj_ obj_.original_tagname_ = 'ideEvento' elif nodeName_ ==", "return False def export(self, outfile, level, namespace_='', name_='cpfBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('cpfBenef')", "namespace_='', name_='evtCdBenPrRP'): if self.Id is not None and 'Id' not in already_processed: already_processed.add('Id')", "these classes are generated by generateDS.py. # You can replace these methods by", "outfile, level, namespace_='', name_='codMunic', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('codMunic') if imported_ns_def_ is not", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='indRetif', pretty_print=pretty_print) outfile.write('</%s%s>%s' %", "patterns, target): # pat is a list of lists of strings/patterns. 
We should:", "self.original_tagname_ = None self.cpfBenef = cpfBenef self.nmBenefic = nmBenefic self.dadosBenef = dadosBenef def", "outfile, level, namespace_='', name_='dadosNasc', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_", "return subclass(*args_, **kwargs_) if infoPenMorte.subclass: return infoPenMorte.subclass(*args_, **kwargs_) else: return infoPenMorte(*args_, **kwargs_) factory", "self.exportChildren(outfile, level + 1, namespace_='', name_='infoPenMorte', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "namespace_, name_='infoBeneficio', pretty_print=pretty_print) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed) for", "# end class dtIniBenef class vrBenef(GeneratedsSuper): subclass = None superclass = None def", "exportChildren(self, outfile, level, namespace_='', name_='infoPenMorte', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else:", "Then use the following line where and when you want to drop into", "namespace_, eol_)) if self.nrRecibo is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrRecibo>%s</%snrRecibo>%s' % (namespace_,", "namespace_='', name_='cep', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('cep') if imported_ns_def_ is not None: namespacedef_", "tpPlanRP(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None def", "self.ideBenef = obj_ obj_.original_tagname_ = 'ideBenef' elif nodeName_ == 'infoBeneficio': obj_ = infoBeneficio.factory()", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpLograd') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "level, namespace_='', name_='eSocial', namespacedef_=' xmlns:ds=\"http://www.w3.org/2000/09/xmldsig#\" ', pretty_print=True): imported_ns_def_ = 
GenerateDSNamespaceDefs_.get('eSocial') if imported_ns_def_ is", "return indRetif.subclass(*args_, **kwargs_) else: return indRetif(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if", "rootObj.export( sys.stdout, 0, name_=rootTag, namespacedef_='') return rootObj def parseLiteral(inFileName, silence=False): parser = None", "(eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtFimBenef'): pass def exportChildren(self, outfile,", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dtIniBenef) if subclass is not None: return subclass(*args_,", "Signature_ = self.gds_validate_string(Signature_, node, 'Signature') self.Signature = Signature_ # end class eSocial class", "File: generatedsnamespaces.py # # GenerateDSNamespaceDefs = { # \"ElementtypeA\": \"http://www.xxx.com/namespaceA\", # \"ElementtypeB\": \"http://www.xxx.com/namespaceB\",", "paisNascto self.paisNac = paisNac self.nmMae = nmMae self.nmPai = nmPai def factory(*args_, **kwargs_):", "obj_.original_tagname_ = 'evtCdBenPrRP' elif nodeName_ == 'Signature': Signature_ = child_.text Signature_ = self.gds_validate_string(Signature_,", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, bairro) if", "= obj_ obj_.original_tagname_ = 'fimBeneficio' # end class infoBeneficio class tpPlanRP(GeneratedsSuper): subclass =", "input_name=''): return input_data def gds_format_date(self, input_data, input_name=''): _svalue = '%04d-%02d-%02d' % ( input_data.year,", "if dadosNasc.subclass: return dadosNasc.subclass(*args_, **kwargs_) else: return dadosNasc(*args_, **kwargs_) factory = staticmethod(factory) def", "False def export(self, outfile, level, namespace_='', name_='indRetif', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('indRetif') if", "Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609] # # Command line options: #", "to redirect 
the generated superclass module to use a # specific subclass module.", "class dtFimBenef(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "import etree as etree_ except ImportError: from xml.etree import ElementTree as etree_ Validate_simpletypes_", ")) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TIdeEveTrab') if self.hasContent_(): outfile.write('>%s' %", "None or self.ideBenef is not None or self.infoBeneficio is not None ): return", "'{0:02d}:{1:02d}'.format(hours, minutes) return _svalue @classmethod def gds_parse_datetime(cls, input_data): tz = None if input_data[-1]", "name_='cpfBenef', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('cpfBenef') if imported_ns_def_ is not None: namespacedef_ =", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, nmBenefic) if subclass is not None: return", "input_data: return '' else: return input_data def gds_format_base64(self, input_data, input_name=''): return base64.b64encode(input_data) def", "self.exportChildren(outfile, level + 1, namespace_='', name_='TDadosBenef', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "if subclass is not None: return subclass(*args_, **kwargs_) if idQuota.subclass: return idQuota.subclass(*args_, **kwargs_)", "= uf def get_paisNascto(self): return self.paisNascto def set_paisNascto(self, paisNascto): self.paisNascto = paisNascto def", "namespace_='', name_='dadosNasc', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dadosNasc') if imported_ns_def_ is not None: namespacedef_", "pretty_print: eol_ = '\\n' else: eol_ = '' if self.indRetif is not None:", "'paisResid') self.paisResid = paisResid_ elif nodeName_ == 'dscLograd': dscLograd_ = child_.text dscLograd_ =", "% exp) ival_ = self.gds_validate_integer(ival_, node, 
'tpBenef') self.tpBenef = ival_ elif nodeName_ ==", "outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='tpPlanRP', pretty_print=pretty_print) outfile.write('</%s%s>%s' %", "pretty_print) outfile.write('<%snmMae>%s</%snmMae>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmMae), input_name='nmMae')), namespace_, eol_)) if self.nmPai is not None:", "pretty_print) outfile.write('<%stpBenef>%s</%stpBenef>%s' % (namespace_, self.gds_format_integer(self.tpBenef, input_name='tpBenef'), namespace_, eol_)) if self.nrBenefic is not None:", "name_='nmMae') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nmMae',", "self.value)) else: # category == MixedContainer.CategoryComplex showIndent(outfile, level) outfile.write( 'model_.MixedContainer(%d, %d, \"%s\",\\n' %", "outfile, level, namespace_='', name_='paisNac', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "if self.ideBenef is not None: self.ideBenef.export(outfile, level, namespace_, name_='ideBenef', pretty_print=pretty_print) if self.infoBeneficio is", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='ideBenef') if self.hasContent_():", "buildAttributes(self, node, attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass #", "warnings as warnings_ try: from lxml import etree as etree_ except ImportError: from", "\"&quot;\") else: s1 = \"'%s'\" % s1 else: s1 = '\"%s\"' % s1", "= ideEvento self.ideEmpregador = ideEmpregador self.ideBenef = ideBenef self.infoBeneficio = infoBeneficio def factory(*args_,", "**kwargs_) if dscLograd.subclass: return dscLograd.subclass(*args_, **kwargs_) else: return dscLograd(*args_, **kwargs_) factory = staticmethod(factory)", "name_='cpfInst', namespacedef_='', pretty_print=True): 
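The generated `buildChildren` methods all follow the same pattern for integer-typed elements such as `tpBenef`: take the child element's text, convert it with `int()`, and report a parse error on failure before storing the validated value. A minimal sketch of that pattern (the function name here is illustrative, not part of the generated module):

```python
# Sketch of the child-text-to-integer conversion generateDS emits in
# buildChildren: int() the text, raise on failure, return the value.
def parse_int_child(sval, field_name):
    try:
        ival = int(sval)
    except (TypeError, ValueError) as exp:
        raise ValueError('requires integer value for %s: %s' % (field_name, exp))
    return ival
```

In the generated code the result is additionally passed through `gds_validate_integer` before being assigned to the instance attribute.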
imported_ns_def_ = GenerateDSNamespaceDefs_.get('cpfInst') if imported_ns_def_ is not None: namespacedef_ =", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpBenef) if subclass is not None: return", "idQuota.subclass(*args_, **kwargs_) else: return idQuota(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if (", "name_='TEnderecoExterior', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='eSocial')", "= None def __init__(self, tpPlanRP=None, iniBeneficio=None, altBeneficio=None, fimBeneficio=None): self.original_tagname_ = None self.tpPlanRP =", "MixedContainer.CategoryText: # Prevent exporting empty content as empty lines. if self.value.strip(): outfile.write(self.value) elif", "): return True else: return False def export(self, outfile, level, namespace_='', name_='dscLograd', namespacedef_='',", "# ('--no-process-includes', '') # ('-o', 'esociallib/v2_04/evtCdBenPrRP.py') # # Command line arguments: # schemas/v2_04/evtCdBenPrRP.xsd", "self.tpAmb = tpAmb def get_procEmi(self): return self.procEmi def set_procEmi(self, procEmi): self.procEmi = procEmi", "MixedContainer.TypeBase64: text = '%s' % base64.b64encode(self.value) return text def exportLiteral(self, outfile, level, name):", "): return True else: return False def export(self, outfile, level, namespace_='', name_='endereco', namespacedef_='',", "s1: s1 = '\"%s\"' % s1.replace('\"', \"&quot;\") else: s1 = \"'%s'\" % s1", "evtCdBenPrRP.factory() obj_.build(child_) self.evtCdBenPrRP = obj_ obj_.original_tagname_ = 'evtCdBenPrRP' elif nodeName_ == 'Signature': Signature_", "ValueError): raise_parse_error(node, 'Requires sequence of integers') return values def gds_format_float(self, input_data, input_name=''): return", "namespace_, name_='verProc') if self.hasContent_(): outfile.write('>%s' % 
(eol_, )) self.exportChildren(outfile, level + 1, namespace_='',", "None: self.infoBeneficio.export(outfile, level, namespace_, name_='infoBeneficio', pretty_print=pretty_print) def build(self, node): already_processed = set() self.buildAttributes(node,", "= None self.tpBenef = tpBenef self.nrBenefic = nrBenefic if isinstance(dtFimBenef, BaseStrType_): initvalue_ =", "return self.ideEmpregador def set_ideEmpregador(self, ideEmpregador): self.ideEmpregador = ideEmpregador def get_ideBenef(self): return self.ideBenef def", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class cpfBenef class nmBenefic(GeneratedsSuper):", "vrBenef def get_infoPenMorte(self): return self.infoPenMorte def set_infoPenMorte(self, infoPenMorte): self.infoPenMorte = infoPenMorte def hasContent_(self):", "already_processed, namespace_='', name_='tpLograd'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpLograd', fromsubclass_=False, pretty_print=True): pass", "exit') # # Globals # ExternalEncoding = 'ascii' Tag_pattern_ = re_.compile(r'({.*})?(.*)') String_cleanup_pat_ =", "TDadosBeneficio(*args_, **kwargs_) factory = staticmethod(factory) def get_tpBenef(self): return self.tpBenef def set_tpBenef(self, tpBenef): self.tpBenef", "into IPython', ## exit_msg = 'Leaving Interpreter, back to program.') # Then use", "pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class mtvFim class", "# end class tpAmb class procEmi(GeneratedsSuper): subclass = None superclass = None def", "input_data, input_name=''): return base64.b64encode(input_data) def gds_validate_base64(self, input_data, node=None, input_name=''): return input_data def gds_format_integer(self,", "== 'nmCid': nmCid_ = child_.text nmCid_ = self.gds_validate_string(nmCid_, node, 'nmCid') self.nmCid = nmCid_", "self.exportChildren(outfile, level + 1, namespace_='', name_='verProc', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, 
name_, eol_)) else:", "already_processed, namespace_='', name_='evtCdBenPrRP'): if self.Id is not None and 'Id' not in already_processed:", "= exterior def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_(", "= getSubclassFromModule_( CurrentSubclassModule_, verProc) if subclass is not None: return subclass(*args_, **kwargs_) if", "namespace_='', name_='verProc', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, vrBenef) if", "nmCid.subclass: return nmCid.subclass(*args_, **kwargs_) else: return nmCid(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "level + 1, namespace_='', name_='uf', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s'", "mtvFim.subclass: return mtvFim.subclass(*args_, **kwargs_) else: return mtvFim(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "level, namespace_='', name_='paisNascto', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "idQuota_ = child_.text idQuota_ = self.gds_validate_string(idQuota_, node, 'idQuota') self.idQuota = idQuota_ elif nodeName_", "= self.gds_validate_integer(ival_, node, 'tpPlanRP') self.tpPlanRP = ival_ elif nodeName_ == 'iniBeneficio': obj_ =", "tpLograd self.dscLograd = dscLograd self.nrLograd = nrLograd self.complemento = complemento self.bairro = bairro", "**kwargs_) else: return nmPai(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ):", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='fimBeneficio')", "GeneratedsSuper._FixedOffsetTZ( tzoff, 
results.group(0)) input_data = input_data[:-6] if len(input_data.split('.')) > 1: dt = datetime_.datetime.strptime(input_data,", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TIdeEveTrab) if subclass is not None: return", "name_='evtCdBenPrRP', pretty_print=pretty_print) if self.Signature is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sSignature>%s</%sSignature>%s' % ('ds:',", "Regimes Próprios\"\"\" subclass = None superclass = None def __init__(self, Id=None, ideEvento=None, ideEmpregador=None,", "nodeName_, fromsubclass_=False): pass # end class procEmi class verProc(GeneratedsSuper): subclass = None superclass", "True else: return False def export(self, outfile, level, namespace_='', name_='TEmprPJ', namespacedef_='', pretty_print=True): imported_ns_def_", "node, nodeName_, fromsubclass_=False): pass # end class nrRecibo class tpAmb(GeneratedsSuper): subclass = None", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEmprPJ')", "'Requires sequence of floats') return values def gds_format_double(self, input_data, input_name=''): return '%e' %", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='complemento') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "attrs.get('{%s}%s' % (namespace, name, )) return value class GDSParseError(Exception): pass def raise_parse_error(node, msg):", "pretty_print) outfile.write('<%scodPostal>%s</%scodPostal>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.codPostal), input_name='codPostal')), namespace_, eol_)) def build(self, node): already_processed =", "if nodeName_ == 'tpInsc': sval_ = child_.text try: ival_ = int(sval_) except (TypeError,", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, cpfBenef) if subclass is not None: return", "endereco def factory(*args_, **kwargs_): if 
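`gds_parse_datetime` strips a trailing `+hh:mm`/`-hh:mm` suffix from the input and attaches a fixed-offset `tzinfo` built from it (the generated superclass calls this class `_FixedOffsetTZ`). A self-contained sketch of that timezone class and how an offset is attached, with illustrative names:

```python
import datetime

# Sketch of the fixed-offset tzinfo the generated superclass defines:
# it stores the offset in minutes and returns it from utcoffset();
# dst() is always None, tzname() echoes the original suffix.
class FixedOffsetTZ(datetime.tzinfo):
    def __init__(self, offset_minutes, name):
        self.__offset = datetime.timedelta(minutes=offset_minutes)
        self.__name = name

    def utcoffset(self, dt):
        return self.__offset

    def tzname(self, dt):
        return self.__name

    def dst(self, dt):
        return None

# Attach a -03:00 offset (Brasília time, the usual zone for eSocial
# documents) to a datetime, mirroring how the "-hh:mm" suffix is handled.
tz = FixedOffsetTZ(-180, '-03:00')
dt = datetime.datetime(2018, 1, 15, 12, 0, 0, tzinfo=tz)
```

Since Python 3.2 the stdlib `datetime.timezone(datetime.timedelta(...))` covers the same need; generateDS carries its own class for compatibility with older interpreters.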
CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_,", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='cpfInst') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, cpfInst) if subclass is not None: return", "if s1.find(\"'\") == -1: if s1.find('\\n') == -1: return \"'%s'\" % s1 else:", "nodeName_ = Tag_pattern_.match(child.tag).groups()[-1] self.buildChildren(child, node, nodeName_) return self def buildAttributes(self, node, attrs, already_processed):", "+= 'Z' else: if total_seconds < 0: _svalue += '-' total_seconds *= -1", "= child_.text Signature_ = self.gds_validate_string(Signature_, node, 'Signature') self.Signature = Signature_ # end class", "= Signature def hasContent_(self): if ( self.evtCdBenPrRP is not None or self.Signature is", "def __init__(self, offset, name): self.__offset = datetime_.timedelta(minutes=offset) self.__name = name def utcoffset(self, dt):", "name_='uf', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('uf') if imported_ns_def_ is not None: namespacedef_ =", "instring.lower() def get_path_(self, node): path_list = [] self.get_path_list_(node, path_list) path_list.reverse() path = '/'.join(path_list)", "indRetif): self.indRetif = indRetif def get_nrRecibo(self): return self.nrRecibo def set_nrRecibo(self, nrRecibo): self.nrRecibo =", "namespace_='', name_='indRetif'): pass def exportChildren(self, outfile, level, namespace_='', name_='indRetif', fromsubclass_=False, pretty_print=True): pass def", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, tpPlanRP) if subclass is not None:", "return self.fimBeneficio def set_fimBeneficio(self, fimBeneficio): self.fimBeneficio = fimBeneficio def hasContent_(self): if ( self.tpPlanRP", "eol_)) if self.dscLograd is not None: showIndent(outfile, level, 
pretty_print) outfile.write('<%sdscLograd>%s</%sdscLograd>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.dscLograd),", "( ): return True else: return False def export(self, outfile, level, namespace_='', name_='cep',", "# Prevent exporting empty content as empty lines. if self.value.strip(): if len(element) >", "input_name='tpBenef'), namespace_, eol_)) if self.nrBenefic is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snrBenefic>%s</%snrBenefic>%s' %", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class tpLograd class dscLograd(GeneratedsSuper):", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='ideBenef', pretty_print=pretty_print) showIndent(outfile,", "name_='nmPai', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "self.fimBeneficio is not None: self.fimBeneficio.export(outfile, level, namespace_, name_='fimBeneficio', pretty_print=pretty_print) def build(self, node): already_processed", "not None: showIndent(outfile, level, pretty_print) outfile.write('<%stpPlanRP>%s</%stpPlanRP>%s' % (namespace_, self.gds_format_integer(self.tpPlanRP, input_name='tpPlanRP'), namespace_, eol_)) if", "informado se já houver informação anterior de benefícios para o beneficiário identificado em", "class_obj2 return class_obj1 def gds_build_any(self, node, type_name=None): return None @classmethod def gds_reverse_node_mapping(cls, mapping):", "= nmCid self.codPostal = codPostal def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None:", "def export(self, outfile, level, namespace_='', name_='TEnderecoBrasil', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TEnderecoBrasil') if imported_ns_def_", "not None: showIndent(outfile, level, pretty_print) 
outfile.write('<%sbairro>%s</%sbairro>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.bairro), input_name='bairro')), namespace_, eol_)) if", "if idQuota.subclass: return idQuota.subclass(*args_, **kwargs_) else: return idQuota(*args_, **kwargs_) factory = staticmethod(factory) def", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='infoBeneficio'): pass def exportChildren(self, outfile, level, namespace_='',", "obj_.original_tagname_ = 'brasil' elif nodeName_ == 'exterior': obj_ = TEnderecoExterior.factory() obj_.build(child_) self.exterior =", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='TDadosBeneficio'): pass def exportChildren(self, outfile, level, namespace_='',", "== MixedContainer.CategoryText: showIndent(outfile, level) outfile.write( 'model_.MixedContainer(%d, %d, \"%s\", \"%s\"),\\n' % ( self.category, self.content_type,", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='eSocial'): pass def", "= tpPlanRP def get_iniBeneficio(self): return self.iniBeneficio def set_iniBeneficio(self, iniBeneficio): self.iniBeneficio = iniBeneficio def", "outfile, level, already_processed, namespace_='', name_='infoBeneficio'): pass def exportChildren(self, outfile, level, namespace_='', name_='infoBeneficio', fromsubclass_=False,", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, indRetif) if subclass is not None: return", "fromsubclass_=False): pass # end class vrBenef class infoPenMorte(GeneratedsSuper): \"\"\"Informações relativas a pensão por", "= indRetif self.nrRecibo = nrRecibo self.tpAmb = tpAmb self.procEmi = procEmi self.verProc =", "% (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrInsc'): pass def exportChildren(self,", "export(self, outfile, level, namespace_='', name_='evtCdBenPrRP', namespacedef_='', pretty_print=True): 
imported_ns_def_ = GenerateDSNamespaceDefs_.get('evtCdBenPrRP') if imported_ns_def_ is", "( self.category, self.content_type, self.name, self.value)) elif self.category == MixedContainer.CategorySimple: showIndent(outfile, level) outfile.write( 'model_.MixedContainer(%d,", "category, content_type, name, value): self.category = category self.content_type = content_type self.name = name", "dtIniBenef.subclass(*args_, **kwargs_) else: return dtIniBenef(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if (", "name_='endereco', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('endereco') if imported_ns_def_ is not None: namespacedef_ =", "_svalue += '{0:02d}:{1:02d}'.format(hours, minutes) return _svalue def gds_validate_simple_patterns(self, patterns, target): # pat is", "return cep(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "exportChildren(self, outfile, level, namespace_='', name_='codPostal', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "'cpfBenef') self.cpfBenef = cpfBenef_ elif nodeName_ == 'nmBenefic': nmBenefic_ = child_.text nmBenefic_ =", "already_processed, namespace_='', name_='nmBenefic'): pass def exportChildren(self, outfile, level, namespace_='', name_='nmBenefic', fromsubclass_=False, pretty_print=True): pass", "fval_ elif nodeName_ == 'infoPenMorte': obj_ = infoPenMorte.factory() obj_.build(child_) self.infoPenMorte = obj_ obj_.original_tagname_", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TIdeEveTrab)", "= staticmethod(factory) def get_brasil(self): return self.brasil def set_brasil(self, brasil): self.brasil = brasil def", "child_.text complemento_ = self.gds_validate_string(complemento_, node, 'complemento') self.complemento = complemento_ elif nodeName_ == 'bairro':", "name_='dadosNasc', 
pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' %", "return _svalue @classmethod def gds_parse_date(cls, input_data): tz = None if input_data[-1] == 'Z':", "fromsubclass_=False): pass # end class tpAmb class procEmi(GeneratedsSuper): subclass = None superclass =", "uf(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True else:", "None ): return True else: return False def export(self, outfile, level, namespace_='', name_='TEnderecoExterior',", "return input_data def gds_format_integer(self, input_data, input_name=''): return '%d' % input_data def gds_validate_integer(self, input_data,", "= s1[pos:mo.start()] s2 += quote_xml_aux(s3) s2 += s1[mo.start():mo.end()] pos = mo.end() s3 =", "pass def exportChildren(self, outfile, level, namespace_='', name_='eSocial', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ =", "CurrentSubclassModule_, dtIniBenef) if subclass is not None: return subclass(*args_, **kwargs_) if dtIniBenef.subclass: return", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, complemento) if subclass is not", "def export(self, outfile, level, namespace_='', name_='dadosNasc', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dadosNasc') if imported_ns_def_", "de benefícios.\"\"\" subclass = None superclass = None def __init__(self, tpBenef=None, nrBenefic=None, dtFimBenef=None,", "evtCdBenPrRP as model_\\n\\n') sys.stdout.write('rootObj = model_.rootClass(\\n') rootObj.exportLiteral(sys.stdout, 0, name_=rootTag) sys.stdout.write(')\\n') return rootObj def", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, indRetif) if subclass is", "= initvalue_ self.mtvFim = mtvFim def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None:", "name_='TEmprPJ') if 
self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='TEmprPJ',", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='eSocial') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='endereco'): pass def exportChildren(self, outfile, level, namespace_='',", "return subclass(*args_, **kwargs_) if mtvFim.subclass: return mtvFim.subclass(*args_, **kwargs_) else: return mtvFim(*args_, **kwargs_) factory", "exportChildren(self, outfile, level, namespace_='', name_='uf', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed =", "level, namespace_='', name_='cpfInst', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('cpfInst') if imported_ns_def_ is not None:", "def exportChildren(self, outfile, level, namespace_='', name_='dtNascto', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "'''Get the subclass of a class from a specific module.''' name = class_.__name__", "CurrentSubclassModule_ = None # # Support/utility functions. 
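Every generated class exports itself with the same skeleton: open the tag, write attributes, then write children only when `hasContent_()` is true, otherwise emit a self-closing tag; `pretty_print` toggles the end-of-line string. A stripped-down illustration of that control flow (not the generated method itself):

```python
import io

# Skeleton of the export pattern generateDS emits: a non-empty element
# gets open/close tags around its content, an empty one self-closes.
def export_element(outfile, name, text, pretty_print=True):
    eol = '\n' if pretty_print else ''
    if text:
        outfile.write('<%s>%s</%s>%s' % (name, text, name, eol))
    else:
        outfile.write('<%s/>%s' % (name, eol))

buf = io.StringIO()
export_element(buf, 'verProc', '1.0')
```

The real methods additionally thread `namespace_`, `name_`, and indentation (`showIndent`) through every call, which is what makes the generated bodies so repetitive.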
# def showIndent(outfile, level, pretty_print=True): if", "s1 = '\"%s\"' % s1 return s1 def quote_python(inStr): s1 = inStr if", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class paisNac class nmMae(GeneratedsSuper):", "'', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='ideBenef') if self.hasContent_(): outfile.write('>%s'", "pass # end class nrRecibo class tpAmb(GeneratedsSuper): subclass = None superclass = None", "class bairro class cep(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_", "TEnderecoBrasil.subclass(*args_, **kwargs_) else: return TEnderecoBrasil(*args_, **kwargs_) factory = staticmethod(factory) def get_tpLograd(self): return self.tpLograd", "or self.dadosBenef is not None ): return True else: return False def export(self,", "datetime_.datetime.strptime(dtNascto, '%Y-%m-%d').date() else: initvalue_ = dtNascto self.dtNascto = initvalue_ self.codMunic = codMunic self.uf", "= attrs.get('{%s}%s' % (namespace, name, )) return value class GDSParseError(Exception): pass def raise_parse_error(node,", ")) self.exportChildren(outfile, level + 1, namespace_='', name_='nrBenefic', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_))", "1, namespace_='', name_='TDadosBeneficio', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, TEnderecoExterior) if subclass is not None:", "showIndent(outfile, level, pretty_print) outfile.write('<%snmCid>%s</%snmCid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmCid), input_name='nmCid')), namespace_, eol_)) if self.codPostal is", "name_='cpfInst', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, 
already_processed)", "elif nodeName_ == 'nmPai': nmPai_ = child_.text nmPai_ = self.gds_validate_string(nmPai_, node, 'nmPai') self.nmPai", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='paisNac', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "): return True else: return False def export(self, outfile, level, namespace_='', name_='uf', namespacedef_='',", "None or self.bairro is not None or self.nmCid is not None or self.codPostal", "def gds_format_boolean_list(self, input_data, input_name=''): return '%s' % ' '.join(input_data) def gds_validate_boolean_list( self, input_data,", "is not None: text += child.tail return text def find_attr_value_(attr_name, node): attrs =", "lines. if self.value.strip(): outfile.write(self.value) elif self.category == MixedContainer.CategorySimple: self.exportSimple(outfile, level, name) else: #", "name_='endereco') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='endereco',", "True else: return False def export(self, outfile, level, namespace_='', name_='endereco', namespacedef_='', pretty_print=True): imported_ns_def_", "subclass(*args_, **kwargs_) if cep.subclass: return cep.subclass(*args_, **kwargs_) else: return cep(*args_, **kwargs_) factory =", "self.altBeneficio is not None or self.fimBeneficio is not None ): return True else:", "pretty_print) outfile.write('<%sbairro>%s</%sbairro>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.bairro), input_name='bairro')), namespace_, eol_)) if self.cep is not None:", "nmPai) if subclass is not None: return subclass(*args_, **kwargs_) if nmPai.subclass: return nmPai.subclass(*args_,", "= s1.replace('&', '&amp;') s1 = s1.replace('<', '&lt;') s1 = s1.replace('>', '&gt;') if '\"'", "('%f' % (float(input_data.microsecond) / 1000000))[2:], ) if input_data.tzinfo is not None: tzoff =", "class tpAmb(GeneratedsSuper): subclass = 
None superclass = None def __init__(self): self.original_tagname_ = None", "if pretty_print: eol_ = '\\n' else: eol_ = '' if self.tpInsc is not", "infoBeneficio=None): self.original_tagname_ = None self.Id = _cast(None, Id) self.ideEvento = ideEvento self.ideEmpregador =", "in already_processed: already_processed.add('Id') self.Id = value def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('cpfBenef') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "do beneficiário\"\"\" subclass = None superclass = None def __init__(self, dtNascto=None, codMunic=None, uf=None,", "2016, 06:48:10) [GCC 5.4.0 20160609] # # Command line options: # ('--no-process-includes', '')", "outfile, level, name, namespace, pretty_print=True): if self.category == MixedContainer.CategoryText: # Prevent exporting empty", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='vrBenef'): pass def exportChildren(self, outfile, level,", "inStr.replace('&', '&amp;') s1 = s1.replace('<', '&lt;') s1 = s1.replace('>', '&gt;') return s1 def", "'altBeneficio' elif nodeName_ == 'fimBeneficio': obj_ = fimBeneficio.factory() obj_.build(child_) self.fimBeneficio = obj_ obj_.original_tagname_", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, infoBeneficio) if subclass is", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='idQuota'):", "pretty_print) outfile.write('<%snrInsc>%s</%snrInsc>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrInsc), input_name='nrInsc')), namespace_, eol_)) def build(self, node): already_processed =", "return verProc.subclass(*args_, **kwargs_) else: return verProc(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if", "return self.__offset def tzname(self, dt): 
return self.__name def dst(self, dt): return None def", "(namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmPai), input_name='nmPai')), namespace_, eol_)) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "attr_parts namespace = node.nsmap.get(prefix) if namespace is not None: value = attrs.get('{%s}%s' %", "= child_.text uf_ = self.gds_validate_string(uf_, node, 'uf') self.uf = uf_ # end class", "return self.indRetif def set_indRetif(self, indRetif): self.indRetif = indRetif def get_nrRecibo(self): return self.nrRecibo def", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('cep') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "return self.ideEvento def set_ideEvento(self, ideEvento): self.ideEvento = ideEvento def get_ideEmpregador(self): return self.ideEmpregador def", "self.nmBenefic def set_nmBenefic(self, nmBenefic): self.nmBenefic = nmBenefic def get_dadosBenef(self): return self.dadosBenef def set_dadosBenef(self,", "= getSubclassFromModule_( CurrentSubclassModule_, nrInsc) if subclass is not None: return subclass(*args_, **kwargs_) if", "level, namespace_='', name_='cpfInst', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "nrLograd self.complemento = complemento self.bairro = bairro self.nmCid = nmCid self.codPostal = codPostal", "is not None: return subclass(*args_, **kwargs_) if tpInsc.subclass: return tpInsc.subclass(*args_, **kwargs_) else: return", "level, already_processed, namespace_='', name_='tpBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='tpBenef', fromsubclass_=False, pretty_print=True):", "nrRecibo.subclass: return nrRecibo.subclass(*args_, **kwargs_) else: return nrRecibo(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self):", "level, namespace_='', name_='cep', fromsubclass_=False, pretty_print=True): pass def 
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Generated Tue Oct 10 00:42:21 2017 by generateDS.py.
#
# [This chunk of the generated module was extraction-damaged beyond
#  line-level recovery: its statements survive only as overlapping,
#  shuffled fragments. The recoverable content is the following.]
#
# The file is the generateDS.py binding module for the eSocial event
# "evtCdBenPrRP" (cadastro de benefício previdenciário - RPPS). The
# fragments attest these generated classes, among others:
#
#   - complex types: TDadosBeneficio, TDadosBenef, TEmprPJ,
#     TEnderecoBrasil, TEnderecoExterior, TIdeEveTrab, ideBenef,
#     infoBeneficio, infoPenMorte, dadosNasc, endereco, eSocial
#   - simple-content wrappers: tpBenef, nrBenefic, tpPlanRP, dtIniBenef,
#     dtFimBenef, mtvFim, vrBenef, idQuota, cpfInst, tpInsc, nrInsc,
#     indRetif, nrRecibo, tpAmb, procEmi, verProc, dtNascto, codMunic,
#     uf, paisNascto, paisNac, paisResid, nmMae, nmPai, nmCid, nmBenefic,
#     cpfBenef, tpLograd, dscLograd, nrLograd, complemento, bairro, cep,
#     codPostal
#
# Each class follows the standard generateDS pattern: a static factory()
# that consults CurrentSubclassModule_ / getSubclassFromModule_ before
# instantiating, per-field get_*/set_* accessors, hasContent_(),
# export()/exportAttributes()/exportChildren() for XML serialization
# (with GenerateDSNamespaceDefs_ lookups and pretty_print support), and
# build()/buildAttributes()/buildChildren() for parsing from an lxml
# node. The module also carries the usual generateDS scaffolding:
# GeneratedsSuper (gds_format_*/gds_validate_*/gds_parse_date helpers,
# _FixedOffsetTZ), MixedContainer, quote_xml/quote_python, parsexml_,
# the GDSClassesMapping table (e.g. 'ideEvento': TIdeEveTrab,
# 'iniBeneficio': TDadosBeneficio), and the parse/parseEtree entry
# points with USAGE_TEXT.
node=None,", "already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class tpInsc", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, bairro) if subclass is not None: return subclass(*args_,", "def export(self, outfile, level, namespace_='', name_='nrBenefic', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('nrBenefic') if imported_ns_def_", "1, namespace_='', name_='paisNascto', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_,", "dt = datetime_.datetime.strptime( input_data, '%Y-%m-%dT%H:%M:%S') dt = dt.replace(tzinfo=tz) return dt def gds_validate_date(self, input_data,", "= cep def get_codMunic(self): return self.codMunic def set_codMunic(self, codMunic): self.codMunic = codMunic def", "namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TDadosBeneficio') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_", "else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpPlanRP'): pass", "outfile.write('<%scodPostal>%s</%scodPostal>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.codPostal), input_name='codPostal')), namespace_, eol_)) def build(self, node): already_processed = set()", "subclass = None superclass = None def __init__(self, tpLograd=None, dscLograd=None, nrLograd=None, complemento=None, bairro=None,", "subclass = getSubclassFromModule_( CurrentSubclassModule_, TEnderecoBrasil) if subclass is not None: return subclass(*args_, **kwargs_)", "namespace_='', name_='verProc', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('verProc') if imported_ns_def_ is not None: namespacedef_", "self.Signature = Signature def hasContent_(self): if ( self.evtCdBenPrRP is not None or self.Signature", "outfile, level, 
already_processed, namespace_='', name_='procEmi'): pass def exportChildren(self, outfile, level, namespace_='', name_='procEmi', fromsubclass_=False,", "attributes, too) # # The module generatedsnamespaces, if it is importable, must contain", "already_processed, namespace_, name_='nrInsc') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", "class for which there is # a namespace prefix definition, will export that", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, ideBenef) if subclass", "= staticmethod(factory) def get_indRetif(self): return self.indRetif def set_indRetif(self, indRetif): self.indRetif = indRetif def", "# end class uf class paisNascto(GeneratedsSuper): subclass = None superclass = None def", "level, pretty_print) outfile.write('<%scpfInst>%s</%scpfInst>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfInst), input_name='cpfInst')), namespace_, eol_)) def build(self, node): already_processed", "DOM. 
doc = None if not silence: sys.stdout.write('<?xml version=\"1.0\" ?>\\n') rootObj.export( sys.stdout, 0,", "None: rootTag = 'eSocial' rootClass = eSocial rootObj = rootClass.factory() rootObj.build(rootNode) # Enable", "self.fimBeneficio = obj_ obj_.original_tagname_ = 'fimBeneficio' # end class infoBeneficio class tpPlanRP(GeneratedsSuper): subclass", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='bairro', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "beneficiário\"\"\" subclass = None superclass = None def __init__(self, dtNascto=None, codMunic=None, uf=None, paisNascto=None,", "not None: return subclass(*args_, **kwargs_) if fimBeneficio.subclass: return fimBeneficio.subclass(*args_, **kwargs_) else: return fimBeneficio(*args_,", "pretty_print) outfile.write('<%snmCid>%s</%snmCid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmCid), input_name='nmCid')), namespace_, eol_)) if self.codPostal is not None:", "self.nmCid is not None: showIndent(outfile, level, pretty_print) outfile.write('<%snmCid>%s</%snmCid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmCid), input_name='nmCid')), namespace_,", "return subclass(*args_, **kwargs_) if infoBeneficio.subclass: return infoBeneficio.subclass(*args_, **kwargs_) else: return infoBeneficio(*args_, **kwargs_) factory", "def buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class idQuota class cpfInst(GeneratedsSuper):", "s1: if \"'\" in s1: s1 = '\"%s\"' % s1.replace('\"', \"&quot;\") else: s1", "level, namespace_='', name_='evtCdBenPrRP', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('evtCdBenPrRP') if imported_ns_def_ is not None:", "namespace_='', name_='tpPlanRP', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else: outfile.write('/>%s' % (eol_, ))", "set() self.exportAttributes(outfile, level, 
already_processed, namespace_, name_='TEnderecoBrasil') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "namespace_='', name_='eSocial', namespacedef_=' xmlns:ds=\"http://www.w3.org/2000/09/xmldsig#\" ', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('eSocial') if imported_ns_def_ is not", "import IPShellEmbed ## args = '' ## ipshell = IPShellEmbed(args, ## banner =", "= verProc def hasContent_(self): if ( self.indRetif is not None or self.nrRecibo is", "not None: self.evtCdBenPrRP.export(outfile, level, namespace_, name_='evtCdBenPrRP', pretty_print=pretty_print) if self.Signature is not None: showIndent(outfile,", "if subclass is not None: return subclass(*args_, **kwargs_) if tpBenef.subclass: return tpBenef.subclass(*args_, **kwargs_)", "None: return subclass(*args_, **kwargs_) if mtvFim.subclass: return mtvFim.subclass(*args_, **kwargs_) else: return mtvFim(*args_, **kwargs_)", "or self.bairro is not None or self.cep is not None or self.codMunic is", "0: _svalue = '%04d-%02d-%02dT%02d:%02d:%02d' % ( input_data.year, input_data.month, input_data.day, input_data.hour, input_data.minute, input_data.second, )", "% (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.cpfInst), input_name='cpfInst')), namespace_, eol_)) def build(self, node): already_processed = set() self.buildAttributes(node,", "nmPai def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_,", "'(\"true\", \"1\", \"false\", \"0\")') return values def gds_validate_datetime(self, input_data, node=None, input_name=''): return input_data", "= GenerateDSNamespaceDefs_.get('dadosNasc') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "True else: return False def export(self, outfile, level, namespace_='', name_='paisNac', namespacedef_='', pretty_print=True): imported_ns_def_", "return tpPlanRP.subclass(*args_, 
**kwargs_) else: return tpPlanRP(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if", "name_='uf') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='uf',", "codPostal_ # end class TEnderecoExterior class paisResid(GeneratedsSuper): subclass = None superclass = None", "self.idQuota def set_idQuota(self, idQuota): self.idQuota = idQuota def get_cpfInst(self): return self.cpfInst def set_cpfInst(self,", "as exp: raise_parse_error(child_, 'requires integer: %s' % exp) ival_ = self.gds_validate_integer(ival_, node, 'tpBenef')", "category == MixedContainer.CategoryComplex self.value.export( outfile, level, namespace, name, pretty_print=pretty_print) def exportSimple(self, outfile, level,", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='procEmi', pretty_print=pretty_print)", "name_='complemento', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "= child_.text nmBenefic_ = self.gds_validate_string(nmBenefic_, node, 'nmBenefic') self.nmBenefic = nmBenefic_ elif nodeName_ ==", "beneficiário\"\"\" subclass = None superclass = None def __init__(self, dadosNasc=None, endereco=None): self.original_tagname_ =", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='TDadosBeneficio', pretty_print=pretty_print) showIndent(outfile, level, pretty_print)", "gds_format_float_list(self, input_data, input_name=''): return '%s' % ' '.join(input_data) def gds_validate_float_list( self, input_data, node=None,", "child.tail return text def find_attr_value_(attr_name, node): attrs = node.attrib attr_parts = attr_name.split(':') value", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='evtCdBenPrRP'): if self.Id", "pass def 
buildChildren(self, child_, node, nodeName_, fromsubclass_=False): pass # end class dtFimBenef class", "elif nodeName_ == 'tpAmb': sval_ = child_.text try: ival_ = int(sval_) except (TypeError,", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='paisNac'): pass def exportChildren(self, outfile, level, namespace_='', name_='paisNac',", "outer elements # - OR the inner elements found1 = True for patterns1", "def get_nmBenefic(self): return self.nmBenefic def set_nmBenefic(self, nmBenefic): self.nmBenefic = nmBenefic def get_dadosBenef(self): return", "% (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='nmMae', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_,", "def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, fimBeneficio)", "element.text += self.value elif self.category == MixedContainer.CategorySimple: subelement = etree_.SubElement( element, '%s' %", "self.exportChildren(outfile, level + 1, namespace_='', name_='tpBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_, eol_)) else:", "if subclass is not None: return subclass(*args_, **kwargs_) if TEnderecoExterior.subclass: return TEnderecoExterior.subclass(*args_, **kwargs_)", "**kwargs_) if nrRecibo.subclass: return nrRecibo.subclass(*args_, **kwargs_) else: return nrRecibo(*args_, **kwargs_) factory = staticmethod(factory)", "return False def export(self, outfile, level, namespace_='', name_='paisNascto', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('paisNascto')", "is not None or self.paisNac is not None or self.nmMae is not None", "def export(self, outfile, level, namespace_='', name_='tpLograd', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpLograd') if imported_ns_def_", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, 
TDadosBeneficio) if subclass is not None:", "): return True else: return False def export(self, outfile, level, namespace_='', name_='dtFimBenef', namespacedef_='',", "not None or self.uf is not None ): return True else: return False", "'.join(input_data) def gds_validate_integer_list( self, input_data, node=None, input_name=''): values = input_data.split() for value in", "else: return 'xs:string' else: return self.data_type def set_container(self, container): self.container = container def", "buildChildren(self, child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'brasil': obj_ = TEnderecoBrasil.factory() obj_.build(child_)", "None superclass = None def __init__(self): self.original_tagname_ = None def factory(*args_, **kwargs_): if", "name_='nrRecibo'): pass def exportChildren(self, outfile, level, namespace_='', name_='nrRecibo', fromsubclass_=False, pretty_print=True): pass def build(self,", "2: BaseStrType_ = basestring else: BaseStrType_ = str def parsexml_(infile, parser=None, **kwargs): if", "already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='eSocial') if self.hasContent_(): outfile.write('>%s' % (eol_,", "if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='complemento', pretty_print=pretty_print)", "level, pretty_print) outfile.write('<%snmBenefic>%s</%snmBenefic>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nmBenefic), input_name='nmBenefic')), namespace_, eol_)) if self.dadosBenef is not", "eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='nrLograd'):", "%s/line %d)' % (msg, node.tag, node.sourceline, ) raise GDSParseError(msg) class MixedContainer: # Constants", "gds_validate_datetime(self, input_data, node=None, input_name=''): return input_data def gds_format_datetime(self, input_data, input_name=''): if 
input_data.microsecond ==", "self.nmBenefic = nmBenefic_ elif nodeName_ == 'dadosBenef': obj_ = TDadosBenef.factory() obj_.build(child_) self.dadosBenef =", "(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='mtvFim', pretty_print=pretty_print) outfile.write('</%s%s>%s' % (namespace_, name_,", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='dtNascto', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "level, pretty_print) outfile.write('<%spaisResid>%s</%spaisResid>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.paisResid), input_name='paisResid')), namespace_, eol_)) if self.dscLograd is not", "nodeName_, fromsubclass_=False): if nodeName_ == 'indRetif': sval_ = child_.text try: ival_ = int(sval_)", "def exportChildren(self, outfile, level, namespace_='', name_='indRetif', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "class uf(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "% (namespace_, self.gds_format_float(self.vrBenef, input_name='vrBenef'), namespace_, eol_)) if self.infoPenMorte is not None: self.infoPenMorte.export(outfile, level,", "superclass = None def __init__(self, cpfBenef=None, nmBenefic=None, dadosBenef=None): self.original_tagname_ = None self.cpfBenef =", "obj_ obj_.original_tagname_ = 'dadosBenef' # end class ideBenef class cpfBenef(GeneratedsSuper): subclass = None", "**kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, idQuota) if subclass", "(namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.nrInsc), input_name='nrInsc')), namespace_, eol_)) def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, verProc) if subclass is", 
"get_nrInsc(self): return self.nrInsc def set_nrInsc(self, nrInsc): self.nrInsc = nrInsc def hasContent_(self): if (", "= set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpLograd') if self.hasContent_(): outfile.write('>%s' % (eol_, ))", "else: return False def export(self, outfile, level, namespace_='', name_='tpBenef', namespacedef_='', pretty_print=True): imported_ns_def_ =", "): return True else: return False def export(self, outfile, level, namespace_='', name_='TEnderecoExterior', namespacedef_='',", "== 'cpfBenef': cpfBenef_ = child_.text cpfBenef_ = self.gds_validate_string(cpfBenef_, node, 'cpfBenef') self.cpfBenef = cpfBenef_", "classname is not None: names = classname.split(':') if len(names) == 2: classname =", "self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='cpfBenef', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "): return True else: return False def export(self, outfile, level, namespace_='', name_='paisResid', namespacedef_='',", "self.category == MixedContainer.CategorySimple: self.exportSimple(outfile, level, name) else: # category == MixedContainer.CategoryComplex self.value.export( outfile,", "return nrInsc.subclass(*args_, **kwargs_) else: return nrInsc(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, evtCdBenPrRP) if subclass is not None:", "name_='cpfBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='cpfBenef', fromsubclass_=False, pretty_print=True): pass def build(self,", "outfile, level, already_processed, namespace_='', name_='verProc'): pass def exportChildren(self, outfile, level, namespace_='', name_='verProc', fromsubclass_=False,", "level, namespace_='', name_='verProc', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "set() 
self.exportAttributes(outfile, level, already_processed, namespace_, name_='infoPenMorte') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, paisResid) if subclass is not", "already_processed, namespace_, name_='paisNascto') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1,", "else: return False def export(self, outfile, level, namespace_='', name_='tpAmb', namespacedef_='', pretty_print=True): imported_ns_def_ =", "None superclass = None def __init__(self, tpBenef=None, nrBenefic=None, dtIniBenef=None, vrBenef=None, infoPenMorte=None): self.original_tagname_ =", "child_, node, nodeName_, fromsubclass_=False): if nodeName_ == 'cpfBenef': cpfBenef_ = child_.text cpfBenef_ =", "not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, codMunic) if subclass is not None: return", "export(self, outfile, level, namespace_='', name_='complemento', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('complemento') if imported_ns_def_ is", "if subclass is not None: return subclass(*args_, **kwargs_) if nmBenefic.subclass: return nmBenefic.subclass(*args_, **kwargs_)", "child_.text tpLograd_ = self.gds_validate_string(tpLograd_, node, 'tpLograd') self.tpLograd = tpLograd_ elif nodeName_ == 'dscLograd':", "None or self.nrInsc is not None ): return True else: return False def", "= None superclass = None def __init__(self, paisResid=None, dscLograd=None, nrLograd=None, complemento=None, bairro=None, nmCid=None,", "is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, uf) if subclass is not None:", "def exportChildren(self, outfile, level, namespace_='', name_='nrInsc', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed", "' '.join(input_data) def gds_validate_integer_list( self, input_data, node=None, 
input_name=''): values = input_data.split() for value", "path_list.reverse() path = '/'.join(path_list) return path Tag_strip_pattern_ = re_.compile(r'\\{.*\\}') def get_path_list_(self, node, path_list):", "namespace_='', name_='tpPlanRP', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpPlanRP') if imported_ns_def_ is not None: namespacedef_", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpBenef'): pass def exportChildren(self, outfile, level,", "class cpfBenef class nmBenefic(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_", "imported_ns_def_ = GenerateDSNamespaceDefs_.get('dscLograd') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print:", "None: subclass = getSubclassFromModule_( CurrentSubclassModule_, dscLograd) if subclass is not None: return subclass(*args_,", "child_, node, nodeName_, fromsubclass_=False): pass # end class tpAmb class procEmi(GeneratedsSuper): subclass =", "namespace_, eol_)) if self.verProc is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sverProc>%s</%sverProc>%s' % (namespace_,", "* 3600)) // 60 _svalue += '{0:02d}:{1:02d}'.format(hours, minutes) return _svalue def gds_validate_simple_patterns(self, patterns,", "return self.nrInsc def set_nrInsc(self, nrInsc): self.nrInsc = nrInsc def hasContent_(self): if ( self.tpInsc", "namespace_='', name_='fimBeneficio'): pass def exportChildren(self, outfile, level, namespace_='', name_='fimBeneficio', fromsubclass_=False, pretty_print=True): if pretty_print:", "sys.version_info.major == 2: return instring.encode(ExternalEncoding) else: return instring @staticmethod def convert_unicode(instring): if isinstance(instring,", "True else: return False def export(self, outfile, level, namespace_='', name_='infoBeneficio', namespacedef_='', pretty_print=True): imported_ns_def_", "self.hasContent_(): outfile.write('>%s' % 
(eol_, )) self.exportChildren(outfile, level + 1, namespace_='', name_='cep', pretty_print=pretty_print) outfile.write('</%s%s>%s'", "exportAttributes(self, outfile, level, already_processed, namespace_='', name_='dtIniBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='dtIniBenef',", "'nrBenefic': nrBenefic_ = child_.text nrBenefic_ = self.gds_validate_string(nrBenefic_, node, 'nrBenefic') self.nrBenefic = nrBenefic_ elif", "gds_format_double(self, input_data, input_name=''): return '%e' % input_data def gds_validate_double(self, input_data, node=None, input_name=''): return", "text def exportLiteral(self, outfile, level, name): if self.category == MixedContainer.CategoryText: showIndent(outfile, level) outfile.write(", "GenerateDSNamespaceDefs_.get('paisNac') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "self.gds_format_integer(self.tpAmb, input_name='tpAmb'), namespace_, eol_)) if self.procEmi is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sprocEmi>%s</%sprocEmi>%s'", "outfile, level, namespace_='', name_='mtvFim', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set()", "self.fimBeneficio = fimBeneficio def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass =", "self.exportAttributes(outfile, level, already_processed, namespace_, name_='nmPai') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level", "CategoryNone = 0 CategoryText = 1 CategorySimple = 2 CategoryComplex = 3 #", "outfile, level, already_processed, namespace_='', name_='cpfBenef'): pass def exportChildren(self, outfile, level, namespace_='', name_='cpfBenef', fromsubclass_=False,", "outfile, level, namespace_='', name_='TDadosBeneficio', namespacedef_='', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('TDadosBeneficio') if imported_ns_def_ is not", "= int(sval_) except (TypeError, 
ValueError) as exp: raise_parse_error(child_, 'requires integer: %s' % exp)", "= ideEvento def get_ideEmpregador(self): return self.ideEmpregador def set_ideEmpregador(self, ideEmpregador): self.ideEmpregador = ideEmpregador def", "return nmBenefic(*args_, **kwargs_) factory = staticmethod(factory) def hasContent_(self): if ( ): return True", "self.category, self.content_type, self.name, self.value)) else: # category == MixedContainer.CategoryComplex showIndent(outfile, level) outfile.write( 'model_.MixedContainer(%d,", "value): if typ is None or value is None: return value return typ(value)", "% ( self.category, self.content_type, self.name, self.value)) else: # category == MixedContainer.CategoryComplex showIndent(outfile, level)", "or self.procEmi is not None or self.verProc is not None ): return True", "level, namespace_='', name_='indRetif', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node,", "namespace_='', name_='TEnderecoExterior', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = ''", "name_='dadosNasc', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\\n' else: eol_ = '' if", "self.nrBenefic = nrBenefic if isinstance(dtFimBenef, BaseStrType_): initvalue_ = datetime_.datetime.strptime(dtFimBenef, '%Y-%m-%d').date() else: initvalue_ =", "path_list.append(tag) self.get_path_list_(node.getparent(), path_list) def get_class_obj_(self, node, default_class=None): class_obj1 = default_class if 'xsi' in", "fromsubclass_=False): pass # end class codPostal class TDadosBeneficio(GeneratedsSuper): \"\"\"Dados do benefício previdenciário\"\"\" subclass", "outfile.write('<%suf>%s</%suf>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.uf), input_name='uf')), namespace_, eol_)) if self.paisNascto is not None: showIndent(outfile,", ")) def exportAttributes(self, outfile, level, already_processed, namespace_='', 
name_='idQuota'): pass def exportChildren(self, outfile, level,", "= GenerateDSNamespaceDefs_.get('eSocial') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_", "self.name, self.value)) elif self.category == MixedContainer.CategorySimple: showIndent(outfile, level) outfile.write( 'model_.MixedContainer(%d, %d, \"%s\", \"%s\"),\\n'", "= Id def hasContent_(self): if ( self.ideEvento is not None or self.ideEmpregador is", "subclass is not None: return subclass(*args_, **kwargs_) if codMunic.subclass: return codMunic.subclass(*args_, **kwargs_) else:", "gds_encode(instring): if sys.version_info.major == 2: return instring.encode(ExternalEncoding) else: return instring @staticmethod def convert_unicode(instring):", "set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='paisResid') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile,", "class nmCid(GeneratedsSuper): subclass = None superclass = None def __init__(self): self.original_tagname_ = None", "namespace_='', name_='paisNascto', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib,", "None self.tpInsc = tpInsc self.nrInsc = nrInsc def factory(*args_, **kwargs_): if CurrentSubclassModule_ is", "GenerateDSNamespaceDefs_.get('codPostal') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ =", "or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='TEmprPJ') if self.hasContent_():", "node, 'paisNac') self.paisNac = paisNac_ elif nodeName_ == 'nmMae': nmMae_ = child_.text nmMae_", "cpfInst_ = child_.text cpfInst_ = self.gds_validate_string(cpfInst_, node, 'cpfInst') self.cpfInst = cpfInst_ # end", "name = class_.__name__ + 'Sub' if hasattr(module, name): return getattr(module, name) else: return", "**kwargs_) factory = staticmethod(factory) def 
# Fragmented excerpt of an auto-generated Python module (generateDS version
# 2.28b, run under Python 2.7.12 (default, Nov 19 2016) [GCC 5.4.0 20160609]).
# Generation command:
#   /usr/local/bin/generateDS --no-process-includes -o "esociallib/v2_04/evtCdBenPrRP.py" schemas/v2_04/evtCdBenPrRP.xsd
# Only shuffled fragments of the module survive in this chunk. Recoverable
# structure: the module defines element-type bindings for the eSocial v2.04
# event evtCdBenPrRP, covering pension-benefit data (docstrings in the
# fragments read "Informações relativas a pensão por morte" — information on
# death pension — and "Informações relativas a benefícios previdenciários -
# Término" — social-security benefit termination). Classes visible in the
# fragments include evtCdBenPrRP, ideBenef, infoBeneficio (with iniBeneficio,
# altBeneficio, fimBeneficio), infoPenMorte, dadosNasc, endereco
# (TEnderecoBrasil, TEnderecoExterior), TEmprPJ, TIdeEveTrab, and
# simple-element wrappers such as cpfBenef, nmBenefic, tpBenef, nrBenefic,
# dtIniBenef, dtFimBenef, mtvFim, vrBenef, tpPlanRP, tpLograd, dscLograd,
# nrLograd, complemento, bairro, cep, codMunic, uf, nmCid, codPostal,
# paisResid, paisNascto, paisNac, nmMae, nmPai, idQuota, cpfInst, tpInsc,
# nrInsc, tpAmb, procEmi, verProc, indRetif and nrRecibo. Each class follows
# the standard generateDS pattern: a static factory() constructor (with
# subclass substitution via CurrentSubclassModule_), get_*/set_* accessors,
# hasContent_(), export()/exportAttributes()/exportChildren() for XML
# serialization, and build()/buildAttributes()/buildChildren() for parsing.
tpLograd_ = child_.text", "input_data.day, input_data.hour, input_data.minute, input_data.second, ) else: _svalue = '%04d-%02d-%02dT%02d:%02d:%02d.%s' % ( input_data.year, input_data.month,", "+ namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespace_, name_='nrBenefic')", "codPostal def hasContent_(self): if ( self.paisResid is not None or self.dscLograd is not", "name_='cep', fromsubclass_=False, pretty_print=True): pass def build(self, node): already_processed = set() self.buildAttributes(node, node.attrib, already_processed)", "of the use of this # table. # A sample table is: #", "micro_seconds, ) dt = datetime_.datetime.strptime( input_data, '%Y-%m-%dT%H:%M:%S.%f') else: dt = datetime_.datetime.strptime( input_data, '%Y-%m-%dT%H:%M:%S')", "namespace_, eol_)) if self.bairro is not None: showIndent(outfile, level, pretty_print) outfile.write('<%sbairro>%s</%sbairro>%s' % (namespace_,", "= None def __init__(self, paisResid=None, dscLograd=None, nrLograd=None, complemento=None, bairro=None, nmCid=None, codPostal=None): self.original_tagname_ =", "elif nodeName_ == 'iniBeneficio': obj_ = TDadosBeneficio.factory() obj_.build(child_) self.iniBeneficio = obj_ obj_.original_tagname_ =", "pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('dscLograd') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if", "= getSubclassFromModule_( CurrentSubclassModule_, ideBenef) if subclass is not None: return subclass(*args_, **kwargs_) if", "= staticmethod(factory) def get_dadosNasc(self): return self.dadosNasc def set_dadosNasc(self, dadosNasc): self.dadosNasc = dadosNasc def", "= procEmi self.verProc = verProc def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None:", "TIdeEveTrab.factory() obj_.build(child_) self.ideEvento = obj_ obj_.original_tagname_ = 'ideEvento' elif nodeName_ == 'ideEmpregador': obj_", "codMunic) if subclass is not None: return subclass(*args_, 
**kwargs_) if codMunic.subclass: return codMunic.subclass(*args_,", "return values def gds_format_double(self, input_data, input_name=''): return '%e' % input_data def gds_validate_double(self, input_data,", "{ 'altBeneficio': TDadosBeneficio, 'brasil': TEnderecoBrasil, 'dadosBenef': TDadosBenef, 'exterior': TEnderecoExterior, 'ideEmpregador': TEmprPJ, 'ideEvento': TIdeEveTrab,", "subclass(*args_, **kwargs_) if paisResid.subclass: return paisResid.subclass(*args_, **kwargs_) else: return paisResid(*args_, **kwargs_) factory =", "outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='mtvFim'): pass def" ]
[ "distances between entities. \"\"\" from pointcloudset.diff.origin import calculate_distance_to_origin from pointcloudset.diff.plane import calculate_distance_to_plane from", "between entities. \"\"\" from pointcloudset.diff.origin import calculate_distance_to_origin from pointcloudset.diff.plane import calculate_distance_to_plane from pointcloudset.diff.point", "and distances between entities. \"\"\" from pointcloudset.diff.origin import calculate_distance_to_origin from pointcloudset.diff.plane import calculate_distance_to_plane", "pointcloudset.diff.point import calculate_distance_to_point from pointcloudset.diff.pointcloud import calculate_distance_to_pointcloud ALL_DIFFS = { \"pointcloud\": calculate_distance_to_pointcloud, \"plane\":", "calculate differences and distances between entities. \"\"\" from pointcloudset.diff.origin import calculate_distance_to_origin from pointcloudset.diff.plane", "from pointcloudset.diff.point import calculate_distance_to_point from pointcloudset.diff.pointcloud import calculate_distance_to_pointcloud ALL_DIFFS = { \"pointcloud\": calculate_distance_to_pointcloud,", "to calculate differences and distances between entities. \"\"\" from pointcloudset.diff.origin import calculate_distance_to_origin from", "from pointcloudset.diff.plane import calculate_distance_to_plane from pointcloudset.diff.point import calculate_distance_to_point from pointcloudset.diff.pointcloud import calculate_distance_to_pointcloud ALL_DIFFS", "differences and distances between entities. 
\"\"\" from pointcloudset.diff.origin import calculate_distance_to_origin from pointcloudset.diff.plane import", "calculate_distance_to_plane from pointcloudset.diff.point import calculate_distance_to_point from pointcloudset.diff.pointcloud import calculate_distance_to_pointcloud ALL_DIFFS = { \"pointcloud\":", "import calculate_distance_to_pointcloud ALL_DIFFS = { \"pointcloud\": calculate_distance_to_pointcloud, \"plane\": calculate_distance_to_plane, \"point\": calculate_distance_to_point, \"origin\": calculate_distance_to_origin,", "pointcloudset.diff.pointcloud import calculate_distance_to_pointcloud ALL_DIFFS = { \"pointcloud\": calculate_distance_to_pointcloud, \"plane\": calculate_distance_to_plane, \"point\": calculate_distance_to_point, \"origin\":", "import calculate_distance_to_origin from pointcloudset.diff.plane import calculate_distance_to_plane from pointcloudset.diff.point import calculate_distance_to_point from pointcloudset.diff.pointcloud import", "pointcloudset.diff.plane import calculate_distance_to_plane from pointcloudset.diff.point import calculate_distance_to_point from pointcloudset.diff.pointcloud import calculate_distance_to_pointcloud ALL_DIFFS =", "calculate_distance_to_pointcloud ALL_DIFFS = { \"pointcloud\": calculate_distance_to_pointcloud, \"plane\": calculate_distance_to_plane, \"point\": calculate_distance_to_point, \"origin\": calculate_distance_to_origin, }", "pointcloudset.diff.origin import calculate_distance_to_origin from pointcloudset.diff.plane import calculate_distance_to_plane from pointcloudset.diff.point import calculate_distance_to_point from pointcloudset.diff.pointcloud", "entities. 
\"\"\" from pointcloudset.diff.origin import calculate_distance_to_origin from pointcloudset.diff.plane import calculate_distance_to_plane from pointcloudset.diff.point import", "from pointcloudset.diff.pointcloud import calculate_distance_to_pointcloud ALL_DIFFS = { \"pointcloud\": calculate_distance_to_pointcloud, \"plane\": calculate_distance_to_plane, \"point\": calculate_distance_to_point,", "import calculate_distance_to_plane from pointcloudset.diff.point import calculate_distance_to_point from pointcloudset.diff.pointcloud import calculate_distance_to_pointcloud ALL_DIFFS = {", "calculate_distance_to_origin from pointcloudset.diff.plane import calculate_distance_to_plane from pointcloudset.diff.point import calculate_distance_to_point from pointcloudset.diff.pointcloud import calculate_distance_to_pointcloud", "import calculate_distance_to_point from pointcloudset.diff.pointcloud import calculate_distance_to_pointcloud ALL_DIFFS = { \"pointcloud\": calculate_distance_to_pointcloud, \"plane\": calculate_distance_to_plane,", "\"\"\" from pointcloudset.diff.origin import calculate_distance_to_origin from pointcloudset.diff.plane import calculate_distance_to_plane from pointcloudset.diff.point import calculate_distance_to_point", "<reponame>hugoledoux/pointcloudset \"\"\" Functions to calculate differences and distances between entities. \"\"\" from pointcloudset.diff.origin", "from pointcloudset.diff.origin import calculate_distance_to_origin from pointcloudset.diff.plane import calculate_distance_to_plane from pointcloudset.diff.point import calculate_distance_to_point from", "calculate_distance_to_point from pointcloudset.diff.pointcloud import calculate_distance_to_pointcloud ALL_DIFFS = { \"pointcloud\": calculate_distance_to_pointcloud, \"plane\": calculate_distance_to_plane, \"point\":", "\"\"\" Functions to calculate differences and distances between entities. 
\"\"\" from pointcloudset.diff.origin import", "Functions to calculate differences and distances between entities. \"\"\" from pointcloudset.diff.origin import calculate_distance_to_origin" ]
[]
[ "cloudamqp.get_plugins(instance_id=cloudamqp_instance[\"instance\"][\"id\"]) ``` ## Argument reference * `instance_id` - (Required) The CloudAMQP instance identifier.", "not edit by hand unless you're certain you know what you are doing!", "'GetPluginsResult', 'AwaitableGetPluginsResult', 'get_plugins', ] @pulumi.output_type class GetPluginsResult: \"\"\" A collection of values returned", "id(self) -> str: \"\"\" The provider-assigned unique ID for this managed resource. \"\"\"", "Sequence, Union, overload from . import _utilities from . import outputs __all__ =", "pulumi.set(__self__, \"id\", id) if instance_id and not isinstance(instance_id, int): raise TypeError(\"Expected argument 'instance_id'", "(Required) The CloudAMQP instance identifier. ## Attributes reference All attributes reference are computed", "consist of * `name` - The type of the recipient. * `version` -", "A collection of values returned by getPlugins. \"\"\" def __init__(__self__, id=None, instance_id=None, plugins=None):", "array of plugins. Each `plugins` block consists of the fields documented below. ***", "Argument reference * `instance_id` - (Required) The CloudAMQP instance identifier. ## Attributes reference", "Optional[pulumi.InvokeOptions] = None) -> AwaitableGetPluginsResult: \"\"\" Use this data source to retrieve information", "return pulumi.get(self, \"plugins\") class AwaitableGetPluginsResult(GetPluginsResult): # pylint: disable=using-constant-test def __await__(self): if False: yield", "``` ## Argument reference * `instance_id` - (Required) The CloudAMQP instance identifier. ##", "pulumi_cloudamqp as cloudamqp plugins = cloudamqp.get_plugins(instance_id=cloudamqp_instance[\"instance\"][\"id\"]) ``` ## Argument reference * `instance_id` -", "* `version` - Rabbit MQ version that the plugins are shipped with. 
*", "and not isinstance(plugins, list): raise TypeError(\"Expected argument 'plugins' to be a list\") pulumi.set(__self__,", "*** # *** Do not edit by hand unless you're certain you know", "for this resource. * `plugins` - An array of plugins. Each `plugins` block", "instance_id) if plugins and not isinstance(plugins, list): raise TypeError(\"Expected argument 'plugins' to be", "'AwaitableGetPluginsResult', 'get_plugins', ] @pulumi.output_type class GetPluginsResult: \"\"\" A collection of values returned by", "if plugins and not isinstance(plugins, list): raise TypeError(\"Expected argument 'plugins' to be a", "class AwaitableGetPluginsResult(GetPluginsResult): # pylint: disable=using-constant-test def __await__(self): if False: yield self return GetPluginsResult(", "returned by getPlugins. \"\"\" def __init__(__self__, id=None, instance_id=None, plugins=None): if id and not", "None: opts = pulumi.InvokeOptions() if opts.version is None: opts.version = _utilities.get_version() __ret__ =", "the CloudAMQP instance. ## Example Usage ```python import pulumi import pulumi_cloudamqp as cloudamqp", "type of the recipient. * `version` - Rabbit MQ version that the plugins", "str\") pulumi.set(__self__, \"id\", id) if instance_id and not isinstance(instance_id, int): raise TypeError(\"Expected argument", "if id and not isinstance(id, str): raise TypeError(\"Expected argument 'id' to be a", "Rabbit MQ version that the plugins are shipped with. * `description` - Description", "id=self.id, instance_id=self.instance_id, plugins=self.plugins) def get_plugins(instance_id: Optional[int] = None, opts: Optional[pulumi.InvokeOptions] = None) ->", "attributes reference are computed * `id` - The identifier for this resource. *", "The type of the recipient. * `version` - Rabbit MQ version that the", "(tfgen) Tool. *** # *** Do not edit by hand unless you're certain", "`plugins` block consists of the fields documented below. 
*** The `plugins` block consist", "MQ version that the plugins are shipped with. * `description` - Description of", "information for the plugin. ## Dependency This data source depends on CloudAMQP instance", "# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen)", "be a int\") pulumi.set(__self__, \"instance_id\", instance_id) if plugins and not isinstance(plugins, list): raise", "__all__ = [ 'GetPluginsResult', 'AwaitableGetPluginsResult', 'get_plugins', ] @pulumi.output_type class GetPluginsResult: \"\"\" A collection", "The provider-assigned unique ID for this managed resource. \"\"\" return pulumi.get(self, \"id\") @property", "identifier. ## Attributes reference All attributes reference are computed * `id` - The", "of * `name` - The type of the recipient. * `version` - Rabbit", "source depends on CloudAMQP instance identifier, `cloudamqp_instance.instance.id`. \"\"\" __args__ = dict() __args__['instanceId'] =", "TypeError(\"Expected argument 'id' to be a str\") pulumi.set(__self__, \"id\", id) if instance_id and", "instance_id=None, plugins=None): if id and not isinstance(id, str): raise TypeError(\"Expected argument 'id' to", "yield self return GetPluginsResult( id=self.id, instance_id=self.instance_id, plugins=self.plugins) def get_plugins(instance_id: Optional[int] = None, opts:", "opts is None: opts = pulumi.InvokeOptions() if opts.version is None: opts.version = _utilities.get_version()", "installed and available plugins for the CloudAMQP instance. ## Example Usage ```python import", "opts.version = _utilities.get_version() __ret__ = pulumi.runtime.invoke('cloudamqp:index/getPlugins:getPlugins', __args__, opts=opts, typ=GetPluginsResult).value return AwaitableGetPluginsResult( id=__ret__.id, instance_id=__ret__.instance_id,", "* `id` - The identifier for this resource. 
* `plugins` - An array", "\"plugins\") class AwaitableGetPluginsResult(GetPluginsResult): # pylint: disable=using-constant-test def __await__(self): if False: yield self return", "] @pulumi.output_type class GetPluginsResult: \"\"\" A collection of values returned by getPlugins. \"\"\"", "opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetPluginsResult: \"\"\" Use this data source to retrieve", "this resource. * `plugins` - An array of plugins. Each `plugins` block consists", "AwaitableGetPluginsResult(GetPluginsResult): # pylint: disable=using-constant-test def __await__(self): if False: yield self return GetPluginsResult( id=self.id,", "Terraform Bridge (tfgen) Tool. *** # *** Do not edit by hand unless", "Union, overload from . import _utilities from . import outputs __all__ = [", "## Example Usage ```python import pulumi import pulumi_cloudamqp as cloudamqp plugins = cloudamqp.get_plugins(instance_id=cloudamqp_instance[\"instance\"][\"id\"])", "values returned by getPlugins. \"\"\" def __init__(__self__, id=None, instance_id=None, plugins=None): if id and", "you're certain you know what you are doing! *** import warnings import pulumi", "id=None, instance_id=None, plugins=None): if id and not isinstance(id, str): raise TypeError(\"Expected argument 'id'", "- Rabbit MQ version that the plugins are shipped with. * `description` -", "unless you're certain you know what you are doing! *** import warnings import", "pulumi.get(self, \"id\") @property @pulumi.getter(name=\"instanceId\") def instance_id(self) -> int: return pulumi.get(self, \"instance_id\") @property @pulumi.getter", "GetPluginsResult: \"\"\" A collection of values returned by getPlugins. 
\"\"\" def __init__(__self__, id=None,", "-> int: return pulumi.get(self, \"instance_id\") @property @pulumi.getter def plugins(self) -> Sequence['outputs.GetPluginsPluginResult']: return pulumi.get(self,", "and not isinstance(id, str): raise TypeError(\"Expected argument 'id' to be a str\") pulumi.set(__self__,", "- (Required) The CloudAMQP instance identifier. ## Attributes reference All attributes reference are", "Dependency This data source depends on CloudAMQP instance identifier, `cloudamqp_instance.instance.id`. \"\"\" __args__ =", "raise TypeError(\"Expected argument 'id' to be a str\") pulumi.set(__self__, \"id\", id) if instance_id", "the Pulumi Terraform Bridge (tfgen) Tool. *** # *** Do not edit by", "consists of the fields documented below. *** The `plugins` block consist of *", "import outputs __all__ = [ 'GetPluginsResult', 'AwaitableGetPluginsResult', 'get_plugins', ] @pulumi.output_type class GetPluginsResult: \"\"\"", "\"\"\" A collection of values returned by getPlugins. \"\"\" def __init__(__self__, id=None, instance_id=None,", "* `name` - The type of the recipient. * `version` - Rabbit MQ", "for the CloudAMQP instance. ## Example Usage ```python import pulumi import pulumi_cloudamqp as", "\"instance_id\", instance_id) if plugins and not isinstance(plugins, list): raise TypeError(\"Expected argument 'plugins' to", "instance_id if opts is None: opts = pulumi.InvokeOptions() if opts.version is None: opts.version", "provider-assigned unique ID for this managed resource. \"\"\" return pulumi.get(self, \"id\") @property @pulumi.getter(name=\"instanceId\")", "fields documented below. *** The `plugins` block consist of * `name` - The", "import pulumi_cloudamqp as cloudamqp plugins = cloudamqp.get_plugins(instance_id=cloudamqp_instance[\"instance\"][\"id\"]) ``` ## Argument reference * `instance_id`", ". 
import outputs __all__ = [ 'GetPluginsResult', 'AwaitableGetPluginsResult', 'get_plugins', ] @pulumi.output_type class GetPluginsResult:", "\"\"\" return pulumi.get(self, \"id\") @property @pulumi.getter(name=\"instanceId\") def instance_id(self) -> int: return pulumi.get(self, \"instance_id\")", "identifier for this resource. * `plugins` - An array of plugins. Each `plugins`", "by hand unless you're certain you know what you are doing! *** import", "*** import warnings import pulumi import pulumi.runtime from typing import Any, Mapping, Optional,", "@pulumi.output_type class GetPluginsResult: \"\"\" A collection of values returned by getPlugins. \"\"\" def", "you are doing! *** import warnings import pulumi import pulumi.runtime from typing import", "isinstance(instance_id, int): raise TypeError(\"Expected argument 'instance_id' to be a int\") pulumi.set(__self__, \"instance_id\", instance_id)", "raise TypeError(\"Expected argument 'instance_id' to be a int\") pulumi.set(__self__, \"instance_id\", instance_id) if plugins", "Bridge (tfgen) Tool. *** # *** Do not edit by hand unless you're", "return GetPluginsResult( id=self.id, instance_id=self.instance_id, plugins=self.plugins) def get_plugins(instance_id: Optional[int] = None, opts: Optional[pulumi.InvokeOptions] =", "= [ 'GetPluginsResult', 'AwaitableGetPluginsResult', 'get_plugins', ] @pulumi.output_type class GetPluginsResult: \"\"\" A collection of", "for the plugin. ## Dependency This data source depends on CloudAMQP instance identifier,", "`name` - The type of the recipient. * `version` - Rabbit MQ version", "The CloudAMQP instance identifier. ## Attributes reference All attributes reference are computed *", "`plugins` - An array of plugins. Each `plugins` block consists of the fields", "Optional, Sequence, Union, overload from . import _utilities from . import outputs __all__", "This data source depends on CloudAMQP instance identifier, `cloudamqp_instance.instance.id`. 
\"\"\" __args__ = dict()", "`enabled` - Enable or disable information for the plugin. ## Dependency This data", "# *** Do not edit by hand unless you're certain you know what", "None) -> AwaitableGetPluginsResult: \"\"\" Use this data source to retrieve information about installed", "from typing import Any, Mapping, Optional, Sequence, Union, overload from . import _utilities", "Attributes reference All attributes reference are computed * `id` - The identifier for", "coding=utf-8 # *** WARNING: this file was generated by the Pulumi Terraform Bridge", "int\") pulumi.set(__self__, \"instance_id\", instance_id) if plugins and not isinstance(plugins, list): raise TypeError(\"Expected argument", "for this managed resource. \"\"\" return pulumi.get(self, \"id\") @property @pulumi.getter(name=\"instanceId\") def instance_id(self) ->", "data source depends on CloudAMQP instance identifier, `cloudamqp_instance.instance.id`. \"\"\" __args__ = dict() __args__['instanceId']", "*** Do not edit by hand unless you're certain you know what you", "computed * `id` - The identifier for this resource. * `plugins` - An", "`description` - Description of what the plugin does. * `enabled` - Enable or", "import Any, Mapping, Optional, Sequence, Union, overload from . import _utilities from .", "GetPluginsResult( id=self.id, instance_id=self.instance_id, plugins=self.plugins) def get_plugins(instance_id: Optional[int] = None, opts: Optional[pulumi.InvokeOptions] = None)", "* `plugins` - An array of plugins. Each `plugins` block consists of the", "## Attributes reference All attributes reference are computed * `id` - The identifier", "below. *** The `plugins` block consist of * `name` - The type of", "getPlugins. 
\"\"\" def __init__(__self__, id=None, instance_id=None, plugins=None): if id and not isinstance(id, str):", "@pulumi.getter(name=\"instanceId\") def instance_id(self) -> int: return pulumi.get(self, \"instance_id\") @property @pulumi.getter def plugins(self) ->", "'instance_id' to be a int\") pulumi.set(__self__, \"instance_id\", instance_id) if plugins and not isinstance(plugins,", "plugins(self) -> Sequence['outputs.GetPluginsPluginResult']: return pulumi.get(self, \"plugins\") class AwaitableGetPluginsResult(GetPluginsResult): # pylint: disable=using-constant-test def __await__(self):", "- An array of plugins. Each `plugins` block consists of the fields documented", "int): raise TypeError(\"Expected argument 'instance_id' to be a int\") pulumi.set(__self__, \"instance_id\", instance_id) if", "version that the plugins are shipped with. * `description` - Description of what", "information about installed and available plugins for the CloudAMQP instance. ## Example Usage", "the plugin does. * `enabled` - Enable or disable information for the plugin.", "Use this data source to retrieve information about installed and available plugins for", "Tool. *** # *** Do not edit by hand unless you're certain you", "block consist of * `name` - The type of the recipient. * `version`", "reference are computed * `id` - The identifier for this resource. * `plugins`", "plugin does. * `enabled` - Enable or disable information for the plugin. ##", "shipped with. * `description` - Description of what the plugin does. * `enabled`", "edit by hand unless you're certain you know what you are doing! ***", "resource. * `plugins` - An array of plugins. Each `plugins` block consists of", "`id` - The identifier for this resource. * `plugins` - An array of", "be a list\") pulumi.set(__self__, \"plugins\", plugins) @property @pulumi.getter def id(self) -> str: \"\"\"", "unique ID for this managed resource. 
\"\"\" return pulumi.get(self, \"id\") @property @pulumi.getter(name=\"instanceId\") def", "[ 'GetPluginsResult', 'AwaitableGetPluginsResult', 'get_plugins', ] @pulumi.output_type class GetPluginsResult: \"\"\" A collection of values", "if instance_id and not isinstance(instance_id, int): raise TypeError(\"Expected argument 'instance_id' to be a", "int: return pulumi.get(self, \"instance_id\") @property @pulumi.getter def plugins(self) -> Sequence['outputs.GetPluginsPluginResult']: return pulumi.get(self, \"plugins\")", "`plugins` block consist of * `name` - The type of the recipient. *", "or disable information for the plugin. ## Dependency This data source depends on", "of the recipient. * `version` - Rabbit MQ version that the plugins are", "documented below. *** The `plugins` block consist of * `name` - The type", "plugins=self.plugins) def get_plugins(instance_id: Optional[int] = None, opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetPluginsResult: \"\"\"", "instance identifier. ## Attributes reference All attributes reference are computed * `id` -", "class GetPluginsResult: \"\"\" A collection of values returned by getPlugins. \"\"\" def __init__(__self__,", "not isinstance(instance_id, int): raise TypeError(\"Expected argument 'instance_id' to be a int\") pulumi.set(__self__, \"instance_id\",", "are shipped with. * `description` - Description of what the plugin does. 
*", "Sequence['outputs.GetPluginsPluginResult']: return pulumi.get(self, \"plugins\") class AwaitableGetPluginsResult(GetPluginsResult): # pylint: disable=using-constant-test def __await__(self): if False:", "opts.version is None: opts.version = _utilities.get_version() __ret__ = pulumi.runtime.invoke('cloudamqp:index/getPlugins:getPlugins', __args__, opts=opts, typ=GetPluginsResult).value return", "@property @pulumi.getter(name=\"instanceId\") def instance_id(self) -> int: return pulumi.get(self, \"instance_id\") @property @pulumi.getter def plugins(self)", "the plugins are shipped with. * `description` - Description of what the plugin", "isinstance(plugins, list): raise TypeError(\"Expected argument 'plugins' to be a list\") pulumi.set(__self__, \"plugins\", plugins)", "doing! *** import warnings import pulumi import pulumi.runtime from typing import Any, Mapping,", "Description of what the plugin does. * `enabled` - Enable or disable information", "instance. ## Example Usage ```python import pulumi import pulumi_cloudamqp as cloudamqp plugins =", "to be a int\") pulumi.set(__self__, \"instance_id\", instance_id) if plugins and not isinstance(plugins, list):", "available plugins for the CloudAMQP instance. ## Example Usage ```python import pulumi import", "def __init__(__self__, id=None, instance_id=None, plugins=None): if id and not isinstance(id, str): raise TypeError(\"Expected", "= None, opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetPluginsResult: \"\"\" Use this data source", "on CloudAMQP instance identifier, `cloudamqp_instance.instance.id`. \"\"\" __args__ = dict() __args__['instanceId'] = instance_id if", "None, opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetPluginsResult: \"\"\" Use this data source to", "what the plugin does. * `enabled` - Enable or disable information for the", "source to retrieve information about installed and available plugins for the CloudAMQP instance.", "the recipient. 
* `version` - Rabbit MQ version that the plugins are shipped", "a str\") pulumi.set(__self__, \"id\", id) if instance_id and not isinstance(instance_id, int): raise TypeError(\"Expected", "of the fields documented below. *** The `plugins` block consist of * `name`", "the plugin. ## Dependency This data source depends on CloudAMQP instance identifier, `cloudamqp_instance.instance.id`.", "if opts is None: opts = pulumi.InvokeOptions() if opts.version is None: opts.version =", "Usage ```python import pulumi import pulumi_cloudamqp as cloudamqp plugins = cloudamqp.get_plugins(instance_id=cloudamqp_instance[\"instance\"][\"id\"]) ``` ##", "the fields documented below. *** The `plugins` block consist of * `name` -", "*** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool.", "plugin. ## Dependency This data source depends on CloudAMQP instance identifier, `cloudamqp_instance.instance.id`. \"\"\"", "import warnings import pulumi import pulumi.runtime from typing import Any, Mapping, Optional, Sequence,", "with. * `description` - Description of what the plugin does. * `enabled` -", "-> Sequence['outputs.GetPluginsPluginResult']: return pulumi.get(self, \"plugins\") class AwaitableGetPluginsResult(GetPluginsResult): # pylint: disable=using-constant-test def __await__(self): if", "by the Pulumi Terraform Bridge (tfgen) Tool. 
*** # *** Do not edit", "return pulumi.get(self, \"id\") @property @pulumi.getter(name=\"instanceId\") def instance_id(self) -> int: return pulumi.get(self, \"instance_id\") @property", "# pylint: disable=using-constant-test def __await__(self): if False: yield self return GetPluginsResult( id=self.id, instance_id=self.instance_id,", "pulumi.set(__self__, \"plugins\", plugins) @property @pulumi.getter def id(self) -> str: \"\"\" The provider-assigned unique", "pulumi.get(self, \"plugins\") class AwaitableGetPluginsResult(GetPluginsResult): # pylint: disable=using-constant-test def __await__(self): if False: yield self", "warnings import pulumi import pulumi.runtime from typing import Any, Mapping, Optional, Sequence, Union,", "-> AwaitableGetPluginsResult: \"\"\" Use this data source to retrieve information about installed and", "identifier, `cloudamqp_instance.instance.id`. \"\"\" __args__ = dict() __args__['instanceId'] = instance_id if opts is None:", "Pulumi Terraform Bridge (tfgen) Tool. *** # *** Do not edit by hand", "disable information for the plugin. ## Dependency This data source depends on CloudAMQP", "pulumi.runtime from typing import Any, Mapping, Optional, Sequence, Union, overload from . import", "retrieve information about installed and available plugins for the CloudAMQP instance. ## Example", "# coding=utf-8 # *** WARNING: this file was generated by the Pulumi Terraform", "'get_plugins', ] @pulumi.output_type class GetPluginsResult: \"\"\" A collection of values returned by getPlugins.", "disable=using-constant-test def __await__(self): if False: yield self return GetPluginsResult( id=self.id, instance_id=self.instance_id, plugins=self.plugins) def", "CloudAMQP instance identifier. 
## Attributes reference All attributes reference are computed * `id`", "= pulumi.InvokeOptions() if opts.version is None: opts.version = _utilities.get_version() __ret__ = pulumi.runtime.invoke('cloudamqp:index/getPlugins:getPlugins', __args__,", "certain you know what you are doing! *** import warnings import pulumi import", "outputs __all__ = [ 'GetPluginsResult', 'AwaitableGetPluginsResult', 'get_plugins', ] @pulumi.output_type class GetPluginsResult: \"\"\" A", "__args__['instanceId'] = instance_id if opts is None: opts = pulumi.InvokeOptions() if opts.version is", "data source to retrieve information about installed and available plugins for the CloudAMQP", "__args__ = dict() __args__['instanceId'] = instance_id if opts is None: opts = pulumi.InvokeOptions()", "reference All attributes reference are computed * `id` - The identifier for this", "pulumi.set(__self__, \"instance_id\", instance_id) if plugins and not isinstance(plugins, list): raise TypeError(\"Expected argument 'plugins'", "An array of plugins. Each `plugins` block consists of the fields documented below.", "## Argument reference * `instance_id` - (Required) The CloudAMQP instance identifier. ## Attributes", "-> str: \"\"\" The provider-assigned unique ID for this managed resource. \"\"\" return", "depends on CloudAMQP instance identifier, `cloudamqp_instance.instance.id`. \"\"\" __args__ = dict() __args__['instanceId'] = instance_id", "CloudAMQP instance. 
## Example Usage ```python import pulumi import pulumi_cloudamqp as cloudamqp plugins", "TypeError(\"Expected argument 'plugins' to be a list\") pulumi.set(__self__, \"plugins\", plugins) @property @pulumi.getter def", "def id(self) -> str: \"\"\" The provider-assigned unique ID for this managed resource.", "pulumi import pulumi_cloudamqp as cloudamqp plugins = cloudamqp.get_plugins(instance_id=cloudamqp_instance[\"instance\"][\"id\"]) ``` ## Argument reference *", "pulumi import pulumi.runtime from typing import Any, Mapping, Optional, Sequence, Union, overload from", "ID for this managed resource. \"\"\" return pulumi.get(self, \"id\") @property @pulumi.getter(name=\"instanceId\") def instance_id(self)", "import pulumi.runtime from typing import Any, Mapping, Optional, Sequence, Union, overload from .", "does. * `enabled` - Enable or disable information for the plugin. ## Dependency", "as cloudamqp plugins = cloudamqp.get_plugins(instance_id=cloudamqp_instance[\"instance\"][\"id\"]) ``` ## Argument reference * `instance_id` - (Required)", "return pulumi.get(self, \"instance_id\") @property @pulumi.getter def plugins(self) -> Sequence['outputs.GetPluginsPluginResult']: return pulumi.get(self, \"plugins\") class", "str: \"\"\" The provider-assigned unique ID for this managed resource. \"\"\" return pulumi.get(self,", "plugins for the CloudAMQP instance. ## Example Usage ```python import pulumi import pulumi_cloudamqp", "* `instance_id` - (Required) The CloudAMQP instance identifier. ## Attributes reference All attributes", "@property @pulumi.getter def id(self) -> str: \"\"\" The provider-assigned unique ID for this", "managed resource. \"\"\" return pulumi.get(self, \"id\") @property @pulumi.getter(name=\"instanceId\") def instance_id(self) -> int: return", "Optional[int] = None, opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetPluginsResult: \"\"\" Use this data", ". import _utilities from . 
import outputs __all__ = [ 'GetPluginsResult', 'AwaitableGetPluginsResult', 'get_plugins',", "<filename>sdk/python/pulumi_cloudamqp/get_plugins.py<gh_stars>1-10 # coding=utf-8 # *** WARNING: this file was generated by the Pulumi", "str): raise TypeError(\"Expected argument 'id' to be a str\") pulumi.set(__self__, \"id\", id) if", "know what you are doing! *** import warnings import pulumi import pulumi.runtime from", "\"\"\" __args__ = dict() __args__['instanceId'] = instance_id if opts is None: opts =", "if opts.version is None: opts.version = _utilities.get_version() __ret__ = pulumi.runtime.invoke('cloudamqp:index/getPlugins:getPlugins', __args__, opts=opts, typ=GetPluginsResult).value", "* `enabled` - Enable or disable information for the plugin. ## Dependency This", "CloudAMQP instance identifier, `cloudamqp_instance.instance.id`. \"\"\" __args__ = dict() __args__['instanceId'] = instance_id if opts", "The identifier for this resource. * `plugins` - An array of plugins. Each", "instance_id(self) -> int: return pulumi.get(self, \"instance_id\") @property @pulumi.getter def plugins(self) -> Sequence['outputs.GetPluginsPluginResult']: return", "'id' to be a str\") pulumi.set(__self__, \"id\", id) if instance_id and not isinstance(instance_id,", "list\") pulumi.set(__self__, \"plugins\", plugins) @property @pulumi.getter def id(self) -> str: \"\"\" The provider-assigned", "this managed resource. \"\"\" return pulumi.get(self, \"id\") @property @pulumi.getter(name=\"instanceId\") def instance_id(self) -> int:", "opts = pulumi.InvokeOptions() if opts.version is None: opts.version = _utilities.get_version() __ret__ = pulumi.runtime.invoke('cloudamqp:index/getPlugins:getPlugins',", "- Enable or disable information for the plugin. ## Dependency This data source", "instance identifier, `cloudamqp_instance.instance.id`. 
\"\"\" __args__ = dict() __args__['instanceId'] = instance_id if opts is", "to be a list\") pulumi.set(__self__, \"plugins\", plugins) @property @pulumi.getter def id(self) -> str:", "reference * `instance_id` - (Required) The CloudAMQP instance identifier. ## Attributes reference All", "\"\"\" Use this data source to retrieve information about installed and available plugins", "pylint: disable=using-constant-test def __await__(self): if False: yield self return GetPluginsResult( id=self.id, instance_id=self.instance_id, plugins=self.plugins)", "get_plugins(instance_id: Optional[int] = None, opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetPluginsResult: \"\"\" Use this", "this data source to retrieve information about installed and available plugins for the", "id and not isinstance(id, str): raise TypeError(\"Expected argument 'id' to be a str\")", "AwaitableGetPluginsResult: \"\"\" Use this data source to retrieve information about installed and available", "block consists of the fields documented below. *** The `plugins` block consist of", "Example Usage ```python import pulumi import pulumi_cloudamqp as cloudamqp plugins = cloudamqp.get_plugins(instance_id=cloudamqp_instance[\"instance\"][\"id\"]) ```", "typing import Any, Mapping, Optional, Sequence, Union, overload from . 
import _utilities from", "= cloudamqp.get_plugins(instance_id=cloudamqp_instance[\"instance\"][\"id\"]) ``` ## Argument reference * `instance_id` - (Required) The CloudAMQP instance", "is None: opts.version = _utilities.get_version() __ret__ = pulumi.runtime.invoke('cloudamqp:index/getPlugins:getPlugins', __args__, opts=opts, typ=GetPluginsResult).value return AwaitableGetPluginsResult(", "Do not edit by hand unless you're certain you know what you are", "__await__(self): if False: yield self return GetPluginsResult( id=self.id, instance_id=self.instance_id, plugins=self.plugins) def get_plugins(instance_id: Optional[int]", "\"instance_id\") @property @pulumi.getter def plugins(self) -> Sequence['outputs.GetPluginsPluginResult']: return pulumi.get(self, \"plugins\") class AwaitableGetPluginsResult(GetPluginsResult): #", "Enable or disable information for the plugin. ## Dependency This data source depends", "are doing! *** import warnings import pulumi import pulumi.runtime from typing import Any,", "and not isinstance(instance_id, int): raise TypeError(\"Expected argument 'instance_id' to be a int\") pulumi.set(__self__,", "generated by the Pulumi Terraform Bridge (tfgen) Tool. *** # *** Do not", "WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***", "recipient. * `version` - Rabbit MQ version that the plugins are shipped with.", "False: yield self return GetPluginsResult( id=self.id, instance_id=self.instance_id, plugins=self.plugins) def get_plugins(instance_id: Optional[int] = None,", "is None: opts = pulumi.InvokeOptions() if opts.version is None: opts.version = _utilities.get_version() __ret__", "overload from . import _utilities from . 
import outputs __all__ = [ 'GetPluginsResult',", "self return GetPluginsResult( id=self.id, instance_id=self.instance_id, plugins=self.plugins) def get_plugins(instance_id: Optional[int] = None, opts: Optional[pulumi.InvokeOptions]", "def get_plugins(instance_id: Optional[int] = None, opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetPluginsResult: \"\"\" Use", "`cloudamqp_instance.instance.id`. \"\"\" __args__ = dict() __args__['instanceId'] = instance_id if opts is None: opts", "that the plugins are shipped with. * `description` - Description of what the", "\"\"\" def __init__(__self__, id=None, instance_id=None, plugins=None): if id and not isinstance(id, str): raise", "of values returned by getPlugins. \"\"\" def __init__(__self__, id=None, instance_id=None, plugins=None): if id", "Each `plugins` block consists of the fields documented below. *** The `plugins` block", "about installed and available plugins for the CloudAMQP instance. ## Example Usage ```python", "```python import pulumi import pulumi_cloudamqp as cloudamqp plugins = cloudamqp.get_plugins(instance_id=cloudamqp_instance[\"instance\"][\"id\"]) ``` ## Argument", "raise TypeError(\"Expected argument 'plugins' to be a list\") pulumi.set(__self__, \"plugins\", plugins) @property @pulumi.getter", "## Dependency This data source depends on CloudAMQP instance identifier, `cloudamqp_instance.instance.id`. \"\"\" __args__", "`instance_id` - (Required) The CloudAMQP instance identifier. 
## Attributes reference All attributes reference", "argument 'instance_id' to be a int\") pulumi.set(__self__, \"instance_id\", instance_id) if plugins and not", "@pulumi.getter def plugins(self) -> Sequence['outputs.GetPluginsPluginResult']: return pulumi.get(self, \"plugins\") class AwaitableGetPluginsResult(GetPluginsResult): # pylint: disable=using-constant-test", "def __await__(self): if False: yield self return GetPluginsResult( id=self.id, instance_id=self.instance_id, plugins=self.plugins) def get_plugins(instance_id:", "plugins=None): if id and not isinstance(id, str): raise TypeError(\"Expected argument 'id' to be", "= instance_id if opts is None: opts = pulumi.InvokeOptions() if opts.version is None:", "*** The `plugins` block consist of * `name` - The type of the", "dict() __args__['instanceId'] = instance_id if opts is None: opts = pulumi.InvokeOptions() if opts.version", "be a str\") pulumi.set(__self__, \"id\", id) if instance_id and not isinstance(instance_id, int): raise", "plugins) @property @pulumi.getter def id(self) -> str: \"\"\" The provider-assigned unique ID for", "= _utilities.get_version() __ret__ = pulumi.runtime.invoke('cloudamqp:index/getPlugins:getPlugins', __args__, opts=opts, typ=GetPluginsResult).value return AwaitableGetPluginsResult( id=__ret__.id, instance_id=__ret__.instance_id, plugins=__ret__.plugins)", "TypeError(\"Expected argument 'instance_id' to be a int\") pulumi.set(__self__, \"instance_id\", instance_id) if plugins and", "argument 'id' to be a str\") pulumi.set(__self__, \"id\", id) if instance_id and not", "not isinstance(plugins, list): raise TypeError(\"Expected argument 'plugins' to be a list\") pulumi.set(__self__, \"plugins\",", "The `plugins` block consist of * `name` - The type of the recipient.", "Any, Mapping, Optional, Sequence, Union, overload from . import _utilities from . import", "_utilities from . 
import outputs __all__ = [ 'GetPluginsResult', 'AwaitableGetPluginsResult', 'get_plugins', ] @pulumi.output_type", "pulumi.InvokeOptions() if opts.version is None: opts.version = _utilities.get_version() __ret__ = pulumi.runtime.invoke('cloudamqp:index/getPlugins:getPlugins', __args__, opts=opts,", "by getPlugins. \"\"\" def __init__(__self__, id=None, instance_id=None, plugins=None): if id and not isinstance(id,", "None: opts.version = _utilities.get_version() __ret__ = pulumi.runtime.invoke('cloudamqp:index/getPlugins:getPlugins', __args__, opts=opts, typ=GetPluginsResult).value return AwaitableGetPluginsResult( id=__ret__.id,", "hand unless you're certain you know what you are doing! *** import warnings", "def plugins(self) -> Sequence['outputs.GetPluginsPluginResult']: return pulumi.get(self, \"plugins\") class AwaitableGetPluginsResult(GetPluginsResult): # pylint: disable=using-constant-test def", "this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. *** #", "if False: yield self return GetPluginsResult( id=self.id, instance_id=self.instance_id, plugins=self.plugins) def get_plugins(instance_id: Optional[int] =", "\"\"\" The provider-assigned unique ID for this managed resource. \"\"\" return pulumi.get(self, \"id\")", "@pulumi.getter def id(self) -> str: \"\"\" The provider-assigned unique ID for this managed", "instance_id=self.instance_id, plugins=self.plugins) def get_plugins(instance_id: Optional[int] = None, opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetPluginsResult:", "@property @pulumi.getter def plugins(self) -> Sequence['outputs.GetPluginsPluginResult']: return pulumi.get(self, \"plugins\") class AwaitableGetPluginsResult(GetPluginsResult): # pylint:", "from . 
import outputs __all__ = [ 'GetPluginsResult', 'AwaitableGetPluginsResult', 'get_plugins', ] @pulumi.output_type class", "list): raise TypeError(\"Expected argument 'plugins' to be a list\") pulumi.set(__self__, \"plugins\", plugins) @property", "a int\") pulumi.set(__self__, \"instance_id\", instance_id) if plugins and not isinstance(plugins, list): raise TypeError(\"Expected", "plugins. Each `plugins` block consists of the fields documented below. *** The `plugins`", "from . import _utilities from . import outputs __all__ = [ 'GetPluginsResult', 'AwaitableGetPluginsResult',", "argument 'plugins' to be a list\") pulumi.set(__self__, \"plugins\", plugins) @property @pulumi.getter def id(self)", "are computed * `id` - The identifier for this resource. * `plugins` -", "`version` - Rabbit MQ version that the plugins are shipped with. * `description`", "what you are doing! *** import warnings import pulumi import pulumi.runtime from typing", "id) if instance_id and not isinstance(instance_id, int): raise TypeError(\"Expected argument 'instance_id' to be", "- The type of the recipient. * `version` - Rabbit MQ version that", "you know what you are doing! *** import warnings import pulumi import pulumi.runtime", "* `description` - Description of what the plugin does. * `enabled` - Enable", "__init__(__self__, id=None, instance_id=None, plugins=None): if id and not isinstance(id, str): raise TypeError(\"Expected argument", "plugins are shipped with. * `description` - Description of what the plugin does.", "and available plugins for the CloudAMQP instance. 
## Example Usage ```python import pulumi", "a list\") pulumi.set(__self__, \"plugins\", plugins) @property @pulumi.getter def id(self) -> str: \"\"\" The", "All attributes reference are computed * `id` - The identifier for this resource.", "= dict() __args__['instanceId'] = instance_id if opts is None: opts = pulumi.InvokeOptions() if", "\"id\") @property @pulumi.getter(name=\"instanceId\") def instance_id(self) -> int: return pulumi.get(self, \"instance_id\") @property @pulumi.getter def", "plugins and not isinstance(plugins, list): raise TypeError(\"Expected argument 'plugins' to be a list\")", "import pulumi import pulumi_cloudamqp as cloudamqp plugins = cloudamqp.get_plugins(instance_id=cloudamqp_instance[\"instance\"][\"id\"]) ``` ## Argument reference", "resource. \"\"\" return pulumi.get(self, \"id\") @property @pulumi.getter(name=\"instanceId\") def instance_id(self) -> int: return pulumi.get(self,", "of what the plugin does. * `enabled` - Enable or disable information for", "instance_id and not isinstance(instance_id, int): raise TypeError(\"Expected argument 'instance_id' to be a int\")", "cloudamqp plugins = cloudamqp.get_plugins(instance_id=cloudamqp_instance[\"instance\"][\"id\"]) ``` ## Argument reference * `instance_id` - (Required) The", "file was generated by the Pulumi Terraform Bridge (tfgen) Tool. *** # ***", "= None) -> AwaitableGetPluginsResult: \"\"\" Use this data source to retrieve information about", "import _utilities from . import outputs __all__ = [ 'GetPluginsResult', 'AwaitableGetPluginsResult', 'get_plugins', ]", "of plugins. Each `plugins` block consists of the fields documented below. *** The", "not isinstance(id, str): raise TypeError(\"Expected argument 'id' to be a str\") pulumi.set(__self__, \"id\",", "\"id\", id) if instance_id and not isinstance(instance_id, int): raise TypeError(\"Expected argument 'instance_id' to", "collection of values returned by getPlugins. 
\"\"\" def __init__(__self__, id=None, instance_id=None, plugins=None): if", "- The identifier for this resource. * `plugins` - An array of plugins.", "isinstance(id, str): raise TypeError(\"Expected argument 'id' to be a str\") pulumi.set(__self__, \"id\", id)", "was generated by the Pulumi Terraform Bridge (tfgen) Tool. *** # *** Do", "import pulumi import pulumi.runtime from typing import Any, Mapping, Optional, Sequence, Union, overload", "pulumi.get(self, \"instance_id\") @property @pulumi.getter def plugins(self) -> Sequence['outputs.GetPluginsPluginResult']: return pulumi.get(self, \"plugins\") class AwaitableGetPluginsResult(GetPluginsResult):", "\"plugins\", plugins) @property @pulumi.getter def id(self) -> str: \"\"\" The provider-assigned unique ID", "def instance_id(self) -> int: return pulumi.get(self, \"instance_id\") @property @pulumi.getter def plugins(self) -> Sequence['outputs.GetPluginsPluginResult']:", "- Description of what the plugin does. * `enabled` - Enable or disable", "plugins = cloudamqp.get_plugins(instance_id=cloudamqp_instance[\"instance\"][\"id\"]) ``` ## Argument reference * `instance_id` - (Required) The CloudAMQP", "'plugins' to be a list\") pulumi.set(__self__, \"plugins\", plugins) @property @pulumi.getter def id(self) ->", "to be a str\") pulumi.set(__self__, \"id\", id) if instance_id and not isinstance(instance_id, int):", "to retrieve information about installed and available plugins for the CloudAMQP instance. ##", "Mapping, Optional, Sequence, Union, overload from . import _utilities from . import outputs" ]
[ "NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE", "user uses a math function with a wrong input value # e.g. math.sqrt(-10)", "\"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT", "SyntaxError # when the user enters an invalid expression code = compile(expression, \"<string>\",", "TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.", "WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT", "OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN", "IN THE # SOFTWARE. \"\"\"MathREPL, a math expression evaluator using Python eval() and", "CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE #", "the math module.\"\"\" from . import ALLOWED_NAMES def evaluate(expression): \"\"\"Evaluate a math expression.\"\"\"", "not allowed\") # Evaluate the expression eventually raising a ValueError # when the", "EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,", "and the math module.\"\"\" from . import ALLOWED_NAMES def evaluate(expression): \"\"\"Evaluate a math", "OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION", "ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN", "THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR", "OR OTHER DEALINGS IN THE # SOFTWARE. \"\"\"MathREPL, a math expression evaluator using", "def evaluate(expression): \"\"\"Evaluate a math expression.\"\"\" # Compile the expression eventually raising a", "LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND", "SOFTWARE. \"\"\"MathREPL, a math expression evaluator using Python eval() and the math module.\"\"\"", "of '{name}' is not allowed\") # Evaluate the expression eventually raising a ValueError", "math expression evaluator using Python eval() and the math module.\"\"\" from . 
import", "THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR", "A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR", "SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR #", "FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF", "IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT", "EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,", "expression evaluator using Python eval() and the math module.\"\"\" from . import ALLOWED_NAMES", "Evaluate the expression eventually raising a ValueError # when the user uses a", "module.\"\"\" from . import ALLOWED_NAMES def evaluate(expression): \"\"\"Evaluate a math expression.\"\"\" # Compile", "a math function with a wrong input value # e.g. math.sqrt(-10) return eval(code,", "the user enters an invalid expression code = compile(expression, \"<string>\", \"eval\") # Validate", "function with a wrong input value # e.g. math.sqrt(-10) return eval(code, {\"__builtins__\": {}},", "raise NameError(f\"The use of '{name}' is not allowed\") # Evaluate the expression eventually", "the expression eventually raising a ValueError # when the user uses a math", "code.co_names: if name not in ALLOWED_NAMES: raise NameError(f\"The use of '{name}' is not", "PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS", "OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING", "ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE", "USE OR OTHER DEALINGS IN THE # SOFTWARE. \"\"\"MathREPL, a math expression evaluator", "evaluator using Python eval() and the math module.\"\"\" from . 
import ALLOWED_NAMES def", "in code.co_names: if name not in ALLOWED_NAMES: raise NameError(f\"The use of '{name}' is", "OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR", "\"eval\") # Validate allowed names for name in code.co_names: if name not in", "when the user enters an invalid expression code = compile(expression, \"<string>\", \"eval\") #", "WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE.", "INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A", "if name not in ALLOWED_NAMES: raise NameError(f\"The use of '{name}' is not allowed\")", "# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE", "AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR", "OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,", "# when the user enters an invalid expression code = compile(expression, \"<string>\", \"eval\")", "Validate allowed names for name in code.co_names: if name not in ALLOWED_NAMES: raise", "ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT,", "DEALINGS IN THE # SOFTWARE. \"\"\"MathREPL, a math expression evaluator using Python eval()", "\"\"\"Evaluate a math expression.\"\"\" # Compile the expression eventually raising a SyntaxError #", "user enters an invalid expression code = compile(expression, \"<string>\", \"eval\") # Validate allowed", "expression code = compile(expression, \"<string>\", \"eval\") # Validate allowed names for name in", "CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH", "NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE", "FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS", "eventually raising a SyntaxError # when the user enters an invalid expression code", "THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
\"\"\"MathREPL, a math expression", "MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL", "Python eval() and the math module.\"\"\" from . import ALLOWED_NAMES def evaluate(expression): \"\"\"Evaluate", "PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT", "BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR", "SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. \"\"\"MathREPL, a", "LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION", "a SyntaxError # when the user enters an invalid expression code = compile(expression,", "FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE", "OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT", "PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING", "from . import ALLOWED_NAMES def evaluate(expression): \"\"\"Evaluate a math expression.\"\"\" # Compile the", "NameError(f\"The use of '{name}' is not allowed\") # Evaluate the expression eventually raising", "uses a math function with a wrong input value # e.g. math.sqrt(-10) return", "math function with a wrong input value # e.g. math.sqrt(-10) return eval(code, {\"__builtins__\":", "when the user uses a math function with a wrong input value #", "# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS", "THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
\"\"\"MathREPL,", "IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE", "for name in code.co_names: if name not in ALLOWED_NAMES: raise NameError(f\"The use of", "TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE", "# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS", "an invalid expression code = compile(expression, \"<string>\", \"eval\") # Validate allowed names for", "# Compile the expression eventually raising a SyntaxError # when the user enters", "a ValueError # when the user uses a math function with a wrong", "expression eventually raising a SyntaxError # when the user enters an invalid expression", "is not allowed\") # Evaluate the expression eventually raising a ValueError # when", "OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, #", "math expression.\"\"\" # Compile the expression eventually raising a SyntaxError # when the", "THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN", "ValueError # when the user uses a math function with a wrong input", "IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR", "\"\"\"MathREPL, a math expression evaluator using Python eval() and the math module.\"\"\" from", "invalid expression code = compile(expression, \"<string>\", \"eval\") # Validate allowed names for name", "ALLOWED_NAMES: raise NameError(f\"The use of '{name}' is not allowed\") # Evaluate the expression", "OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE", "LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, #", "FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE #", "SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES", "import ALLOWED_NAMES def evaluate(expression): \"\"\"Evaluate a math expression.\"\"\" # Compile the expression eventually", "BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN", "the user uses a math function with a wrong input value # e.g.", "AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE", "IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED,", "raising a SyntaxError # when the user enters an invalid expression code =", "# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR", "OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER", "eval() and the math module.\"\"\" from . import ALLOWED_NAMES def evaluate(expression): \"\"\"Evaluate a", "HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN", "CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT", "\"<string>\", \"eval\") # Validate allowed names for name in code.co_names: if name not", "OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE", "name not in ALLOWED_NAMES: raise NameError(f\"The use of '{name}' is not allowed\") #", "# SOFTWARE. \"\"\"MathREPL, a math expression evaluator using Python eval() and the math", "raising a ValueError # when the user uses a math function with a", "AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER #", "the expression eventually raising a SyntaxError # when the user enters an invalid", "OTHER DEALINGS IN THE # SOFTWARE. 
\"\"\"MathREPL, a math expression evaluator using Python", "IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR", "# Validate allowed names for name in code.co_names: if name not in ALLOWED_NAMES:", "eventually raising a ValueError # when the user uses a math function with", "= compile(expression, \"<string>\", \"eval\") # Validate allowed names for name in code.co_names: if", "# Evaluate the expression eventually raising a ValueError # when the user uses", "use of '{name}' is not allowed\") # Evaluate the expression eventually raising a", "# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER", "compile(expression, \"<string>\", \"eval\") # Validate allowed names for name in code.co_names: if name", "code = compile(expression, \"<string>\", \"eval\") # Validate allowed names for name in code.co_names:", "allowed names for name in code.co_names: if name not in ALLOWED_NAMES: raise NameError(f\"The", "WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO", "# when the user uses a math function with a wrong input value", "OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY,", "OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. \"\"\"MathREPL, a math", "enters an invalid expression code = compile(expression, \"<string>\", \"eval\") # Validate allowed names", "OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS", "names for name in code.co_names: if name not in ALLOWED_NAMES: raise NameError(f\"The use", "name in code.co_names: if name not in ALLOWED_NAMES: raise NameError(f\"The use of '{name}'", "evaluate(expression): \"\"\"Evaluate a math expression.\"\"\" # Compile the expression eventually raising a SyntaxError", ". 
import ALLOWED_NAMES def evaluate(expression): \"\"\"Evaluate a math expression.\"\"\" # Compile the expression", "WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO", "in ALLOWED_NAMES: raise NameError(f\"The use of '{name}' is not allowed\") # Evaluate the", "DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR", "a math expression evaluator using Python eval() and the math module.\"\"\" from .", "math module.\"\"\" from . import ALLOWED_NAMES def evaluate(expression): \"\"\"Evaluate a math expression.\"\"\" #", "Compile the expression eventually raising a SyntaxError # when the user enters an", "'{name}' is not allowed\") # Evaluate the expression eventually raising a ValueError #", "WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED", "KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF", "using Python eval() and the math module.\"\"\" from . import ALLOWED_NAMES def evaluate(expression):", "ALLOWED_NAMES def evaluate(expression): \"\"\"Evaluate a math expression.\"\"\" # Compile the expression eventually raising", "ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES", "NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY", "COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER", "# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,", "allowed\") # Evaluate the expression eventually raising a ValueError # when the user", "a math expression.\"\"\" # Compile the expression eventually raising a SyntaxError # when", "expression.\"\"\" # Compile the expression eventually raising a SyntaxError # when the user", "THE # SOFTWARE. \"\"\"MathREPL, a math expression evaluator using Python eval() and the", "expression eventually raising a ValueError # when the user uses a math function", "IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF", "with a wrong input value # e.g. 
math.sqrt(-10) return eval(code, {\"__builtins__\": {}}, ALLOWED_NAMES)", "not in ALLOWED_NAMES: raise NameError(f\"The use of '{name}' is not allowed\") # Evaluate" ]
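The evaluate() helper above compiles an expression, whitelists the names it references via code.co_names, and evaluates it with builtins stripped. A minimal self-contained sketch of the same pattern; here ALLOWED_NAMES is rebuilt locally from math's public names as a stand-in for the package-level constant the real module imports:

```python
import math

# Stand-in for the package-level ALLOWED_NAMES constant: the math module's
# public names (sqrt, pi, sin, ...) mapped to their values.
ALLOWED_NAMES = {k: v for k, v in vars(math).items() if not k.startswith("_")}


def evaluate(expression):
    """Compile, validate referenced names, then eval with builtins stripped."""
    code = compile(expression, "<string>", "eval")
    # co_names lists every global name the compiled expression touches,
    # so anything outside the whitelist is rejected before evaluation.
    for name in code.co_names:
        if name not in ALLOWED_NAMES:
            raise NameError(f"The use of '{name}' is not allowed")
    return eval(code, {"__builtins__": {}}, ALLOWED_NAMES)


print(evaluate("sqrt(16) + pi"))  # → 7.141592653589793
try:
    evaluate("__import__('os')")  # blocked: '__import__' is not whitelisted
except NameError as exc:
    print(exc)
```

Stripping `__builtins__` alone is not a complete sandbox; it is the name whitelist that does the real work here, since even attribute-free expressions can only reference whitelisted globals.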
[ "from __future__ import unicode_literals from django.db import migrations, models import django.db.models.deletion class Migration(migrations.Migration):", "django.db import migrations, models import django.db.models.deletion class Migration(migrations.Migration): dependencies = [ ('measure_mate', '0009_auto_20160124_1245'),", "Django 1.9.1 on 2016-01-30 00:57 from __future__ import unicode_literals from django.db import migrations,", "import unicode_literals from django.db import migrations, models import django.db.models.deletion class Migration(migrations.Migration): dependencies =", "models import django.db.models.deletion class Migration(migrations.Migration): dependencies = [ ('measure_mate', '0009_auto_20160124_1245'), ] operations =", "[ ('measure_mate', '0009_auto_20160124_1245'), ] operations = [ migrations.AddField( model_name='measurement', name='target_rating', field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE,", "unicode_literals from django.db import migrations, models import django.db.models.deletion class Migration(migrations.Migration): dependencies = [", "00:57 from __future__ import unicode_literals from django.db import migrations, models import django.db.models.deletion class", "__future__ import unicode_literals from django.db import migrations, models import django.db.models.deletion class Migration(migrations.Migration): dependencies", "import migrations, models import django.db.models.deletion class Migration(migrations.Migration): dependencies = [ ('measure_mate', '0009_auto_20160124_1245'), ]", "operations = [ migrations.AddField( model_name='measurement', name='target_rating', field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='target_measurements', to='measure_mate.Rating'), ), ]", "coding: utf-8 -*- # Generated by Django 1.9.1 on 2016-01-30 00:57 from __future__", "] operations = [ migrations.AddField( model_name='measurement', 
name='target_rating', field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='target_measurements', to='measure_mate.Rating'), ),", "1.9.1 on 2016-01-30 00:57 from __future__ import unicode_literals from django.db import migrations, models", "Generated by Django 1.9.1 on 2016-01-30 00:57 from __future__ import unicode_literals from django.db", "from django.db import migrations, models import django.db.models.deletion class Migration(migrations.Migration): dependencies = [ ('measure_mate',", "-*- # Generated by Django 1.9.1 on 2016-01-30 00:57 from __future__ import unicode_literals", "2016-01-30 00:57 from __future__ import unicode_literals from django.db import migrations, models import django.db.models.deletion", "('measure_mate', '0009_auto_20160124_1245'), ] operations = [ migrations.AddField( model_name='measurement', name='target_rating', field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='target_measurements',", "migrations, models import django.db.models.deletion class Migration(migrations.Migration): dependencies = [ ('measure_mate', '0009_auto_20160124_1245'), ] operations", "import django.db.models.deletion class Migration(migrations.Migration): dependencies = [ ('measure_mate', '0009_auto_20160124_1245'), ] operations = [", "dependencies = [ ('measure_mate', '0009_auto_20160124_1245'), ] operations = [ migrations.AddField( model_name='measurement', name='target_rating', field=models.ForeignKey(blank=True,", "class Migration(migrations.Migration): dependencies = [ ('measure_mate', '0009_auto_20160124_1245'), ] operations = [ migrations.AddField( model_name='measurement',", "# -*- coding: utf-8 -*- # Generated by Django 1.9.1 on 2016-01-30 00:57", "on 2016-01-30 00:57 from __future__ import unicode_literals from django.db import migrations, models import", "by Django 1.9.1 on 2016-01-30 00:57 from __future__ import unicode_literals from django.db 
import", "-*- coding: utf-8 -*- # Generated by Django 1.9.1 on 2016-01-30 00:57 from", "# Generated by Django 1.9.1 on 2016-01-30 00:57 from __future__ import unicode_literals from", "utf-8 -*- # Generated by Django 1.9.1 on 2016-01-30 00:57 from __future__ import", "= [ ('measure_mate', '0009_auto_20160124_1245'), ] operations = [ migrations.AddField( model_name='measurement', name='target_rating', field=models.ForeignKey(blank=True, null=True,", "Migration(migrations.Migration): dependencies = [ ('measure_mate', '0009_auto_20160124_1245'), ] operations = [ migrations.AddField( model_name='measurement', name='target_rating',", "django.db.models.deletion class Migration(migrations.Migration): dependencies = [ ('measure_mate', '0009_auto_20160124_1245'), ] operations = [ migrations.AddField(", "'0009_auto_20160124_1245'), ] operations = [ migrations.AddField( model_name='measurement', name='target_rating', field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='target_measurements', to='measure_mate.Rating')," ]
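The migrations.AddField operation above implies a model change along these lines. This is a hypothetical sketch, not the project's actual models.py: only the target_rating field is determined by the migration itself, and the Measurement model's other fields are unknown here.

```python
# Hypothetical excerpt of measure_mate/models.py after this migration.
# Only target_rating is derived from the migration; everything else is assumed.
from django.db import models


class Measurement(models.Model):
    target_rating = models.ForeignKey(
        'measure_mate.Rating',
        blank=True,
        null=True,
        on_delete=models.CASCADE,
        related_name='target_measurements',
    )
```

With related_name='target_measurements', a Rating instance can reach the measurements that target it via `rating.target_measurements.all()`.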
'''
Created on Aug 26, 2014

@author: preethi
'''
import os
import sys
import shutil
sys.path.insert(0, os.path.abspath(os.path.dirname(__file__) + '/' + '../..'))  # trick to make it run from CLI

import unittest
import sqlalchemy
from sqlalchemy.orm import sessionmaker
import pydot
from jnpr.openclos.model import Pod, Device, InterfaceDefinition, InterfaceLogical, Interface, Base
from jnpr.openclos.writer import WriterBase, ConfigWriter, CablingPlanWriter
from jnpr.openclos.util import configLocation
from jnpr.openclos.dao import Dao
from test_model import createPod, createPodDevice
from flexmock import flexmock


class TestWriterBase(unittest.TestCase):

    def setUp(self):
        self.conf = {}
        self.conf['outputDir'] = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'out')
        self.conf['dbUrl'] = 'sqlite:///'
        self.conf['DOT'] = {'ranksep': '5 equally', 'colors': ['red', 'green', 'blue']}
        self.conf['deviceFamily'] = {
            "QFX5100-24Q": {"ports": 'et-0/0/[0-23]'},
            "QFX5100-48S": {"uplinkPorts": 'et-0/0/[48-53]', "downlinkPorts": 'xe-0/0/[0-47]'}
        }
        self.dao = Dao(self.conf)
        ''' Deletes 'out' folder under test dir'''
        shutil.rmtree(self.conf['outputDir'], ignore_errors=True)

    def tearDown(self):
        ''' Deletes 'out' folder under test dir'''
        shutil.rmtree(self.conf['outputDir'], ignore_errors=True)


class TestConfigWriter(TestWriterBase):

    def testWrite(self):
        pod = createPod('pod1', self.dao.Session())
        device = Device('test_device', "", 'admin', 'admin', 'spine', "", "", pod)
        configWriter = ConfigWriter(self.conf, pod, self.dao)
        configWriter.write(device, "dummy config")
        self.assertTrue(os.path.exists(configWriter.outputDir + '/test_device.conf'))


class TestCablingPlanWriter(TestWriterBase):

    def testInitWithTemplate(self):
        from jinja2 import TemplateNotFound
        pod = createPod('pod1', self.dao.Session())
        cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao)
        self.assertIsNotNone(cablingPlanWriter.template)
        with self.assertRaises(TemplateNotFound) as e:
            cablingPlanWriter.templateEnv.get_template('unknown-template')
        self.assertTrue('unknown-template' in e.exception.message)

    def testCreateDeviceInGraph(self):
        testDeviceTopology = pydot.Dot(graph_type='graph')
        pod = createPod('pod1', self.dao.Session())
        cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao)
        device = createPodDevice(self.dao.Session(), 'Preethi', pod)
        device.id = 'preethi-1'
        cablingPlanWriter.createDeviceInGraph(device.name, device, testDeviceTopology)
        path = cablingPlanWriter.outputDir + '/testDevicelabel.dot'
        testDeviceTopology.write_raw(path)
        data = open(path, 'r').read()
        # check the generated label for device
        self.assertTrue('"preethi-1" [shape=record, label=Preethi];' in data)

    def testcreateLinksInGraph(self):
        testLinksInTopology = pydot.Dot(graph_type='graph')
        pod = createPod('pod1', self.dao.Session())
        cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao)
        deviceOne = Device('spine01', "", 'admin', 'admin', 'spine', "", "", pod)
        deviceOne.id = 'spine01'
        IF1 = InterfaceDefinition('IF1', deviceOne, 'downlink')
        IF1.id = 'IF1'
        deviceTwo = Device('leaf01', "", 'admin', 'admin', 'leaf', "", "", pod)
        deviceTwo.id = 'leaf01'
        IF21 = InterfaceDefinition('IF1', deviceTwo, 'uplink')
        IF21.id = 'IF21'
        IF1.peer = IF21
        IF21.peer = IF1
        linkLabel = {deviceOne.id + ':' + IF1.id: deviceTwo.id + ':' + IF21.id}
        cablingPlanWriter.createLinksInGraph(linkLabel, testLinksInTopology, 'red')
        path = cablingPlanWriter.outputDir + '/testLinklabel.dot'
        testLinksInTopology.write_raw(path)
        data = open(path, 'r').read()
        # check generated label for links
        self.assertTrue('spine01:IF1 -- leaf01:IF21 [color=red];' in data)

    def testcreateDOTFile(self):
        # create pod
        # create device
        # create interface
        session = self.dao.Session()
        pod = createPod('pod1', session)
        cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao)
        deviceOne = Device('spine01', "", 'admin', 'admin', 'spine', "", "", pod)
        session.add(deviceOne)
        IF1 = InterfaceDefinition('IF1', deviceOne, 'downlink')
        session.add(IF1)
        IF2 = InterfaceDefinition('IF2', deviceOne, 'downlink')
        session.add(IF2)
        deviceTwo = Device('leaf01', "", 'admin', 'admin', 'leaf', "", "", pod)
        session.add(deviceTwo)
        IF21 = InterfaceDefinition('IF1', deviceTwo, 'uplink')
        session.add(IF21)
        IF22 = InterfaceDefinition('IF2', deviceTwo, 'uplink')
        session.add(IF22)
        IF23 = InterfaceDefinition('IF3', deviceTwo, 'downlink')
        session.add(IF23)
        IF24 = InterfaceDefinition('IF3', deviceTwo, 'downlink')
        session.add(IF24)
        deviceThree = Device('Access01', "", 'admin', 'admin', 'leaf', "", "", pod)
        session.add(deviceThree)
        IF31 = InterfaceDefinition('IF1', deviceThree, 'uplink')
        session.add(IF31)
        IF32 = InterfaceDefinition('IF2', deviceThree, 'uplink')
        session.add(IF32)
        IF1.peer = IF21
        IF2.peer = IF22
        IF21.peer = IF1
        IF22.peer = IF2
        IF23.peer = IF31
        IF31.peer = IF23
        IF24.peer = IF32
        IF32.peer = IF24
        session.commit()
        devices = session.query(Device).all()
        # check the DOT file is generated
        cablingPlanWriter.writeDOT()
        data = open(cablingPlanWriter.outputDir + '/cablingPlan.dot', 'r').read()
        # check generated label for links
        self.assertTrue('splines=polyline;' in data)
IF21.id} cablingPlanWriter.createLinksInGraph(linkLabel, testLinksInTopology, 'red') path = cablingPlanWriter.outputDir +", "'green', 'blue']} self.conf['deviceFamily'] = { \"QFX5100-24Q\": { \"ports\": 'et-0/0/[0-23]' }, \"QFX5100-48S\": { \"uplinkPorts\":", "jnpr.openclos.model import Pod, Device, InterfaceDefinition, InterfaceLogical, Interface, Base from jnpr.openclos.writer import WriterBase, ConfigWriter,", "class TestWriterBase(unittest.TestCase): def setUp(self): self.conf = {} self.conf['outputDir'] = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'out') self.conf['dbUrl'] =", "config\") self.assertTrue(os.path.exists(configWriter.outputDir + '/test_device.conf')) class TestCablingPlanWriter(TestWriterBase): def testInitWithTemplate(self): from jinja2 import TemplateNotFound pod", "session.add(IF32) IF1.peer = IF21 IF2.peer = IF22 IF21.peer = IF1 IF22.peer = IF2", "links self.assertTrue('spine01:IF1 -- leaf01:IF21 [color=red];' in data) def testcreateDOTFile(self): # create pod #", "'admin', 'admin', 'spine', \"\", \"\", pod) session.add(deviceOne) IF1 = InterfaceDefinition('IF1', deviceOne, 'downlink') session.add(IF1)", "Aug 26, 2014 @author: preethi ''' import os import sys import shutil sys.path.insert(0,os.path.abspath(os.path.dirname(__file__)", "= CablingPlanWriter(self.conf, pod, self.dao) self.assertIsNotNone(cablingPlanWriter.template) with self.assertRaises(TemplateNotFound) as e: cablingPlanWriter.templateEnv.get_template('unknown-template') self.assertTrue('unknown-template' in e.exception.message)", "\"\", \"\", pod) session.add(deviceThree) IF31 = InterfaceDefinition('IF1', deviceThree, 'uplink') session.add(IF31) IF32 = InterfaceDefinition('IF2',", "test_model import createPod, createPodDevice from flexmock import flexmock class TestWriterBase(unittest.TestCase): def setUp(self): self.conf", "testLinksInTopology, 'red') path = cablingPlanWriter.outputDir + '/testLinklabel.dot' testLinksInTopology.write_raw(path) data = open(path, 
'r').read() #check", "[color=red];' in data) def testcreateDOTFile(self): # create pod # create device #create interface", "pod, self.dao) deviceOne = Device('spine01',\"\", 'admin', 'admin', 'spine', \"\", \"\", pod) session.add(deviceOne) IF1", "ignore_errors=True) class TestConfigWriter(TestWriterBase): def testWrite(self): pod = createPod('pod1', self.dao.Session()) device = Device('test_device', \"\",'admin',", "= IF32 IF32.peer = IF24 session.commit() devices = session.query(Device).all() #check the DOT file", "TestConfigWriter(TestWriterBase): def testWrite(self): pod = createPod('pod1', self.dao.Session()) device = Device('test_device', \"\",'admin', 'admin', 'spine',", "os.path.join(os.path.dirname(os.path.abspath(__file__)), 'out') self.conf['dbUrl'] = 'sqlite:///' self.conf['DOT'] = {'ranksep' : '5 equally', 'colors': ['red',", "ConfigWriter(self.conf, pod, self.dao) configWriter.write(device, \"dummy config\") self.assertTrue(os.path.exists(configWriter.outputDir + '/test_device.conf')) class TestCablingPlanWriter(TestWriterBase): def testInitWithTemplate(self):", "import configLocation from jnpr.openclos.dao import Dao from test_model import createPod, createPodDevice from flexmock", "folder under test dir''' shutil.rmtree(self.conf['outputDir'], ignore_errors=True) def tearDown(self): ''' Deletes 'out' folder under", "# create device #create interface session = self.dao.Session() pod = createPod('pod1', session) cablingPlanWriter", "= createPod('pod1', self.dao.Session()) device = Device('test_device', \"\",'admin', 'admin', 'spine', \"\", \"\", pod) configWriter", "make it run from CLI import unittest import sqlalchemy from sqlalchemy.orm import sessionmaker", "sqlalchemy.orm import sessionmaker import pydot from jnpr.openclos.model import Pod, Device, InterfaceDefinition, InterfaceLogical, Interface,", "= cablingPlanWriter.outputDir + '/testDevicelabel.dot' testDeviceTopology.write_raw(path) data = open(path, 'r').read() #check the generated label", 
"deviceTwo = Device('leaf01',\"\", 'admin', 'admin', 'leaf', \"\", \"\", pod) deviceTwo.id = 'leaf01' IF21", "ConfigWriter, CablingPlanWriter from jnpr.openclos.util import configLocation from jnpr.openclos.dao import Dao from test_model import", "IF32.peer = IF24 session.commit() devices = session.query(Device).all() #check the DOT file is generated", "IF31 = InterfaceDefinition('IF1', deviceThree, 'uplink') session.add(IF31) IF32 = InterfaceDefinition('IF2', deviceThree, 'uplink') session.add(IF32) IF1.peer", "self.dao) deviceOne = Device('spine01',\"\", 'admin', 'admin', 'spine', \"\", \"\", pod) session.add(deviceOne) IF1 =", "session.add(deviceThree) IF31 = InterfaceDefinition('IF1', deviceThree, 'uplink') session.add(IF31) IF32 = InterfaceDefinition('IF2', deviceThree, 'uplink') session.add(IF32)", "deviceTwo = Device('leaf01',\"\", 'admin', 'admin', 'leaf', \"\", \"\", pod) session.add(deviceTwo) IF21 = InterfaceDefinition('IF1',", "deviceOne = Device('spine01',\"\", 'admin', 'admin', 'spine', \"\", \"\", pod) deviceOne.id = 'spine01' IF1", "\"\", \"\", pod) session.add(deviceOne) IF1 = InterfaceDefinition('IF1', deviceOne, 'downlink') session.add(IF1) IF2 = InterfaceDefinition('IF2',", "as e: cablingPlanWriter.templateEnv.get_template('unknown-template') self.assertTrue('unknown-template' in e.exception.message) def testCreateDeviceInGraph(self): testDeviceTopology = pydot.Dot(graph_type='graph', ) pod", "in data) def testcreateDOTFile(self): # create pod # create device #create interface session", "#check the DOT file is generated cablingPlanWriter.writeDOT() data = open(cablingPlanWriter.outputDir + '/cablingPlan.dot', 'r').read()", "'/testDevicelabel.dot' testDeviceTopology.write_raw(path) data = open(path, 'r').read() #check the generated label for device self.assertTrue('\"preethi-1\"", "pod) device.id = 'preethi-1' cablingPlanWriter.createDeviceInGraph(device.name, device, testDeviceTopology) path = cablingPlanWriter.outputDir + '/testDevicelabel.dot' 
testDeviceTopology.write_raw(path)", "jinja2 import TemplateNotFound pod = createPod('pod1', self.dao.Session()) cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao) self.assertIsNotNone(cablingPlanWriter.template)", "'r').read() #check generated label for links self.assertTrue('spine01:IF1 -- leaf01:IF21 [color=red];' in data) def", "= CablingPlanWriter(self.conf, pod, self.dao) deviceOne = Device('spine01',\"\", 'admin', 'admin', 'spine', \"\", \"\", pod)", "the DOT file is generated cablingPlanWriter.writeDOT() data = open(cablingPlanWriter.outputDir + '/cablingPlan.dot', 'r').read() #check", "IF23 = InterfaceDefinition('IF3', deviceTwo, 'downlink') session.add(IF23) IF24 = InterfaceDefinition('IF3', deviceTwo, 'downlink') session.add(IF24) deviceThree", "testcreateDOTFile(self): # create pod # create device #create interface session = self.dao.Session() pod", "= createPod('pod1', self.dao.Session()) cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao) device = createPodDevice(self.dao.Session(), 'Preethi', pod)", "'red') path = cablingPlanWriter.outputDir + '/testLinklabel.dot' testLinksInTopology.write_raw(path) data = open(path, 'r').read() #check generated", "create device #create interface session = self.dao.Session() pod = createPod('pod1', session) cablingPlanWriter =", "\"\", pod) session.add(deviceTwo) IF21 = InterfaceDefinition('IF1', deviceTwo, 'uplink') session.add(IF21) IF22 = InterfaceDefinition('IF2', deviceTwo,", "from jinja2 import TemplateNotFound pod = createPod('pod1', self.dao.Session()) cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao)", "from test_model import createPod, createPodDevice from flexmock import flexmock class TestWriterBase(unittest.TestCase): def setUp(self):", "session) cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao) deviceOne = Device('spine01',\"\", 'admin', 'admin', 'spine', \"\",", "DOT file is generated cablingPlanWriter.writeDOT() data = 
open(cablingPlanWriter.outputDir + '/cablingPlan.dot', 'r').read() #check generated", "deviceThree, 'uplink') session.add(IF31) IF32 = InterfaceDefinition('IF2', deviceThree, 'uplink') session.add(IF32) IF1.peer = IF21 IF2.peer", "TestWriterBase(unittest.TestCase): def setUp(self): self.conf = {} self.conf['outputDir'] = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'out') self.conf['dbUrl'] = 'sqlite:///'", "+ '/testDevicelabel.dot' testDeviceTopology.write_raw(path) data = open(path, 'r').read() #check the generated label for device", "self.assertIsNotNone(cablingPlanWriter.template) with self.assertRaises(TemplateNotFound) as e: cablingPlanWriter.templateEnv.get_template('unknown-template') self.assertTrue('unknown-template' in e.exception.message) def testCreateDeviceInGraph(self): testDeviceTopology =", "= CablingPlanWriter(self.conf, pod, self.dao) device = createPodDevice(self.dao.Session(), 'Preethi', pod) device.id = 'preethi-1' cablingPlanWriter.createDeviceInGraph(device.name,", "is generated cablingPlanWriter.writeDOT() data = open(cablingPlanWriter.outputDir + '/cablingPlan.dot', 'r').read() #check generated label for", "pod, self.dao) device = createPodDevice(self.dao.Session(), 'Preethi', pod) device.id = 'preethi-1' cablingPlanWriter.createDeviceInGraph(device.name, device, testDeviceTopology)", "configWriter = ConfigWriter(self.conf, pod, self.dao) configWriter.write(device, \"dummy config\") self.assertTrue(os.path.exists(configWriter.outputDir + '/test_device.conf')) class TestCablingPlanWriter(TestWriterBase):", "cablingPlanWriter.writeDOT() data = open(cablingPlanWriter.outputDir + '/cablingPlan.dot', 'r').read() #check generated label for links self.assertTrue('splines=polyline;'", "deviceThree, 'uplink') session.add(IF32) IF1.peer = IF21 IF2.peer = IF22 IF21.peer = IF1 IF22.peer", "+ IF21.id} cablingPlanWriter.createLinksInGraph(linkLabel, testLinksInTopology, 'red') path = cablingPlanWriter.outputDir + '/testLinklabel.dot' 
testLinksInTopology.write_raw(path) data =", "Device('leaf01',\"\", 'admin', 'admin', 'leaf', \"\", \"\", pod) session.add(deviceTwo) IF21 = InterfaceDefinition('IF1', deviceTwo, 'uplink')", "devices = session.query(Device).all() #check the DOT file is generated cablingPlanWriter.writeDOT() data = open(cablingPlanWriter.outputDir", "from sqlalchemy.orm import sessionmaker import pydot from jnpr.openclos.model import Pod, Device, InterfaceDefinition, InterfaceLogical,", "= open(path, 'r').read() #check the generated label for device self.assertTrue('\"preethi-1\" [shape=record, label=Preethi];' in", "label for device self.assertTrue('\"preethi-1\" [shape=record, label=Preethi];' in data) def testcreateLinksInGraph(self): testLinksInTopology = pydot.Dot(graph_type='graph')", "session.add(IF24) deviceThree = Device('Access01', \"\",'admin', 'admin', 'leaf', \"\", \"\", pod) session.add(deviceThree) IF31 =", "'sqlite:///' self.conf['DOT'] = {'ranksep' : '5 equally', 'colors': ['red', 'green', 'blue']} self.conf['deviceFamily'] =", "import unittest import sqlalchemy from sqlalchemy.orm import sessionmaker import pydot from jnpr.openclos.model import", "'r').read() #check the generated label for device self.assertTrue('\"preethi-1\" [shape=record, label=Preethi];' in data) def", "26, 2014 @author: preethi ''' import os import sys import shutil sys.path.insert(0,os.path.abspath(os.path.dirname(__file__) +", "= IF21 IF2.peer = IF22 IF21.peer = IF1 IF22.peer = IF2 IF23.peer =", "path = cablingPlanWriter.outputDir + '/testDevicelabel.dot' testDeviceTopology.write_raw(path) data = open(path, 'r').read() #check the generated", "IF1 linkLabel = {deviceOne.id + ':' + IF1.id : deviceTwo.id + ':' +", "'IF1' deviceTwo = Device('leaf01',\"\", 'admin', 'admin', 'leaf', \"\", \"\", pod) deviceTwo.id = 'leaf01'", "'uplink') session.add(IF22) IF23 = InterfaceDefinition('IF3', deviceTwo, 'downlink') session.add(IF23) IF24 = InterfaceDefinition('IF3', deviceTwo, 'downlink')", 
"\"uplinkPorts\": 'et-0/0/[48-53]', \"downlinkPorts\": 'xe-0/0/[0-47]' } } self.dao = Dao(self.conf) ''' Deletes 'out' folder", "IF32 = InterfaceDefinition('IF2', deviceThree, 'uplink') session.add(IF32) IF1.peer = IF21 IF2.peer = IF22 IF21.peer", "= Device('spine01',\"\", 'admin', 'admin', 'spine', \"\", \"\", pod) deviceOne.id = 'spine01' IF1 =", "Created on Aug 26, 2014 @author: preethi ''' import os import sys import", "'et-0/0/[0-23]' }, \"QFX5100-48S\": { \"uplinkPorts\": 'et-0/0/[48-53]', \"downlinkPorts\": 'xe-0/0/[0-47]' } } self.dao = Dao(self.conf)", "'et-0/0/[48-53]', \"downlinkPorts\": 'xe-0/0/[0-47]' } } self.dao = Dao(self.conf) ''' Deletes 'out' folder under", "pod = createPod('pod1', self.dao.Session()) cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao) deviceOne = Device('spine01',\"\", 'admin',", "pod) session.add(deviceThree) IF31 = InterfaceDefinition('IF1', deviceThree, 'uplink') session.add(IF31) IF32 = InterfaceDefinition('IF2', deviceThree, 'uplink')", "path = cablingPlanWriter.outputDir + '/testLinklabel.dot' testLinksInTopology.write_raw(path) data = open(path, 'r').read() #check generated label", "pod = createPod('pod1', self.dao.Session()) cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao) device = createPodDevice(self.dao.Session(), 'Preethi',", "IF1.id = 'IF1' deviceTwo = Device('leaf01',\"\", 'admin', 'admin', 'leaf', \"\", \"\", pod) deviceTwo.id", "pod) configWriter = ConfigWriter(self.conf, pod, self.dao) configWriter.write(device, \"dummy config\") self.assertTrue(os.path.exists(configWriter.outputDir + '/test_device.conf')) class", "= InterfaceDefinition('IF1', deviceThree, 'uplink') session.add(IF31) IF32 = InterfaceDefinition('IF2', deviceThree, 'uplink') session.add(IF32) IF1.peer =", "testCreateDeviceInGraph(self): testDeviceTopology = pydot.Dot(graph_type='graph', ) pod = createPod('pod1', self.dao.Session()) cablingPlanWriter = CablingPlanWriter(self.conf, pod,", "deviceOne.id = 'spine01' IF1 = 
InterfaceDefinition('IF1', deviceOne, 'downlink') IF1.id = 'IF1' deviceTwo =", "leaf01:IF21 [color=red];' in data) def testcreateDOTFile(self): # create pod # create device #create", "= InterfaceDefinition('IF2', deviceOne, 'downlink') session.add(IF2) deviceTwo = Device('leaf01',\"\", 'admin', 'admin', 'leaf', \"\", \"\",", "'colors': ['red', 'green', 'blue']} self.conf['deviceFamily'] = { \"QFX5100-24Q\": { \"ports\": 'et-0/0/[0-23]' }, \"QFX5100-48S\":", "IF1.id : deviceTwo.id + ':' + IF21.id} cablingPlanWriter.createLinksInGraph(linkLabel, testLinksInTopology, 'red') path = cablingPlanWriter.outputDir", "import os import sys import shutil sys.path.insert(0,os.path.abspath(os.path.dirname(__file__) + '/' + '../..')) #trick to", "InterfaceDefinition('IF1', deviceThree, 'uplink') session.add(IF31) IF32 = InterfaceDefinition('IF2', deviceThree, 'uplink') session.add(IF32) IF1.peer = IF21", "test dir''' shutil.rmtree(self.conf['outputDir'], ignore_errors=True) class TestConfigWriter(TestWriterBase): def testWrite(self): pod = createPod('pod1', self.dao.Session()) device", "'leaf', \"\", \"\", pod) session.add(deviceThree) IF31 = InterfaceDefinition('IF1', deviceThree, 'uplink') session.add(IF31) IF32 =", "self.dao.Session()) device = Device('test_device', \"\",'admin', 'admin', 'spine', \"\", \"\", pod) configWriter = ConfigWriter(self.conf,", "IF1 = InterfaceDefinition('IF1', deviceOne, 'downlink') IF1.id = 'IF1' deviceTwo = Device('leaf01',\"\", 'admin', 'admin',", "TemplateNotFound pod = createPod('pod1', self.dao.Session()) cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao) self.assertIsNotNone(cablingPlanWriter.template) with self.assertRaises(TemplateNotFound)", "'admin', 'leaf', \"\", \"\", pod) session.add(deviceThree) IF31 = InterfaceDefinition('IF1', deviceThree, 'uplink') session.add(IF31) IF32", "unittest import sqlalchemy from sqlalchemy.orm import sessionmaker import pydot from jnpr.openclos.model import Pod,", "pod) session.add(deviceOne) 
IF1 = InterfaceDefinition('IF1', deviceOne, 'downlink') session.add(IF1) IF2 = InterfaceDefinition('IF2', deviceOne, 'downlink')", "\"\",'admin', 'admin', 'spine', \"\", \"\", pod) configWriter = ConfigWriter(self.conf, pod, self.dao) configWriter.write(device, \"dummy", "self.dao) self.assertIsNotNone(cablingPlanWriter.template) with self.assertRaises(TemplateNotFound) as e: cablingPlanWriter.templateEnv.get_template('unknown-template') self.assertTrue('unknown-template' in e.exception.message) def testCreateDeviceInGraph(self): testDeviceTopology", "cablingPlanWriter.templateEnv.get_template('unknown-template') self.assertTrue('unknown-template' in e.exception.message) def testCreateDeviceInGraph(self): testDeviceTopology = pydot.Dot(graph_type='graph', ) pod = createPod('pod1',", "'downlink') session.add(IF23) IF24 = InterfaceDefinition('IF3', deviceTwo, 'downlink') session.add(IF24) deviceThree = Device('Access01', \"\",'admin', 'admin',", "jnpr.openclos.util import configLocation from jnpr.openclos.dao import Dao from test_model import createPod, createPodDevice from", "= {} self.conf['outputDir'] = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'out') self.conf['dbUrl'] = 'sqlite:///' self.conf['DOT'] = {'ranksep' :", "'out') self.conf['dbUrl'] = 'sqlite:///' self.conf['DOT'] = {'ranksep' : '5 equally', 'colors': ['red', 'green',", "dir''' shutil.rmtree(self.conf['outputDir'], ignore_errors=True) def tearDown(self): ''' Deletes 'out' folder under test dir''' shutil.rmtree(self.conf['outputDir'],", "'5 equally', 'colors': ['red', 'green', 'blue']} self.conf['deviceFamily'] = { \"QFX5100-24Q\": { \"ports\": 'et-0/0/[0-23]'", "['red', 'green', 'blue']} self.conf['deviceFamily'] = { \"QFX5100-24Q\": { \"ports\": 'et-0/0/[0-23]' }, \"QFX5100-48S\": {", "= cablingPlanWriter.outputDir + '/testLinklabel.dot' testLinksInTopology.write_raw(path) data = open(path, 'r').read() #check generated label for", "createPod('pod1', self.dao.Session()) cablingPlanWriter 
= CablingPlanWriter(self.conf, pod, self.dao) self.assertIsNotNone(cablingPlanWriter.template) with self.assertRaises(TemplateNotFound) as e: cablingPlanWriter.templateEnv.get_template('unknown-template')", "#check generated label for links self.assertTrue('spine01:IF1 -- leaf01:IF21 [color=red];' in data) def testcreateDOTFile(self):", "'admin', 'admin', 'spine', \"\", \"\", pod) deviceOne.id = 'spine01' IF1 = InterfaceDefinition('IF1', deviceOne,", "self.dao) device = createPodDevice(self.dao.Session(), 'Preethi', pod) device.id = 'preethi-1' cablingPlanWriter.createDeviceInGraph(device.name, device, testDeviceTopology) path", "'admin', 'spine', \"\", \"\", pod) session.add(deviceOne) IF1 = InterfaceDefinition('IF1', deviceOne, 'downlink') session.add(IF1) IF2", "#check the generated label for device self.assertTrue('\"preethi-1\" [shape=record, label=Preethi];' in data) def testcreateLinksInGraph(self):", "shutil.rmtree(self.conf['outputDir'], ignore_errors=True) def tearDown(self): ''' Deletes 'out' folder under test dir''' shutil.rmtree(self.conf['outputDir'], ignore_errors=True)", ": '5 equally', 'colors': ['red', 'green', 'blue']} self.conf['deviceFamily'] = { \"QFX5100-24Q\": { \"ports\":", "session.query(Device).all() #check the DOT file is generated cablingPlanWriter.writeDOT() data = open(cablingPlanWriter.outputDir + '/cablingPlan.dot',", "from CLI import unittest import sqlalchemy from sqlalchemy.orm import sessionmaker import pydot from", "self.assertTrue(os.path.exists(configWriter.outputDir + '/test_device.conf')) class TestCablingPlanWriter(TestWriterBase): def testInitWithTemplate(self): from jinja2 import TemplateNotFound pod =", "InterfaceDefinition('IF1', deviceOne, 'downlink') IF1.id = 'IF1' deviceTwo = Device('leaf01',\"\", 'admin', 'admin', 'leaf', \"\",", "configLocation from jnpr.openclos.dao import Dao from test_model import createPod, createPodDevice from flexmock import", "ignore_errors=True) def tearDown(self): ''' Deletes 'out' folder 
under test dir''' shutil.rmtree(self.conf['outputDir'], ignore_errors=True) class", "CablingPlanWriter from jnpr.openclos.util import configLocation from jnpr.openclos.dao import Dao from test_model import createPod,", "'preethi-1' cablingPlanWriter.createDeviceInGraph(device.name, device, testDeviceTopology) path = cablingPlanWriter.outputDir + '/testDevicelabel.dot' testDeviceTopology.write_raw(path) data = open(path,", "CablingPlanWriter(self.conf, pod, self.dao) device = createPodDevice(self.dao.Session(), 'Preethi', pod) device.id = 'preethi-1' cablingPlanWriter.createDeviceInGraph(device.name, device,", "session.commit() devices = session.query(Device).all() #check the DOT file is generated cablingPlanWriter.writeDOT() data =", "Deletes 'out' folder under test dir''' shutil.rmtree(self.conf['outputDir'], ignore_errors=True) class TestConfigWriter(TestWriterBase): def testWrite(self): pod", "cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao) self.assertIsNotNone(cablingPlanWriter.template) with self.assertRaises(TemplateNotFound) as e: cablingPlanWriter.templateEnv.get_template('unknown-template') self.assertTrue('unknown-template' in", "= 'preethi-1' cablingPlanWriter.createDeviceInGraph(device.name, device, testDeviceTopology) path = cablingPlanWriter.outputDir + '/testDevicelabel.dot' testDeviceTopology.write_raw(path) data =", "CablingPlanWriter(self.conf, pod, self.dao) self.assertIsNotNone(cablingPlanWriter.template) with self.assertRaises(TemplateNotFound) as e: cablingPlanWriter.templateEnv.get_template('unknown-template') self.assertTrue('unknown-template' in e.exception.message) def", "data) def testcreateLinksInGraph(self): testLinksInTopology = pydot.Dot(graph_type='graph') pod = createPod('pod1', self.dao.Session()) cablingPlanWriter = CablingPlanWriter(self.conf,", "Device('Access01', \"\",'admin', 'admin', 'leaf', \"\", \"\", pod) session.add(deviceThree) IF31 = InterfaceDefinition('IF1', deviceThree, 'uplink')", "linkLabel = 
{deviceOne.id + ':' + IF1.id : deviceTwo.id + ':' + IF21.id}", "CablingPlanWriter(self.conf, pod, self.dao) deviceOne = Device('spine01',\"\", 'admin', 'admin', 'spine', \"\", \"\", pod) deviceOne.id", "'admin', 'admin', 'leaf', \"\", \"\", pod) session.add(deviceTwo) IF21 = InterfaceDefinition('IF1', deviceTwo, 'uplink') session.add(IF21)", "}, \"QFX5100-48S\": { \"uplinkPorts\": 'et-0/0/[48-53]', \"downlinkPorts\": 'xe-0/0/[0-47]' } } self.dao = Dao(self.conf) '''", "session.add(deviceOne) IF1 = InterfaceDefinition('IF1', deviceOne, 'downlink') session.add(IF1) IF2 = InterfaceDefinition('IF2', deviceOne, 'downlink') session.add(IF2)", "[shape=record, label=Preethi];' in data) def testcreateLinksInGraph(self): testLinksInTopology = pydot.Dot(graph_type='graph') pod = createPod('pod1', self.dao.Session())", "device #create interface session = self.dao.Session() pod = createPod('pod1', session) cablingPlanWriter = CablingPlanWriter(self.conf,", "IF2.peer = IF22 IF21.peer = IF1 IF22.peer = IF2 IF23.peer = IF31 IF31.peer", "import flexmock class TestWriterBase(unittest.TestCase): def setUp(self): self.conf = {} self.conf['outputDir'] = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'out')", "'admin', 'spine', \"\", \"\", pod) configWriter = ConfigWriter(self.conf, pod, self.dao) configWriter.write(device, \"dummy config\")", "session.add(deviceTwo) IF21 = InterfaceDefinition('IF1', deviceTwo, 'uplink') session.add(IF21) IF22 = InterfaceDefinition('IF2', deviceTwo, 'uplink') session.add(IF22)", "testWrite(self): pod = createPod('pod1', self.dao.Session()) device = Device('test_device', \"\",'admin', 'admin', 'spine', \"\", \"\",", "pod = createPod('pod1', self.dao.Session()) device = Device('test_device', \"\",'admin', 'admin', 'spine', \"\", \"\", pod)", "= open(path, 'r').read() #check generated label for links self.assertTrue('spine01:IF1 -- leaf01:IF21 [color=red];' in", "#trick to make it run from CLI import unittest import sqlalchemy from 
sqlalchemy.orm", "deviceOne, 'downlink') session.add(IF2) deviceTwo = Device('leaf01',\"\", 'admin', 'admin', 'leaf', \"\", \"\", pod) session.add(deviceTwo)", "configWriter.write(device, \"dummy config\") self.assertTrue(os.path.exists(configWriter.outputDir + '/test_device.conf')) class TestCablingPlanWriter(TestWriterBase): def testInitWithTemplate(self): from jinja2 import", "test dir''' shutil.rmtree(self.conf['outputDir'], ignore_errors=True) def tearDown(self): ''' Deletes 'out' folder under test dir'''", "cablingPlanWriter.createLinksInGraph(linkLabel, testLinksInTopology, 'red') path = cablingPlanWriter.outputDir + '/testLinklabel.dot' testLinksInTopology.write_raw(path) data = open(path, 'r').read()", "open(path, 'r').read() #check generated label for links self.assertTrue('spine01:IF1 -- leaf01:IF21 [color=red];' in data)", "import Dao from test_model import createPod, createPodDevice from flexmock import flexmock class TestWriterBase(unittest.TestCase):", "def testcreateDOTFile(self): # create pod # create device #create interface session = self.dao.Session()", "for links self.assertTrue('spine01:IF1 -- leaf01:IF21 [color=red];' in data) def testcreateDOTFile(self): # create pod", "= IF31 IF31.peer = IF23 IF24.peer = IF32 IF32.peer = IF24 session.commit() devices", "self.assertTrue('unknown-template' in e.exception.message) def testCreateDeviceInGraph(self): testDeviceTopology = pydot.Dot(graph_type='graph', ) pod = createPod('pod1', self.dao.Session())", "Device('spine01',\"\", 'admin', 'admin', 'spine', \"\", \"\", pod) deviceOne.id = 'spine01' IF1 = InterfaceDefinition('IF1',", "= InterfaceDefinition('IF1', deviceTwo, 'uplink') IF21.id = 'IF21' IF1.peer = IF21 IF21.peer = IF1", "-- leaf01:IF21 [color=red];' in data) def testcreateDOTFile(self): # create pod # create device", "createPod('pod1', session) cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao) deviceOne = Device('spine01',\"\", 'admin', 'admin', 'spine',", "'admin', 'leaf', 
\"\", \"\", pod) deviceTwo.id = 'leaf01' IF21 = InterfaceDefinition('IF1', deviceTwo, 'uplink')", "cablingPlanWriter.createDeviceInGraph(device.name, device, testDeviceTopology) path = cablingPlanWriter.outputDir + '/testDevicelabel.dot' testDeviceTopology.write_raw(path) data = open(path, 'r').read()", "flexmock import flexmock class TestWriterBase(unittest.TestCase): def setUp(self): self.conf = {} self.conf['outputDir'] = os.path.join(os.path.dirname(os.path.abspath(__file__)),", "sqlalchemy from sqlalchemy.orm import sessionmaker import pydot from jnpr.openclos.model import Pod, Device, InterfaceDefinition,", "deviceTwo, 'downlink') session.add(IF24) deviceThree = Device('Access01', \"\",'admin', 'admin', 'leaf', \"\", \"\", pod) session.add(deviceThree)", "sys.path.insert(0,os.path.abspath(os.path.dirname(__file__) + '/' + '../..')) #trick to make it run from CLI import", "':' + IF1.id : deviceTwo.id + ':' + IF21.id} cablingPlanWriter.createLinksInGraph(linkLabel, testLinksInTopology, 'red') path", "IF32 IF32.peer = IF24 session.commit() devices = session.query(Device).all() #check the DOT file is", "self.dao.Session() pod = createPod('pod1', session) cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao) deviceOne = Device('spine01',\"\",", "Dao from test_model import createPod, createPodDevice from flexmock import flexmock class TestWriterBase(unittest.TestCase): def", "testLinksInTopology.write_raw(path) data = open(path, 'r').read() #check generated label for links self.assertTrue('spine01:IF1 -- leaf01:IF21", "deviceTwo, 'uplink') IF21.id = 'IF21' IF1.peer = IF21 IF21.peer = IF1 linkLabel =", "'/test_device.conf')) class TestCablingPlanWriter(TestWriterBase): def testInitWithTemplate(self): from jinja2 import TemplateNotFound pod = createPod('pod1', self.dao.Session())", "data = open(path, 'r').read() #check the generated label for device self.assertTrue('\"preethi-1\" [shape=record, label=Preethi];'", "IF24 = InterfaceDefinition('IF3', 
'''
Created on Aug 26, 2014

@author: preethi
'''
import os
import sys
import shutil
sys.path.insert(0,os.path.abspath(os.path.dirname(__file__) + '/' + '../..')) #trick to make it run from CLI
import unittest
import sqlalchemy
from sqlalchemy.orm import sessionmaker
import pydot
from jnpr.openclos.model import Pod, Device, InterfaceDefinition, InterfaceLogical, Interface, Base
from jnpr.openclos.writer import WriterBase, ConfigWriter, CablingPlanWriter
from jnpr.openclos.util import configLocation
from jnpr.openclos.dao import Dao
from ...

class TestWriterBase(unittest.TestCase):
    def setUp(self):
        self.conf = {}
        self.conf['outputDir'] = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'out')
        self.conf['dbUrl'] = 'sqlite:///'
        self.conf['DOT'] = {'ranksep' : '5 equally', 'colors': ['red', 'green', 'blue']}
        self.conf['deviceFamily'] = {
            "QFX5100-24Q": { "ports": 'et-0/0/[0-23]' },
            "QFX5100-48S": { "uplinkPorts": 'et-0/0/[48-53]', "downlinkPorts": 'xe-0/0/[0-47]' }
        }
        self.dao = Dao(self.conf)
        ''' Deletes 'out' folder under test dir'''
        shutil.rmtree(self.conf['outputDir'], ignore_errors=True)

    def tearDown(self):
        ''' Deletes 'out' folder under test dir'''
        shutil.rmtree(self.conf['outputDir'], ignore_errors=True)

class TestConfigWriter(TestWriterBase):
    def testWrite(self):
        pod = createPod('pod1', self.dao.Session())
        device = Device('test_device', "",'admin', 'admin', 'spine', "", "", pod)
        configWriter = ConfigWriter(self.conf, pod, self.dao)
        configWriter.write(device, "dummy config")
        self.assertTrue(os.path.exists(configWriter.outputDir ...

    ...

    def testInitWithTemplate(self):
        from jinja2 import TemplateNotFound
        pod = createPod('pod1', self.dao.Session())
        cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao)
        with self.assertRaises(TemplateNotFound) as e:
            cablingPlanWriter.templateEnv.get_template('unknown-template')
        self.assertTrue('unknown-template' in e.exception.message)

    def testCreateDeviceInGraph(self):
        testDeviceTopology = pydot.Dot(graph_type='graph', )
        pod = createPod('pod1', self.dao.Session())
        cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao)
        device = createPodDevice(self.dao.Session(), 'Preethi', pod)
        device.id = 'preethi-1'
        ... testDeviceTopology)
        path = cablingPlanWriter.outputDir + '/testDevicelabel.dot'
        testDeviceTopology.write_raw(path)
        data = open(path, 'r').read()
        #check the ...
        self.assertTrue('"preethi-1" [shape=record, label=Preethi];' in data)

    def testcreateLinksInGraph(self):
        testLinksInTopology = pydot.Dot(graph_type='graph')
        pod = createPod('pod1', self.dao.Session())
        cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao)
        deviceOne = Device('spine01',"", 'admin', 'admin', 'spine', "", "", pod)
        deviceOne.id = 'spine01'
        IF1 = InterfaceDefinition('IF1', deviceOne, 'downlink')
        IF1.id = 'IF1'
        deviceTwo = Device('leaf01',"", 'admin', 'admin', 'leaf', "", "", pod)
        deviceTwo.id = 'leaf01'
        IF21 = InterfaceDefinition('IF1', deviceTwo, 'uplink')
        IF21.id = 'IF21'
        IF1.peer = IF21
        IF21.peer = IF1
        linkLabel = {deviceOne.id + ':' + IF1.id : deviceTwo.id + ':' + IF21.id}
        cablingPlanWriter.createLinksInGraph(linkLabel, testLinksInTopology, 'red')
        path = cablingPlanWriter.outputDir + '/testLinklabel.dot'
        testLinksInTopology.write_raw(path)
        data = open(path, 'r').read()
        #check generated label for links
        self.assertTrue('spine01:IF1 -- leaf01:IF21 [color=red];' in data)

    def testcreateDOTFile(self):
        # create pod
        # create device
        #create interface
        session = self.dao.Session()
        pod = createPod('pod1', session)
        cablingPlanWriter = CablingPlanWriter(self.conf, pod, self.dao)
        deviceOne = Device('spine01',"", 'admin', 'admin', 'spine', "", "", pod)
        session.add(deviceOne)
        IF1 = InterfaceDefinition('IF1', deviceOne, 'downlink')
        session.add(IF1)
        IF2 = InterfaceDefinition('IF2', deviceOne, 'downlink')
        session.add(IF2)
        deviceTwo = Device('leaf01',"", 'admin', 'admin', 'leaf', "", "", pod)
        session.add(deviceTwo)
        IF21 = InterfaceDefinition('IF1', deviceTwo, 'uplink')
        session.add(IF21)
        IF22 = InterfaceDefinition('IF2', deviceTwo, 'uplink')
        session.add(IF22)
        IF23 = InterfaceDefinition('IF3', deviceTwo, 'downlink')
        session.add(IF23)
        IF24 = InterfaceDefinition('IF3', deviceTwo, 'downlink')
        session.add(IF24)
        deviceThree = Device('Access01', "",'admin', 'admin', 'leaf', "", "", pod)
        session.add(deviceThree)
        IF31 = InterfaceDefinition('IF1', deviceThree, 'uplink')
        session.add(IF31)
        IF32 = InterfaceDefinition('IF2', deviceThree, 'uplink')
        session.add(IF32)
        IF1.peer = IF21
        IF2.peer = IF22
        IF21.peer = IF1
        IF22.peer = IF2
        IF23.peer = IF31
        IF31.peer = IF23
        IF24.peer = IF32
        IF32.peer = IF24
        session.commit()
        devices = session.query(Device).all()
        #check the DOT ...
        ...
        data = open(cablingPlanWriter.outputDir + '/cablingPlan.dot', 'r').read()
        #check generated label for links
        self.assertTrue('splines=polyline;' in data)
]
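The writer tests above assert on literal DOT substrings such as 'spine01:IF1 -- leaf01:IF21 [color=red];'. As a minimal standalone sketch (the helper `render_links` and the dict shape are assumptions for illustration, not the OpenClos `createLinksInGraph` API, which drives pydot rather than emitting text directly), the link-labelling step amounts to:

```python
def render_links(link_label, color):
    """Render a {source_port: target_port} mapping as DOT edge statements.

    Hypothetical stand-in for the link-drawing step the tests above exercise:
    each key/value pair becomes one undirected edge with a color attribute.
    """
    lines = []
    for src, dst in link_label.items():
        # e.g. 'spine01:IF1' -- 'leaf01:IF21', coloured per link set
        lines.append('%s -- %s [color=%s];' % (src, dst, color))
    return '\n'.join(lines)

# Same shape the test builds: deviceOne.id + ':' + IF1.id mapped to the peer port.
link_label = {'spine01:IF1': 'leaf01:IF21'}
dot = render_links(link_label, 'red')
# The test asserts exactly this substring appears in the generated .dot file:
assert 'spine01:IF1 -- leaf01:IF21 [color=red];' in dot
```

In DOT, the `name:port` form is Graphviz's port syntax, which is why the assertion string keeps the colon between device id and interface id.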
from hyquest.verifiers.common import getTaskResultSet, getUserAction
from hyquest.constants import TASK_TIMEMAP

# This handles matching up TimeMap state to associated TimeMap Tasks

def matchingTimeMapTasks(user, timeMapState):
    tasks = getTaskResultSet(user).filter(type=TASK_TIMEMAP)
    activeTasks = []
    otherTasks = []
    for task in tasks:
        if timeMapMatches(task, timeMapState):
            action = getUserAction(user, task)
            if action is None:
                otherTasks.append(task)
            elif action.complete != True:
                activeTasks.append(task)
    return (activeTasks, otherTasks)

def timeMapMatches(task, timeMapState):
    reqs = task.getInfoReqs()
    if 'minYear' in reqs and ('year' not in timeMapState or int(timeMapState['year'])+5 < int(reqs['minYear'])):
        print "year before minYear"
        return False
    if 'maxYear' in reqs and ('year' not in timeMapState or int(timeMapState['year'])-5 > int(reqs['maxYear'])):
        print "year after maxYear"
        return False
    if 'branch' in reqs and ('branch' not in timeMapState or reqs['branch'] != str(timeMapState['branch'])):
        print "incorrect branch"
        return False
    if 'story' in reqs and ('story' not in timeMapState or reqs['story'] != str(timeMapState['story'])):
        print "incorrect story"
        return False
    if 'onMap' in reqs and ('onMap' not in timeMapState or reqs['onMap'] != str(timeMapState['onMap'])):
        print "On Map does not match"
        return False
    return True

def getTimeMapReqs(task):
    taskReqs = task.taskinfo.split(';')
    requirements = {}
    for req in taskReqs:
        if '=' in req:
            reqSplit = req.split('=')
            requirements[reqSplit[0]] = reqSplit[1]
    return requirements
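The matching above hinges on `taskinfo` strings of the form `key=value;key=value` and on a 5-year tolerance around `minYear`/`maxYear`. A minimal standalone sketch of both pieces (the function names `parse_reqs`/`year_in_range` and the sample values are invented for illustration; this mirrors, not replaces, `getTimeMapReqs` and `timeMapMatches`):

```python
def parse_reqs(taskinfo):
    # Split 'minYear=1900;branch=2' into {'minYear': '1900', 'branch': '2'},
    # skipping fragments without '=' just as getTimeMapReqs does.
    requirements = {}
    for req in taskinfo.split(';'):
        if '=' in req:
            key, value = req.split('=')
            requirements[key] = value
    return requirements

def year_in_range(reqs, state):
    # Mirror of the minYear/maxYear checks above: the map year may miss the
    # required range by up to 5 years in either direction and still match.
    year = int(state['year'])
    if 'minYear' in reqs and year + 5 < int(reqs['minYear']):
        return False
    if 'maxYear' in reqs and year - 5 > int(reqs['maxYear']):
        return False
    return True

reqs = parse_reqs('minYear=1900;maxYear=1950;branch=2')
print(year_in_range(reqs, {'year': 1896}))  # 1896+5 reaches 1900, so it matches
print(year_in_range(reqs, {'year': 1894}))  # 1894+5 falls short of 1900
```

Note the values stay strings after parsing, which is why the original code wraps the comparisons for `branch`/`story`/`onMap` in `str(...)` and casts years with `int(...)`.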
[
# -*- coding: utf-8 -*-
from flask import Flask, request, make_response
import requests
import json
from core import Translation, RequestJson, PBMT
from bean import log

app = Flask(__name__)

def buildResponse(code, msg):
    json_data = dict()
    json_data['code'] = code
    json_data['message'] = msg
    response = make_response(json.dumps(json_data, sort_keys=True))
    response.headers['Content-type'] = 'application/json; charset=utf-8'
    return response

'''
=================Translation=====================
method: POST
headers: Authorization: [your api key]
type: json { "text":[text], "taget":[target language] }
return: json { "code":[status code], "message":[translation text] }
'''
@app.route('/languages/api/translate', methods=['GET', 'POST'])
def translate():
    ip = request.remote_addr
    if request.method != 'POST':
        return buildResponse(403, "Method Not Allowed. ")
    else:
        try:
            token = request.headers['Authorization']
        except Exception:
            return buildResponse(403, "API key not valid. Please pass a valid API key. ")
        tobj = Translation(token)
        jsondict = request.get_json()
        try:
            rjson = RequestJson(**jsondict)
        except Exception:
            log.writelogs(token, ip, '[Failed] Required field error. ')
            return buildResponse(400, "Required field error. ")
        rlist = tobj.translate(text=rjson.text, target=rjson.target)
        if rlist[0] == 200:
            log.writelogs(token, ip, '[Succeed]')
        else:
            log.writelogs(token, ip, '[Failed] '+rlist[1])
        return buildResponse(code=rlist[0], msg=rlist[1])

'''
=================Logs=====================
method: GET
headers: Authorization: [your api key]
type: NULL
return: json { "code":[status code], "message":[calling log] }
'''
@app.route('/languages/api/logs', methods=['GET', 'POST'])
def getlog():
    if request.method != 'GET':
        return buildResponse(403, "Method Not Allowed. ")
    else:
        try:
            token = request.headers['Authorization']
            logs = log.getlogs(token)
            if logs:
                logs = [(str(lo[0]), lo[1], lo[2], lo[3]) for lo in logs]
                return buildResponse(200, logs)
            elif logs == []:
                return buildResponse(200, [])
            elif logs is None:
                return buildResponse(403, "API key not valid. Please pass a valid API key. ")
        except Exception:
            return buildResponse(500, "Query log exception. ")

@app.route('/languages/support', methods=['GET', 'POST'])
def support_languages():
    if request.method != 'GET':
        return buildResponse(403, "Method Not Allowed. ")
    else:
        return buildResponse(200, PBMT)

if __name__ ==
\") else:", "Please pass a valid API key. \") tobj = Translation(token) jsondict = request.get_json()", "[]) elif logs is None: return buildResponse(403, \"API key not valid. Please pass", "json { \"text\":[text], \"taget\":[target language] } return: json { \"code\":[status code], \"message\":[translation text]", "a valid API key. \") except Exception: return buildResponse(500, \"Query log exception. \")", "import json from core import Translation, RequestJson, PBMT from bean import log app", "[]: return buildResponse(200, []) elif logs is None: return buildResponse(403, \"API key not", "\"Query log exception. \") @app.route('/languages/support', methods=['GET', 'POST']) def support_languages(): if request.method != 'GET':", "ip, '[Failed] '+rlist[1]) return buildResponse(code=rlist[0], msg=rlist[1]) ''' =================日志===================== method: GET headers: Authorization: [your", "api key] type: NULL return: json { \"code\":[status code], \"message\":[calling log] } '''", "text] } ''' @app.route('/languages/api/translate', methods=['GET', 'POST']) def translate(): ip = request.remote_addr if request.method", "API key. \") tobj = Translation(token) jsondict = request.get_json() try: rjson = RequestJson(**jsondict)", "return buildResponse(200, logs) elif logs == []: return buildResponse(200, []) elif logs is", "log.writelogs(token, ip, '[Failed] Required field error. ') return buildResponse(400, \"Required field error. \")", "msg=rlist[1]) ''' =================日志===================== method: GET headers: Authorization: [your api key] type: NULL return:", "Allowed. \") else: return buildResponse(200, PBMT) if __name__ == '__main__': app.run('0.0.0.0', 81, debug=True)", "elif logs is None: return buildResponse(403, \"API key not valid. Please pass a", "!= 'POST': return buildResponse(403, \"Method Not Allowed. \") else: try: token = request.headers['Authorization']", "\"Method Not Allowed. 
\") else: try: token = request.headers['Authorization'] logs = log.getlogs(token) if", "@app.route('/languages/support', methods=['GET', 'POST']) def support_languages(): if request.method != 'GET': return buildResponse(403, \"Method Not", "Allowed. \") else: try: token = request.headers['Authorization'] except Exception: return buildResponse(403, \"API key", "Not Allowed. \") else: return buildResponse(200, PBMT) if __name__ == '__main__': app.run('0.0.0.0', 81,", "return buildResponse(code=rlist[0], msg=rlist[1]) ''' =================日志===================== method: GET headers: Authorization: [your api key] type:", "sort_keys=True)) response.headers['Content-type'] = 'application/json; charset=utf-8' return response ''' =================翻译===================== method: POST headers: Authorization:", "return buildResponse(403, \"API key not valid. Please pass a valid API key. \")", "Allowed. \") else: try: token = request.headers['Authorization'] logs = log.getlogs(token) if logs: logs", "request.method != 'GET': return buildResponse(403, \"Method Not Allowed. \") else: return buildResponse(200, PBMT)", "token = request.headers['Authorization'] except Exception: return buildResponse(403, \"API key not valid. Please pass", "flask import Flask, request, make_response import requests import json from core import Translation,", "rlist[0] == 200: log.writelogs(token, ip, '[Succeed]') else: log.writelogs(token, ip, '[Failed] '+rlist[1]) return buildResponse(code=rlist[0],", "= log.getlogs(token) if logs: logs = [(str(lo[0]), lo[1], lo[2], lo[3]) for lo in", "if request.method != 'GET': return buildResponse(403, \"Method Not Allowed. 
\") else: return buildResponse(200,", "lo[3]) for lo in logs] return buildResponse(200, logs) elif logs == []: return", "json_data = dict() json_data['code'] = code json_data['message'] = msg response = make_response(json.dumps(json_data, sort_keys=True))", "logs) elif logs == []: return buildResponse(200, []) elif logs is None: return", "= tobj.translate(text=rjson.text, target=rjson.target) if rlist[0] == 200: log.writelogs(token, ip, '[Succeed]') else: log.writelogs(token, ip,", "'POST']) def support_languages(): if request.method != 'GET': return buildResponse(403, \"Method Not Allowed. \")", "import Flask, request, make_response import requests import json from core import Translation, RequestJson,", "Please pass a valid API key. \") except Exception: return buildResponse(500, \"Query log", "pass a valid API key. \") tobj = Translation(token) jsondict = request.get_json() try:", "Translation(token) jsondict = request.get_json() try: rjson = RequestJson(**jsondict) except Exception: log.writelogs(token, ip, '[Failed]", "response = make_response(json.dumps(json_data, sort_keys=True)) response.headers['Content-type'] = 'application/json; charset=utf-8' return response ''' =================翻译===================== method:", "if request.method != 'GET': return buildResponse(403, \"Method Not Allowed. 
\") else: try: token", "log.writelogs(token, ip, '[Succeed]') else: log.writelogs(token, ip, '[Failed] '+rlist[1]) return buildResponse(code=rlist[0], msg=rlist[1]) ''' =================日志=====================", "try: token = request.headers['Authorization'] logs = log.getlogs(token) if logs: logs = [(str(lo[0]), lo[1],", "[your api key] type: NULL return: json { \"code\":[status code], \"message\":[calling log] }", "token = request.headers['Authorization'] logs = log.getlogs(token) if logs: logs = [(str(lo[0]), lo[1], lo[2],", "json { \"code\":[status code], \"message\":[calling log] } ''' @app.route('/languages/api/logs', methods=['GET', 'POST']) def getlog():", "jsondict = request.get_json() try: rjson = RequestJson(**jsondict) except Exception: log.writelogs(token, ip, '[Failed] Required", "\"taget\":[target language] } return: json { \"code\":[status code], \"message\":[translation text] } ''' @app.route('/languages/api/translate',", "\"API key not valid. Please pass a valid API key. \") tobj =", "buildResponse(400, \"Required field error. \") rlist = tobj.translate(text=rjson.text, target=rjson.target) if rlist[0] == 200:", "log] } ''' @app.route('/languages/api/logs', methods=['GET', 'POST']) def getlog(): if request.method != 'GET': return", "ip = request.remote_addr if request.method != 'POST': return buildResponse(403, \"Method Not Allowed. \")", "Translation, RequestJson, PBMT from bean import log app = Flask(__name__) def buildResponse(code, msg):", "key. 
\") tobj = Translation(token) jsondict = request.get_json() try: rjson = RequestJson(**jsondict) except", "app = Flask(__name__) def buildResponse(code, msg): json_data = dict() json_data['code'] = code json_data['message']", "lo[1], lo[2], lo[3]) for lo in logs] return buildResponse(200, logs) elif logs ==", "\") tobj = Translation(token) jsondict = request.get_json() try: rjson = RequestJson(**jsondict) except Exception:", "} ''' @app.route('/languages/api/translate', methods=['GET', 'POST']) def translate(): ip = request.remote_addr if request.method !=", "buildResponse(code, msg): json_data = dict() json_data['code'] = code json_data['message'] = msg response =", "-*- from flask import Flask, request, make_response import requests import json from core", "Required field error. ') return buildResponse(400, \"Required field error. \") rlist = tobj.translate(text=rjson.text,", "return buildResponse(403, \"Method Not Allowed. \") else: try: token = request.headers['Authorization'] logs =", "in logs] return buildResponse(200, logs) elif logs == []: return buildResponse(200, []) elif", "''' =================翻译===================== method: POST headers: Authorization: [your api key] type: json { \"text\":[text],", "Exception: log.writelogs(token, ip, '[Failed] Required field error. ') return buildResponse(400, \"Required field error.", "\"code\":[status code], \"message\":[translation text] } ''' @app.route('/languages/api/translate', methods=['GET', 'POST']) def translate(): ip =", "support_languages(): if request.method != 'GET': return buildResponse(403, \"Method Not Allowed. \") else: return", "def translate(): ip = request.remote_addr if request.method != 'POST': return buildResponse(403, \"Method Not", "key not valid. Please pass a valid API key. 
\") tobj = Translation(token)", "msg): json_data = dict() json_data['code'] = code json_data['message'] = msg response = make_response(json.dumps(json_data,", "return: json { \"code\":[status code], \"message\":[translation text] } ''' @app.route('/languages/api/translate', methods=['GET', 'POST']) def", "[(str(lo[0]), lo[1], lo[2], lo[3]) for lo in logs] return buildResponse(200, logs) elif logs", "error. ') return buildResponse(400, \"Required field error. \") rlist = tobj.translate(text=rjson.text, target=rjson.target) if", "ip, '[Succeed]') else: log.writelogs(token, ip, '[Failed] '+rlist[1]) return buildResponse(code=rlist[0], msg=rlist[1]) ''' =================日志===================== method:", "log.writelogs(token, ip, '[Failed] '+rlist[1]) return buildResponse(code=rlist[0], msg=rlist[1]) ''' =================日志===================== method: GET headers: Authorization:", "request.get_json() try: rjson = RequestJson(**jsondict) except Exception: log.writelogs(token, ip, '[Failed] Required field error.", "= request.headers['Authorization'] except Exception: return buildResponse(403, \"API key not valid. Please pass a", "== 200: log.writelogs(token, ip, '[Succeed]') else: log.writelogs(token, ip, '[Failed] '+rlist[1]) return buildResponse(code=rlist[0], msg=rlist[1])", "Not Allowed. \") else: try: token = request.headers['Authorization'] logs = log.getlogs(token) if logs:", "valid. Please pass a valid API key. \") except Exception: return buildResponse(500, \"Query", "field error. ') return buildResponse(400, \"Required field error. 
\") rlist = tobj.translate(text=rjson.text, target=rjson.target)", "lo in logs] return buildResponse(200, logs) elif logs == []: return buildResponse(200, [])", "method: POST headers: Authorization: [your api key] type: json { \"text\":[text], \"taget\":[target language]", "else: log.writelogs(token, ip, '[Failed] '+rlist[1]) return buildResponse(code=rlist[0], msg=rlist[1]) ''' =================日志===================== method: GET headers:", "Authorization: [your api key] type: json { \"text\":[text], \"taget\":[target language] } return: json", "= Translation(token) jsondict = request.get_json() try: rjson = RequestJson(**jsondict) except Exception: log.writelogs(token, ip,", "code], \"message\":[translation text] } ''' @app.route('/languages/api/translate', methods=['GET', 'POST']) def translate(): ip = request.remote_addr", "from core import Translation, RequestJson, PBMT from bean import log app = Flask(__name__)", "error. \") rlist = tobj.translate(text=rjson.text, target=rjson.target) if rlist[0] == 200: log.writelogs(token, ip, '[Succeed]')", "charset=utf-8' return response ''' =================翻译===================== method: POST headers: Authorization: [your api key] type:", "GET headers: Authorization: [your api key] type: NULL return: json { \"code\":[status code],", "request.headers['Authorization'] logs = log.getlogs(token) if logs: logs = [(str(lo[0]), lo[1], lo[2], lo[3]) for", "= [(str(lo[0]), lo[1], lo[2], lo[3]) for lo in logs] return buildResponse(200, logs) elif", "elif logs == []: return buildResponse(200, []) elif logs is None: return buildResponse(403,", "request.headers['Authorization'] except Exception: return buildResponse(403, \"API key not valid. 
Please pass a valid", "''' @app.route('/languages/api/logs', methods=['GET', 'POST']) def getlog(): if request.method != 'GET': return buildResponse(403, \"Method", "import Translation, RequestJson, PBMT from bean import log app = Flask(__name__) def buildResponse(code,", "json_data['message'] = msg response = make_response(json.dumps(json_data, sort_keys=True)) response.headers['Content-type'] = 'application/json; charset=utf-8' return response", "key] type: json { \"text\":[text], \"taget\":[target language] } return: json { \"code\":[status code],", "def buildResponse(code, msg): json_data = dict() json_data['code'] = code json_data['message'] = msg response", "code json_data['message'] = msg response = make_response(json.dumps(json_data, sort_keys=True)) response.headers['Content-type'] = 'application/json; charset=utf-8' return", "methods=['GET', 'POST']) def support_languages(): if request.method != 'GET': return buildResponse(403, \"Method Not Allowed.", "type: json { \"text\":[text], \"taget\":[target language] } return: json { \"code\":[status code], \"message\":[translation", "RequestJson(**jsondict) except Exception: log.writelogs(token, ip, '[Failed] Required field error. ') return buildResponse(400, \"Required", "try: rjson = RequestJson(**jsondict) except Exception: log.writelogs(token, ip, '[Failed] Required field error. ')", "= RequestJson(**jsondict) except Exception: log.writelogs(token, ip, '[Failed] Required field error. ') return buildResponse(400,", "log app = Flask(__name__) def buildResponse(code, msg): json_data = dict() json_data['code'] = code", "import log app = Flask(__name__) def buildResponse(code, msg): json_data = dict() json_data['code'] =", "\"Method Not Allowed. \") else: return buildResponse(200, PBMT) if __name__ == '__main__': app.run('0.0.0.0',", "valid API key. \") except Exception: return buildResponse(500, \"Query log exception. \") @app.route('/languages/support',", "buildResponse(403, \"Method Not Allowed. 
\") else: return buildResponse(200, PBMT) if __name__ == '__main__':", "return buildResponse(400, \"Required field error. \") rlist = tobj.translate(text=rjson.text, target=rjson.target) if rlist[0] ==", "request.remote_addr if request.method != 'POST': return buildResponse(403, \"Method Not Allowed. \") else: try:", "') return buildResponse(400, \"Required field error. \") rlist = tobj.translate(text=rjson.text, target=rjson.target) if rlist[0]", "@app.route('/languages/api/logs', methods=['GET', 'POST']) def getlog(): if request.method != 'GET': return buildResponse(403, \"Method Not", "= dict() json_data['code'] = code json_data['message'] = msg response = make_response(json.dumps(json_data, sort_keys=True)) response.headers['Content-type']", "type: NULL return: json { \"code\":[status code], \"message\":[calling log] } ''' @app.route('/languages/api/logs', methods=['GET',", "POST headers: Authorization: [your api key] type: json { \"text\":[text], \"taget\":[target language] }", "{ \"code\":[status code], \"message\":[translation text] } ''' @app.route('/languages/api/translate', methods=['GET', 'POST']) def translate(): ip", "= request.get_json() try: rjson = RequestJson(**jsondict) except Exception: log.writelogs(token, ip, '[Failed] Required field", "\"Method Not Allowed. \") else: try: token = request.headers['Authorization'] except Exception: return buildResponse(403,", "else: try: token = request.headers['Authorization'] except Exception: return buildResponse(403, \"API key not valid.", "\"Required field error. \") rlist = tobj.translate(text=rjson.text, target=rjson.target) if rlist[0] == 200: log.writelogs(token,", "\") else: try: token = request.headers['Authorization'] except Exception: return buildResponse(403, \"API key not", "'GET': return buildResponse(403, \"Method Not Allowed. 
\") else: try: token = request.headers['Authorization'] logs", "json from core import Translation, RequestJson, PBMT from bean import log app =", "for lo in logs] return buildResponse(200, logs) elif logs == []: return buildResponse(200,", "exception. \") @app.route('/languages/support', methods=['GET', 'POST']) def support_languages(): if request.method != 'GET': return buildResponse(403,", "= request.headers['Authorization'] logs = log.getlogs(token) if logs: logs = [(str(lo[0]), lo[1], lo[2], lo[3])", "key. \") except Exception: return buildResponse(500, \"Query log exception. \") @app.route('/languages/support', methods=['GET', 'POST'])", "rjson = RequestJson(**jsondict) except Exception: log.writelogs(token, ip, '[Failed] Required field error. ') return", "= request.remote_addr if request.method != 'POST': return buildResponse(403, \"Method Not Allowed. \") else:", "api key] type: json { \"text\":[text], \"taget\":[target language] } return: json { \"code\":[status", "\") except Exception: return buildResponse(500, \"Query log exception. \") @app.route('/languages/support', methods=['GET', 'POST']) def", "rlist = tobj.translate(text=rjson.text, target=rjson.target) if rlist[0] == 200: log.writelogs(token, ip, '[Succeed]') else: log.writelogs(token,", "return: json { \"code\":[status code], \"message\":[calling log] } ''' @app.route('/languages/api/logs', methods=['GET', 'POST']) def", "log exception. \") @app.route('/languages/support', methods=['GET', 'POST']) def support_languages(): if request.method != 'GET': return", "Not Allowed. \") else: try: token = request.headers['Authorization'] except Exception: return buildResponse(403, \"API", "valid API key. \") tobj = Translation(token) jsondict = request.get_json() try: rjson =", "\"API key not valid. Please pass a valid API key. \") except Exception:", "from flask import Flask, request, make_response import requests import json from core import", "pass a valid API key. 
\") except Exception: return buildResponse(500, \"Query log exception.", "RequestJson, PBMT from bean import log app = Flask(__name__) def buildResponse(code, msg): json_data", "make_response import requests import json from core import Translation, RequestJson, PBMT from bean", "buildResponse(403, \"Method Not Allowed. \") else: try: token = request.headers['Authorization'] logs = log.getlogs(token)", "key not valid. Please pass a valid API key. \") except Exception: return", "= msg response = make_response(json.dumps(json_data, sort_keys=True)) response.headers['Content-type'] = 'application/json; charset=utf-8' return response '''", "language] } return: json { \"code\":[status code], \"message\":[translation text] } ''' @app.route('/languages/api/translate', methods=['GET',", "} ''' @app.route('/languages/api/logs', methods=['GET', 'POST']) def getlog(): if request.method != 'GET': return buildResponse(403,", "core import Translation, RequestJson, PBMT from bean import log app = Flask(__name__) def", "request.method != 'GET': return buildResponse(403, \"Method Not Allowed. \") else: try: token =", "API key. \") except Exception: return buildResponse(500, \"Query log exception. \") @app.route('/languages/support', methods=['GET',", "target=rjson.target) if rlist[0] == 200: log.writelogs(token, ip, '[Succeed]') else: log.writelogs(token, ip, '[Failed] '+rlist[1])", "headers: Authorization: [your api key] type: json { \"text\":[text], \"taget\":[target language] } return:", "key] type: NULL return: json { \"code\":[status code], \"message\":[calling log] } ''' @app.route('/languages/api/logs'," ]
[ "проходит 2 раза: сначада кусок хитрого javascript кода, потом страница # сайта с", "цену со скидкой price_2 = price_holder.select_one('.price_2') if price_2: price = price_2.select_one('.price_group > .promo_price').text", "text html = get_html(url) from bs4 import BeautifulSoup root = BeautifulSoup(html, 'lxml') for", "Этот цикл асинхронный код делает синхронным while self.html is None: _app.processEvents() _app.quit() #", "= QApplication([]) self._page = QWebEnginePage() self._page.loadFinished.connect(self._load_finished_handler) self.html = None # Небольшой костыль для", "self.html is None: _app.processEvents() _app.quit() # Чтобы избежать падений скрипта self._page = None", "self._page.toHtml(self._callable) return ExtractorHtml(url).html text = 'mad' url = 'http://gama-gama.ru/search/?searchField=' + text html =", "price_2.select_one('.price_group > .promo_price').text # Удаление пустых символов пробелом import re price = re.sub(r'\\s+',", "= data def _load_finished_handler(self, _): self._counter_finished += 1 if self._counter_finished == 2: self._page.toHtml(self._callable)", "= 'mad' url = 'http://gama-gama.ru/search/?searchField=' + text html = get_html(url) from bs4 import", "', '') price = None price_holder = game.select_one('.catalog_price_holder') price_1 = price_holder.select_one('.price_1') if price_1:", "if price_2: price = price_2.select_one('.price_group > .promo_price').text # Удаление пустых символов пробелом import", "= 0 self._page.load(QUrl(url)) # Ожидание загрузки страницы и получения его содержимого # Этот", "price = price_2.select_one('.price_group > .promo_price').text # Удаление пустых символов пробелом import re price", "None: _app.processEvents() _app.quit() # Чтобы избежать падений скрипта self._page = None def _callable(self,", "None price_holder = game.select_one('.catalog_price_holder') price_1 = price_holder.select_one('.price_1') if price_1: price = price_1.text.strip() else:", "Удаление пустых символов пробелом import re 
price = re.sub(r'\\s+', ' ', price) price", "# Чтобы избежать падений скрипта self._page = None def _callable(self, data): self.html =", "страницы и получения его содержимого # Этот цикл асинхронный код делает синхронным while", "if self._counter_finished == 2: self._page.toHtml(self._callable) return ExtractorHtml(url).html text = 'mad' url = 'http://gama-gama.ru/search/?searchField='", "'') price = None price_holder = game.select_one('.catalog_price_holder') price_1 = price_holder.select_one('.price_1') if price_1: price", "потом страница # сайта с содержимым self._counter_finished = 0 self._page.load(QUrl(url)) # Ожидание загрузки", "_): self._counter_finished += 1 if self._counter_finished == 2: self._page.toHtml(self._callable) return ExtractorHtml(url).html text =", "if price_1: price = price_1.text.strip() else: # Содержит описание цены со скидкой. Вытаскиваем", "# Содержит описание цены со скидкой. Вытаскиваем цену со скидкой price_2 = price_holder.select_one('.price_2')", "скидкой price_2 = price_holder.select_one('.price_2') if price_2: price = price_2.select_one('.price_group > .promo_price').text # Удаление", "from bs4 import BeautifulSoup root = BeautifulSoup(html, 'lxml') for game in root.select('.catalog-content >", "bs4 import BeautifulSoup root = BeautifulSoup(html, 'lxml') for game in root.select('.catalog-content > a'):", "= name.replace('Купить ', '') price = None price_holder = game.select_one('.catalog_price_holder') price_1 = price_holder.select_one('.price_1')", "root.select('.catalog-content > a'): name = game['title'].strip() name = name.replace('Купить ', '') price =", "import QApplication from PyQt5.QtWebEngineWidgets import QWebEnginePage class ExtractorHtml: def __init__(self, url): _app =", "'http://gama-gama.ru/search/?searchField=' + text html = get_html(url) from bs4 import BeautifulSoup root = BeautifulSoup(html,", "-*- coding: utf-8 -*- __author__ = 'ipetrash' # Основа взята из http://stackoverflow.com/a/37755811/5909792 def", "= 
Вытаскиваем цену со скидкой price_2 =", "сайта с содержимым self._counter_finished = 0 self._page.load(QUrl(url)) # Ожидание загрузки страницы и получения", "= None price_holder = game.select_one('.catalog_price_holder') price_1 = price_holder.select_one('.price_1') if price_1: price = price_1.text.strip()", "in root.select('.catalog-content > a'): name = game['title'].strip() name = name.replace('Купить ', '') price", "страницы сайта http://gama-gama.ru # Загрузка страницы проходит 2 раза: сначада кусок хитрого javascript", "def __init__(self, url): _app = QApplication([]) self._page = QWebEnginePage() self._page.loadFinished.connect(self._load_finished_handler) self.html = None", "url): _app = QApplication([]) self._page = QWebEnginePage() self._page.loadFinished.connect(self._load_finished_handler) self.html = None # Небольшой", "QUrl from PyQt5.QtWidgets import QApplication from PyQt5.QtWebEngineWidgets import QWebEnginePage class ExtractorHtml: def __init__(self,", "self._page = None def _callable(self, data): self.html = data def _load_finished_handler(self, _): self._counter_finished", "_callable(self, data): self.html = data def _load_finished_handler(self, _): self._counter_finished += 1 if self._counter_finished", "PyQt5.QtWebEngineWidgets import QWebEnginePage class ExtractorHtml: def __init__(self, url): _app = QApplication([]) self._page =", "return ExtractorHtml(url).html text = 'mad' url = 'http://gama-gama.ru/search/?searchField=' + text html = get_html(url)", "# Основа взята из http://stackoverflow.com/a/37755811/5909792 def get_html(url): from PyQt5.QtCore import QUrl from PyQt5.QtWidgets", "+= 1 if self._counter_finished == 2: self._page.toHtml(self._callable) return ExtractorHtml(url).html text = 'mad' url", "game in root.select('.catalog-content > a'): name = game['title'].strip() name = name.replace('Купить ', '')", "QWebEnginePage class ExtractorHtml: def __init__(self, url): _app = QApplication([]) self._page = QWebEnginePage() 
self._page.loadFinished.connect(self._load_finished_handler)", "Содержит описание цены со скидкой. Вытаскиваем цену со скидкой price_2 = price_holder.select_one('.price_2') if", "# Удаление пустых символов пробелом import re price = re.sub(r'\\s+', ' ', price)", "пробелом import re price = re.sub(r'\\s+', ' ', price) price = price.strip() print(name,", "= 'ipetrash' # Основа взята из http://stackoverflow.com/a/37755811/5909792 def get_html(url): from PyQt5.QtCore import QUrl", "для получения содержимого страницы сайта http://gama-gama.ru # Загрузка страницы проходит 2 раза: сначада", "символов пробелом import re price = re.sub(r'\\s+', ' ', price) price = price.strip()", "QApplication from PyQt5.QtWebEngineWidgets import QWebEnginePage class ExtractorHtml: def __init__(self, url): _app = QApplication([])", "_app.processEvents() _app.quit() # Чтобы избежать падений скрипта self._page = None def _callable(self, data):", "-*- __author__ = 'ipetrash' # Основа взята из http://stackoverflow.com/a/37755811/5909792 def get_html(url): from PyQt5.QtCore", "root = BeautifulSoup(html, 'lxml') for game in root.select('.catalog-content > a'): name = game['title'].strip()", "= price_2.select_one('.price_group > .promo_price').text # Удаление пустых символов пробелом import re price =", "# сайта с содержимым self._counter_finished = 0 self._page.load(QUrl(url)) # Ожидание загрузки страницы и", "while self.html is None: _app.processEvents() _app.quit() # Чтобы избежать падений скрипта self._page =", "скидкой. 
Вытаскиваем цену со скидкой price_2 = price_holder.select_one('.price_2') if price_2: price = price_2.select_one('.price_group", "код делает синхронным while self.html is None: _app.processEvents() _app.quit() # Чтобы избежать падений", "_load_finished_handler(self, _): self._counter_finished += 1 if self._counter_finished == 2: self._page.toHtml(self._callable) return ExtractorHtml(url).html text", "data def _load_finished_handler(self, _): self._counter_finished += 1 if self._counter_finished == 2: self._page.toHtml(self._callable) return", "import BeautifulSoup root = BeautifulSoup(html, 'lxml') for game in root.select('.catalog-content > a'): name", "price = None price_holder = game.select_one('.catalog_price_holder') price_1 = price_holder.select_one('.price_1') if price_1: price =", "содержимого # Этот цикл асинхронный код делает синхронным while self.html is None: _app.processEvents()", "страница # сайта с содержимым self._counter_finished = 0 self._page.load(QUrl(url)) # Ожидание загрузки страницы", "price = price_1.text.strip() else: # Содержит описание цены со скидкой. 
Вытаскиваем цену со", "import QUrl from PyQt5.QtWidgets import QApplication from PyQt5.QtWebEngineWidgets import QWebEnginePage class ExtractorHtml: def", "from PyQt5.QtCore import QUrl from PyQt5.QtWidgets import QApplication from PyQt5.QtWebEngineWidgets import QWebEnginePage class", "кода, потом страница # сайта с содержимым self._counter_finished = 0 self._page.load(QUrl(url)) # Ожидание", "get_html(url): from PyQt5.QtCore import QUrl from PyQt5.QtWidgets import QApplication from PyQt5.QtWebEngineWidgets import QWebEnginePage", "self._page.loadFinished.connect(self._load_finished_handler) self.html = None # Небольшой костыль для получения содержимого страницы сайта http://gama-gama.ru", "_app = QApplication([]) self._page = QWebEnginePage() self._page.loadFinished.connect(self._load_finished_handler) self.html = None # Небольшой костыль", "1 if self._counter_finished == 2: self._page.toHtml(self._callable) return ExtractorHtml(url).html text = 'mad' url =", "+ text html = get_html(url) from bs4 import BeautifulSoup root = BeautifulSoup(html, 'lxml')", "BeautifulSoup root = BeautifulSoup(html, 'lxml') for game in root.select('.catalog-content > a'): name =", "= BeautifulSoup(html, 'lxml') for game in root.select('.catalog-content > a'): name = game['title'].strip() name", "= game['title'].strip() name = name.replace('Купить ', '') price = None price_holder = game.select_one('.catalog_price_holder')", "= game.select_one('.catalog_price_holder') price_1 = price_holder.select_one('.price_1') if price_1: price = price_1.text.strip() else: # Содержит", "self._counter_finished += 1 if self._counter_finished == 2: self._page.toHtml(self._callable) return ExtractorHtml(url).html text = 'mad'", "self.html = data def _load_finished_handler(self, _): self._counter_finished += 1 if self._counter_finished == 2:", "name.replace('Купить ', '') price = None price_holder = game.select_one('.catalog_price_holder') price_1 = price_holder.select_one('.price_1') if", "кусок хитрого 
javascript кода, потом страница # сайта с содержимым self._counter_finished = 0", "костыль для получения содержимого страницы сайта http://gama-gama.ru # Загрузка страницы проходит 2 раза:", "data): self.html = data def _load_finished_handler(self, _): self._counter_finished += 1 if self._counter_finished ==", "QWebEnginePage() self._page.loadFinished.connect(self._load_finished_handler) self.html = None # Небольшой костыль для получения содержимого страницы сайта", "self.html = None # Небольшой костыль для получения содержимого страницы сайта http://gama-gama.ru #", "из http://stackoverflow.com/a/37755811/5909792 def get_html(url): from PyQt5.QtCore import QUrl from PyQt5.QtWidgets import QApplication from", "self._page = QWebEnginePage() self._page.loadFinished.connect(self._load_finished_handler) self.html = None # Небольшой костыль для получения содержимого", "Вытаскиваем цену со скидкой price_2 = price_holder.select_one('.price_2') if price_2: price = price_2.select_one('.price_group >", "is None: _app.processEvents() _app.quit() # Чтобы избежать падений скрипта self._page = None def", "def _load_finished_handler(self, _): self._counter_finished += 1 if self._counter_finished == 2: self._page.toHtml(self._callable) return ExtractorHtml(url).html", "сайта http://gama-gama.ru # Загрузка страницы проходит 2 раза: сначада кусок хитрого javascript кода,", "name = name.replace('Купить ', '') price = None price_holder = game.select_one('.catalog_price_holder') price_1 =", "# Ожидание загрузки страницы и получения его содержимого # Этот цикл асинхронный код", "== 2: self._page.toHtml(self._callable) return ExtractorHtml(url).html text = 'mad' url = 'http://gama-gama.ru/search/?searchField=' + text", "self._counter_finished = 0 self._page.load(QUrl(url)) # Ожидание загрузки страницы и получения его содержимого #", "self._page.load(QUrl(url)) # Ожидание загрузки страницы и получения его содержимого # Этот цикл асинхронный", "__init__(self, url): _app = QApplication([]) 
self._page = QWebEnginePage() self._page.loadFinished.connect(self._load_finished_handler) self.html = None #", "from PyQt5.QtWebEngineWidgets import QWebEnginePage class ExtractorHtml: def __init__(self, url): _app = QApplication([]) self._page", "text = 'mad' url = 'http://gama-gama.ru/search/?searchField=' + text html = get_html(url) from bs4", "пустых символов пробелом import re price = re.sub(r'\\s+', ' ', price) price =", "раза: сначада кусок хитрого javascript кода, потом страница # сайта с содержимым self._counter_finished", "Чтобы избежать падений скрипта self._page = None def _callable(self, data): self.html = data", "= None def _callable(self, data): self.html = data def _load_finished_handler(self, _): self._counter_finished +=", "# Небольшой костыль для получения содержимого страницы сайта http://gama-gama.ru # Загрузка страницы проходит", "цены со скидкой. Вытаскиваем цену со скидкой price_2 = price_holder.select_one('.price_2') if price_2: price", "html = get_html(url) from bs4 import BeautifulSoup root = BeautifulSoup(html, 'lxml') for game", "его содержимого # Этот цикл асинхронный код делает синхронным while self.html is None:", "from PyQt5.QtWidgets import QApplication from PyQt5.QtWebEngineWidgets import QWebEnginePage class ExtractorHtml: def __init__(self, url):", "цикл асинхронный код делает синхронным while self.html is None: _app.processEvents() _app.quit() # Чтобы", "= get_html(url) from bs4 import BeautifulSoup root = BeautifulSoup(html, 'lxml') for game in", "и получения его содержимого # Этот цикл асинхронный код делает синхронным while self.html", "хитрого javascript кода, потом страница # сайта с содержимым self._counter_finished = 0 self._page.load(QUrl(url))", "избежать падений скрипта self._page = None def _callable(self, data): self.html = data def", "price_1.text.strip() else: # Содержит описание цены со скидкой. 
Вытаскиваем цену со скидкой price_2", ".promo_price').text # Удаление пустых символов пробелом import re price = re.sub(r'\\s+', ' '," ]
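The price-string cleanup used above (collapse any run of whitespace, then strip the ends) can be exercised on its own. This is a minimal sketch; the helper name `normalize_price` is hypothetical, but the `re.sub(r'\s+', ' ', ...)` pattern is the same one the scraper applies:

```python
import re

def normalize_price(text):
    # Collapse any run of whitespace (spaces, NBSP, tabs, newlines)
    # into a single space, then strip the ends.
    return re.sub(r'\s+', ' ', text).strip()

# Site prices often contain non-breaking spaces and trailing newlines:
print(normalize_price('1\xa0299  руб.\n'))  # -> 1 299 руб.
```

Note that for `str` patterns in Python 3, `\s` matches Unicode whitespace, so the non-breaking space (`\xa0`) common in formatted prices is collapsed too.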
# TreeCorr is free software: redistribution and use in source and binary forms,
# with or without modification, are permitted provided that the following
# conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
#    list of conditions, and the disclaimer given in the accompanying LICENSE
#    file.
# 2. Redistributions in binary form must reproduce the above copyright notice,
#    this list of conditions, and the disclaimer given in the documentation
#    and/or other materials provided with the distribution.

import numpy as np
import os
import time
import coord
import treecorr

from test_helper import get_from_wiki, CaptureLog, assert_raises, profile, timer

@timer
def test_dessv():
    try:
        import fitsio
    except ImportError:
        print('Skipping dessv test, since fitsio is not installed')
        return

    #treecorr.set_omp_threads(1);
    get_from_wiki('des_sv.fits')
    file_name = os.path.join('data','des_sv.fits')
    cat = treecorr.Catalog(file_name, ra_col='ra', dec_col='dec', ra_units='deg', dec_units='deg')

    # Use an odd number to make sure we force some of the shuffle bits in
    # InitializeCenters.
    npatch = 111
    field = cat.getNField()
    t0 = time.time()
    patches, cen = field.run_kmeans(npatch)
    t1 = time.time()
    print('patches = ',np.unique(patches))
    assert len(patches) == cat.ntot
    assert min(patches) == 0
    assert max(patches) == npatch-1

    # KMeans minimizes the total inertia.
    # Check this value and the rms size, which should also be quite small.
    xyz = np.array([cat.x, cat.y, cat.z]).T
    inertia = np.array([np.sum((xyz[patches==i] - cen[i])**2) for i in range(npatch)])
    sizes = np.array([np.mean((xyz[patches==i] - cen[i])**2) for i in range(npatch)])**0.5
    sizes *= 180. / np.pi * 60.  # convert to arcmin
    counts = np.array([np.sum(patches==i) for i in range(npatch)])

    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    print('mean size = ',np.mean(sizes))
    print('rms size = ',np.std(sizes))
    assert np.sum(inertia) < 200.
    # This is specific to this particular field and npatch.
    assert np.std(inertia) < 0.3 * np.mean(inertia)  # rms is usually small relative to the mean.
    assert np.std(sizes) < 0.15 * np.mean(sizes)  # sizes have even less spread usually.

    # Should all have a similar number of points.
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

@timer
def test_zero_weight():
    # Like the above, but where many of the weights are zero.
    # There was a bug where w=0 objects were not assigned to any patch.
    ngal = 10000
    s = 10.
    rng = np.random.RandomState(8675309)
    x = rng.normal(0,s, (ngal,) )
    y = rng.normal(0,s, (ngal,) ) + 100  # Put everything at large y, so smallish angle on sky
    z = rng.normal(0,s, (ngal,) )
    w = np.zeros(ngal)
    w[np.random.choice(range(ngal), ngal//10, replace=False)] = 1.0
    ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z)

    cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w,
                           keep_zero_weight=True)
    treecorr.set_omp_threads(1)
    npatch = 16
    field = cat.getNField()
    t0 = time.time()
    p, cen = field.run_kmeans(npatch)
    t1 = time.time()
    print('patches = ',np.unique(p))
    assert len(p) == cat.ntot
    assert min(p) == 0
    assert max(p) == npatch-1
    print('w>0 patches = ',np.unique(p[w>0]))
    print('w==0 patches = ',np.unique(p[w==0]))
    assert set(p[w>0]) == set(p[w==0])
p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for", "min_top=10') field = cat.getNField(min_top=10) cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape == (npatch, 3)", "= cat.getNField() cen_n = field.kmeans_initialize_centers(npatch=n, init='kmeans++') p_n = field.kmeans_assign_patches(cen_n) np.testing.assert_equal(sorted(p_n), list(range(n))) @timer def", "t0 = time.time() p, c = field.run_kmeans(npatch) t1 = time.time() print('patches = ',np.unique(p))", "print('max counts = ',np.max(counts)) # Finally, use a field with lots of top", "= rng.normal(0,s, (ngal,) ) y = rng.normal(0,s, (ngal,) ) z = rng.normal(0,s, (ngal,)", "# An additional check here is that this works with other fields besides", "210. assert np.std(inertia) < 0.4 * np.mean(inertia) # I've seen over 0.3 x", "with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=-100, init='kmeans++') # Should be valid to give npatch = 1,", "Should all have similar number of points. Nothing is required here though. print('mean", "modification, are permitted provided that the following # conditions are met: # #", "= cat.getNField() cen_n = field.kmeans_initialize_centers(npatch=n, init='random') p_n = field.kmeans_assign_patches(cen_n) np.testing.assert_equal(sorted(p_n), list(range(n))) @timer def", "== 0 assert max(p) == npatch-1 print('w>0 patches = ',np.unique(p[w>0])) print('w==0 patches =", "max(p) == npatch-1 inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in", "return #treecorr.set_omp_threads(1); get_from_wiki('des_sv.fits') file_name = os.path.join('data','des_sv.fits') cat = treecorr.Catalog(file_name, ra_col='ra', dec_col='dec', ra_units='deg', dec_units='deg')", "xy = np.array([x, y]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape", "these aren't actually all that similar. 
The range is more than a #", "np.testing.assert_array_equal(cat2.patch, cat3.patch) np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) if __name__ == '__main__': test_dessv() test_radec() test_3d() test_2d() test_init_random()", "',cen) print('xyz = ',xyz) direct_cen = np.array([np.average(xyz[p==i], axis=0, weights=w[p==i]) for i in range(npatch)])", "xyz = np.array([cat.x/cat.r, cat.y/cat.r, cat.z/cat.r]).T print('cen = ',cen) print('xyz = ',xyz) direct_cen =", "used to be a bug where w=0 objects were not assigned to any", "= treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', r=r, w=w) field = cat2.getNField() t0 = time.time()", "# This is specific to this particular field and npatch. assert np.std(inertia) <", "assert max(p) == npatch-1 inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2) for i", "= ',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 210. assert np.std(inertia) <", "(ngal,) ) cat = treecorr.Catalog(x=x, y=y, w=w, g1=g1, g2=g2, k=k) npatch = 111", "is that this works with other fields besides NField, even though # in", "inertia = ',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 200. # Total", "xyz = np.array([x, y, z]).T inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for", "inertia = ',np.std(inertia)) assert np.sum(inertia) < 200. # Total shouldn't increase much. 
(And", "x = rng.normal(0,s, (ngal,) ) y = rng.normal(0,s, (ngal,) ) + 100 #", "additional check here is that this works with other fields besides NField, even", "assert np.std(inertia) < 0.3 * np.mean(inertia) # rms is usually small mean #", "time.time() assert len(patches) == cat.ntot assert min(patches) == 0 assert max(patches) == npatch-1", "0.3 * np.mean(inertia) # rms is usually small mean print('mean counts = ',np.mean(counts))", "+ 1 cat = treecorr.Catalog(x=x, y=y, z=z, w=w) npatch = 111 field =", "np.sum(inertia2) < np.sum(inertia1) # Repeat in spherical print('spher with init=kmeans++') ra, dec =", "/ coord.degrees) print('maxra = ',np.max(ra) * coord.radians / coord.degrees) print('mindec = ',np.min(dec) *", "disclaimer given in the accompanying LICENSE # file. # 2. Redistributions in binary", "= ',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_radec(): # Very similar to", "= ',np.sum(inertia1)) # Now run the normal way p2, cen2 = field.run_kmeans(npatch, init='kmeans++',", "it will run even # if the user doesn't have fitsio installed. #", "profile, timer @timer def test_dessv(): try: import fitsio except ImportError: print('Skipping dessv test,", "= ',np.std(inertia)) assert np.sum(inertia) < 33000. assert np.std(inertia) < 0.3 * np.mean(inertia) #", "np.array([np.sum((xyz[patches==i] - cen[i])**2) for i in range(npatch)]) sizes = np.array([np.mean((xyz[patches==i] - cen[i])**2) for", "doesn't fail.) # Do this with fewer points though, since it's not particularly", "',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1) with assert_raises(ValueError): field.run_kmeans(npatch, init='invalid') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch, init='invalid')", "should be even smaller here. assert np.std(sizes) < 0.1 * np.mean(sizes) # This", "since random isn't a great initialization. 
p2, cen2 = field.run_kmeans(npatch, init='random', max_iter=1000) inertia2", "same number of patches as galaxies, each galaxy gets a patch. # (This", "shouldn't increase much. (And often decreases.) assert np.std(inertia) < 0.15 * np.mean(inertia) #", "not installed') return #treecorr.set_omp_threads(1); get_from_wiki('des_sv.fits') file_name = os.path.join('data','des_sv.fits') cat = treecorr.Catalog(file_name, ra_col='ra', dec_col='dec',", "it varies whether high weight points happen to be near the # edges", "(xyz[p2==i] - cen[i])**2) for i in range(npatch)]) counts = np.array([np.sum(w[p2==i]) for i in", "assert max(patches) == npatch-1 inertia = np.array([np.sum((xyz[patches==i] - cen[i])**2) for i in range(npatch)])", "cen2 = field.run_kmeans(npatch, init='random', max_iter=1000) inertia2 = np.array([np.sum((xy[p2==i] - cen2[i])**2) for i in", "= np.array([np.sum(w[p==i]) for i in range(npatch)]) # This doesn't give as good an", "check that it doesn't fail.) # Do this with fewer points though, since", "assert len(p) == cat.ntot assert min(p) == 0 assert max(p) == npatch-1 inertia", "xy = np.array([x, y]).T inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2) for i", "= field.run_kmeans(npatch, init='kmeans++', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)])", "== (npatch, 3) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1)) assert len(p1) == cat.ntot", "counts = ',np.max(counts)) @timer def test_init_random(): # Test the init=random option ngal =", "= 43 field = cat.getNField(max_top=5) t0 = time.time() patches, cen = field.run_kmeans(npatch) t1", "(ra,dec) -> (ra,dec,r) cat3 = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w, patch_centers=cat2.patch_centers) np.testing.assert_array_equal(cat2.patch,", "npatch-1 print('w>0 patches = ',np.unique(p[w>0])) print('w==0 patches = ',np.unique(p[w==0])) assert set(p[w>0]) == set(p[w==0])", "< 
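# The inertia bookkeeping these tests rely on can be illustrated with a
# NumPy-only Lloyd iteration.  This is a minimal sketch with hypothetical helper
# names, not TreeCorr's API (TreeCorr runs kmeans in its C++ layer); it just
# shows why run_kmeans with more iterations can only lower the total inertia.

```python
import numpy as np

def assign_patches(xy, centers):
    # Nearest-center assignment by squared Euclidean distance.
    d2 = ((xy[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2, axis=1)

def total_inertia(xy, centers, patches):
    # Within-patch sum of squared distances to the patch center,
    # matching the tests' np.sum((xyz[p==i] - cen[i])**2) bookkeeping.
    return sum(((xy[patches == i] - centers[i]) ** 2).sum()
               for i in range(len(centers)))

def run_kmeans(xy, centers, max_iter=100):
    # Plain Lloyd iteration: assign points, then move centers to patch means.
    # Each step is non-increasing in total inertia.
    centers = centers.copy()
    for _ in range(max_iter):
        patches = assign_patches(xy, centers)
        new_centers = np.array([xy[patches == i].mean(axis=0)
                                if np.any(patches == i) else centers[i]
                                for i in range(len(centers))])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return assign_patches(xy, centers), centers

rng = np.random.RandomState(8675309)
xy = rng.normal(0., 1., (2000, 2))
init = xy[rng.choice(len(xy), 8, replace=False)]       # init='random' analogue
inertia0 = total_inertia(xy, init, assign_patches(xy, init))
patches, centers = run_kmeans(xy, init)
inertia1 = total_inertia(xy, centers, patches)          # never exceeds inertia0
```

# This mirrors the structure of the assertions above: the converged inertia
# (inertia1) is compared against the inertia of the raw initialization (inertia0).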
@timer
def test_3d():
    # Like the above, but using x,y,z positions.
    ngal = 100000
    s = 1.
    rng = np.random.RandomState(8675309)
    x = rng.normal(0,s, (ngal,) )
    y = rng.normal(0,s, (ngal,) )
    z = rng.normal(0,s, (ngal,) )
    w = rng.random_sample(ngal) + 1
    cat = treecorr.Catalog(x=x, y=y, z=z, w=w)
    npatch = 111
    field = cat.getNField()
    t0 = time.time()
    p, cen = field.run_kmeans(npatch)
    t1 = time.time()
    assert len(p) == cat.ntot
    assert min(p) == 0
    assert max(p) == npatch-1

    xyz = np.array([x, y, z]).T
    inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(w[p==i]) for i in range(npatch)])

    print('With standard algorithm:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    assert np.sum(inertia) < 33000.
    assert np.std(inertia) < 0.3 * np.mean(inertia)  # rms is usually small compared to the mean
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

    # Check the alternate algorithm.  rms inertia should be lower.
    t0 = time.time()
    p, cen = field.run_kmeans(npatch, alt=True)
    t1 = time.time()
    assert len(p) == cat.ntot
    assert min(p) == 0
    assert max(p) == npatch-1

    inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(w[p==i]) for i in range(npatch)])

    print('With alternate algorithm:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    assert np.sum(inertia) < 33000.
    assert np.std(inertia) < 0.1 * np.mean(inertia)  # rms should be even smaller here.
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

@timer
def test_2d():
    # Like the above, but using x,y positions.
    # An additional check here is that this works with other fields besides NField, even though
    # in practice NField will almost always be used.
    ngal = 100000
    s = 1.
    rng = np.random.RandomState(8675309)
    x = rng.normal(0,s, (ngal,) )
    y = rng.normal(0,s, (ngal,) )
    w = rng.random_sample(ngal) + 1
    g1 = rng.normal(0,s, (ngal,) )
    g2 = rng.normal(0,s, (ngal,) )
    k = rng.normal(0,s, (ngal,) )
    cat = treecorr.Catalog(x=x, y=y, w=w, g1=g1, g2=g2, k=k)
    npatch = 111
    field = cat.getGField()
    t0 = time.time()
    p, cen = field.run_kmeans(npatch)
    t1 = time.time()
    assert len(p) == cat.ntot
    assert min(p) == 0
    assert max(p) == npatch-1

    xy = np.array([x, y]).T
    inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(w[p==i]) for i in range(npatch)])

    print('With standard algorithm:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    assert np.sum(inertia) < 5300.
    assert np.std(inertia) < 0.3 * np.mean(inertia)  # rms is usually small compared to the mean
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

    # Check the alternate algorithm.  rms inertia should be lower.
    t0 = time.time()
    p, cen = field.run_kmeans(npatch, alt=True)
    t1 = time.time()
    assert len(p) == cat.ntot
    assert min(p) == 0
    assert max(p) == npatch-1

    inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(w[p==i]) for i in range(npatch)])

    print('With alternate algorithm:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    assert np.sum(inertia) < 5300.
    assert np.std(inertia) < 0.1 * np.mean(inertia)  # rms should be even smaller here.
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

    # Finally, use a field with lots of top level cells to check the other branch in
    # InitializeCenters.
    field = cat.getKField(min_top=10)
    t0 = time.time()
    p, cen = field.run_kmeans(npatch)
    t1 = time.time()
    assert len(p) == cat.ntot
    assert min(p) == 0
    assert max(p) == npatch-1

    inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(w[p==i]) for i in range(npatch)])

    # This doesn't give as good an initialization, so these are a bit worse usually.
    print('With min_top=10:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    assert np.sum(inertia) < 5300.
    assert np.std(inertia) < 0.4 * np.mean(inertia)  # I've seen over 0.3 x mean here.
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

@timer
def test_init_random():
    # Test the init=random option.
    ngal = 100000
    s = 1.
    rng = np.random.RandomState(8675309)
    x = rng.normal(0,s, (ngal,) )
    y = rng.normal(0,s, (ngal,) )
    z = rng.normal(0,s, (ngal,) )
    cat = treecorr.Catalog(x=x, y=y, z=z)
    xyz = np.array([x, y, z]).T

    # Skip the refine_centers step.
    print('3d with init=random')
    npatch = 10
    field = cat.getNField()
    cen1 = field.kmeans_initialize_centers(npatch, 'random')
    assert cen1.shape == (npatch, 3)
    p1 = field.kmeans_assign_patches(cen1)
    print('patches = ',np.unique(p1))
    assert len(p1) == cat.ntot
    assert min(p1) == 0
    assert max(p1) == npatch-1

    inertia1 = np.array([np.sum((xyz[p1==i] - cen1[i])**2) for i in range(npatch)])
    counts1 = np.array([np.sum(p1==i) for i in range(npatch)])
    print('counts = ',counts1)
    print('rms counts = ',np.std(counts1))
    print('total inertia = ',np.sum(inertia1))

    # Now run the normal way.
    # Use higher max_iter, since random isn't a great initialization.
    p2, cen2 = field.run_kmeans(npatch, init='random', max_iter=1000)
    inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)])
    counts2 = np.array([np.sum(p2==i) for i in range(npatch)])
    print('total inertia => ',np.sum(inertia2))
    assert np.sum(inertia2) < np.sum(inertia1)

    # Repeat in spherical
    print('spher with init=random')
    ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z)
    cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad')
    xyz = np.array([cat.x, cat.y, cat.z]).T
    field = cat.getNField()
    cen1 = field.kmeans_initialize_centers(npatch, 'random')
    assert cen1.shape == (npatch, 3)
    p1 = field.kmeans_assign_patches(cen1)
    assert len(p1) == cat.ntot
    assert min(p1) == 0
    assert max(p1) == npatch-1
    inertia1 = np.array([np.sum((xyz[p1==i] - cen1[i])**2) for i in range(npatch)])
    print('total inertia = ',np.sum(inertia1))
    p2, cen2 = field.run_kmeans(npatch, init='random', max_iter=1000)
    inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)])
    print('total inertia => ',np.sum(inertia2))
    assert np.sum(inertia2) < np.sum(inertia1)

    # Repeat in 2d
    print('2d with init=random')
    cat = treecorr.Catalog(x=x, y=y)
    xy = np.array([x, y]).T
    field = cat.getNField()
    cen1 = field.kmeans_initialize_centers(npatch, 'random')
    assert cen1.shape == (npatch, 2)
    p1 = field.kmeans_assign_patches(cen1)
    assert len(p1) == cat.ntot
    assert min(p1) == 0
    assert max(p1) == npatch-1
    inertia1 = np.array([np.sum((xy[p1==i] - cen1[i])**2) for i in range(npatch)])
    print('total inertia = ',np.sum(inertia1))
    p2, cen2 = field.run_kmeans(npatch, init='random', max_iter=1000)
    inertia2 = np.array([np.sum((xy[p2==i] - cen2[i])**2) for i in range(npatch)])
    print('total inertia => ',np.sum(inertia2))
    assert np.sum(inertia2) < np.sum(inertia1)

    # Use a field with lots of top level cells
    print('3d with init=random, min_top=10')
    cat = treecorr.Catalog(x=x, y=y, z=z)
    xyz = np.array([x, y, z]).T
    field = cat.getNField(min_top=10)
    cen1 = field.kmeans_initialize_centers(npatch, 'random')
    assert cen1.shape == (npatch, 3)
    p1 = field.kmeans_assign_patches(cen1)
    assert len(p1) == cat.ntot
    assert min(p1) == 0
    assert max(p1) == npatch-1
    inertia1 = np.array([np.sum((xyz[p1==i] - cen1[i])**2) for i in range(npatch)])
    print('total inertia = ',np.sum(inertia1))
    p2, cen2 = field.run_kmeans(npatch, init='random', max_iter=1000)
    inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)])
    print('total inertia => ',np.sum(inertia2))
    assert np.sum(inertia2) < np.sum(inertia1)

    # Invalid init strings and npatch values should raise.
    with assert_raises(ValueError):
        field.run_kmeans(npatch, init='invalid')
    with assert_raises(ValueError):
        field.kmeans_initialize_centers(npatch, init='invalid')
    with assert_raises(ValueError):
        field.kmeans_initialize_centers(npatch=ngal*2, init='random')
    with assert_raises(ValueError):
        field.kmeans_initialize_centers(npatch=ngal+1, init='random')
    with assert_raises(ValueError):
        field.kmeans_initialize_centers(npatch=0, init='random')
    with assert_raises(ValueError):
        field.kmeans_initialize_centers(npatch=-100, init='random')

    # Should be valid to give npatch = 1, although not particularly useful.
    cen_1 = field.kmeans_initialize_centers(npatch=1, init='random')
    p_1 = field.kmeans_assign_patches(cen_1)
    np.testing.assert_equal(p_1, np.zeros(ngal))

    # If same number of patches as galaxies, each galaxy gets a patch.
    # (This is stupid of course, but check that it doesn't fail.)
    # Do this with fewer points though, since it's not particularly fast with N=10^5.
    n = 100
    cat = treecorr.Catalog(ra=ra[:n], dec=dec[:n], ra_units='rad', dec_units='rad')
    field = cat.getNField()
    cen_n = field.kmeans_initialize_centers(npatch=n, init='random')
    p_n = field.kmeans_assign_patches(cen_n)
    np.testing.assert_equal(sorted(p_n), list(range(n)))
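# The init='kmeans++' option exercised by these tests follows the standard
# k-means++ seeding rule.  A small NumPy sketch of that rule (hypothetical
# function name, not TreeCorr's implementation): each new center is drawn with
# probability proportional to its squared distance from the nearest center
# already chosen, which tends to spread the initial centers out and explains
# why it usually beats init='random' on starting inertia.

```python
import numpy as np

def kmeanspp_init(pts, npatch, rng):
    # Start from one uniformly random point.
    centers = [pts[rng.randint(len(pts))]]
    for _ in range(npatch - 1):
        # Squared distance from each point to its nearest current center.
        d2 = np.min(((pts[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(axis=2),
                    axis=1)
        # Sample the next center proportionally to d2 (already-chosen points
        # have d2 == 0, so they can't be picked again).
        centers.append(pts[rng.choice(len(pts), p=d2 / d2.sum())])
    return np.array(centers)

rng = np.random.RandomState(8675309)
pts = rng.normal(0., 1., (5000, 3))
cen = kmeanspp_init(pts, 10, rng)
```

# The result has the same shape as field.kmeans_initialize_centers(npatch, 'kmeans++')
# returns for a 3d field: one row per patch.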
@timer
def test_init_kmpp():
    # Test the init=kmeans++ option.
    ngal = 100000
    s = 1.
    rng = np.random.RandomState(8675309)
    x = rng.normal(0,s, (ngal,) )
    y = rng.normal(0,s, (ngal,) )
    z = rng.normal(0,s, (ngal,) )
    cat = treecorr.Catalog(x=x, y=y, z=z)
    xyz = np.array([x, y, z]).T

    # Skip the refine_centers step.
    print('3d with init=kmeans++')
    npatch = 10
    field = cat.getNField()
    cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++')
    assert cen1.shape == (npatch, 3)
    p1 = field.kmeans_assign_patches(cen1)
    print('patches = ',np.unique(p1))
    assert len(p1) == cat.ntot
    assert min(p1) == 0
    assert max(p1) == npatch-1

    inertia1 = np.array([np.sum((xyz[p1==i] - cen1[i])**2) for i in range(npatch)])
    counts1 = np.array([np.sum(p1==i) for i in range(npatch)])
    print('counts = ',counts1)
    print('rms counts = ',np.std(counts1))
    print('total inertia = ',np.sum(inertia1))

    # Now run the normal way.
    p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000)
    inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)])
    counts2 = np.array([np.sum(p2==i) for i in range(npatch)])
    print('total inertia => ',np.sum(inertia2))
    assert np.sum(inertia2) < np.sum(inertia1)

    # Repeat in spherical
    print('spher with init=kmeans++')
    ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z)
    cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad')
    xyz = np.array([cat.x, cat.y, cat.z]).T
    field = cat.getNField()
    cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++')
    assert cen1.shape == (npatch, 3)
    p1 = field.kmeans_assign_patches(cen1)
    assert len(p1) == cat.ntot
    assert min(p1) == 0
    assert max(p1) == npatch-1
    inertia1 = np.array([np.sum((xyz[p1==i] - cen1[i])**2) for i in range(npatch)])
    print('total inertia = ',np.sum(inertia1))
    p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000)
    inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)])
    print('total inertia => ',np.sum(inertia2))
    assert np.sum(inertia2) < np.sum(inertia1)

    # Repeat in 2d
    print('2d with init=kmeans++')
    cat = treecorr.Catalog(x=x, y=y)
    xy = np.array([x, y]).T
    field = cat.getNField()
    cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++')
    assert cen1.shape == (npatch, 2)
    p1 = field.kmeans_assign_patches(cen1)
    assert len(p1) == cat.ntot
    assert min(p1) == 0
    assert max(p1) == npatch-1
    inertia1 = np.array([np.sum((xy[p1==i] - cen1[i])**2) for i in range(npatch)])
    print('total inertia = ',np.sum(inertia1))
    p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000)
    inertia2 = np.array([np.sum((xy[p2==i] - cen2[i])**2) for i in range(npatch)])
    print('total inertia => ',np.sum(inertia2))
    assert np.sum(inertia2) < np.sum(inertia1)

    # Use a field with lots of top level cells
    print('3d with init=kmeans++, min_top=10')
    cat = treecorr.Catalog(x=x, y=y, z=z)
    xyz = np.array([x, y, z]).T
    field = cat.getNField(min_top=10)
    cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++')
    assert cen1.shape == (npatch, 3)
    p1 = field.kmeans_assign_patches(cen1)
    assert len(p1) == cat.ntot
    assert min(p1) == 0
    assert max(p1) == npatch-1
    inertia1 = np.array([np.sum((xyz[p1==i] - cen1[i])**2) for i in range(npatch)])
    print('total inertia = ',np.sum(inertia1))
    p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000)
    inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)])
    print('total inertia => ',np.sum(inertia2))
    assert np.sum(inertia2) < np.sum(inertia1)

    # Invalid npatch values should raise.
    with assert_raises(ValueError):
        field.kmeans_initialize_centers(npatch=ngal*2, init='kmeans++')
    with assert_raises(ValueError):
        field.kmeans_initialize_centers(npatch=ngal+1, init='kmeans++')
    with assert_raises(ValueError):
        field.kmeans_initialize_centers(npatch=0, init='kmeans++')
    with assert_raises(ValueError):
        field.kmeans_initialize_centers(npatch=-100, init='kmeans++')

    # Should be valid to give npatch = 1, although not particularly useful.
    cen_1 = field.kmeans_initialize_centers(npatch=1, init='kmeans++')
    p_1 = field.kmeans_assign_patches(cen_1)
    np.testing.assert_equal(p_1, np.zeros(ngal))

    # If same number of patches as galaxies, each galaxy gets a patch.
    # (This is stupid of course, but check that it doesn't fail.)
    # Do this with fewer points though, since it's not particularly fast with N=10^5.
    n = 100
    cat = treecorr.Catalog(ra=ra[:n], dec=dec[:n], ra_units='rad', dec_units='rad')
    field = cat.getNField()
    cen_n = field.kmeans_initialize_centers(npatch=n, init='kmeans++')
    p_n = field.kmeans_assign_patches(cen_n)
    np.testing.assert_equal(sorted(p_n), list(range(n)))

@timer
def test_zero_weight():
    # Based on test_radec, but where many galaxies have w=0.
    # There used to be a bug where w=0 objects were not assigned to any patch.
    ngal = 10000
    s = 10.
    rng = np.random.RandomState(8675309)
    x = rng.normal(0,s, (ngal,) )
    y = rng.normal(0,s, (ngal,) ) + 100  # Put everything at large y, so smallish angle on sky
    z = rng.normal(0,s, (ngal,) )
    w = np.zeros(ngal)
    w[np.random.choice(range(ngal), ngal//10, replace=False)] = 1.0
    ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z)
    print('minra = ',np.min(ra) * coord.radians / coord.degrees)
    print('maxra = ',np.max(ra) * coord.radians / coord.degrees)
    print('mindec = ',np.min(dec) * coord.radians / coord.degrees)
    print('maxdec = ',np.max(dec) * coord.radians / coord.degrees)
    cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w,
                           keep_zero_weight=True)
    treecorr.set_omp_threads(1)
    npatch = 16
    field = cat.getNField()
    t0 = time.time()
    p, c = field.run_kmeans(npatch)
    t1 = time.time()
    print('patches = ',np.unique(p))
    assert len(p) == cat.ntot
    assert min(p) == 0
    assert max(p) == npatch-1
    print('w>0 patches = ',np.unique(p[w>0]))
    print('w==0 patches = ',np.unique(p[w==0]))
    assert set(p[w>0]) == set(p[w==0])

@timer
def test_catalog_sphere():
    # This follows the same path as test_radec, but using the Catalog API to run kmeans.
    ngal = 100000
    s = 10.
    rng = np.random.RandomState(8675309)
    x = rng.normal(0,s, (ngal,) )
    y = rng.normal(0,s, (ngal,) ) + 100  # Put everything at large y, so smallish angle on sky
    z = rng.normal(0,s, (ngal,) )
    w = rng.random_sample(ngal)
    ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z)
    npatch = 111
    cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch)
    t0 = time.time()
    p = cat.patch
    cen = cat.patch_centers
    t1 = time.time()
    assert len(p) == cat.ntot
    assert min(p) == 0
    assert max(p) == npatch-1

    xyz = np.array([cat.x/cat.r, cat.y/cat.r, cat.z/cat.r]).T
    inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(w[p==i]) for i in range(npatch)])

    print('With standard algorithm:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    assert np.sum(inertia) < 200.
    assert np.std(inertia) < 0.3 * np.mean(inertia)
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

    # Check the alternate algorithm.  rms inertia should be lower.
    cat2 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w,
                            npatch=npatch, kmeans_alt=True)
    t0 = time.time()
    p = cat2.patch
    cen = cat2.patch_centers
    t1 = time.time()
    assert len(p) == cat2.ntot
    assert min(p) == 0
    assert max(p) == npatch-1

    inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(w[p==i]) for i in range(npatch)])

    print('With alternate algorithm:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    assert np.sum(inertia) < 200.
    assert np.std(inertia) < 0.15 * np.mean(inertia)  # rms should be even smaller here.
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

    # Check using patch_centers from (ra,dec)
    cat3 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w,
                            patch_centers=cat2.patch_centers)
    np.testing.assert_array_equal(cat2.patch, cat3.patch)
    np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers)

@timer
def test_catalog_3d():
    # With ra, dec, r, the Catalog API should only use ra, dec for assigning patches.
    ngal = 100000
    s = 10.
    rng = np.random.RandomState(8675309)
    x = rng.normal(0,s, (ngal,) )
    y = rng.normal(0,s, (ngal,) ) + 100  # Put everything at large y, so smallish angle on sky
    z = rng.normal(0,s, (ngal,) )
    w = rng.random_sample(ngal)
    ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z)
    r = (x**2 + y**2 + z**2)**0.5
    npatch = 111
    cat = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w,
                           npatch=npatch)
    t0 = time.time()
    p = cat.patch
    cen = cat.patch_centers
    t1 = time.time()
    assert len(p) == cat.ntot
    assert min(p) == 0
    assert max(p) == npatch-1

    xyz = np.array([cat.x/cat.r, cat.y/cat.r, cat.z/cat.r]).T
    inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(w[p==i]) for i in range(npatch)])

    print('With standard algorithm:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    assert np.sum(inertia) < 200.
    assert np.std(inertia) < 0.3 * np.mean(inertia)
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

    # Check the alternate algorithm.  rms inertia should be lower.
    cat2 = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w,
                            npatch=npatch, kmeans_alt=True)
    t0 = time.time()
    p = cat2.patch
    cen = cat2.patch_centers
    t1 = time.time()
    assert len(p) == cat2.ntot
    assert min(p) == 0
    assert max(p) == npatch-1

    inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(w[p==i]) for i in range(npatch)])

    print('With alternate algorithm:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    assert np.sum(inertia) < 200.
    assert np.std(inertia) < 0.15 * np.mean(inertia)  # rms should be even smaller here.
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

    # Check using patch_centers from (ra,dec) -> (ra,dec,r)
    cat3 = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w,
                            patch_centers=cat2.patch_centers)
    np.testing.assert_array_equal(cat2.patch, cat3.patch)
    np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers)

if __name__ == '__main__':
    test_dessv()
    test_radec()
    test_3d()
    test_2d()
    test_init_random()
    test_init_kmpp()
    test_zero_weight()
    test_catalog_sphere()
    test_catalog_3d()
ngal = 10000 s", "10 field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape == (npatch, 3)", "< np.sum(inertia1) # Repeat in spherical print('spher with init=kmeans++') ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z)", "z = rng.normal(0,s, (ngal,) ) w = np.zeros(ngal) w[np.random.choice(range(ngal), ngal//10, replace=False)] = 1.0", "* np.mean(inertia) # rms is usually small mean # With weights, these aren't", "os.path.join('data','des_sv.fits') cat = treecorr.Catalog(file_name, ra_col='ra', dec_col='dec', ra_units='deg', dec_units='deg') # Use an odd number", "npatch-1 inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)]) counts", "assert np.sum(inertia2) < np.sum(inertia1) with assert_raises(ValueError): field.run_kmeans(npatch, init='invalid') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch, init='invalid') with", "print('spher with init=kmeans++') ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad')", "0.15 * np.mean(sizes) print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts", "is stupid of course, but check that it doesn't fail.) # Do this", "the init=random option ngal = 100000 s = 1. 
rng = np.random.RandomState(8675309) x", "* coord.radians / coord.degrees) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, keep_zero_weight=True) treecorr.set_omp_threads(1)", "cat.ntot assert min(p) == 0 assert max(p) == npatch-1 xyz = np.array([x, y,", "(ngal,) ) cat = treecorr.Catalog(x=x, y=y, z=z) xyz = np.array([x, y, z]).T #", "init='kmeans++') p_1 = field.kmeans_assign_patches(cen_1) np.testing.assert_equal(p_1, np.zeros(ngal)) # If same number of patches as", "treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', r=r, w=w) field = cat2.getNField() t0 = time.time() p2,", "2) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1)) assert len(p1) == cat.ntot assert min(p1)", "test_ra_dec, but where many galaxies have w=0. # There used to be a", "standard algorithm. print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts =", "111 cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch) t0 = time.time() p", "< 0.3 * np.mean(inertia) # rms is usually small mean # With weights,", "treecorr.Catalog(x=x, y=y) xy = np.array([x, y]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++')", "0.1 * np.mean(inertia) # rms should be even smaller here. print('mean counts =", "cat2 = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w, npatch=npatch, kmeans_alt=True) t0 = time.time()", "assert min(p) == 0 assert max(p) == npatch-1 inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i]", "# (This is stupid of course, but check that it doesn't fail.) 
#", "# edges or middles of patches, so the total weight varies when you", "init='random') # Should be valid to give npatch = 1, although not particularly", "timer @timer def test_dessv(): try: import fitsio except ImportError: print('Skipping dessv test, since", "= field.run_kmeans(npatch, alt=True) t1 = time.time() assert len(p) == cat.ntot assert min(p) ==", "np.std(inertia) < 0.15 * np.mean(inertia) # rms should be even smaller here. assert", ") cat = treecorr.Catalog(x=x, y=y, z=z) xyz = np.array([x, y, z]).T # Skip", "= np.array([np.sum((xy[p2==i] - cen2[i])**2) for i in range(npatch)]) counts2 = np.array([np.sum(p2==i) for i", "# Use a field with lots of top level cells print('3d with init=kmeans++,", "# I've seen over 0.3 x mean here. assert np.std(sizes) < 0.15 *", "np.std(inertia) < 0.15 * np.mean(inertia) # rms should be even smaller here. print('mean", "convert to arcmin counts = np.array([np.sum(patches==i) for i in range(npatch)]) # This doesn't", "shuffle bits in InitializeCenters # to happen. npatch = 43 field = cat.getNField(max_top=5)", "0.3 * np.mean(inertia) # rms is usually small mean # With weights, these", "',np.sum(inertia1)) # Now run the normal way # Use higher max_iter, since random", "isn't a great initialization. p2, cen2 = field.run_kmeans(npatch, init='random', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i]", "the distribution. from __future__ import print_function import numpy as np import os import", "with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=0, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=-100, init='random') #", "counts as equal as the standard algorithm. print('mean counts = ',np.mean(counts)) print('min counts", "= 10. 
rng = np.random.RandomState(8675309) x = rng.normal(0,s, (ngal,) ) y = rng.normal(0,s,", "i in range(npatch)]) counts1 = np.array([np.sum(p1==i) for i in range(npatch)]) print('counts = ',counts1)", "dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch) t0 = time.time() p = cat.patch cen =", "algorithm. rms inertia should be lower. t0 = time.time() patches, cen = field.run_kmeans(npatch,", "= cat.getNField(min_top=10) cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape == (npatch, 3) p1 =", "galaxies, each galaxy gets a patch. # (This is stupid of course, but", "patches using RA, Dec. ngal = 100000 s = 10. rng = np.random.RandomState(8675309)", "sizes *= 180. / np.pi * 60. # convert to arcmin counts =", "< 200. # Total shouldn't increase much. (And often decreases.) assert np.std(inertia) <", "sure that works. ngal = 100000 s = 10. rng = np.random.RandomState(8675309) x", "field.kmeans_initialize_centers(npatch=-100, init='kmeans++') # Should be valid to give npatch = 1, although not", "= ',np.std(inertia)) assert np.sum(inertia) < 33000. assert np.std(inertia) < 0.4 * np.mean(inertia) #", "= treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w, patch_centers=cat2.patch_centers) np.testing.assert_array_equal(cat2.patch, cat3.patch) np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) @timer", "reproduce the above copyright notice, # this list of conditions, and the disclaimer", "to be near the # edges or middles of patches, so the total", "everything at large y, so smallish angle on sky z = rng.normal(0,s, (ngal,)", "any patch. ngal = 10000 s = 10. rng = np.random.RandomState(8675309) x =", "== 0 assert max(p) == npatch-1 inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2)", "a bit worse usually. print('With min_top=10:') print('time = ',t1-t0) print('total inertia = ',np.sum(inertia))", "initialization. 
p2, cen2 = field.run_kmeans(npatch, init='random', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for", "list(range(n))) @timer def test_init_kmpp(): # Test the init=random option ngal = 100000 s", "t1 = time.time() assert len(patches) == cat.ntot assert min(patches) == 0 assert max(patches)", "be lower. cat2 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch, kmeans_alt=True) t0 =", "coord.radians / coord.degrees) print('maxdec = ',np.max(dec) * coord.radians / coord.degrees) cat = treecorr.Catalog(ra=ra,", "',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_3d(): #", "# Skip the refine_centers step. print('3d with init=random') npatch = 10 field =", "free software: redistribution and use in source and binary forms, # with or", "',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_radec(): #", "field = cat.getGField() t0 = time.time() p, cen = field.run_kmeans(npatch) t1 = time.time()", "other materials provided with the distribution. from __future__ import print_function import numpy as", "assert np.sum(inertia) < 200. 
# This is specific to this particular field and", "coord.radians / coord.degrees) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, keep_zero_weight=True) treecorr.set_omp_threads(1) npatch", "print('3d with init=kmeans++, min_top=10') field = cat.getNField(min_top=10) cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape", "np.sum(inertia2) < np.sum(inertia1) # Repeat in 2d print('2d with init=kmeans++') cat = treecorr.Catalog(x=x,", "inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)]) counts =", "# Repeat in 2d print('2d with init=kmeans++') cat = treecorr.Catalog(x=x, y=y) xy =", "print('mean inertia = ',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 200. #", "p2, cen = field.run_kmeans(npatch) t1 = time.time() inertia = np.array([np.sum(w[p2==i][:,None] * (xyz[p2==i] -", "Check the alternate algorithm. rms inertia should be lower. t0 = time.time() patches,", "',np.min(dec) * coord.radians / coord.degrees) print('maxdec = ',np.max(dec) * coord.radians / coord.degrees) npatch", "min(p) == 0 assert max(p) == npatch-1 xy = np.array([x, y]).T inertia =", "a # factor of 10. I think because it varies whether high weight", "= np.array([x, y]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape ==", "t0 = time.time() p, cen = field.run_kmeans(npatch) t1 = time.time() assert len(p) ==", "a field with lots of top level cells print('3d with init=kmeans++, min_top=10') field", "init='random', max_iter=1000) inertia2 = np.array([np.sum((xy[p2==i] - cen2[i])**2) for i in range(npatch)]) counts2 =", "= ',np.mean(sizes)) print('rms size = ',np.std(sizes)) assert np.sum(inertia) < 210. assert np.std(inertia) <", "counts = np.array([np.sum(patches==i) for i in range(npatch)]) print('With standard algorithm:') print('time = ',t1-t0)", "s = 10. 
rng = np.random.RandomState(8675309) x = rng.normal(0,s, (ngal,) ) y =", "t1 = time.time() print('patches = ',np.unique(patches)) assert len(patches) == cat.ntot assert min(patches) ==", "sky z = rng.normal(0,s, (ngal,) ) w = np.zeros(ngal) w[np.random.choice(range(ngal), ngal//10, replace=False)] =", "',np.max(dec) * coord.radians / coord.degrees) npatch = 111 cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad',", "distribution. from __future__ import print_function import numpy as np import os import time", "test_dessv(): try: import fitsio except ImportError: print('Skipping dessv test, since fitsio is not", "patches = ',np.unique(p[w>0])) print('w==0 patches = ',np.unique(p[w==0])) assert set(p[w>0]) == set(p[w==0]) @timer def", "spread usually. # Should all have similar number of points. Nothing is required", "# KMeans minimizes the total inertia. # Check this value and the rms", "print('3d with init=kmeans++') npatch = 10 field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++')", "provided that the following # conditions are met: # # 1. Redistributions of", "(ngal,) ) k = rng.normal(0,s, (ngal,) ) cat = treecorr.Catalog(x=x, y=y, w=w, g1=g1,", "assigned to any patch. ngal = 10000 s = 10. rng = np.random.RandomState(8675309)", "follows the same path as test_radec, but using the Catalog API to run", "# this list of conditions, and the disclaimer given in the documentation #", "= cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape == (npatch, 3) p1 =", "cat3.patch_centers) @timer def test_catalog_3d(): # With ra, dec, r, the Catalog API should", "(x**2 + y**2 + z**2)**0.5 cat2 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', r=r, w=w)", "good an initialization, so these are a bit worse usually. print('With min_top=10:') print('time", "even though # in practice NField will alsmost always be the kind of", "using x,y,z positions. ngal = 100000 s = 1. 
rng = np.random.RandomState(8675309) x", "assert len(patches) == cat.ntot assert min(patches) == 0 assert max(patches) == npatch-1 #", "== cat.ntot assert min(patches) == 0 assert max(patches) == npatch-1 inertia = np.array([np.sum((xyz[patches==i]", "(ra,dec,r) cat3 = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w, patch_centers=cat2.patch_centers) np.testing.assert_array_equal(cat2.patch, cat3.patch) np.testing.assert_array_equal(cat2.patch_centers,", "get_from_wiki('des_sv.fits') file_name = os.path.join('data','des_sv.fits') cat = treecorr.Catalog(file_name, ra_col='ra', dec_col='dec', ra_units='deg', dec_units='deg') # Use", "cat.getNField() t0 = time.time() p, cen = field.run_kmeans(npatch) t1 = time.time() print('patches =", "= ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_radec():", "max(patches) == npatch-1 inertia = np.array([np.sum((xyz[patches==i] - cen[i])**2) for i in range(npatch)]) sizes", "cat = treecorr.Catalog(x=x, y=y, z=z) xyz = np.array([x, y, z]).T # Skip the", "y]).T inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2) for i in range(npatch)]) counts", "of Field used. ngal = 100000 s = 1. rng = np.random.RandomState(8675309) x", "np.mean(sizes) # sizes have even less spread usually. # Should all have similar", "npatch-1 inertia1 = np.array([np.sum((xyz[p1==i] - cen1[i])**2) for i in range(npatch)]) counts1 = np.array([np.sum(p1==i)", "# Check the alternate algorithm. rms inertia should be lower. t0 = time.time()", "minimizes the total inertia. # Check this value and the rms size, which", "size = ',np.mean(sizes)) print('rms size = ',np.std(sizes)) assert np.sum(inertia) < 200. # Total", "< 0.1 * np.mean(inertia) # rms should be even smaller here. print('mean counts", "time.time() print('patches = ',np.unique(patches)) assert len(patches) == cat.ntot assert min(patches) == 0 assert", "# # 1. 
Redistributions of source code must retain the above copyright notice,", "init='random') p_1 = field.kmeans_assign_patches(cen_1) np.testing.assert_equal(p_1, np.zeros(ngal)) # If same number of patches as", "= ',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) print('mean size = ',np.mean(sizes)) print('rms size =", "def test_catalog_sphere(): # This follows the same path as test_radec, but using the", "= cat.getNField(min_top=10) cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape == (npatch, 3) p1 =", "should be lower. cat2 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch, kmeans_alt=True) t0", "direct_cen /= np.sqrt(np.sum(direct_cen**2,axis=1)[:,np.newaxis]) np.testing.assert_allclose(cen, direct_cen, atol=1.e-3) # KMeans minimizes the total inertia. #", "to make sure we force some of the shuffle bits in InitializeCenters #", "',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1) # Repeat in spherical print('spher with init=kmeans++') ra,", "init=random') npatch = 10 field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape", "cat.ntot assert min(p1) == 0 assert max(p1) == npatch-1 inertia1 = np.array([np.sum((xyz[p1==i] -", "@timer def test_zero_weight(): # Based on test_ra_dec, but where many galaxies have w=0.", "Redistributions of source code must retain the above copyright notice, this # list", "cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in", "print('w>0 patches = ',np.unique(p[w>0])) print('w==0 patches = ',np.unique(p[w==0])) assert set(p[w>0]) == set(p[w==0]) @timer", "print('2d with init=kmeans++') cat = treecorr.Catalog(x=x, y=y) xy = np.array([x, y]).T field =", "also be quite small. 
inertia = np.array([np.sum((xyz[patches==i] - cen[i])**2) for i in range(npatch)])", "cat.patch cen = cat.patch_centers t1 = time.time() print('patches = ',np.unique(p)) assert len(p) ==", "print('cen = ',cen) print('xyz = ',xyz) direct_cen = np.array([np.average(xyz[p==i], axis=0, weights=w[p==i]) for i", "Skip the refine_centers step. print('3d with init=kmeans++') npatch = 10 field = cat.getNField()", "assert np.sum(inertia2) < np.sum(inertia1) with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='kmeans++') with", "level cells print('3d with init=random, min_top=10') field = cat.getNField(min_top=10) cen1 = field.kmeans_initialize_centers(npatch, 'random')", "0 assert max(patches) == npatch-1 # Check the returned center to a direct", "init='kmeans++', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)]) counts2 =", "field with lots of top level cells to check the other branch in", "counts = np.array([np.sum(w[p==i]) for i in range(npatch)]) print('With standard algorithm:') print('time = ',t1-t0)", "the above, but using x,y positions. # An additional check here is that", "practice NField will alsmost always be the kind of Field used. ngal =", "-> (ra,dec) cat3 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, patch_centers=cat2.patch_centers) np.testing.assert_array_equal(cat2.patch, cat3.patch) np.testing.assert_array_equal(cat2.patch_centers,", "'kmeans++') assert cen1.shape == (npatch, 3) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1)) assert", "algorithm. rms inertia should be lower. 
t0 = time.time() p, cen = field.run_kmeans(npatch,", "= treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad') xyz = np.array([cat.x, cat.y, cat.z]).T field = cat.getNField()", "small mean print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts =", "be a bug where w=0 objects were not assigned to any patch. ngal", "i in range(npatch)]) counts = np.array([np.sum(w[p==i]) for i in range(npatch)]) # This doesn't", "# InitializeCenters. field = cat.getKField(min_top=10) t0 = time.time() p, cen = field.run_kmeans(npatch) t1", "test_radec, but using the Catalog API to run kmeans. ngal = 100000 s", "counts = np.array([np.sum(w[p2==i]) for i in range(npatch)]) print('time = ',t1-t0) print('total inertia =", "print('minra = ',np.min(ra) * coord.radians / coord.degrees) print('maxra = ',np.max(ra) * coord.radians /", "= time.time() p, cen = field.run_kmeans(npatch) t1 = time.time() print('patches = ',np.unique(p)) assert", "w=w) field = cat2.getNField() t0 = time.time() p2, cen = field.run_kmeans(npatch) t1 =", "for i in range(npatch)]) # This doesn't give as good an initialization, so", "test_catalog_3d(): # With ra, dec, r, the Catalog API should only do patches", "npatch=npatch) t0 = time.time() p = cat.patch cen = cat.patch_centers t1 = time.time()", "< np.sum(inertia1) with assert_raises(ValueError): field.run_kmeans(npatch, init='invalid') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch, init='invalid') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2,", "met: # # 1. Redistributions of source code must retain the above copyright", "counts = ',np.max(counts)) # Check using patch_centers from (ra,dec,r) -> (ra,dec) cat3 =", "not particularly useful. cen_1 = field.kmeans_initialize_centers(npatch=1, init='random') p_1 = field.kmeans_assign_patches(cen_1) np.testing.assert_equal(p_1, np.zeros(ngal)) #", "I've seen over 0.3 x mean here. 
assert np.std(sizes) < 0.15 * np.mean(sizes)", "way p2, cen2 = field.run_kmeans(npatch, init='random', max_iter=1000) inertia2 = np.array([np.sum((xy[p2==i] - cen2[i])**2) for", "cat3.patch_centers) if __name__ == '__main__': test_dessv() test_radec() test_3d() test_2d() test_init_random() test_init_kmpp() test_zero_weight() test_catalog_sphere()", "coord.CelestialCoord.xyz_to_radec(x,y,z) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad') xyz = np.array([cat.x, cat.y, cat.z]).T field", "np.sum(inertia1) # Repeat in spherical print('spher with init=random') ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) cat", "varies whether high weight points happen to be near the # edges or", "refine_centers step. print('3d with init=kmeans++') npatch = 10 field = cat.getNField() cen1 =", "< 0.15 * np.mean(inertia) # rms should be even smaller here. assert np.std(sizes)", "np.testing.assert_equal(sorted(p_n), list(range(n))) @timer def test_zero_weight(): # Based on test_ra_dec, but where many galaxies", "Dec. ngal = 100000 s = 10. rng = np.random.RandomState(8675309) x = rng.normal(0,s,", "alternate algorithm. rms inertia should be lower. cat2 = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad',", "npatch. assert np.std(inertia) < 0.3 * np.mean(inertia) # rms is usually < 0.2", "== npatch-1 inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)])", "Check using patch_centers from (ra,dec,r) -> (ra,dec) cat3 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad',", "= ',np.unique(patches)) assert len(patches) == cat.ntot assert min(patches) == 0 assert max(patches) ==", "np.mean(inertia) # rms should be even smaller here. assert np.std(sizes) < 0.1 *", "(ngal,) ) z = rng.normal(0,s, (ngal,) ) cat = treecorr.Catalog(x=x, y=y, z=z) xyz", "it's not particularly fast with N=10^5. 
n = 100 cat = treecorr.Catalog(ra=ra[:n], dec=dec[:n],", "ra_units='rad', dec_units='rad', w=w, keep_zero_weight=True) treecorr.set_omp_threads(1) npatch = 16 field = cat.getNField() t0 =", "= cat.getNField() t0 = time.time() p, c = field.run_kmeans(npatch) t1 = time.time() print('patches", "i in range(npatch)]) counts = np.array([np.sum(w[p2==i]) for i in range(npatch)]) print('time = ',t1-t0)", "kmeans. ngal = 100000 s = 10. rng = np.random.RandomState(8675309) x = rng.normal(0,s,", "the same path as test_radec, but using the Catalog API to run kmeans.", "',np.mean(sizes)) print('rms size = ',np.std(sizes)) assert np.sum(inertia) < 210. assert np.std(inertia) < 0.4", "= ',np.max(counts)) # Check the alternate algorithm. rms inertia should be lower. t0", "alternate algorithm. rms inertia should be lower. t0 = time.time() patches, cen =", "often decreases.) assert np.std(inertia) < 0.15 * np.mean(inertia) # rms should be even", "== 0 assert max(p1) == npatch-1 inertia1 = np.array([np.sum((xy[p1==i] - cen1[i])**2) for i", "np.array([x, y]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape == (npatch,", "print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 33000. assert np.std(inertia) < 0.1 *", "a great initialization. p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] -", "# This doesn't give as good an initialization, so these are a bit", "',np.max(dec) * coord.radians / coord.degrees) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w) npatch", ") k = rng.normal(0,s, (ngal,) ) cat = treecorr.Catalog(x=x, y=y, w=w, g1=g1, g2=g2,", "assert np.sum(inertia2) < np.sum(inertia1) # Repeat in spherical print('spher with init=kmeans++') ra, dec", "= np.array([np.sum(patches==i) for i in range(npatch)]) print('With standard algorithm:') print('time = ',t1-t0) print('total", "as the standard algorithm. 
print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max", "coord.radians / coord.degrees) npatch = 111 cat = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad',", "but using the Catalog API to run kmeans. ngal = 100000 s =", "of top level cells print('3d with init=random, min_top=10') field = cat.getNField(min_top=10) cen1 =", "treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch) t0 = time.time() p = cat.patch cen", "counts = np.array([np.sum(w[p==i]) for i in range(npatch)]) # This doesn't give as good", "rng.normal(0,s, (ngal,) ) + 100 # Put everything at large y, so smallish", "< np.sum(inertia1) # Repeat in 2d print('2d with init=kmeans++') cat = treecorr.Catalog(x=x, y=y)", "cat.getNField(min_top=10) cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape == (npatch, 3) p1 = field.kmeans_assign_patches(cen1)", "z]).T # Skip the refine_centers step. print('3d with init=kmeans++') npatch = 10 field", "== npatch-1 inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2) for i in range(npatch)])", "size = ',np.std(sizes)) assert np.sum(inertia) < 200. # Total shouldn't increase much. (And", "assert min(p) == 0 assert max(p) == npatch-1 inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i]", "field = cat.getNField(min_top=10) t0 = time.time() p, cen = field.run_kmeans(npatch) t1 = time.time()", "the shuffle bits in InitializeCenters # to happen. npatch = 43 field =", "counts = ',np.max(counts)) # Check the alternate algorithm. 
rms inertia should be lower.", "to arcmin counts = np.array([np.sum(patches==i) for i in range(npatch)]) print('With alternate algorithm:') print('time", "print('With alternate algorithm:') print('time = ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean inertia =", "and binary forms, # with or without modification, are permitted provided that the", "i in range(npatch)]) print('With standard algorithm:') print('time = ',t1-t0) print('total inertia = ',np.sum(inertia))", "for i in range(npatch)]) print('With standard algorithm:') print('time = ',t1-t0) print('total inertia =", ") w = rng.random_sample(ngal) + 1 g1 = rng.normal(0,s, (ngal,) ) g2 =", "assert np.std(sizes) < 0.1 * np.mean(sizes) # This is only a little bit", "init='kmeans++') # Should be valid to give npatch = 1, although not particularly", "',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_3d(): # Like the above, but", "',np.std(counts2)) print('total inertia => ',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1) with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2, init='kmeans++')", "= coord.CelestialCoord.xyz_to_radec(x,y,z) print('minra = ',np.min(ra) * coord.radians / coord.degrees) print('maxra = ',np.max(ra) *", "',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 33000. 
assert np.std(inertia) < 0.1", "npatch-1 inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2) for i in range(npatch)]) counts", "this list of conditions, and the disclaimer given in the documentation # and/or", "= cat.getGField() t0 = time.time() p, cen = field.run_kmeans(npatch) t1 = time.time() print('patches", "np import os import time import coord import warnings import treecorr from test_helper", "# Now run the normal way p2, cen2 = field.run_kmeans(npatch, init='random', max_iter=1000) inertia2", "field.kmeans_initialize_centers(npatch, init='invalid') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=0,", "import coord import warnings import treecorr from test_helper import get_from_wiki, CaptureLog, assert_raises, profile,", "',np.std(inertia)) assert np.sum(inertia) < 210. assert np.std(inertia) < 0.4 * np.mean(inertia) # I've", "field.kmeans_initialize_centers(npatch=ngal*2, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=0, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=-100,", "field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape == (npatch, 2) p1", "min(p) == 0 assert max(p) == npatch-1 inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] -", "should only do patches using RA, Dec. 
ngal = 100000 s = 10.", "# # TreeCorr is free software: redistribution and use in source and binary", "print('mindec = ',np.min(dec) * coord.radians / coord.degrees) print('maxdec = ',np.max(dec) * coord.radians /", "= time.time() p, c = field.run_kmeans(npatch) t1 = time.time() print('patches = ',np.unique(p)) assert", "field.run_kmeans(npatch, alt=True) t1 = time.time() assert len(p) == cat.ntot assert min(p) == 0", "number of points. Nothing is required here though. print('mean counts = ',np.mean(counts)) print('min", "0.1 * np.mean(sizes) # sizes have even less spread usually. # Should all", "rms inertia should be lower. cat2 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch,", "near the # edges or middles of patches, so the total weight varies", "treecorr.Catalog(x=x, y=y, z=z) xyz = np.array([x, y, z]).T # Skip the refine_centers step.", ") y = rng.normal(0,s, (ngal,) ) z = rng.normal(0,s, (ngal,) ) cat =", "cen = field.run_kmeans(npatch) t1 = time.time() inertia = np.array([np.sum(w[p2==i][:,None] * (xyz[p2==i] - cen[i])**2)", "algorithm. rms inertia should be lower. cat2 = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad',", "= field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape == (npatch, 3) p1 = field.kmeans_assign_patches(cen1) print('patches =", "100000 s = 10. rng = np.random.RandomState(8675309) x = rng.normal(0,s, (ngal,) ) y", "because it varies whether high weight points happen to be near the #", "are a bit worse usually. print('With min_top=10:') print('time = ',t1-t0) print('total inertia =", "to give npatch = 1, although not particularly useful. 
cen_1 = field.kmeans_initialize_centers(npatch=1, init='random')", "w=w) npatch = 111 field = cat.getNField() t0 = time.time() p, cen =", "range(npatch)]) print('With standard algorithm:') print('time = ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean inertia", "',np.max(ra) * coord.radians / coord.degrees) print('mindec = ',np.min(dec) * coord.radians / coord.degrees) print('maxdec", "= field.kmeans_initialize_centers(npatch=1, init='kmeans++') p_1 = field.kmeans_assign_patches(cen_1) np.testing.assert_equal(p_1, np.zeros(ngal)) # If same number of", "inertia1 = np.array([np.sum((xyz[p1==i] - cen1[i])**2) for i in range(npatch)]) counts1 = np.array([np.sum(p1==i) for", "cat.getNField(max_top=5) t0 = time.time() patches, cen = field.run_kmeans(npatch) t1 = time.time() print('patches =", "mean print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts))", "print('Skipping dessv test, since fitsio is not installed') return #treecorr.set_omp_threads(1); get_from_wiki('des_sv.fits') file_name =", "x,y positions. # An additional check here is that this works with other", "init='invalid') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch, init='invalid') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='random')", "be lower. t0 = time.time() patches, cen = field.run_kmeans(npatch, alt=True) t1 = time.time()", "useful. cen_1 = field.kmeans_initialize_centers(npatch=1, init='kmeans++') p_1 = field.kmeans_assign_patches(cen_1) np.testing.assert_equal(p_1, np.zeros(ngal)) # If same", "* np.mean(inertia) # rms should be even smaller here. 
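With a single center, every point necessarily gets patch 0, which is what the np.testing.assert_equal check that follows verifies. A minimal NumPy sketch (a hypothetical helper, not TreeCorr's implementation) of what kmeans_assign_patches does, each point taking the index of its nearest center:

```python
# Sketch of nearest-center assignment; hypothetical helper, not TreeCorr code.
import numpy as np

def assign_patches(xyz, centers):
    # Squared distance from every point to every center: shape (npoints, ncen)
    d2 = ((xyz[:, None, :] - centers[None, :, :])**2).sum(axis=-1)
    return np.argmin(d2, axis=1)

rng = np.random.RandomState(1234)
pts = rng.normal(0, 10., size=(100, 3))
# One center: argmin over a single column is always index 0.
p_one = assign_patches(pts, pts.mean(axis=0)[None, :])
assert np.all(p_one == 0)
```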
assert np.std(sizes) < 0.1 * np.mean(sizes)  # sizes have even less spread usually.
print('mean counts = ',np.mean(counts))
p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000)
inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)])
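The kmeans++ initialization being exercised here can be sketched in a few lines of plain NumPy (a hypothetical helper, not the code path TreeCorr uses internally): the first center is a uniformly random point, and each later center is drawn with probability proportional to the squared distance to its nearest already-chosen center.

```python
# Sketch of k-means++ seeding; hypothetical helper, not TreeCorr's internals.
import numpy as np

def kmeanspp_centers(xyz, k, rng):
    # First center: a uniformly random data point.
    centers = [xyz[rng.randint(len(xyz))]]
    for _ in range(k - 1):
        # Squared distance of every point to its nearest chosen center.
        d2 = np.min(((xyz[:, None, :] - np.array(centers)[None, :, :])**2).sum(axis=-1), axis=1)
        # Sample the next center with probability proportional to d2.
        centers.append(xyz[rng.choice(len(xyz), p=d2 / d2.sum())])
    return np.array(centers)

rng = np.random.RandomState(8675309)
pts = rng.normal(0, 1., size=(500, 2))
cen = kmeanspp_centers(pts, 10, rng)
assert cen.shape == (10, 2)
```

Already-chosen points have d2 == 0, so they can never be drawn twice; the seeding tends to spread centers out, which is why fewer iterations are needed than with init='random'.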
print('mean counts = ',np.mean(counts))
print('min counts = ',np.min(counts))
print('max counts = ',np.max(counts))
# (This is stupid of course, but check that it doesn't fail.)
cat2 = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w,
                        npatch=npatch, kmeans_alt=True)
# This is only a little bit smaller.
# This doesn't keep the counts as equal as the standard algorithm.
field = cat.getNField(min_top=10)
t0 = time.time()
p, cen = field.run_kmeans(npatch)
t1 = time.time()
assert np.std(inertia) < 0.3 * np.mean(inertia)  # rms is usually small compared to the mean
treecorr.set_omp_threads(1)
npatch = 16
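In the zero-weight case being set up here, patch membership is decided purely by distance, so a w == 0 galaxy still lands in some patch; weights only enter the center update (a weighted mean). A hypothetical plain-NumPy sketch of one such update, with synthetic data rather than the catalog above:

```python
# Sketch: zero-weight points are still assigned; weights only affect centers.
import numpy as np

rng = np.random.RandomState(5)
pts_w = rng.normal(size=(300, 3))
w_w = rng.random_sample(300)
w_w[:50] = 0.                                   # some zero-weight galaxies
cen_w = pts_w[rng.choice(300, size=4, replace=False)]   # 4 starting centers
d2_w = ((pts_w[:, None, :] - cen_w[None, :, :])**2).sum(axis=-1)
p_w = np.argmin(d2_w, axis=1)                   # every point gets a patch, even w == 0
# Weighted centroid update; zero-weight members contribute nothing here.
new_cen = np.array([np.average(pts_w[p_w == i], axis=0, weights=w_w[p_w == i])
                    for i in range(4)])
assert len(p_w) == 300
assert new_cen.shape == (4, 3)
```

This mirrors the property the test asserts: the set of patches seen by w > 0 points matches the set seen by w == 0 points, because assignment never looks at the weight.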
# I think because it varies whether high weight points happen to be near the
# edges or middles of patches, so the total weight varies.
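The per-patch weight totals the comment above refers to can be tallied with np.bincount, which acts as a weighted histogram over the patch labels. This sketch uses synthetic labels and weights (not the catalog from this test) just to show the tally and the kind of spread check used throughout this file:

```python
# Sketch: per-patch total weight via np.bincount, with synthetic data.
import numpy as np

rng = np.random.RandomState(42)
npatch_demo = 8
p_demo = rng.randint(npatch_demo, size=10000)   # patch label per galaxy
w_demo = rng.random_sample(10000) + 1           # per-galaxy weight
wtot = np.bincount(p_demo, weights=w_demo, minlength=npatch_demo)
assert wtot.shape == (npatch_demo,)
# With uniformly random labels the relative spread of the totals is small;
# real kmeans patches can show a much larger spread, as the comment notes.
assert np.std(wtot) < 0.1 * np.mean(wtot)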
npatch = 43
field = cat.getNField(max_top=5)
t0 = time.time()
p2, cen2 = field.run_kmeans(npatch, init='random', max_iter=1000)
inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)])
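The inertia computed in this per-patch loop is the usual k-means objective: for each patch, the summed squared distance of its points from the patch center. A synthetic-data sketch (not the test's catalog) checking that the loop agrees with a single vectorized total:

```python
# Sketch: per-patch inertia loop equals one vectorized total, synthetic data.
import numpy as np

rng = np.random.RandomState(7)
pts3 = rng.normal(size=(1000, 3))
lab = rng.randint(4, size=1000)
cens = np.array([pts3[lab == i].mean(axis=0) for i in range(4)])
# Per-patch loop, as in the test above.
inert = np.array([np.sum((pts3[lab == i] - cens[i])**2) for i in range(4)])
# Vectorized: gather each point's own center with fancy indexing.
total = np.sum((pts3 - cens[lab])**2)
assert np.isclose(np.sum(inert), total)
```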
assert np.std(inertia) < 0.4 * np.mean(inertia)  # I've seen over 0.3 x mean here.
t0 = time.time()
p, cen = field.run_kmeans(npatch, alt=True)
t1 = time.time()
# I think because it varies whether high weight points happen to be near the
# edges or middles of patches, so the total weight varies.
print('mean counts = ',np.mean(counts))", "len(p) == cat.ntot assert min(p) == 0 assert max(p) == npatch-1 xy =", "= (x**2 + y**2 + z**2)**0.5 cat2 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', r=r,", "t0 = time.time() patches, cen = field.run_kmeans(npatch) t1 = time.time() assert len(patches) ==", "field.kmeans_initialize_centers(npatch=1, init='kmeans++') p_1 = field.kmeans_assign_patches(cen_1) np.testing.assert_equal(p_1, np.zeros(ngal)) # If same number of patches", "# rms is usually < 0.2 * mean assert np.std(sizes) < 0.1 *", "cat = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w, npatch=npatch) t0 = time.time() p", "forms, # with or without modification, are permitted provided that the following #", "* np.mean(sizes) print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts =", "min(p) == 0 assert max(p) == npatch-1 inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] -", "# Check the returned center to a direct calculation. xyz = np.array([cat.x, cat.y,", "for i in range(npatch)]) counts = np.array([np.sum(w[p==i]) for i in range(npatch)]) print('With alternate", "print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts)) # Finally, use a field", "= rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) + 1 g1 = rng.normal(0,s, (ngal,)", "# rms should be even smaller here. assert np.std(sizes) < 0.1 * np.mean(sizes)", "= ',np.max(counts)) # Check using patch_centers from (ra,dec) -> (ra,dec,r) cat3 = treecorr.Catalog(ra=ra,", "KMeans minimizes the total inertia. 
# Check this value and the rms size, which should also be quite small.
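One concrete notion of patch "size" is the rms distance of a patch's points from its center; the tests bound the spread of these sizes relative to their mean. A sketch with synthetic points (not the catalog used above):

```python
# Sketch: rms patch sizes and their relative spread, synthetic data.
import numpy as np

rng = np.random.RandomState(99)
pts_s = rng.normal(size=(2000, 3))
lab_s = rng.randint(5, size=2000)
cen_s = np.array([pts_s[lab_s == i].mean(axis=0) for i in range(5)])
# rms distance of each patch's members from its center.
sizes_s = np.array([np.sqrt(np.mean((pts_s[lab_s == i] - cen_s[i])**2))
                    for i in range(5)])
# With comparable patches the sizes cluster tightly around their mean.
assert np.std(sizes_s) < 0.1 * np.mean(sizes_s)
```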
rms", "i in range(npatch)]) # This doesn't give as good an initialization, so these", "@timer def test_init_random(): # Test the init=random option ngal = 100000 s =", "np.sum(inertia1) # Repeat in 2d print('2d with init=kmeans++') cat = treecorr.Catalog(x=x, y=y) xy", "33000. assert np.std(inertia) < 0.3 * np.mean(inertia) # rms is usually small mean", "== 0 assert max(patches) == npatch-1 # Check the returned center to a", "5300. assert np.std(inertia) < 0.3 * np.mean(inertia) # rms is usually small mean", "cat2.getNField() t0 = time.time() p2, cen = field.run_kmeans(npatch) t1 = time.time() inertia =", "copyright notice, # this list of conditions, and the disclaimer given in the", "# Use an odd number to make sure we force some of the", "notice, this # list of conditions, and the disclaimer given in the accompanying", "counts = np.array([np.sum(patches==i) for i in range(npatch)]) # This doesn't give as good", "cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape == (npatch, 2) p1 = field.kmeans_assign_patches(cen1)", "= np.array([np.sum(patches==i) for i in range(npatch)]) print('With alternate algorithm:') print('time = ',t1-t0) print('total", "k=k) npatch = 111 field = cat.getGField() t0 = time.time() p, cen =", "= 100 cat = treecorr.Catalog(ra=ra[:n], dec=dec[:n], ra_units='rad', dec_units='rad') field = cat.getNField() cen_n =", "cen[i])**2) for i in range(npatch)]) counts = np.array([np.sum(w[p==i]) for i in range(npatch)]) print('With", "=> ',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1) with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1,", "have even less spread usually. # Should all have similar number of points.", "np.sum(inertia) < 5300. 
assert np.std(inertia) < 0.3 * np.mean(inertia) # rms is usually", "# Now run the normal way # Use higher max_iter, since random isn't", "cat.y, cat.z]).T direct_cen = np.array([xyz[patches==i].mean(axis=0) for i in range(npatch)]) direct_cen /= np.sqrt(np.sum(direct_cen**2,axis=1)[:,np.newaxis]) np.testing.assert_allclose(cen,", "dec_units='rad', w=w, npatch=npatch, kmeans_alt=True) t0 = time.time() p = cat2.patch cen = cat2.patch_centers", "we force some of the shuffle bits in InitializeCenters # to happen. npatch", "= rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) print('minra =", "dec=dec[:n], ra_units='rad', dec_units='rad') field = cat.getNField() cen_n = field.kmeans_initialize_centers(npatch=n, init='random') p_n = field.kmeans_assign_patches(cen_n)", "in range(npatch)]) direct_cen /= np.sqrt(np.sum(direct_cen**2,axis=1)[:,np.newaxis]) np.testing.assert_allclose(cen, direct_cen, atol=1.e-3) # KMeans minimizes the total", "worse usually. print('With min_top=10:') print('time = ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean inertia", "be lower. t0 = time.time() p, cen = field.run_kmeans(npatch, alt=True) t1 = time.time()", "with lots of top level cells to check the other branch in #", "will alsmost always be the kind of Field used. 
ngal = 100000 s", "print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts)) # Should be the same", "cat.ntot assert min(p) == 0 assert max(p) == npatch-1 print('w>0 patches = ',np.unique(p[w>0]))", "i in range(npatch)]) print('time = ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean inertia =", "# convert to arcmin counts = np.array([np.sum(patches==i) for i in range(npatch)]) print('With alternate", "dec_units='rad', w=w, keep_zero_weight=True) treecorr.set_omp_threads(1) npatch = 16 field = cat.getNField() t0 = time.time()", "treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, patch_centers=cat2.patch_centers) np.testing.assert_array_equal(cat2.patch, cat3.patch) np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) if __name__ ==", "',np.sum(inertia1)) # Now run the normal way p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000)", "Catalog API should only do patches using RA, Dec. ngal = 100000 s", "print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_3d(): # Like", "counts = ',np.max(counts)) @timer def test_3d(): # Like the above, but using x,y,z", "cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape == (npatch, 3) p1 = field.kmeans_assign_patches(cen1)", "particular field and npatch. 
assert np.std(inertia) < 0.3 * np.mean(inertia) # rms is", "p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1)) assert len(p1) == cat.ntot assert min(p1) ==", "= ',np.unique(p[w==0])) assert set(p[w>0]) == set(p[w==0]) @timer def test_catalog_sphere(): # This follows the", "',xyz) direct_cen = np.array([np.average(xyz[p==i], axis=0, weights=w[p==i]) for i in range(npatch)]) direct_cen /= np.sqrt(np.sum(direct_cen**2,axis=1)[:,np.newaxis])", "init=kmeans++') cat = treecorr.Catalog(x=x, y=y) xy = np.array([x, y]).T field = cat.getNField() cen1", "higher max_iter, since random isn't a great initialization. p2, cen2 = field.run_kmeans(npatch, init='kmeans++',", "coord.CelestialCoord.xyz_to_radec(x,y,z, return_r=True) print('minra = ',np.min(ra) * coord.radians / coord.degrees) print('maxra = ',np.max(ra) *", "rms inertia should be lower. t0 = time.time() patches, cen = field.run_kmeans(npatch, alt=True)", "cells print('3d with init=random, min_top=10') field = cat.getNField(min_top=10) cen1 = field.kmeans_initialize_centers(npatch, 'random') assert", "min(p1) == 0 assert max(p1) == npatch-1 inertia1 = np.array([np.sum((xy[p1==i] - cen1[i])**2) for", "with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=0, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=-100, init='kmeans++') # Should be valid to", "np.zeros(ngal) w[np.random.choice(range(ngal), ngal//10, replace=False)] = 1.0 ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) print('minra = ',np.min(ra)", "test_init_random(): # Test the init=random option ngal = 100000 s = 1. rng", "(npatch, 3) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1)) assert len(p1) == cat.ntot assert", "the Catalog API to run kmeans. ngal = 100000 s = 10. rng", "be even smaller here. assert np.std(sizes) < 0.1 * np.mean(sizes) # This is", "',np.std(inertia)) assert np.sum(inertia) < 200. 
# This is specific to this particular field", "inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)]) counts2 = np.array([np.sum(p2==i) for", "cat2 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', r=r, w=w) field = cat2.getNField() t0 =", "print('maxdec = ',np.max(dec) * coord.radians / coord.degrees) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad',", "dec_units='rad') field = cat.getNField() cen_n = field.kmeans_initialize_centers(npatch=n, init='kmeans++') p_n = field.kmeans_assign_patches(cen_n) np.testing.assert_equal(sorted(p_n), list(range(n)))", "w=w, g1=g1, g2=g2, k=k) npatch = 111 field = cat.getGField() t0 = time.time()", "counts = ',np.std(counts1)) print('total inertia = ',np.sum(inertia1)) # Now run the normal way", "of conditions, and the disclaimer given in the accompanying LICENSE # file. #", "np.array([np.sum(p1==i) for i in range(npatch)]) print('counts = ',counts1) print('rms counts = ',np.std(counts1)) print('total", "',np.std(counts2)) print('total inertia => ',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1) # Repeat in spherical", "Skip the refine_centers step. print('3d with init=random') npatch = 10 field = cat.getNField()", "np.std(inertia) < 0.4 * np.mean(inertia) # I've seen over 0.3 x mean here.", "are met: # # 1. Redistributions of source code must retain the above", "print('inertia = ',inertia) print('counts = ',counts) print('total inertia = ',np.sum(inertia)) print('mean inertia =", "# Test the init=random option ngal = 100000 s = 1. rng =", "other branch in # InitializeCenters. field = cat.getKField(min_top=10) t0 = time.time() p, cen", "#treecorr.set_omp_threads(1); get_from_wiki('des_sv.fits') file_name = os.path.join('data','des_sv.fits') cat = treecorr.Catalog(file_name, ra_col='ra', dec_col='dec', ra_units='deg', dec_units='deg') #", "documentation # and/or other materials provided with the distribution. 
from __future__ import print_function", "patches, so the total weight varies when you target having the # inertias", "inertia = ',np.std(inertia)) assert np.sum(inertia) < 33000. assert np.std(inertia) < 0.4 * np.mean(inertia)", "a random set of points, so it will run even # if the", "of 10. I think because it varies whether high weight points happen to", "This is specific to this particular field and npatch. assert np.std(inertia) < 0.3", "def test_zero_weight(): # Based on test_ra_dec, but where many galaxies have w=0. #", "- cen[i])**2) for i in range(npatch)]) counts = np.array([np.sum(w[p2==i]) for i in range(npatch)])", "counts = ',np.min(counts)) print('max counts = ',np.max(counts)) # Finally, use a field with", "will run even # if the user doesn't have fitsio installed. # In", "# convert to arcmin counts = np.array([np.sum(patches==i) for i in range(npatch)]) print('With standard", "give npatch = 1, although not particularly useful. cen_1 = field.kmeans_initialize_centers(npatch=1, init='kmeans++') p_1", ") w = np.zeros(ngal) w[np.random.choice(range(ngal), ngal//10, replace=False)] = 1.0 ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z)", "== cat.ntot assert min(p1) == 0 assert max(p1) == npatch-1 inertia1 = np.array([np.sum((xyz[p1==i]", "besides NField, even though # in practice NField will alsmost always be the", "np.sum(inertia) < 33000. 
assert np.std(inertia) < 0.1 * np.mean(inertia) # rms should be", "range(npatch)]) counts2 = np.array([np.sum(p2==i) for i in range(npatch)]) print('rms counts => ',np.std(counts2)) print('total", "lots of top level cells print('3d with init=random, min_top=10') field = cat.getNField(min_top=10) cen1", "range(npatch)]) print('rms counts => ',np.std(counts2)) print('total inertia => ',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1)", "1.0 ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) print('minra = ',np.min(ra) * coord.radians / coord.degrees) print('maxra", "direct_cen, atol=1.e-3) # KMeans minimizes the total inertia. # Check this value and", "mean here. assert np.std(sizes) < 0.15 * np.mean(sizes) print('mean counts = ',np.mean(counts)) print('min", "but using x,y,z positions. ngal = 100000 s = 1. rng = np.random.RandomState(8675309)", "2003-2019 by <NAME> # # TreeCorr is free software: redistribution and use in", "fitsio except ImportError: print('Skipping dessv test, since fitsio is not installed') return #treecorr.set_omp_threads(1);", "max_iter, since random isn't a great initialization. p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000)", "np.sum(inertia) < 33000. assert np.std(inertia) < 0.3 * np.mean(inertia) # rms is usually", "# to happen. npatch = 43 field = cat.getNField(max_top=5) t0 = time.time() patches,", "treecorr.Catalog(x=x, y=y, z=z, w=w) npatch = 111 field = cat.getNField() t0 = time.time()", "',np.unique(patches)) assert len(patches) == cat.ntot assert min(patches) == 0 assert max(patches) == npatch-1", "assert np.sum(inertia) < 210. assert np.std(inertia) < 0.4 * np.mean(inertia) # I've seen", "dec, ra ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) r = (x**2 + y**2 + z**2)**0.5", "kind of Field used. ngal = 100000 s = 1. rng = np.random.RandomState(8675309)", "len(p) == cat.ntot assert min(p) == 0 assert max(p) == npatch-1 inertia =", "here. 
assert np.std(sizes) < 0.1 * np.mean(sizes) # This is only a little", ") g2 = rng.normal(0,s, (ngal,) ) k = rng.normal(0,s, (ngal,) ) cat =", "import os import time import coord import warnings import treecorr from test_helper import", "run kmeans. ngal = 100000 s = 10. rng = np.random.RandomState(8675309) x =", "cat.z]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape == (npatch, 3)", "w = rng.random_sample(ngal) + 1 cat = treecorr.Catalog(x=x, y=y, z=z, w=w) npatch =", "redistribution and use in source and binary forms, # with or without modification,", "# rms should be even smaller here. print('mean counts = ',np.mean(counts)) print('min counts", "',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean inertia = ',np.mean(inertia)) print('rms inertia = ',np.std(inertia))", "= 100000 s = 1. rng = np.random.RandomState(8675309) x = rng.normal(0,s, (ngal,) )", "Total shouldn't increase much. (And often decreases.) assert np.std(inertia) < 0.15 * np.mean(inertia)", "assert_raises(ValueError): field.kmeans_initialize_centers(npatch=0, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=-100, init='random') # Should be valid to give", "print('max counts = ',np.max(counts)) # Check using patch_centers from (ra,dec) -> (ra,dec,r) cat3", "(ngal,) ) w = rng.random_sample(ngal) + 1 g1 = rng.normal(0,s, (ngal,) ) g2", "in 2d print('2d with init=random') cat = treecorr.Catalog(x=x, y=y) xy = np.array([x, y]).T", "step. print('3d with init=random') npatch = 10 field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch,", "force some of the shuffle bits in InitializeCenters # to happen. npatch =", "= ',np.sum(inertia)) print('mean inertia = ',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) print('mean size =", "range is more than a # factor of 10. 
I think because it", "max(p) == npatch-1 xyz = np.array([x, y, z]).T inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i]", "rng.normal(0,s, (ngal,) ) z = rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) + 1", "counts => ',np.std(counts2)) print('total inertia => ',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1) with assert_raises(ValueError):", "print('max counts = ',np.max(counts)) # Check using patch_centers from (ra,dec,r) -> (ra,dec) cat3", "np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) if __name__ == '__main__': test_dessv() test_radec() test_3d() test_2d() test_init_random() test_init_kmpp() test_zero_weight()", "field = cat.getNField() t0 = time.time() p, c = field.run_kmeans(npatch) t1 = time.time()", "coord.degrees) npatch = 111 cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch) t0", "assert len(p) == cat.ntot assert min(p) == 0 assert max(p) == npatch-1 xyz", "positions. # An additional check here is that this works with other fields", "',np.max(counts)) # Check using patch_centers from (ra,dec,r) -> (ra,dec) cat3 = treecorr.Catalog(ra=ra, dec=dec,", "init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=0, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=-100, init='kmeans++') # Should be valid", "ra_units='rad', dec_units='rad', r=r, w=w) field = cat2.getNField() t0 = time.time() p2, cen =", "npatch. assert np.std(inertia) < 0.3 * np.mean(inertia) # rms is usually small mean", "cen = cat.patch_centers t1 = time.time() print('patches = ',np.unique(p)) assert len(p) == cat.ntot", "npatch-1 xy = np.array([x, y]).T inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2) for", "111 cat = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w, npatch=npatch) t0 = time.time()", "y, z]).T # Skip the refine_centers step. 
print('3d with init=kmeans++') npatch = 10", "patches, cen = field.run_kmeans(npatch) t1 = time.time() assert len(patches) == cat.ntot assert min(patches)", "(ngal,) ) z = rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) + 1 cat", "init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=-100, init='kmeans++') # Should be valid to give npatch =", "# Repeat in spherical print('spher with init=kmeans++') ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) cat =", "Field used. ngal = 100000 s = 1. rng = np.random.RandomState(8675309) x =", "0.4 * np.mean(inertia) # I've seen over 0.3 x mean here. print('mean counts", "seen over 0.3 x mean here. print('mean counts = ',np.mean(counts)) print('min counts =", "so the total weight varies when you target having the # inertias be", "= treecorr.Catalog(ra=ra[:n], dec=dec[:n], ra_units='rad', dec_units='rad') field = cat.getNField() cen_n = field.kmeans_initialize_centers(npatch=n, init='random') p_n", "field.kmeans_initialize_centers(npatch=0, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=-100, init='random') # Should be valid to give npatch", "list(range(n))) @timer def test_zero_weight(): # Based on test_ra_dec, but where many galaxies have", "field.kmeans_initialize_centers(npatch=ngal*2, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=0, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=-100,", "@timer def test_init_kmpp(): # Test the init=random option ngal = 100000 s =", "with init=random') ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad') xyz", "sizes = np.array([np.mean((xyz[patches==i] - cen[i])**2) for i in range(npatch)])**0.5 sizes *= 180. 
/", "= 10 field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape == (npatch,", "alt=True) t1 = time.time() assert len(patches) == cat.ntot assert min(patches) == 0 assert", "np.array([np.mean((xyz[patches==i] - cen[i])**2) for i in range(npatch)])**0.5 sizes *= 180. / np.pi *", "y = rng.normal(0,s, (ngal,) ) z = rng.normal(0,s, (ngal,) ) cat = treecorr.Catalog(x=x,", "= ',np.min(counts)) print('max counts = ',np.max(counts)) # Should be the same thing with", "< 200. # This is specific to this particular field and npatch. assert", "rng.normal(0,s, (ngal,) ) cat = treecorr.Catalog(x=x, y=y, z=z) xyz = np.array([x, y, z]).T", "',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts)) # Should be the", "print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 210. assert np.std(inertia) < 0.4 *", "np.array([np.sum(w[p2==i]) for i in range(npatch)]) print('time = ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean", "1 cat = treecorr.Catalog(x=x, y=y, z=z, w=w) npatch = 111 field = cat.getNField()", "init='kmeans++', max_iter=1000) inertia2 = np.array([np.sum((xy[p2==i] - cen2[i])**2) for i in range(npatch)]) counts2 =", "w = np.zeros(ngal) w[np.random.choice(range(ngal), ngal//10, replace=False)] = 1.0 ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) print('minra", "cat = treecorr.Catalog(x=x, y=y, z=z, w=w) npatch = 111 field = cat.getNField() t0", "',np.unique(p)) assert len(p) == cat.ntot assert min(p) == 0 assert max(p) == npatch-1", ") y = rng.normal(0,s, (ngal,) ) + 100 # Put everything at large", "* coord.radians / coord.degrees) print('mindec = ',np.min(dec) * coord.radians / coord.degrees) print('maxdec =", "is usually small mean print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max", "with ra, dec, ra ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) r = (x**2 + y**2", "# Now run the normal way p2, cen2 = 
field.run_kmeans(npatch, init='kmeans++', max_iter=1000) inertia2", "* (xyz[p2==i] - cen[i])**2) for i in range(npatch)]) counts = np.array([np.sum(w[p2==i]) for i", "= ',cen) print('xyz = ',xyz) direct_cen = np.array([np.average(xyz[p==i], axis=0, weights=w[p==i]) for i in", "np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) @timer def test_catalog_3d(): # With ra, dec, r, the Catalog API", "print('max counts = ',np.max(counts)) # Check the alternate algorithm. rms inertia should be", "seen over 0.3 x mean here. assert np.std(sizes) < 0.15 * np.mean(sizes) print('mean", "= ',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 33000. assert np.std(inertia) <", "init='random', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)]) counts2 =", "',np.std(inertia)) assert np.sum(inertia) < 33000. assert np.std(inertia) < 0.1 * np.mean(inertia) # rms", "= 100000 s = 10. rng = np.random.RandomState(8675309) x = rng.normal(0,s, (ngal,) )", "np.sqrt(np.sum(direct_cen**2,axis=1)[:,np.newaxis]) np.testing.assert_allclose(cen, direct_cen, atol=1.e-3) # KMeans minimizes the total inertia. # Check this", "all that similar. The range is more than a # factor of 10.", "< np.sum(inertia1) # Repeat in 2d print('2d with init=random') cat = treecorr.Catalog(x=x, y=y)", "fail.) # Do this with fewer points though, since it's not particularly fast", "patches as galaxies, each galaxy gets a patch. # (This is stupid of", "in the documentation # and/or other materials provided with the distribution. from __future__", "* 60. # convert to arcmin counts = np.array([np.sum(patches==i) for i in range(npatch)])", "points. Nothing is required here though. print('mean counts = ',np.mean(counts)) print('min counts =", "more than a # factor of 10. 
I think because it varies whether", "inertia => ',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1) with assert_raises(ValueError): field.run_kmeans(npatch, init='invalid') with assert_raises(ValueError):", "algorithm:') print('time = ',t1-t0) print('inertia = ',inertia) print('counts = ',counts) print('total inertia =", "dec_col='dec', ra_units='deg', dec_units='deg') # Use an odd number to make sure we force", "I've seen over 0.3 x mean here. print('mean counts = ',np.mean(counts)) print('min counts", "p_n = field.kmeans_assign_patches(cen_n) np.testing.assert_equal(sorted(p_n), list(range(n))) @timer def test_zero_weight(): # Based on test_ra_dec, but", "normal way # Use higher max_iter, since random isn't a great initialization. p2,", "rng.normal(0,s, (ngal,) ) y = rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) + 1", "len(p) == cat.ntot assert min(p) == 0 assert max(p) == npatch-1 print('w>0 patches", "step. print('3d with init=kmeans++') npatch = 10 field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch,", "range(npatch)]) sizes = np.array([np.mean((xyz[patches==i] - cen[i])**2) for i in range(npatch)])**0.5 sizes *= 180.", "p, cen = field.run_kmeans(npatch, alt=True) t1 = time.time() assert len(p) == cat.ntot assert", "= 111 field = cat.getGField() t0 = time.time() p, cen = field.run_kmeans(npatch) t1", "print('With standard algorithm:') print('time = ',t1-t0) print('inertia = ',inertia) print('counts = ',counts) print('total", "# With weights, these aren't actually all that similar. The range is more", "',np.max(counts)) # Check the alternate algorithm. rms inertia should be lower. cat2 =", "= np.random.RandomState(8675309) x = rng.normal(0,s, (ngal,) ) y = rng.normal(0,s, (ngal,) ) +", "using x,y positions. 
# An additional check here is that this works with", "edges or middles of patches, so the total weight varies when you target", "code must retain the above copyright notice, this # list of conditions, and", "Repeat in spherical print('spher with init=kmeans++') ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) cat = treecorr.Catalog(ra=ra,", "# Based on test_ra_dec, but where many galaxies have w=0. # There used", "assert len(patches) == cat.ntot assert min(patches) == 0 assert max(patches) == npatch-1 inertia", "algorithm:') print('time = ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean inertia = ',np.mean(inertia)) print('rms", "by <NAME> # # TreeCorr is free software: redistribution and use in source", "< 5300. assert np.std(inertia) < 0.3 * np.mean(inertia) # rms is usually small", "other branch in # InitializeCenters. field = cat.getNField(min_top=10) t0 = time.time() patches, cen", "counts = ',np.min(counts)) print('max counts = ',np.max(counts)) # Check using patch_centers from (ra,dec,r)", "'random') assert cen1.shape == (npatch, 3) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1)) assert", "np.sum(inertia2) < np.sum(inertia1) with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='kmeans++') with assert_raises(ValueError):", "',np.std(sizes)) assert np.sum(inertia) < 210. 
assert np.std(inertia) < 0.4 * np.mean(inertia) # I've", "With ra, dec, r, the Catalog API should only do patches using RA,", "2d print('2d with init=kmeans++') cat = treecorr.Catalog(x=x, y=y) xy = np.array([x, y]).T field", "dec_units='rad', r=r, w=w) field = cat2.getNField() t0 = time.time() p2, cen = field.run_kmeans(npatch)", "np.array([cat.x, cat.y, cat.z]).T direct_cen = np.array([xyz[patches==i].mean(axis=0) for i in range(npatch)]) direct_cen /= np.sqrt(np.sum(direct_cen**2,axis=1)[:,np.newaxis])", "k = rng.normal(0,s, (ngal,) ) cat = treecorr.Catalog(x=x, y=y, w=w, g1=g1, g2=g2, k=k)", "3) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1)) assert len(p1) == cat.ntot assert min(p1)", "little bit smaller. # This doesn't keep the counts as equal as the", "__future__ import print_function import numpy as np import os import time import coord", "use in source and binary forms, # with or without modification, are permitted", "range(npatch)])**0.5 sizes *= 180. / np.pi * 60. # convert to arcmin counts", "dec_units='rad', w=w, patch_centers=cat2.patch_centers) np.testing.assert_array_equal(cat2.patch, cat3.patch) np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) @timer def test_catalog_3d(): # With ra,", "y=y) xy = np.array([x, y]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert", "= time.time() assert len(patches) == cat.ntot assert min(patches) == 0 assert max(patches) ==", "10000 s = 10. 
rng = np.random.RandomState(8675309) x = rng.normal(0,s, (ngal,) ) y", "range(npatch)]) counts = np.array([np.sum(w[p==i]) for i in range(npatch)]) print('With standard algorithm:') print('time =", "angle on sky z = rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) ra, dec,", "is usually small mean # With weights, these aren't actually all that similar.", "level cells print('3d with init=kmeans++, min_top=10') field = cat.getNField(min_top=10) cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++')", "t1 = time.time() assert len(p) == cat2.ntot assert min(p) == 0 assert max(p)", "field.run_kmeans(npatch, init='random', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)]) counts2", "inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2) for i in range(npatch)]) counts =", "rms should be even smaller here. assert np.std(sizes) < 0.1 * np.mean(sizes) #", "= ',np.min(ra) * coord.radians / coord.degrees) print('maxra = ',np.max(ra) * coord.radians / coord.degrees)", "print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 200. # This is specific to", "print('counts = ',counts) print('total inertia = ',np.sum(inertia)) print('mean inertia = ',np.mean(inertia)) print('rms inertia", "Now run the normal way p2, cen2 = field.run_kmeans(npatch, init='random', max_iter=1000) inertia2 =", "cen1.shape == (npatch, 3) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1)) assert len(p1) ==", "np.mean(inertia) # I've seen over 0.3 x mean here. print('mean counts = ',np.mean(counts))", "field = cat.getKField(min_top=10) t0 = time.time() p, cen = field.run_kmeans(npatch) t1 = time.time()", "atol=1.e-3) # KMeans minimizes the total inertia. # Check this value and the", "npatch-1 # Check the returned center to a direct calculation. 
xyz = np.array([cat.x/cat.r,", "@timer def test_catalog_sphere(): # This follows the same path as test_radec, but using", "since fitsio is not installed') return #treecorr.set_omp_threads(1); get_from_wiki('des_sv.fits') file_name = os.path.join('data','des_sv.fits') cat =", "coord.degrees) npatch = 111 cat = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w, npatch=npatch)", "in # InitializeCenters. field = cat.getNField(min_top=10) t0 = time.time() patches, cen = field.run_kmeans(npatch)", "on sky z = rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) ra, dec, r", "r=r, ra_units='rad', dec_units='rad', w=w, npatch=npatch, kmeans_alt=True) t0 = time.time() p = cat2.patch cen", "ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad') xyz = np.array([cat.x,", "software: redistribution and use in source and binary forms, # with or without", "',t1-t0) print('inertia = ',inertia) print('counts = ',counts) print('total inertia = ',np.sum(inertia)) print('mean inertia", "= ',np.std(inertia)) print('mean size = ',np.mean(sizes)) print('rms size = ',np.std(sizes)) assert np.sum(inertia) <", "y = rng.normal(0,s, (ngal,) ) z = rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal)", "assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=0, init='kmeans++') with assert_raises(ValueError):", "alternate algorithm. rms inertia should be lower. cat2 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad',", "= time.time() patches, cen = field.run_kmeans(npatch) t1 = time.time() print('patches = ',np.unique(patches)) assert", "< 0.3 * np.mean(inertia) # rms is usually < 0.2 * mean assert", "np.std(sizes) < 0.1 * np.mean(sizes) # sizes have even less spread usually. 
#", "to a direct calculation. xyz = np.array([cat.x, cat.y, cat.z]).T direct_cen = np.array([np.average(xyz[p==i], axis=0,", "run the normal way # Use higher max_iter, since random isn't a great", "of course, but check that it doesn't fail.) # Do this with fewer", "',np.mean(sizes)) print('rms size = ',np.std(sizes)) assert np.sum(inertia) < 200. # This is specific", "stupid of course, but check that it doesn't fail.) # Do this with", "cat2 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch, kmeans_alt=True) t0 = time.time() p", "== 0 assert max(p) == npatch-1 xy = np.array([x, y]).T inertia = np.array([np.sum(w[p==i][:,None]", "valid to give npatch = 1, although not particularly useful. cen_1 = field.kmeans_initialize_centers(npatch=1,", "np.array([np.sum(w[p==i]) for i in range(npatch)]) # This doesn't give as good an initialization,", "to check the other branch in # InitializeCenters. field = cat.getNField(min_top=10) t0 =", "Nothing is required here though. print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts))", "cen = field.run_kmeans(npatch) t1 = time.time() assert len(patches) == cat.ntot assert min(patches) ==", "field = cat.getNField() cen_n = field.kmeans_initialize_centers(npatch=n, init='random') p_n = field.kmeans_assign_patches(cen_n) np.testing.assert_equal(sorted(p_n), list(range(n))) @timer", "to happen. npatch = 43 field = cat.getNField(max_top=5) t0 = time.time() patches, cen", "print('max counts = ',np.max(counts)) @timer def test_2d(): # Like the above, but using", "= field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1)) assert len(p1) == cat.ntot assert min(p1) == 0", "patch_centers from (ra,dec) -> (ra,dec,r) cat3 = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w,", "rms inertia should be lower. 
t0 = time.time() p, cen = field.run_kmeans(npatch, alt=True)", "counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts)) # Finally,", "min_top=10:') print('time = ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean inertia = ',np.mean(inertia)) print('rms", "the other branch in # InitializeCenters. field = cat.getNField(min_top=10) t0 = time.time() patches,", "treecorr.set_omp_threads(1) npatch = 16 field = cat.getNField() t0 = time.time() p, c =", "this # list of conditions, and the disclaimer given in the accompanying LICENSE", "print('mean inertia = ',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) print('mean size = ',np.mean(sizes)) print('rms", "a direct calculation. xyz = np.array([cat.x, cat.y, cat.z]).T direct_cen = np.array([np.average(xyz[p==i], axis=0, weights=w[p==i])", "total weight varies when you target having the # inertias be relatively similar.", "z**2)**0.5 cat2 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', r=r, w=w) field = cat2.getNField() t0", "even less spread usually. # Should all have similar number of points. Nothing", "for i in range(npatch)]) counts2 = np.array([np.sum(p2==i) for i in range(npatch)]) print('rms counts", "== cat.ntot assert min(p1) == 0 assert max(p1) == npatch-1 inertia1 = np.array([np.sum((xy[p1==i]", "coord.degrees) print('maxra = ',np.max(ra) * coord.radians / coord.degrees) print('mindec = ',np.min(dec) * coord.radians", "print('rms size = ',np.std(sizes)) assert np.sum(inertia) < 200. # This is specific to", "',np.max(dec) * coord.radians / coord.degrees) npatch = 111 cat = treecorr.Catalog(ra=ra, dec=dec, r=r,", "# InitializeCenters. 
field = cat.getNField(min_top=10) t0 = time.time() patches, cen = field.run_kmeans(npatch) t1", "= ',np.max(dec) * coord.radians / coord.degrees) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w)", "np.array([x, y]).T inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2) for i in range(npatch)])", "0.3 x mean here. assert np.std(sizes) < 0.15 * np.mean(sizes) print('mean counts =", "works with other fields besides NField, even though # in practice NField will", "== cat.ntot assert min(p) == 0 assert max(p) == npatch-1 # Check the", "< 5300. assert np.std(inertia) < 0.1 * np.mean(inertia) # rms should be even", "test_init_kmpp(): # Test the init=random option ngal = 100000 s = 1. rng", "of conditions, and the disclaimer given in the documentation # and/or other materials", "43 field = cat.getNField(max_top=5) t0 = time.time() patches, cen = field.run_kmeans(npatch) t1 =", "assert_raises(ValueError): field.kmeans_initialize_centers(npatch, init='invalid') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='random') with assert_raises(ValueError):", "great initialization. 
p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2)", "counts = ',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_init_random(): # Test the", "= ',np.min(counts)) print('max counts = ',np.max(counts)) # Check using patch_centers from (ra,dec,r) ->", "treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w, npatch=npatch, kmeans_alt=True) t0 = time.time() p =", "and the disclaimer given in the documentation # and/or other materials provided with", "inertia = np.array([np.sum(w[p2==i][:,None] * (xyz[p2==i] - cen[i])**2) for i in range(npatch)]) counts =", "ngal//10, replace=False)] = 1.0 ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) print('minra = ',np.min(ra) * coord.radians", "be quite small. inertia = np.array([np.sum((xyz[patches==i] - cen[i])**2) for i in range(npatch)]) sizes", "rms size, which should also be quite small. inertia = np.array([np.sum((xyz[patches==i] - cen[i])**2)", "though, since it's not particularly fast with N=10^5. n = 100 cat =", "weights to make sure that works. ngal = 100000 s = 10. rng", "os import time import coord import warnings import treecorr from test_helper import get_from_wiki,", "field = cat.getNField(min_top=10) cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape == (npatch, 3) p1", "of patches as galaxies, each galaxy gets a patch. # (This is stupid", "dec=dec, ra_units='rad', dec_units='rad', r=r, w=w) field = cat2.getNField() t0 = time.time() p2, cen", "the other branch in # InitializeCenters. field = cat.getKField(min_top=10) t0 = time.time() p,", "counts2 = np.array([np.sum(p2==i) for i in range(npatch)]) print('rms counts => ',np.std(counts2)) print('total inertia", "size = ',np.std(sizes)) assert np.sum(inertia) < 200. 
# This is specific to this", "print('w==0 patches = ',np.unique(p[w==0])) assert set(p[w>0]) == set(p[w==0]) @timer def test_catalog_sphere(): # This", "at large y, so smallish angle on sky z = rng.normal(0,s, (ngal,) )", "is more than a # factor of 10. I think because it varies", "NField will alsmost always be the kind of Field used. ngal = 100000", "or without modification, are permitted provided that the following # conditions are met:", "should be lower. t0 = time.time() patches, cen = field.run_kmeans(npatch, alt=True) t1 =", "actually all that similar. The range is more than a # factor of", "np.sum(inertia) < 200. # This is specific to this particular field and npatch.", "similar. print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts))", "* np.mean(inertia) # I've seen over 0.3 x mean here. assert np.std(sizes) <", "cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w) npatch = 111 field = cat.getNField()", "total inertia. # Check this value and the rms size, which should also", "/ coord.degrees) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w) npatch = 111 field", "print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_radec(): # Very", "+ 1 g1 = rng.normal(0,s, (ngal,) ) g2 = rng.normal(0,s, (ngal,) ) k", "With weights, these aren't actually all that similar. The range is more than", "spherical print('spher with init=random') ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad',", "= np.array([np.sum((xyz[p1==i] - cen1[i])**2) for i in range(npatch)]) counts1 = np.array([np.sum(p1==i) for i", "though. print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts))", "to make sure that works. ngal = 100000 s = 10. 
rng =", "t1 = time.time() print('patches = ',np.unique(p)) assert len(p) == cat.ntot assert min(p) ==", "rng.normal(0,s, (ngal,) ) w = np.zeros(ngal) w[np.random.choice(range(ngal), ngal//10, replace=False)] = 1.0 ra, dec", "t0 = time.time() p2, cen = field.run_kmeans(npatch) t1 = time.time() inertia = np.array([np.sum(w[p2==i][:,None]", "/ coord.degrees) npatch = 111 cat = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w,", "def test_2d(): # Like the above, but using x,y positions. # An additional", "np.array([np.sum(patches==i) for i in range(npatch)]) print('With alternate algorithm:') print('time = ',t1-t0) print('total inertia", "Check this value and the rms size, which should also be quite small.", "y, so smallish angle on sky z = rng.normal(0,s, (ngal,) ) w =", "cen2 = field.run_kmeans(npatch, init='random', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in", "np.random.RandomState(8675309) x = rng.normal(0,s, (ngal,) ) y = rng.normal(0,s, (ngal,) ) + 100", "points happen to be near the # edges or middles of patches, so", "# Very similar to the above, but with a random set of points,", "= np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2) for i in range(npatch)]) counts = np.array([np.sum(w[p==i])", "relatively similar. print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts =", "# rms is usually small mean # With weights, these aren't actually all", "Check the alternate algorithm. rms inertia should be lower. t0 = time.time() p,", "dec_units='rad', w=w) npatch = 111 field = cat.getNField() t0 = time.time() p, cen", "only do patches using RA, Dec. ngal = 100000 s = 10. 
rng", "assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=0, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=-100, init='random') # Should", "w=w, patch_centers=cat2.patch_centers) np.testing.assert_array_equal(cat2.patch, cat3.patch) np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) if __name__ == '__main__': test_dessv() test_radec() test_3d()", "inertia should be lower. cat2 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch, kmeans_alt=True)", "print('rms size = ',np.std(sizes)) assert np.sum(inertia) < 200. # Total shouldn't increase much.", "= time.time() p = cat.patch cen = cat.patch_centers t1 = time.time() print('patches =", "the above, but using x,y,z positions. ngal = 100000 s = 1. rng", "<NAME> # # TreeCorr is free software: redistribution and use in source and", "range(npatch)]) counts1 = np.array([np.sum(p1==i) for i in range(npatch)]) print('counts = ',counts1) print('rms counts", "# There used to be a bug where w=0 objects were not assigned", "direct_cen, atol=2.e-3) inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)])", "w[np.random.choice(range(ngal), ngal//10, replace=False)] = 1.0 ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) print('minra = ',np.min(ra) *", "the # edges or middles of patches, so the total weight varies when", "of points, so it will run even # if the user doesn't have", "cen[i])**2) for i in range(npatch)]) sizes = np.array([np.mean((xyz[patches==i] - cen[i])**2) for i in", "rms is usually small mean print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts))", "t0 = time.time() p = cat2.patch cen = cat2.patch_centers t1 = time.time() assert", "dec_units='rad', w=w, patch_centers=cat2.patch_centers) np.testing.assert_array_equal(cat2.patch, cat3.patch) 
np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) if __name__ == '__main__': test_dessv() test_radec()", "dec_units='rad') field = cat.getNField() cen_n = field.kmeans_initialize_centers(npatch=n, init='random') p_n = field.kmeans_assign_patches(cen_n) np.testing.assert_equal(sorted(p_n), list(range(n)))", "cen = cat2.patch_centers t1 = time.time() assert len(p) == cat2.ntot assert min(p) ==", "time.time() assert len(p) == cat.ntot assert min(p) == 0 assert max(p) == npatch-1", "',np.max(dec) * coord.radians / coord.degrees) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, keep_zero_weight=True)", "assert cen1.shape == (npatch, 2) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1)) assert len(p1)", "assert max(p) == npatch-1 xy = np.array([x, y]).T inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i]", "the # inertias be relatively similar. print('mean counts = ',np.mean(counts)) print('min counts =", "I think because it varies whether high weight points happen to be near", "whether high weight points happen to be near the # edges or middles", "',np.min(counts)) print('max counts = ',np.max(counts)) # Check using patch_centers from (ra,dec) -> (ra,dec,r)", "100 # Put everything at large y, so smallish angle on sky z", "ra ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) r = (x**2 + y**2 + z**2)**0.5 cat2", "range(npatch)]) print('time = ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean inertia = ',np.mean(inertia)) print('rms", "z=z) xyz = np.array([x, y, z]).T # Skip the refine_centers step. print('3d with", "* np.mean(sizes) # This is only a little bit smaller. # This doesn't", "init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=-100, init='random') # Should be valid to give npatch =", "assert_raises, profile, timer @timer def test_dessv(): try: import fitsio except ImportError: print('Skipping dessv", "the alternate algorithm. 
rms inertia should be lower. t0 = time.time() p, cen", "np.array([cat.x, cat.y, cat.z]).T direct_cen = np.array([np.average(xyz[p==i], axis=0, weights=w[p==i]) for i in range(npatch)]) direct_cen", "field.kmeans_initialize_centers(npatch=-100, init='random') # Should be valid to give npatch = 1, although not", "cen[i])**2) for i in range(npatch)])**0.5 sizes *= 180. / np.pi * 60. #", "= field.run_kmeans(npatch, init='kmeans++', max_iter=1000) inertia2 = np.array([np.sum((xy[p2==i] - cen2[i])**2) for i in range(npatch)])", "print('rms counts => ',np.std(counts2)) print('total inertia => ',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1) with", "== npatch-1 xy = np.array([x, y]).T inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] - cen[i])**2)", "p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i", "inertia = ',np.std(inertia)) assert np.sum(inertia) < 5300. assert np.std(inertia) < 0.4 * np.mean(inertia)", "from __future__ import print_function import numpy as np import os import time import", "npatch = 111 cat = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w, npatch=npatch) t0", "npatch-1 xyz = np.array([x, y, z]).T inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2)", "',np.std(sizes)) assert np.sum(inertia) < 200. # Total shouldn't increase much. (And often decreases.)", "here. 
assert np.std(sizes) < 0.15 * np.mean(sizes) print('mean counts = ',np.mean(counts)) print('min counts", "print('total inertia = ',np.sum(inertia)) print('mean inertia = ',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) print('mean", "= ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean inertia = ',np.mean(inertia)) print('rms inertia =", "== (npatch, 2) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1)) assert len(p1) == cat.ntot", ") y = rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) + 1 g1 =", "direct_cen = np.array([xyz[patches==i].mean(axis=0) for i in range(npatch)]) direct_cen /= np.sqrt(np.sum(direct_cen**2,axis=1)[:,np.newaxis]) np.testing.assert_allclose(cen, direct_cen, atol=1.e-3)", "assert np.sum(inertia2) < np.sum(inertia1) # Repeat in spherical print('spher with init=random') ra, dec", "np.testing.assert_equal(sorted(p_n), list(range(n))) @timer def test_init_kmpp(): # Test the init=random option ngal = 100000", "size = ',np.mean(sizes)) print('rms size = ',np.std(sizes)) assert np.sum(inertia) < 210. assert np.std(inertia)", "Repeat in spherical print('spher with init=random') ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) cat = treecorr.Catalog(ra=ra,", "= rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) ra, dec, r = coord.CelestialCoord.xyz_to_radec(x,y,z, return_r=True)", "run the normal way p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i]", "top level cells to check the other branch in # InitializeCenters. field =", "used. ngal = 100000 s = 1. rng = np.random.RandomState(8675309) x = rng.normal(0,s,", "the accompanying LICENSE # file. # 2. 
Redistributions in binary form must reproduce", "rng.normal(0,s, (ngal,) ) k = rng.normal(0,s, (ngal,) ) cat = treecorr.Catalog(x=x, y=y, w=w,", "for i in range(npatch)]) counts = np.array([np.sum(w[p==i]) for i in range(npatch)]) # This", "dec_units='rad', w=w, npatch=npatch) t0 = time.time() p = cat.patch cen = cat.patch_centers t1", "cat.patch_centers t1 = time.time() print('patches = ',np.unique(p)) assert len(p) == cat.ntot assert min(p)", "= cat.patch_centers t1 = time.time() print('patches = ',np.unique(p)) assert len(p) == cat.ntot assert", "print('time = ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean inertia = ',np.mean(inertia)) print('rms inertia", "counts = ',np.max(counts)) @timer def test_radec(): # Very similar to the above, but", "time.time() print('patches = ',np.unique(p)) assert len(p) == cat.ntot assert min(p) == 0 assert", "but where many galaxies have w=0. # There used to be a bug", "assert min(p) == 0 assert max(p) == npatch-1 print('w>0 patches = ',np.unique(p[w>0])) print('w==0", "cat.getNField(min_top=10) t0 = time.time() patches, cen = field.run_kmeans(npatch) t1 = time.time() assert len(patches)", "@timer def test_radec(): # Very similar to the above, but with a random", "(ngal,) ) y = rng.normal(0,s, (ngal,) ) z = rng.normal(0,s, (ngal,) ) w", "counts = ',np.max(counts)) @timer def test_2d(): # Like the above, but using x,y", "test_2d(): # Like the above, but using x,y positions. # An additional check", "ra_units='rad', dec_units='rad') field = cat.getNField() cen_n = field.kmeans_initialize_centers(npatch=n, init='kmeans++') p_n = field.kmeans_assign_patches(cen_n) np.testing.assert_equal(sorted(p_n),", "= ',np.mean(sizes)) print('rms size = ',np.std(sizes)) assert np.sum(inertia) < 200. 
# This is", "',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_init_random(): # Test the init=random option", "= rng.normal(0,s, (ngal,) ) y = rng.normal(0,s, (ngal,) ) + 100 # Put", "on sky z = rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) ra, dec =", "= time.time() assert len(p) == cat.ntot assert min(p) == 0 assert max(p) ==", "# This doesn't keep the counts as equal as the standard algorithm. print('mean", "np.array([np.sum(p2==i) for i in range(npatch)]) print('rms counts => ',np.std(counts2)) print('total inertia => ',np.sum(inertia2))", "x mean here. assert np.std(sizes) < 0.15 * np.mean(sizes) print('mean counts = ',np.mean(counts))", "max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)]) counts2 = np.array([np.sum(p2==i)", "print('With min_top=10:') print('time = ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean inertia = ',np.mean(inertia))", "np.array([np.sum(patches==i) for i in range(npatch)]) print('With standard algorithm:') print('time = ',t1-t0) print('total inertia", "'random') assert cen1.shape == (npatch, 2) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1)) assert", "time.time() patches, cen = field.run_kmeans(npatch, alt=True) t1 = time.time() assert len(patches) == cat.ntot", "= ',inertia) print('counts = ',counts) print('total inertia = ',np.sum(inertia)) print('mean inertia = ',np.mean(inertia))", "with init=kmeans++') npatch = 10 field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert", "# Should be the same thing with ra, dec, ra ra, dec =", "np.std(sizes) < 0.1 * np.mean(sizes) # This is only a little bit smaller.", "cat.getNField() t0 = time.time() p, c = field.run_kmeans(npatch) t1 = time.time() print('patches =", "field = cat.getNField() t0 = time.time() p, cen = field.run_kmeans(npatch) t1 = time.time()", "len(patches) == cat.ntot assert min(patches) == 0 assert max(patches) == npatch-1 # Check", 
"rng.normal(0,s, (ngal,) ) z = rng.normal(0,s, (ngal,) ) cat = treecorr.Catalog(x=x, y=y, z=z)", "branch in # InitializeCenters. field = cat.getKField(min_top=10) t0 = time.time() p, cen =", "to any patch. ngal = 10000 s = 10. rng = np.random.RandomState(8675309) x", "above, but with a random set of points, so it will run even", "with init=random, min_top=10') field = cat.getNField(min_top=10) cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape ==", "above copyright notice, this # list of conditions, and the disclaimer given in", "and use in source and binary forms, # with or without modification, are", "assert np.std(inertia) < 0.3 * np.mean(inertia) # rms is usually small mean print('mean", "= time.time() p = cat2.patch cen = cat2.patch_centers t1 = time.time() assert len(p)", "points though, since it's not particularly fast with N=10^5. n = 100 cat", "sky z = rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) ra, dec, r =", "np.array([cat.x, cat.y, cat.z]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape ==", "Like the above, but using x,y positions. # An additional check here is", "since random isn't a great initialization. p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000) inertia2", "lower. t0 = time.time() patches, cen = field.run_kmeans(npatch, alt=True) t1 = time.time() assert", "these are a bit worse usually. print('With min_top=10:') print('time = ',t1-t0) print('total inertia", "= rng.random_sample(ngal) ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) print('minra = ',np.min(ra) * coord.radians / coord.degrees)", "= ',np.max(counts)) @timer def test_init_random(): # Test the init=random option ngal = 100000", "60. 
# convert to arcmin counts = np.array([np.sum(patches==i) for i in range(npatch)]) print('With", "print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts)) @timer", "assert max(p) == npatch-1 inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i", "cat.ntot assert min(p) == 0 assert max(p) == npatch-1 xy = np.array([x, y]).T", "# In addition, we add weights to make sure that works. ngal =", "assert len(p) == cat.ntot assert min(p) == 0 assert max(p) == npatch-1 #", "the total weight varies when you target having the # inertias be relatively", "cat.z/cat.r]).T print('cen = ',cen) print('xyz = ',xyz) direct_cen = np.array([np.average(xyz[p==i], axis=0, weights=w[p==i]) for", "* np.mean(inertia) # rms is usually small mean print('mean counts = ',np.mean(counts)) print('min", "same thing with ra, dec, ra ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) r = (x**2", "cen = field.run_kmeans(npatch, alt=True) t1 = time.time() assert len(p) == cat.ntot assert min(p)", "print_function import numpy as np import os import time import coord import warnings", "rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) + 1 g1 = rng.normal(0,s, (ngal,) )", "npatch-1 inertia1 = np.array([np.sum((xy[p1==i] - cen1[i])**2) for i in range(npatch)]) counts1 = np.array([np.sum(p1==i)", "for i in range(npatch)]) print('time = ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean inertia", "i in range(npatch)]) print('counts = ',counts1) print('rms counts = ',np.std(counts1)) print('total inertia =", "high weight points happen to be near the # edges or middles of", "returned center to a direct calculation. xyz = np.array([cat.x, cat.y, cat.z]).T direct_cen =", "Redistributions in binary form must reproduce the above copyright notice, # this list", "npatch = 43 field = cat.getNField(max_top=5) t0 = time.time() patches, cen = field.run_kmeans(npatch)", "(And often decreases.) 
assert np.std(inertia) < 0.15 * np.mean(inertia) # rms should be", "the normal way p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000) inertia2 = np.array([np.sum((xy[p2==i] -", "provided with the distribution. from __future__ import print_function import numpy as np import", "== 0 assert max(p) == npatch-1 inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2)", "min_top=10') field = cat.getNField(min_top=10) cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape == (npatch, 3)", "above, but using x,y positions. # An additional check here is that this", "spherical print('spher with init=kmeans++') ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad',", "assert min(patches) == 0 assert max(patches) == npatch-1 inertia = np.array([np.sum((xyz[patches==i] - cen[i])**2)", "branch in # InitializeCenters. field = cat.getNField(min_top=10) t0 = time.time() patches, cen =", "(npatch, 2) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1)) assert len(p1) == cat.ntot assert", "',np.unique(p[w==0])) assert set(p[w>0]) == set(p[w==0]) @timer def test_catalog_sphere(): # This follows the same", "= ',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 200. # Total shouldn't", "accompanying LICENSE # file. # 2. Redistributions in binary form must reproduce the", "the counts as equal as the standard algorithm. print('mean counts = ',np.mean(counts)) print('min", "= np.array([np.sum(w[p2==i][:,None] * (xyz[p2==i] - cen[i])**2) for i in range(npatch)]) counts = np.array([np.sum(w[p2==i])", "Check the returned center to a direct calculation. xyz = np.array([cat.x/cat.r, cat.y/cat.r, cat.z/cat.r]).T", "each galaxy gets a patch. # (This is stupid of course, but check", "arcmin counts = np.array([np.sum(patches==i) for i in range(npatch)]) print('With alternate algorithm:') print('time =", "much. (And often decreases.) 
assert np.std(inertia) < 0.15 * np.mean(inertia) # rms should", "1, although not particularly useful. cen_1 = field.kmeans_initialize_centers(npatch=1, init='kmeans++') p_1 = field.kmeans_assign_patches(cen_1) np.testing.assert_equal(p_1,", "inertia = ',np.std(inertia)) assert np.sum(inertia) < 200. # This is specific to this", "',np.min(ra) * coord.radians / coord.degrees) print('maxra = ',np.max(ra) * coord.radians / coord.degrees) print('mindec", "dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch, kmeans_alt=True) t0 = time.time() p = cat2.patch cen", "# list of conditions, and the disclaimer given in the accompanying LICENSE #", "rng.random_sample(ngal) + 1 cat = treecorr.Catalog(x=x, y=y, z=z, w=w) npatch = 111 field", "init='invalid') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=0, init='random')", "# file. # 2. Redistributions in binary form must reproduce the above copyright", "the Catalog API should only do patches using RA, Dec. ngal = 100000", "np.sum(inertia2) < np.sum(inertia1) # Repeat in spherical print('spher with init=random') ra, dec =", "',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_radec(): # Very similar to the", "given in the accompanying LICENSE # file. # 2. 
Redistributions in binary form", "in range(npatch)]) print('With standard algorithm:') print('time = ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean", "without modification, are permitted provided that the following # conditions are met: #", "z = rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) print('minra", "field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape == (npatch, 2) p1", "np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)]) counts = np.array([np.sum(w[p==i]) for", "= np.array([x, y, z]).T # Skip the refine_centers step. print('3d with init=random') npatch", "16 field = cat.getNField() t0 = time.time() p, c = field.run_kmeans(npatch) t1 =", "for i in range(npatch)])**0.5 sizes *= 180. / np.pi * 60. # convert", "',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 5300. assert np.std(inertia) < 0.1", "np.mean(sizes) # This is only a little bit smaller. # This doesn't keep", "inertia => ',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1) # Use a field with lots", "# sizes have even less spread usually. # Should all have similar number", "assert np.std(sizes) < 0.15 * np.mean(sizes) print('mean counts = ',np.mean(counts)) print('min counts =", "= cat.getNField(max_top=5) t0 = time.time() patches, cen = field.run_kmeans(npatch) t1 = time.time() print('patches", "',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 210. 
assert np.std(inertia) < 0.4", "the above copyright notice, this # list of conditions, and the disclaimer given", "< 0.2 * mean assert np.std(sizes) < 0.1 * np.mean(sizes) # sizes have", "ra_units='deg', dec_units='deg') # Use an odd number to make sure we force some", "way p2, cen2 = field.run_kmeans(npatch, init='kmeans++', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for", "= np.array([cat.x, cat.y, cat.z]).T direct_cen = np.array([np.average(xyz[p==i], axis=0, weights=w[p==i]) for i in range(npatch)])", "dec_units='deg') # Use an odd number to make sure we force some of", "t0 = time.time() p, cen = field.run_kmeans(npatch) t1 = time.time() print('patches = ',np.unique(p))", "p2, cen2 = field.run_kmeans(npatch, init='random', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i", "= 111 cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch) t0 = time.time()", "y = rng.normal(0,s, (ngal,) ) + 100 # Put everything at large y,", "g1 = rng.normal(0,s, (ngal,) ) g2 = rng.normal(0,s, (ngal,) ) k = rng.normal(0,s,", "cen_1 = field.kmeans_initialize_centers(npatch=1, init='random') p_1 = field.kmeans_assign_patches(cen_1) np.testing.assert_equal(p_1, np.zeros(ngal)) # If same number", "only a little bit smaller. # This doesn't keep the counts as equal", "standard algorithm:') print('time = ',t1-t0) print('inertia = ',inertia) print('counts = ',counts) print('total inertia", "60. 
# convert to arcmin counts = np.array([np.sum(patches==i) for i in range(npatch)]) #", "',np.min(counts)) print('max counts = ',np.max(counts)) # Should be the same thing with ra,", "y]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape == (npatch, 2)", "list of conditions, and the disclaimer given in the documentation # and/or other", "np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)]) counts2 = np.array([np.sum(p2==i) for i in", "coord import warnings import treecorr from test_helper import get_from_wiki, CaptureLog, assert_raises, profile, timer", "dec_units='rad') xyz = np.array([cat.x, cat.y, cat.z]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'random')", "t1 = time.time() assert len(p) == cat.ntot assert min(p) == 0 assert max(p)", "= rng.normal(0,s, (ngal,) ) w = np.zeros(ngal) w[np.random.choice(range(ngal), ngal//10, replace=False)] = 1.0 ra,", "< 0.1 * np.mean(sizes) # This is only a little bit smaller. #", "cat3.patch) np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) if __name__ == '__main__': test_dessv() test_radec() test_3d() test_2d() test_init_random() test_init_kmpp()", "a field with lots of top level cells to check the other branch", "direct_cen /= np.sqrt(np.sum(direct_cen**2,axis=1)[:,np.newaxis]) np.testing.assert_allclose(cen, direct_cen, atol=2.e-3) inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2)", "that similar. The range is more than a # factor of 10. I", "even smaller here. 
# Copyright (c) 2003-2019 by <NAME>
#
# TreeCorr is free software: redistribution and use in source and binary forms,
# with or without modification, are permitted provided that the following
# conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
#    this list of conditions, and the disclaimer given in the accompanying LICENSE
#    file.
# 2. Redistributions in binary form must reproduce the above copyright notice,
#    this list of conditions, and the disclaimer given in the documentation
#    and/or other materials provided with the distribution.

from __future__ import print_function
import numpy as np
import os
import time
import coord
import warnings
import treecorr

from test_helper import get_from_wiki, CaptureLog, assert_raises, profile, timer

@timer
def test_dessv():
    try:
        import fitsio
    except ImportError:
        print('Skipping dessv test, since fitsio is not installed')
        return

    get_from_wiki('des_sv.fits')
    file_name = os.path.join('data','des_sv.fits')
    cat = treecorr.Catalog(file_name, ra_col='ra', dec_col='dec', ra_units='deg', dec_units='deg')

    # Use an odd number to force some of the shuffle bits in InitializeCenters
    # to happen.
    npatch = 43
    field = cat.getNField(max_top=5)
    t0 = time.time()
    patches, cen = field.run_kmeans(npatch)
    t1 = time.time()
    print('patches = ',np.unique(patches))
    assert len(patches) == cat.ntot
    assert min(patches) == 0
    assert max(patches) == npatch-1

    # Check the returned centers against a direct calculation.
    xyz = np.array([cat.x, cat.y, cat.z]).T
    direct_cen = np.array([xyz[patches==i].mean(axis=0) for i in range(npatch)])
    direct_cen /= np.sqrt(np.sum(direct_cen**2,axis=1)[:,np.newaxis])
    np.testing.assert_allclose(cen, direct_cen, atol=1.e-3)

    # KMeans minimizes the total inertia.  Check this against the rms size,
    # which should also be quite small.
    inertia = np.array([np.sum((xyz[patches==i] - cen[i])**2) for i in range(npatch)])
    sizes = np.array([np.mean((xyz[patches==i] - cen[i])**2) for i in range(npatch)])**0.5
    sizes *= 180. / np.pi * 60.  # convert to arcmin
    counts = np.array([np.sum(patches==i) for i in range(npatch)])

    print('With standard algorithm:')
    print('time = ',t1-t0)
    print('inertia = ',inertia)
    print('counts = ',counts)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    print('mean size = ',np.mean(sizes))
    print('rms size = ',np.std(sizes))
    assert np.sum(inertia) < 200.  # This is specific to this particular field and npatch.
    assert np.std(inertia) < 0.3 * np.mean(inertia)  # rms is usually < 0.2 * mean
    assert np.std(sizes) < 0.1 * np.mean(sizes)  # sizes have even less spread usually.

    # Should all have similar number of points.  Nothing is required here though.
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

    # Check the alternate algorithm.  rms inertia should be lower.
    t0 = time.time()
    patches, cen = field.run_kmeans(npatch, alt=True)
    t1 = time.time()
    assert len(patches) == cat.ntot
    assert min(patches) == 0
    assert max(patches) == npatch-1

    inertia = np.array([np.sum((xyz[patches==i] - cen[i])**2) for i in range(npatch)])
    sizes = np.array([np.mean((xyz[patches==i] - cen[i])**2) for i in range(npatch)])**0.5
    sizes *= 180. / np.pi * 60.  # convert to arcmin
    counts = np.array([np.sum(patches==i) for i in range(npatch)])

    print('With alternate algorithm:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    print('mean size = ',np.mean(sizes))
    print('rms size = ',np.std(sizes))
    assert np.sum(inertia) < 200.  # Total shouldn't increase much.  (And often decreases.)
    assert np.std(inertia) < 0.15 * np.mean(inertia)  # rms should be even smaller here.
    assert np.std(sizes) < 0.1 * np.mean(sizes)  # This is only a little bit smaller.

    # This doesn't keep the counts as equal as the standard algorithm.
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

    # Finally, use a field with lots of top level cells to check the other branch in
    # InitializeCenters.
    field = cat.getNField(min_top=10)
    t0 = time.time()
    patches, cen = field.run_kmeans(npatch)
    t1 = time.time()
    assert len(patches) == cat.ntot
    assert min(patches) == 0
    assert max(patches) == npatch-1

    inertia = np.array([np.sum((xyz[patches==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(patches==i) for i in range(npatch)])

    # This doesn't give as good an initialization, so these are a bit worse usually.
    print('With min_top=10:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    assert np.sum(inertia) < 210.
    assert np.std(inertia) < 0.4 * np.mean(inertia)  # I've seen over 0.3 x mean here.
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))
@timer
def test_zero_weight():
    # Based on test_radec, but where many galaxies have w=0.
    # There used to be a bug where w=0 objects were not assigned to any patch.
    ngal = 10000
    s = 10.
    rng = np.random.RandomState(8675309)
    x = rng.normal(0,s, (ngal,) )
    y = rng.normal(0,s, (ngal,) ) + 100  # Put everything at large y, so smallish angle on sky
    z = rng.normal(0,s, (ngal,) )
    # Make most of the weights exactly zero.
    w = np.zeros(ngal)
    w[rng.choice(range(ngal), ngal//10, replace=False)] = 1.
    ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z)
    cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w,
                           keep_zero_weight=True)
    treecorr.set_omp_threads(1)
    npatch = 16
    field = cat.getNField()
    t0 = time.time()
    p, cen = field.run_kmeans(npatch)
    t1 = time.time()
    assert len(p) == cat.ntot
    assert min(p) == 0
    assert max(p) == npatch-1
    print('w>0 patches = ',np.unique(p[w>0]))
    print('w==0 patches = ',np.unique(p[w==0]))
    assert set(p[w>0]) == set(p[w==0])

@timer
def test_catalog_sphere():
    # This follows the same path as test_radec, but using the Catalog API to run kmeans.
    ngal = 100000
    s = 10.
    rng = np.random.RandomState(8675309)
    x = rng.normal(0,s, (ngal,) )
    y = rng.normal(0,s, (ngal,) ) + 100  # Put everything at large y, so smallish angle on sky
    z = rng.normal(0,s, (ngal,) )
    w = rng.random_sample(ngal)
    ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z)
    print('minra = ',np.min(ra) * coord.radians / coord.degrees)
    print('maxra = ',np.max(ra) * coord.radians / coord.degrees)
    print('mindec = ',np.min(dec) * coord.radians / coord.degrees)
    print('maxdec = ',np.max(dec) * coord.radians / coord.degrees)
    npatch = 111
    cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch)
    t0 = time.time()
    p = cat.patch
    cen = cat.patch_centers
    t1 = time.time()
    assert len(p) == cat.ntot
    assert min(p) == 0
    assert max(p) == npatch-1

    xyz = np.array([cat.x, cat.y, cat.z]).T
    inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(w[p==i]) for i in range(npatch)])

    print('With standard algorithm:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    assert np.sum(inertia) < 5300.
    assert np.std(inertia) < 0.3 * np.mean(inertia)  # rms is usually small compared to the mean

    # With weights, these aren't actually all that similar.  The range of counts is
    # huge.  I think because it varies whether high weight points happen to be near the
    # edges or middles of patches, so the total weight varies when you target having the
    # inertias be relatively similar.
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

    # Check the alternate algorithm.  rms inertia should be lower.
    cat2 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w,
                            npatch=npatch, kmeans_alt=True)
    t0 = time.time()
    p = cat2.patch
    cen = cat2.patch_centers
    t1 = time.time()
    assert len(p) == cat2.ntot
    assert min(p) == 0
    assert max(p) == npatch-1

    inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(w[p==i]) for i in range(npatch)])

    print('With alternate algorithm:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean inertia = ',np.mean(inertia))
    print('rms inertia = ',np.std(inertia))
    assert np.sum(inertia) < 5300.
    assert np.std(inertia) < 0.1 * np.mean(inertia)  # rms should be even smaller here.

    # This doesn't keep the counts as equal as the standard algorithm.
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))
@timer
def test_catalog_3d():
    # With ra, dec, r, the Catalog API should only do patches using RA, Dec.
    ngal = 100000
    s = 10.
    rng = np.random.RandomState(8675309)
    x = rng.normal(0,s, (ngal,) )
    y = rng.normal(0,s, (ngal,) ) + 100  # Put everything at large y, so smallish angle on sky
    z = rng.normal(0,s, (ngal,) )
    w = rng.random_sample(ngal)
    ra, dec, r = coord.CelestialCoord.xyz_to_radec(x,y,z, return_r=True)
    print('minra = ',np.min(ra) * coord.radians / coord.degrees)
    print('maxra = ',np.max(ra) * coord.radians / coord.degrees)
    print('mindec = ',np.min(dec) * coord.radians / coord.degrees)
    print('maxdec = ',np.max(dec) * coord.radians / coord.degrees)
    npatch = 111
    cat = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w,
                           npatch=npatch)
    t0 = time.time()
    p = cat.patch
    cen = cat.patch_centers
    t1 = time.time()
    assert len(p) == cat.ntot
    assert min(p) == 0
    assert max(p) == npatch-1

    # Check the returned centers against a direct calculation.
    xyz = np.array([cat.x/cat.r, cat.y/cat.r, cat.z/cat.r]).T
    print('cen = ',cen)
    print('xyz = ',xyz)
    direct_cen = np.array([np.average(xyz[p==i], axis=0, weights=w[p==i]) for i in range(npatch)])
    direct_cen /= np.sqrt(np.sum(direct_cen**2,axis=1)[:,np.newaxis])
    np.testing.assert_allclose(cen, direct_cen, atol=2.e-3)

    inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(w[p==i]) for i in range(npatch)])
    print('With standard algorithm:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

    # Check the alternate algorithm.  rms inertia should be lower.
    cat2 = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w,
                            npatch=npatch, kmeans_alt=True)
    t0 = time.time()
    p = cat2.patch
    cen = cat2.patch_centers
    t1 = time.time()
    assert len(p) == cat2.ntot
    assert min(p) == 0
    assert max(p) == npatch-1

    inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)])
    counts = np.array([np.sum(w[p==i]) for i in range(npatch)])
    print('With alternate algorithm:')
    print('time = ',t1-t0)
    print('total inertia = ',np.sum(inertia))
    print('mean counts = ',np.mean(counts))
    print('min counts = ',np.min(counts))
    print('max counts = ',np.max(counts))

    # Check using patch_centers from (ra,dec,r) -> (ra,dec)
    cat3 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w,
                            patch_centers=cat2.patch_centers)
    np.testing.assert_array_equal(cat2.patch, cat3.patch)
    np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers)
# convert to arcmin counts = np.array([np.sum(patches==i) for", "cat.z]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape == (npatch, 3)", "# This follows the same path as test_radec, but using the Catalog API", "np.sqrt(np.sum(direct_cen**2,axis=1)[:,np.newaxis]) np.testing.assert_allclose(cen, direct_cen, atol=2.e-3) inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i", "print('rms counts => ',np.std(counts2)) print('total inertia => ',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1) #", "= np.array([x, y]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape ==", "t0 = time.time() patches, cen = field.run_kmeans(npatch, alt=True) t1 = time.time() assert len(patches)", "counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts)) @timer def", "i in range(npatch)]) print('rms counts => ',np.std(counts2)) print('total inertia => ',np.sum(inertia2)) assert np.sum(inertia2)", "particularly fast with N=10^5. n = 100 cat = treecorr.Catalog(ra=ra[:n], dec=dec[:n], ra_units='rad', dec_units='rad')", "== npatch-1 inertia1 = np.array([np.sum((xyz[p1==i] - cen1[i])**2) for i in range(npatch)]) counts1 =", "= ',np.min(dec) * coord.radians / coord.degrees) print('maxdec = ',np.max(dec) * coord.radians / coord.degrees)", "np.std(inertia) < 0.1 * np.mean(inertia) # rms should be even smaller here. print('mean", "ra_units='rad', dec_units='rad', w=w, npatch=npatch) t0 = time.time() p = cat.patch cen = cat.patch_centers", "coord.radians / coord.degrees) print('maxra = ',np.max(ra) * coord.radians / coord.degrees) print('mindec = ',np.min(dec)", "x = rng.normal(0,s, (ngal,) ) y = rng.normal(0,s, (ngal,) ) z = rng.normal(0,s,", "lower. 
cat2 = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w, npatch=npatch, kmeans_alt=True) t0 =", "Use higher max_iter, since random isn't a great initialization. p2, cen2 = field.run_kmeans(npatch,", "* coord.radians / coord.degrees) npatch = 111 cat = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad',", "field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape == (npatch, 2) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1))", "this with fewer points though, since it's not particularly fast with N=10^5. n", "= ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_init_random():", "= field.run_kmeans(npatch) t1 = time.time() print('patches = ',np.unique(patches)) assert len(patches) == cat.ntot assert", "in range(npatch)]) print('With alternate algorithm:') print('time = ',t1-t0) print('total inertia = ',np.sum(inertia)) print('mean", "counts = np.array([np.sum(patches==i) for i in range(npatch)]) print('With alternate algorithm:') print('time = ',t1-t0)", "galaxies have w=0. # There used to be a bug where w=0 objects", "np.mean(inertia) # rms is usually < 0.2 * mean assert np.std(sizes) < 0.1", "# in practice NField will alsmost always be the kind of Field used.", "sure we force some of the shuffle bits in InitializeCenters # to happen.", "ra_units='rad', dec_units='rad', w=w, patch_centers=cat2.patch_centers) np.testing.assert_array_equal(cat2.patch, cat3.patch) np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) if __name__ == '__main__': test_dessv()", "# Use higher max_iter, since random isn't a great initialization. 
p2, cen2 =", "assert len(p) == cat.ntot assert min(p) == 0 assert max(p) == npatch-1 xy", "time.time() assert len(p) == cat2.ntot assert min(p) == 0 assert max(p) == npatch-1", "np.testing.assert_array_equal(cat2.patch, cat3.patch) np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) @timer def test_catalog_3d(): # With ra, dec, r, the", "fitsio is not installed') return #treecorr.set_omp_threads(1); get_from_wiki('des_sv.fits') file_name = os.path.join('data','des_sv.fits') cat = treecorr.Catalog(file_name,", "inertia = ',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 210. assert np.std(inertia)", "0.3 x mean here. print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max", "treecorr.Catalog(x=x, y=y, w=w, g1=g1, g2=g2, k=k) npatch = 111 field = cat.getGField() t0", "treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch, kmeans_alt=True) t0 = time.time() p = cat2.patch", "= ',t1-t0) print('inertia = ',inertia) print('counts = ',counts) print('total inertia = ',np.sum(inertia)) print('mean", "= field.run_kmeans(npatch, alt=True) t1 = time.time() assert len(patches) == cat.ntot assert min(patches) ==", "# with or without modification, are permitted provided that the following # conditions", "= ',np.max(ra) * coord.radians / coord.degrees) print('mindec = ',np.min(dec) * coord.radians / coord.degrees)", "assert np.std(inertia) < 0.15 * np.mean(inertia) # rms should be even smaller here.", "source and binary forms, # with or without modification, are permitted provided that", "min(p1) == 0 assert max(p1) == npatch-1 inertia1 = np.array([np.sum((xyz[p1==i] - cen1[i])**2) for", "max(patches) == npatch-1 # Check the returned center to a direct calculation. 
xyz", "= rng.random_sample(ngal) ra, dec, r = coord.CelestialCoord.xyz_to_radec(x,y,z, return_r=True) print('minra = ',np.min(ra) * coord.radians", "= treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch) t0 = time.time() p = cat.patch", "time.time() p, c = field.run_kmeans(npatch) t1 = time.time() print('patches = ',np.unique(p)) assert len(p)", "= 111 cat = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w, npatch=npatch) t0 =", "rng = np.random.RandomState(8675309) x = rng.normal(0,s, (ngal,) ) y = rng.normal(0,s, (ngal,) )", "smaller here. print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts =", "in spherical print('spher with init=random') ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) cat = treecorr.Catalog(ra=ra, dec=dec,", "field.run_kmeans(npatch) t1 = time.time() print('patches = ',np.unique(p)) assert len(p) == cat.ntot assert min(p)", "assert min(p) == 0 assert max(p) == npatch-1 # Check the returned center", "= os.path.join('data','des_sv.fits') cat = treecorr.Catalog(file_name, ra_col='ra', dec_col='dec', ra_units='deg', dec_units='deg') # Use an odd", "counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts)) # Check", "= np.array([np.sum(p2==i) for i in range(npatch)]) print('rms counts => ',np.std(counts2)) print('total inertia =>", "print('time = ',t1-t0) print('inertia = ',inertia) print('counts = ',counts) print('total inertia = ',np.sum(inertia))", "fewer points though, since it's not particularly fast with N=10^5. 
n = 100", ") w = rng.random_sample(ngal) ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) print('minra = ',np.min(ra) * coord.radians", "* mean assert np.std(sizes) < 0.1 * np.mean(sizes) # sizes have even less", "for i in range(npatch)]) print('With standard algorithm:') print('time = ',t1-t0) print('inertia = ',inertia)", "Use an odd number to make sure we force some of the shuffle", "== set(p[w==0]) @timer def test_catalog_sphere(): # This follows the same path as test_radec,", "w=w, patch_centers=cat2.patch_centers) np.testing.assert_array_equal(cat2.patch, cat3.patch) np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) @timer def test_catalog_3d(): # With ra, dec,", "print('spher with init=random') ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad')", "even # if the user doesn't have fitsio installed. # In addition, we", "with other fields besides NField, even though # in practice NField will alsmost", "np.std(sizes) < 0.15 * np.mean(sizes) print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts))", "/ coord.degrees) print('maxdec = ',np.max(dec) * coord.radians / coord.degrees) cat = treecorr.Catalog(ra=ra, dec=dec,", "= time.time() print('patches = ',np.unique(patches)) assert len(patches) == cat.ntot assert min(patches) == 0", "print('total inertia = ',np.sum(inertia1)) # Now run the normal way p2, cen2 =", "p, cen = field.run_kmeans(npatch) t1 = time.time() print('patches = ',np.unique(p)) assert len(p) ==", "above, but using x,y,z positions. ngal = 100000 s = 1. 
rng =", "patch_centers=cat2.patch_centers) np.testing.assert_array_equal(cat2.patch, cat3.patch) np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) if __name__ == '__main__': test_dessv() test_radec() test_3d() test_2d()", "patches, cen = field.run_kmeans(npatch, alt=True) t1 = time.time() assert len(patches) == cat.ntot assert", "np.array([cat.x, cat.y, cat.z]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape ==", "xy = np.array([x, y]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape", "with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=0, init='kmeans++') with", "from test_helper import get_from_wiki, CaptureLog, assert_raises, profile, timer @timer def test_dessv(): try: import", "= ',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 5300. assert np.std(inertia) <", "alsmost always be the kind of Field used. 
ngal = 100000 s =", "= np.array([np.sum(w[p==i][:,None] * (xyz[p==i] - cen[i])**2) for i in range(npatch)]) counts = np.array([np.sum(w[p==i])", "10 field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape == (npatch, 3)", "(ra,dec,r) -> (ra,dec) cat3 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, patch_centers=cat2.patch_centers) np.testing.assert_array_equal(cat2.patch, cat3.patch)", "cat3 = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, patch_centers=cat2.patch_centers) np.testing.assert_array_equal(cat2.patch, cat3.patch) np.testing.assert_array_equal(cat2.patch_centers, cat3.patch_centers) if", "assert_raises(ValueError): field.kmeans_initialize_centers(npatch=-100, init='kmeans++') # Should be valid to give npatch = 1, although", "ra, dec, r, the Catalog API should only do patches using RA, Dec.", "in range(npatch)]) counts = np.array([np.sum(w[p==i]) for i in range(npatch)]) print('With standard algorithm:') print('time", "(ngal,) ) w = rng.random_sample(ngal) ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) print('minra = ',np.min(ra) *", "# Should be valid to give npatch = 1, although not particularly useful.", "a great initialization. p2, cen2 = field.run_kmeans(npatch, init='random', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] -", "patch. # (This is stupid of course, but check that it doesn't fail.)", "i in range(npatch)]) counts2 = np.array([np.sum(p2==i) for i in range(npatch)]) print('rms counts =>", "== cat.ntot assert min(p) == 0 assert max(p) == npatch-1 xyz = np.array([x,", "source code must retain the above copyright notice, this # list of conditions,", "should be even smaller here. 
print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts))", "= treecorr.Catalog(x=x, y=y, z=z) xyz = np.array([x, y, z]).T # Skip the refine_centers", "(ngal,) ) y = rng.normal(0,s, (ngal,) ) z = rng.normal(0,s, (ngal,) ) cat", "assert max(p) == npatch-1 print('w>0 patches = ',np.unique(p[w>0])) print('w==0 patches = ',np.unique(p[w==0])) assert", "higher max_iter, since random isn't a great initialization. p2, cen2 = field.run_kmeans(npatch, init='random',", "it doesn't fail.) # Do this with fewer points though, since it's not", "0 assert max(patches) == npatch-1 inertia = np.array([np.sum((xyz[patches==i] - cen[i])**2) for i in", "dec, r = coord.CelestialCoord.xyz_to_radec(x,y,z, return_r=True) print('minra = ',np.min(ra) * coord.radians / coord.degrees) print('maxra", "assert max(patches) == npatch-1 # Check the returned center to a direct calculation.", "x = rng.normal(0,s, (ngal,) ) y = rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal)", "assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=0, init='kmeans++') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=-100, init='kmeans++') # Should", "npatch-1 inertia = np.array([np.sum((xyz[patches==i] - cen[i])**2) for i in range(npatch)]) sizes = np.array([np.mean((xyz[patches==i]", "as good an initialization, so these are a bit worse usually. print('With min_top=10:')", "',np.sum(inertia)) print('mean inertia = ',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 200.", "= ',np.std(inertia)) assert np.sum(inertia) < 5300. 
assert np.std(inertia) < 0.3 * np.mean(inertia) #", "= ',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_init_random(): # Test the init=random", "= ',np.max(counts)) # Should be the same thing with ra, dec, ra ra,", "# convert to arcmin counts = np.array([np.sum(patches==i) for i in range(npatch)]) # This", "assert np.sum(inertia) < 33000. assert np.std(inertia) < 0.3 * np.mean(inertia) # rms is", "fast with N=10^5. n = 100 cat = treecorr.Catalog(ra=ra[:n], dec=dec[:n], ra_units='rad', dec_units='rad') field", "assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal+1, init='random') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=0, init='random') with assert_raises(ValueError):", "== npatch-1 xyz = np.array([x, y, z]).T inertia = np.array([np.sum(w[p==i][:,None] * (xyz[p==i] -", "treecorr.Catalog(x=x, y=y) xy = np.array([x, y]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'random')", "number of patches as galaxies, each galaxy gets a patch. # (This is", "def test_init_random(): # Test the init=random option ngal = 100000 s = 1.", "have fitsio installed. # In addition, we add weights to make sure that", "= coord.CelestialCoord.xyz_to_radec(x,y,z) r = (x**2 + y**2 + z**2)**0.5 cat2 = treecorr.Catalog(ra=ra, dec=dec,", "= np.array([x, y, z]).T # Skip the refine_centers step. print('3d with init=kmeans++') npatch", "# I've seen over 0.3 x mean here. 
print('mean counts = ',np.mean(counts)) print('min", "in range(npatch)]) print('With standard algorithm:') print('time = ',t1-t0) print('inertia = ',inertia) print('counts =", "print('patches = ',np.unique(p1)) assert len(p1) == cat.ntot assert min(p1) == 0 assert max(p1)", "= treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w, npatch=npatch, kmeans_alt=True) t0 = time.time() p =", "check the other branch in # InitializeCenters. field = cat.getNField(min_top=10) t0 = time.time()", "be the kind of Field used. ngal = 100000 s = 1. rng", "test_catalog_sphere(): # This follows the same path as test_radec, but using the Catalog", "= ',np.std(sizes)) assert np.sum(inertia) < 200. # This is specific to this particular", "branch in # InitializeCenters. field = cat.getNField(min_top=10) t0 = time.time() p, cen =", ") z = rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) + 1 cat =", "treecorr.Catalog(file_name, ra_col='ra', dec_col='dec', ra_units='deg', dec_units='deg') # Use an odd number to make sure", "0.4 * np.mean(inertia) # I've seen over 0.3 x mean here. assert np.std(sizes)", "n = 100 cat = treecorr.Catalog(ra=ra[:n], dec=dec[:n], ra_units='rad', dec_units='rad') field = cat.getNField() cen_n", "the disclaimer given in the accompanying LICENSE # file. # 2. Redistributions in", "field = cat.getNField() cen_n = field.kmeans_initialize_centers(npatch=n, init='kmeans++') p_n = field.kmeans_assign_patches(cen_n) np.testing.assert_equal(sorted(p_n), list(range(n))) @timer", "(xyz[p==i] - cen[i])**2) for i in range(npatch)]) counts = np.array([np.sum(w[p==i]) for i in", "not assigned to any patch. ngal = 10000 s = 10. rng =", "= cat.patch cen = cat.patch_centers t1 = time.time() print('patches = ',np.unique(p)) assert len(p)", "p, cen = field.run_kmeans(npatch) t1 = time.time() assert len(p) == cat.ntot assert min(p)", "# InitializeCenters. 
field = cat.getNField(min_top=10) t0 = time.time() p, cen = field.run_kmeans(npatch) t1", "assert np.sum(inertia2) < np.sum(inertia1) # Repeat in 2d print('2d with init=kmeans++') cat =", "to run kmeans. ngal = 100000 s = 10. rng = np.random.RandomState(8675309) x", "print('total inertia => ',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1) # Use a field with", "np.sum(inertia1) with assert_raises(ValueError): field.run_kmeans(npatch, init='invalid') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch, init='invalid') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2, init='random')", "aren't actually all that similar. The range is more than a # factor", "using RA, Dec. ngal = 100000 s = 10. rng = np.random.RandomState(8675309) x", "top level cells print('3d with init=random, min_top=10') field = cat.getNField(min_top=10) cen1 = field.kmeans_initialize_centers(npatch,", "',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 5300. assert np.std(inertia) < 0.4", "installed. # In addition, we add weights to make sure that works. ngal", "is only a little bit smaller. # This doesn't keep the counts as", "the normal way # Use higher max_iter, since random isn't a great initialization.", "dessv test, since fitsio is not installed') return #treecorr.set_omp_threads(1); get_from_wiki('des_sv.fits') file_name = os.path.join('data','des_sv.fits')", "33000. assert np.std(inertia) < 0.1 * np.mean(inertia) # rms should be even smaller", "# Repeat in spherical print('spher with init=random') ra, dec = coord.CelestialCoord.xyz_to_radec(x,y,z) cat =", "other branch in # InitializeCenters. field = cat.getNField(min_top=10) t0 = time.time() p, cen", "this value and the rms size, which should also be quite small. 
inertia", "xyz = np.array([cat.x, cat.y, cat.z]).T field = cat.getNField() cen1 = field.kmeans_initialize_centers(npatch, 'random') assert", "i in range(npatch)]) print('With standard algorithm:') print('time = ',t1-t0) print('inertia = ',inertia) print('counts", "weights=w[p==i]) for i in range(npatch)]) direct_cen /= np.sqrt(np.sum(direct_cen**2,axis=1)[:,np.newaxis]) np.testing.assert_allclose(cen, direct_cen, atol=2.e-3) inertia =", "= treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad', dec_units='rad', w=w, npatch=npatch) t0 = time.time() p =", "use a field with lots of top level cells to check the other", "range(npatch)]) counts = np.array([np.sum(w[p==i]) for i in range(npatch)]) # This doesn't give as", "2. Redistributions in binary form must reproduce the above copyright notice, # this", "* (xy[p==i] - cen[i])**2) for i in range(npatch)]) counts = np.array([np.sum(w[p==i]) for i", "cen1 = field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape == (npatch, 2) p1 = field.kmeans_assign_patches(cen1) print('patches", "calculation. xyz = np.array([cat.x, cat.y, cat.z]).T direct_cen = np.array([np.average(xyz[p==i], axis=0, weights=w[p==i]) for i", "npatch = 111 field = cat.getNField() t0 = time.time() p, cen = field.run_kmeans(npatch)", "the returned center to a direct calculation. 
xyz = np.array([cat.x, cat.y, cat.z]).T direct_cen", "coord.radians / coord.degrees) npatch = 111 cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='rad', dec_units='rad', w=w,", "= ',xyz) direct_cen = np.array([np.average(xyz[p==i], axis=0, weights=w[p==i]) for i in range(npatch)]) direct_cen /=", "= field.run_kmeans(npatch, init='random', max_iter=1000) inertia2 = np.array([np.sum((xyz[p2==i] - cen2[i])**2) for i in range(npatch)])", "',np.unique(p1)) assert len(p1) == cat.ntot assert min(p1) == 0 assert max(p1) == npatch-1", "in range(npatch)]) print('counts = ',counts1) print('rms counts = ',np.std(counts1)) print('total inertia = ',np.sum(inertia1))", "all have similar number of points. Nothing is required here though. print('mean counts", "* np.mean(inertia) # rms is usually < 0.2 * mean assert np.std(sizes) <", "print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 200. # Total shouldn't increase much.", "inertia => ',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1) # Repeat in spherical print('spher with", "field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape == (npatch, 3) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1))", "counts = ',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_radec(): # Very similar", "in range(npatch)]) counts = np.array([np.sum(w[p==i]) for i in range(npatch)]) # This doesn't give", "max(p) == npatch-1 xy = np.array([x, y]).T inertia = np.array([np.sum(w[p==i][:,None] * (xy[p==i] -", "cat.getNField() cen_n = field.kmeans_initialize_centers(npatch=n, init='random') p_n = field.kmeans_assign_patches(cen_n) np.testing.assert_equal(sorted(p_n), list(range(n))) @timer def test_init_kmpp():", "This doesn't give as good an initialization, so these are a bit worse", "def test_3d(): # Like the above, but using x,y,z positions. ngal = 100000", "',np.mean(inertia)) print('rms inertia = ',np.std(inertia)) assert np.sum(inertia) < 200. 
# This is specific", "smaller here. assert np.std(sizes) < 0.1 * np.mean(sizes) # This is only a", "in 2d print('2d with init=kmeans++') cat = treecorr.Catalog(x=x, y=y) xy = np.array([x, y]).T", "npatch=npatch, kmeans_alt=True) t0 = time.time() p = cat2.patch cen = cat2.patch_centers t1 =", "axis=0, weights=w[p==i]) for i in range(npatch)]) direct_cen /= np.sqrt(np.sum(direct_cen**2,axis=1)[:,np.newaxis]) np.testing.assert_allclose(cen, direct_cen, atol=2.e-3) inertia", "assert_raises(ValueError): field.run_kmeans(npatch, init='invalid') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch, init='invalid') with assert_raises(ValueError): field.kmeans_initialize_centers(npatch=ngal*2, init='random') with assert_raises(ValueError):", "= np.array([np.sum((xy[p1==i] - cen1[i])**2) for i in range(npatch)]) counts1 = np.array([np.sum(p1==i) for i", "',np.min(counts)) print('max counts = ',np.max(counts)) # Check the alternate algorithm. rms inertia should", "algorithm. print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts = ',np.max(counts))", "Check using patch_centers from (ra,dec) -> (ra,dec,r) cat3 = treecorr.Catalog(ra=ra, dec=dec, r=r, ra_units='rad',", "way # Use higher max_iter, since random isn't a great initialization. p2, cen2", "z = rng.normal(0,s, (ngal,) ) w = rng.random_sample(ngal) + 1 cat = treecorr.Catalog(x=x,", "with lots of top level cells print('3d with init=random, min_top=10') field = cat.getNField(min_top=10)", "0 assert max(p1) == npatch-1 inertia1 = np.array([np.sum((xy[p1==i] - cen1[i])**2) for i in", "',np.max(counts)) # Check the alternate algorithm. rms inertia should be lower. t0 =", "be relatively similar. 
print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts)) print('max counts", "field.kmeans_initialize_centers(npatch, 'random') assert cen1.shape == (npatch, 2) p1 = field.kmeans_assign_patches(cen1) print('patches = ',np.unique(p1))", "in InitializeCenters # to happen. npatch = 43 field = cat.getNField(max_top=5) t0 =", "= ',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_3d(): # Like the above,", "on test_ra_dec, but where many galaxies have w=0. # There used to be", "same path as test_radec, but using the Catalog API to run kmeans. ngal", "cat = treecorr.Catalog(ra=ra[:n], dec=dec[:n], ra_units='rad', dec_units='rad') field = cat.getNField() cen_n = field.kmeans_initialize_centers(npatch=n, init='kmeans++')", "= ',np.min(counts)) print('max counts = ',np.max(counts)) @timer def test_2d(): # Like the above,", "with init=kmeans++') cat = treecorr.Catalog(x=x, y=y) xy = np.array([x, y]).T field = cat.getNField()", "range(npatch)]) print('counts = ',counts1) print('rms counts = ',np.std(counts1)) print('total inertia = ',np.sum(inertia1)) #", "= field.kmeans_initialize_centers(npatch=n, init='random') p_n = field.kmeans_assign_patches(cen_n) np.testing.assert_equal(sorted(p_n), list(range(n))) @timer def test_init_kmpp(): # Test", "over 0.3 x mean here. print('mean counts = ',np.mean(counts)) print('min counts = ',np.min(counts))", "print('total inertia => ',np.sum(inertia2)) assert np.sum(inertia2) < np.sum(inertia1) # Repeat in spherical print('spher", "fitsio installed. 
# In addition, we add weights to make sure that works.", "in range(npatch)]) counts = np.array([np.sum(w[p2==i]) for i in range(npatch)]) print('time = ',t1-t0) print('total", "= time.time() patches, cen = field.run_kmeans(npatch) t1 = time.time() assert len(patches) == cat.ntot", "= field.kmeans_initialize_centers(npatch, 'kmeans++') assert cen1.shape == (npatch, 3) p1 = field.kmeans_assign_patches(cen1) print('patches =", ") w = rng.random_sample(ngal) + 1 cat = treecorr.Catalog(x=x, y=y, z=z, w=w) npatch", "assert np.sum(inertia) < 5300. assert np.std(inertia) < 0.4 * np.mean(inertia) # I've seen" ]
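The patch assignment and inertia bookkeeping that these tests check can be sketched without TreeCorr itself. The following is a minimal NumPy analogue (a plain Lloyd iteration, not TreeCorr's tree-based implementation); `kmeans_assign` and `total_inertia` are illustrative helper names, not TreeCorr API:

```python
import numpy as np

def kmeans_assign(xyz, cen):
    # Assign each point to its nearest center (the invariant that
    # kmeans_assign_patches is tested against above).
    d2 = np.sum((xyz[:, None, :] - cen[None, :, :])**2, axis=-1)
    return np.argmin(d2, axis=1)

def total_inertia(xyz, p, cen, npatch):
    # Sum of squared distances to the assigned center, totaled over patches.
    return sum(np.sum((xyz[p == i] - cen[i])**2) for i in range(npatch))

rng = np.random.RandomState(8675309)
xyz = rng.normal(0., 1., size=(1000, 3))
npatch = 10

# init='random' analogue: pick npatch distinct data points as starting centers.
cen = xyz[rng.choice(len(xyz), npatch, replace=False)]
p = kmeans_assign(xyz, cen)
inertia1 = total_inertia(xyz, p, cen, npatch)

# A few Lloyd update steps (recenter, then reassign) never increase inertia.
for _ in range(10):
    cen = np.array([xyz[p == i].mean(axis=0) if np.any(p == i) else cen[i]
                    for i in range(npatch)])
    p = kmeans_assign(xyz, cen)
inertia2 = total_inertia(xyz, p, cen, npatch)

assert len(p) == len(xyz)
assert np.all((p >= 0) & (p < npatch))
assert inertia2 <= inertia1
```

Total inertia is the quantity both the standard and alternate algorithms above are judged by, which is why each test computes it for the initial assignment and again after `run_kmeans` and asserts it went down.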
'''
    This file is part of PM4Py (More Info: https://pm4py.fit.fraunhofer.de).

    PM4Py is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    PM4Py is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with PM4Py.  If not, see <https://www.gnu.org/licenses/>.
'''
from pm4py.objects.log.obj import EventLog, EventStream, Trace
from pm4py.util import xes_constants as xes


def sort_timestamp_trace(trace, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False):
    """
    Sort a trace based on timestamp key

    Parameters
    -----------
    trace
        Trace
    timestamp_key
        Timestamp key
    reverse_sort
        If true, reverses the direction in which the sort is done (ascending)

    Returns
    -----------
    trace
        Sorted trace
    """
    events = sorted(trace._list, key=lambda x: x[timestamp_key], reverse=reverse_sort)
    new_trace = Trace(events, attributes=trace.attributes)
    return new_trace


def sort_timestamp_stream(event_log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False):
    """
    Sort an event stream based on timestamp key

    Parameters
    -----------
    event_log
        Event stream
    timestamp_key
        Timestamp key
    reverse_sort
        If true, reverses the direction in which the sort is done (ascending)

    Returns
    -----------
    stream
        Sorted stream
    """
    events = sorted(event_log._list, key=lambda x: x[timestamp_key], reverse=reverse_sort)
    new_stream = EventStream(events, attributes=event_log.attributes, extensions=event_log.extensions,
                             omni_present=event_log.omni_present, classifiers=event_log.classifiers,
                             properties=event_log.properties)
    return new_stream


def sort_timestamp_log(event_log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False):
    """
    Sort a log based on timestamp key

    Parameters
    -----------
    event_log
        Event log
    timestamp_key
        Timestamp key
    reverse_sort
        If true, reverses the direction in which the sort is done (ascending)

    Returns
    -----------
    log
        Sorted log
    """
    new_log = EventLog(attributes=event_log.attributes, extensions=event_log.extensions,
                       omni_present=event_log.omni_present, classifiers=event_log.classifiers,
                       properties=event_log.properties)
    for trace in event_log:
        if len(trace) > 0:
            new_log.append(sort_timestamp_trace(trace, timestamp_key=timestamp_key,
                                                reverse_sort=reverse_sort))
    new_log._list.sort(key=lambda x: x[0][timestamp_key], reverse=reverse_sort)
    return new_log


def sort_timestamp(log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False):
    """
    Sort a log based on timestamp key

    Parameters
    -----------
    log
        Trace/Event log
    timestamp_key
        Timestamp key
    reverse_sort
        If true, reverses the direction in which the sort is done (ascending)

    Returns
    -----------
    log
        Sorted Trace/Event log
    """
    if type(log) is EventLog:
        return sort_timestamp_log(log, timestamp_key=timestamp_key, reverse_sort=reverse_sort)
    return sort_timestamp_stream(log, timestamp_key=timestamp_key, reverse_sort=reverse_sort)


def sort_lambda_log(event_log, sort_function, reverse=False):
    """
    Sort a log based on a lambda expression

    Parameters
    ------------
    event_log
        Log
    sort_function
        Sort function
    reverse
        Boolean (sort by reverse order)

    Returns
    ------------
    new_log
        Sorted log
    """
    traces = sorted(event_log._list, key=sort_function, reverse=reverse)
    new_log = EventLog(traces, attributes=event_log.attributes, extensions=event_log.extensions,
                       omni_present=event_log.omni_present, classifiers=event_log.classifiers,
                       properties=event_log.properties)
    return new_log


def sort_lambda_stream(event_log, sort_function, reverse=False):
    """
    Sort a stream based on a lambda expression

    Parameters
    ------------
    event_log
        Stream
    sort_function
        Sort function
    reverse
        Boolean (sort by reverse order)

    Returns
    ------------
    stream
        Sorted stream
    """
    events = sorted(event_log._list, key=sort_function, reverse=reverse)
    new_stream = EventStream(events, attributes=event_log.attributes, extensions=event_log.extensions,
                             omni_present=event_log.omni_present, classifiers=event_log.classifiers,
                             properties=event_log.properties)
    return new_stream


def sort_lambda(log, sort_function, reverse=False):
    """
    Sort a log based on a lambda expression

    Parameters
    ------------
    log
        Log
    sort_function
        Sort function
    reverse
        Boolean (sort by reverse order)

    Returns
    ------------
    log
        Sorted log
    """
    if type(log) is EventLog:
        return sort_lambda_log(log, sort_function, reverse=reverse)
    return sort_lambda_stream(log, sort_function, reverse=reverse)
See the GNU General Public License for more details.", "trace \"\"\" events = sorted(trace._list, key=lambda x: x[timestamp_key], reverse=reverse_sort) new_trace = Trace(events, attributes=trace.attributes)", "log \"\"\" if type(log) is EventLog: return sort_lambda_log(log, sort_function, reverse=reverse) return sort_lambda_stream(log, sort_function,", "Sort function reverse Boolean (sort by reverse order) Returns ------------- log Sorted log", "sort is done (ascending) Returns ----------- event_log Sorted event log \"\"\" events =", "reverse=reverse_sort) new_stream = EventStream(events, attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) return new_stream def sort_timestamp_log(event_log,", "----------- event_log Event log timestamp_key Timestamp key reverse_sort If true, reverses the direction", "----------- log Sorted log \"\"\" new_log = EventLog(attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) for", "with PM4Py. If not, see <https://www.gnu.org/licenses/>. ''' from pm4py.objects.log.obj import EventLog, Trace, EventStream", "----------- event_log Log timestamp_key Timestamp key reverse_sort If true, reverses the direction in", "warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General", "License as published by the Free Software Foundation, either version 3 of the", "def sort_lambda_stream(event_log, sort_function, reverse=False): \"\"\" Sort a stream based on a lambda expression", "extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) for trace in event_log: if trace: new_log.append(sort_timestamp_trace(trace, timestamp_key=timestamp_key, reverse_sort=reverse_sort))", "classifiers=event_log.classifiers, properties=event_log.properties) return new_stream def sort_timestamp_log(event_log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a log based", "on a lambda expression Parameters ------------ event_log Stream sort_function Sort function reverse Boolean", "a log based on lambda expression Parameters ------------- log Log sort_function Sort function", "reverses the direction in which the sort is done (ascending) Returns ----------- trace", "return new_stream def sort_lambda(log, sort_function, reverse=False): \"\"\" Sort a log based on lambda", "the Free Software Foundation, either version 3 of the License, or (at your", "for trace in event_log: if trace: new_log.append(sort_timestamp_trace(trace, timestamp_key=timestamp_key, reverse_sort=reverse_sort)) new_log._list.sort(key=lambda x: x[0][timestamp_key], reverse=reverse_sort)", "new_log = EventLog(attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) for trace in event_log: if trace:", "sort_timestamp(log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a log based on timestamp key Parameters -----------", "reverses the direction in which the sort is done (ascending) Returns ----------- log", "= Trace(events, attributes=trace.attributes) return new_trace def sort_timestamp_stream(event_log, 
timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort an event", "log based on lambda expression Parameters ------------- log Log sort_function Sort function reverse", "log Trace/Event log timestamp_key Timestamp key reverse_sort If true, reverses the direction in", "PURPOSE. See the GNU General Public License for more details. You should have", "sort is done (ascending) Returns ----------- log Sorted log \"\"\" new_log = EventLog(attributes=event_log.attributes,", "Boolean (sort by reverse order) Returns ------------- log Sorted log \"\"\" if type(log)", "published by the Free Software Foundation, either version 3 of the License, or", "omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) return new_stream def sort_timestamp_log(event_log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a log", "events = sorted(event_log._list, key=sort_function, reverse=reverse) new_stream = EventStream(events, attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties)", "order) Returns ------------ stream Sorted stream \"\"\" events = sorted(event_log._list, key=sort_function, reverse=reverse) new_stream", "\"\"\" new_log = EventLog(attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) for trace in event_log: if", "xes_constants as xes def sort_timestamp_trace(trace, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a trace based on", "reverse_sort=reverse_sort) def sort_lambda_log(event_log, sort_function, reverse=False): \"\"\" Sort a log based on a lambda", "on timestamp key Parameters ----------- trace Trace timestamp_key Timestamp key reverse_sort If true,", "properties=event_log.properties) return new_stream def sort_lambda(log, 
sort_function, reverse=False): \"\"\" Sort a log based on", "Parameters ----------- event_log Event log timestamp_key Timestamp key reverse_sort If true, reverses the", "trace based on timestamp key Parameters ----------- trace Trace timestamp_key Timestamp key reverse_sort", "extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) return new_stream def sort_timestamp_log(event_log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a", "Software Foundation, either version 3 of the License, or (at your option) any", "extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) return new_stream def sort_lambda(log, sort_function, reverse=False): \"\"\" Sort a", "------------ event_log Stream sort_function Sort function reverse Boolean (sort by reverse order) Returns", "based on a lambda expression Parameters ------------ event_log Log sort_function Sort function reverse", "reverse_sort=reverse_sort)) new_log._list.sort(key=lambda x: x[0][timestamp_key], reverse=reverse_sort) return new_log def sort_timestamp(log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort", "in which the sort is done (ascending) Returns ----------- log Sorted Trace/Event log", "Sort a trace based on timestamp key Parameters ----------- trace Trace timestamp_key Timestamp", "PM4Py is distributed in the hope that it will be useful, but WITHOUT", "Parameters ------------- log Log sort_function Sort function reverse Boolean (sort by reverse order)", "by the Free Software Foundation, either version 3 of the License, or (at", "the sort is done (ascending) Returns ----------- log Sorted Trace/Event log \"\"\" if", "return new_stream def sort_timestamp_log(event_log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a log based on timestamp", "under the terms of the GNU 
General Public License as published by the", "Sort a log based on lambda expression Parameters ------------- log Log sort_function Sort", "key Parameters ----------- event_log Log timestamp_key Timestamp key reverse_sort If true, reverses the", "event_log Event log timestamp_key Timestamp key reverse_sort If true, reverses the direction in", "file is part of PM4Py (More Info: https://pm4py.fit.fraunhofer.de). PM4Py is free software: you", "Boolean (sort by reverse order) Returns ------------ stream Sorted stream \"\"\" events =", "version 3 of the License, or (at your option) any later version. PM4Py", "from pm4py.util import xes_constants as xes def sort_timestamp_trace(trace, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a", "omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) for trace in event_log: if trace: new_log.append(sort_timestamp_trace(trace, timestamp_key=timestamp_key, reverse_sort=reverse_sort)) new_log._list.sort(key=lambda", "Returns ------------- log Sorted log \"\"\" if type(log) is EventLog: return sort_lambda_log(log, sort_function,", "omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) return new_stream def sort_lambda(log, sort_function, reverse=False): \"\"\" Sort a log", "Public License as published by the Free Software Foundation, either version 3 of", "for more details. 
You should have received a copy of the GNU General", "properties=event_log.properties) return new_stream def sort_timestamp_log(event_log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a log based on", "= sorted(trace._list, key=lambda x: x[timestamp_key], reverse=reverse_sort) new_trace = Trace(events, attributes=trace.attributes) return new_trace def", "an event log based on timestamp key Parameters ----------- event_log Event log timestamp_key", "timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort an event log based on timestamp key Parameters -----------", "the GNU General Public License as published by the Free Software Foundation, either", "import xes_constants as xes def sort_timestamp_trace(trace, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a trace based", "sorted(event_log._list, key=lambda x: x[timestamp_key], reverse=reverse_sort) new_stream = EventStream(events, attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties)", "on timestamp key Parameters ----------- log Trace/Event log timestamp_key Timestamp key reverse_sort If", "properties=event_log.properties) return new_log def sort_lambda_stream(event_log, sort_function, reverse=False): \"\"\" Sort a stream based on", "timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a trace based on timestamp key Parameters ----------- trace", "new_log def sort_timestamp(log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a log based on timestamp key", "based on timestamp key Parameters ----------- log Trace/Event log timestamp_key Timestamp key reverse_sort", "General Public License along with PM4Py. If not, see <https://www.gnu.org/licenses/>. 
''' from pm4py.objects.log.obj", "sort_timestamp_trace(trace, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a trace based on timestamp key Parameters -----------", "reverse_sort=False): \"\"\" Sort a trace based on timestamp key Parameters ----------- trace Trace", "General Public License as published by the Free Software Foundation, either version 3", "along with PM4Py. If not, see <https://www.gnu.org/licenses/>. ''' from pm4py.objects.log.obj import EventLog, Trace,", "sorted(trace._list, key=lambda x: x[timestamp_key], reverse=reverse_sort) new_trace = Trace(events, attributes=trace.attributes) return new_trace def sort_timestamp_stream(event_log,", "reverse order) Returns ------------- log Sorted log \"\"\" if type(log) is EventLog: return", "classifiers=event_log.classifiers, properties=event_log.properties) return new_stream def sort_lambda(log, sort_function, reverse=False): \"\"\" Sort a log based", "can redistribute it and/or modify it under the terms of the GNU General", "See the GNU General Public License for more details. You should have received", "<https://www.gnu.org/licenses/>. 
''' from pm4py.objects.log.obj import EventLog, Trace, EventStream from pm4py.util import xes_constants as", "sort is done (ascending) Returns ----------- log Sorted Trace/Event log \"\"\" if type(log)", "classifiers=event_log.classifiers, properties=event_log.properties) for trace in event_log: if trace: new_log.append(sort_timestamp_trace(trace, timestamp_key=timestamp_key, reverse_sort=reverse_sort)) new_log._list.sort(key=lambda x:", "expression Parameters ------------ event_log Stream sort_function Sort function reverse Boolean (sort by reverse", "Trace(events, attributes=trace.attributes) return new_trace def sort_timestamp_stream(event_log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort an event log", "direction in which the sort is done (ascending) Returns ----------- trace Sorted trace", "Free Software Foundation, either version 3 of the License, or (at your option)", "GNU General Public License as published by the Free Software Foundation, either version", "it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty", "Parameters ----------- trace Trace timestamp_key Timestamp key reverse_sort If true, reverses the direction", "Stream sort_function Sort function reverse Boolean (sort by reverse order) Returns ------------ stream", "log Sorted Trace/Event log \"\"\" if type(log) is EventLog: return sort_timestamp_log(log, timestamp_key=timestamp_key, reverse_sort=reverse_sort)", "without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.", "reverse Boolean (sort by reverse order) Returns ------------ stream Sorted stream \"\"\" events", "def sort_lambda(log, sort_function, reverse=False): \"\"\" Sort a log based on lambda expression Parameters", "(at your option) any later version. 
PM4Py is distributed in the hope that", "if trace: new_log.append(sort_timestamp_trace(trace, timestamp_key=timestamp_key, reverse_sort=reverse_sort)) new_log._list.sort(key=lambda x: x[0][timestamp_key], reverse=reverse_sort) return new_log def sort_timestamp(log,", "log \"\"\" traces = sorted(event_log._list, key=sort_function, reverse=reverse) new_log = EventLog(traces, attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present,", "reverse=False): \"\"\" Sort a log based on a lambda expression Parameters ------------ event_log", "event log \"\"\" events = sorted(event_log._list, key=lambda x: x[timestamp_key], reverse=reverse_sort) new_stream = EventStream(events,", "WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR", "log timestamp_key Timestamp key reverse_sort If true, reverses the direction in which the", "x[timestamp_key], reverse=reverse_sort) new_stream = EventStream(events, attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) return new_stream def", "direction in which the sort is done (ascending) Returns ----------- log Sorted log", "timestamp_key=timestamp_key, reverse_sort=reverse_sort)) new_log._list.sort(key=lambda x: x[0][timestamp_key], reverse=reverse_sort) return new_log def sort_timestamp(log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\"", "\"\"\" events = sorted(event_log._list, key=sort_function, reverse=reverse) new_stream = EventStream(events, attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers,", "true, reverses the direction in which the sort is done (ascending) Returns -----------", "trace Sorted trace \"\"\" events = sorted(trace._list, key=lambda x: x[timestamp_key], reverse=reverse_sort) new_trace =", "it and/or modify it under 
the terms of the GNU General Public License", "----------- trace Sorted trace \"\"\" events = sorted(trace._list, key=lambda x: x[timestamp_key], reverse=reverse_sort) new_trace", "sorted(event_log._list, key=sort_function, reverse=reverse) new_stream = EventStream(events, attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) return new_stream", "3 of the License, or (at your option) any later version. PM4Py is", "of PM4Py (More Info: https://pm4py.fit.fraunhofer.de). PM4Py is free software: you can redistribute it", "on timestamp key Parameters ----------- event_log Event log timestamp_key Timestamp key reverse_sort If", "in which the sort is done (ascending) Returns ----------- log Sorted log \"\"\"", "This file is part of PM4Py (More Info: https://pm4py.fit.fraunhofer.de). PM4Py is free software:", "PM4Py. If not, see <https://www.gnu.org/licenses/>. ''' from pm4py.objects.log.obj import EventLog, Trace, EventStream from", "EventLog: return sort_timestamp_log(log, timestamp_key=timestamp_key, reverse_sort=reverse_sort) return sort_timestamp_stream(log, timestamp_key=timestamp_key, reverse_sort=reverse_sort) def sort_lambda_log(event_log, sort_function, reverse=False):", "EventStream from pm4py.util import xes_constants as xes def sort_timestamp_trace(trace, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort", "(sort by reverse order) Returns ------------- log Sorted log \"\"\" if type(log) is", "Parameters ------------ event_log Log sort_function Sort function reverse Boolean (sort by reverse order)", "x[timestamp_key], reverse=reverse_sort) new_trace = Trace(events, attributes=trace.attributes) return new_trace def sort_timestamp_stream(event_log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\"", "(More Info: https://pm4py.fit.fraunhofer.de). 
PM4Py is free software: you can redistribute it and/or modify", "the hope that it will be useful, but WITHOUT ANY WARRANTY; without even", "If true, reverses the direction in which the sort is done (ascending) Returns", "Log sort_function Sort function reverse Boolean (sort by reverse order) Returns ------------- log", "will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of", "(ascending) Returns ----------- trace Sorted trace \"\"\" events = sorted(trace._list, key=lambda x: x[timestamp_key],", "(sort by reverse order) Returns ------------ new_log Sorted log \"\"\" traces = sorted(event_log._list,", "as xes def sort_timestamp_trace(trace, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a trace based on timestamp", "key Parameters ----------- trace Trace timestamp_key Timestamp key reverse_sort If true, reverses the", "\"\"\" Sort a log based on a lambda expression Parameters ------------ event_log Log", "lambda expression Parameters ------------- log Log sort_function Sort function reverse Boolean (sort by", "------------ new_log Sorted log \"\"\" traces = sorted(event_log._list, key=sort_function, reverse=reverse) new_log = EventLog(traces,", "Sorted log \"\"\" if type(log) is EventLog: return sort_lambda_log(log, sort_function, reverse=reverse) return sort_lambda_stream(log,", "is done (ascending) Returns ----------- trace Sorted trace \"\"\" events = sorted(trace._list, key=lambda", "stream based on a lambda expression Parameters ------------ event_log Stream sort_function Sort function", "https://pm4py.fit.fraunhofer.de). PM4Py is free software: you can redistribute it and/or modify it under", "a copy of the GNU General Public License along with PM4Py. 
If not,", "based on timestamp key Parameters ----------- event_log Event log timestamp_key Timestamp key reverse_sort", "by reverse order) Returns ------------ new_log Sorted log \"\"\" traces = sorted(event_log._list, key=sort_function,", "= EventLog(attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) for trace in event_log: if trace: new_log.append(sort_timestamp_trace(trace,", "sort_function Sort function reverse Boolean (sort by reverse order) Returns ------------ new_log Sorted", "or (at your option) any later version. PM4Py is distributed in the hope", "of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public", "the terms of the GNU General Public License as published by the Free", "not, see <https://www.gnu.org/licenses/>. ''' from pm4py.objects.log.obj import EventLog, Trace, EventStream from pm4py.util import", "a lambda expression Parameters ------------ event_log Log sort_function Sort function reverse Boolean (sort", "timestamp key Parameters ----------- event_log Event log timestamp_key Timestamp key reverse_sort If true,", "Sorted trace \"\"\" events = sorted(trace._list, key=lambda x: x[timestamp_key], reverse=reverse_sort) new_trace = Trace(events,", "reverse Boolean (sort by reverse order) Returns ------------ new_log Sorted log \"\"\" traces", "\"\"\" Sort a trace based on timestamp key Parameters ----------- trace Trace timestamp_key", "\"\"\" Sort a log based on timestamp key Parameters ----------- event_log Log timestamp_key", "expression Parameters ------------- log Log sort_function Sort function reverse Boolean (sort by reverse", "(sort by reverse order) Returns ------------ stream Sorted stream \"\"\" events = sorted(event_log._list,", "def sort_timestamp_log(event_log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a log based on timestamp key Parameters", "you can 
redistribute it and/or modify it under the terms of the GNU", "sort is done (ascending) Returns ----------- trace Sorted trace \"\"\" events = sorted(trace._list,", "the direction in which the sort is done (ascending) Returns ----------- trace Sorted", "Parameters ----------- event_log Log timestamp_key Timestamp key reverse_sort If true, reverses the direction", "be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY", "Trace, EventStream from pm4py.util import xes_constants as xes def sort_timestamp_trace(trace, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\"", "log \"\"\" new_log = EventLog(attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) for trace in event_log:", "type(log) is EventLog: return sort_timestamp_log(log, timestamp_key=timestamp_key, reverse_sort=reverse_sort) return sort_timestamp_stream(log, timestamp_key=timestamp_key, reverse_sort=reverse_sort) def sort_lambda_log(event_log,", "by reverse order) Returns ------------ stream Sorted stream \"\"\" events = sorted(event_log._list, key=sort_function,", "Returns ----------- event_log Sorted event log \"\"\" events = sorted(event_log._list, key=lambda x: x[timestamp_key],", "based on timestamp key Parameters ----------- trace Trace timestamp_key Timestamp key reverse_sort If", "the License, or (at your option) any later version. PM4Py is distributed in", "the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the", "''' from pm4py.objects.log.obj import EventLog, Trace, EventStream from pm4py.util import xes_constants as xes", "based on lambda expression Parameters ------------- log Log sort_function Sort function reverse Boolean", "sort_lambda(log, sort_function, reverse=False): \"\"\" Sort a log based on lambda expression Parameters -------------", "attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) return new_log def sort_lambda_stream(event_log, sort_function, reverse=False): \"\"\" Sort", "Sort a log based on timestamp key Parameters ----------- event_log Log timestamp_key Timestamp", "key=lambda x: x[timestamp_key], reverse=reverse_sort) new_stream = EventStream(events, attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) return", "is done (ascending) Returns ----------- log Sorted Trace/Event log \"\"\" if type(log) is", "sort_function, reverse=False): \"\"\" Sort a log based on lambda expression Parameters ------------- log", "it under the terms of the GNU General Public License as published by", "log based on timestamp key Parameters ----------- event_log Event log timestamp_key Timestamp key", "properties=event_log.properties) for trace in event_log: if trace: new_log.append(sort_timestamp_trace(trace, timestamp_key=timestamp_key, reverse_sort=reverse_sort)) new_log._list.sort(key=lambda x: x[0][timestamp_key],", "------------ event_log Log sort_function Sort function reverse Boolean (sort by reverse order) Returns", "done (ascending) Returns ----------- log Sorted Trace/Event log \"\"\" if type(log) is EventLog:", "part of PM4Py (More Info: https://pm4py.fit.fraunhofer.de). 
PM4Py is free software: you can redistribute", "def sort_timestamp_trace(trace, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a trace based on timestamp key Parameters", "reverse_sort=False): \"\"\" Sort a log based on timestamp key Parameters ----------- log Trace/Event", "extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers, properties=event_log.properties) return new_log def sort_lambda_stream(event_log, sort_function, reverse=False): \"\"\" Sort a", "(ascending) Returns ----------- log Sorted log \"\"\" new_log = EventLog(attributes=event_log.attributes, extensions=event_log.extensions, omni_present=event_log.omni_present, classifiers=event_log.classifiers,", "a log based on a lambda expression Parameters ------------ event_log Log sort_function Sort", "the sort is done (ascending) Returns ----------- event_log Sorted event log \"\"\" events", "reverse=reverse_sort) return new_log def sort_timestamp(log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False): \"\"\" Sort a log based on", "of the License, or (at your option) any later version. 
'''
    This file is part of PM4Py (More Info: https://pm4py.fit.fraunhofer.de).

    PM4Py is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    PM4Py is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with PM4Py. If not, see <https://www.gnu.org/licenses/>.
'''
from pm4py.objects.log.obj import EventLog, Trace, EventStream
from pm4py.util import xes_constants as xes


def sort_timestamp_trace(trace, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False):
    """
    Sort a trace based on timestamp key

    Parameters
    -----------
    trace
        Trace
    timestamp_key
        Timestamp key
    reverse_sort
        If true, reverses the direction in which the sort is done (ascending)

    Returns
    -----------
    trace
        Sorted trace
    """
    events = sorted(trace._list, key=lambda x: x[timestamp_key], reverse=reverse_sort)
    new_trace = Trace(events, attributes=trace.attributes)
    return new_trace


def sort_timestamp_stream(event_log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False):
    """
    Sort an event log based on timestamp key

    Parameters
    -----------
    event_log
        Event log
    timestamp_key
        Timestamp key
    reverse_sort
        If true, reverses the direction in which the sort is done (ascending)

    Returns
    -----------
    event_log
        Sorted event log
    """
    events = sorted(event_log._list, key=lambda x: x[timestamp_key], reverse=reverse_sort)
    new_stream = EventStream(events, attributes=event_log.attributes, extensions=event_log.extensions,
                             omni_present=event_log.omni_present, classifiers=event_log.classifiers,
                             properties=event_log.properties)
    return new_stream


def sort_timestamp_log(event_log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False):
    """
    Sort a log based on timestamp key

    Parameters
    -----------
    event_log
        Log
    timestamp_key
        Timestamp key
    reverse_sort
        If true, reverses the direction in which the sort is done (ascending)

    Returns
    -----------
    log
        Sorted log
    """
    new_log = EventLog(attributes=event_log.attributes, extensions=event_log.extensions,
                       omni_present=event_log.omni_present, classifiers=event_log.classifiers,
                       properties=event_log.properties)
    for trace in event_log:
        if trace:
            new_log.append(sort_timestamp_trace(trace, timestamp_key=timestamp_key, reverse_sort=reverse_sort))
    new_log._list.sort(key=lambda x: x[0][timestamp_key], reverse=reverse_sort)
    return new_log


def sort_timestamp(log, timestamp_key=xes.DEFAULT_TIMESTAMP_KEY, reverse_sort=False):
    """
    Sort a log based on timestamp key

    Parameters
    -----------
    log
        Trace/Event log
    timestamp_key
        Timestamp key
    reverse_sort
        If true, reverses the direction in which the sort is done (ascending)

    Returns
    -----------
    log
        Sorted Trace/Event log
    """
    if type(log) is EventLog:
        return sort_timestamp_log(log, timestamp_key=timestamp_key, reverse_sort=reverse_sort)
    return sort_timestamp_stream(log, timestamp_key=timestamp_key, reverse_sort=reverse_sort)


def sort_lambda_log(event_log, sort_function, reverse=False):
    """
    Sort a log based on a lambda expression

    Parameters
    ------------
    event_log
        Log
    sort_function
        Sort function
    reverse
        Boolean (sort by reverse order)

    Returns
    ------------
    new_log
        Sorted log
    """
    traces = sorted(event_log._list, key=sort_function, reverse=reverse)
    new_log = EventLog(traces, attributes=event_log.attributes, extensions=event_log.extensions,
                       omni_present=event_log.omni_present, classifiers=event_log.classifiers,
                       properties=event_log.properties)
    return new_log


def sort_lambda_stream(event_log, sort_function, reverse=False):
    """
    Sort a stream based on a lambda expression

    Parameters
    ------------
    event_log
        Stream
    sort_function
        Sort function
    reverse
        Boolean (sort by reverse order)

    Returns
    ------------
    stream
        Sorted stream
    """
    events = sorted(event_log._list, key=sort_function, reverse=reverse)
    new_stream = EventStream(events, attributes=event_log.attributes, extensions=event_log.extensions,
                             omni_present=event_log.omni_present, classifiers=event_log.classifiers,
                             properties=event_log.properties)
    return new_stream


def sort_lambda(log, sort_function, reverse=False):
    """
    Sort a log based on lambda expression

    Parameters
    -------------
    log
        Log
    sort_function
        Sort function
    reverse
        Boolean (sort by reverse order)

    Returns
    -------------
    log
        Sorted log
    """
    if type(log) is EventLog:
        return sort_lambda_log(log, sort_function, reverse=reverse)
    return sort_lambda_stream(log, sort_function, reverse=reverse)
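All of the timestamp sorters above reduce to one `sorted()` call with the timestamp attribute as key. A minimal self-contained sketch of that core idea, using plain dicts in place of PM4Py's event objects (the dict layout and the `sort_by_timestamp` name are illustrative, not part of PM4Py's API):

```python
from datetime import datetime

# Stand-ins for PM4Py events: plain dicts keyed by a timestamp attribute.
TIMESTAMP_KEY = "time:timestamp"

events = [
    {"concept:name": "B", TIMESTAMP_KEY: datetime(2023, 1, 2)},
    {"concept:name": "A", TIMESTAMP_KEY: datetime(2023, 1, 1)},
    {"concept:name": "C", TIMESTAMP_KEY: datetime(2023, 1, 3)},
]


def sort_by_timestamp(evts, timestamp_key=TIMESTAMP_KEY, reverse_sort=False):
    # Same core as sort_timestamp_trace/stream: sort by the timestamp
    # attribute; reverse_sort flips the direction.
    return sorted(evts, key=lambda e: e[timestamp_key], reverse=reverse_sort)


ordered = sort_by_timestamp(events)
print([e["concept:name"] for e in ordered])  # ['A', 'B', 'C']
```

`reverse_sort` maps directly onto `sorted()`'s `reverse` flag, which is why all six functions can share the same one-line sorting step.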
import theano
from utils import srng


def dropout(input, dropout_rate=0):
    if dropout_rate > 0:
        retain = 1 - dropout_rate
        d_output = (input / retain) * srng.binomial(input.shape, p=retain, dtype='int32').astype('float32')
    else:
        d_output = input
    return d_output
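This is the "inverted" dropout formulation: survivors are scaled by `1/retain` at training time so the expected activation is unchanged and no rescaling is needed at inference. A framework-free sketch of the same computation in pure Python (the per-element Bernoulli draw stands in for Theano's `srng.binomial`):

```python
import random


def dropout(values, dropout_rate=0.0, rng=random):
    # Inverted dropout: zero each value with probability dropout_rate and
    # scale survivors by 1/retain, so the expected output equals the input.
    if dropout_rate <= 0:
        return list(values)
    retain = 1.0 - dropout_rate
    return [v / retain if rng.random() < retain else 0.0 for v in values]


random.seed(0)
out = dropout([1.0, 2.0, 3.0, 4.0], dropout_rate=0.5)
# With rate 0.5, every element is either 0.0 or exactly twice its input.
print(out)
```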
# pytglib/api/types/push_message_content_location.py
from ..utils import Object


class PushMessageContentLocation(Object):
    """
    A message with a location

    Attributes:
        ID (:obj:`str`): ``PushMessageContentLocation``

    Args:
        is_live (:obj:`bool`): True, if the location is live
        is_pinned (:obj:`bool`): True, if the message is a pinned message with the specified content

    Returns:
        PushMessageContent

    Raises:
        :class:`telegram.Error`
    """
    ID = "pushMessageContentLocation"

    def __init__(self, is_live, is_pinned, **kwargs):
        self.is_live = is_live  # bool
        self.is_pinned = is_pinned  # bool

    @staticmethod
    def read(q: dict, *args) -> "PushMessageContentLocation":
        is_live = q.get('is_live')
        is_pinned = q.get('is_pinned')
        return PushMessageContentLocation(is_live, is_pinned)
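`read` is a plain dict-to-object constructor for a decoded JSON payload. A dependency-free sketch of the same round trip (dropping the package's `Object` base class, so this is illustrative rather than the library's actual class):

```python
class PushMessageContentLocation:
    """Illustrative stand-in for the pytglib type, minus the Object base."""
    ID = "pushMessageContentLocation"

    def __init__(self, is_live, is_pinned):
        self.is_live = is_live      # bool
        self.is_pinned = is_pinned  # bool

    @staticmethod
    def read(q: dict) -> "PushMessageContentLocation":
        # Construct the object from a decoded JSON dict, mirroring the
        # library's read(); q.get() yields None for missing fields.
        return PushMessageContentLocation(q.get("is_live"), q.get("is_pinned"))


msg = PushMessageContentLocation.read({"is_live": True, "is_pinned": False})
print(msg.ID, msg.is_live, msg.is_pinned)  # pushMessageContentLocation True False
```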
def input_to_list(guests_count):
    return [input() for _ in range(guests_count)]


def input_to_list_until_command(command):
    result = []
    line = input()
    while line != command:
        result.append(line)
        line = input()
    return result


def get_not_arrived_guests(guests, guests_arrived):
    return set(guests) - set(guests_arrived)


def print_result(result):
    # Print the count, then names starting with a digit, then the rest.
    result = sorted(result)
    print(len(result))
    for guest in result:
        if guest[0].isdigit():
            print(guest)
    for guest in result:
        if not guest[0].isdigit():
            print(guest)


guests = input_to_list(int(input()))
guests_arrived = input_to_list_until_command("END")
print_result(get_not_arrived_guests(guests, guests_arrived))
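The core of the solution is a set difference followed by a digit-first ordering. A compact, input-free sketch of the same logic (the helper name `missing_guests` is illustrative, not from the source):

```python
def missing_guests(guests, arrived):
    # Set difference finds guests who never arrived; sort, then emit
    # names starting with a digit before the alphabetic ones.
    missing = sorted(set(guests) - set(arrived))
    return ([g for g in missing if g[0].isdigit()]
            + [g for g in missing if not g[0].isdigit()])


print(missing_guests(["Tom", "7Kim", "Ann"], ["Ann"]))  # ['7Kim', 'Tom']
```

Converting to a set drops duplicates and makes the membership difference O(n) instead of O(n²) for list scans.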
"""Functions to manipulate and validate configurations."""

import collections
import jsonschema
import copy


def merge_config(a, b):
    """Merges config b in a."""
    for key, b_value in b.items():
        if not isinstance(b_value, dict):
            a[key] = b_value
        else:
            a_value = a.get(key)
            if a_value is not None and isinstance(a_value, dict):
                merge_config(a_value, b_value)
            else:
                a[key] = b_value
    return a


def replace_config(a, b):
    """Updates fields in a by fields in b."""
    a.update(b)
    return a


_non_user_fields = {"model", "modelType", "imageTag", "build", "parent_model"}


def update_config(a, b, mode="default"):
    """Update the configuration a with b."""
    if not b:
        return a
    from_version = get_config_version(a)
    to_version = get_config_version(b)
    if from_version == 1 and to_version == 2:
        # When updating the configuration to a newer version, we clear all user fields.
        a = {k: v for k, v in a.items() if k in _non_user_fields}
        return replace_config(a, b)
    if mode == "default" or mode == "merge":
        return merge_config(a, b)
    if mode == "replace":
        return replace_config(a, b)
    raise ValueError("Invalid configuration update mode: %s" % mode)


def index_config(config, path, index_structure=True):
    """Index a configuration with a path-like string."""
    key = None
    sections = path.split("/")
    if not index_structure:
        key = sections[-1]
        sections = sections[:-1]
    for section in sections:
        if isinstance(config, dict):
            if section not in config:
                raise ValueError("Invalid path %s in config" % path)
            config = config[section]
        elif isinstance(config, list):
            section_index = None
            try:
                section_index = int(section)
            except ValueError:
                for i, block in enumerate(config):
                    if isinstance(block, dict) and block.get("name") == section:
                        section_index = i
                        break
            if section_index is None:
                raise ValueError(
                    "Expected an array index in path, but got %s instead" % section
                )
            config = config[section_index]
        else:
            raise ValueError(
                "Paths in config can only represent object and array structures"
            )
    if index_structure:
        return config
    else:
        return config, key


def build_override(config, path, value):
    """Builds a configuration override to update the value at path."""
    if not path:
        return value
    sections = path.split("/")
    section = sections[0]
    inner_path = "/".join(sections[1:])
    if isinstance(config, dict):
        return {section: build_override(config.get(section), inner_path, value)}
    if isinstance(config, list):
        index = int(sections[0])
        override = build_override(config[index], inner_path, value)
        # Since lists can't be merged, the override should contain the full list content.
        config = list(config)
        if isinstance(override, dict):
            config[index] = merge_config(copy.deepcopy(config[index]), override)
        else:
            config[index] = override
        return config
    raise TypeError("Paths in config can only represent object and array structures")


def index_schema(schema, path):
    """Index a JSON schema with a path-like string."""
    for section in path.split("/"):
        if schema["type"] != "object":
            raise ValueError(
                "Only object types are supported in the schema structure, "
                "but saw type %s" % schema["type"]
            )
        properties = schema["properties"]
        if section not in properties:
            raise ValueError("Invalid path %s in user options" % path)
        schema = properties[section]
    return schema


def validate_inference_options(inference_options, config):
    """Validate the inference options, raising ValueError on error."""
    json_schema = inference_options.get("json_schema")
    if json_schema is None:
        raise ValueError('Missing "json_schema" in "inference_options"')
    jsonschema.Draft7Validator.check_schema(json_schema)
    options = inference_options.get("options")
    if options is None:
        raise ValueError('Missing "options" in "inference_options"')
    validate_mapping(json_schema, options, config)
    return json_schema


def validate_mapping(schema, options, config):
    """Validate the mapping between inference options and configuration fields,
    raising ValueError on error.
    """
    for i, mapping in enumerate(options):
        config_path = mapping.get("config_path")
        if config_path is None:
            raise ValueError('Missing "config_path" in option mapping %d' % i)
        if isinstance(config_path, str):
            config_path = [config_path]
        for cp in config_path:
            dst_config, _ = index_config(config, cp, index_structure=False)
            if not isinstance(dst_config, dict):
                raise ValueError("Paths in config can only index object structures")
        option_path = mapping.get("option_path")
        if option_path is None:
            raise ValueError('Missing "option_path" in option mapping %d' % i)
        _ = index_schema(schema, option_path)


def read_options(config, options):
    """Reads the inference options.

    For V1 configurations, this function returns a configuration override.
    For V2 configurations, this function returns a dict mapping operator names
    to their options.

    Raises:
      ValueError: if inference options were not expected or the value is not accepted.
    """
    inference_options = config.get("inference_options")
    if inference_options is None:
        raise ValueError("This model does not expect inference options")
    try:
        jsonschema.validate(options, inference_options["json_schema"])
    except jsonschema.ValidationError as e:
        raise ValueError("Options validation error: %s" % e.message)
    v2_config = is_v2_config(config)
    operators_options = collections.defaultdict(dict)
    config_override = {}
    for mapping in inference_options["options"]:
        try:
            option_value = index_config(options, mapping["option_path"])
        except ValueError:
            continue  # Option not passed for this request.
        config_path = mapping["config_path"]
        if isinstance(config_path, str):
            config_path = [config_path]
        if v2_config:
            for cp in config_path:
                dst_config, dst_key = index_config(config, cp, index_structure=False)
                operators_options[dst_config["name"]].update({dst_key: option_value})
        else:
            for cp in config_path:
                merge_config(
                    config_override,
                    build_override(config, cp, option_value),
                )
    if v2_config:
        return operators_options
    return config_override


def is_v2_config(config):
    """Returns True if config is a V2 configuration."""
    preprocess = config.get("preprocess")
    return (
        "tokenization" not in config
        and preprocess is not None
        and isinstance(preprocess, list)
    )


def is_v1_config(config):
    """Returns True if config is a V1 configuration."""
    return not is_v2_config(config)


def get_config_version(config):
    """Returns the version of the configuration."""
    return 2 if is_v2_config(config) else 1


def ensure_operators_name(config):
    """Make sure all operators in model configuration have a unique name."""
    if is_v1_config(config):
        return
    i = 1
    for process in ["preprocess", "postprocess"]:
        process_config = config.get(process)
        if process_config:
            for op_config in process_config:
                op_type = op_config.get("op")
                if op_type:
                    op_config.setdefault("name", "%s_%d" % (op_type, i))
                    i += 1


def old_to_new_config(config):
    """Locally update old configuration with 'tokenization' field to include new
    'vocabulary' and 'preprocess' fields."""
    if not config:
        return
    tok_config = config.get("tokenization")
    new_config = config
    if tok_config:
        if "vocabulary" not in config:
            new_config = copy.deepcopy(config)
            vocab_src = tok_config["source"].get("vocabulary", None)
            vocab_tgt = tok_config["target"].get("vocabulary", None)
            replace_src = tok_config["source"].get("replace_vocab", False)
            replace_tgt = tok_config["target"].get("replace_vocab", False)
            prev_vocab_src = tok_config["source"].get("previous_vocabulary", None)
            prev_vocab_tgt = tok_config["target"].get("previous_vocabulary", None)
            if vocab_src or vocab_tgt:
                new_config["vocabulary"] = {}
                if vocab_src:
                    new_config["vocabulary"]["source"] = {
                        "path": vocab_src,
                        "replace_vocab": replace_src,
                    }
                if vocab_tgt:
                    new_config["vocabulary"]["target"] = {
                        "path": vocab_tgt,
                        "replace_vocab": replace_tgt,
                    }
                if prev_vocab_src:
                    new_config["vocabulary"]["source"][
                        "previous_vocabulary"
                    ] = prev_vocab_src
                if prev_vocab_tgt:
                    new_config["vocabulary"]["target"][
                        "previous_vocabulary"
                    ] = prev_vocab_tgt
        if "preprocess" not in config:
            new_tok_config = copy.deepcopy(tok_config)
            new_tok_config["source"].pop("vocabulary", None)
            new_tok_config["target"].pop("vocabulary", None)
            new_tok_config["source"].pop("replace_vocab", None)
            new_tok_config["target"].pop("replace_vocab", None)
            new_config["preprocess"] = [
                {
                    "op": "tokenization",
                    "source": new_tok_config["source"],
                    "target": new_tok_config["target"],
                }
            ]
    return new_config


def _ensure_params_order(params):
    params = collections.OrderedDict(sorted(params.items(), key=lambda x: x[0]))
    preferred_first = ["op", "name"]
    preferred_last = ["overrides"]
    for field in reversed(preferred_first):
        if field in params:
            params.move_to_end(field, last=False)
    for field in preferred_last:
        if field in params:
            params.move_to_end(field, last=True)
    return params


def prepare_config_for_save(config):
    """Prepares the configuration before saving it in the model directory."""
    if is_v2_config(config):
        # In V2 operators, we prefer that some fields appear first (or last) for readability.
        config = config.copy()
        for section_name in ("preprocess", "postprocess"):
            section = config.get(section_name)
            if section is None:
                continue
            config[section_name] = [_ensure_params_order(params) for params in section]
    return config
Raises: ValueError: if inference options were not expected or the", "\"\"\"Updates fields in a by fields in b.\"\"\" a.update(b) return a _non_user_fields =", "tok_config[\"target\"].get(\"previous_vocabulary\", None) if vocab_src or vocab_tgt: new_config[\"vocabulary\"] = {} if vocab_src: new_config[\"vocabulary\"][\"source\"] =", "the inference options, raising ValueError on error.\"\"\" json_schema = inference_options.get(\"json_schema\") if json_schema is", "preferred_last: if field in params: params.move_to_end(field, last=True) return params def prepare_config_for_save(config): \"\"\"Prepares the", "def _ensure_params_order(params): params = collections.OrderedDict(sorted(params.items(), key=lambda x: x[0])) preferred_first = [\"op\", \"name\"] preferred_last", "When updating the configuration to a newer version, we clear all user fields.", "structure, \" \"but saw type %s\" % schema[\"type\"] ) properties = schema[\"properties\"] if", "if vocab_tgt: new_config[\"vocabulary\"][\"target\"] = { \"path\": vocab_tgt, \"replace_vocab\": replace_tgt, } if prev_vocab_src: new_config[\"vocabulary\"][\"source\"][", "\"target\": new_tok_config[\"target\"], } ] return new_config def _ensure_params_order(params): params = collections.OrderedDict(sorted(params.items(), key=lambda x:", "the configuration to a newer version, we clear all user fields. 
a =", "= copy.deepcopy(config) vocab_src = tok_config[\"source\"].get(\"vocabulary\", None) vocab_tgt = tok_config[\"target\"].get(\"vocabulary\", None) replace_src = tok_config[\"source\"].get(\"replace_vocab\",", "= \"/\".join(sections[1:]) if isinstance(config, dict): return {section: build_override(config.get(section), inner_path, value)} if isinstance(config, list):", "a configuration with a path-like string.\"\"\" key = None sections = path.split(\"/\") if", "is None: raise ValueError('Missing \"options\" in \"inference_options\"') validate_mapping(json_schema, options, config) return json_schema def", "if config_path is None: raise ValueError('Missing \"config_path\" in option mapping %d' % i)", "\"inference_options\"') jsonschema.Draft7Validator.check_schema(json_schema) options = inference_options.get(\"options\") if options is None: raise ValueError('Missing \"options\" in", "for cp in config_path: merge_config( config_override, build_override(config, cp, option_value), ) if v2_config: return", "return new_config def _ensure_params_order(params): params = collections.OrderedDict(sorted(params.items(), key=lambda x: x[0])) preferred_first = [\"op\",", "merge_config(a, b) if mode == \"replace\": return replace_config(a, b) raise ValueError(\"Invalid configuration update", "with a path-like string.\"\"\" key = None sections = path.split(\"/\") if not index_structure:", "ValueError(\"Invalid configuration update mode: %s\" % mode) def index_config(config, path, index_structure=True): \"\"\"Index a", "params: params.move_to_end(field, last=True) return params def prepare_config_for_save(config): \"\"\"Prepares the configuration before saving it", "but got %s instead\" % section ) config = config[section_index] else: raise ValueError(", "vocab_src, \"replace_vocab\": replace_src, } if vocab_tgt: new_config[\"vocabulary\"][\"target\"] = { \"path\": vocab_tgt, \"replace_vocab\": replace_tgt,", "'preprocess\" fields.\"\"\" if not config: return tok_config = 
config.get(\"tokenization\") new_config = config if", "dict) and block.get(\"name\") == section: section_index = i break if section_index is None:", "%d' % i) if isinstance(config_path, str): config_path = [config_path] for cp in config_path:", "dst_config, dst_key = index_config(config, cp, index_structure=False) operators_options[dst_config[\"name\"]].update({dst_key: option_value}) else: for cp in config_path:", "import copy def merge_config(a, b): \"\"\"Merges config b in a.\"\"\" for key, b_value", "validate_mapping(schema, options, config): \"\"\"Validate the mapping between inference options and configuration fields, raising", "prev_vocab_src: new_config[\"vocabulary\"][\"source\"][ \"previous_vocabulary\" ] = prev_vocab_src if prev_vocab_tgt: new_config[\"vocabulary\"][\"target\"][ \"previous_vocabulary\" ] = prev_vocab_tgt", "in config and preprocess is not None and isinstance(preprocess, list) ) def is_v1_config(config):", "object and array structures\" ) if index_structure: return config else: return config, key", "replace_config(a, b) if mode == \"default\" or mode == \"merge\": return merge_config(a, b)", "config, key def build_override(config, path, value): \"\"\"Builds a configuration override to update the", "\"postprocess\"]: process_config = config.get(process) if process_config: for op_config in process_config: op_type = op_config.get(\"op\")", "isinstance(override, dict): config[index] = merge_config(copy.deepcopy(config[index]), override) else: config[index] = override return config raise", "cp in config_path: merge_config( config_override, build_override(config, cp, option_value), ) if v2_config: return operators_options", "a _non_user_fields = {\"model\", \"modelType\", \"imageTag\", \"build\", \"parent_model\"} def update_config(a, b, mode=\"default\"): \"\"\"Update", "inference_options[\"json_schema\"]) except jsonschema.ValidationError as e: raise ValueError(\"Options validation error: %s\" % e.message) v2_config", "can only index object 
structures\") option_path = mapping.get(\"option_path\") if option_path is None: raise", "a V1 configuration.\"\"\" return not is_v2_config(config) def get_config_version(config): \"\"\"Returns the version of the", "update old configuration with 'tokenization' field to include new 'vocabulary' and 'preprocess\" fields.\"\"\"", "return config raise TypeError(\"Paths in config can only represent object and array structures\")", "\"previous_vocabulary\" ] = prev_vocab_src if prev_vocab_tgt: new_config[\"vocabulary\"][\"target\"][ \"previous_vocabulary\" ] = prev_vocab_tgt if \"preprocess\"", "and to_version == 2: # When updating the configuration to a newer version,", "with b.\"\"\" if not b: return a from_version = get_config_version(a) to_version = get_config_version(b)", "configuration.\"\"\" return 2 if is_v2_config(config) else 1 def ensure_operators_name(config): \"\"\"Make sure all operators", ") if index_structure: return config else: return config, key def build_override(config, path, value):", "op_config.setdefault(\"name\", \"%s_%d\" % (op_type, i)) i += 1 def old_to_new_config(config): \"\"\"Locally update old", "if not index_structure: key = sections[-1] sections = sections[:-1] for section in sections:", "config) return json_schema def validate_mapping(schema, options, config): \"\"\"Validate the mapping between inference options", "for this request. 
config_path = mapping[\"config_path\"] if isinstance(config_path, str): config_path = [config_path] if", "field in params: params.move_to_end(field, last=True) return params def prepare_config_for_save(config): \"\"\"Prepares the configuration before", "= [config_path] if v2_config: for cp in config_path: dst_config, dst_key = index_config(config, cp,", "path.split(\"/\") if not index_structure: key = sections[-1] sections = sections[:-1] for section in", "on error.\"\"\" json_schema = inference_options.get(\"json_schema\") if json_schema is None: raise ValueError('Missing \"json_schema\" in", "else: config[index] = override return config raise TypeError(\"Paths in config can only represent", "% (op_type, i)) i += 1 def old_to_new_config(config): \"\"\"Locally update old configuration with", "merge_config(a, b): \"\"\"Merges config b in a.\"\"\" for key, b_value in b.items(): if", "in process_config: op_type = op_config.get(\"op\") if op_type: op_config.setdefault(\"name\", \"%s_%d\" % (op_type, i)) i", "\"source\": new_tok_config[\"source\"], \"target\": new_tok_config[\"target\"], } ] return new_config def _ensure_params_order(params): params = collections.OrderedDict(sorted(params.items(),", "%s instead\" % section ) config = config[section_index] else: raise ValueError( \"Paths in", "None: raise ValueError('Missing \"json_schema\" in \"inference_options\"') jsonschema.Draft7Validator.check_schema(json_schema) options = inference_options.get(\"options\") if options is", "path) schema = properties[section] return schema def validate_inference_options(inference_options, config): \"\"\"Validate the inference options,", "return config_override def is_v2_config(config): \"\"\"Returns True if config is a V2 configuration.\"\"\" preprocess", "and 'preprocess\" fields.\"\"\" if not config: return tok_config = config.get(\"tokenization\") new_config = config", "mapping[\"config_path\"] if isinstance(config_path, str): config_path = [config_path] if v2_config: for cp in 
config_path:", "config_path = [config_path] if v2_config: for cp in config_path: dst_config, dst_key = index_config(config,", "or vocab_tgt: new_config[\"vocabulary\"] = {} if vocab_src: new_config[\"vocabulary\"][\"source\"] = { \"path\": vocab_src, \"replace_vocab\":", "if vocab_src: new_config[\"vocabulary\"][\"source\"] = { \"path\": vocab_src, \"replace_vocab\": replace_src, } if vocab_tgt: new_config[\"vocabulary\"][\"target\"]", "index_structure=False) operators_options[dst_config[\"name\"]].update({dst_key: option_value}) else: for cp in config_path: merge_config( config_override, build_override(config, cp, option_value),", "get_config_version(b) if from_version == 1 and to_version == 2: # When updating the", "not b: return a from_version = get_config_version(a) to_version = get_config_version(b) if from_version ==", "a unique name.\"\"\" if is_v1_config(config): return i = 1 for process in [\"preprocess\",", "copy.deepcopy(tok_config) new_tok_config[\"source\"].pop(\"vocabulary\", None) new_tok_config[\"target\"].pop(\"vocabulary\", None) new_tok_config[\"source\"].pop(\"replace_vocab\", None) new_tok_config[\"target\"].pop(\"replace_vocab\", None) new_config[\"preprocess\"] = [ {", "operators_options = collections.defaultdict(dict) config_override = {} for mapping in inference_options[\"options\"]: try: option_value =", "vocab_src: new_config[\"vocabulary\"][\"source\"] = { \"path\": vocab_src, \"replace_vocab\": replace_src, } if vocab_tgt: new_config[\"vocabulary\"][\"target\"] =", "options is None: raise ValueError('Missing \"options\" in \"inference_options\"') validate_mapping(json_schema, options, config) return json_schema", "return i = 1 for process in [\"preprocess\", \"postprocess\"]: process_config = config.get(process) if", "a by fields in b.\"\"\" a.update(b) return a _non_user_fields = {\"model\", \"modelType\", \"imageTag\",", "\"options\" in \"inference_options\"') validate_mapping(json_schema, options, config) return json_schema def 
validate_mapping(schema, options, config): \"\"\"Validate", "if inference_options is None: raise ValueError(\"This model does not expect inference options\") try:", "if field in params: params.move_to_end(field, last=False) for field in preferred_last: if field in", "return schema def validate_inference_options(inference_options, config): \"\"\"Validate the inference options, raising ValueError on error.\"\"\"", "break if section_index is None: raise ValueError( \"Expected an array index in path,", "configuration update mode: %s\" % mode) def index_config(config, path, index_structure=True): \"\"\"Index a configuration", "def update_config(a, b, mode=\"default\"): \"\"\"Update the configuration a with b.\"\"\" if not b:", "% schema[\"type\"] ) properties = schema[\"properties\"] if section not in properties: raise ValueError(\"Invalid", "\"option_path\" in option mapping %d' % i) _ = index_schema(schema, option_path) def read_options(config,", "isinstance(b_value, dict): a[key] = b_value else: a_value = a.get(key) if a_value is not", "be merged, the override should contain the full list content. config = list(config)", "block.get(\"name\") == section: section_index = i break if section_index is None: raise ValueError(", "if mode == \"replace\": return replace_config(a, b) raise ValueError(\"Invalid configuration update mode: %s\"", "# Since lists can't be merged, the override should contain the full list", "\"\"\"Validate the inference options, raising ValueError on error.\"\"\" json_schema = inference_options.get(\"json_schema\") if json_schema", "value)} if isinstance(config, list): index = int(sections[0]) override = build_override(config[index], inner_path, value) #", "mapping[\"option_path\"]) except ValueError: continue # Option not passed for this request. 
config_path =", "\"vocabulary\" not in config: new_config = copy.deepcopy(config) vocab_src = tok_config[\"source\"].get(\"vocabulary\", None) vocab_tgt =", "e.message) v2_config = is_v2_config(config) operators_options = collections.defaultdict(dict) config_override = {} for mapping in", "options\" % path) schema = properties[section] return schema def validate_inference_options(inference_options, config): \"\"\"Validate the", "False) replace_tgt = tok_config[\"target\"].get(\"replace_vocab\", False) prev_vocab_src = tok_config[\"source\"].get(\"previous_vocabulary\", None) prev_vocab_tgt = tok_config[\"target\"].get(\"previous_vocabulary\", None)", "get_config_version(config): \"\"\"Returns the version of the configuration.\"\"\" return 2 if is_v2_config(config) else 1", "not in properties: raise ValueError(\"Invalid path %s in user options\" % path) schema", "prev_vocab_tgt: new_config[\"vocabulary\"][\"target\"][ \"previous_vocabulary\" ] = prev_vocab_tgt if \"preprocess\" not in config: new_tok_config =", "V2 configuration.\"\"\" preprocess = config.get(\"preprocess\") return ( \"tokenization\" not in config and preprocess", "def validate_inference_options(inference_options, config): \"\"\"Validate the inference options, raising ValueError on error.\"\"\" json_schema =", "not is_v2_config(config) def get_config_version(config): \"\"\"Returns the version of the configuration.\"\"\" return 2 if", "not expect inference options\") try: jsonschema.validate(options, inference_options[\"json_schema\"]) except jsonschema.ValidationError as e: raise ValueError(\"Options", "mode == \"merge\": return merge_config(a, b) if mode == \"replace\": return replace_config(a, b)", "i, block in enumerate(config): if isinstance(block, dict) and block.get(\"name\") == section: section_index =", "in a.\"\"\" for key, b_value in b.items(): if not isinstance(b_value, dict): a[key] =", "array structures\") def index_schema(schema, path): \"\"\"Index a JSON schema with a path-like 
string.\"\"\"", "config_path = [config_path] for cp in config_path: dst_config, _ = index_config(config, cp, index_structure=False)", "saving it in the model directory.\"\"\" if is_v2_config(config): # In V2 operators, we", "config can only represent object and array structures\" ) if index_structure: return config", "sections[0] inner_path = \"/\".join(sections[1:]) if isinstance(config, dict): return {section: build_override(config.get(section), inner_path, value)} if", "in properties: raise ValueError(\"Invalid path %s in user options\" % path) schema =", "def get_config_version(config): \"\"\"Returns the version of the configuration.\"\"\" return 2 if is_v2_config(config) else", "is_v2_config(config) else 1 def ensure_operators_name(config): \"\"\"Make sure all operators in model configuration have", "in params: params.move_to_end(field, last=False) for field in preferred_last: if field in params: params.move_to_end(field,", "the configuration.\"\"\" return 2 if is_v2_config(config) else 1 def ensure_operators_name(config): \"\"\"Make sure all", "v in a.items() if k in _non_user_fields} return replace_config(a, b) if mode ==", "build_override(config, path, value): \"\"\"Builds a configuration override to update the value at path.\"\"\"", "and preprocess is not None and isinstance(preprocess, list) ) def is_v1_config(config): \"\"\"Returns True", "index_config(config, cp, index_structure=False) if not isinstance(dst_config, dict): raise ValueError(\"Paths in config can only", "section = config.get(section_name) if section is None: continue config[section_name] = [_ensure_params_order(params) for params", "operators in model configuration have a unique name.\"\"\" if is_v1_config(config): return i =", "array index in path, but got %s instead\" % section ) config =", "a path-like string.\"\"\" key = None sections = path.split(\"/\") if not index_structure: key", "( \"tokenization\" not in config and preprocess is not None and isinstance(preprocess, list)", "if 
\"preprocess\" not in config: new_tok_config = copy.deepcopy(tok_config) new_tok_config[\"source\"].pop(\"vocabulary\", None) new_tok_config[\"target\"].pop(\"vocabulary\", None) new_tok_config[\"source\"].pop(\"replace_vocab\",", "config): \"\"\"Validate the mapping between inference options and configuration fields, raising ValueError on", "raising ValueError on error. \"\"\" for i, mapping in enumerate(options): config_path = mapping.get(\"config_path\")", "is_v2_config(config) def get_config_version(config): \"\"\"Returns the version of the configuration.\"\"\" return 2 if is_v2_config(config)", "new_tok_config[\"source\"].pop(\"vocabulary\", None) new_tok_config[\"target\"].pop(\"vocabulary\", None) new_tok_config[\"source\"].pop(\"replace_vocab\", None) new_tok_config[\"target\"].pop(\"replace_vocab\", None) new_config[\"preprocess\"] = [ { \"op\":", "= tok_config[\"target\"].get(\"replace_vocab\", False) prev_vocab_src = tok_config[\"source\"].get(\"previous_vocabulary\", None) prev_vocab_tgt = tok_config[\"target\"].get(\"previous_vocabulary\", None) if vocab_src", "validation error: %s\" % e.message) v2_config = is_v2_config(config) operators_options = collections.defaultdict(dict) config_override =", "some fields appear first (or last) for readability. config = config.copy() for section_name", "replace_config(a, b): \"\"\"Updates fields in a by fields in b.\"\"\" a.update(b) return a", "section ) config = config[section_index] else: raise ValueError( \"Paths in config can only", "not None and isinstance(preprocess, list) ) def is_v1_config(config): \"\"\"Returns True if config is", "with 'tokenization' field to include new 'vocabulary' and 'preprocess\" fields.\"\"\" if not config:", "else: a[key] = b_value return a def replace_config(a, b): \"\"\"Updates fields in a", "options were not expected or the value is not accepted. 
\"\"\" inference_options =", "config.get(\"inference_options\") if inference_options is None: raise ValueError(\"This model does not expect inference options\")", "\"\"\"Builds a configuration override to update the value at path.\"\"\" if not path:", "value is not accepted. \"\"\" inference_options = config.get(\"inference_options\") if inference_options is None: raise", "override = build_override(config[index], inner_path, value) # Since lists can't be merged, the override", "\"\"\"Returns True if config is a V2 configuration.\"\"\" preprocess = config.get(\"preprocess\") return (", "return ( \"tokenization\" not in config and preprocess is not None and isinstance(preprocess,", "ValueError( \"Only object types are supported in the schema structure, \" \"but saw", "dict): a[key] = b_value else: a_value = a.get(key) if a_value is not None", "as e: raise ValueError(\"Options validation error: %s\" % e.message) v2_config = is_v2_config(config) operators_options", "op_type = op_config.get(\"op\") if op_type: op_config.setdefault(\"name\", \"%s_%d\" % (op_type, i)) i += 1", "in config_path: merge_config( config_override, build_override(config, cp, option_value), ) if v2_config: return operators_options return", "TypeError(\"Paths in config can only represent object and array structures\") def index_schema(schema, path):", "raise ValueError(\"Invalid path %s in user options\" % path) schema = properties[section] return", "def is_v1_config(config): \"\"\"Returns True if config is a V1 configuration.\"\"\" return not is_v2_config(config)", "config.copy() for section_name in (\"preprocess\", \"postprocess\"): section = config.get(section_name) if section is None:", "and configuration fields, raising ValueError on error. \"\"\" for i, mapping in enumerate(options):", "function returns a dict mapping operator names to their options. 
Raises: ValueError: if", "process_config: op_type = op_config.get(\"op\") if op_type: op_config.setdefault(\"name\", \"%s_%d\" % (op_type, i)) i +=", "key, b_value in b.items(): if not isinstance(b_value, dict): a[key] = b_value else: a_value", "for section in path.split(\"/\"): if schema[\"type\"] != \"object\": raise ValueError( \"Only object types", "= config.get(section_name) if section is None: continue config[section_name] = [_ensure_params_order(params) for params in", "in [\"preprocess\", \"postprocess\"]: process_config = config.get(process) if process_config: for op_config in process_config: op_type", "\"%s_%d\" % (op_type, i)) i += 1 def old_to_new_config(config): \"\"\"Locally update old configuration", "V2 configurations, this function returns a dict mapping operator names to their options.", "new_config = copy.deepcopy(config) vocab_src = tok_config[\"source\"].get(\"vocabulary\", None) vocab_tgt = tok_config[\"target\"].get(\"vocabulary\", None) replace_src =", "preferred_last = [\"overrides\"] for field in reversed(preferred_first): if field in params: params.move_to_end(field, last=False)", "of the configuration.\"\"\" return 2 if is_v2_config(config) else 1 def ensure_operators_name(config): \"\"\"Make sure", "= inference_options.get(\"options\") if options is None: raise ValueError('Missing \"options\" in \"inference_options\"') validate_mapping(json_schema, options,", "= config.get(\"inference_options\") if inference_options is None: raise ValueError(\"This model does not expect inference", "\"replace_vocab\": replace_src, } if vocab_tgt: new_config[\"vocabulary\"][\"target\"] = { \"path\": vocab_tgt, \"replace_vocab\": replace_tgt, }", "# Option not passed for this request. 
config_path = mapping[\"config_path\"] if isinstance(config_path, str):", "the version of the configuration.\"\"\" return 2 if is_v2_config(config) else 1 def ensure_operators_name(config):", "\"config_path\" in option mapping %d' % i) if isinstance(config_path, str): config_path = [config_path]", "and isinstance(a_value, dict): merge_config(a_value, b_value) else: a[key] = b_value return a def replace_config(a,", "raise ValueError('Missing \"option_path\" in option mapping %d' % i) _ = index_schema(schema, option_path)", "def build_override(config, path, value): \"\"\"Builds a configuration override to update the value at", "operators_options[dst_config[\"name\"]].update({dst_key: option_value}) else: for cp in config_path: merge_config( config_override, build_override(config, cp, option_value), )", "params.move_to_end(field, last=False) for field in preferred_last: if field in params: params.move_to_end(field, last=True) return", "override should contain the full list content. config = list(config) if isinstance(override, dict):", "!= \"object\": raise ValueError( \"Only object types are supported in the schema structure,", "= config[section] elif isinstance(config, list): section_index = None try: section_index = int(section) except", "vocab_tgt = tok_config[\"target\"].get(\"vocabulary\", None) replace_src = tok_config[\"source\"].get(\"replace_vocab\", False) replace_tgt = tok_config[\"target\"].get(\"replace_vocab\", False) prev_vocab_src", "= [\"op\", \"name\"] preferred_last = [\"overrides\"] for field in reversed(preferred_first): if field in", "enumerate(options): config_path = mapping.get(\"config_path\") if config_path is None: raise ValueError('Missing \"config_path\" in option", "schema[\"type\"] != \"object\": raise ValueError( \"Only object types are supported in the schema", "key=lambda x: x[0])) preferred_first = [\"op\", \"name\"] preferred_last = [\"overrides\"] for field in", "a[key] = b_value else: a_value = a.get(key) if a_value is not None 
and", "vocab_tgt: new_config[\"vocabulary\"][\"target\"] = { \"path\": vocab_tgt, \"replace_vocab\": replace_tgt, } if prev_vocab_src: new_config[\"vocabulary\"][\"source\"][ \"previous_vocabulary\"", "in config\" % path) config = config[section] elif isinstance(config, list): section_index = None", "in config can only represent object and array structures\") def index_schema(schema, path): \"\"\"Index", "configuration before saving it in the model directory.\"\"\" if is_v2_config(config): # In V2", "(or last) for readability. config = config.copy() for section_name in (\"preprocess\", \"postprocess\"): section", "V1 configurations, this function returns a configuration override. For V2 configurations, this function", "if process_config: for op_config in process_config: op_type = op_config.get(\"op\") if op_type: op_config.setdefault(\"name\", \"%s_%d\"", "For V2 configurations, this function returns a dict mapping operator names to their", "the value is not accepted. \"\"\" inference_options = config.get(\"inference_options\") if inference_options is None:", "None) new_tok_config[\"target\"].pop(\"replace_vocab\", None) new_config[\"preprocess\"] = [ { \"op\": \"tokenization\", \"source\": new_tok_config[\"source\"], \"target\": new_tok_config[\"target\"],", "_non_user_fields} return replace_config(a, b) if mode == \"default\" or mode == \"merge\": return", "else: return config, key def build_override(config, path, value): \"\"\"Builds a configuration override to", "if options is None: raise ValueError('Missing \"options\" in \"inference_options\"') validate_mapping(json_schema, options, config) return", "= int(section) except ValueError: for i, block in enumerate(config): if isinstance(block, dict) and", "\"\"\"Validate the mapping between inference options and configuration fields, raising ValueError on error.", "%d' % i) _ = index_schema(schema, option_path) def read_options(config, options): \"\"\"Reads the inference", "options\") try: 
"""Functions to manipulate and validate configurations."""

import collections
import copy

import jsonschema


def merge_config(a, b):
    """Merges config b in a."""
    for key, b_value in b.items():
        if not isinstance(b_value, dict):
            a[key] = b_value
        else:
            a_value = a.get(key)
            if a_value is not None and isinstance(a_value, dict):
                merge_config(a_value, b_value)
            else:
                a[key] = b_value
    return a


def replace_config(a, b):
    """Updates fields in a by fields in b."""
    a.update(b)
    return a


_non_user_fields = {"model", "modelType", "imageTag", "build", "parent_model"}


def update_config(a, b, mode="default"):
    """Update the configuration a with b."""
    if not b:
        return a
    from_version = get_config_version(a)
    to_version = get_config_version(b)
    if from_version == 1 and to_version == 2:
        # When updating the configuration to a newer version, we clear all user fields.
        a = {k: v for k, v in a.items() if k in _non_user_fields}
    if mode == "default" or mode == "merge":
        return merge_config(a, b)
    if mode == "replace":
        return replace_config(a, b)
    raise ValueError("Invalid configuration update mode: %s" % mode)


def index_config(config, path, index_structure=True):
    """Index a configuration with a path-like string."""
    key = None
    sections = path.split("/")
    if not index_structure:
        key = sections[-1]
        sections = sections[:-1]
    for section in sections:
        if isinstance(config, dict):
            if section not in config:
                raise ValueError("Invalid path %s in config" % path)
            config = config[section]
        elif isinstance(config, list):
            section_index = None
            try:
                section_index = int(section)
            except ValueError:
                for i, block in enumerate(config):
                    if isinstance(block, dict) and block.get("name") == section:
                        section_index = i
                        break
            if section_index is None:
                raise ValueError(
                    "Expected an array index in path, but got %s instead" % section
                )
            config = config[section_index]
        else:
            raise ValueError(
                "Paths in config can only represent object and array structures"
            )
    if index_structure:
        return config
    else:
        return config, key


def build_override(config, path, value):
    """Builds a configuration override to update the value at path."""
    if not path:
        return value
    sections = path.split("/")
    section = sections[0]
    inner_path = "/".join(sections[1:])
    if isinstance(config, dict):
        return {section: build_override(config.get(section), inner_path, value)}
    if isinstance(config, list):
        index = int(sections[0])
        override = build_override(config[index], inner_path, value)
        # Since lists can't be merged, the override should contain the full list content.
        config = list(config)
        if isinstance(override, dict):
            config[index] = merge_config(copy.deepcopy(config[index]), override)
        else:
            config[index] = override
        return config
    raise TypeError("Paths in config can only represent object and array structures")


def index_schema(schema, path):
    """Index a JSON schema with a path-like string."""
    for section in path.split("/"):
        if schema["type"] != "object":
            raise ValueError(
                "Only object types are supported in the schema structure, "
                "but saw type %s" % schema["type"]
            )
        properties = schema["properties"]
        if section not in properties:
            raise ValueError("Invalid path %s in user options" % path)
        schema = properties[section]
    return schema


def validate_inference_options(inference_options, config):
    """Validate the inference options, raising ValueError on error."""
    json_schema = inference_options.get("json_schema")
    if json_schema is None:
        raise ValueError('Missing "json_schema" in "inference_options"')
    jsonschema.Draft7Validator.check_schema(json_schema)
    options = inference_options.get("options")
    if options is None:
        raise ValueError('Missing "options" in "inference_options"')
    validate_mapping(json_schema, options, config)
    return json_schema


def validate_mapping(schema, options, config):
    """Validate the mapping between inference options and configuration fields,
    raising ValueError on error.
    """
    for i, mapping in enumerate(options):
        config_path = mapping.get("config_path")
        if config_path is None:
            raise ValueError('Missing "config_path" in option mapping %d' % i)
        if isinstance(config_path, str):
            config_path = [config_path]
        for cp in config_path:
            dst_config, _ = index_config(config, cp, index_structure=False)
            if not isinstance(dst_config, dict):
                raise ValueError("Paths in config can only index object structures")
        option_path = mapping.get("option_path")
        if option_path is None:
            raise ValueError('Missing "option_path" in option mapping %d' % i)
        _ = index_schema(schema, option_path)


def read_options(config, options):
    """Reads the inference options.

    For V1 configurations, this function returns a configuration override.
    For V2 configurations, this function returns a dict mapping operator names
    to their options.

    Raises:
      ValueError: if inference options were not expected or the value is not accepted.
    """
    inference_options = config.get("inference_options")
    if inference_options is None:
        raise ValueError("This model does not expect inference options")
    try:
        jsonschema.validate(options, inference_options["json_schema"])
    except jsonschema.ValidationError as e:
        raise ValueError("Options validation error: %s" % e.message)
    v2_config = is_v2_config(config)
    operators_options = collections.defaultdict(dict)
    config_override = {}
    for mapping in inference_options["options"]:
        try:
            option_value = index_config(options, mapping["option_path"])
        except ValueError:
            continue  # Option not passed for this request.
        config_path = mapping["config_path"]
        if isinstance(config_path, str):
            config_path = [config_path]
        if v2_config:
            for cp in config_path:
                dst_config, dst_key = index_config(config, cp, index_structure=False)
                operators_options[dst_config["name"]].update({dst_key: option_value})
        else:
            for cp in config_path:
                merge_config(
                    config_override,
                    build_override(config, cp, option_value),
                )
    if v2_config:
        return operators_options
    return config_override


def is_v2_config(config):
    """Returns True if config is a V2 configuration."""
    preprocess = config.get("preprocess")
    return (
        "tokenization" not in config
        and preprocess is not None
        and isinstance(preprocess, list)
    )


def is_v1_config(config):
    """Returns True if config is a V1 configuration."""
    return not is_v2_config(config)


def get_config_version(config):
    """Returns the version of the configuration."""
    return 2 if is_v2_config(config) else 1


def ensure_operators_name(config):
    """Make sure all operators in model configuration have a unique name."""
    if is_v1_config(config):
        return
    i = 1
    for process in ["preprocess", "postprocess"]:
        process_config = config.get(process)
        if process_config:
            for op_config in process_config:
                op_type = op_config.get("op")
                if op_type:
                    op_config.setdefault("name", "%s_%d" % (op_type, i))
                    i += 1


def old_to_new_config(config):
    """Locally update old configuration with 'tokenization' field to include
    new 'vocabulary' and 'preprocess' fields."""
    if not config:
        return
    tok_config = config.get("tokenization")
    new_config = config
    if tok_config:
        if "vocabulary" not in config:
            new_config = copy.deepcopy(config)
            vocab_src = tok_config["source"].get("vocabulary", None)
            vocab_tgt = tok_config["target"].get("vocabulary", None)
            replace_src = tok_config["source"].get("replace_vocab", False)
            replace_tgt = tok_config["target"].get("replace_vocab", False)
            prev_vocab_src = tok_config["source"].get("previous_vocabulary", None)
            prev_vocab_tgt = tok_config["target"].get("previous_vocabulary", None)
            if vocab_src or vocab_tgt:
                new_config["vocabulary"] = {}
                if vocab_src:
                    new_config["vocabulary"]["source"] = {
                        "path": vocab_src,
                        "replace_vocab": replace_src,
                    }
                if vocab_tgt:
                    new_config["vocabulary"]["target"] = {
                        "path": vocab_tgt,
                        "replace_vocab": replace_tgt,
                    }
                if prev_vocab_src:
                    new_config["vocabulary"]["source"][
                        "previous_vocabulary"
                    ] = prev_vocab_src
                if prev_vocab_tgt:
                    new_config["vocabulary"]["target"][
                        "previous_vocabulary"
                    ] = prev_vocab_tgt
        if "preprocess" not in config:
            new_tok_config = copy.deepcopy(tok_config)
            new_tok_config["source"].pop("vocabulary", None)
            new_tok_config["target"].pop("vocabulary", None)
            new_tok_config["source"].pop("replace_vocab", None)
            new_tok_config["target"].pop("replace_vocab", None)
            new_config["preprocess"] = [
                {
                    "op": "tokenization",
                    "source": new_tok_config["source"],
                    "target": new_tok_config["target"],
                }
            ]
    return new_config


def _ensure_params_order(params):
    params = collections.OrderedDict(sorted(params.items(), key=lambda x: x[0]))
    preferred_first = ["op", "name"]
    preferred_last = ["overrides"]
    for field in reversed(preferred_first):
        if field in params:
            params.move_to_end(field, last=False)
    for field in preferred_last:
        if field in params:
            params.move_to_end(field, last=True)
    return params


def prepare_config_for_save(config):
    """Prepares the configuration before saving it in the model directory."""
    if is_v2_config(config):
        # In V2 operators, we prefer that some fields appear first (or last) for readability.
        config = config.copy()
        for section_name in ("preprocess", "postprocess"):
            section = config.get(section_name)
            if section is None:
                continue
            config[section_name] = [_ensure_params_order(params) for params in section]
    return config
a = {k: v for k, v", "In V2 operators, we prefer that some fields appear first (or last) for", "not in config: new_config = copy.deepcopy(config) vocab_src = tok_config[\"source\"].get(\"vocabulary\", None) vocab_tgt = tok_config[\"target\"].get(\"vocabulary\",", "is_v1_config(config): \"\"\"Returns True if config is a V1 configuration.\"\"\" return not is_v2_config(config) def", "config.get(process) if process_config: for op_config in process_config: op_type = op_config.get(\"op\") if op_type: op_config.setdefault(\"name\",", "for field in reversed(preferred_first): if field in params: params.move_to_end(field, last=False) for field in", "override to update the value at path.\"\"\" if not path: return value sections", "is None: raise ValueError('Missing \"config_path\" in option mapping %d' % i) if isinstance(config_path,", "try: section_index = int(section) except ValueError: for i, block in enumerate(config): if isinstance(block,", "= inference_options.get(\"json_schema\") if json_schema is None: raise ValueError('Missing \"json_schema\" in \"inference_options\"') jsonschema.Draft7Validator.check_schema(json_schema) options", "dict): return {section: build_override(config.get(section), inner_path, value)} if isinstance(config, list): index = int(sections[0]) override", "\"\"\" for i, mapping in enumerate(options): config_path = mapping.get(\"config_path\") if config_path is None:", "config and preprocess is not None and isinstance(preprocess, list) ) def is_v1_config(config): \"\"\"Returns", "raise ValueError(\"Paths in config can only index object structures\") option_path = mapping.get(\"option_path\") if", "def old_to_new_config(config): \"\"\"Locally update old configuration with 'tokenization' field to include new 'vocabulary'", "options, config) return json_schema def validate_mapping(schema, options, config): \"\"\"Validate the mapping between inference", "options. 
Raises: ValueError: if inference options were not expected or the value is", "the model directory.\"\"\" if is_v2_config(config): # In V2 operators, we prefer that some", "and array structures\" ) if index_structure: return config else: return config, key def", "build_override(config.get(section), inner_path, value)} if isinstance(config, list): index = int(sections[0]) override = build_override(config[index], inner_path,", "\"json_schema\" in \"inference_options\"') jsonschema.Draft7Validator.check_schema(json_schema) options = inference_options.get(\"options\") if options is None: raise ValueError('Missing", "%s\" % mode) def index_config(config, path, index_structure=True): \"\"\"Index a configuration with a path-like", "# When updating the configuration to a newer version, we clear all user", "\"Expected an array index in path, but got %s instead\" % section )", "for k, v in a.items() if k in _non_user_fields} return replace_config(a, b) if", "\"\"\"Make sure all operators in model configuration have a unique name.\"\"\" if is_v1_config(config):", "error.\"\"\" json_schema = inference_options.get(\"json_schema\") if json_schema is None: raise ValueError('Missing \"json_schema\" in \"inference_options\"')", "None and isinstance(a_value, dict): merge_config(a_value, b_value) else: a[key] = b_value return a def", "schema def validate_inference_options(inference_options, config): \"\"\"Validate the inference options, raising ValueError on error.\"\"\" json_schema", "were not expected or the value is not accepted. \"\"\" inference_options = config.get(\"inference_options\")", "should contain the full list content. 
config = list(config) if isinstance(override, dict): config[index]", "a.items() if k in _non_user_fields} return replace_config(a, b) if mode == \"default\" or", "None and isinstance(preprocess, list) ) def is_v1_config(config): \"\"\"Returns True if config is a", "b): \"\"\"Updates fields in a by fields in b.\"\"\" a.update(b) return a _non_user_fields", "% e.message) v2_config = is_v2_config(config) operators_options = collections.defaultdict(dict) config_override = {} for mapping", "e: raise ValueError(\"Options validation error: %s\" % e.message) v2_config = is_v2_config(config) operators_options =", "\"merge\": return merge_config(a, b) if mode == \"replace\": return replace_config(a, b) raise ValueError(\"Invalid", "= [ { \"op\": \"tokenization\", \"source\": new_tok_config[\"source\"], \"target\": new_tok_config[\"target\"], } ] return new_config", "v2_config: for cp in config_path: dst_config, dst_key = index_config(config, cp, index_structure=False) operators_options[dst_config[\"name\"]].update({dst_key: option_value})", "= config.get(\"tokenization\") new_config = config if tok_config: if \"vocabulary\" not in config: new_config", "mode) def index_config(config, path, index_structure=True): \"\"\"Index a configuration with a path-like string.\"\"\" key", "is None: raise ValueError('Missing \"json_schema\" in \"inference_options\"') jsonschema.Draft7Validator.check_schema(json_schema) options = inference_options.get(\"options\") if options", "\"\"\"Update the configuration a with b.\"\"\" if not b: return a from_version =", "index object structures\") option_path = mapping.get(\"option_path\") if option_path is None: raise ValueError('Missing \"option_path\"", "mapping.get(\"option_path\") if option_path is None: raise ValueError('Missing \"option_path\" in option mapping %d' %", "and block.get(\"name\") == section: section_index = i break if section_index is None: raise", "raise ValueError( \"Only object types are supported in the schema structure, \" 
\"but", "if k in _non_user_fields} return replace_config(a, b) if mode == \"default\" or mode", "Option not passed for this request. config_path = mapping[\"config_path\"] if isinstance(config_path, str): config_path", "None) new_tok_config[\"target\"].pop(\"vocabulary\", None) new_tok_config[\"source\"].pop(\"replace_vocab\", None) new_tok_config[\"target\"].pop(\"replace_vocab\", None) new_config[\"preprocess\"] = [ { \"op\": \"tokenization\",", "is_v2_config(config): # In V2 operators, we prefer that some fields appear first (or", "def prepare_config_for_save(config): \"\"\"Prepares the configuration before saving it in the model directory.\"\"\" if", "for section_name in (\"preprocess\", \"postprocess\"): section = config.get(section_name) if section is None: continue", "None) prev_vocab_tgt = tok_config[\"target\"].get(\"previous_vocabulary\", None) if vocab_src or vocab_tgt: new_config[\"vocabulary\"] = {} if", "= path.split(\"/\") if not index_structure: key = sections[-1] sections = sections[:-1] for section", "config raise TypeError(\"Paths in config can only represent object and array structures\") def", "new_tok_config[\"source\"], \"target\": new_tok_config[\"target\"], } ] return new_config def _ensure_params_order(params): params = collections.OrderedDict(sorted(params.items(), key=lambda", "{section: build_override(config.get(section), inner_path, value)} if isinstance(config, list): index = int(sections[0]) override = build_override(config[index],", "b) if mode == \"replace\": return replace_config(a, b) raise ValueError(\"Invalid configuration update mode:", "inference options\") try: jsonschema.validate(options, inference_options[\"json_schema\"]) except jsonschema.ValidationError as e: raise ValueError(\"Options validation error:", "k, v in a.items() if k in _non_user_fields} return replace_config(a, b) if mode", "% path) schema = properties[section] return schema def validate_inference_options(inference_options, config): \"\"\"Validate the 
inference", "[config_path] if v2_config: for cp in config_path: dst_config, dst_key = index_config(config, cp, index_structure=False)", "can't be merged, the override should contain the full list content. config =", "if tok_config: if \"vocabulary\" not in config: new_config = copy.deepcopy(config) vocab_src = tok_config[\"source\"].get(\"vocabulary\",", "config.get(\"preprocess\") return ( \"tokenization\" not in config and preprocess is not None and", "config_override def is_v2_config(config): \"\"\"Returns True if config is a V2 configuration.\"\"\" preprocess =", "a_value is not None and isinstance(a_value, dict): merge_config(a_value, b_value) else: a[key] = b_value", "\"but saw type %s\" % schema[\"type\"] ) properties = schema[\"properties\"] if section not", "\"\"\"Prepares the configuration before saving it in the model directory.\"\"\" if is_v2_config(config): #", "returns a dict mapping operator names to their options. Raises: ValueError: if inference", "this function returns a configuration override. For V2 configurations, this function returns a", "on error. \"\"\" for i, mapping in enumerate(options): config_path = mapping.get(\"config_path\") if config_path", "in \"inference_options\"') jsonschema.Draft7Validator.check_schema(json_schema) options = inference_options.get(\"options\") if options is None: raise ValueError('Missing \"options\"", "configurations.\"\"\" import collections import jsonschema import copy def merge_config(a, b): \"\"\"Merges config b", "section_index = i break if section_index is None: raise ValueError( \"Expected an array", "configuration.\"\"\" return not is_v2_config(config) def get_config_version(config): \"\"\"Returns the version of the configuration.\"\"\" return", "} ] return new_config def _ensure_params_order(params): params = collections.OrderedDict(sorted(params.items(), key=lambda x: x[0])) preferred_first", "read_options(config, options): \"\"\"Reads the inference options. 
For V1 configurations, this function returns a", "replace_src, } if vocab_tgt: new_config[\"vocabulary\"][\"target\"] = { \"path\": vocab_tgt, \"replace_vocab\": replace_tgt, } if", "[ { \"op\": \"tokenization\", \"source\": new_tok_config[\"source\"], \"target\": new_tok_config[\"target\"], } ] return new_config def", "fields in b.\"\"\" a.update(b) return a _non_user_fields = {\"model\", \"modelType\", \"imageTag\", \"build\", \"parent_model\"}", "raise ValueError( \"Expected an array index in path, but got %s instead\" %", "user fields. a = {k: v for k, v in a.items() if k", "only represent object and array structures\" ) if index_structure: return config else: return", "if not isinstance(dst_config, dict): raise ValueError(\"Paths in config can only index object structures\")", "else: for cp in config_path: merge_config( config_override, build_override(config, cp, option_value), ) if v2_config:", "in a by fields in b.\"\"\" a.update(b) return a _non_user_fields = {\"model\", \"modelType\",", "isinstance(config_path, str): config_path = [config_path] for cp in config_path: dst_config, _ = index_config(config,", "None: raise ValueError( \"Expected an array index in path, but got %s instead\"", "is_v2_config(config): \"\"\"Returns True if config is a V2 configuration.\"\"\" preprocess = config.get(\"preprocess\") return", "collections.defaultdict(dict) config_override = {} for mapping in inference_options[\"options\"]: try: option_value = index_config(options, mapping[\"option_path\"])", "{} for mapping in inference_options[\"options\"]: try: option_value = index_config(options, mapping[\"option_path\"]) except ValueError: continue", "V2 operators, we prefer that some fields appear first (or last) for readability.", "this function returns a dict mapping operator names to their options. 
Raises: ValueError:", "b, mode=\"default\"): \"\"\"Update the configuration a with b.\"\"\" if not b: return a", "prev_vocab_tgt = tok_config[\"target\"].get(\"previous_vocabulary\", None) if vocab_src or vocab_tgt: new_config[\"vocabulary\"] = {} if vocab_src:", "newer version, we clear all user fields. a = {k: v for k,", "index_structure: return config else: return config, key def build_override(config, path, value): \"\"\"Builds a", "in a.items() if k in _non_user_fields} return replace_config(a, b) if mode == \"default\"", "config[index] = merge_config(copy.deepcopy(config[index]), override) else: config[index] = override return config raise TypeError(\"Paths in", "fields. a = {k: v for k, v in a.items() if k in", "properties[section] return schema def validate_inference_options(inference_options, config): \"\"\"Validate the inference options, raising ValueError on", "we prefer that some fields appear first (or last) for readability. config =", "ValueError: if inference options were not expected or the value is not accepted.", "i)) i += 1 def old_to_new_config(config): \"\"\"Locally update old configuration with 'tokenization' field", "mapping.get(\"config_path\") if config_path is None: raise ValueError('Missing \"config_path\" in option mapping %d' %", "return replace_config(a, b) if mode == \"default\" or mode == \"merge\": return merge_config(a,", "the schema structure, \" \"but saw type %s\" % schema[\"type\"] ) properties =", "= tok_config[\"target\"].get(\"vocabulary\", None) replace_src = tok_config[\"source\"].get(\"replace_vocab\", False) replace_tgt = tok_config[\"target\"].get(\"replace_vocab\", False) prev_vocab_src =", "from_version == 1 and to_version == 2: # When updating the configuration to", "None sections = path.split(\"/\") if not index_structure: key = sections[-1] sections = sections[:-1]", "structures\" ) if index_structure: return config else: return config, key def build_override(config, path,", "block in enumerate(config): if 
isinstance(block, dict) and block.get(\"name\") == section: section_index = i", "for cp in config_path: dst_config, dst_key = index_config(config, cp, index_structure=False) operators_options[dst_config[\"name\"]].update({dst_key: option_value}) else:", "params def prepare_config_for_save(config): \"\"\"Prepares the configuration before saving it in the model directory.\"\"\"", "new_tok_config = copy.deepcopy(tok_config) new_tok_config[\"source\"].pop(\"vocabulary\", None) new_tok_config[\"target\"].pop(\"vocabulary\", None) new_tok_config[\"source\"].pop(\"replace_vocab\", None) new_tok_config[\"target\"].pop(\"replace_vocab\", None) new_config[\"preprocess\"] =", "= tok_config[\"source\"].get(\"vocabulary\", None) vocab_tgt = tok_config[\"target\"].get(\"vocabulary\", None) replace_src = tok_config[\"source\"].get(\"replace_vocab\", False) replace_tgt =", "def replace_config(a, b): \"\"\"Updates fields in a by fields in b.\"\"\" a.update(b) return", "the configuration a with b.\"\"\" if not b: return a from_version = get_config_version(a)", "configuration to a newer version, we clear all user fields. a = {k:", "get_config_version(a) to_version = get_config_version(b) if from_version == 1 and to_version == 2: #", "process_config: for op_config in process_config: op_type = op_config.get(\"op\") if op_type: op_config.setdefault(\"name\", \"%s_%d\" %", "tok_config: if \"vocabulary\" not in config: new_config = copy.deepcopy(config) vocab_src = tok_config[\"source\"].get(\"vocabulary\", None)", "= None sections = path.split(\"/\") if not index_structure: key = sections[-1] sections =", "to_version == 2: # When updating the configuration to a newer version, we", "= int(sections[0]) override = build_override(config[index], inner_path, value) # Since lists can't be merged,", "inference options were not expected or the value is not accepted. 
\"\"\" inference_options", "for section in sections: if isinstance(config, dict): if section not in config: raise", "return {section: build_override(config.get(section), inner_path, value)} if isinstance(config, list): index = int(sections[0]) override =", "dict): if section not in config: raise ValueError(\"Invalid path %s in config\" %", "represent object and array structures\" ) if index_structure: return config else: return config,", "config: return tok_config = config.get(\"tokenization\") new_config = config if tok_config: if \"vocabulary\" not", "def read_options(config, options): \"\"\"Reads the inference options. For V1 configurations, this function returns", "content. config = list(config) if isinstance(override, dict): config[index] = merge_config(copy.deepcopy(config[index]), override) else: config[index]", "options, raising ValueError on error.\"\"\" json_schema = inference_options.get(\"json_schema\") if json_schema is None: raise", "in user options\" % path) schema = properties[section] return schema def validate_inference_options(inference_options, config):", "x[0])) preferred_first = [\"op\", \"name\"] preferred_last = [\"overrides\"] for field in reversed(preferred_first): if", "reversed(preferred_first): if field in params: params.move_to_end(field, last=False) for field in preferred_last: if field", "is a V2 configuration.\"\"\" preprocess = config.get(\"preprocess\") return ( \"tokenization\" not in config", "inner_path, value) # Since lists can't be merged, the override should contain the", "False) prev_vocab_src = tok_config[\"source\"].get(\"previous_vocabulary\", None) prev_vocab_tgt = tok_config[\"target\"].get(\"previous_vocabulary\", None) if vocab_src or vocab_tgt:", "not config: return tok_config = config.get(\"tokenization\") new_config = config if tok_config: if \"vocabulary\"", "expect inference options\") try: jsonschema.validate(options, inference_options[\"json_schema\"]) except jsonschema.ValidationError as e: raise 
ValueError(\"Options validation", "_ = index_schema(schema, option_path) def read_options(config, options): \"\"\"Reads the inference options. For V1", "validate_inference_options(inference_options, config): \"\"\"Validate the inference options, raising ValueError on error.\"\"\" json_schema = inference_options.get(\"json_schema\")", "config can only index object structures\") option_path = mapping.get(\"option_path\") if option_path is None:", "str): config_path = [config_path] if v2_config: for cp in config_path: dst_config, dst_key =", "new_config[\"vocabulary\"] = {} if vocab_src: new_config[\"vocabulary\"][\"source\"] = { \"path\": vocab_src, \"replace_vocab\": replace_src, }", "isinstance(preprocess, list) ) def is_v1_config(config): \"\"\"Returns True if config is a V1 configuration.\"\"\"", "i break if section_index is None: raise ValueError( \"Expected an array index in", "process_config = config.get(process) if process_config: for op_config in process_config: op_type = op_config.get(\"op\") if", "section_index = int(section) except ValueError: for i, block in enumerate(config): if isinstance(block, dict)", "if prev_vocab_tgt: new_config[\"vocabulary\"][\"target\"][ \"previous_vocabulary\" ] = prev_vocab_tgt if \"preprocess\" not in config: new_tok_config", "\"modelType\", \"imageTag\", \"build\", \"parent_model\"} def update_config(a, b, mode=\"default\"): \"\"\"Update the configuration a with", "sure all operators in model configuration have a unique name.\"\"\" if is_v1_config(config): return", "for process in [\"preprocess\", \"postprocess\"]: process_config = config.get(process) if process_config: for op_config in", "] return new_config def _ensure_params_order(params): params = collections.OrderedDict(sorted(params.items(), key=lambda x: x[0])) preferred_first =", "b.\"\"\" a.update(b) return a _non_user_fields = {\"model\", \"modelType\", \"imageTag\", \"build\", \"parent_model\"} def update_config(a,", "= build_override(config[index], inner_path, 
value) # Since lists can't be merged, the override should", "i += 1 def old_to_new_config(config): \"\"\"Locally update old configuration with 'tokenization' field to", "config.get(section_name) if section is None: continue config[section_name] = [_ensure_params_order(params) for params in section]", "in config: new_tok_config = copy.deepcopy(tok_config) new_tok_config[\"source\"].pop(\"vocabulary\", None) new_tok_config[\"target\"].pop(\"vocabulary\", None) new_tok_config[\"source\"].pop(\"replace_vocab\", None) new_tok_config[\"target\"].pop(\"replace_vocab\", None)", "in b.items(): if not isinstance(b_value, dict): a[key] = b_value else: a_value = a.get(key)", "a.update(b) return a _non_user_fields = {\"model\", \"modelType\", \"imageTag\", \"build\", \"parent_model\"} def update_config(a, b,", "copy def merge_config(a, b): \"\"\"Merges config b in a.\"\"\" for key, b_value in", "[config_path] for cp in config_path: dst_config, _ = index_config(config, cp, index_structure=False) if not", "'vocabulary' and 'preprocess\" fields.\"\"\" if not config: return tok_config = config.get(\"tokenization\") new_config =", "appear first (or last) for readability. 
config = config.copy() for section_name in (\"preprocess\",", "config b in a.\"\"\" for key, b_value in b.items(): if not isinstance(b_value, dict):", "if from_version == 1 and to_version == 2: # When updating the configuration", ") if v2_config: return operators_options return config_override def is_v2_config(config): \"\"\"Returns True if config", "field to include new 'vocabulary' and 'preprocess\" fields.\"\"\" if not config: return tok_config", "mapping in inference_options[\"options\"]: try: option_value = index_config(options, mapping[\"option_path\"]) except ValueError: continue # Option", "\"Paths in config can only represent object and array structures\" ) if index_structure:", "== 1 and to_version == 2: # When updating the configuration to a", "section: section_index = i break if section_index is None: raise ValueError( \"Expected an", "<filename>nmtwizard/config.py \"\"\"Functions to manipulate and validate configurations.\"\"\" import collections import jsonschema import copy", "ValueError on error. \"\"\" for i, mapping in enumerate(options): config_path = mapping.get(\"config_path\") if", "new_tok_config[\"target\"].pop(\"replace_vocab\", None) new_config[\"preprocess\"] = [ { \"op\": \"tokenization\", \"source\": new_tok_config[\"source\"], \"target\": new_tok_config[\"target\"], }", "{k: v for k, v in a.items() if k in _non_user_fields} return replace_config(a,", "ValueError(\"Paths in config can only index object structures\") option_path = mapping.get(\"option_path\") if option_path", "+= 1 def old_to_new_config(config): \"\"\"Locally update old configuration with 'tokenization' field to include", "b in a.\"\"\" for key, b_value in b.items(): if not isinstance(b_value, dict): a[key]", "the override should contain the full list content. 
config = list(config) if isinstance(override,", "= copy.deepcopy(tok_config) new_tok_config[\"source\"].pop(\"vocabulary\", None) new_tok_config[\"target\"].pop(\"vocabulary\", None) new_tok_config[\"source\"].pop(\"replace_vocab\", None) new_tok_config[\"target\"].pop(\"replace_vocab\", None) new_config[\"preprocess\"] = [", "object structures\") option_path = mapping.get(\"option_path\") if option_path is None: raise ValueError('Missing \"option_path\" in", "not index_structure: key = sections[-1] sections = sections[:-1] for section in sections: if", "tok_config[\"source\"].get(\"previous_vocabulary\", None) prev_vocab_tgt = tok_config[\"target\"].get(\"previous_vocabulary\", None) if vocab_src or vocab_tgt: new_config[\"vocabulary\"] = {}", "in config: raise ValueError(\"Invalid path %s in config\" % path) config = config[section]", "value): \"\"\"Builds a configuration override to update the value at path.\"\"\" if not", "option_value = index_config(options, mapping[\"option_path\"]) except ValueError: continue # Option not passed for this", "configurations, this function returns a dict mapping operator names to their options. 
Raises:", "path): \"\"\"Index a JSON schema with a path-like string.\"\"\" for section in path.split(\"/\"):", "\"\"\"Index a JSON schema with a path-like string.\"\"\" for section in path.split(\"/\"): if", "raise ValueError('Missing \"options\" in \"inference_options\"') validate_mapping(json_schema, options, config) return json_schema def validate_mapping(schema, options,", "if field in params: params.move_to_end(field, last=True) return params def prepare_config_for_save(config): \"\"\"Prepares the configuration", "\"\"\"Merges config b in a.\"\"\" for key, b_value in b.items(): if not isinstance(b_value,", "model configuration have a unique name.\"\"\" if is_v1_config(config): return i = 1 for", "params = collections.OrderedDict(sorted(params.items(), key=lambda x: x[0])) preferred_first = [\"op\", \"name\"] preferred_last = [\"overrides\"]", "for cp in config_path: dst_config, _ = index_config(config, cp, index_structure=False) if not isinstance(dst_config,", "== \"merge\": return merge_config(a, b) if mode == \"replace\": return replace_config(a, b) raise", "ValueError('Missing \"options\" in \"inference_options\"') validate_mapping(json_schema, options, config) return json_schema def validate_mapping(schema, options, config):", "cp in config_path: dst_config, _ = index_config(config, cp, index_structure=False) if not isinstance(dst_config, dict):", "# In V2 operators, we prefer that some fields appear first (or last)", "a.\"\"\" for key, b_value in b.items(): if not isinstance(b_value, dict): a[key] = b_value", "config[section] elif isinstance(config, list): section_index = None try: section_index = int(section) except ValueError:", "isinstance(config, list): index = int(sections[0]) override = build_override(config[index], inner_path, value) # Since lists", "schema with a path-like string.\"\"\" for section in path.split(\"/\"): if schema[\"type\"] != \"object\":", "jsonschema.validate(options, inference_options[\"json_schema\"]) except 
jsonschema.ValidationError as e: raise ValueError(\"Options validation error: %s\" % e.message)", "not in config and preprocess is not None and isinstance(preprocess, list) ) def", "return json_schema def validate_mapping(schema, options, config): \"\"\"Validate the mapping between inference options and", "inference options. For V1 configurations, this function returns a configuration override. For V2", "return not is_v2_config(config) def get_config_version(config): \"\"\"Returns the version of the configuration.\"\"\" return 2", "return a from_version = get_config_version(a) to_version = get_config_version(b) if from_version == 1 and", "config can only represent object and array structures\") def index_schema(schema, path): \"\"\"Index a", "mapping in enumerate(options): config_path = mapping.get(\"config_path\") if config_path is None: raise ValueError('Missing \"config_path\"", "None) new_tok_config[\"source\"].pop(\"replace_vocab\", None) new_tok_config[\"target\"].pop(\"replace_vocab\", None) new_config[\"preprocess\"] = [ { \"op\": \"tokenization\", \"source\": new_tok_config[\"source\"],", "in sections: if isinstance(config, dict): if section not in config: raise ValueError(\"Invalid path", "is_v1_config(config): return i = 1 for process in [\"preprocess\", \"postprocess\"]: process_config = config.get(process)", "\"default\" or mode == \"merge\": return merge_config(a, b) if mode == \"replace\": return", "value at path.\"\"\" if not path: return value sections = path.split(\"/\") section =", "b): \"\"\"Merges config b in a.\"\"\" for key, b_value in b.items(): if not", "merge_config(a_value, b_value) else: a[key] = b_value return a def replace_config(a, b): \"\"\"Updates fields", "the mapping between inference options and configuration fields, raising ValueError on error. 
\"\"\"", "a_value = a.get(key) if a_value is not None and isinstance(a_value, dict): merge_config(a_value, b_value)", "copy.deepcopy(config) vocab_src = tok_config[\"source\"].get(\"vocabulary\", None) vocab_tgt = tok_config[\"target\"].get(\"vocabulary\", None) replace_src = tok_config[\"source\"].get(\"replace_vocab\", False)", "properties = schema[\"properties\"] if section not in properties: raise ValueError(\"Invalid path %s in", "function returns a configuration override. For V2 configurations, this function returns a dict", "return params def prepare_config_for_save(config): \"\"\"Prepares the configuration before saving it in the model", "= {\"model\", \"modelType\", \"imageTag\", \"build\", \"parent_model\"} def update_config(a, b, mode=\"default\"): \"\"\"Update the configuration" ]
[ "dist[ind1,ind2] <= cluster_threshold: # if ind1 == 2: # print(clust_inds) # print(ind1, ind2,", "nodes with the smallest total value. \"\"\" M = partition_tree.shape[0] + 1 ptind", "ind1 elif i0 > ptind + M: partition_tree_new[i,0] -= 1 if i1 ==", "feature_order): \"\"\" Returns a sorted order of the values where we respect the", "= fill_internal_max_values(partition_tree, leaf_values) pos = partition_tree.shape[0]-1 M = partition_tree.shape[0] + 1 if pos", "dist[ind1,next_ind] > cluster_threshold or clust_inds[ind2] < clust_inds[next_ind]: next_ind = ind2 next_ind_pos = j", "elif i1 > ptind + M: partition_tree_new[i,1] -= 1 partition_tree_new = np.delete(partition_tree_new, ptind,", "val += partition_tree[ind,3] partition_tree[i,3] = val def sort_inds(partition_tree, leaf_values, pos=None, inds=None): if inds", "leaf_positions, partition_tree, xout, yout) return np.array(xout), np.array(yout) def _dendrogram_coords_rec(pos, leaf_positions, partition_tree, xout, yout):", "left = int(partition_tree[pos, 0]) - M right = int(partition_tree[pos, 1]) - M x_left,", "left_val < right_val: tmp = right right = left left = tmp sort_inds(partition_tree,", "/ partition_tree[ind,2]) if new_tree[i,1] < M: ind = int(new_tree[i,1]) val = max(val, np.abs(leaf_values[ind]))", "fill_counts(partition_tree): \"\"\" This updates the \"\"\" M = partition_tree.shape[0] + 1 for i", "= i #print(\"ptind\", ptind, min_val) ind1 = int(partition_tree[ptind,0]) ind2 = int(partition_tree[ptind,1]) if ind1", "== 2: # print(ind1, ind2, dist[ind1,ind2]) if dist[ind1,ind2] <= cluster_threshold: # if ind1", "1 ptind = 0 min_val = np.inf for i in range(partition_tree.shape[0]): ind1 =", "y coords of the lines of a dendrogram where the leaf order is", "ind1 == 2: # print(clust_inds) # print(ind1, ind2, next_ind, dist[ind1,ind2], clust_inds[ind2], clust_inds[next_ind]) if", "this reimplementation. 
\"\"\" xout = [] yout = [] _dendrogram_coords_rec(partition_tree.shape[0]-1, leaf_positions, partition_tree, xout,", "dist[i,j] < cluster_threshold \"\"\" #feature_imp = np.abs(values) # if partition_tree is not None:", "M: partition_tree_new[i,0] = ind1 elif i0 > ptind + M: partition_tree_new[i,0] -= 1", "(x_left + x_right) / 2, y_curr def fill_internal_max_values(partition_tree, leaf_values): \"\"\" This fills the", "1 else: ind = int(partition_tree[i,1])-M val += partition_tree[ind,3] partition_tree[i,3] = val def sort_inds(partition_tree,", "x and y coords of the lines of a dendrogram where the leaf", "= j # print(\"next_ind\", next_ind) # print(\"next_ind_pos\", next_ind_pos) # insert the next_ind next", "ind2 = tmp partition_tree_new = partition_tree.copy() for i in range(partition_tree_new.shape[0]): i0 = int(partition_tree_new[i,0])", "< clust_inds[next_ind]: next_ind = ind2 next_ind_pos = j # print(\"next_ind\", next_ind) # print(\"next_ind_pos\",", "= partition_tree.shape[0] + 1 if pos < 0: return leaf_positions[pos + M], 0", "j) feature_order[j] = feature_order[j-1] feature_order[i+1] = next_ind #print(feature_order) return feature_order def merge_nodes(values, partition_tree):", "order, hence this reimplementation. 
\"\"\" xout = [] yout = [] _dendrogram_coords_rec(partition_tree.shape[0]-1, leaf_positions,", "2] xout.append([x_left, x_left, x_right, x_right]) yout.append([y_left, y_curr, y_curr, y_right]) return (x_left + x_right)", "# print(\"next_ind_pos\", next_ind_pos) # insert the next_ind next for j in range(next_ind_pos, i+1,", "when dist[i,j] < cluster_threshold \"\"\" #feature_imp = np.abs(values) # if partition_tree is not", "ind = int(partition_tree[i,1])-M val += partition_tree[ind,3] partition_tree[i,3] = val def sort_inds(partition_tree, leaf_values, pos=None,", "dist[ind1,ind2], clust_inds[ind2], clust_inds[next_ind]) if dist[ind1,next_ind] > cluster_threshold or clust_inds[ind2] < clust_inds[next_ind]: next_ind =", "M x_left, y_left = _dendrogram_coords_rec(left, leaf_positions, partition_tree, xout, yout) x_right, y_right = _dendrogram_coords_rec(right,", "+ M) return left = int(partition_tree[pos, 0]) - M right = int(partition_tree[pos, 1])", "val ptind = i #print(\"ptind\", ptind, min_val) ind1 = int(partition_tree[ptind,0]) ind2 = int(partition_tree[ptind,1])", "0 if new_tree[i,0] < M: ind = int(new_tree[i,0]) val = max(val, np.abs(leaf_values[ind])) else:", "max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,1])-M val = max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2])", "= ind1 ind1 = ind2 ind2 = tmp partition_tree_new = partition_tree.copy() for i", "in range(partition_tree.shape[0]): ind1 = int(partition_tree[i,0]) ind2 = int(partition_tree[i,1]) if ind1 < M and", "M: val = np.abs(values[ind1]) + np.abs(values[ind2]) if val < min_val: min_val = val", "next_ind) # print(\"next_ind_pos\", next_ind_pos) # insert the next_ind next for j in range(next_ind_pos,", "y_curr def fill_internal_max_values(partition_tree, leaf_values): \"\"\" This fills the forth column of the partition", "order of the values where we respect the clustering order when dist[i,j] <", "= tmp partition_tree_new = partition_tree.copy() for i in 
range(partition_tree_new.shape[0]): i0 = int(partition_tree_new[i,0]) i1", "val = max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,0])-M val = max(val, np.abs(new_tree[ind,3])) #", "new_tree[i,1] < M: ind = int(new_tree[i,1]) val = max(val, np.abs(leaf_values[ind])) else: ind =", "ptind + M: partition_tree_new[i,0] = ind1 elif i0 > ptind + M: partition_tree_new[i,0]", "# update the counts to be correct fill_counts(partition_tree_new) return partition_tree_new, ind1, ind2 def", "else: ind = int(new_tree[i,1])-M val = max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) new_tree[i,3] =", "partition_tree = fill_internal_max_values(partition_tree, leaf_values) pos = partition_tree.shape[0]-1 M = partition_tree.shape[0] + 1 if", "if pos < 0: return leaf_positions[pos + M], 0 left = int(partition_tree[pos, 0])", "i in range(partition_tree.shape[0]): val = 0 if partition_tree[i,0] < M: ind = int(partition_tree[i,0])", "coords as well, but it does not allow you to easily specify a", "ordering def get_sort_order(dist, clust_order, cluster_threshold, feature_order): \"\"\" Returns a sorted order of the", "M: partition_tree_new[i,1] = ind1 elif i1 > ptind + M: partition_tree_new[i,1] -= 1", "+ M] if left_val < right_val: tmp = right right = left left", "val < min_val: min_val = val ptind = i #print(\"ptind\", ptind, min_val) ind1", "ind = int(partition_tree[i,0])-M val += partition_tree[ind,3] if partition_tree[i,1] < M: ind = int(partition_tree[i,1])", "-= 1 if i1 == ptind + M: partition_tree_new[i,1] = ind1 elif i1", "next for j in range(next_ind_pos, i+1, -1): #print(\"j\", j) feature_order[j] = feature_order[j-1] feature_order[i+1]", "left_val = partition_tree[left,3] if left >= 0 else leaf_values[left + M] right_val =", "the leaf order is given. 
Note that scipy can compute these coords as", "cluster_threshold \"\"\" #feature_imp = np.abs(values) # if partition_tree is not None: # new_tree", "partition_tree_new = np.delete(partition_tree_new, ptind, axis=0) # update the counts to be correct fill_counts(partition_tree_new)", "specify a specific leaf order, hence this reimplementation. \"\"\" xout = [] yout", "color == \"shap_red\": color = colors.red_rgb elif color == \"shap_blue\": color = colors.blue_rgb", "the lines of a dendrogram where the leaf order is given. Note that", "updates the \"\"\" M = partition_tree.shape[0] + 1 for i in range(partition_tree.shape[0]): val", "if i0 == ptind + M: partition_tree_new[i,0] = ind1 elif i0 > ptind", "return color def convert_ordering(ordering, shap_values): if issubclass(type(ordering), OpChain): ordering = ordering.apply(Explanation(shap_values)) if issubclass(type(ordering),", "with the max leaf value in that cluster. \"\"\" M = partition_tree.shape[0] +", "fills the forth column of the partition tree matrix with the max leaf", "of the lines of a dendrogram where the leaf order is given. 
Note", "column of the partition tree matrix with the max leaf value in that", "feature_order def merge_nodes(values, partition_tree): \"\"\" This merges the two clustered leaf nodes with", "range(partition_tree_new.shape[0]): i0 = int(partition_tree_new[i,0]) i1 = int(partition_tree_new[i,1]) if i0 == ind2: partition_tree_new[i,0] =", "if color == \"shap_red\": color = colors.red_rgb elif color == \"shap_blue\": color =", "np.abs(values) # if partition_tree is not None: # new_tree = fill_internal_max_values(partition_tree, shap_values) #", "def sort_inds(partition_tree, leaf_values, pos=None, inds=None): if inds is None: inds = [] if", "M left_val = partition_tree[left,3] if left >= 0 else leaf_values[left + M] right_val", "2: # print(ind1, ind2, dist[ind1,ind2]) if dist[ind1,ind2] <= cluster_threshold: # if ind1 ==", "except: pass if color == \"shap_red\": color = colors.red_rgb elif color == \"shap_blue\":", "ind1, ind2 def dendrogram_coords(leaf_positions, partition_tree): \"\"\" Returns the x and y coords of", "This fills the forth column of the partition tree matrix with the max", "min_val) ind1 = int(partition_tree[ptind,0]) ind2 = int(partition_tree[ptind,1]) if ind1 > ind2: tmp =", "feature_order[j-1] feature_order[i+1] = next_ind #print(feature_order) return feature_order def merge_nodes(values, partition_tree): \"\"\" This merges", "ind2 def dendrogram_coords(leaf_positions, partition_tree): \"\"\" Returns the x and y coords of the", "xout, yout): M = partition_tree.shape[0] + 1 if pos < 0: return leaf_positions[pos", "of the partition tree matrix with the max leaf value in that cluster.", "ind2, next_ind, dist[ind1,ind2], clust_inds[ind2], clust_inds[next_ind]) if dist[ind1,next_ind] > cluster_threshold or clust_inds[ind2] < clust_inds[next_ind]:", "x_right, x_right]) yout.append([y_left, y_curr, y_curr, y_right]) return (x_left + x_right) / 2, y_curr", "partition_tree): \"\"\" Returns the x and y coords of the lines of a", "+ np.abs(values[ind2]) if val 
< min_val: min_val = val ptind = i #print(\"ptind\",", "y_curr, y_curr, y_right]) return (x_left + x_right) / 2, y_curr def fill_internal_max_values(partition_tree, leaf_values):", "= int(partition_tree[i,1])-M val += partition_tree[ind,3] partition_tree[i,3] = val def sort_inds(partition_tree, leaf_values, pos=None, inds=None):", "return ordering def get_sort_order(dist, clust_order, cluster_threshold, feature_order): \"\"\" Returns a sorted order of", "def fill_internal_max_values(partition_tree, leaf_values): \"\"\" This fills the forth column of the partition tree", "partition_tree.shape[0] + 1 if pos < 0: inds.append(pos + M) return left =", "the values where we respect the clustering order when dist[i,j] < cluster_threshold \"\"\"", "= max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) new_tree[i,3] = val return new_tree def fill_counts(partition_tree):", "import OpChain from . import colors import numpy as np def convert_color(color): try:", "ind2, dist[ind1,ind2]) if dist[ind1,ind2] <= cluster_threshold: # if ind1 == 2: # print(clust_inds)", "-= 1 if i1 == ind2: partition_tree_new[i,1] = ind1 elif i1 > ind2:", "2: # print(clust_inds) # print(ind1, ind2, next_ind, dist[ind1,ind2], clust_inds[ind2], clust_inds[next_ind]) if dist[ind1,next_ind] >", "M = partition_tree.shape[0] + 1 if pos < 0: return leaf_positions[pos + M],", "partition_tree[left,3] if left >= 0 else leaf_values[left + M] right_val = partition_tree[right,3] if", "new_tree[i,3] = val return new_tree def fill_counts(partition_tree): \"\"\" This updates the \"\"\" M", "int(partition_tree[i,0]) val += 1 else: ind = int(partition_tree[i,0])-M val += partition_tree[ind,3] if partition_tree[i,1]", "= partition_tree.shape[0] + 1 if pos < 0: inds.append(pos + M) return left", "> ind2: tmp = ind1 ind1 = ind2 ind2 = tmp partition_tree_new =", "+= 1 else: ind = int(partition_tree[i,1])-M val += partition_tree[ind,3] partition_tree[i,3] = val def", "+ x_right) / 2, y_curr def 
fill_internal_max_values(partition_tree, leaf_values): \"\"\" This fills the forth", "# if ind1 == 2: # print(clust_inds) # print(ind1, ind2, next_ind, dist[ind1,ind2], clust_inds[ind2],", "max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,0])-M val = max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2])", "if inds is None: inds = [] if pos is None: partition_tree =", "partition_tree.shape[0]-1 M = partition_tree.shape[0] + 1 if pos < 0: inds.append(pos + M)", "right right = left left = tmp sort_inds(partition_tree, leaf_values, left, inds) sort_inds(partition_tree, leaf_values,", "- M left_val = partition_tree[left,3] if left >= 0 else leaf_values[left + M]", "ind2: partition_tree_new[i,1] = ind1 elif i1 > ind2: partition_tree_new[i,1] -= 1 if i1", "leaf_positions[pos + M], 0 left = int(partition_tree[pos, 0]) - M right = int(partition_tree[pos,", "op in ordering.op_history]: ordering = ordering.values else: ordering = ordering.argsort.flip.values return ordering def", "i+1, -1): #print(\"j\", j) feature_order[j] = feature_order[j-1] feature_order[i+1] = next_ind #print(feature_order) return feature_order", "int(partition_tree[i,1]) val += 1 else: ind = int(partition_tree[i,1])-M val += partition_tree[ind,3] partition_tree[i,3] =", "np.abs(leaf_values[ind])) else: ind = int(new_tree[i,0])-M val = max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) if", "ind2: partition_tree_new[i,1] -= 1 if i1 == ptind + M: partition_tree_new[i,1] = ind1", "np.array(xout), np.array(yout) def _dendrogram_coords_rec(pos, leaf_positions, partition_tree, xout, yout): M = partition_tree.shape[0] + 1", "these coords as well, but it does not allow you to easily specify", "M = partition_tree.shape[0] + 1 ptind = 0 min_val = np.inf for i", "x_left, x_right, x_right]) yout.append([y_left, y_curr, y_curr, y_right]) return (x_left + x_right) / 2,", "if i0 == ind2: partition_tree_new[i,0] = ind1 elif i0 > ind2: partition_tree_new[i,0] -=", "leaf value in that 
cluster. \"\"\" M = partition_tree.shape[0] + 1 new_tree =", "= val ptind = i #print(\"ptind\", ptind, min_val) ind1 = int(partition_tree[ptind,0]) ind2 =", "right = int(partition_tree[pos, 1]) - M x_left, y_left = _dendrogram_coords_rec(left, leaf_positions, partition_tree, xout,", "2, y_curr def fill_internal_max_values(partition_tree, leaf_values): \"\"\" This fills the forth column of the", "if pos < 0: inds.append(pos + M) return left = int(partition_tree[pos, 0]) -", "elif i1 > ind2: partition_tree_new[i,1] -= 1 if i1 == ptind + M:", "in range(next_ind_pos, i+1, -1): #print(\"j\", j) feature_order[j] = feature_order[j-1] feature_order[i+1] = next_ind #print(feature_order)", "feature_order) for i in range(len(feature_order)-1): ind1 = feature_order[i] next_ind = feature_order[i+1] next_ind_pos =", "is given. Note that scipy can compute these coords as well, but it", "0 min_val = np.inf for i in range(partition_tree.shape[0]): ind1 = int(partition_tree[i,0]) ind2 =", "M = partition_tree.shape[0] + 1 new_tree = partition_tree.copy() for i in range(new_tree.shape[0]): val", "1 if i1 == ind2: partition_tree_new[i,1] = ind1 elif i1 > ind2: partition_tree_new[i,1]", "partition_tree is not None: # new_tree = fill_internal_max_values(partition_tree, shap_values) # clust_order = sort_inds(new_tree,", "partition_tree[i,1] < M: ind = int(partition_tree[i,1]) val += 1 else: ind = int(partition_tree[i,1])-M", "= sort_inds(new_tree, np.abs(shap_values)) clust_inds = np.argsort(clust_order) feature_order = feature_order.copy()#order.apply(Explanation(shap_values)) # print(\"feature_order\", feature_order) for", "inds=None): if inds is None: inds = [] if pos is None: partition_tree", "dist[ind1,ind2]) if dist[ind1,ind2] <= cluster_threshold: # if ind1 == 2: # print(clust_inds) #", "color == \"shap_blue\": color = colors.blue_rgb return color def convert_ordering(ordering, shap_values): if issubclass(type(ordering),", "1 else: ind = int(partition_tree[i,0])-M val += 
partition_tree[ind,3] if partition_tree[i,1] < M: ind", "is None: inds = [] if pos is None: partition_tree = fill_internal_max_values(partition_tree, leaf_values)", "i0 == ptind + M: partition_tree_new[i,0] = ind1 elif i0 > ptind +", "left left = tmp sort_inds(partition_tree, leaf_values, left, inds) sort_inds(partition_tree, leaf_values, right, inds) return", "if dist[ind1,next_ind] > cluster_threshold or clust_inds[ind2] < clust_inds[next_ind]: next_ind = ind2 next_ind_pos =", "= i + 1 for j in range(i+1,len(feature_order)): ind2 = feature_order[j] #if feature_imp[ind]", "partition_tree, xout, yout): M = partition_tree.shape[0] + 1 if pos < 0: return", "value. \"\"\" M = partition_tree.shape[0] + 1 ptind = 0 min_val = np.inf", "0: return leaf_positions[pos + M], 0 left = int(partition_tree[pos, 0]) - M right", "next_ind #print(feature_order) return feature_order def merge_nodes(values, partition_tree): \"\"\" This merges the two clustered", "in range(new_tree.shape[0]): val = 0 if new_tree[i,0] < M: ind = int(new_tree[i,0]) val", "color = colors.blue_rgb return color def convert_ordering(ordering, shap_values): if issubclass(type(ordering), OpChain): ordering =", "if left_val < right_val: tmp = right right = left left = tmp", "tmp = right right = left left = tmp sort_inds(partition_tree, leaf_values, left, inds)", "\"\"\" M = partition_tree.shape[0] + 1 ptind = 0 min_val = np.inf for", "that cluster. 
\"\"\" M = partition_tree.shape[0] + 1 new_tree = partition_tree.copy() for i", "color = pl.get_cmap(color) except: pass if color == \"shap_red\": color = colors.red_rgb elif", "= partition_tree[left,3] if left >= 0 else leaf_values[left + M] right_val = partition_tree[right,3]", "= feature_order.copy()#order.apply(Explanation(shap_values)) # print(\"feature_order\", feature_order) for i in range(len(feature_order)-1): ind1 = feature_order[i] next_ind", "\"\"\" Returns a sorted order of the values where we respect the clustering", "# insert the next_ind next for j in range(next_ind_pos, i+1, -1): #print(\"j\", j)", "i in range(new_tree.shape[0]): val = 0 if new_tree[i,0] < M: ind = int(new_tree[i,0])", "can compute these coords as well, but it does not allow you to", "ind1 ind1 = ind2 ind2 = tmp partition_tree_new = partition_tree.copy() for i in", "in range(len(feature_order)-1): ind1 = feature_order[i] next_ind = feature_order[i+1] next_ind_pos = i + 1", "partition_tree, xout, yout) y_curr = partition_tree[pos, 2] xout.append([x_left, x_left, x_right, x_right]) yout.append([y_left, y_curr,", "leaf order, hence this reimplementation. \"\"\" xout = [] yout = [] _dendrogram_coords_rec(partition_tree.shape[0]-1,", "if ind1 == 2: # print(ind1, ind2, dist[ind1,ind2]) if dist[ind1,ind2] <= cluster_threshold: #", "return np.array(xout), np.array(yout) def _dendrogram_coords_rec(pos, leaf_positions, partition_tree, xout, yout): M = partition_tree.shape[0] +", "the partition tree matrix with the max leaf value in that cluster. 
\"\"\"", "range(partition_tree.shape[0]): ind1 = int(partition_tree[i,0]) ind2 = int(partition_tree[i,1]) if ind1 < M and ind2", ">= 0 else leaf_values[right + M] if left_val < right_val: tmp = right", "+ M: partition_tree_new[i,1] -= 1 partition_tree_new = np.delete(partition_tree_new, ptind, axis=0) # update the", "= np.argsort(clust_order) feature_order = feature_order.copy()#order.apply(Explanation(shap_values)) # print(\"feature_order\", feature_order) for i in range(len(feature_order)-1): ind1", "but it does not allow you to easily specify a specific leaf order,", "elif color == \"shap_blue\": color = colors.blue_rgb return color def convert_ordering(ordering, shap_values): if", "1 if pos < 0: inds.append(pos + M) return left = int(partition_tree[pos, 0])", "max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) new_tree[i,3] = val return new_tree def fill_counts(partition_tree): \"\"\"", "# clust_order = sort_inds(new_tree, np.abs(shap_values)) clust_inds = np.argsort(clust_order) feature_order = feature_order.copy()#order.apply(Explanation(shap_values)) # print(\"feature_order\",", "+ 1 for j in range(i+1,len(feature_order)): ind2 = feature_order[j] #if feature_imp[ind] > #", "int(partition_tree_new[i,1]) if i0 == ind2: partition_tree_new[i,0] = ind1 elif i0 > ind2: partition_tree_new[i,0]", "range(next_ind_pos, i+1, -1): #print(\"j\", j) feature_order[j] = feature_order[j-1] feature_order[i+1] = next_ind #print(feature_order) return", "> # if ind1 == 2: # print(ind1, ind2, dist[ind1,ind2]) if dist[ind1,ind2] <=", "if new_tree[i,0] < M: ind = int(new_tree[i,0]) val = max(val, np.abs(leaf_values[ind])) else: ind", "range(len(feature_order)-1): ind1 = feature_order[i] next_ind = feature_order[i+1] next_ind_pos = i + 1 for", "partition_tree.shape[0] + 1 for i in range(partition_tree.shape[0]): val = 0 if partition_tree[i,0] <", "else: ind = int(partition_tree[i,0])-M val += partition_tree[ind,3] if partition_tree[i,1] < M: ind =", "it does not allow you 
to easily specify a specific leaf order, hence", "y_right]) return (x_left + x_right) / 2, y_curr def fill_internal_max_values(partition_tree, leaf_values): \"\"\" This", "_dendrogram_coords_rec(pos, leaf_positions, partition_tree, xout, yout): M = partition_tree.shape[0] + 1 if pos <", "insert the next_ind next for j in range(next_ind_pos, i+1, -1): #print(\"j\", j) feature_order[j]", "else leaf_values[left + M] right_val = partition_tree[right,3] if right >= 0 else leaf_values[right", "..utils import OpChain from . import colors import numpy as np def convert_color(color):", ". import colors import numpy as np def convert_color(color): try: color = pl.get_cmap(color)", "min_val = np.inf for i in range(partition_tree.shape[0]): ind1 = int(partition_tree[i,0]) ind2 = int(partition_tree[i,1])", "return partition_tree_new, ind1, ind2 def dendrogram_coords(leaf_positions, partition_tree): \"\"\" Returns the x and y", "allow you to easily specify a specific leaf order, hence this reimplementation. 
\"\"\"", "leaf_values, pos=None, inds=None): if inds is None: inds = [] if pos is", "[] _dendrogram_coords_rec(partition_tree.shape[0]-1, leaf_positions, partition_tree, xout, yout) return np.array(xout), np.array(yout) def _dendrogram_coords_rec(pos, leaf_positions, partition_tree,", "print(clust_inds) # print(ind1, ind2, next_ind, dist[ind1,ind2], clust_inds[ind2], clust_inds[next_ind]) if dist[ind1,next_ind] > cluster_threshold or", "leaf_positions, partition_tree, xout, yout): M = partition_tree.shape[0] + 1 if pos < 0:", "leaf_values[left + M] right_val = partition_tree[right,3] if right >= 0 else leaf_values[right +", "min_val = val ptind = i #print(\"ptind\", ptind, min_val) ind1 = int(partition_tree[ptind,0]) ind2", "\"shap_blue\": color = colors.blue_rgb return color def convert_ordering(ordering, shap_values): if issubclass(type(ordering), OpChain): ordering", "ind1 == 2: # print(ind1, ind2, dist[ind1,ind2]) if dist[ind1,ind2] <= cluster_threshold: # if", "Note that scipy can compute these coords as well, but it does not", "int(new_tree[i,0])-M val = max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) if new_tree[i,1] < M: ind", "i + 1 for j in range(i+1,len(feature_order)): ind2 = feature_order[j] #if feature_imp[ind] >", "scipy can compute these coords as well, but it does not allow you", "if dist[ind1,ind2] <= cluster_threshold: # if ind1 == 2: # print(clust_inds) # print(ind1,", "i1 == ind2: partition_tree_new[i,1] = ind1 elif i1 > ind2: partition_tree_new[i,1] -= 1", "Explanation): if \"argsort\" in [op[\"name\"] for op in ordering.op_history]: ordering = ordering.values else:", "np.abs(values[ind1]) + np.abs(values[ind2]) if val < min_val: min_val = val ptind = i", "_dendrogram_coords_rec(left, leaf_positions, partition_tree, xout, yout) x_right, y_right = _dendrogram_coords_rec(right, leaf_positions, partition_tree, xout, yout)", "the forth column of the partition tree matrix with the max leaf value", "= partition_tree[right,3] if right >= 
0 else leaf_values[right + M] if left_val <", "clust_inds[next_ind]) if dist[ind1,next_ind] > cluster_threshold or clust_inds[ind2] < clust_inds[next_ind]: next_ind = ind2 next_ind_pos", "shap_values) # clust_order = sort_inds(new_tree, np.abs(shap_values)) clust_inds = np.argsort(clust_order) feature_order = feature_order.copy()#order.apply(Explanation(shap_values)) #", "if i1 == ptind + M: partition_tree_new[i,1] = ind1 elif i1 > ptind", "sort_inds(new_tree, np.abs(shap_values)) clust_inds = np.argsort(clust_order) feature_order = feature_order.copy()#order.apply(Explanation(shap_values)) # print(\"feature_order\", feature_order) for i", "leaf_positions, partition_tree, xout, yout) x_right, y_right = _dendrogram_coords_rec(right, leaf_positions, partition_tree, xout, yout) y_curr", "< M: ind = int(new_tree[i,1]) val = max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,1])-M", "ptind, min_val) ind1 = int(partition_tree[ptind,0]) ind2 = int(partition_tree[ptind,1]) if ind1 > ind2: tmp", "< cluster_threshold \"\"\" #feature_imp = np.abs(values) # if partition_tree is not None: #", "= feature_order[j] #if feature_imp[ind] > # if ind1 == 2: # print(ind1, ind2,", "< min_val: min_val = val ptind = i #print(\"ptind\", ptind, min_val) ind1 =", "partition_tree.copy() for i in range(new_tree.shape[0]): val = 0 if new_tree[i,0] < M: ind", "the two clustered leaf nodes with the smallest total value. 
\"\"\" M =", "print(\"next_ind_pos\", next_ind_pos) # insert the next_ind next for j in range(next_ind_pos, i+1, -1):", "yout) y_curr = partition_tree[pos, 2] xout.append([x_left, x_left, x_right, x_right]) yout.append([y_left, y_curr, y_curr, y_right])", "> cluster_threshold or clust_inds[ind2] < clust_inds[next_ind]: next_ind = ind2 next_ind_pos = j #", "i0 > ind2: partition_tree_new[i,0] -= 1 if i0 == ptind + M: partition_tree_new[i,0]", "if partition_tree[i,1] < M: ind = int(partition_tree[i,1]) val += 1 else: ind =", "pos < 0: return leaf_positions[pos + M], 0 left = int(partition_tree[pos, 0]) -", "partition_tree[ind,3] if partition_tree[i,1] < M: ind = int(partition_tree[i,1]) val += 1 else: ind", "int(new_tree[i,1])-M val = max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) new_tree[i,3] = val return new_tree", "-= 1 partition_tree_new = np.delete(partition_tree_new, ptind, axis=0) # update the counts to be", "i #print(\"ptind\", ptind, min_val) ind1 = int(partition_tree[ptind,0]) ind2 = int(partition_tree[ptind,1]) if ind1 >", "update the counts to be correct fill_counts(partition_tree_new) return partition_tree_new, ind1, ind2 def dendrogram_coords(leaf_positions,", "a dendrogram where the leaf order is given. 
Note that scipy can compute", "i1 > ptind + M: partition_tree_new[i,1] -= 1 partition_tree_new = np.delete(partition_tree_new, ptind, axis=0)", "print(\"feature_order\", feature_order) for i in range(len(feature_order)-1): ind1 = feature_order[i] next_ind = feature_order[i+1] next_ind_pos", "i in range(len(feature_order)-1): ind1 = feature_order[i] next_ind = feature_order[i+1] next_ind_pos = i +", "in range(partition_tree.shape[0]): val = 0 if partition_tree[i,0] < M: ind = int(partition_tree[i,0]) val", "of the values where we respect the clustering order when dist[i,j] < cluster_threshold", "leaf_positions, partition_tree, xout, yout) y_curr = partition_tree[pos, 2] xout.append([x_left, x_left, x_right, x_right]) yout.append([y_left,", "clust_order, cluster_threshold, feature_order): \"\"\" Returns a sorted order of the values where we", "if partition_tree is not None: # new_tree = fill_internal_max_values(partition_tree, shap_values) # clust_order =", "y_right = _dendrogram_coords_rec(right, leaf_positions, partition_tree, xout, yout) y_curr = partition_tree[pos, 2] xout.append([x_left, x_left,", "+= 1 else: ind = int(partition_tree[i,0])-M val += partition_tree[ind,3] if partition_tree[i,1] < M:", "partition_tree.copy() for i in range(partition_tree_new.shape[0]): i0 = int(partition_tree_new[i,0]) i1 = int(partition_tree_new[i,1]) if i0", "leaf order is given. 
Note that scipy can compute these coords as well,", "xout.append([x_left, x_left, x_right, x_right]) yout.append([y_left, y_curr, y_curr, y_right]) return (x_left + x_right) /", "<= cluster_threshold: # if ind1 == 2: # print(clust_inds) # print(ind1, ind2, next_ind,", "ind2 < M: val = np.abs(values[ind1]) + np.abs(values[ind2]) if val < min_val: min_val", "# if ind1 == 2: # print(ind1, ind2, dist[ind1,ind2]) if dist[ind1,ind2] <= cluster_threshold:", "partition_tree_new[i,1] = ind1 elif i1 > ind2: partition_tree_new[i,1] -= 1 if i1 ==", "< M and ind2 < M: val = np.abs(values[ind1]) + np.abs(values[ind2]) if val", "return new_tree def fill_counts(partition_tree): \"\"\" This updates the \"\"\" M = partition_tree.shape[0] +", "val = 0 if partition_tree[i,0] < M: ind = int(partition_tree[i,0]) val += 1", "#print(\"ptind\", ptind, min_val) ind1 = int(partition_tree[ptind,0]) ind2 = int(partition_tree[ptind,1]) if ind1 > ind2:", "M: partition_tree_new[i,1] -= 1 partition_tree_new = np.delete(partition_tree_new, ptind, axis=0) # update the counts", "= [] yout = [] _dendrogram_coords_rec(partition_tree.shape[0]-1, leaf_positions, partition_tree, xout, yout) return np.array(xout), np.array(yout)", "if partition_tree[i,0] < M: ind = int(partition_tree[i,0]) val += 1 else: ind =", "and y coords of the lines of a dendrogram where the leaf order", "return leaf_positions[pos + M], 0 left = int(partition_tree[pos, 0]) - M right =", "ind1 = int(partition_tree[i,0]) ind2 = int(partition_tree[i,1]) if ind1 < M and ind2 <", "np.argsort(clust_order) feature_order = feature_order.copy()#order.apply(Explanation(shap_values)) # print(\"feature_order\", feature_order) for i in range(len(feature_order)-1): ind1 =", "to easily specify a specific leaf order, hence this reimplementation. 
\"\"\" xout =", "left >= 0 else leaf_values[left + M] right_val = partition_tree[right,3] if right >=", "elif i0 > ind2: partition_tree_new[i,0] -= 1 if i0 == ptind + M:", "pos=None, inds=None): if inds is None: inds = [] if pos is None:", "j # print(\"next_ind\", next_ind) # print(\"next_ind_pos\", next_ind_pos) # insert the next_ind next for", "_dendrogram_coords_rec(right, leaf_positions, partition_tree, xout, yout) y_curr = partition_tree[pos, 2] xout.append([x_left, x_left, x_right, x_right])", "feature_order[i] next_ind = feature_order[i+1] next_ind_pos = i + 1 for j in range(i+1,len(feature_order)):", "== ptind + M: partition_tree_new[i,0] = ind1 elif i0 > ptind + M:", "in that cluster. \"\"\" M = partition_tree.shape[0] + 1 new_tree = partition_tree.copy() for", "val += partition_tree[ind,3] if partition_tree[i,1] < M: ind = int(partition_tree[i,1]) val += 1", "# / partition_tree[ind,2]) new_tree[i,3] = val return new_tree def fill_counts(partition_tree): \"\"\" This updates", "import Explanation from ..utils import OpChain from . import colors import numpy as", "# print(\"next_ind\", next_ind) # print(\"next_ind_pos\", next_ind_pos) # insert the next_ind next for j", "a specific leaf order, hence this reimplementation. \"\"\" xout = [] yout =", "\"\"\" This fills the forth column of the partition tree matrix with the", "M: ind = int(partition_tree[i,0]) val += 1 else: ind = int(partition_tree[i,0])-M val +=", "int(partition_tree[pos, 0]) - M right = int(partition_tree[pos, 1]) - M left_val = partition_tree[left,3]", "0 else leaf_values[left + M] right_val = partition_tree[right,3] if right >= 0 else", "= fill_internal_max_values(partition_tree, shap_values) # clust_order = sort_inds(new_tree, np.abs(shap_values)) clust_inds = np.argsort(clust_order) feature_order =", "def convert_color(color): try: color = pl.get_cmap(color) except: pass if color == \"shap_red\": color", "value in that cluster. 
\"\"\" M = partition_tree.shape[0] + 1 new_tree = partition_tree.copy()", "is None: partition_tree = fill_internal_max_values(partition_tree, leaf_values) pos = partition_tree.shape[0]-1 M = partition_tree.shape[0] +", "cluster. \"\"\" M = partition_tree.shape[0] + 1 new_tree = partition_tree.copy() for i in", "0 if partition_tree[i,0] < M: ind = int(partition_tree[i,0]) val += 1 else: ind", "for i in range(partition_tree.shape[0]): val = 0 if partition_tree[i,0] < M: ind =", "+ M] right_val = partition_tree[right,3] if right >= 0 else leaf_values[right + M]", "= feature_order[j-1] feature_order[i+1] = next_ind #print(feature_order) return feature_order def merge_nodes(values, partition_tree): \"\"\" This", "i0 > ptind + M: partition_tree_new[i,0] -= 1 if i1 == ind2: partition_tree_new[i,1]", "i0 == ind2: partition_tree_new[i,0] = ind1 elif i0 > ind2: partition_tree_new[i,0] -= 1", "#print(feature_order) return feature_order def merge_nodes(values, partition_tree): \"\"\" This merges the two clustered leaf", "clustering order when dist[i,j] < cluster_threshold \"\"\" #feature_imp = np.abs(values) # if partition_tree", "> ind2: partition_tree_new[i,1] -= 1 if i1 == ptind + M: partition_tree_new[i,1] =", "- M right = int(partition_tree[pos, 1]) - M x_left, y_left = _dendrogram_coords_rec(left, leaf_positions,", "= max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) if new_tree[i,1] < M: ind = int(new_tree[i,1])", "1]) - M left_val = partition_tree[left,3] if left >= 0 else leaf_values[left +", "tmp = ind1 ind1 = ind2 ind2 = tmp partition_tree_new = partition_tree.copy() for", "int(partition_tree[ptind,0]) ind2 = int(partition_tree[ptind,1]) if ind1 > ind2: tmp = ind1 ind1 =", "ptind + M: partition_tree_new[i,1] = ind1 elif i1 > ptind + M: partition_tree_new[i,1]", "y_curr, y_right]) return (x_left + x_right) / 2, y_curr def fill_internal_max_values(partition_tree, leaf_values): \"\"\"", "= partition_tree.shape[0] + 1 ptind = 0 min_val = np.inf for i in", "= 
val def sort_inds(partition_tree, leaf_values, pos=None, inds=None): if inds is None: inds =", "ind2 = feature_order[j] #if feature_imp[ind] > # if ind1 == 2: # print(ind1,", "def dendrogram_coords(leaf_positions, partition_tree): \"\"\" Returns the x and y coords of the lines", "int(partition_tree[pos, 1]) - M left_val = partition_tree[left,3] if left >= 0 else leaf_values[left", "feature_imp[ind] > # if ind1 == 2: # print(ind1, ind2, dist[ind1,ind2]) if dist[ind1,ind2]", "right_val: tmp = right right = left left = tmp sort_inds(partition_tree, leaf_values, left,", "partition_tree_new[i,1] = ind1 elif i1 > ptind + M: partition_tree_new[i,1] -= 1 partition_tree_new", "= int(partition_tree[i,0]) ind2 = int(partition_tree[i,1]) if ind1 < M and ind2 < M:", "ind2 ind2 = tmp partition_tree_new = partition_tree.copy() for i in range(partition_tree_new.shape[0]): i0 =", "partition_tree_new[i,0] -= 1 if i0 == ptind + M: partition_tree_new[i,0] = ind1 elif", "= max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,0])-M val = max(val, np.abs(new_tree[ind,3])) # /", "+ 1 ptind = 0 min_val = np.inf for i in range(partition_tree.shape[0]): ind1", "right = int(partition_tree[pos, 1]) - M left_val = partition_tree[left,3] if left >= 0", "= int(partition_tree[pos, 1]) - M x_left, y_left = _dendrogram_coords_rec(left, leaf_positions, partition_tree, xout, yout)", "val def sort_inds(partition_tree, leaf_values, pos=None, inds=None): if inds is None: inds = []", "= int(new_tree[i,0]) val = max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,0])-M val = max(val,", "range(i+1,len(feature_order)): ind2 = feature_order[j] #if feature_imp[ind] > # if ind1 == 2: #", "clust_order = sort_inds(new_tree, np.abs(shap_values)) clust_inds = np.argsort(clust_order) feature_order = feature_order.copy()#order.apply(Explanation(shap_values)) # print(\"feature_order\", feature_order)", "= partition_tree.copy() for i in range(new_tree.shape[0]): val = 0 if new_tree[i,0] < M:", 
"next_ind = ind2 next_ind_pos = j # print(\"next_ind\", next_ind) # print(\"next_ind_pos\", next_ind_pos) #", "i in range(partition_tree.shape[0]): ind1 = int(partition_tree[i,0]) ind2 = int(partition_tree[i,1]) if ind1 < M", "ind = int(partition_tree[i,0]) val += 1 else: ind = int(partition_tree[i,0])-M val += partition_tree[ind,3]", "-= 1 if i0 == ptind + M: partition_tree_new[i,0] = ind1 elif i0", "new_tree def fill_counts(partition_tree): \"\"\" This updates the \"\"\" M = partition_tree.shape[0] + 1", "= ind2 next_ind_pos = j # print(\"next_ind\", next_ind) # print(\"next_ind_pos\", next_ind_pos) # insert", "np.abs(shap_values)) clust_inds = np.argsort(clust_order) feature_order = feature_order.copy()#order.apply(Explanation(shap_values)) # print(\"feature_order\", feature_order) for i in", "< 0: return leaf_positions[pos + M], 0 left = int(partition_tree[pos, 0]) - M", "get_sort_order(dist, clust_order, cluster_threshold, feature_order): \"\"\" Returns a sorted order of the values where", "np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) if new_tree[i,1] < M: ind = int(new_tree[i,1]) val =", "np.abs(leaf_values[ind])) else: ind = int(new_tree[i,1])-M val = max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) new_tree[i,3]", "leaf nodes with the smallest total value. 
\"\"\" M = partition_tree.shape[0] + 1", "val = max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) new_tree[i,3] = val return new_tree def", "new_tree = fill_internal_max_values(partition_tree, shap_values) # clust_order = sort_inds(new_tree, np.abs(shap_values)) clust_inds = np.argsort(clust_order) feature_order", "= partition_tree.copy() for i in range(partition_tree_new.shape[0]): i0 = int(partition_tree_new[i,0]) i1 = int(partition_tree_new[i,1]) if", "as well, but it does not allow you to easily specify a specific", "ind2: tmp = ind1 ind1 = ind2 ind2 = tmp partition_tree_new = partition_tree.copy()", "ordering.values else: ordering = ordering.argsort.flip.values return ordering def get_sort_order(dist, clust_order, cluster_threshold, feature_order): \"\"\"", "for op in ordering.op_history]: ordering = ordering.values else: ordering = ordering.argsort.flip.values return ordering", "This updates the \"\"\" M = partition_tree.shape[0] + 1 for i in range(partition_tree.shape[0]):", "inds is None: inds = [] if pos is None: partition_tree = fill_internal_max_values(partition_tree,", "1 if i1 == ptind + M: partition_tree_new[i,1] = ind1 elif i1 >", "_dendrogram_coords_rec(partition_tree.shape[0]-1, leaf_positions, partition_tree, xout, yout) return np.array(xout), np.array(yout) def _dendrogram_coords_rec(pos, leaf_positions, partition_tree, xout,", "color def convert_ordering(ordering, shap_values): if issubclass(type(ordering), OpChain): ordering = ordering.apply(Explanation(shap_values)) if issubclass(type(ordering), Explanation):", "0 else leaf_values[right + M] if left_val < right_val: tmp = right right", "= np.abs(values) # if partition_tree is not None: # new_tree = fill_internal_max_values(partition_tree, shap_values)", "next_ind_pos) # insert the next_ind next for j in range(next_ind_pos, i+1, -1): #print(\"j\",", "0]) - M right = int(partition_tree[pos, 1]) - M x_left, y_left = _dendrogram_coords_rec(left,", "This merges the two clustered leaf 
nodes with the smallest total value. \"\"\"", "a sorted order of the values where we respect the clustering order when", "#feature_imp = np.abs(values) # if partition_tree is not None: # new_tree = fill_internal_max_values(partition_tree,", "right_val = partition_tree[right,3] if right >= 0 else leaf_values[right + M] if left_val", "if ind1 > ind2: tmp = ind1 ind1 = ind2 ind2 = tmp", "if ind1 < M and ind2 < M: val = np.abs(values[ind1]) + np.abs(values[ind2])", "ind2 = int(partition_tree[ptind,1]) if ind1 > ind2: tmp = ind1 ind1 = ind2", "ordering = ordering.argsort.flip.values return ordering def get_sort_order(dist, clust_order, cluster_threshold, feature_order): \"\"\" Returns a", "# / partition_tree[ind,2]) if new_tree[i,1] < M: ind = int(new_tree[i,1]) val = max(val,", "# print(ind1, ind2, dist[ind1,ind2]) if dist[ind1,ind2] <= cluster_threshold: # if ind1 == 2:", "ptind = 0 min_val = np.inf for i in range(partition_tree.shape[0]): ind1 = int(partition_tree[i,0])", "color = colors.red_rgb elif color == \"shap_blue\": color = colors.blue_rgb return color def", "= int(partition_tree_new[i,1]) if i0 == ind2: partition_tree_new[i,0] = ind1 elif i0 > ind2:", "y_left = _dendrogram_coords_rec(left, leaf_positions, partition_tree, xout, yout) x_right, y_right = _dendrogram_coords_rec(right, leaf_positions, partition_tree,", "- M right = int(partition_tree[pos, 1]) - M left_val = partition_tree[left,3] if left", "clustered leaf nodes with the smallest total value. 
\"\"\" M = partition_tree.shape[0] +", "np.inf for i in range(partition_tree.shape[0]): ind1 = int(partition_tree[i,0]) ind2 = int(partition_tree[i,1]) if ind1", "[] yout = [] _dendrogram_coords_rec(partition_tree.shape[0]-1, leaf_positions, partition_tree, xout, yout) return np.array(xout), np.array(yout) def", "max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) if new_tree[i,1] < M: ind = int(new_tree[i,1]) val", "+= partition_tree[ind,3] if partition_tree[i,1] < M: ind = int(partition_tree[i,1]) val += 1 else:", "clust_inds[next_ind]: next_ind = ind2 next_ind_pos = j # print(\"next_ind\", next_ind) # print(\"next_ind_pos\", next_ind_pos)", "and ind2 < M: val = np.abs(values[ind1]) + np.abs(values[ind2]) if val < min_val:", "1 new_tree = partition_tree.copy() for i in range(new_tree.shape[0]): val = 0 if new_tree[i,0]", "where the leaf order is given. Note that scipy can compute these coords", "for i in range(partition_tree_new.shape[0]): i0 = int(partition_tree_new[i,0]) i1 = int(partition_tree_new[i,1]) if i0 ==", "val = max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,1])-M val = max(val, np.abs(new_tree[ind,3])) #", "with the smallest total value. 
\"\"\" M = partition_tree.shape[0] + 1 ptind =", "feature_order.copy()#order.apply(Explanation(shap_values)) # print(\"feature_order\", feature_order) for i in range(len(feature_order)-1): ind1 = feature_order[i] next_ind =", "# print(clust_inds) # print(ind1, ind2, next_ind, dist[ind1,ind2], clust_inds[ind2], clust_inds[next_ind]) if dist[ind1,next_ind] > cluster_threshold", "/ partition_tree[ind,2]) new_tree[i,3] = val return new_tree def fill_counts(partition_tree): \"\"\" This updates the", "= colors.blue_rgb return color def convert_ordering(ordering, shap_values): if issubclass(type(ordering), OpChain): ordering = ordering.apply(Explanation(shap_values))", "np.delete(partition_tree_new, ptind, axis=0) # update the counts to be correct fill_counts(partition_tree_new) return partition_tree_new,", "< 0: inds.append(pos + M) return left = int(partition_tree[pos, 0]) - M right", "y_curr = partition_tree[pos, 2] xout.append([x_left, x_left, x_right, x_right]) yout.append([y_left, y_curr, y_curr, y_right]) return", "1]) - M x_left, y_left = _dendrogram_coords_rec(left, leaf_positions, partition_tree, xout, yout) x_right, y_right", "M: ind = int(new_tree[i,0]) val = max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,0])-M val", "partition_tree[ind,2]) if new_tree[i,1] < M: ind = int(new_tree[i,1]) val = max(val, np.abs(leaf_values[ind])) else:", "def fill_counts(partition_tree): \"\"\" This updates the \"\"\" M = partition_tree.shape[0] + 1 for", "def merge_nodes(values, partition_tree): \"\"\" This merges the two clustered leaf nodes with the", "x_right, y_right = _dendrogram_coords_rec(right, leaf_positions, partition_tree, xout, yout) y_curr = partition_tree[pos, 2] xout.append([x_left,", "+= partition_tree[ind,3] partition_tree[i,3] = val def sort_inds(partition_tree, leaf_values, pos=None, inds=None): if inds is", "try: color = pl.get_cmap(color) except: pass if color == \"shap_red\": color = colors.red_rgb", "max leaf value in that cluster. 
\"\"\" M = partition_tree.shape[0] + 1 new_tree", "ind = int(new_tree[i,0]) val = max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,0])-M val =", "= int(partition_tree_new[i,0]) i1 = int(partition_tree_new[i,1]) if i0 == ind2: partition_tree_new[i,0] = ind1 elif", "partition_tree[i,0] < M: ind = int(partition_tree[i,0]) val += 1 else: ind = int(partition_tree[i,0])-M", "j in range(next_ind_pos, i+1, -1): #print(\"j\", j) feature_order[j] = feature_order[j-1] feature_order[i+1] = next_ind", "np.abs(values[ind2]) if val < min_val: min_val = val ptind = i #print(\"ptind\", ptind,", "next_ind_pos = i + 1 for j in range(i+1,len(feature_order)): ind2 = feature_order[j] #if", "= _dendrogram_coords_rec(left, leaf_positions, partition_tree, xout, yout) x_right, y_right = _dendrogram_coords_rec(right, leaf_positions, partition_tree, xout,", "+ 1 if pos < 0: return leaf_positions[pos + M], 0 left =", "= ind1 elif i1 > ind2: partition_tree_new[i,1] -= 1 if i1 == ptind", "we respect the clustering order when dist[i,j] < cluster_threshold \"\"\" #feature_imp = np.abs(values)", "order is given. Note that scipy can compute these coords as well, but", "given. 
Note that scipy can compute these coords as well, but it does", "for j in range(next_ind_pos, i+1, -1): #print(\"j\", j) feature_order[j] = feature_order[j-1] feature_order[i+1] =", "= ordering.apply(Explanation(shap_values)) if issubclass(type(ordering), Explanation): if \"argsort\" in [op[\"name\"] for op in ordering.op_history]:", "pl.get_cmap(color) except: pass if color == \"shap_red\": color = colors.red_rgb elif color ==", "\"shap_red\": color = colors.red_rgb elif color == \"shap_blue\": color = colors.blue_rgb return color", "M: ind = int(new_tree[i,1]) val = max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,1])-M val", "> ptind + M: partition_tree_new[i,0] -= 1 if i1 == ind2: partition_tree_new[i,1] =", "== \"shap_blue\": color = colors.blue_rgb return color def convert_ordering(ordering, shap_values): if issubclass(type(ordering), OpChain):", "\"argsort\" in [op[\"name\"] for op in ordering.op_history]: ordering = ordering.values else: ordering =", "M) return left = int(partition_tree[pos, 0]) - M right = int(partition_tree[pos, 1]) -", "be correct fill_counts(partition_tree_new) return partition_tree_new, ind1, ind2 def dendrogram_coords(leaf_positions, partition_tree): \"\"\" Returns the", "feature_order[j] #if feature_imp[ind] > # if ind1 == 2: # print(ind1, ind2, dist[ind1,ind2])", "None: partition_tree = fill_internal_max_values(partition_tree, leaf_values) pos = partition_tree.shape[0]-1 M = partition_tree.shape[0] + 1", "coords of the lines of a dendrogram where the leaf order is given.", "convert_color(color): try: color = pl.get_cmap(color) except: pass if color == \"shap_red\": color =", "import numpy as np def convert_color(color): try: color = pl.get_cmap(color) except: pass if", "values where we respect the clustering order when dist[i,j] < cluster_threshold \"\"\" #feature_imp", "ind1 = ind2 ind2 = tmp partition_tree_new = partition_tree.copy() for i in range(partition_tree_new.shape[0]):", "= int(partition_tree[pos, 0]) - M 
right = int(partition_tree[pos, 1]) - M left_val =", "if \"argsort\" in [op[\"name\"] for op in ordering.op_history]: ordering = ordering.values else: ordering", "i1 > ind2: partition_tree_new[i,1] -= 1 if i1 == ptind + M: partition_tree_new[i,1]", "= 0 if partition_tree[i,0] < M: ind = int(partition_tree[i,0]) val += 1 else:", "you to easily specify a specific leaf order, hence this reimplementation. \"\"\" xout", "or clust_inds[ind2] < clust_inds[next_ind]: next_ind = ind2 next_ind_pos = j # print(\"next_ind\", next_ind)", "= int(partition_tree[i,0])-M val += partition_tree[ind,3] if partition_tree[i,1] < M: ind = int(partition_tree[i,1]) val", "min_val: min_val = val ptind = i #print(\"ptind\", ptind, min_val) ind1 = int(partition_tree[ptind,0])", "inds.append(pos + M) return left = int(partition_tree[pos, 0]) - M right = int(partition_tree[pos,", "else leaf_values[right + M] if left_val < right_val: tmp = right right =", "ind2 next_ind_pos = j # print(\"next_ind\", next_ind) # print(\"next_ind_pos\", next_ind_pos) # insert the", "= np.delete(partition_tree_new, ptind, axis=0) # update the counts to be correct fill_counts(partition_tree_new) return", "compute these coords as well, but it does not allow you to easily", "ind1 elif i0 > ind2: partition_tree_new[i,0] -= 1 if i0 == ptind +", "= int(new_tree[i,0])-M val = max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) if new_tree[i,1] < M:", "from ..utils import OpChain from . 
import colors import numpy as np def", "0]) - M right = int(partition_tree[pos, 1]) - M left_val = partition_tree[left,3] if", "ptind = i #print(\"ptind\", ptind, min_val) ind1 = int(partition_tree[ptind,0]) ind2 = int(partition_tree[ptind,1]) if", "ind1 elif i1 > ptind + M: partition_tree_new[i,1] -= 1 partition_tree_new = np.delete(partition_tree_new,", "= colors.red_rgb elif color == \"shap_blue\": color = colors.blue_rgb return color def convert_ordering(ordering,", "i in range(partition_tree_new.shape[0]): i0 = int(partition_tree_new[i,0]) i1 = int(partition_tree_new[i,1]) if i0 == ind2:", "two clustered leaf nodes with the smallest total value. \"\"\" M = partition_tree.shape[0]", "= int(partition_tree[ptind,1]) if ind1 > ind2: tmp = ind1 ind1 = ind2 ind2", "matrix with the max leaf value in that cluster. \"\"\" M = partition_tree.shape[0]", "inds = [] if pos is None: partition_tree = fill_internal_max_values(partition_tree, leaf_values) pos =", "M = partition_tree.shape[0] + 1 if pos < 0: inds.append(pos + M) return", "partition_tree.shape[0] + 1 new_tree = partition_tree.copy() for i in range(new_tree.shape[0]): val = 0", "pos < 0: inds.append(pos + M) return left = int(partition_tree[pos, 0]) - M", "left = int(partition_tree[pos, 0]) - M right = int(partition_tree[pos, 1]) - M left_val", "/ 2, y_curr def fill_internal_max_values(partition_tree, leaf_values): \"\"\" This fills the forth column of", "feature_order[j] = feature_order[j-1] feature_order[i+1] = next_ind #print(feature_order) return feature_order def merge_nodes(values, partition_tree): \"\"\"", "yout) x_right, y_right = _dendrogram_coords_rec(right, leaf_positions, partition_tree, xout, yout) y_curr = partition_tree[pos, 2]", "pos is None: partition_tree = fill_internal_max_values(partition_tree, leaf_values) pos = partition_tree.shape[0]-1 M = partition_tree.shape[0]", "sort_inds(partition_tree, leaf_values, pos=None, inds=None): if inds is None: inds = [] if pos", "print(\"next_ind\", 
next_ind) # print(\"next_ind_pos\", next_ind_pos) # insert the next_ind next for j in", "forth column of the partition tree matrix with the max leaf value in", "\"\"\" M = partition_tree.shape[0] + 1 new_tree = partition_tree.copy() for i in range(new_tree.shape[0]):", "int(partition_tree[ptind,1]) if ind1 > ind2: tmp = ind1 ind1 = ind2 ind2 =", "yout) return np.array(xout), np.array(yout) def _dendrogram_coords_rec(pos, leaf_positions, partition_tree, xout, yout): M = partition_tree.shape[0]", "\"\"\" Returns the x and y coords of the lines of a dendrogram", "val += 1 else: ind = int(partition_tree[i,1])-M val += partition_tree[ind,3] partition_tree[i,3] = val", "== ind2: partition_tree_new[i,0] = ind1 elif i0 > ind2: partition_tree_new[i,0] -= 1 if", "issubclass(type(ordering), Explanation): if \"argsort\" in [op[\"name\"] for op in ordering.op_history]: ordering = ordering.values", "partition_tree.shape[0] + 1 if pos < 0: return leaf_positions[pos + M], 0 left", "clust_inds[ind2] < clust_inds[next_ind]: next_ind = ind2 next_ind_pos = j # print(\"next_ind\", next_ind) #", "partition_tree[pos, 2] xout.append([x_left, x_left, x_right, x_right]) yout.append([y_left, y_curr, y_curr, y_right]) return (x_left +", "convert_ordering(ordering, shap_values): if issubclass(type(ordering), OpChain): ordering = ordering.apply(Explanation(shap_values)) if issubclass(type(ordering), Explanation): if \"argsort\"", "in [op[\"name\"] for op in ordering.op_history]: ordering = ordering.values else: ordering = ordering.argsort.flip.values", "= np.abs(values[ind1]) + np.abs(values[ind2]) if val < min_val: min_val = val ptind =", "print(ind1, ind2, next_ind, dist[ind1,ind2], clust_inds[ind2], clust_inds[next_ind]) if dist[ind1,next_ind] > cluster_threshold or clust_inds[ind2] <", "new_tree = partition_tree.copy() for i in range(new_tree.shape[0]): val = 0 if new_tree[i,0] <", "the max leaf value in that cluster. 
\"\"\" M = partition_tree.shape[0] + 1", "+ M], 0 left = int(partition_tree[pos, 0]) - M right = int(partition_tree[pos, 1])", "\"\"\" #feature_imp = np.abs(values) # if partition_tree is not None: # new_tree =", "range(partition_tree.shape[0]): val = 0 if partition_tree[i,0] < M: ind = int(partition_tree[i,0]) val +=", "ordering = ordering.apply(Explanation(shap_values)) if issubclass(type(ordering), Explanation): if \"argsort\" in [op[\"name\"] for op in", "ordering = ordering.values else: ordering = ordering.argsort.flip.values return ordering def get_sort_order(dist, clust_order, cluster_threshold,", "== \"shap_red\": color = colors.red_rgb elif color == \"shap_blue\": color = colors.blue_rgb return", "ind = int(partition_tree[i,1]) val += 1 else: ind = int(partition_tree[i,1])-M val += partition_tree[ind,3]", "def _dendrogram_coords_rec(pos, leaf_positions, partition_tree, xout, yout): M = partition_tree.shape[0] + 1 if pos", "ind1 elif i1 > ind2: partition_tree_new[i,1] -= 1 if i1 == ptind +", "for j in range(i+1,len(feature_order)): ind2 = feature_order[j] #if feature_imp[ind] > # if ind1", "# new_tree = fill_internal_max_values(partition_tree, shap_values) # clust_order = sort_inds(new_tree, np.abs(shap_values)) clust_inds = np.argsort(clust_order)", "x_right) / 2, y_curr def fill_internal_max_values(partition_tree, leaf_values): \"\"\" This fills the forth column", "= partition_tree.shape[0]-1 M = partition_tree.shape[0] + 1 if pos < 0: inds.append(pos +", "range(new_tree.shape[0]): val = 0 if new_tree[i,0] < M: ind = int(new_tree[i,0]) val =", "ptind, axis=0) # update the counts to be correct fill_counts(partition_tree_new) return partition_tree_new, ind1,", "\"\"\" This updates the \"\"\" M = partition_tree.shape[0] + 1 for i in", "import colors import numpy as np def convert_color(color): try: color = pl.get_cmap(color) except:", "fill_internal_max_values(partition_tree, leaf_values): \"\"\" This fills the forth column of the partition tree matrix", "if 
right >= 0 else leaf_values[right + M] if left_val < right_val: tmp", "if i1 == ind2: partition_tree_new[i,1] = ind1 elif i1 > ind2: partition_tree_new[i,1] -=", "val return new_tree def fill_counts(partition_tree): \"\"\" This updates the \"\"\" M = partition_tree.shape[0]", "next_ind next for j in range(next_ind_pos, i+1, -1): #print(\"j\", j) feature_order[j] = feature_order[j-1]", "tmp partition_tree_new = partition_tree.copy() for i in range(partition_tree_new.shape[0]): i0 = int(partition_tree_new[i,0]) i1 =", "partition_tree): \"\"\" This merges the two clustered leaf nodes with the smallest total", "< M: ind = int(partition_tree[i,1]) val += 1 else: ind = int(partition_tree[i,1])-M val", "+ 1 for i in range(partition_tree.shape[0]): val = 0 if partition_tree[i,0] < M:", "= [] _dendrogram_coords_rec(partition_tree.shape[0]-1, leaf_positions, partition_tree, xout, yout) return np.array(xout), np.array(yout) def _dendrogram_coords_rec(pos, leaf_positions,", "partition_tree_new[i,0] = ind1 elif i0 > ind2: partition_tree_new[i,0] -= 1 if i0 ==", "j in range(i+1,len(feature_order)): ind2 = feature_order[j] #if feature_imp[ind] > # if ind1 ==", "#print(\"j\", j) feature_order[j] = feature_order[j-1] feature_order[i+1] = next_ind #print(feature_order) return feature_order def merge_nodes(values,", "= ind2 ind2 = tmp partition_tree_new = partition_tree.copy() for i in range(partition_tree_new.shape[0]): i0", "M: ind = int(partition_tree[i,1]) val += 1 else: ind = int(partition_tree[i,1])-M val +=", "yout = [] _dendrogram_coords_rec(partition_tree.shape[0]-1, leaf_positions, partition_tree, xout, yout) return np.array(xout), np.array(yout) def _dendrogram_coords_rec(pos,", "hence this reimplementation. 
\"\"\" xout = [] yout = [] _dendrogram_coords_rec(partition_tree.shape[0]-1, leaf_positions, partition_tree,", "the counts to be correct fill_counts(partition_tree_new) return partition_tree_new, ind1, ind2 def dendrogram_coords(leaf_positions, partition_tree):", "x_left, y_left = _dendrogram_coords_rec(left, leaf_positions, partition_tree, xout, yout) x_right, y_right = _dendrogram_coords_rec(right, leaf_positions,", "i1 = int(partition_tree_new[i,1]) if i0 == ind2: partition_tree_new[i,0] = ind1 elif i0 >", "val = max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) if new_tree[i,1] < M: ind =", "if new_tree[i,1] < M: ind = int(new_tree[i,1]) val = max(val, np.abs(leaf_values[ind])) else: ind", "= np.inf for i in range(partition_tree.shape[0]): ind1 = int(partition_tree[i,0]) ind2 = int(partition_tree[i,1]) if", "fill_counts(partition_tree_new) return partition_tree_new, ind1, ind2 def dendrogram_coords(leaf_positions, partition_tree): \"\"\" Returns the x and", "== ind2: partition_tree_new[i,1] = ind1 elif i1 > ind2: partition_tree_new[i,1] -= 1 if", "= val return new_tree def fill_counts(partition_tree): \"\"\" This updates the \"\"\" M =", "right >= 0 else leaf_values[right + M] if left_val < right_val: tmp =", "not allow you to easily specify a specific leaf order, hence this reimplementation.", "= ordering.argsort.flip.values return ordering def get_sort_order(dist, clust_order, cluster_threshold, feature_order): \"\"\" Returns a sorted", "for i in range(new_tree.shape[0]): val = 0 if new_tree[i,0] < M: ind =", "< M: ind = int(partition_tree[i,0]) val += 1 else: ind = int(partition_tree[i,0])-M val", "= ind1 elif i0 > ind2: partition_tree_new[i,0] -= 1 if i0 == ptind", "return feature_order def merge_nodes(values, partition_tree): \"\"\" This merges the two clustered leaf nodes", "[op[\"name\"] for op in ordering.op_history]: ordering = ordering.values else: ordering = ordering.argsort.flip.values return", "that scipy can compute these coords as 
well, but it does not allow", "ordering.op_history]: ordering = ordering.values else: ordering = ordering.argsort.flip.values return ordering def get_sort_order(dist, clust_order,", "ind = int(new_tree[i,1]) val = max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,1])-M val =", "# print(\"feature_order\", feature_order) for i in range(len(feature_order)-1): ind1 = feature_order[i] next_ind = feature_order[i+1]", "M: partition_tree_new[i,0] -= 1 if i1 == ind2: partition_tree_new[i,1] = ind1 elif i1", "print(ind1, ind2, dist[ind1,ind2]) if dist[ind1,ind2] <= cluster_threshold: # if ind1 == 2: #", "yout.append([y_left, y_curr, y_curr, y_right]) return (x_left + x_right) / 2, y_curr def fill_internal_max_values(partition_tree,", "val = 0 if new_tree[i,0] < M: ind = int(new_tree[i,0]) val = max(val,", "M = partition_tree.shape[0] + 1 for i in range(partition_tree.shape[0]): val = 0 if", "int(partition_tree_new[i,0]) i1 = int(partition_tree_new[i,1]) if i0 == ind2: partition_tree_new[i,0] = ind1 elif i0", "well, but it does not allow you to easily specify a specific leaf", "in range(partition_tree_new.shape[0]): i0 = int(partition_tree_new[i,0]) i1 = int(partition_tree_new[i,1]) if i0 == ind2: partition_tree_new[i,0]", "None: inds = [] if pos is None: partition_tree = fill_internal_max_values(partition_tree, leaf_values) pos", "= int(partition_tree[i,1]) if ind1 < M and ind2 < M: val = np.abs(values[ind1])", "val = np.abs(values[ind1]) + np.abs(values[ind2]) if val < min_val: min_val = val ptind", "np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) new_tree[i,3] = val return new_tree def fill_counts(partition_tree): \"\"\" This", "partition_tree[ind,3] partition_tree[i,3] = val def sort_inds(partition_tree, leaf_values, pos=None, inds=None): if inds is None:", "= [] if pos is None: partition_tree = fill_internal_max_values(partition_tree, leaf_values) pos = partition_tree.shape[0]-1", "= feature_order[i+1] next_ind_pos = i + 1 for j in 
range(i+1,len(feature_order)): ind2 =", "lines of a dendrogram where the leaf order is given. Note that scipy", "ind2: partition_tree_new[i,0] = ind1 elif i0 > ind2: partition_tree_new[i,0] -= 1 if i0", "for i in range(len(feature_order)-1): ind1 = feature_order[i] next_ind = feature_order[i+1] next_ind_pos = i", "partition_tree_new[i,0] = ind1 elif i0 > ptind + M: partition_tree_new[i,0] -= 1 if", "int(new_tree[i,1]) val = max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,1])-M val = max(val, np.abs(new_tree[ind,3]))", "0 left = int(partition_tree[pos, 0]) - M right = int(partition_tree[pos, 1]) - M", "partition tree matrix with the max leaf value in that cluster. \"\"\" M", "int(new_tree[i,0]) val = max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,0])-M val = max(val, np.abs(new_tree[ind,3]))", "= int(new_tree[i,1]) val = max(val, np.abs(leaf_values[ind])) else: ind = int(new_tree[i,1])-M val = max(val,", "1 if i0 == ptind + M: partition_tree_new[i,0] = ind1 elif i0 >", "leaf_values) pos = partition_tree.shape[0]-1 M = partition_tree.shape[0] + 1 if pos < 0:", "= feature_order[i] next_ind = feature_order[i+1] next_ind_pos = i + 1 for j in", "ind = int(new_tree[i,0])-M val = max(val, np.abs(new_tree[ind,3])) # / partition_tree[ind,2]) if new_tree[i,1] <", "0: inds.append(pos + M) return left = int(partition_tree[pos, 0]) - M right =", "# if partition_tree is not None: # new_tree = fill_internal_max_values(partition_tree, shap_values) # clust_order", "+ 1 if pos < 0: inds.append(pos + M) return left = int(partition_tree[pos,", "val += 1 else: ind = int(partition_tree[i,0])-M val += partition_tree[ind,3] if partition_tree[i,1] <", "return (x_left + x_right) / 2, y_curr def fill_internal_max_values(partition_tree, leaf_values): \"\"\" This fills", "= 0 if new_tree[i,0] < M: ind = int(new_tree[i,0]) val = max(val, np.abs(leaf_values[ind]))", "for i in range(partition_tree.shape[0]): ind1 = int(partition_tree[i,0]) ind2 = 
int(partition_tree[i,1]) if ind1 <", "if issubclass(type(ordering), OpChain): ordering = ordering.apply(Explanation(shap_values)) if issubclass(type(ordering), Explanation): if \"argsort\" in [op[\"name\"]", "issubclass(type(ordering), OpChain): ordering = ordering.apply(Explanation(shap_values)) if issubclass(type(ordering), Explanation): if \"argsort\" in [op[\"name\"] for", "ind2: partition_tree_new[i,0] -= 1 if i0 == ptind + M: partition_tree_new[i,0] = ind1", "= int(partition_tree[pos, 0]) - M right = int(partition_tree[pos, 1]) - M x_left, y_left", "= next_ind #print(feature_order) return feature_order def merge_nodes(values, partition_tree): \"\"\" This merges the two", "cluster_threshold: # if ind1 == 2: # print(clust_inds) # print(ind1, ind2, next_ind, dist[ind1,ind2],", "int(partition_tree[pos, 0]) - M right = int(partition_tree[pos, 1]) - M x_left, y_left =", "clust_inds = np.argsort(clust_order) feature_order = feature_order.copy()#order.apply(Explanation(shap_values)) # print(\"feature_order\", feature_order) for i in range(len(feature_order)-1):", "\"\"\" This merges the two clustered leaf nodes with the smallest total value.", "xout = [] yout = [] _dendrogram_coords_rec(partition_tree.shape[0]-1, leaf_positions, partition_tree, xout, yout) return np.array(xout),", "= _dendrogram_coords_rec(right, leaf_positions, partition_tree, xout, yout) y_curr = partition_tree[pos, 2] xout.append([x_left, x_left, x_right,", "next_ind_pos = j # print(\"next_ind\", next_ind) # print(\"next_ind_pos\", next_ind_pos) # insert the next_ind", "dendrogram where the leaf order is given. Note that scipy can compute these", "def get_sort_order(dist, clust_order, cluster_threshold, feature_order): \"\"\" Returns a sorted order of the values", "tree matrix with the max leaf value in that cluster. 
\"\"\" M =", "ordering.argsort.flip.values return ordering def get_sort_order(dist, clust_order, cluster_threshold, feature_order): \"\"\" Returns a sorted order", "fill_internal_max_values(partition_tree, leaf_values) pos = partition_tree.shape[0]-1 M = partition_tree.shape[0] + 1 if pos <", "ind1 > ind2: tmp = ind1 ind1 = ind2 ind2 = tmp partition_tree_new", "int(partition_tree[i,1]) if ind1 < M and ind2 < M: val = np.abs(values[ind1]) +", "axis=0) # update the counts to be correct fill_counts(partition_tree_new) return partition_tree_new, ind1, ind2", "= right right = left left = tmp sort_inds(partition_tree, leaf_values, left, inds) sort_inds(partition_tree,", "to be correct fill_counts(partition_tree_new) return partition_tree_new, ind1, ind2 def dendrogram_coords(leaf_positions, partition_tree): \"\"\" Returns", "counts to be correct fill_counts(partition_tree_new) return partition_tree_new, ind1, ind2 def dendrogram_coords(leaf_positions, partition_tree): \"\"\"", "the x and y coords of the lines of a dendrogram where the", "xout, yout) y_curr = partition_tree[pos, 2] xout.append([x_left, x_left, x_right, x_right]) yout.append([y_left, y_curr, y_curr,", "partition_tree_new[i,1] -= 1 partition_tree_new = np.delete(partition_tree_new, ptind, axis=0) # update the counts to", "np.array(yout) def _dendrogram_coords_rec(pos, leaf_positions, partition_tree, xout, yout): M = partition_tree.shape[0] + 1 if", "leaf_values[right + M] if left_val < right_val: tmp = right right = left", "xout, yout) return np.array(xout), np.array(yout) def _dendrogram_coords_rec(pos, leaf_positions, partition_tree, xout, yout): M =", "right = left left = tmp sort_inds(partition_tree, leaf_values, left, inds) sort_inds(partition_tree, leaf_values, right,", "feature_order = feature_order.copy()#order.apply(Explanation(shap_values)) # print(\"feature_order\", feature_order) for i in range(len(feature_order)-1): ind1 = feature_order[i]", "int(partition_tree[i,0]) ind2 = 
from .. import Explanation
from ..utils import OpChain
from . import colors
import matplotlib.pyplot as pl
import numpy as np


def convert_color(color):
    try:
        color = pl.get_cmap(color)
    except Exception:
        pass

    if color == "shap_red":
        color = colors.red_rgb
    elif color == "shap_blue":
        color = colors.blue_rgb

    return color


def convert_ordering(ordering, shap_values):
    if issubclass(type(ordering), OpChain):
        ordering = ordering.apply(Explanation(shap_values))
    if issubclass(type(ordering), Explanation):
        if "argsort" in [op["name"] for op in ordering.op_history]:
            ordering = ordering.values
        else:
            ordering = ordering.argsort.flip.values
    return ordering


def get_sort_order(dist, clust_order, cluster_threshold, feature_order):
    """ Returns a sorted order of the values where we respect the clustering
    order when dist[i,j] < cluster_threshold.
    """
    clust_inds = np.argsort(clust_order)
    feature_order = feature_order.copy()

    for i in range(len(feature_order) - 1):
        ind1 = feature_order[i]
        next_ind = feature_order[i + 1]
        next_ind_pos = i + 1
        for j in range(i + 1, len(feature_order)):
            ind2 = feature_order[j]
            if dist[ind1, ind2] <= cluster_threshold:
                if dist[ind1, next_ind] > cluster_threshold or clust_inds[ind2] < clust_inds[next_ind]:
                    next_ind = ind2
                    next_ind_pos = j

        # insert next_ind right after position i
        for j in range(next_ind_pos, i + 1, -1):
            feature_order[j] = feature_order[j - 1]
        feature_order[i + 1] = next_ind

    return feature_order


def merge_nodes(values, partition_tree):
    """ This merges the two clustered leaf nodes with the smallest total value.
    """
    M = partition_tree.shape[0] + 1

    ptind = 0
    min_val = np.inf
    for i in range(partition_tree.shape[0]):
        ind1 = int(partition_tree[i, 0])
        ind2 = int(partition_tree[i, 1])
        if ind1 < M and ind2 < M:
            val = np.abs(values[ind1]) + np.abs(values[ind2])
            if val < min_val:
                min_val = val
                ptind = i

    ind1 = int(partition_tree[ptind, 0])
    ind2 = int(partition_tree[ptind, 1])
    if ind1 > ind2:
        tmp = ind1
        ind1 = ind2
        ind2 = tmp

    partition_tree_new = partition_tree.copy()
    for i in range(partition_tree_new.shape[0]):
        i0 = int(partition_tree_new[i, 0])
        i1 = int(partition_tree_new[i, 1])
        if i0 == ind2:
            partition_tree_new[i, 0] = ind1
        elif i0 > ind2:
            partition_tree_new[i, 0] -= 1
            if i0 == ptind + M:
                partition_tree_new[i, 0] = ind1
            elif i0 > ptind + M:
                partition_tree_new[i, 0] -= 1

        if i1 == ind2:
            partition_tree_new[i, 1] = ind1
        elif i1 > ind2:
            partition_tree_new[i, 1] -= 1
            if i1 == ptind + M:
                partition_tree_new[i, 1] = ind1
            elif i1 > ptind + M:
                partition_tree_new[i, 1] -= 1
    partition_tree_new = np.delete(partition_tree_new, ptind, axis=0)

    # update the counts to be correct
    fill_counts(partition_tree_new)

    return partition_tree_new, ind1, ind2


def dendrogram_coords(leaf_positions, partition_tree):
    """ Returns the x and y coords of the lines of a dendrogram where the
    leaf order is given.

    Note that scipy can compute these coords as well, but it does not allow
    you to easily specify a specific leaf order, hence this reimplementation.
    """
    xout = []
    yout = []
    _dendrogram_coords_rec(partition_tree.shape[0] - 1, leaf_positions, partition_tree, xout, yout)
    return np.array(xout), np.array(yout)


def _dendrogram_coords_rec(pos, leaf_positions, partition_tree, xout, yout):
    M = partition_tree.shape[0] + 1

    if pos < 0:
        return leaf_positions[pos + M], 0

    left = int(partition_tree[pos, 0]) - M
    right = int(partition_tree[pos, 1]) - M

    x_left, y_left = _dendrogram_coords_rec(left, leaf_positions, partition_tree, xout, yout)
    x_right, y_right = _dendrogram_coords_rec(right, leaf_positions, partition_tree, xout, yout)

    y_curr = partition_tree[pos, 2]

    xout.append([x_left, x_left, x_right, x_right])
    yout.append([y_left, y_curr, y_curr, y_right])

    return (x_left + x_right) / 2, y_curr


def fill_internal_max_values(partition_tree, leaf_values):
    """ This fills the fourth column of the partition tree matrix with the
    max leaf value in that cluster.
    """
    M = partition_tree.shape[0] + 1
    new_tree = partition_tree.copy()
    for i in range(new_tree.shape[0]):
        val = 0
        if new_tree[i, 0] < M:
            ind = int(new_tree[i, 0])
            val = max(val, np.abs(leaf_values[ind]))
        else:
            ind = int(new_tree[i, 0]) - M
            val = max(val, np.abs(new_tree[ind, 3]))
        if new_tree[i, 1] < M:
            ind = int(new_tree[i, 1])
            val = max(val, np.abs(leaf_values[ind]))
        else:
            ind = int(new_tree[i, 1]) - M
            val = max(val, np.abs(new_tree[ind, 3]))
        new_tree[i, 3] = val
    return new_tree


def fill_counts(partition_tree):
    """ This updates the fourth column of the partition tree matrix with the
    count of leaves under each internal node.
    """
    M = partition_tree.shape[0] + 1
    for i in range(partition_tree.shape[0]):
        val = 0
        if partition_tree[i, 0] < M:
            val += 1
        else:
            ind = int(partition_tree[i, 0]) - M
            val += partition_tree[ind, 3]
        if partition_tree[i, 1] < M:
            val += 1
        else:
            ind = int(partition_tree[i, 1]) - M
            val += partition_tree[ind, 3]
        partition_tree[i, 3] = val


def sort_inds(partition_tree, leaf_values, pos=None, inds=None):
    if inds is None:
        inds = []

    if pos is None:
        partition_tree = fill_internal_max_values(partition_tree, leaf_values)
        pos = partition_tree.shape[0] - 1

    M = partition_tree.shape[0] + 1

    if pos < 0:
        inds.append(pos + M)
        return

    left = int(partition_tree[pos, 0]) - M
    right = int(partition_tree[pos, 1]) - M

    left_val = partition_tree[left, 3] if left >= 0 else leaf_values[left + M]
    right_val = partition_tree[right, 3] if right >= 0 else leaf_values[right + M]

    if left_val < right_val:
        tmp = right
        right = left
        left = tmp

    sort_inds(partition_tree, leaf_values, left, inds)
    sort_inds(partition_tree, leaf_values, right, inds)

    return inds
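The partition-tree helpers above all assume scipy's linkage-matrix convention: row `i` merges the two node ids in columns 0-1 (ids `>= M` refer back to row `id - M`), column 2 is the merge height, and column 3 holds a per-node statistic. To make that convention concrete, here is a tiny standalone worked example; the 3-leaf tree and the restated counts logic are illustrative only, not part of the module above.

```python
import numpy as np

# Tiny scipy-style linkage matrix for M = 3 leaves: row i merges the two
# child ids in columns 0-1 (ids >= M refer back to row id - M) into a new
# internal node with id M + i; column 3 will hold the leaf counts.
tree = np.array([
    [0., 1., 0.5, 0.],  # merge leaves 0 and 1 -> internal node 3
    [3., 2., 1.0, 0.],  # merge node 3 with leaf 2 -> internal node 4
])

def fill_counts(partition_tree):
    """Fill column 3 with the number of leaves under each internal node."""
    M = partition_tree.shape[0] + 1
    for i in range(partition_tree.shape[0]):
        val = 0
        for j in (0, 1):
            child = int(partition_tree[i, j])
            # child ids below M are leaves; otherwise reuse the child's count,
            # which is already filled in because children precede parents
            val += 1 if child < M else partition_tree[child - M, 3]
        partition_tree[i, 3] = val

fill_counts(tree)
print(tree[:, 3])  # -> [2. 3.]
```

Because linkage rows are ordered children-before-parents, a single forward pass suffices; the recursive helpers above exploit the same property.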
<filename>samples/pose_estimation/solver.py
import cv2
import os
import trimesh
import numpy as np
from paz.core import Pose6D
from paz.core.ops import Camera
import paz.processors as pr
from paz.core import ops
import matplotlib.pyplot as plt

MESH_DIR = '/home/incendio/Documents/Thesis/YCBVideo_detector/color_meshes'
GREEN = (0, 255, 0)


class PnPSolver():
    """ Implements PnP RANSAC algorithm to compute rotation and translation
    vector for a given RGB mask of an object

    # Arguments:
        rgb_mask: RGB mask of object
        true_id: Int
        class_name: class name of object. String
        dimension: (width, height) for draw_cube
        size: size of the mask
    """
    def __init__(self, rgb_mask, true_id, class_name, color=GREEN,
                 dimension=[.1, .1], size=(320, 320)):
        self.rgb_mask = rgb_mask
        self.size = size
        self.dimension = dimension
        self.camera = self.compute_camera_matrix()
        self.id = true_id
        self.class_name = class_name
        self.vertex_colors = self.get_vertex_colors()
        self.color = color
        self.world_to_camera = np.array(
            [[ 0.70710678, 0., -0.70710678, 0.01674194],
             [-0.40824829, 0.81649658, -0.40824829, -0.01203142],
             [ 0.57735027, 0.57735027, 0.57735027, -1.73205081],
             [ 0., 0., 0., 1.]])

    def get_vertex_colors(self):
        for name in os.listdir(MESH_DIR):
            class_id = name.split('_')[0]
            if int(class_id) == self.id:
                mesh_path = os.path.join(MESH_DIR, name)
                self.mesh = trimesh.load(mesh_path)
                vertex_colors = self.mesh.visual.vertex_colors[:, :3]
        return vertex_colors

    def solve_PnP(self):
        points3d, image2D = self.get_points()
        assert image2D.shape[0] == points3d.shape[0]
        (_, rotation, translation, inliers) = ops.solve_PNP(
            points3d, image2D, self.camera, ops.UPNP)
        pose6D = Pose6D.from_rotation_vector(rotation, translation, self.class_name)
        return pose6D

    def visualize_3D_boxes(self, image, pose6D):
        dimensions = {self.class_name: self.dimension}
        pose = {'pose6D': pose6D, 'image': image, 'color': self.color}
        draw = pr.DrawBoxes3D(self.camera, dimensions)
        args, projected_points = draw(pose)
        return args, projected_points

    def get_points(self):
        points3d, image2d = [], []
        rows, cols, channels = np.where(self.rgb_mask > 0)
        for index in range(len(rows)):
            x, y = rows[index], cols[index]
            R, G, B = self.rgb_mask[x, y, :]
            matches = np.unique(np.array(self.get_matches(x, y)))
            if len(matches) == 1:
                image2d.append([y, x])
                vertex = self.mesh.vertices[matches[0], :]
                points3d.append(vertex)
            # x_index = np.where(self.vertex_colors == np.stack([R, G, B]))[0]
            # mid_index = int(len(x_index) / 2)
            # points3d.append(self.mesh.vertices[x_index[mid_index], :])
        image2d = np.array(image2d).astype(np.float32)    # (N, 2)
        points3d = np.array(points3d).astype(np.float32)  # (N, 3)
        return points3d, image2d

    def get_matches(self, x, y):
        R, G, B = self.rgb_mask[x, y, :]
        r_index = np.where(self.vertex_colors[:, 0] == R)[0]
        g_index = np.where(self.vertex_colors[:, 1] == G)[0]
        b_index = np.where(self.vertex_colors[:, 2] == B)[0]
        matches = [r_index, g_index, b_index]
        intersection = list(set(matches[0]).intersection(*matches))
        return intersection

    def get_model_point(self):
        rows, cols, channels = np.where(self.rgb_mask > 0)
        x, y = int(np.mean(rows)), int(np.mean(cols))
        R, G, B = self.rgb_mask[x, y, 0], self.rgb_mask[x, y, 1], self.rgb_mask[x, y, 2]
        x_index = np.where(self.vertex_colors == np.stack([R, G, B]))[0]
        mid_index = int(len(x_index) / 2)
        return self.mesh.vertices[x_index[mid_index], :]

    def compute_camera_matrix(self):
        focal_length = self.size[1]
        camera_center = (self.size[1] / 2, self.size[0] / 2)
        camera_matrix = np.array([[focal_length, 0, camera_center[0]],
                                  [0, focal_length, camera_center[1]],
                                  [0, 0, 1]], dtype='double')
        camera = Camera(0)
        camera.intrinsics = camera_matrix
        camera.distortion = np.zeros((4, 1))
        return camera

    def draw_axis(self, mask, projected_points, thickness=2):
        rows, cols, channels = np.where(mask > 0)
        x, y = (int(np.mean(rows)), int(np.mean(cols)))
        center = (y, x)
        image = mask.copy()
        R, G, B = (255, 0, 0), (0, 255, 0), (0, 0, 255)
        projected_points = projected_points.astype(np.int32)
        image = cv2.line(image, center, tuple(projected_points[0].ravel()), R, thickness)
        image = cv2.line(image, center, tuple(projected_points[1].ravel()), G, thickness)
        image = cv2.line(image, center, tuple(projected_points[2].ravel()), B, thickness)
        return image

    def get_neighbors(self, image, row, col, window=1):
        neighbor = image[row - window : row + window + 1,
                         col - window : col + window + 1]
        color_values = np.reshape(neighbor, (9, 3))
        return color_values
255, 0), (0, 0, 255) projected_points = projected_points.astype(np.int32) image = cv2.line(image,", "channels = np.where(mask > 0) x, y = (int(np.mean(rows)), int(np.mean(cols))) center = (y,", "PnPSolver(): \"\"\" Implements PnP RANSAC algorithm to compute rotation and translation vector for", "for index in range(len(rows)): x, y = rows[index], cols[index] R, G, B =", "# x_index = np.where(self.vertex_colors == np.stack([R, G, B]))[0] # mid_index = int(len(x_index) /", "np.where(self.vertex_colors == np.stack([R, G, B]))[0] mid_index = int(len(x_index) / 2) return self.mesh.vertices[x_index[mid_index], :]", "(int(np.mean(rows)), int(np.mean(cols))) center = (y, x) image = mask.copy() R, G, B =", "channels = np.where(self.rgb_mask > 0) x, y = int(np.mean(rows)), int(np.mean(cols)) R, G, B", "true_id, class_name, color=GREEN, dimension=[.1, .1], size=(320, 320)): self.rgb_mask = rgb_mask self.size = size", "self.mesh.vertices[x_index[mid_index], :] def compute_camera_matrix(self): focal_length = self.size[1] camera_center = (self.size[1] / 2, self.size[0]", "self.size[0] / 2) camera_matrix = np.array([[focal_length, 0, camera_center[0]], [0, focal_length, camera_center[1]], [0, 0,", "in range(len(rows)): x, y = rows[index], cols[index] R, G, B = self.rgb_mask[x, y,", "and translation vector for a given RGB mask of an object # Arguments:", "int(len(x_index) / 2) return self.mesh.vertices[x_index[mid_index], :] def compute_camera_matrix(self): focal_length = self.size[1] camera_center =", "x, y = int(np.mean(rows)), int(np.mean(cols)) R, G, B = self.rgb_mask[x, y, 0], self.rgb_mask[x,", "points3d.append(self.mesh.vertices[x_index[mid_index], :]) image2d = np.array(image2d).astype(np.float32) #(N, 2) points3d = np.array(points3d).astype(np.float32) #(N, 3) return", "B, thickness) return image def get_neighbors(self, image, row, col, window=1): neighbor = image[row", "center, tuple(projected_points[1].ravel()), G, thickness) image = cv2.line(image, center, 
tuple(projected_points[2].ravel()), B, thickness) return image", "for draw_cube size: size of the mask \"\"\" def __init__(self, rgb_mask, true_id, class_name,", "neighbor = image[row - window : row + window + 1, col -", ":] r_index = np.where(self.vertex_colors[:, 0] == R)[0] g_index = np.where(self.vertex_colors[:, 1] == G)[0]", "= cv2.line(image, center, tuple(projected_points[2].ravel()), B, thickness) return image def get_neighbors(self, image, row, col,", "args, projected_points = draw(pose) return args, projected_points def get_points(self): points3d, image2d = [],", "self.dimension} pose = {'pose6D': pose6D, 'image': image, 'color': self.color} draw = pr.DrawBoxes3D(self.camera, dimensions)", ":3] return vertex_colors def solve_PnP(self): points3d, image2D = self.get_points() assert image2D.shape[0] == points3d.shape[0]", "[ 0.57735027, 0.57735027, 0.57735027, -1.73205081], [ 0., 0., 0., 1.]]) def get_vertex_colors(self): for", "of object true_id: Int class_name: class name of object. String dimension: (width, height)", "def get_matches(self, x, y): R, G, B = self.rgb_mask[x, y, :] r_index =", "= rows[index], cols[index] R, G, B = self.rgb_mask[x, y, :] matches = np.unique(np.array(self.get_matches(x,", "np.array(image2d).astype(np.float32) #(N, 2) points3d = np.array(points3d).astype(np.float32) #(N, 3) return points3d, image2d def get_matches(self,", ":] matches = np.unique(np.array(self.get_matches(x, y))) if len(matches) == 1: image2d.append([y, x]) vertex =", "tuple(projected_points[1].ravel()), G, thickness) image = cv2.line(image, center, tuple(projected_points[2].ravel()), B, thickness) return image def", "G, B = self.rgb_mask[x, y, :] r_index = np.where(self.vertex_colors[:, 0] == R)[0] g_index", "object true_id: Int class_name: class name of object. 
String dimension: (width, height) for", "rows, cols, channels = np.where(self.rgb_mask > 0) x, y = int(np.mean(rows)), int(np.mean(cols)) R,", "> 0) x, y = int(np.mean(rows)), int(np.mean(cols)) R, G, B = self.rgb_mask[x, y,", "dimensions = {self.class_name: self.dimension} pose = {'pose6D': pose6D, 'image': image, 'color': self.color} draw", ".1], size=(320, 320)): self.rgb_mask = rgb_mask self.size = size self.dimension = dimension self.camera", "as plt MESH_DIR = '/home/incendio/Documents/Thesis/YCBVideo_detector/color_meshes' GREEN = (0, 255, 0) class PnPSolver(): \"\"\"", "mask of an object # Arguments: rgb_mask: RGB mask of object true_id: Int", "= list(set(matches[0]).intersection(*matches)) return intersection def get_model_point(self): rows, cols, channels = np.where(self.rgb_mask > 0)", "rgb_mask, true_id, class_name, color=GREEN, dimension=[.1, .1], size=(320, 320)): self.rgb_mask = rgb_mask self.size =", "rows[index], cols[index] R, G, B = self.rgb_mask[x, y, :] matches = np.unique(np.array(self.get_matches(x, y)))", "matches = [r_index, g_index, b_index] intersection = list(set(matches[0]).intersection(*matches)) return intersection def get_model_point(self): rows,", "to compute rotation and translation vector for a given RGB mask of an", "np.where(self.rgb_mask > 0) x, y = int(np.mean(rows)), int(np.mean(cols)) R, G, B = self.rgb_mask[x,", "name in os.listdir(MESH_DIR): class_id = name.split('_')[0] if int(class_id) == self.id: mesh_path = os.path.join(MESH_DIR,", "dimensions) args, projected_points = draw(pose) return args, projected_points def get_points(self): points3d, image2d =", "255, 0) class PnPSolver(): \"\"\" Implements PnP RANSAC algorithm to compute rotation and", "G, B]))[0] mid_index = int(len(x_index) / 2) return self.mesh.vertices[x_index[mid_index], :] def compute_camera_matrix(self): focal_length", "translation, self.class_name) return pose6D def visualize_3D_boxes(self, image, pose6D): dimensions = {self.class_name: self.dimension} 
pose", "len(matches) == 1: image2d.append([y, x]) vertex = self.mesh.vertices[matches[0], :] points3d.append(vertex) # x_index =", "draw_cube size: size of the mask \"\"\" def __init__(self, rgb_mask, true_id, class_name, color=GREEN,", "translation, inliers) = ops.solve_PNP(points3d, image2D, self.camera, ops.UPNP) pose6D = Pose6D.from_rotation_vector(rotation, translation, self.class_name) return", "self.dimension = dimension self.camera = self.compute_camera_matrix() self.id = true_id self.class_name = class_name self.vertex_colors", "== B)[0] matches = [r_index, g_index, b_index] intersection = list(set(matches[0]).intersection(*matches)) return intersection def", "R, G, B = (255, 0, 0), (0, 255, 0), (0, 0, 255)", "2) return self.mesh.vertices[x_index[mid_index], :] def compute_camera_matrix(self): focal_length = self.size[1] camera_center = (self.size[1] /", "(width, height) for draw_cube size: size of the mask \"\"\" def __init__(self, rgb_mask,", "image2D, self.camera, ops.UPNP) pose6D = Pose6D.from_rotation_vector(rotation, translation, self.class_name) return pose6D def visualize_3D_boxes(self, image,", "rows, cols, channels = np.where(mask > 0) x, y = (int(np.mean(rows)), int(np.mean(cols))) center", "window : col + window + 1] color_values = np.reshape(neighbor, (9, 3)) return", "cv2.line(image, center, tuple(projected_points[2].ravel()), B, thickness) return image def get_neighbors(self, image, row, col, window=1):", "= class_name self.vertex_colors = self.get_vertex_colors() self.color = color self.world_to_camera = np.array([[ 0.70710678, 0.,", "class name of object. 
String dimension: (width, height) for draw_cube size: size of", "\"\"\" Implements PnP RANSAC algorithm to compute rotation and translation vector for a", "= draw(pose) return args, projected_points def get_points(self): points3d, image2d = [], [] rows,", "0, 0), (0, 255, 0), (0, 0, 255) projected_points = projected_points.astype(np.int32) image =", "paz.core.ops import Camera import paz.processors as pr from paz.core import ops import matplotlib.pyplot", "0.57735027, 0.57735027, -1.73205081], [ 0., 0., 0., 1.]]) def get_vertex_colors(self): for name in", "0.70710678, 0., -0.70710678, 0.01674194], [-0.40824829, 0.81649658, -0.40824829, -0.01203142], [ 0.57735027, 0.57735027, 0.57735027, -1.73205081],", "get_vertex_colors(self): for name in os.listdir(MESH_DIR): class_id = name.split('_')[0] if int(class_id) == self.id: mesh_path", "trimesh.load(mesh_path) vertex_colors = self.mesh.visual.vertex_colors[:, :3] return vertex_colors def solve_PnP(self): points3d, image2D = self.get_points()", "solve_PnP(self): points3d, image2D = self.get_points() assert image2D.shape[0] == points3d.shape[0] (_, rotation, translation, inliers)", "= self.mesh.visual.vertex_colors[:, :3] return vertex_colors def solve_PnP(self): points3d, image2D = self.get_points() assert image2D.shape[0]", "G, B = (255, 0, 0), (0, 255, 0), (0, 0, 255) projected_points", "y = int(np.mean(rows)), int(np.mean(cols)) R, G, B = self.rgb_mask[x, y, 0], self.rgb_mask[x, y,", "mask \"\"\" def __init__(self, rgb_mask, true_id, class_name, color=GREEN, dimension=[.1, .1], size=(320, 320)): self.rgb_mask", "== R)[0] g_index = np.where(self.vertex_colors[:, 1] == G)[0] b_index = np.where(self.vertex_colors[:, 2] ==", "'color': self.color} draw = pr.DrawBoxes3D(self.camera, dimensions) args, projected_points = draw(pose) return args, projected_points", "Implements PnP RANSAC algorithm to compute rotation and translation vector for a given", "= self.rgb_mask[x, y, :] r_index = np.where(self.vertex_colors[:, 0] == 
R)[0] g_index = np.where(self.vertex_colors[:,", "os import trimesh import numpy as np from paz.core import Pose6D from paz.core.ops", "= '/home/incendio/Documents/Thesis/YCBVideo_detector/color_meshes' GREEN = (0, 255, 0) class PnPSolver(): \"\"\" Implements PnP RANSAC", "0) x, y = (int(np.mean(rows)), int(np.mean(cols))) center = (y, x) image = mask.copy()", "Camera(0) camera.intrinsics = camera_matrix camera.distortion = np.zeros((4, 1)) return camera def draw_axis(self, mask,", "def __init__(self, rgb_mask, true_id, class_name, color=GREEN, dimension=[.1, .1], size=(320, 320)): self.rgb_mask = rgb_mask", "image = mask.copy() R, G, B = (255, 0, 0), (0, 255, 0),", "col - window : col + window + 1] color_values = np.reshape(neighbor, (9,", "self.mesh.visual.vertex_colors[:, :3] return vertex_colors def solve_PnP(self): points3d, image2D = self.get_points() assert image2D.shape[0] ==", "y, 2] x_index = np.where(self.vertex_colors == np.stack([R, G, B]))[0] mid_index = int(len(x_index) /", "0.57735027, 0.57735027, 0.57735027, -1.73205081], [ 0., 0., 0., 1.]]) def get_vertex_colors(self): for name", "return pose6D def visualize_3D_boxes(self, image, pose6D): dimensions = {self.class_name: self.dimension} pose = {'pose6D':", "== G)[0] b_index = np.where(self.vertex_colors[:, 2] == B)[0] matches = [r_index, g_index, b_index]", "self.mesh.vertices[matches[0], :] points3d.append(vertex) # x_index = np.where(self.vertex_colors == np.stack([R, G, B]))[0] # mid_index", "color=GREEN, dimension=[.1, .1], size=(320, 320)): self.rgb_mask = rgb_mask self.size = size self.dimension =", "(255, 0, 0), (0, 255, 0), (0, 0, 255) projected_points = projected_points.astype(np.int32) image", "self.mesh = trimesh.load(mesh_path) vertex_colors = self.mesh.visual.vertex_colors[:, :3] return vertex_colors def solve_PnP(self): points3d, image2D", "= dimension self.camera = self.compute_camera_matrix() self.id = true_id self.class_name = class_name self.vertex_colors =", "b_index = 
np.where(self.vertex_colors[:, 2] == B)[0] matches = [r_index, g_index, b_index] intersection =", "os.listdir(MESH_DIR): class_id = name.split('_')[0] if int(class_id) == self.id: mesh_path = os.path.join(MESH_DIR, name) self.mesh", "compute_camera_matrix(self): focal_length = self.size[1] camera_center = (self.size[1] / 2, self.size[0] / 2) camera_matrix", ":]) image2d = np.array(image2d).astype(np.float32) #(N, 2) points3d = np.array(points3d).astype(np.float32) #(N, 3) return points3d,", "R, G, B = self.rgb_mask[x, y, 0], self.rgb_mask[x, y, 1], self.rgb_mask[x, y, 2]", "draw(pose) return args, projected_points def get_points(self): points3d, image2d = [], [] rows, cols,", "image2D = self.get_points() assert image2D.shape[0] == points3d.shape[0] (_, rotation, translation, inliers) = ops.solve_PNP(points3d,", "= np.array(points3d).astype(np.float32) #(N, 3) return points3d, image2d def get_matches(self, x, y): R, G,", "return camera def draw_axis(self, mask, projected_points, thickness=2): rows, cols, channels = np.where(mask >", "image2d = [], [] rows, cols, channels = np.where(self.rgb_mask > 0) for index", "y = rows[index], cols[index] R, G, B = self.rgb_mask[x, y, :] matches =", "paz.core import ops import matplotlib.pyplot as plt MESH_DIR = '/home/incendio/Documents/Thesis/YCBVideo_detector/color_meshes' GREEN = (0,", "> 0) x, y = (int(np.mean(rows)), int(np.mean(cols))) center = (y, x) image =", "rgb_mask self.size = size self.dimension = dimension self.camera = self.compute_camera_matrix() self.id = true_id", "3) return points3d, image2d def get_matches(self, x, y): R, G, B = self.rgb_mask[x,", "= color self.world_to_camera = np.array([[ 0.70710678, 0., -0.70710678, 0.01674194], [-0.40824829, 0.81649658, -0.40824829, -0.01203142],", "translation vector for a given RGB mask of an object # Arguments: rgb_mask:", "tuple(projected_points[2].ravel()), B, thickness) return image def get_neighbors(self, image, row, col, window=1): neighbor =", "trimesh import 
numpy as np from paz.core import Pose6D from paz.core.ops import Camera", "int(np.mean(rows)), int(np.mean(cols)) R, G, B = self.rgb_mask[x, y, 0], self.rgb_mask[x, y, 1], self.rgb_mask[x,", "= cv2.line(image, center, tuple(projected_points[0].ravel()), R, thickness) image = cv2.line(image, center, tuple(projected_points[1].ravel()), G, thickness)", "size=(320, 320)): self.rgb_mask = rgb_mask self.size = size self.dimension = dimension self.camera =", "np.stack([R, G, B]))[0] # mid_index = int(len(x_index) / 2) # points3d.append(self.mesh.vertices[x_index[mid_index], :]) image2d", "0.81649658, -0.40824829, -0.01203142], [ 0.57735027, 0.57735027, 0.57735027, -1.73205081], [ 0., 0., 0., 1.]])", "matplotlib.pyplot as plt MESH_DIR = '/home/incendio/Documents/Thesis/YCBVideo_detector/color_meshes' GREEN = (0, 255, 0) class PnPSolver():", "self.camera, ops.UPNP) pose6D = Pose6D.from_rotation_vector(rotation, translation, self.class_name) return pose6D def visualize_3D_boxes(self, image, pose6D):", "= trimesh.load(mesh_path) vertex_colors = self.mesh.visual.vertex_colors[:, :3] return vertex_colors def solve_PnP(self): points3d, image2D =", "= {'pose6D': pose6D, 'image': image, 'color': self.color} draw = pr.DrawBoxes3D(self.camera, dimensions) args, projected_points", "def get_vertex_colors(self): for name in os.listdir(MESH_DIR): class_id = name.split('_')[0] if int(class_id) == self.id:", "[r_index, g_index, b_index] intersection = list(set(matches[0]).intersection(*matches)) return intersection def get_model_point(self): rows, cols, channels", "self.vertex_colors = self.get_vertex_colors() self.color = color self.world_to_camera = np.array([[ 0.70710678, 0., -0.70710678, 0.01674194],", "class_name, color=GREEN, dimension=[.1, .1], size=(320, 320)): self.rgb_mask = rgb_mask self.size = size self.dimension", "= int(len(x_index) / 2) return self.mesh.vertices[x_index[mid_index], :] def compute_camera_matrix(self): focal_length = self.size[1] camera_center", "GREEN = (0, 
255, 0) class PnPSolver(): \"\"\" Implements PnP RANSAC algorithm to", "0., 1.]]) def get_vertex_colors(self): for name in os.listdir(MESH_DIR): class_id = name.split('_')[0] if int(class_id)", "true_id self.class_name = class_name self.vertex_colors = self.get_vertex_colors() self.color = color self.world_to_camera = np.array([[", "dimension: (width, height) for draw_cube size: size of the mask \"\"\" def __init__(self,", "cols, channels = np.where(mask > 0) x, y = (int(np.mean(rows)), int(np.mean(cols))) center =", "def draw_axis(self, mask, projected_points, thickness=2): rows, cols, channels = np.where(mask > 0) x,", "np.stack([R, G, B]))[0] mid_index = int(len(x_index) / 2) return self.mesh.vertices[x_index[mid_index], :] def compute_camera_matrix(self):", "= np.where(mask > 0) x, y = (int(np.mean(rows)), int(np.mean(cols))) center = (y, x)", "np from paz.core import Pose6D from paz.core.ops import Camera import paz.processors as pr", ":] points3d.append(vertex) # x_index = np.where(self.vertex_colors == np.stack([R, G, B]))[0] # mid_index =", "255) projected_points = projected_points.astype(np.int32) image = cv2.line(image, center, tuple(projected_points[0].ravel()), R, thickness) image =", "the mask \"\"\" def __init__(self, rgb_mask, true_id, class_name, color=GREEN, dimension=[.1, .1], size=(320, 320)):", "points3d.shape[0] (_, rotation, translation, inliers) = ops.solve_PNP(points3d, image2D, self.camera, ops.UPNP) pose6D = Pose6D.from_rotation_vector(rotation,", "get_model_point(self): rows, cols, channels = np.where(self.rgb_mask > 0) x, y = int(np.mean(rows)), int(np.mean(cols))", "row, col, window=1): neighbor = image[row - window : row + window +", "pose = {'pose6D': pose6D, 'image': image, 'color': self.color} draw = pr.DrawBoxes3D(self.camera, dimensions) args,", "2) # points3d.append(self.mesh.vertices[x_index[mid_index], :]) image2d = np.array(image2d).astype(np.float32) #(N, 2) points3d = np.array(points3d).astype(np.float32) #(N,", 
"projected_points.astype(np.int32) image = cv2.line(image, center, tuple(projected_points[0].ravel()), R, thickness) image = cv2.line(image, center, tuple(projected_points[1].ravel()),", "-0.01203142], [ 0.57735027, 0.57735027, 0.57735027, -1.73205081], [ 0., 0., 0., 1.]]) def get_vertex_colors(self):", "ops.solve_PNP(points3d, image2D, self.camera, ops.UPNP) pose6D = Pose6D.from_rotation_vector(rotation, translation, self.class_name) return pose6D def visualize_3D_boxes(self,", "0, camera_center[0]], [0, focal_length, camera_center[1]], [0, 0, 1]], dtype='double') camera = Camera(0) camera.intrinsics", "= rgb_mask self.size = size self.dimension = dimension self.camera = self.compute_camera_matrix() self.id =", "mask, projected_points, thickness=2): rows, cols, channels = np.where(mask > 0) x, y =", "def visualize_3D_boxes(self, image, pose6D): dimensions = {self.class_name: self.dimension} pose = {'pose6D': pose6D, 'image':", "y))) if len(matches) == 1: image2d.append([y, x]) vertex = self.mesh.vertices[matches[0], :] points3d.append(vertex) #", "as np from paz.core import Pose6D from paz.core.ops import Camera import paz.processors as", "1: image2d.append([y, x]) vertex = self.mesh.vertices[matches[0], :] points3d.append(vertex) # x_index = np.where(self.vertex_colors ==", "assert image2D.shape[0] == points3d.shape[0] (_, rotation, translation, inliers) = ops.solve_PNP(points3d, image2D, self.camera, ops.UPNP)", "= mask.copy() R, G, B = (255, 0, 0), (0, 255, 0), (0,", "pr.DrawBoxes3D(self.camera, dimensions) args, projected_points = draw(pose) return args, projected_points def get_points(self): points3d, image2d", "algorithm to compute rotation and translation vector for a given RGB mask of", "index in range(len(rows)): x, y = rows[index], cols[index] R, G, B = self.rgb_mask[x,", "from paz.core import ops import matplotlib.pyplot as plt MESH_DIR = '/home/incendio/Documents/Thesis/YCBVideo_detector/color_meshes' GREEN =", "np.array(points3d).astype(np.float32) 
#(N, 3) return points3d, image2d def get_matches(self, x, y): R, G, B", "dtype='double') camera = Camera(0) camera.intrinsics = camera_matrix camera.distortion = np.zeros((4, 1)) return camera", "R, thickness) image = cv2.line(image, center, tuple(projected_points[1].ravel()), G, thickness) image = cv2.line(image, center,", "1], self.rgb_mask[x, y, 2] x_index = np.where(self.vertex_colors == np.stack([R, G, B]))[0] mid_index =", "from paz.core import Pose6D from paz.core.ops import Camera import paz.processors as pr from", "2) points3d = np.array(points3d).astype(np.float32) #(N, 3) return points3d, image2d def get_matches(self, x, y):", "= np.where(self.vertex_colors[:, 1] == G)[0] b_index = np.where(self.vertex_colors[:, 2] == B)[0] matches =", "import Pose6D from paz.core.ops import Camera import paz.processors as pr from paz.core import", "0] == R)[0] g_index = np.where(self.vertex_colors[:, 1] == G)[0] b_index = np.where(self.vertex_colors[:, 2]", "-1.73205081], [ 0., 0., 0., 1.]]) def get_vertex_colors(self): for name in os.listdir(MESH_DIR): class_id", "= ops.solve_PNP(points3d, image2D, self.camera, ops.UPNP) pose6D = Pose6D.from_rotation_vector(rotation, translation, self.class_name) return pose6D def", "object. 
String dimension: (width, height) for draw_cube size: size of the mask \"\"\"", "pose6D, 'image': image, 'color': self.color} draw = pr.DrawBoxes3D(self.camera, dimensions) args, projected_points = draw(pose)", "y = (int(np.mean(rows)), int(np.mean(cols))) center = (y, x) image = mask.copy() R, G,", "size of the mask \"\"\" def __init__(self, rgb_mask, true_id, class_name, color=GREEN, dimension=[.1, .1],", "= Pose6D.from_rotation_vector(rotation, translation, self.class_name) return pose6D def visualize_3D_boxes(self, image, pose6D): dimensions = {self.class_name:", "vertex_colors def solve_PnP(self): points3d, image2D = self.get_points() assert image2D.shape[0] == points3d.shape[0] (_, rotation,", "Camera import paz.processors as pr from paz.core import ops import matplotlib.pyplot as plt", "self.color} draw = pr.DrawBoxes3D(self.camera, dimensions) args, projected_points = draw(pose) return args, projected_points def", "range(len(rows)): x, y = rows[index], cols[index] R, G, B = self.rgb_mask[x, y, :]", "projected_points def get_points(self): points3d, image2d = [], [] rows, cols, channels = np.where(self.rgb_mask", "col, window=1): neighbor = image[row - window : row + window + 1,", "points3d, image2d def get_matches(self, x, y): R, G, B = self.rgb_mask[x, y, :]", "import matplotlib.pyplot as plt MESH_DIR = '/home/incendio/Documents/Thesis/YCBVideo_detector/color_meshes' GREEN = (0, 255, 0) class", "0) x, y = int(np.mean(rows)), int(np.mean(cols)) R, G, B = self.rgb_mask[x, y, 0],", "= image[row - window : row + window + 1, col - window", "self.color = color self.world_to_camera = np.array([[ 0.70710678, 0., -0.70710678, 0.01674194], [-0.40824829, 0.81649658, -0.40824829,", "import os import trimesh import numpy as np from paz.core import Pose6D from", "Pose6D from paz.core.ops import Camera import paz.processors as pr from paz.core import ops", "= [r_index, g_index, b_index] intersection = list(set(matches[0]).intersection(*matches)) return intersection def 
get_model_point(self): rows, cols,", "def get_points(self): points3d, image2d = [], [] rows, cols, channels = np.where(self.rgb_mask >", "cols, channels = np.where(self.rgb_mask > 0) for index in range(len(rows)): x, y =", "= size self.dimension = dimension self.camera = self.compute_camera_matrix() self.id = true_id self.class_name =", "= {self.class_name: self.dimension} pose = {'pose6D': pose6D, 'image': image, 'color': self.color} draw =", "/ 2) camera_matrix = np.array([[focal_length, 0, camera_center[0]], [0, focal_length, camera_center[1]], [0, 0, 1]],", "self.class_name) return pose6D def visualize_3D_boxes(self, image, pose6D): dimensions = {self.class_name: self.dimension} pose =", "import ops import matplotlib.pyplot as plt MESH_DIR = '/home/incendio/Documents/Thesis/YCBVideo_detector/color_meshes' GREEN = (0, 255,", "an object # Arguments: rgb_mask: RGB mask of object true_id: Int class_name: class", "self.id = true_id self.class_name = class_name self.vertex_colors = self.get_vertex_colors() self.color = color self.world_to_camera", "of an object # Arguments: rgb_mask: RGB mask of object true_id: Int class_name:", "import numpy as np from paz.core import Pose6D from paz.core.ops import Camera import", "(_, rotation, translation, inliers) = ops.solve_PNP(points3d, image2D, self.camera, ops.UPNP) pose6D = Pose6D.from_rotation_vector(rotation, translation,", "image2d.append([y, x]) vertex = self.mesh.vertices[matches[0], :] points3d.append(vertex) # x_index = np.where(self.vertex_colors == np.stack([R,", "class_name: class name of object. 
String dimension: (width, height) for draw_cube size: size", "import Camera import paz.processors as pr from paz.core import ops import matplotlib.pyplot as", "projected_points = draw(pose) return args, projected_points def get_points(self): points3d, image2d = [], []", "= name.split('_')[0] if int(class_id) == self.id: mesh_path = os.path.join(MESH_DIR, name) self.mesh = trimesh.load(mesh_path)", "def compute_camera_matrix(self): focal_length = self.size[1] camera_center = (self.size[1] / 2, self.size[0] / 2)", "focal_length = self.size[1] camera_center = (self.size[1] / 2, self.size[0] / 2) camera_matrix =", "return points3d, image2d def get_matches(self, x, y): R, G, B = self.rgb_mask[x, y,", "mid_index = int(len(x_index) / 2) # points3d.append(self.mesh.vertices[x_index[mid_index], :]) image2d = np.array(image2d).astype(np.float32) #(N, 2)", "-0.40824829, -0.01203142], [ 0.57735027, 0.57735027, 0.57735027, -1.73205081], [ 0., 0., 0., 1.]]) def", "camera def draw_axis(self, mask, projected_points, thickness=2): rows, cols, channels = np.where(mask > 0)", "rotation and translation vector for a given RGB mask of an object #", "return intersection def get_model_point(self): rows, cols, channels = np.where(self.rgb_mask > 0) x, y", "camera_center[1]], [0, 0, 1]], dtype='double') camera = Camera(0) camera.intrinsics = camera_matrix camera.distortion =", "for a given RGB mask of an object # Arguments: rgb_mask: RGB mask", "- window : col + window + 1] color_values = np.reshape(neighbor, (9, 3))", "vertex = self.mesh.vertices[matches[0], :] points3d.append(vertex) # x_index = np.where(self.vertex_colors == np.stack([R, G, B]))[0]", "points3d, image2d = [], [] rows, cols, channels = np.where(self.rgb_mask > 0) for", "points3d, image2D = self.get_points() assert image2D.shape[0] == points3d.shape[0] (_, rotation, translation, inliers) =", "#(N, 3) return points3d, image2d def get_matches(self, x, y): R, G, B =", "[0, focal_length, camera_center[1]], [0, 0, 1]], 
dtype='double') camera = Camera(0) camera.intrinsics = camera_matrix", "self.rgb_mask[x, y, 0], self.rgb_mask[x, y, 1], self.rgb_mask[x, y, 2] x_index = np.where(self.vertex_colors ==", "+ 1, col - window : col + window + 1] color_values =", "from paz.core.ops import Camera import paz.processors as pr from paz.core import ops import", "self.rgb_mask = rgb_mask self.size = size self.dimension = dimension self.camera = self.compute_camera_matrix() self.id", "thickness) image = cv2.line(image, center, tuple(projected_points[1].ravel()), G, thickness) image = cv2.line(image, center, tuple(projected_points[2].ravel()),", "# mid_index = int(len(x_index) / 2) # points3d.append(self.mesh.vertices[x_index[mid_index], :]) image2d = np.array(image2d).astype(np.float32) #(N,", "color self.world_to_camera = np.array([[ 0.70710678, 0., -0.70710678, 0.01674194], [-0.40824829, 0.81649658, -0.40824829, -0.01203142], [", "b_index] intersection = list(set(matches[0]).intersection(*matches)) return intersection def get_model_point(self): rows, cols, channels = np.where(self.rgb_mask", "mask of object true_id: Int class_name: class name of object. 
String dimension: (width,", "MESH_DIR = '/home/incendio/Documents/Thesis/YCBVideo_detector/color_meshes' GREEN = (0, 255, 0) class PnPSolver(): \"\"\" Implements PnP", "visualize_3D_boxes(self, image, pose6D): dimensions = {self.class_name: self.dimension} pose = {'pose6D': pose6D, 'image': image,", "inliers) = ops.solve_PNP(points3d, image2D, self.camera, ops.UPNP) pose6D = Pose6D.from_rotation_vector(rotation, translation, self.class_name) return pose6D", "R, G, B = self.rgb_mask[x, y, :] r_index = np.where(self.vertex_colors[:, 0] == R)[0]", "name) self.mesh = trimesh.load(mesh_path) vertex_colors = self.mesh.visual.vertex_colors[:, :3] return vertex_colors def solve_PnP(self): points3d,", "center = (y, x) image = mask.copy() R, G, B = (255, 0,", "= int(np.mean(rows)), int(np.mean(cols)) R, G, B = self.rgb_mask[x, y, 0], self.rgb_mask[x, y, 1],", "= int(len(x_index) / 2) # points3d.append(self.mesh.vertices[x_index[mid_index], :]) image2d = np.array(image2d).astype(np.float32) #(N, 2) points3d", "camera.intrinsics = camera_matrix camera.distortion = np.zeros((4, 1)) return camera def draw_axis(self, mask, projected_points,", "name.split('_')[0] if int(class_id) == self.id: mesh_path = os.path.join(MESH_DIR, name) self.mesh = trimesh.load(mesh_path) vertex_colors", "cols[index] R, G, B = self.rgb_mask[x, y, :] matches = np.unique(np.array(self.get_matches(x, y))) if", "self.world_to_camera = np.array([[ 0.70710678, 0., -0.70710678, 0.01674194], [-0.40824829, 0.81649658, -0.40824829, -0.01203142], [ 0.57735027,", "Arguments: rgb_mask: RGB mask of object true_id: Int class_name: class name of object.", "center, tuple(projected_points[0].ravel()), R, thickness) image = cv2.line(image, center, tuple(projected_points[1].ravel()), G, thickness) image =", "B = self.rgb_mask[x, y, :] r_index = np.where(self.vertex_colors[:, 0] == R)[0] g_index =", "window + 1, col - window : col + window + 1] color_values", "int(np.mean(cols)) R, G, B = self.rgb_mask[x, y, 0], 
self.rgb_mask[x, y, 1], self.rgb_mask[x, y,", "np.array([[ 0.70710678, 0., -0.70710678, 0.01674194], [-0.40824829, 0.81649658, -0.40824829, -0.01203142], [ 0.57735027, 0.57735027, 0.57735027,", "image def get_neighbors(self, image, row, col, window=1): neighbor = image[row - window :", "= np.where(self.rgb_mask > 0) for index in range(len(rows)): x, y = rows[index], cols[index]", "g_index, b_index] intersection = list(set(matches[0]).intersection(*matches)) return intersection def get_model_point(self): rows, cols, channels =", "0., 0., 1.]]) def get_vertex_colors(self): for name in os.listdir(MESH_DIR): class_id = name.split('_')[0] if", "= true_id self.class_name = class_name self.vertex_colors = self.get_vertex_colors() self.color = color self.world_to_camera =", "int(len(x_index) / 2) # points3d.append(self.mesh.vertices[x_index[mid_index], :]) image2d = np.array(image2d).astype(np.float32) #(N, 2) points3d =", "np.where(mask > 0) x, y = (int(np.mean(rows)), int(np.mean(cols))) center = (y, x) image", "= np.where(self.vertex_colors[:, 0] == R)[0] g_index = np.where(self.vertex_colors[:, 1] == G)[0] b_index =", "0.01674194], [-0.40824829, 0.81649658, -0.40824829, -0.01203142], [ 0.57735027, 0.57735027, 0.57735027, -1.73205081], [ 0., 0.,", "def get_model_point(self): rows, cols, channels = np.where(self.rgb_mask > 0) x, y = int(np.mean(rows)),", "G, B]))[0] # mid_index = int(len(x_index) / 2) # points3d.append(self.mesh.vertices[x_index[mid_index], :]) image2d =", "0) for index in range(len(rows)): x, y = rows[index], cols[index] R, G, B", "0) class PnPSolver(): \"\"\" Implements PnP RANSAC algorithm to compute rotation and translation", "= self.rgb_mask[x, y, :] matches = np.unique(np.array(self.get_matches(x, y))) if len(matches) == 1: image2d.append([y,", "of object. 
String dimension: (width, height) for draw_cube size: size of the mask", "window : row + window + 1, col - window : col +", "thickness=2): rows, cols, channels = np.where(mask > 0) x, y = (int(np.mean(rows)), int(np.mean(cols)))", "def solve_PnP(self): points3d, image2D = self.get_points() assert image2D.shape[0] == points3d.shape[0] (_, rotation, translation,", "intersection def get_model_point(self): rows, cols, channels = np.where(self.rgb_mask > 0) x, y =", "projected_points = projected_points.astype(np.int32) image = cv2.line(image, center, tuple(projected_points[0].ravel()), R, thickness) image = cv2.line(image,", "RGB mask of object true_id: Int class_name: class name of object. String dimension:", "self.get_points() assert image2D.shape[0] == points3d.shape[0] (_, rotation, translation, inliers) = ops.solve_PNP(points3d, image2D, self.camera,", "np.array([[focal_length, 0, camera_center[0]], [0, focal_length, camera_center[1]], [0, 0, 1]], dtype='double') camera = Camera(0)", "G, B = self.rgb_mask[x, y, :] matches = np.unique(np.array(self.get_matches(x, y))) if len(matches) ==", "import paz.processors as pr from paz.core import ops import matplotlib.pyplot as plt MESH_DIR", "#(N, 2) points3d = np.array(points3d).astype(np.float32) #(N, 3) return points3d, image2d def get_matches(self, x,", "Int class_name: class name of object. String dimension: (width, height) for draw_cube size:", "thickness) return image def get_neighbors(self, image, row, col, window=1): neighbor = image[row -", "= np.unique(np.array(self.get_matches(x, y))) if len(matches) == 1: image2d.append([y, x]) vertex = self.mesh.vertices[matches[0], :]", "(self.size[1] / 2, self.size[0] / 2) camera_matrix = np.array([[focal_length, 0, camera_center[0]], [0, focal_length," ]
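The fragments in the row above come from a PnP-based 6D pose pipeline (paz / trimesh / OpenCV): 2D–3D correspondences are gathered by matching mask pixel colours to mesh vertex colours, a pinhole intrinsics matrix is built from a focal length and the image centre (`camera_center = (size[1]/2, size[0]/2)`), and `solve_PNP` recovers rotation and translation. A minimal NumPy sketch of the intrinsics construction and forward projection those fragments assume — function names here are illustrative, not from the original, and square pixels with the principal point at the image centre are assumed:

```python
import numpy as np

def make_camera_matrix(focal_length, image_size):
    # image_size is (height, width), mirroring the fragments' use of
    # size[1] (width) and size[0] (height); principal point at the centre.
    cx, cy = image_size[1] / 2.0, image_size[0] / 2.0
    return np.array([[focal_length, 0.0, cx],
                     [0.0, focal_length, cy],
                     [0.0, 0.0, 1.0]], dtype=np.float64)

def project(points3d, K, R=np.eye(3), t=np.zeros(3)):
    # Pinhole projection: world -> camera frame, apply intrinsics,
    # then perspective-divide by the camera-frame depth.
    cam = points3d @ R.T + t
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

K = make_camera_matrix(500.0, (480, 640))
uv = project(np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]]), K)
```

Given such (N, 3) model points and (N, 2) image points, the RANSAC step in the fragments corresponds to OpenCV's `cv2.solvePnPRansac(points3d, points2d, K, dist_coeffs)`, which returns a rotation vector, translation vector, and inlier set.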
[ "import CountVectorizer, TfidfTransformer class BasicText(TransformerMixin): def __init__(self, Dreduction= None, *args,**kwargs): self.Dreduction = Dreduction", "def __init__(self, Dreduction= None, *args,**kwargs): self.Dreduction = Dreduction def fit(self, X, y =None):", "from sklearn.base import TransformerMixin from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer class BasicText(TransformerMixin): def __init__(self,", "self.TFid = TfidfTransformer() self.trans.fit(X.values) self.TFid.fit(self.trans.fit_transform(X.values).toarray()) def transform(self,X): self.features = pd.DataFrame(self.TFid.transform(X.values), index=X.index) return self.features", "import TransformerMixin from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer class BasicText(TransformerMixin): def __init__(self, Dreduction= None,", "as pd from sklearn.base import TransformerMixin from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer class BasicText(TransformerMixin):", "fit(self, X, y =None): self.trans = CountVectorizer(*args,**kwargs) self.TFid = TfidfTransformer() self.trans.fit(X.values) self.TFid.fit(self.trans.fit_transform(X.values).toarray()) def", "None, *args,**kwargs): self.Dreduction = Dreduction def fit(self, X, y =None): self.trans = CountVectorizer(*args,**kwargs)", "class BasicText(TransformerMixin): def __init__(self, Dreduction= None, *args,**kwargs): self.Dreduction = Dreduction def fit(self, X,", "= CountVectorizer(*args,**kwargs) self.TFid = TfidfTransformer() self.trans.fit(X.values) self.TFid.fit(self.trans.fit_transform(X.values).toarray()) def transform(self,X): self.features = pd.DataFrame(self.TFid.transform(X.values), index=X.index)", "pd from sklearn.base import TransformerMixin from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer class BasicText(TransformerMixin): def", "BasicText(TransformerMixin): def __init__(self, Dreduction= None, *args,**kwargs): 
self.Dreduction = Dreduction def fit(self, X, y", "import numpy as np import pandas as pd from sklearn.base import TransformerMixin from", "def fit(self, X, y =None): self.trans = CountVectorizer(*args,**kwargs) self.TFid = TfidfTransformer() self.trans.fit(X.values) self.TFid.fit(self.trans.fit_transform(X.values).toarray())", "<filename>MLSD/Transformers/Text_Transformers.py import numpy as np import pandas as pd from sklearn.base import TransformerMixin", "np import pandas as pd from sklearn.base import TransformerMixin from sklearn.feature_extraction.text import CountVectorizer,", "y =None): self.trans = CountVectorizer(*args,**kwargs) self.TFid = TfidfTransformer() self.trans.fit(X.values) self.TFid.fit(self.trans.fit_transform(X.values).toarray()) def transform(self,X): self.features", "numpy as np import pandas as pd from sklearn.base import TransformerMixin from sklearn.feature_extraction.text", "sklearn.base import TransformerMixin from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer class BasicText(TransformerMixin): def __init__(self, Dreduction=", "sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer class BasicText(TransformerMixin): def __init__(self, Dreduction= None, *args,**kwargs): self.Dreduction =", "self.trans = CountVectorizer(*args,**kwargs) self.TFid = TfidfTransformer() self.trans.fit(X.values) self.TFid.fit(self.trans.fit_transform(X.values).toarray()) def transform(self,X): self.features = pd.DataFrame(self.TFid.transform(X.values),", "TransformerMixin from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer class BasicText(TransformerMixin): def __init__(self, Dreduction= None, *args,**kwargs):", "pandas as pd from sklearn.base import TransformerMixin from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer class", "CountVectorizer, TfidfTransformer class BasicText(TransformerMixin): def __init__(self, Dreduction= None, *args,**kwargs): self.Dreduction 
= Dreduction def", "X, y =None): self.trans = CountVectorizer(*args,**kwargs) self.TFid = TfidfTransformer() self.trans.fit(X.values) self.TFid.fit(self.trans.fit_transform(X.values).toarray()) def transform(self,X):", "TfidfTransformer class BasicText(TransformerMixin): def __init__(self, Dreduction= None, *args,**kwargs): self.Dreduction = Dreduction def fit(self,", "Dreduction def fit(self, X, y =None): self.trans = CountVectorizer(*args,**kwargs) self.TFid = TfidfTransformer() self.trans.fit(X.values)", "__init__(self, Dreduction= None, *args,**kwargs): self.Dreduction = Dreduction def fit(self, X, y =None): self.trans", "=None): self.trans = CountVectorizer(*args,**kwargs) self.TFid = TfidfTransformer() self.trans.fit(X.values) self.TFid.fit(self.trans.fit_transform(X.values).toarray()) def transform(self,X): self.features =", "CountVectorizer(*args,**kwargs) self.TFid = TfidfTransformer() self.trans.fit(X.values) self.TFid.fit(self.trans.fit_transform(X.values).toarray()) def transform(self,X): self.features = pd.DataFrame(self.TFid.transform(X.values), index=X.index) return", "self.Dreduction = Dreduction def fit(self, X, y =None): self.trans = CountVectorizer(*args,**kwargs) self.TFid =", "Dreduction= None, *args,**kwargs): self.Dreduction = Dreduction def fit(self, X, y =None): self.trans =", "import pandas as pd from sklearn.base import TransformerMixin from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer", "= Dreduction def fit(self, X, y =None): self.trans = CountVectorizer(*args,**kwargs) self.TFid = TfidfTransformer()", "*args,**kwargs): self.Dreduction = Dreduction def fit(self, X, y =None): self.trans = CountVectorizer(*args,**kwargs) self.TFid", "from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer class BasicText(TransformerMixin): def __init__(self, Dreduction= None, *args,**kwargs): self.Dreduction", "as np import pandas as pd from sklearn.base import TransformerMixin from 
sklearn.feature_extraction.text import" ]
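The fragments in the row above define a scikit-learn text transformer with two visible bugs: `fit` references `*args, **kwargs` that were never captured in `__init__`, and `transform` wraps a sparse TF-IDF matrix in a `DataFrame` without densifying it. A corrected sketch of what those fragments appear to intend, keeping the original names `BasicText` and `Dreduction`; the `**vectorizer_kwargs` capture and the `return self` are my additions:

```python
import pandas as pd
from sklearn.base import TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

class BasicText(TransformerMixin):
    """Count-vectorize raw text, then re-weight the counts with TF-IDF."""

    def __init__(self, Dreduction=None, **vectorizer_kwargs):
        self.Dreduction = Dreduction
        # Capture CountVectorizer options here so fit can actually use them.
        self.vectorizer_kwargs = vectorizer_kwargs

    def fit(self, X, y=None):
        self.trans = CountVectorizer(**self.vectorizer_kwargs)
        counts = self.trans.fit_transform(X.values)
        self.TFid = TfidfTransformer().fit(counts)
        return self  # fit must return self for Pipeline compatibility

    def transform(self, X):
        tfidf = self.TFid.transform(self.trans.transform(X.values))
        # Densify the sparse matrix before building the DataFrame.
        return pd.DataFrame(tfidf.toarray(), index=X.index)
```

With `TransformerMixin` supplying `fit_transform`, this drops into a `sklearn.pipeline.Pipeline` as-is; the `Dreduction` attribute is stored but, as in the fragments, not yet applied.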
[ "return query.all() def run(self): scan_loader = self._make_scan_loader() gpsms = self._load_spectrum_matches() if not os.path.exists(self.output_path):", "\"\"\"Take a string and return a valid filename constructed from the string. Uses", "glycopeptide_match_logo(match, ax=ax1) fname = format_filename(\"%s_%s.pdf\" % (scan.id, gpep)) path = os.path.join(self.output_path, fname) abspath", "def format_filename(s): \"\"\"Take a string and return a valid filename constructed from the", "= output_path self.analysis = self.session.query(serialize.Analysis).get(self.analysis_id) self.scan_loader = None self._mpl_style = { 'figure.facecolor': 'white',", "in valid_chars) filename = filename.replace(' ', '_') return filename class SpectrumAnnotatorExport(TaskBase, DatabaseBoundOperation): def", "Also spaces are replaced with underscores. \"\"\" valid_chars = \"-_.() %s%s\" % (string.ascii_letters,", "self.output_path = output_path self.analysis = self.session.query(serialize.Analysis).get(self.analysis_id) self.scan_loader = None self._mpl_style = { 'figure.facecolor':", "string. 
Uses a whitelist approach: any characters not present in valid_chars are removed.", "def _make_scan_loader(self): if self.mzml_path is not None: if not os.path.exists(self.mzml_path): raise IOError(\"No such", "GlycopeptideSpectrumMatch) from glycan_profiling.task import TaskBase from glycan_profiling.serialize import DatabaseBoundOperation from glycan_profiling.chromatogram_tree import Unmodified", "you may need to explicily pass the\" \" corrected file path.\").format( self.mzml_path, self.database_connection._original_connection))", "format_filename(\"%s_%s.pdf\" % (scan.id, gpep)) path = os.path.join(self.output_path, fname) abspath = os.path.abspath(path) if len(abspath)", "{}\".format(self.mzml_path)) self.scan_loader = ProcessedMzMLDeserializer(self.mzml_path) else: self.mzml_path = self.analysis.parameters['sample_path'] if not os.path.exists(self.mzml_path): raise IOError((", "/ float(n) * 100.0), gpep, scan.id)) with style.context(self._mpl_style): fig = figure() grid =", "query = self.query(GlycopeptideSpectrumMatch).join( GlycopeptideSpectrumMatch.scan).filter( GlycopeptideSpectrumMatch.analysis_id == self.analysis_id).order_by( MSScan.index) return query.all() def run(self): scan_loader", "DatabaseBoundOperation from glycan_profiling.chromatogram_tree import Unmodified from glycan_profiling.tandem.ref import SpectrumReference from glycan_profiling.tandem.glycopeptide.scoring import CoverageWeightedBinomialModelTree", "gpsm.structure.convert() if i % 10 == 0: self.log(\"... 
%0.2f%%: %s @ %s\" %", "__init__(self, database_connection, analysis_id, output_path, mzml_path=None): DatabaseBoundOperation.__init__(self, database_connection) self.analysis_id = analysis_id self.mzml_path = mzml_path", "_make_scan_loader(self): if self.mzml_path is not None: if not os.path.exists(self.mzml_path): raise IOError(\"No such file", "output_path self.analysis = self.session.query(serialize.Analysis).get(self.analysis_id) self.scan_loader = None self._mpl_style = { 'figure.facecolor': 'white', 'figure.edgecolor':", "DatabaseBoundOperation.__init__(self, database_connection) self.analysis_id = analysis_id self.mzml_path = mzml_path self.output_path = output_path self.analysis =", "IOError(( \"No such file {}. If {} was relocated, you may need to", "corrected file path.\").format( self.mzml_path, self.database_connection._original_connection)) self.scan_loader = ProcessedMzMLDeserializer(self.mzml_path) return self.scan_loader def _load_spectrum_matches(self): query", "return filename class SpectrumAnnotatorExport(TaskBase, DatabaseBoundOperation): def __init__(self, database_connection, analysis_id, output_path, mzml_path=None): DatabaseBoundOperation.__init__(self, database_connection)", "figure() grid = plt.GridSpec(nrows=5, ncols=1) ax1 = fig.add_subplot(grid[1, 0]) ax2 = fig.add_subplot(grid[2:, 0])", "import platform from glycan_profiling import serialize from glycan_profiling.serialize import ( Protein, Glycopeptide, IdentifiedGlycopeptide,", "glycan_profiling.plotting.spectral_annotation import TidySpectrumMatchAnnotator from ms_deisotope.output.mzml import ProcessedMzMLDeserializer from matplotlib import pyplot as plt,", "} def _make_scan_loader(self): if self.mzml_path is not None: if not os.path.exists(self.mzml_path): raise IOError(\"No", "glycan_profiling.serialize import DatabaseBoundOperation from glycan_profiling.chromatogram_tree import Unmodified from glycan_profiling.tandem.ref import SpectrumReference from 
glycan_profiling.tandem.glycopeptide.scoring", "= plt.GridSpec(nrows=5, ncols=1) ax1 = fig.add_subplot(grid[1, 0]) ax2 = fig.add_subplot(grid[2:, 0]) ax3 =", "len(gpsms) self.log(\"%d Spectrum Matches\" % (n,)) for i, gpsm in enumerate(gpsms): scan =", "in valid_chars are removed. Also spaces are replaced with underscores. \"\"\" valid_chars =", "glycan_profiling.plotting import figure from glycan_profiling.plotting.sequence_fragment_logo import glycopeptide_match_logo from glycan_profiling.plotting.spectral_annotation import TidySpectrumMatchAnnotator from ms_deisotope.output.mzml", "logging import string import platform from glycan_profiling import serialize from glycan_profiling.serialize import (", "'_') return filename class SpectrumAnnotatorExport(TaskBase, DatabaseBoundOperation): def __init__(self, database_connection, analysis_id, output_path, mzml_path=None): DatabaseBoundOperation.__init__(self,", "glycan_profiling.tandem.ref import SpectrumReference from glycan_profiling.tandem.glycopeptide.scoring import CoverageWeightedBinomialModelTree from glycan_profiling.plotting import figure from glycan_profiling.plotting.sequence_fragment_logo", "* 100.0), gpep, scan.id)) with style.context(self._mpl_style): fig = figure() grid = plt.GridSpec(nrows=5, ncols=1)", "is not None: if not os.path.exists(self.mzml_path): raise IOError(\"No such file {}\".format(self.mzml_path)) self.scan_loader =", "output_path, mzml_path=None): DatabaseBoundOperation.__init__(self, database_connection) self.analysis_id = analysis_id self.mzml_path = mzml_path self.output_path = output_path", "\"No such file {}. 
If {} was relocated, you may need to explicily", "may need to explicily pass the\" \" corrected file path.\").format( self.mzml_path, self.database_connection._original_connection)) self.scan_loader", "( str(match.target) + '\\n' + scan.id + '\\nscore=%0.3f q value=%0.3g' % (gpsm.score, gpsm.q_value)),", "from glycan_profiling.plotting import figure from glycan_profiling.plotting.sequence_fragment_logo import glycopeptide_match_logo from glycan_profiling.plotting.spectral_annotation import TidySpectrumMatchAnnotator from", "scan = scan_loader.get_scan_by_id(gpsm.scan.scan_id) gpep = gpsm.structure.convert() if i % 10 == 0: self.log(\"...", "72, 'figure.subplot.bottom': .125 } def _make_scan_loader(self): if self.mzml_path is not None: if not", "self.scan_loader = None self._mpl_style = { 'figure.facecolor': 'white', 'figure.edgecolor': 'white', 'font.size': 10, 'savefig.dpi':", "if self.mzml_path is not None: if not os.path.exists(self.mzml_path): raise IOError(\"No such file {}\".format(self.mzml_path))", "os.path.abspath(path) if len(abspath) > 259 and platform.system().lower() == 'windows': abspath = '\\\\\\\\?\\\\' +", "need to explicily pass the\" \" corrected file path.\").format( self.mzml_path, self.database_connection._original_connection)) self.scan_loader =", "TaskBase from glycan_profiling.serialize import DatabaseBoundOperation from glycan_profiling.chromatogram_tree import Unmodified from glycan_profiling.tandem.ref import SpectrumReference", "float(n) * 100.0), gpep, scan.id)) with style.context(self._mpl_style): fig = figure() grid = plt.GridSpec(nrows=5,", "func, MSScan, GlycopeptideSpectrumMatch) from glycan_profiling.task import TaskBase from glycan_profiling.serialize import DatabaseBoundOperation from glycan_profiling.chromatogram_tree", "self.session.query(serialize.Analysis).get(self.analysis_id) self.scan_loader = None self._mpl_style = { 'figure.facecolor': 'white', 'figure.edgecolor': 'white', 'font.size': 10,", "for i, gpsm in 
enumerate(gpsms): scan = scan_loader.get_scan_by_id(gpsm.scan.scan_id) gpep = gpsm.structure.convert() if i", "import rcParams as mpl_params status_logger = logging.getLogger(\"glycresoft.status\") def format_filename(s): \"\"\"Take a string and", "not os.path.exists(self.output_path): os.makedirs(self.output_path) n = len(gpsms) self.log(\"%d Spectrum Matches\" % (n,)) for i,", "to explicily pass the\" \" corrected file path.\").format( self.mzml_path, self.database_connection._original_connection)) self.scan_loader = ProcessedMzMLDeserializer(self.mzml_path)", "mzml_path self.output_path = output_path self.analysis = self.session.query(serialize.Analysis).get(self.analysis_id) self.scan_loader = None self._mpl_style = {", "path = os.path.join(self.output_path, fname) abspath = os.path.abspath(path) if len(abspath) > 259 and platform.system().lower()", "from glycan_profiling.serialize import ( Protein, Glycopeptide, IdentifiedGlycopeptide, func, MSScan, GlycopeptideSpectrumMatch) from glycan_profiling.task import", "string.digits) filename = ''.join(c for c in s if c in valid_chars) filename", "Unmodified from glycan_profiling.tandem.ref import SpectrumReference from glycan_profiling.tandem.glycopeptide.scoring import CoverageWeightedBinomialModelTree from glycan_profiling.plotting import figure", "glycan_profiling.serialize import ( Protein, Glycopeptide, IdentifiedGlycopeptide, func, MSScan, GlycopeptideSpectrumMatch) from glycan_profiling.task import TaskBase", "a valid filename constructed from the string. Uses a whitelist approach: any characters", "c in valid_chars) filename = filename.replace(' ', '_') return filename class SpectrumAnnotatorExport(TaskBase, DatabaseBoundOperation):", "'font.size': 10, 'savefig.dpi': 72, 'figure.subplot.bottom': .125 } def _make_scan_loader(self): if self.mzml_path is not", "== 0: self.log(\"... 
%0.2f%%: %s @ %s\" % (((i + 1) / float(n)", "%s\" % (((i + 1) / float(n) * 100.0), gpep, scan.id)) with style.context(self._mpl_style):", "i, gpsm in enumerate(gpsms): scan = scan_loader.get_scan_by_id(gpsm.scan.scan_id) gpep = gpsm.structure.convert() if i %", "return self.scan_loader def _load_spectrum_matches(self): query = self.query(GlycopeptideSpectrumMatch).join( GlycopeptideSpectrumMatch.scan).filter( GlycopeptideSpectrumMatch.analysis_id == self.analysis_id).order_by( MSScan.index) return", "CoverageWeightedBinomialModelTree from glycan_profiling.plotting import figure from glycan_profiling.plotting.sequence_fragment_logo import glycopeptide_match_logo from glycan_profiling.plotting.spectral_annotation import TidySpectrumMatchAnnotator", "= scan_loader.get_scan_by_id(gpsm.scan.scan_id) gpep = gpsm.structure.convert() if i % 10 == 0: self.log(\"... %0.2f%%:", "valid filename constructed from the string. Uses a whitelist approach: any characters not", "gpsm in enumerate(gpsms): scan = scan_loader.get_scan_by_id(gpsm.scan.scan_id) gpep = gpsm.structure.convert() if i % 10", "import ProcessedMzMLDeserializer from matplotlib import pyplot as plt, style from matplotlib import rcParams", "self._mpl_style = { 'figure.facecolor': 'white', 'figure.edgecolor': 'white', 'font.size': 10, 'savefig.dpi': 72, 'figure.subplot.bottom': .125", "ax3.axis('off') match.plot(ax=ax2) glycopeptide_match_logo(match, ax=ax1) fname = format_filename(\"%s_%s.pdf\" % (scan.id, gpep)) path = os.path.join(self.output_path,", "ProcessedMzMLDeserializer(self.mzml_path) return self.scan_loader def _load_spectrum_matches(self): query = self.query(GlycopeptideSpectrumMatch).join( GlycopeptideSpectrumMatch.scan).filter( GlycopeptideSpectrumMatch.analysis_id == self.analysis_id).order_by( MSScan.index)", "10 == 0: self.log(\"... %0.2f%%: %s @ %s\" % (((i + 1) /", "replaced with underscores. 
\"\"\" valid_chars = \"-_.() %s%s\" % (string.ascii_letters, string.digits) filename =", "IOError(\"No such file {}\".format(self.mzml_path)) self.scan_loader = ProcessedMzMLDeserializer(self.mzml_path) else: self.mzml_path = self.analysis.parameters['sample_path'] if not", "If {} was relocated, you may need to explicily pass the\" \" corrected", "MSScan.index) return query.all() def run(self): scan_loader = self._make_scan_loader() gpsms = self._load_spectrum_matches() if not", "% (n,)) for i, gpsm in enumerate(gpsms): scan = scan_loader.get_scan_by_id(gpsm.scan.scan_id) gpep = gpsm.structure.convert()", "glycan_profiling.task import TaskBase from glycan_profiling.serialize import DatabaseBoundOperation from glycan_profiling.chromatogram_tree import Unmodified from glycan_profiling.tandem.ref", "glycopeptide_match_logo from glycan_profiling.plotting.spectral_annotation import TidySpectrumMatchAnnotator from ms_deisotope.output.mzml import ProcessedMzMLDeserializer from matplotlib import pyplot", "value=%0.3g' % (gpsm.score, gpsm.q_value)), va='center') ax3.axis('off') match.plot(ax=ax2) glycopeptide_match_logo(match, ax=ax1) fname = format_filename(\"%s_%s.pdf\" %", "fname = format_filename(\"%s_%s.pdf\" % (scan.id, gpep)) path = os.path.join(self.output_path, fname) abspath = os.path.abspath(path)", "'figure.facecolor': 'white', 'figure.edgecolor': 'white', 'font.size': 10, 'savefig.dpi': 72, 'figure.subplot.bottom': .125 } def _make_scan_loader(self):", "logging.getLogger(\"glycresoft.status\") def format_filename(s): \"\"\"Take a string and return a valid filename constructed from", "= fig.add_subplot(grid[1, 0]) ax2 = fig.add_subplot(grid[2:, 0]) ax3 = fig.add_subplot(grid[0, 0]) match =", "not os.path.exists(self.mzml_path): raise IOError(\"No such file {}\".format(self.mzml_path)) self.scan_loader = ProcessedMzMLDeserializer(self.mzml_path) else: self.mzml_path =", "enumerate(gpsms): scan = scan_loader.get_scan_by_id(gpsm.scan.scan_id) gpep = 
gpsm.structure.convert() if i % 10 == 0:", "self.analysis = self.session.query(serialize.Analysis).get(self.analysis_id) self.scan_loader = None self._mpl_style = { 'figure.facecolor': 'white', 'figure.edgecolor': 'white',", "from glycan_profiling.task import TaskBase from glycan_profiling.serialize import DatabaseBoundOperation from glycan_profiling.chromatogram_tree import Unmodified from", "are removed. Also spaces are replaced with underscores. \"\"\" valid_chars = \"-_.() %s%s\"", "os.path.exists(self.output_path): os.makedirs(self.output_path) n = len(gpsms) self.log(\"%d Spectrum Matches\" % (n,)) for i, gpsm", "{ 'figure.facecolor': 'white', 'figure.edgecolor': 'white', 'font.size': 10, 'savefig.dpi': 72, 'figure.subplot.bottom': .125 } def", "file {}\".format(self.mzml_path)) self.scan_loader = ProcessedMzMLDeserializer(self.mzml_path) else: self.mzml_path = self.analysis.parameters['sample_path'] if not os.path.exists(self.mzml_path): raise", "'figure.edgecolor': 'white', 'font.size': 10, 'savefig.dpi': 72, 'figure.subplot.bottom': .125 } def _make_scan_loader(self): if self.mzml_path", "path.\").format( self.mzml_path, self.database_connection._original_connection)) self.scan_loader = ProcessedMzMLDeserializer(self.mzml_path) return self.scan_loader def _load_spectrum_matches(self): query = self.query(GlycopeptideSpectrumMatch).join(", "if not os.path.exists(self.mzml_path): raise IOError(( \"No such file {}. 
If {} was relocated,", "import string import platform from glycan_profiling import serialize from glycan_profiling.serialize import ( Protein,", "import ( Protein, Glycopeptide, IdentifiedGlycopeptide, func, MSScan, GlycopeptideSpectrumMatch) from glycan_profiling.task import TaskBase from", "+ scan.id + '\\nscore=%0.3f q value=%0.3g' % (gpsm.score, gpsm.q_value)), va='center') ax3.axis('off') match.plot(ax=ax2) glycopeptide_match_logo(match,", "filename = ''.join(c for c in s if c in valid_chars) filename =", "len(abspath) > 259 and platform.system().lower() == 'windows': abspath = '\\\\\\\\?\\\\' + abspath fig.savefig(abspath,", "relocated, you may need to explicily pass the\" \" corrected file path.\").format( self.mzml_path,", "gpep) ax3.text(0, 0.5, ( str(match.target) + '\\n' + scan.id + '\\nscore=%0.3f q value=%0.3g'", "os.makedirs(self.output_path) n = len(gpsms) self.log(\"%d Spectrum Matches\" % (n,)) for i, gpsm in", "class SpectrumAnnotatorExport(TaskBase, DatabaseBoundOperation): def __init__(self, database_connection, analysis_id, output_path, mzml_path=None): DatabaseBoundOperation.__init__(self, database_connection) self.analysis_id =", "= fig.add_subplot(grid[2:, 0]) ax3 = fig.add_subplot(grid[0, 0]) match = CoverageWeightedBinomialModelTree.evaluate(scan, gpep) ax3.text(0, 0.5,", "ax1 = fig.add_subplot(grid[1, 0]) ax2 = fig.add_subplot(grid[2:, 0]) ax3 = fig.add_subplot(grid[0, 0]) match", "ms_deisotope.output.mzml import ProcessedMzMLDeserializer from matplotlib import pyplot as plt, style from matplotlib import", "\" corrected file path.\").format( self.mzml_path, self.database_connection._original_connection)) self.scan_loader = ProcessedMzMLDeserializer(self.mzml_path) return self.scan_loader def _load_spectrum_matches(self):", "from ms_deisotope.output.mzml import ProcessedMzMLDeserializer from matplotlib import pyplot as plt, style from matplotlib", "present in valid_chars are removed. Also spaces are replaced with underscores. 
\"\"\" valid_chars", "+ '\\nscore=%0.3f q value=%0.3g' % (gpsm.score, gpsm.q_value)), va='center') ax3.axis('off') match.plot(ax=ax2) glycopeptide_match_logo(match, ax=ax1) fname", "c in s if c in valid_chars) filename = filename.replace(' ', '_') return", "IdentifiedGlycopeptide, func, MSScan, GlycopeptideSpectrumMatch) from glycan_profiling.task import TaskBase from glycan_profiling.serialize import DatabaseBoundOperation from", "self.scan_loader def _load_spectrum_matches(self): query = self.query(GlycopeptideSpectrumMatch).join( GlycopeptideSpectrumMatch.scan).filter( GlycopeptideSpectrumMatch.analysis_id == self.analysis_id).order_by( MSScan.index) return query.all()", "gpep = gpsm.structure.convert() if i % 10 == 0: self.log(\"... %0.2f%%: %s @", "Uses a whitelist approach: any characters not present in valid_chars are removed. Also", "in enumerate(gpsms): scan = scan_loader.get_scan_by_id(gpsm.scan.scan_id) gpep = gpsm.structure.convert() if i % 10 ==", "analysis_id, output_path, mzml_path=None): DatabaseBoundOperation.__init__(self, database_connection) self.analysis_id = analysis_id self.mzml_path = mzml_path self.output_path =", "characters not present in valid_chars are removed. Also spaces are replaced with underscores.", "valid_chars) filename = filename.replace(' ', '_') return filename class SpectrumAnnotatorExport(TaskBase, DatabaseBoundOperation): def __init__(self,", "scan_loader.get_scan_by_id(gpsm.scan.scan_id) gpep = gpsm.structure.convert() if i % 10 == 0: self.log(\"... %0.2f%%: %s", "removed. Also spaces are replaced with underscores. \"\"\" valid_chars = \"-_.() %s%s\" %", "such file {}. 
If {} was relocated, you may need to explicily pass", "va='center') ax3.axis('off') match.plot(ax=ax2) glycopeptide_match_logo(match, ax=ax1) fname = format_filename(\"%s_%s.pdf\" % (scan.id, gpep)) path =", "_load_spectrum_matches(self): query = self.query(GlycopeptideSpectrumMatch).join( GlycopeptideSpectrumMatch.scan).filter( GlycopeptideSpectrumMatch.analysis_id == self.analysis_id).order_by( MSScan.index) return query.all() def run(self):", "(n,)) for i, gpsm in enumerate(gpsms): scan = scan_loader.get_scan_by_id(gpsm.scan.scan_id) gpep = gpsm.structure.convert() if", "from glycan_profiling.plotting.sequence_fragment_logo import glycopeptide_match_logo from glycan_profiling.plotting.spectral_annotation import TidySpectrumMatchAnnotator from ms_deisotope.output.mzml import ProcessedMzMLDeserializer from", "%s%s\" % (string.ascii_letters, string.digits) filename = ''.join(c for c in s if c", "(scan.id, gpep)) path = os.path.join(self.output_path, fname) abspath = os.path.abspath(path) if len(abspath) > 259", "= ProcessedMzMLDeserializer(self.mzml_path) return self.scan_loader def _load_spectrum_matches(self): query = self.query(GlycopeptideSpectrumMatch).join( GlycopeptideSpectrumMatch.scan).filter( GlycopeptideSpectrumMatch.analysis_id == self.analysis_id).order_by(", "DatabaseBoundOperation): def __init__(self, database_connection, analysis_id, output_path, mzml_path=None): DatabaseBoundOperation.__init__(self, database_connection) self.analysis_id = analysis_id self.mzml_path", "fig.add_subplot(grid[0, 0]) match = CoverageWeightedBinomialModelTree.evaluate(scan, gpep) ax3.text(0, 0.5, ( str(match.target) + '\\n' +", "'figure.subplot.bottom': .125 } def _make_scan_loader(self): if self.mzml_path is not None: if not os.path.exists(self.mzml_path):", "ax3.text(0, 0.5, ( str(match.target) + '\\n' + scan.id + '\\nscore=%0.3f q value=%0.3g' %", "ProcessedMzMLDeserializer(self.mzml_path) else: self.mzml_path = self.analysis.parameters['sample_path'] if not 
import os
import logging
import string
import platform

from glycan_profiling import serialize
from glycan_profiling.serialize import (
    Protein, Glycopeptide, IdentifiedGlycopeptide, func,
    MSScan, GlycopeptideSpectrumMatch)

from glycan_profiling.task import TaskBase
from glycan_profiling.serialize import DatabaseBoundOperation
from glycan_profiling.chromatogram_tree import Unmodified

from glycan_profiling.tandem.ref import SpectrumReference
from glycan_profiling.tandem.glycopeptide.scoring import CoverageWeightedBinomialModelTree

from glycan_profiling.plotting import figure
from glycan_profiling.plotting.sequence_fragment_logo import glycopeptide_match_logo
from glycan_profiling.plotting.spectral_annotation import TidySpectrumMatchAnnotator

from ms_deisotope.output.mzml import ProcessedMzMLDeserializer

from matplotlib import pyplot as plt, style
from matplotlib import rcParams as mpl_params

status_logger = logging.getLogger("glycresoft.status")


def format_filename(s):
    """Take a string and return a valid filename constructed from the string.

    Uses a whitelist approach: any characters not present in valid_chars are
    removed. Also spaces are replaced with underscores.
    """
    valid_chars = "-_.() %s%s" % (string.ascii_letters, string.digits)
    filename = ''.join(c for c in s if c in valid_chars)
    filename = filename.replace(' ', '_')
    return filename


class SpectrumAnnotatorExport(TaskBase, DatabaseBoundOperation):
    def __init__(self, database_connection, analysis_id, output_path, mzml_path=None):
        DatabaseBoundOperation.__init__(self, database_connection)
        self.analysis_id = analysis_id
        self.mzml_path = mzml_path
        self.output_path = output_path
        self.analysis = self.session.query(serialize.Analysis).get(self.analysis_id)
        self.scan_loader = None
        self._mpl_style = {
            'figure.facecolor': 'white',
            'figure.edgecolor': 'white',
            'font.size': 10,
            'savefig.dpi': 72,
            'figure.subplot.bottom': .125
        }

    def _make_scan_loader(self):
        if self.mzml_path is not None:
            if not os.path.exists(self.mzml_path):
                raise IOError("No such file {}".format(self.mzml_path))
            self.scan_loader = ProcessedMzMLDeserializer(self.mzml_path)
        else:
            self.mzml_path = self.analysis.parameters['sample_path']
            if not os.path.exists(self.mzml_path):
                raise IOError((
                    "No such file {}. If {} was relocated, you may need to explicitly pass the"
                    " corrected file path.").format(
                    self.mzml_path, self.database_connection._original_connection))
            self.scan_loader = ProcessedMzMLDeserializer(self.mzml_path)
        return self.scan_loader

    def _load_spectrum_matches(self):
        query = self.query(GlycopeptideSpectrumMatch).join(
            GlycopeptideSpectrumMatch.scan).filter(
            GlycopeptideSpectrumMatch.analysis_id == self.analysis_id).order_by(
            MSScan.index)
        return query.all()

    def run(self):
        scan_loader = self._make_scan_loader()
        gpsms = self._load_spectrum_matches()
        if not os.path.exists(self.output_path):
            os.makedirs(self.output_path)
        n = len(gpsms)
        self.log("%d Spectrum Matches" % (n,))
        for i, gpsm in enumerate(gpsms):
            scan = scan_loader.get_scan_by_id(gpsm.scan.scan_id)
            gpep = gpsm.structure.convert()
            if i % 10 == 0:
                self.log("... %0.2f%%: %s @ %s" % (
                    ((i + 1) / float(n) * 100.0), gpep, scan.id))
            with style.context(self._mpl_style):
                fig = figure()
                grid = plt.GridSpec(nrows=5, ncols=1)
                ax1 = fig.add_subplot(grid[1, 0])
                ax2 = fig.add_subplot(grid[2:, 0])
                ax3 = fig.add_subplot(grid[0, 0])
                match = CoverageWeightedBinomialModelTree.evaluate(scan, gpep)
                ax3.text(0, 0.5, (
                    str(match.target) + '\n' + scan.id +
                    '\nscore=%0.3f q value=%0.3g' % (gpsm.score, gpsm.q_value)),
                    va='center')
                ax3.axis('off')
                match.plot(ax=ax2)
                glycopeptide_match_logo(match, ax=ax1)
                fname = format_filename("%s_%s.pdf" % (scan.id, gpep))
                path = os.path.join(self.output_path, fname)
                abspath = os.path.abspath(path)
                if len(abspath) > 259 and platform.system().lower() == 'windows':
                    abspath = '\\\\?\\' + abspath
                fig.savefig(abspath, bbox_inches='tight')
                plt.close(fig)
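The whitelist sanitization used by `format_filename` can be exercised in isolation. The sketch below reproduces the same logic with only the standard library; the name `sanitize_filename` is illustrative and not part of the glycresoft API:

```python
import string

# Whitelist mirroring format_filename above: letters, digits, and "-_.() ".
VALID_CHARS = "-_.() %s%s" % (string.ascii_letters, string.digits)


def sanitize_filename(s):
    # Drop any character not in the whitelist, then replace spaces with
    # underscores so the result is shell-friendly.
    kept = ''.join(c for c in s if c in VALID_CHARS)
    return kept.replace(' ', '_')


print(sanitize_filename("scan=1 PEPTIDE/2.pdf"))  # scan1_PEPTIDE2.pdf
```

Characters like `=` and `/` are silently dropped rather than escaped, which is why scan ids and glycopeptide strings survive as legible (if lossy) filenames.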
from dronekit import connect, VehicleMode, LocationGlobalRelative, LocationGlobal
from pymavlink import mavutil  # Needed for command message definitions
from picamera import PiCamera
import requests
import time
import math

#vehicle = connect('...', wait_ready=True)  # pixhawk usb  [connection string not recovered]
#vehicle = connect("/dev/ttyUSB0", wait_ready='armed', baud=57600)  # telemetry usb
vehicle = connect("/dev/ttyS0", baud=57600)  # telemetry usb
# Using the ``wait_ready(True)`` waits ...  [comment truncated in the recovered text]


class PhotoStuff:
    ## Photo point (where to take photo)
    point1 = LocationGlobalRelative(38.0, -122.0, 0)
    ## current location
    lat = 38.06807841429639
    lon = -122.23280310630798
    mode = 'none'
    message = 'no msg'
    Photoing = False
    time_to_quit = False
    ## picture data
    getpix = False
    plat = lat  # requested picture point [class defaults not recovered]
    plon = lon
    pmode = 'none'
    pmsg = 'no message'
    camera = PiCamera()

    def update_n3m0_location(self):
        ## update the boat ...  [comment truncated]
        # get data
        r2 = requests.get('http://sailbot.holdentechnology.com/getbuoys.php')
        thedata = r2.text.split(';')
        myPhoto.plat = float(thedata[8])
        myPhoto.plon = float(thedata[9])
        myPhoto.pmode = thedata[10]
        # myPhoto.pmsg = ...  [right-hand side not recovered]
        if (str(myPhoto.pmode).find("REQUESTED") >= 0):
            # new request, acknowledge.
            r = requests.post('http://sailbot.holdentechnology.com/postlatlon.php', data={
                'b_no': 2, 'lat': myPhoto.plat, 'lon': myPhoto.plon,
                'mode': "n3m0 Received", 'debug': "need auth"})

    def take_photo(self, w, h, filename):
        print('taking photo')
        print filename
        self.camera.resolution = (w, h)
        #self.camera.resolution = (1920, 1080)
        #self.camera.resolution = (640, 480)
        self.camera.start_preview()
        #time.sleep(2)
        self.camera.capture(filename)  # [inferred; the capture call itself was not recovered]

    def post_photo(self, src, filename):
        # [body not recovered: uploads `src` to the web server as `filename`]
        pass

    def deliver_photo(self, filename):
        myPhoto.pmsg = "<a href=\"uploads/" + filename + "\"><img src=\"uploads/" + filename + "\" height=50 ></a>"
        myPhoto.pmsg = "uploads/" + filename
        r = requests.post('http://sailbot.holdentechnology.com/insertlatlon.php', data={
            'b_no': 2, 'lat': myPhoto.plat, 'lon': myPhoto.plon,
            'mode': myPhoto.pmode, 'debug': myPhoto.pmsg})
        print "should be guided now", myPhoto.get_distance_meters(myPhoto.point1, vehicle.location.global_relative_frame)
        print myPhoto.pmsg
        #myPhoto.time_to_quit=True
        myPhoto.pmode = "DONE"
        myPhoto.pmsg = "Ready for new request" + time.strftime(" %Y-%m-%d %H:%M:%S")
        r = requests.post('http://sailbot.holdentechnology.com/postlatlon.php', data={
            'b_no': 2, 'lat': myPhoto.plat, 'lon': myPhoto.plon,
            'mode': myPhoto.pmode, 'debug': myPhoto.pmsg})

    def get_location_meters(self, original_location, dNorth, dEast):
        """
        Returns a location object `dNorth` and `dEast` metres from the
        specified `original_location`, with the same `alt` value.

        The function is useful when you want to move the vehicle around
        specifying locations relative to the current vehicle position.

        The algorithm is relatively accurate over small distances
        (10m within 1km) except close to the poles.
        """
        earth_radius = 6378137.0  # Radius of "spherical" earth
        # Coordinate offsets in radians
        dLat = dNorth / earth_radius
        dLon = dEast / (earth_radius * math.cos(math.pi * original_location.lat / 180))
        # New position in decimal degrees
        newlat = original_location.lat + (dLat * 180 / math.pi)
        newlon = original_location.lon + (dLon * 180 / math.pi)
        print("llatlon")
        if type(original_location) is LocationGlobal:
            targetlocation = LocationGlobal(newlat, newlon, original_location.alt)
        elif type(original_location) is LocationGlobalRelative:
            targetlocation = LocationGlobalRelative(newlat, newlon, original_location.alt)
        else:
            raise Exception("Invalid Location object passed")
        return targetlocation;

    def get_distance_meters(self, aLocation1, aLocation2):
        """
        Returns the ground distance in metres between two LocationGlobal objects.

        This method is an approximation, and will not be accurate over large
        distances and close to the earth's poles. It comes from the ArduPilot
        test code:
        https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py
        """
        dlat = aLocation2.lat - aLocation1.lat
        dlong = aLocation2.lon - aLocation1.lon
        return math.sqrt((dlat * dlat) + (dlong * dlong)) * 1.113195e5

    def get_bearing(self, aLocation1, aLocation2):
        """
        Returns the bearing between the two LocationGlobal objects passed as
        parameters.

        This method is an approximation, and may not be accurate over large
        distances and close to the earth's poles. It comes from the ArduPilot
        test code:
        https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py
        """
        off_x = aLocation2.lon - aLocation1.lon
        off_y = aLocation2.lat - aLocation1.lat
        bearing = 90.00 + math.atan2(-off_y, off_x) * 57.2957795
        if bearing < 0:
            bearing += 360.00
        return bearing;


myPhoto = PhotoStuff()


# Callback when location has changed.
# myPhoto.Photoing is True when we are heading for a picture point
# also saves current location into myPhoto variables.
def location_callback(self, attr_name, value):
    ## store data
    myPhoto.lat = vehicle.location.global_relative_frame.lat
    myPhoto.lon = vehicle.location.global_relative_frame.lon
    myPhoto.mode = str(vehicle.mode.name)
    ## check for reaching picture waypoint
    dist = myPhoto.get_distance_meters(myPhoto.point1, vehicle.location.global_relative_frame)
    # reached photo point: take photo, return to auto mode.
    if (dist < 3.0) and (myPhoto.Photoing):  # waits until we reach photo point, takes photo
    #if (myPhoto.Photoing):  # [alternative condition left commented in the source]
        print "Picture!", dist
        # take photo
        myPhoto.take_photo(1920, 1080, '/home/pi/Desktop/cap.jpg')
        # exit guided mode
        myPhoto.Photoing = False  # let us go back to manual or RTL etc
        # post photo
        fname = 'n3m0_' + time.strftime("%Y%m%d-%H%M%S") + '.jpg'
        myPhoto.post_photo('/home/pi/Desktop/cap.jpg', fname)
        myPhoto.deliver_photo(fname)


# Callback to monitor mode changes. 'value' is the updated value
# Mode changing done here.
# If mode changes to "steering" start autonomous action (picture)
# any other mode change cancels autonomous function through this code
def mode_callback(self, attr_name, value):
    print "Mode: ", value
    if str(value).find("STEERING") >= 0:  # [exact test not recovered; "steering" per the comment above]
        myPhoto.Photoing = True  # [inferred: flag set when the picture run starts]
        myPhoto.pmode = "UNDERWAY"
        myPhoto.pmsg = "n3m0 received request" + time.strftime(" %Y-%m-%d %H:%M:%S")
        r = requests.post('http://sailbot.holdentechnology.com/postlatlon.php', data={
            'b_no': 2, 'lat': myPhoto.plat, 'lon': myPhoto.plon,
            'mode': myPhoto.pmode, 'debug': myPhoto.pmsg})
        print "should be guided now", myPhoto.get_distance_meters(myPhoto.point1, vehicle.location.global_relative_frame)
    else:
        if str(value).find("GUIDED") < 0:
            myPhoto.Photoing = False  # go back to manual or RTL etc
            print("new mode set, Photoing off")


# Add a callback `location_callback` for the `global_frame` attribute.
vehicle.add_attribute_listener('location.global_frame', location_callback)
vehicle.add_attribute_listener('mode', mode_callback)

# Loop, interrupts are running things now.
while not myPhoto.time_to_quit:
    time.sleep(4)
    dist = myPhoto.get_distance_meters(myPhoto.point1, vehicle.location.global_relative_frame)
    print dist
    # getting parameters is a little buggy
    #print "Param: ..."
    print " GPS: %s" % vehicle.gps_0
    print " Battery: %s" % vehicle.battery
    # print " Last ..."  [line truncated in the recovered text]
    # continuously check to see if we need to change modes
    # if guided flag set but not guided mode: do guided mode.
    if myPhoto.Photoing:
        myPhoto.mode = str(dist)
        if (str(vehicle.mode.name).find("GUIDED") < 0):
            myPhoto.point1.lat = myPhoto.plat
            myPhoto.point1.lon = myPhoto.plon
            vehicle.mode = VehicleMode("GUIDED")
            vehicle.simple_goto(myPhoto.point1)
            print "guided mode again: ", str(vehicle.mode.name)
if (dist <= 3.0)", "False ## picture data getpix = False plat=0 plon=0 pmode='none' pmsg = 'no", "\" GPS: %s\" % vehicle.gps_0 print \" Battery: %s\" % vehicle.battery print \"", "returned LocationGlobal has the same `alt` value as `original_location`. The function is useful", "large distances and close to the earth's poles. It comes from the ArduPilot", "\") + str(vehicle.battery) + \" \" + str(vehicle.gps_0) myPhoto.mode = str(vehicle.mode.name) myPhoto.update_n3m0_location() myPhoto.message", "\"should be guided now\",myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) print myPhoto.pmsg #myPhoto.time_to_quit=True myPhoto.pmode = \"DONE\" myPhoto.pmsg = \"Ready", "lat = 38.06807841429639 lon = -122.23280310630798 mode='none' message = 'no msg' Photoing =", "a callback `location_callback` for the `global_frame` attribute. vehicle.add_attribute_listener('location.global_frame', location_callback) vehicle.add_attribute_listener('mode', mode_callback) # Loop,", "= connect(\"/dev/ttyS0\", baud=57600) # telemetry usb # Using the ``wait_ready(True)`` waits on :py:attr:`parameters`,", "+ time.strftime(\"%Y%m%d-%H%M%S\") + '.jpg' #myPhoto.post_photo('/home/pi/Desktop/testcap.jpg',fname) #myPhoto.time_to_quit=True myPhoto.get_pic_requests() # Remove observer - specifying the", "start autonomous action (picture) # any other mode change cancels autonomous function through", "= original_location.lat + (dLat * 180/math.pi) newlon = original_location.lon + (dLon * 180/math.pi)", "is an approximation, and will not be accurate over large distances and close", "\"steering\" start autonomous action (picture) # any other mode change cancels autonomous function", "time.strftime(\"%Y%m%d-%H%M%S\") + '.jpg' myPhoto.post_photo('/home/pi/Desktop/cap.jpg',fname) myPhoto.deliver_photo(fname) # continuously check to see if we need", "and previously registered callback function vehicle.remove_message_listener('location.global_frame', location_callback) 
vehicle.remove_message_listener('mode', mode_callback) # Close vehicle object", "= \"n3m0 received request\" + time.strftime(\" %Y-%m-%d %H:%M:%S\") r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':myPhoto.pmode,'debug':myPhoto.pmsg}) print \"should be guided", "for bench testing, immediately takes photo. print \"Picture!\", dist # take photo myPhoto.take_photo(1920,", ">= 0): # new request, acknowledge. r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={ 'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':\"n3m0 Received\",'debug':\"need auth\"}) def take_photo(self, w,h,", "be accurate over large distances and close to the earth's poles. It comes", "hold info for courses, states, etc. class PhotoStuff: ## Photo point (where to", "# use for bench testing, immediately takes photo. print \"Picture!\", dist # take", "data={'submit':'Submit','name':'fileToUpload','id':'fileToUpload'} files = {'fileToUpload': (newname, open(filename, 'rb'))} rr = requests.post(url, data=data, files=files) print", "to the current vehicle position. The algorithm is relatively accurate over small distances", "\" Is Armable?: %s\" % vehicle.is_armable print \" System status: %s\" % vehicle.system_status.state", "Using the ``wait_ready(True)`` waits on :py:attr:`parameters`, :py:attr:`gps_0`, # :py:attr:`armed`, :py:attr:`mode`, and :py:attr:`attitude`. In", "testing, immediately takes photo. print \"Picture!\", dist # take photo myPhoto.take_photo(1920, 1080,'/home/pi/Desktop/cap.jpg') #", "vehicle.last_heartbeat print \" Is Armable?: %s\" % vehicle.is_armable print \" System status: %s\"", "take photo) point1 = LocationGlobalRelative(38.0, -122.0, 0) ## current location lat = 38.06807841429639", "useful when you want to move the vehicle around specifying locations relative to", "`location_callback` for the `global_frame` attribute. 
vehicle.add_attribute_listener('location.global_frame', location_callback) vehicle.add_attribute_listener('mode', mode_callback) # Loop, interrupts are", "\" Mode: %s\" % vehicle.mode.name # settable # class to hold info for", "#self.camera.resolution = (640, 480) self.camera.start_preview() #time.sleep(2) self.camera.capture(filename) self.camera.stop_preview() print('photo taken') def post_photo(self,filename, newname):", "is True when we are heading for a picture point # also saves", "\"Mode: \", value if str(value).find(\"STEERING\") >=0: myPhoto.Photoing = True myPhoto.pmode = \"UNDERWAY\" myPhoto.pmsg", "if str(value).find(\"GUIDED\") < 0: #not changed to guided mode myPhoto.Photoing = False #let", "to auto mode. if (dist <= 3.0) and (myPhoto.Photoing): # waits until we", "that all supported attributes will be populated. # 'parameters' #vehicle.wait_ready('gps_0','armed','mode','attitude') vehicle.wait_ready('gps_0') # Get", "`original_location`. The function is useful when you want to move the vehicle around", "test code: https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py \"\"\" dlat = aLocation2.lat - aLocation1.lat dlong = aLocation2.lon -", "to manual or RTL etc print(\"new mode set, Photoing off\") # Add a", "the specified `original_location`. The returned LocationGlobal has the same `alt` value as `original_location`.", "0): # guided, return to auto mode. 
vehicle.mode = VehicleMode(\"AUTO\") print \"End guided", "def post_photo(self,filename, newname): print ('posting photo') url = 'http://sailbot.holdentechnology.com/upload.php' #url = 'http://httpbin.org/post' data={'submit':'Submit','name':'fileToUpload','id':'fileToUpload'}", "test code: https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py \"\"\" off_x = aLocation2.lon - aLocation1.lon off_y = aLocation2.lat -", "myPhoto.deliver_photo(fname) # continuously check to see if we need to change modes #", "print(\"new mode set, Photoing off\") # Add a callback `location_callback` for the `global_frame`", "ArduPilot test code: https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py \"\"\" dlat = aLocation2.lat - aLocation1.lat dlong = aLocation2.lon", "modes # if guided flag set but not guided mode: do guided mode.", "connect(\"/dev/ttyS0\", baud=57600) # telemetry usb # Using the ``wait_ready(True)`` waits on :py:attr:`parameters`, :py:attr:`gps_0`,", "## get data r2 = requests.get('http://sailbot.holdentechnology.com/getbuoys.php') thedata=r2.text.split(';') myPhoto.plat=float(thedata[8]) myPhoto.plon=float(thedata[9]) myPhoto.pmode = thedata[10] myPhoto.pmsg", "Callback when location has changed. 'value' is the updated value # Mode changing", "get_distance_meters(self, aLocation1, aLocation2): \"\"\" Returns the ground distance in metres between two LocationGlobal", "'.jpg' #myPhoto.post_photo('/home/pi/Desktop/testcap.jpg',fname) #myPhoto.time_to_quit=True myPhoto.get_pic_requests() # Remove observer - specifying the attribute and previously", "VehicleMode(\"AUTO\") print \"End guided mode \", str(vehicle.mode.name) #Callback to monitor mode changes. 
'value'", "<= 3.0) and (myPhoto.Photoing): # waits until we reach photo point, takes photo", "PhotoStuff: ## Photo point (where to take photo) point1 = LocationGlobalRelative(38.0, -122.0, 0)", "(dLon * 180/math.pi) print(\"llatlon\" ) if type(original_location) is LocationGlobal: targetlocation=LocationGlobal(newlat, newlon,original_location.alt) elif type(original_location)", "url = 'http://sailbot.holdentechnology.com/upload.php' #url = 'http://httpbin.org/post' data={'submit':'Submit','name':'fileToUpload','id':'fileToUpload'} files = {'fileToUpload': (newname, open(filename, 'rb'))}", "myPhoto.mode = str(dist) if (str(vehicle.mode.name).find(\"GUIDED\") < 0): # not guided myPhoto.point1.lat = myPhoto.plat", "usually # means that all supported attributes will be populated. # 'parameters' #vehicle.wait_ready('gps_0','armed','mode','attitude')", "getpix = False plat=0 plon=0 pmode='none' pmsg = 'no message' camera = PiCamera()", "things now. while not myPhoto.time_to_quit: time.sleep(4) print myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) # getting parameters is a", "print \"should be guided now\",myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) else: if str(value).find(\"GUIDED\") < 0: #not changed to", "the ``wait_ready(True)`` waits on :py:attr:`parameters`, :py:attr:`gps_0`, # :py:attr:`armed`, :py:attr:`mode`, and :py:attr:`attitude`. 
In practice", "></a>\" myPhoto.pmsg = \"uploads/\" + filename r=requests.post('http://sailbot.holdentechnology.com/insertlatlon.php',data={'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':myPhoto.pmode,'debug':myPhoto.pmsg}) print \"should be guided now\",myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) print", "you want to move the vehicle around specifying locations relative to the current", "# Needed for command message definitions from picamera import PiCamera import time import", "'http://sailbot.holdentechnology.com/upload.php' #url = 'http://httpbin.org/post' data={'submit':'Submit','name':'fileToUpload','id':'fileToUpload'} files = {'fileToUpload': (newname, open(filename, 'rb'))} rr =", "method is an approximation, and may not be accurate over large distances and", "request, acknowledge. r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={ 'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':\"n3m0 Received\",'debug':\"need auth\"}) def take_photo(self, w,h, filename): print('taking photo') print", "%H:%M:%S\") #myPhoto.pmsg = \"<a href=\\\"uploads/\"+filename+\"\\\"><img src=\\\"uploads/\" + filename + \"\\\" height=50 ></a>\" myPhoto.pmsg", "r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={ 'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':\"n3m0 Received\",'debug':\"need auth\"}) def take_photo(self, w,h, filename): print('taking photo') print filename self.camera.resolution", "self.camera.resolution = (w, h) #self.camera.resolution = (1920, 1080) #self.camera.resolution = (640, 480) self.camera.start_preview()", "through this code def mode_callback(self, attr_name, value): print \"Mode: \", value if str(value).find(\"STEERING\")", "myPhoto.mode = str(vehicle.mode.name) myPhoto.update_n3m0_location() myPhoto.message = \" \" myPhoto.mode = \" \" myPhoto.take_photo(640,480,'/home/pi/Desktop/testcap.jpg')", "< 0: #not changed to guided mode myPhoto.Photoing = False 
#let us go", "= False ## picture data getpix = False plat=0 plon=0 pmode='none' pmsg =", "time.strftime(\"%Y-%m-%d %H:%M:%S\") #myPhoto.pmsg = \"<a href=\\\"uploads/\"+filename+\"\\\"><img src=\\\"uploads/\" + filename + \"\\\" height=50 ></a>\"", "objects passed as parameters. This method is an approximation, and may not be", "status: %s\" % vehicle.system_status.state print \" Mode: %s\" % vehicle.mode.name # settable #", "message' camera = PiCamera() def update_n3m0_location(self): ## update the boat location r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':1,'lat':myPhoto.lat,'lon':myPhoto.lon,'mode':myPhoto.mode,'debug':myPhoto.message}) #print(r.text)", "the ArduPilot test code: https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py \"\"\" dlat = aLocation2.lat - aLocation1.lat dlong =", "else: # guided flag not set if (str(vehicle.mode.name).find(\"GUIDED\") >= 0): # guided, return", "to guided mode myPhoto.Photoing = False #let us go back to manual or", "method is an approximation, and will not be accurate over large distances and", "\"Finished<br>\" + time.strftime(\"%Y-%m-%d %H:%M:%S\") #myPhoto.pmsg = \"<a href=\\\"uploads/\"+filename+\"\\\"><img src=\\\"uploads/\" + filename + \"\\\"", "= myPhoto.plon vehicle.mode = VehicleMode(\"GUIDED\") vehicle.simple_goto(myPhoto.point1) print \"guided mode again: \", str(vehicle.mode.name) else:", "= 6378137.0 #Radius of \"spherical\" earth #Coordinate offsets in radians dLat = dNorth/earth_radius", "#myPhoto.post_photo('/home/pi/Desktop/testcap.jpg',fname) #myPhoto.time_to_quit=True myPhoto.get_pic_requests() # Remove observer - specifying the attribute and previously registered", "move the vehicle around specifying locations relative to the current vehicle position. 
The", "guided mode myPhoto.Photoing = False #let us go back to manual or RTL", "## update the boat location r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':1,'lat':myPhoto.lat,'lon':myPhoto.lon,'mode':myPhoto.mode,'debug':myPhoto.message}) #print(r.text) def get_pic_requests(self): ## get data r2", "relatively accurate over small distances (10m within 1km) except close to the poles.", "myPhoto.Photoing is True when we are heading for a picture point # also", "print('photo taken') def post_photo(self,filename, newname): print ('posting photo') url = 'http://sailbot.holdentechnology.com/upload.php' #url =", "request\" + time.strftime(\" %Y-%m-%d %H:%M:%S\") r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':myPhoto.pmode,'debug':myPhoto.pmsg}) def get_location_meters(self,original_location, dNorth, dEast): \"\"\" Returns a", "and `dEast` metres from the specified `original_location`. The returned LocationGlobal has the same", "for new request\" + time.strftime(\" %Y-%m-%d %H:%M:%S\") r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':myPhoto.pmode,'debug':myPhoto.pmsg}) def get_location_meters(self,original_location, dNorth, dEast): \"\"\"", "now\",myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) else: if str(value).find(\"GUIDED\") < 0: #not changed to guided mode myPhoto.Photoing =", "value): print \"Mode: \", value if str(value).find(\"STEERING\") >=0: myPhoto.Photoing = True myPhoto.pmode =", "The returned LocationGlobal has the same `alt` value as `original_location`. 
The function is", "< 0: bearing += 360.00 return bearing; myPhoto = PhotoStuff() # Callback when", "myPhoto.Photoing: myPhoto.mode = str(dist) if (str(vehicle.mode.name).find(\"GUIDED\") < 0): # not guided myPhoto.point1.lat =", "# any other mode change cancels autonomous function through this code def mode_callback(self,", "str(vehicle.battery) + \" \" + str(vehicle.gps_0) myPhoto.mode = str(vehicle.mode.name) myPhoto.update_n3m0_location() myPhoto.message = \"", "#fname='Tn3m0_' + time.strftime(\"%Y%m%d-%H%M%S\") + '.jpg' #myPhoto.post_photo('/home/pi/Desktop/testcap.jpg',fname) #myPhoto.time_to_quit=True myPhoto.get_pic_requests() # Remove observer - specifying", "newlat = original_location.lat + (dLat * 180/math.pi) newlon = original_location.lon + (dLon *", "math.atan2(-off_y, off_x) * 57.2957795 if bearing < 0: bearing += 360.00 return bearing;", "current vehicle position. The algorithm is relatively accurate over small distances (10m within", "callback `location_callback` for the `global_frame` attribute. vehicle.add_attribute_listener('location.global_frame', location_callback) vehicle.add_attribute_listener('mode', mode_callback) # Loop, interrupts", "filename self.camera.resolution = (w, h) #self.camera.resolution = (1920, 1080) #self.camera.resolution = (640, 480)", "height=50 ></a>\" myPhoto.pmsg = \"uploads/\" + filename r=requests.post('http://sailbot.holdentechnology.com/insertlatlon.php',data={'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':myPhoto.pmode,'debug':myPhoto.pmsg}) print \"should be guided now\",myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame)", "camera = PiCamera() def update_n3m0_location(self): ## update the boat location r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':1,'lat':myPhoto.lat,'lon':myPhoto.lon,'mode':myPhoto.mode,'debug':myPhoto.message}) #print(r.text) def", "Vehicle. 
print(\"\\nConnecting to vehicle\") #vehicle = connect(/dev/ttyACM0, wait_ready=True) # pixhawk usb #vehicle =", "the updated value # If mode changes to \"steering\" start autonomous action (picture)", "to monitor mode changes. 'value' is the updated value # If mode changes", "and close to the earth's poles. It comes from the ArduPilot test code:", "point # also saves current location into myPhoto variables. def location_callback(self, attr_name, value):", "Get some vehicle attributes (state) print \"Get some vehicle attribute values:\" print \"", "print('photo posted') def deliver_photo(self,filename): myPhoto.pmode = \"Finished<br>\" + time.strftime(\"%Y-%m-%d %H:%M:%S\") #myPhoto.pmsg = \"<a", "for command message definitions from picamera import PiCamera import time import math import", "settable # class to hold info for courses, states, etc. class PhotoStuff: ##", "1080,'/home/pi/Desktop/cap.jpg') # exit guided mode myPhoto.Photoing = False # post photo fname='n3m0_' +", "off_x = aLocation2.lon - aLocation1.lon off_y = aLocation2.lat - aLocation1.lat bearing = 90.00", "change modes # if guided flag set but not guided mode: do guided", "self.camera.stop_preview() print('photo taken') def post_photo(self,filename, newname): print ('posting photo') url = 'http://sailbot.holdentechnology.com/upload.php' #url", "\"guided mode again: \", str(vehicle.mode.name) else: # guided flag not set if (str(vehicle.mode.name).find(\"GUIDED\")", "any other mode change cancels autonomous function through this code def mode_callback(self, attr_name,", "#Coordinate offsets in radians dLat = dNorth/earth_radius dLon = dEast/(earth_radius*math.cos(math.pi*original_location.lat/180)) #New position in", "str(value).find(\"STEERING\") >=0: myPhoto.Photoing = True myPhoto.pmode = \"UNDERWAY\" myPhoto.pmsg = \"n3m0 received request\"", "#let us go back to manual or RTL etc print(\"new mode set, Photoing", "mode. 
vehicle.mode = VehicleMode(\"AUTO\") print \"End guided mode \", str(vehicle.mode.name) #Callback to monitor", "# telemetry usb #vehicle = connect(\"/dev/ttyUSB0\", baud=57600) # telemetry usb vehicle = connect(\"/dev/ttyS0\",", "myPhoto.mode = \" \" myPhoto.take_photo(640,480,'/home/pi/Desktop/testcap.jpg') myPhoto.post_photo('/home/pi/Desktop/testcap.jpg','tphoto.jpg') #fname='Tn3m0_' + time.strftime(\"%Y%m%d-%H%M%S\") + '.jpg' #myPhoto.post_photo('/home/pi/Desktop/testcap.jpg',fname) #myPhoto.time_to_quit=True", "#vehicle = connect(/dev/ttyACM0, wait_ready=True) # pixhawk usb #vehicle = connect(\"/dev/ttyUSB0\", wait_ready='armed', baud=57600) #", "flag set but not guided mode: do guided mode. if myPhoto.Photoing: myPhoto.mode =", "- specifying the attribute and previously registered callback function vehicle.remove_message_listener('location.global_frame', location_callback) vehicle.remove_message_listener('mode', mode_callback)", "mavutil # Needed for command message definitions from picamera import PiCamera import time", "off\") # Add a callback `location_callback` for the `global_frame` attribute. 
vehicle.add_attribute_listener('location.global_frame', location_callback) vehicle.add_attribute_listener('mode',", "Heartbeat: %s\" % vehicle.last_heartbeat print \" Is Armable?: %s\" % vehicle.is_armable print \"", "while not myPhoto.time_to_quit: time.sleep(4) print myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) # getting parameters is a little buggy", "lon = -122.23280310630798 mode='none' message = 'no msg' Photoing = False time_to_quit =", "the attribute and previously registered callback function vehicle.remove_message_listener('location.global_frame', location_callback) vehicle.remove_message_listener('mode', mode_callback) # Close", "rr.text print('photo posted') def deliver_photo(self,filename): myPhoto.pmode = \"Finished<br>\" + time.strftime(\"%Y-%m-%d %H:%M:%S\") #myPhoto.pmsg =", "point1 = LocationGlobalRelative(38.0, -122.0, 0) ## current location lat = 38.06807841429639 lon =", "myPhoto.pmode = thedata[10] myPhoto.pmsg = thedata[11] if (str(myPhoto.pmode).find(\"REQUESTED\") >= 0): # new request,", "attr_name, value): #print \"Location: \", value ## store data myPhoto.lat = vehicle.location.global_relative_frame.lat myPhoto.lon", "= \"<a href=\\\"uploads/\"+filename+\"\\\"><img src=\\\"uploads/\" + filename + \"\\\" height=50 ></a>\" myPhoto.pmsg = \"uploads/\"", "= aLocation2.lat - aLocation1.lat bearing = 90.00 + math.atan2(-off_y, off_x) * 57.2957795 if", "is relatively accurate over small distances (10m within 1km) except close to the", "90.00 + math.atan2(-off_y, off_x) * 57.2957795 if bearing < 0: bearing += 360.00", "photo myPhoto.take_photo(1920, 1080,'/home/pi/Desktop/cap.jpg') # exit guided mode myPhoto.Photoing = False # post photo", "\" myPhoto.take_photo(640,480,'/home/pi/Desktop/testcap.jpg') myPhoto.post_photo('/home/pi/Desktop/testcap.jpg','tphoto.jpg') #fname='Tn3m0_' + time.strftime(\"%Y%m%d-%H%M%S\") + '.jpg' #myPhoto.post_photo('/home/pi/Desktop/testcap.jpg',fname) #myPhoto.time_to_quit=True 
myPhoto.get_pic_requests() # Remove", "1km) except close to the poles. For more information see: http://gis.stackexchange.com/questions/2951/algorithm-for-offsetting-a-latitude-longitude-by-some-amount-of-meters \"\"\" earth_radius", "type(original_location) is LocationGlobalRelative: targetlocation=LocationGlobalRelative(newlat, newlon,original_location.alt) else: raise Exception(\"Invalid Location object passed\") return targetlocation;", "0): # not guided myPhoto.point1.lat = myPhoto.plat myPhoto.point1.lon = myPhoto.plon vehicle.mode = VehicleMode(\"GUIDED\")", "vehicle.parameters['WP_RADIUS'] myPhoto.message = time.strftime(\"%Y-%m-%d %H:%M:%S \") + str(vehicle.battery) + \" \" + str(vehicle.gps_0)", "(newname, open(filename, 'rb'))} rr = requests.post(url, data=data, files=files) print rr.text print('photo posted') def", "360.00 return bearing; myPhoto = PhotoStuff() # Callback when location has changed. 'value'", "as `original_location`. The function is useful when you want to move the vehicle", "= aLocation2.lon - aLocation1.lon off_y = aLocation2.lat - aLocation1.lat bearing = 90.00 +", "be guided now\",myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) else: if str(value).find(\"GUIDED\") < 0: #not changed to guided mode", "%H:%M:%S\") r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':myPhoto.pmode,'debug':myPhoto.pmsg}) print \"should be guided now\",myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) else: if str(value).find(\"GUIDED\") < 0: #not", "vehicle.add_attribute_listener('location.global_frame', location_callback) vehicle.add_attribute_listener('mode', mode_callback) # Loop, interrupts are running things now. while not", "It comes from the ArduPilot test code: https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py \"\"\" dlat = aLocation2.lat -", "#Callback to monitor mode changes. 
'value' is the updated value # If mode", "pixhawk usb #vehicle = connect(\"/dev/ttyUSB0\", wait_ready='armed', baud=57600) # telemetry usb #vehicle = connect(\"/dev/ttyUSB0\",", "data getpix = False plat=0 plon=0 pmode='none' pmsg = 'no message' camera =", "%H:%M:%S\") r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':myPhoto.pmode,'debug':myPhoto.pmsg}) def get_location_meters(self,original_location, dNorth, dEast): \"\"\" Returns a LocationGlobal object containing the", "current location into myPhoto variables. def location_callback(self, attr_name, value): #print \"Location: \", value", "earth's poles. It comes from the ArduPilot test code: https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py \"\"\" dlat =", "need to change modes # if guided flag set but not guided mode:", "information see: http://gis.stackexchange.com/questions/2951/algorithm-for-offsetting-a-latitude-longitude-by-some-amount-of-meters \"\"\" earth_radius = 6378137.0 #Radius of \"spherical\" earth #Coordinate offsets", "In practice this usually # means that all supported attributes will be populated.", "str(vehicle.mode.name) else: # guided flag not set if (str(vehicle.mode.name).find(\"GUIDED\") >= 0): # guided,", "function vehicle.remove_message_listener('location.global_frame', location_callback) vehicle.remove_message_listener('mode', mode_callback) # Close vehicle object before exiting script vehicle.close()", "# Mode changing done here. # myPhoto.Photoing is True when we are heading", "update_n3m0_location(self): ## update the boat location r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':1,'lat':myPhoto.lat,'lon':myPhoto.lon,'mode':myPhoto.mode,'debug':myPhoto.message}) #print(r.text) def get_pic_requests(self): ## get data", "PhotoStuff() # Callback when location has changed. 
'value' is the updated value #", "% vehicle.mode.name # settable # class to hold info for courses, states, etc.", "## picture data getpix = False plat=0 plon=0 pmode='none' pmsg = 'no message'", "changes to \"steering\" start autonomous action (picture) # any other mode change cancels", "print \" Last Heartbeat: %s\" % vehicle.last_heartbeat print \" Is Armable?: %s\" %", "the current vehicle position. The algorithm is relatively accurate over small distances (10m", "This method is an approximation, and will not be accurate over large distances", "vehicle.mode.name # settable # class to hold info for courses, states, etc. class", "{'fileToUpload': (newname, open(filename, 'rb'))} rr = requests.post(url, data=data, files=files) print rr.text print('photo posted')", "Returns the bearing between the two LocationGlobal objects passed as parameters. This method", "the earth's poles. It comes from the ArduPilot test code: https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py \"\"\" off_x", "bearing; myPhoto = PhotoStuff() # Callback when location has changed. 'value' is the", "check to see if we need to change modes # if guided flag", "means that all supported attributes will be populated. # 'parameters' #vehicle.wait_ready('gps_0','armed','mode','attitude') vehicle.wait_ready('gps_0') #", "%s\" % vehicle.parameters['WP_RADIUS'] myPhoto.message = time.strftime(\"%Y-%m-%d %H:%M:%S \") + str(vehicle.battery) + \" \"", "`original_location`. The returned LocationGlobal has the same `alt` value as `original_location`. The function", "exit guided mode myPhoto.Photoing = False # post photo fname='n3m0_' + time.strftime(\"%Y%m%d-%H%M%S\") +", "dlong = aLocation2.lon - aLocation1.lon return math.sqrt((dlat*dlat) + (dlong*dlong)) * 1.113195e5 def get_bearing(self,", "mode: do guided mode. 
# Import DroneKit-Python
from dronekit import connect, VehicleMode, LocationGlobalRelative, LocationGlobal
from pymavlink import mavutil  # Needed for command message definitions
from picamera import PiCamera
import time
import math
import requests

# Connect to the Vehicle.
print("\nConnecting to vehicle")
#vehicle = connect("/dev/ttyACM0", wait_ready=True)  # pixhawk usb
#vehicle = connect("/dev/ttyUSB0", wait_ready='armed', baud=57600)  # telemetry usb
#vehicle = connect("/dev/ttyUSB0", baud=57600)  # telemetry usb
vehicle = connect("/dev/ttyS0", baud=57600)  # telemetry usb

# Using ``wait_ready(True)`` waits on :py:attr:`parameters`, :py:attr:`gps_0`,
# :py:attr:`armed`, :py:attr:`mode`, and :py:attr:`attitude`. In practice this usually
# means that all supported attributes will be populated.
#vehicle.wait_ready('gps_0','armed','mode','attitude')
vehicle.wait_ready('gps_0')

# Get some vehicle attributes (state)
print "Get some vehicle attribute values:"
print " GPS: %s" % vehicle.gps_0
print " Battery: %s" % vehicle.battery
print " Last Heartbeat: %s" % vehicle.last_heartbeat
print " Is Armable?: %s" % vehicle.is_armable
print " System status: %s" % vehicle.system_status.state
print " Mode: %s" % vehicle.mode.name  # settable


# Class to hold info for courses, states, etc.
class PhotoStuff:
    ## Photo point (where to take photo)
    point1 = LocationGlobalRelative(38.0, -122.0, 0)
    ## current location
    lat = 38.06807841429639
    lon = -122.23280310630798
    mode = 'none'
    message = 'no msg'
    Photoing = False
    time_to_quit = False
    ## picture data
    getpix = False
    plat = 0
    plon = 0
    pmode = 'none'
    pmsg = 'no message'
    camera = PiCamera()

    def update_n3m0_location(self):
        ## update the boat location
        r = requests.post('http://sailbot.holdentechnology.com/postlatlon.php',
                          data={'b_no': 1, 'lat': myPhoto.lat, 'lon': myPhoto.lon,
                                'mode': myPhoto.mode, 'debug': myPhoto.message})
        #print(r.text)

    def get_pic_requests(self):
        ## get data
        r2 = requests.get('http://sailbot.holdentechnology.com/getbuoys.php')
        thedata = r2.text.split(';')
        myPhoto.plat = float(thedata[8])
        myPhoto.plon = float(thedata[9])
        myPhoto.pmode = thedata[10]
        myPhoto.pmsg = thedata[11]
        if str(myPhoto.pmode).find("REQUESTED") >= 0:  # new request, acknowledge.
            r = requests.post('http://sailbot.holdentechnology.com/postlatlon.php',
                              data={'b_no': 2, 'lat': myPhoto.plat, 'lon': myPhoto.plon,
                                    'mode': "n3m0 Received", 'debug': "need auth"})

    def take_photo(self, w, h, filename):
        print('taking photo')
        print filename
        self.camera.resolution = (w, h)
        #self.camera.resolution = (1920, 1080)
        #self.camera.resolution = (640, 480)
        self.camera.start_preview()
        #time.sleep(2)
        self.camera.capture(filename)
        self.camera.stop_preview()
        print('photo taken')

    def post_photo(self, filename, newname):
        print('posting photo')
        url = 'http://sailbot.holdentechnology.com/upload.php'
        #url = 'http://httpbin.org/post'
        data = {'submit': 'Submit', 'name': 'fileToUpload', 'id': 'fileToUpload'}
        files = {'fileToUpload': (newname, open(filename, 'rb'))}
        rr = requests.post(url, data=data, files=files)
        print rr.text
        print('photo posted')

    def deliver_photo(self, filename):
        myPhoto.pmode = "Finished<br>" + time.strftime("%Y-%m-%d %H:%M:%S")
        #myPhoto.pmsg = "<a href=\"uploads/"+filename+"\"><img src=\"uploads/" + filename + "\" height=50 ></a>"
        myPhoto.pmsg = "uploads/" + filename
        r = requests.post('http://sailbot.holdentechnology.com/insertlatlon.php',
                          data={'b_no': 2, 'lat': myPhoto.plat, 'lon': myPhoto.plon,
                                'mode': myPhoto.pmode, 'debug': myPhoto.pmsg})
        print "should be guided now", myPhoto.get_distance_meters(myPhoto.point1, vehicle.location.global_relative_frame)
        print myPhoto.pmsg
        #myPhoto.time_to_quit = True
        myPhoto.pmode = "DONE"
        myPhoto.pmsg = "Ready for new request" + time.strftime(" %Y-%m-%d %H:%M:%S")
        r = requests.post('http://sailbot.holdentechnology.com/postlatlon.php',
                          data={'b_no': 2, 'lat': myPhoto.plat, 'lon': myPhoto.plon,
                                'mode': myPhoto.pmode, 'debug': myPhoto.pmsg})

    def get_location_meters(self, original_location, dNorth, dEast):
        """
        Returns a LocationGlobal object containing the latitude/longitude `dNorth` and
        `dEast` metres from the specified `original_location`. The returned LocationGlobal
        has the same `alt` value as `original_location`.

        The function is useful when you want to move the vehicle around specifying
        locations relative to the current vehicle position.

        The algorithm is relatively accurate over small distances (10m within 1km)
        except close to the poles. For more information see:
        http://gis.stackexchange.com/questions/2951/algorithm-for-offsetting-a-latitude-longitude-by-some-amount-of-meters
        """
        earth_radius = 6378137.0  # Radius of "spherical" earth
        # Coordinate offsets in radians
        dLat = dNorth / earth_radius
        dLon = dEast / (earth_radius * math.cos(math.pi * original_location.lat / 180))
        # New position in decimal degrees
        newlat = original_location.lat + (dLat * 180 / math.pi)
        newlon = original_location.lon + (dLon * 180 / math.pi)
        print("llatlon")
        if type(original_location) is LocationGlobal:
            targetlocation = LocationGlobal(newlat, newlon, original_location.alt)
        elif type(original_location) is LocationGlobalRelative:
            targetlocation = LocationGlobalRelative(newlat, newlon, original_location.alt)
        else:
            raise Exception("Invalid Location object passed")
        return targetlocation

    def get_distance_meters(self, aLocation1, aLocation2):
        """
        Returns the ground distance in metres between two LocationGlobal objects.

        This method is an approximation, and will not be accurate over large distances
        and close to the earth's poles. It comes from the ArduPilot test code:
        https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py
        """
        dlat = aLocation2.lat - aLocation1.lat
        dlong = aLocation2.lon - aLocation1.lon
        return math.sqrt((dlat * dlat) + (dlong * dlong)) * 1.113195e5

    def get_bearing(self, aLocation1, aLocation2):
        """
        Returns the bearing between the two LocationGlobal objects passed as parameters.

        This method is an approximation, and may not be accurate over large distances
        and close to the earth's poles. It comes from the ArduPilot test code:
        https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py
        """
        off_x = aLocation2.lon - aLocation1.lon
        off_y = aLocation2.lat - aLocation1.lat
        bearing = 90.00 + math.atan2(-off_y, off_x) * 57.2957795
        if bearing < 0:
            bearing += 360.00
        return bearing


myPhoto = PhotoStuff()


# Callback when location has changed. 'value' is the updated value.
# Mode changing is done here. myPhoto.Photoing is True when we are heading
# for a picture point. Also saves the current location into myPhoto variables.
def location_callback(self, attr_name, value):
    #print "Location: ", value
    ## store data
    myPhoto.lat = vehicle.location.global_relative_frame.lat
    myPhoto.lon = vehicle.location.global_relative_frame.lon
    myPhoto.mode = str(vehicle.mode.name)
    ## check for reaching picture waypoint
    dist = myPhoto.get_distance_meters(myPhoto.point1, vehicle.location.global_relative_frame)
    # if reached photo point: take photo, return to auto mode.
    if (dist <= 3.0) and myPhoto.Photoing:  # waits until we reach photo point, takes photo
    #if myPhoto.Photoing:  # use for bench testing, immediately takes photo.
        print "Picture!", dist
        # take photo
        myPhoto.take_photo(1920, 1080, '/home/pi/Desktop/cap.jpg')
        # exit guided mode
        myPhoto.Photoing = False
        # post photo
        fname = 'n3m0_' + time.strftime("%Y%m%d-%H%M%S") + '.jpg'
        myPhoto.post_photo('/home/pi/Desktop/cap.jpg', fname)
        myPhoto.deliver_photo(fname)
    # continuously check to see if we need to change modes:
    # if guided flag set but not in guided mode, enter guided mode.
    if myPhoto.Photoing:
        myPhoto.mode = str(dist)
        if str(vehicle.mode.name).find("GUIDED") < 0:  # not guided
            myPhoto.point1.lat = myPhoto.plat
            myPhoto.point1.lon = myPhoto.plon
            vehicle.mode = VehicleMode("GUIDED")
            vehicle.simple_goto(myPhoto.point1)
            print "guided mode again: ", str(vehicle.mode.name)
    else:  # guided flag not set
        if str(vehicle.mode.name).find("GUIDED") >= 0:  # guided, return to auto mode.
            vehicle.mode = VehicleMode("AUTO")
            print "End guided mode ", str(vehicle.mode.name)


# Callback to monitor mode changes. 'value' is the updated value.
# If the mode changes to "steering", start the autonomous action (picture);
# any other mode change cancels the autonomous function through this code.
def mode_callback(self, attr_name, value):
    print "Mode: ", value
    if str(value).find("STEERING") >= 0:
        myPhoto.Photoing = True
        myPhoto.pmode = "UNDERWAY"
        myPhoto.pmsg = "n3m0 received request" + time.strftime(" %Y-%m-%d %H:%M:%S")
        r = requests.post('http://sailbot.holdentechnology.com/postlatlon.php',
                          data={'b_no': 2, 'lat': myPhoto.plat, 'lon': myPhoto.plon,
                                'mode': myPhoto.pmode, 'debug': myPhoto.pmsg})
        print "should be guided now", myPhoto.get_distance_meters(myPhoto.point1, vehicle.location.global_relative_frame)
    else:
        if str(value).find("GUIDED") < 0:  # not changed to guided mode
            myPhoto.Photoing = False  # let us go back to manual or RTL etc
            print("new mode set, Photoing off")


# Add a callback `location_callback` for the `global_frame` attribute.
vehicle.add_attribute_listener('location.global_frame', location_callback)
vehicle.add_attribute_listener('mode', mode_callback)

# Loop; interrupts are running things now.
while not myPhoto.time_to_quit:
    time.sleep(4)
    print myPhoto.get_distance_meters(myPhoto.point1, vehicle.location.global_relative_frame)
    # getting parameters is a little buggy
    #print "Param: %s" % vehicle.parameters['WP_RADIUS']
    myPhoto.message = time.strftime("%Y-%m-%d %H:%M:%S ") + str(vehicle.battery) + " " + str(vehicle.gps_0)
    myPhoto.mode = str(vehicle.mode.name)
    myPhoto.update_n3m0_location()
    myPhoto.message = " "
    myPhoto.mode = " "
    myPhoto.take_photo(640, 480, '/home/pi/Desktop/testcap.jpg')
    myPhoto.post_photo('/home/pi/Desktop/testcap.jpg', 'tphoto.jpg')
    #fname = 'Tn3m0_' + time.strftime("%Y%m%d-%H%M%S") + '.jpg'
    #myPhoto.post_photo('/home/pi/Desktop/testcap.jpg', fname)
    #myPhoto.time_to_quit = True
    myPhoto.get_pic_requests()

# Remove observers - specifying the attribute and previously registered callback
# function. (Attribute listeners are removed with remove_attribute_listener;
# remove_message_listener is for MAVLink message listeners.)
vehicle.remove_attribute_listener('location.global_frame', location_callback)
vehicle.remove_attribute_listener('mode', mode_callback)

# Close vehicle object before exiting script
vehicle.close()
"value if str(value).find(\"STEERING\") >=0: myPhoto.Photoing = True myPhoto.pmode = \"UNDERWAY\" myPhoto.pmsg = \"n3m0", "= (640, 480) self.camera.start_preview() #time.sleep(2) self.camera.capture(filename) self.camera.stop_preview() print('photo taken') def post_photo(self,filename, newname): print", "True myPhoto.pmode = \"UNDERWAY\" myPhoto.pmsg = \"n3m0 received request\" + time.strftime(\" %Y-%m-%d %H:%M:%S\")", "print \" Mode: %s\" % vehicle.mode.name # settable # class to hold info", "is LocationGlobalRelative: targetlocation=LocationGlobalRelative(newlat, newlon,original_location.alt) else: raise Exception(\"Invalid Location object passed\") return targetlocation; def", "False plat=0 plon=0 pmode='none' pmsg = 'no message' camera = PiCamera() def update_n3m0_location(self):", "current location lat = 38.06807841429639 lon = -122.23280310630798 mode='none' message = 'no msg'", "(dist <= 3.0) and (myPhoto.Photoing): # waits until we reach photo point, takes", "guided flag not set if (str(vehicle.mode.name).find(\"GUIDED\") >= 0): # guided, return to auto", "in radians dLat = dNorth/earth_radius dLon = dEast/(earth_radius*math.cos(math.pi*original_location.lat/180)) #New position in decimal degrees", "myPhoto.pmsg = \"n3m0 received request\" + time.strftime(\" %Y-%m-%d %H:%M:%S\") r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':myPhoto.pmode,'debug':myPhoto.pmsg}) print \"should be", "-122.23280310630798 mode='none' message = 'no msg' Photoing = False time_to_quit = False ##", "Photo point (where to take photo) point1 = LocationGlobalRelative(38.0, -122.0, 0) ## current", "vehicle around specifying locations relative to the current vehicle position. 
The algorithm is", "original_location.lat + (dLat * 180/math.pi) newlon = original_location.lon + (dLon * 180/math.pi) print(\"llatlon\"", "print rr.text print('photo posted') def deliver_photo(self,filename): myPhoto.pmode = \"Finished<br>\" + time.strftime(\"%Y-%m-%d %H:%M:%S\") #myPhoto.pmsg", "- aLocation1.lat bearing = 90.00 + math.atan2(-off_y, off_x) * 57.2957795 if bearing <", "has changed. 'value' is the updated value # Mode changing done here. #", "value as `original_location`. The function is useful when you want to move the", "print \" System status: %s\" % vehicle.system_status.state print \" Mode: %s\" % vehicle.mode.name", "pmode='none' pmsg = 'no message' camera = PiCamera() def update_n3m0_location(self): ## update the", "LocationGlobal objects passed as parameters. This method is an approximation, and may not", "value # If mode changes to \"steering\" start autonomous action (picture) # any", "myPhoto.lat = vehicle.location.global_relative_frame.lat myPhoto.lon = vehicle.location.global_relative_frame.lon myPhoto.mode = str(vehicle.mode.name) ## check for reaching", "+ \" \" + str(vehicle.gps_0) myPhoto.mode = str(vehicle.mode.name) myPhoto.update_n3m0_location() myPhoto.message = \" \"", "all supported attributes will be populated. # 'parameters' #vehicle.wait_ready('gps_0','armed','mode','attitude') vehicle.wait_ready('gps_0') # Get some", "again: \", str(vehicle.mode.name) else: # guided flag not set if (str(vehicle.mode.name).find(\"GUIDED\") >= 0):", "usb #vehicle = connect(\"/dev/ttyUSB0\", baud=57600) # telemetry usb vehicle = connect(\"/dev/ttyS0\", baud=57600) #", "parameters. This method is an approximation, and may not be accurate over large", "point, takes photo #if (myPhoto.Photoing): # use for bench testing, immediately takes photo.", "point: take photo, return to auto mode. if (dist <= 3.0) and (myPhoto.Photoing):", "new request, acknowledge. 
r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={ 'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':\"n3m0 Received\",'debug':\"need auth\"}) def take_photo(self, w,h, filename): print('taking photo')", "# also saves current location into myPhoto variables. def location_callback(self, attr_name, value): #print", "same `alt` value as `original_location`. The function is useful when you want to", "approximation, and may not be accurate over large distances and close to the", "(10m within 1km) except close to the poles. For more information see: http://gis.stackexchange.com/questions/2951/algorithm-for-offsetting-a-latitude-longitude-by-some-amount-of-meters", "LocationGlobalRelative(38.0, -122.0, 0) ## current location lat = 38.06807841429639 lon = -122.23280310630798 mode='none'", "= aLocation2.lat - aLocation1.lat dlong = aLocation2.lon - aLocation1.lon return math.sqrt((dlat*dlat) + (dlong*dlong))", "code: https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py \"\"\" off_x = aLocation2.lon - aLocation1.lon off_y = aLocation2.lat - aLocation1.lat", "myPhoto.take_photo(640,480,'/home/pi/Desktop/testcap.jpg') myPhoto.post_photo('/home/pi/Desktop/testcap.jpg','tphoto.jpg') #fname='Tn3m0_' + time.strftime(\"%Y%m%d-%H%M%S\") + '.jpg' #myPhoto.post_photo('/home/pi/Desktop/testcap.jpg',fname) #myPhoto.time_to_quit=True myPhoto.get_pic_requests() # Remove observer", "print myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) # getting parameters is a little buggy #print \"Param: %s\" %", "dEast/(earth_radius*math.cos(math.pi*original_location.lat/180)) #New position in decimal degrees newlat = original_location.lat + (dLat * 180/math.pi)", "not guided myPhoto.point1.lat = myPhoto.plat myPhoto.point1.lon = myPhoto.plon vehicle.mode = VehicleMode(\"GUIDED\") vehicle.simple_goto(myPhoto.point1) print", "myPhoto.mode = str(vehicle.mode.name) ## check for reaching picture waypoint dist = 
myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) #", "+ (dLon * 180/math.pi) print(\"llatlon\" ) if type(original_location) is LocationGlobal: targetlocation=LocationGlobal(newlat, newlon,original_location.alt) elif", "# telemetry usb # Using the ``wait_ready(True)`` waits on :py:attr:`parameters`, :py:attr:`gps_0`, # :py:attr:`armed`,", "str(vehicle.mode.name) myPhoto.update_n3m0_location() myPhoto.message = \" \" myPhoto.mode = \" \" myPhoto.take_photo(640,480,'/home/pi/Desktop/testcap.jpg') myPhoto.post_photo('/home/pi/Desktop/testcap.jpg','tphoto.jpg') #fname='Tn3m0_'", "two LocationGlobal objects passed as parameters. This method is an approximation, and may", "'http://httpbin.org/post' data={'submit':'Submit','name':'fileToUpload','id':'fileToUpload'} files = {'fileToUpload': (newname, open(filename, 'rb'))} rr = requests.post(url, data=data, files=files)", "guided flag set but not guided mode: do guided mode. if myPhoto.Photoing: myPhoto.mode", "# take photo myPhoto.take_photo(1920, 1080,'/home/pi/Desktop/cap.jpg') # exit guided mode myPhoto.Photoing = False #", "vehicle = connect(\"/dev/ttyS0\", baud=57600) # telemetry usb # Using the ``wait_ready(True)`` waits on", "for courses, states, etc. class PhotoStuff: ## Photo point (where to take photo)", "#New position in decimal degrees newlat = original_location.lat + (dLat * 180/math.pi) newlon", "specifying the attribute and previously registered callback function vehicle.remove_message_listener('location.global_frame', location_callback) vehicle.remove_message_listener('mode', mode_callback) #", "be populated. 
# 'parameters' #vehicle.wait_ready('gps_0','armed','mode','attitude') vehicle.wait_ready('gps_0') # Get some vehicle attributes (state) print", "vehicle.wait_ready('gps_0') # Get some vehicle attributes (state) print \"Get some vehicle attribute values:\"", "\"spherical\" earth #Coordinate offsets in radians dLat = dNorth/earth_radius dLon = dEast/(earth_radius*math.cos(math.pi*original_location.lat/180)) #New", "(str(vehicle.mode.name).find(\"GUIDED\") >= 0): # guided, return to auto mode. vehicle.mode = VehicleMode(\"AUTO\") print", "= myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) # if reached photo point: take photo, return to auto mode.", "`dNorth` and `dEast` metres from the specified `original_location`. The returned LocationGlobal has the", "the same `alt` value as `original_location`. The function is useful when you want", "DroneKit-Python from dronekit import connect, VehicleMode, LocationGlobalRelative, LocationGlobal from pymavlink import mavutil #", "new request\" + time.strftime(\" %Y-%m-%d %H:%M:%S\") r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':myPhoto.pmode,'debug':myPhoto.pmsg}) def get_location_meters(self,original_location, dNorth, dEast): \"\"\" Returns", "aLocation2.lat - aLocation1.lat bearing = 90.00 + math.atan2(-off_y, off_x) * 57.2957795 if bearing", "bench testing, immediately takes photo. print \"Picture!\", dist # take photo myPhoto.take_photo(1920, 1080,'/home/pi/Desktop/cap.jpg')", "\" \" + str(vehicle.gps_0) myPhoto.mode = str(vehicle.mode.name) myPhoto.update_n3m0_location() myPhoto.message = \" \" myPhoto.mode", "Last Heartbeat: %s\" % vehicle.last_heartbeat print \" Is Armable?: %s\" % vehicle.is_armable print", "Received\",'debug':\"need auth\"}) def take_photo(self, w,h, filename): print('taking photo') print filename self.camera.resolution = (w,", "take photo, return to auto mode. 
if (dist <= 3.0) and (myPhoto.Photoing): #", "class to hold info for courses, states, etc. class PhotoStuff: ## Photo point", "r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':1,'lat':myPhoto.lat,'lon':myPhoto.lon,'mode':myPhoto.mode,'debug':myPhoto.message}) #print(r.text) def get_pic_requests(self): ## get data r2 = requests.get('http://sailbot.holdentechnology.com/getbuoys.php') thedata=r2.text.split(';') myPhoto.plat=float(thedata[8]) myPhoto.plon=float(thedata[9])", "\"\"\" off_x = aLocation2.lon - aLocation1.lon off_y = aLocation2.lat - aLocation1.lat bearing =", "+ time.strftime(\"%Y-%m-%d %H:%M:%S\") #myPhoto.pmsg = \"<a href=\\\"uploads/\"+filename+\"\\\"><img src=\\\"uploads/\" + filename + \"\\\" height=50", "Is Armable?: %s\" % vehicle.is_armable print \" System status: %s\" % vehicle.system_status.state print", "%s\" % vehicle.gps_0 print \" Battery: %s\" % vehicle.battery print \" Last Heartbeat:", "math.sqrt((dlat*dlat) + (dlong*dlong)) * 1.113195e5 def get_bearing(self, aLocation1, aLocation2): \"\"\" Returns the bearing", "Loop, interrupts are running things now. while not myPhoto.time_to_quit: time.sleep(4) print myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) #", "= connect(/dev/ttyACM0, wait_ready=True) # pixhawk usb #vehicle = connect(\"/dev/ttyUSB0\", wait_ready='armed', baud=57600) # telemetry", "h) #self.camera.resolution = (1920, 1080) #self.camera.resolution = (640, 480) self.camera.start_preview() #time.sleep(2) self.camera.capture(filename) self.camera.stop_preview()", "to move the vehicle around specifying locations relative to the current vehicle position.", "changed. 'value' is the updated value # Mode changing done here. # myPhoto.Photoing", "#if (myPhoto.Photoing): # use for bench testing, immediately takes photo. print \"Picture!\", dist", "changing done here. # myPhoto.Photoing is True when we are heading for a", "etc. 
class PhotoStuff: ## Photo point (where to take photo) point1 = LocationGlobalRelative(38.0,", "earth's poles. It comes from the ArduPilot test code: https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py \"\"\" off_x =", "mode='none' message = 'no msg' Photoing = False time_to_quit = False ## picture", ":py:attr:`attitude`. In practice this usually # means that all supported attributes will be", "data=data, files=files) print rr.text print('photo posted') def deliver_photo(self,filename): myPhoto.pmode = \"Finished<br>\" + time.strftime(\"%Y-%m-%d", "taken') def post_photo(self,filename, newname): print ('posting photo') url = 'http://sailbot.holdentechnology.com/upload.php' #url = 'http://httpbin.org/post'", "myPhoto.update_n3m0_location() myPhoto.message = \" \" myPhoto.mode = \" \" myPhoto.take_photo(640,480,'/home/pi/Desktop/testcap.jpg') myPhoto.post_photo('/home/pi/Desktop/testcap.jpg','tphoto.jpg') #fname='Tn3m0_' +", "photo point: take photo, return to auto mode. if (dist <= 3.0) and", "to vehicle\") #vehicle = connect(/dev/ttyACM0, wait_ready=True) # pixhawk usb #vehicle = connect(\"/dev/ttyUSB0\", wait_ready='armed',", "vehicle.system_status.state print \" Mode: %s\" % vehicle.mode.name # settable # class to hold", "('posting photo') url = 'http://sailbot.holdentechnology.com/upload.php' #url = 'http://httpbin.org/post' data={'submit':'Submit','name':'fileToUpload','id':'fileToUpload'} files = {'fileToUpload': (newname,", "around specifying locations relative to the current vehicle position. The algorithm is relatively", "and (myPhoto.Photoing): # waits until we reach photo point, takes photo #if (myPhoto.Photoing):", "import time import math import requests # Connect to the Vehicle. print(\"\\nConnecting to", "an approximation, and may not be accurate over large distances and close to", "aLocation2): \"\"\" Returns the ground distance in metres between two LocationGlobal objects. This", "will be populated. 
# 'parameters' #vehicle.wait_ready('gps_0','armed','mode','attitude') vehicle.wait_ready('gps_0') # Get some vehicle attributes (state)", "targetlocation; def get_distance_meters(self, aLocation1, aLocation2): \"\"\" Returns the ground distance in metres between", "- aLocation1.lon off_y = aLocation2.lat - aLocation1.lat bearing = 90.00 + math.atan2(-off_y, off_x)", "if type(original_location) is LocationGlobal: targetlocation=LocationGlobal(newlat, newlon,original_location.alt) elif type(original_location) is LocationGlobalRelative: targetlocation=LocationGlobalRelative(newlat, newlon,original_location.alt) else:", "myPhoto.message = \" \" myPhoto.mode = \" \" myPhoto.take_photo(640,480,'/home/pi/Desktop/testcap.jpg') myPhoto.post_photo('/home/pi/Desktop/testcap.jpg','tphoto.jpg') #fname='Tn3m0_' + time.strftime(\"%Y%m%d-%H%M%S\")", "manual or RTL etc print(\"new mode set, Photoing off\") # Add a callback", "registered callback function vehicle.remove_message_listener('location.global_frame', location_callback) vehicle.remove_message_listener('mode', mode_callback) # Close vehicle object before exiting", "time.strftime(\"%Y-%m-%d %H:%M:%S \") + str(vehicle.battery) + \" \" + str(vehicle.gps_0) myPhoto.mode = str(vehicle.mode.name)", "http://gis.stackexchange.com/questions/2951/algorithm-for-offsetting-a-latitude-longitude-by-some-amount-of-meters \"\"\" earth_radius = 6378137.0 #Radius of \"spherical\" earth #Coordinate offsets in radians", "is LocationGlobal: targetlocation=LocationGlobal(newlat, newlon,original_location.alt) elif type(original_location) is LocationGlobalRelative: targetlocation=LocationGlobalRelative(newlat, newlon,original_location.alt) else: raise Exception(\"Invalid", "or RTL etc print(\"new mode set, Photoing off\") # Add a callback `location_callback`", "location has changed. 
'value' is the updated value # Mode changing done here.", "+ '.jpg' myPhoto.post_photo('/home/pi/Desktop/cap.jpg',fname) myPhoto.deliver_photo(fname) # continuously check to see if we need to", "the boat location r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':1,'lat':myPhoto.lat,'lon':myPhoto.lon,'mode':myPhoto.mode,'debug':myPhoto.message}) #print(r.text) def get_pic_requests(self): ## get data r2 = requests.get('http://sailbot.holdentechnology.com/getbuoys.php')", "photo') print filename self.camera.resolution = (w, h) #self.camera.resolution = (1920, 1080) #self.camera.resolution =", "print \"guided mode again: \", str(vehicle.mode.name) else: # guided flag not set if", "\"Param: %s\" % vehicle.parameters['WP_RADIUS'] myPhoto.message = time.strftime(\"%Y-%m-%d %H:%M:%S \") + str(vehicle.battery) + \"", "the `global_frame` attribute. vehicle.add_attribute_listener('location.global_frame', location_callback) vehicle.add_attribute_listener('mode', mode_callback) # Loop, interrupts are running things", "radians dLat = dNorth/earth_radius dLon = dEast/(earth_radius*math.cos(math.pi*original_location.lat/180)) #New position in decimal degrees newlat", "request\" + time.strftime(\" %Y-%m-%d %H:%M:%S\") r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':myPhoto.pmode,'debug':myPhoto.pmsg}) print \"should be guided now\",myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) else: if", "we need to change modes # if guided flag set but not guided", "def mode_callback(self, attr_name, value): print \"Mode: \", value if str(value).find(\"STEERING\") >=0: myPhoto.Photoing =", "the vehicle around specifying locations relative to the current vehicle position. The algorithm", "etc print(\"new mode set, Photoing off\") # Add a callback `location_callback` for the", "the ground distance in metres between two LocationGlobal objects. 
This method is an", "# Callback when location has changed. 'value' is the updated value # Mode", "guided mode myPhoto.Photoing = False # post photo fname='n3m0_' + time.strftime(\"%Y%m%d-%H%M%S\") + '.jpg'", "vehicle attribute values:\" print \" GPS: %s\" % vehicle.gps_0 print \" Battery: %s\"", "ground distance in metres between two LocationGlobal objects. This method is an approximation,", "str(vehicle.gps_0) myPhoto.mode = str(vehicle.mode.name) myPhoto.update_n3m0_location() myPhoto.message = \" \" myPhoto.mode = \" \"", "auth\"}) def take_photo(self, w,h, filename): print('taking photo') print filename self.camera.resolution = (w, h)", "some vehicle attribute values:\" print \" GPS: %s\" % vehicle.gps_0 print \" Battery:", "`dEast` metres from the specified `original_location`. The returned LocationGlobal has the same `alt`", "aLocation1, aLocation2): \"\"\" Returns the ground distance in metres between two LocationGlobal objects.", "Import DroneKit-Python from dronekit import connect, VehicleMode, LocationGlobalRelative, LocationGlobal from pymavlink import mavutil", "locations relative to the current vehicle position. The algorithm is relatively accurate over", "src=\\\"uploads/\" + filename + \"\\\" height=50 ></a>\" myPhoto.pmsg = \"uploads/\" + filename r=requests.post('http://sailbot.holdentechnology.com/insertlatlon.php',data={'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':myPhoto.pmode,'debug':myPhoto.pmsg})", "(dlong*dlong)) * 1.113195e5 def get_bearing(self, aLocation1, aLocation2): \"\"\" Returns the bearing between the", "+ str(vehicle.gps_0) myPhoto.mode = str(vehicle.mode.name) myPhoto.update_n3m0_location() myPhoto.message = \" \" myPhoto.mode = \"", "vehicle attributes (state) print \"Get some vehicle attribute values:\" print \" GPS: %s\"", "LocationGlobal object containing the latitude/longitude `dNorth` and `dEast` metres from the specified `original_location`.", "done here. 
# myPhoto.Photoing is True when we are heading for a picture", "ArduPilot test code: https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py \"\"\" off_x = aLocation2.lon - aLocation1.lon off_y = aLocation2.lat", "algorithm is relatively accurate over small distances (10m within 1km) except close to", "time import math import requests # Connect to the Vehicle. print(\"\\nConnecting to vehicle\")", "#myPhoto.time_to_quit=True myPhoto.pmode = \"DONE\" myPhoto.pmsg = \"Ready for new request\" + time.strftime(\" %Y-%m-%d", "into myPhoto variables. def location_callback(self, attr_name, value): #print \"Location: \", value ## store", "pymavlink import mavutil # Needed for command message definitions from picamera import PiCamera", "* 1.113195e5 def get_bearing(self, aLocation1, aLocation2): \"\"\" Returns the bearing between the two", "(myPhoto.Photoing): # waits until we reach photo point, takes photo #if (myPhoto.Photoing): #", "requests.post(url, data=data, files=files) print rr.text print('photo posted') def deliver_photo(self,filename): myPhoto.pmode = \"Finished<br>\" +", "the bearing between the two LocationGlobal objects passed as parameters. 
This method is", ">=0: myPhoto.Photoing = True myPhoto.pmode = \"UNDERWAY\" myPhoto.pmsg = \"n3m0 received request\" +", "= \" \" myPhoto.take_photo(640,480,'/home/pi/Desktop/testcap.jpg') myPhoto.post_photo('/home/pi/Desktop/testcap.jpg','tphoto.jpg') #fname='Tn3m0_' + time.strftime(\"%Y%m%d-%H%M%S\") + '.jpg' #myPhoto.post_photo('/home/pi/Desktop/testcap.jpg',fname) #myPhoto.time_to_quit=True myPhoto.get_pic_requests()", "dNorth, dEast): \"\"\" Returns a LocationGlobal object containing the latitude/longitude `dNorth` and `dEast`", "attributes (state) print \"Get some vehicle attribute values:\" print \" GPS: %s\" %", "\" Last Heartbeat: %s\" % vehicle.last_heartbeat print \" Is Armable?: %s\" % vehicle.is_armable", "For more information see: http://gis.stackexchange.com/questions/2951/algorithm-for-offsetting-a-latitude-longitude-by-some-amount-of-meters \"\"\" earth_radius = 6378137.0 #Radius of \"spherical\" earth", "to hold info for courses, states, etc. class PhotoStuff: ## Photo point (where", "dEast): \"\"\" Returns a LocationGlobal object containing the latitude/longitude `dNorth` and `dEast` metres", "def get_pic_requests(self): ## get data r2 = requests.get('http://sailbot.holdentechnology.com/getbuoys.php') thedata=r2.text.split(';') myPhoto.plat=float(thedata[8]) myPhoto.plon=float(thedata[9]) myPhoto.pmode =", "waits until we reach photo point, takes photo #if (myPhoto.Photoing): # use for", "1.113195e5 def get_bearing(self, aLocation1, aLocation2): \"\"\" Returns the bearing between the two LocationGlobal", "command message definitions from picamera import PiCamera import time import math import requests", "PiCamera() def update_n3m0_location(self): ## update the boat location r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={'b_no':1,'lat':myPhoto.lat,'lon':myPhoto.lon,'mode':myPhoto.mode,'debug':myPhoto.message}) #print(r.text) def get_pic_requests(self): ##", "+ filename 
r=requests.post('http://sailbot.holdentechnology.com/insertlatlon.php',data={'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':myPhoto.pmode,'debug':myPhoto.pmsg}) print \"should be guided now\",myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) print myPhoto.pmsg #myPhoto.time_to_quit=True myPhoto.pmode =", "the updated value # Mode changing done here. # myPhoto.Photoing is True when", "= -122.23280310630798 mode='none' message = 'no msg' Photoing = False time_to_quit = False", "# 'parameters' #vehicle.wait_ready('gps_0','armed','mode','attitude') vehicle.wait_ready('gps_0') # Get some vehicle attributes (state) print \"Get some", "aLocation1.lat bearing = 90.00 + math.atan2(-off_y, off_x) * 57.2957795 if bearing < 0:", "LocationGlobalRelative, LocationGlobal from pymavlink import mavutil # Needed for command message definitions from", "str(value).find(\"GUIDED\") < 0: #not changed to guided mode myPhoto.Photoing = False #let us", "thedata[10] myPhoto.pmsg = thedata[11] if (str(myPhoto.pmode).find(\"REQUESTED\") >= 0): # new request, acknowledge. 
r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={", "#self.camera.resolution = (1920, 1080) #self.camera.resolution = (640, 480) self.camera.start_preview() #time.sleep(2) self.camera.capture(filename) self.camera.stop_preview() print('photo", "time.sleep(4) print myPhoto.get_distance_meters(myPhoto.point1,vehicle.location.global_relative_frame) # getting parameters is a little buggy #print \"Param: %s\"", "observer - specifying the attribute and previously registered callback function vehicle.remove_message_listener('location.global_frame', location_callback) vehicle.remove_message_listener('mode',", "print \"Here we go!\" # Import DroneKit-Python from dronekit import connect, VehicleMode, LocationGlobalRelative,", "= thedata[10] myPhoto.pmsg = thedata[11] if (str(myPhoto.pmode).find(\"REQUESTED\") >= 0): # new request, acknowledge.", "= thedata[11] if (str(myPhoto.pmode).find(\"REQUESTED\") >= 0): # new request, acknowledge. r=requests.post('http://sailbot.holdentechnology.com/postlatlon.php',data={ 'b_no':2,'lat':myPhoto.plat,'lon':myPhoto.plon,'mode':\"n3m0 Received\",'debug':\"need", "latitude/longitude `dNorth` and `dEast` metres from the specified `original_location`. The returned LocationGlobal has", "myPhoto.lon = vehicle.location.global_relative_frame.lon myPhoto.mode = str(vehicle.mode.name) ## check for reaching picture waypoint dist", "= \"Finished<br>\" + time.strftime(\"%Y-%m-%d %H:%M:%S\") #myPhoto.pmsg = \"<a href=\\\"uploads/\"+filename+\"\\\"><img src=\\\"uploads/\" + filename +", "practice this usually # means that all supported attributes will be populated. #", "= {'fileToUpload': (newname, open(filename, 'rb'))} rr = requests.post(url, data=data, files=files) print rr.text print('photo", "the latitude/longitude `dNorth` and `dEast` metres from the specified `original_location`. The returned LocationGlobal", "for a picture point # also saves current location into myPhoto variables. 
print "Here we go!"

# Import DroneKit-Python
from dronekit import connect, VehicleMode, LocationGlobalRelative, LocationGlobal
from pymavlink import mavutil # Needed for command message definitions
from picamera import PiCamera
import time
import math
import requests

# Connect to the Vehicle.
print("\nConnecting to vehicle")
#vehicle = connect('/dev/ttyACM0', wait_ready=True) # pixhawk usb
#vehicle = connect("/dev/ttyUSB0", wait_ready='armed', baud=57600) # telemetry usb
#vehicle = connect("/dev/ttyUSB0", baud=57600) # telemetry usb
vehicle = connect("/dev/ttyS0", baud=57600) # telemetry

# Using the ``wait_ready(True)`` waits on :py:attr:`parameters`, :py:attr:`gps_0`,
# :py:attr:`armed`, :py:attr:`mode`, and :py:attr:`attitude`. In practice this usually
# means that all supported attributes will be populated.
#vehicle.wait_ready('gps_0','armed','mode','attitude')
vehicle.wait_ready('gps_0')

# Get some vehicle attributes (state)
print "Get some vehicle attribute values:"
print " GPS: %s" % vehicle.gps_0
print " Battery: %s" % vehicle.battery
print " Last Heartbeat: %s" % vehicle.last_heartbeat
print " Is Armable?: %s" % vehicle.is_armable
print " System status: %s" % vehicle.system_status.state
print " Mode: %s" % vehicle.mode.name # settable

# class to hold info for courses, states, etc.
class PhotoStuff:
    ## Photo point (where to take photo)
    point1 = LocationGlobalRelative(38.0, -122.0, 0)
    ## current location
    lat = 38.06807841429639
    lon = -122.23280310630798
    mode = 'none'
    message = 'no msg'
    Photoing = False
    time_to_quit = False
    ## picture data
    getpix = False
    plat = 0
    plon = 0
    pmode = 'none'
    pmsg = 'no message'
    camera = PiCamera()

    def update_n3m0_location(self):
        ## update the boat location
        r = requests.post('http://sailbot.holdentechnology.com/postlatlon.php',
                          data={'b_no': 1, 'lat': myPhoto.lat, 'lon': myPhoto.lon,
                                'mode': myPhoto.mode, 'debug': myPhoto.message})
        #print(r.text)

    def get_pic_requests(self):
        ## get data
        r2 = requests.get('http://sailbot.holdentechnology.com/getbuoys.php')
        thedata = r2.text.split(';')
        myPhoto.plat = float(thedata[8])
        myPhoto.plon = float(thedata[9])
        myPhoto.pmode = thedata[10]
        myPhoto.pmsg = thedata[11]
        if (str(myPhoto.pmode).find("REQUESTED") >= 0): # new request, acknowledge.
            r = requests.post('http://sailbot.holdentechnology.com/postlatlon.php',
                              data={'b_no': 2, 'lat': myPhoto.plat, 'lon': myPhoto.plon,
                                    'mode': "n3m0 Received", 'debug': "need auth"})

    def take_photo(self, w, h, filename):
        print('taking photo')
        print filename
        self.camera.resolution = (w, h)
        #self.camera.resolution = (1920, 1080)
        #self.camera.resolution = (640, 480)
        self.camera.start_preview()
        #time.sleep(2)
        self.camera.capture(filename)
        self.camera.stop_preview()
        print('photo taken')

    def post_photo(self, filename, newname):
        print('posting photo')
        url = 'http://sailbot.holdentechnology.com/upload.php'
        #url = 'http://httpbin.org/post'
        data = {'submit': 'Submit', 'name': 'fileToUpload', 'id': 'fileToUpload'}
        files = {'fileToUpload': (newname, open(filename, 'rb'))}
        rr = requests.post(url, data=data, files=files)

    def deliver_photo(self, filename):
        ## post an HTML thumbnail link for the web page, then reset the request state
        myPhoto.pmode = "<a href=\"uploads/" + filename + "\" height=50 ></a>"  # (opening of this HTML string reconstructed)
        myPhoto.pmsg = "uploads/" + filename
        r = requests.post('http://sailbot.holdentechnology.com/insertlatlon.php',
                          data={'b_no': 2, 'lat': myPhoto.plat, 'lon': myPhoto.plon,
                                'mode': myPhoto.pmode, 'debug': myPhoto.pmsg})
        print myPhoto.pmsg
        #myPhoto.time_to_quit = True
        myPhoto.pmode = "DONE"
        myPhoto.pmsg = "Ready for new request" + time.strftime(" %Y-%m-%d %H:%M:%S")
        r = requests.post('http://sailbot.holdentechnology.com/postlatlon.php',
                          data={'b_no': 2, 'lat': myPhoto.plat, 'lon': myPhoto.plon,
                                'mode': myPhoto.pmode, 'debug': myPhoto.pmsg})

    def get_location_meters(self, original_location, dNorth, dEast):
        """
        Returns a LocationGlobal object containing the latitude/longitude `dNorth` and `dEast`
        metres from the specified `original_location`. The returned LocationGlobal has the same
        `alt` value as `original_location`.

        The function is useful when you want to move the vehicle around specifying locations
        relative to the current vehicle position.

        The algorithm is relatively accurate over small distances (10m within 1km) except close
        to the poles. For more information see:
        http://gis.stackexchange.com/questions/2951/algorithm-for-offsetting-a-latitude-longitude-by-some-amount-of-meters
        """
        earth_radius = 6378137.0 # Radius of "spherical" earth
        # Coordinate offsets in radians
        dLat = dNorth/earth_radius
        dLon = dEast/(earth_radius*math.cos(math.pi*original_location.lat/180))

        # New position in decimal degrees
        newlat = original_location.lat + (dLat * 180/math.pi)
        newlon = original_location.lon + (dLon * 180/math.pi)
        if type(original_location) is LocationGlobal:
            targetlocation = LocationGlobal(newlat, newlon, original_location.alt)
        elif type(original_location) is LocationGlobalRelative:
            targetlocation = LocationGlobalRelative(newlat, newlon, original_location.alt)
        else:
            raise Exception("Invalid Location object passed")
        return targetlocation

    def get_distance_meters(self, aLocation1, aLocation2):
        """
        Returns the ground distance in metres between two LocationGlobal objects.

        This method is an approximation, and will not be accurate over large distances and
        close to the earth's poles. It comes from the ArduPilot test code:
        https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py
        """
        dlat = aLocation2.lat - aLocation1.lat
        dlong = aLocation2.lon - aLocation1.lon
        return math.sqrt((dlat*dlat) + (dlong*dlong)) * 1.113195e5

    def get_bearing(self, aLocation1, aLocation2):
        """
        Returns the bearing between the two LocationGlobal objects passed as parameters.

        This method is an approximation, and may not be accurate over large distances and
        close to the earth's poles. It comes from the ArduPilot test code:
        https://github.com/diydrones/ardupilot/blob/master/Tools/autotest/common.py
        """
        off_x = aLocation2.lon - aLocation1.lon
        off_y = aLocation2.lat - aLocation1.lat
        bearing = 90.00 + math.atan2(-off_y, off_x) * 57.2957795
        if bearing < 0:
            bearing += 360.00
        return bearing

myPhoto = PhotoStuff()

# Callback when location has changed. 'value' is the updated value.
# Mode changing done here.
# myPhoto.Photoing is True when we are heading for a picture point;
# also saves current location into myPhoto variables.
def location_callback(self, attr_name, value):
    #print "Location: ", value
    ## store data
    myPhoto.lat = vehicle.location.global_relative_frame.lat
    myPhoto.lon = vehicle.location.global_relative_frame.lon
    myPhoto.mode = str(vehicle.mode.name)
    ## check for reaching picture waypoint
    dist = myPhoto.get_distance_meters(myPhoto.point1, vehicle.location.global_relative_frame)
    # if reached photo point: take photo, return to auto mode.
    if (dist <= 3.0) and (myPhoto.Photoing): # waits until we reach photo point, takes photo
    #if (myPhoto.Photoing): # use for bench testing, immediately takes photo.
        print "Picture!", dist
        # take photo
        myPhoto.take_photo(1920, 1080, '/home/pi/Desktop/cap.jpg')
        # exit guided mode
        myPhoto.Photoing = False
        # post photo
        fname = 'n3m0_' + time.strftime("%Y%m%d-%H%M%S") + '.jpg'
        myPhoto.post_photo('/home/pi/Desktop/cap.jpg', fname)
        myPhoto.deliver_photo(fname)
    # continuously check to see if we need to change modes;
    # if guided flag set but not guided mode: do guided mode.
    if myPhoto.Photoing:
        myPhoto.mode = str(dist)
        if (str(vehicle.mode.name).find("GUIDED") < 0): # not guided
            myPhoto.point1.lat = myPhoto.plat
            myPhoto.point1.lon = myPhoto.plon
            vehicle.mode = VehicleMode("GUIDED")
            vehicle.simple_goto(myPhoto.point1)
            print "guided mode again: ", str(vehicle.mode.name)
    else: # guided flag not set
        if (str(vehicle.mode.name).find("GUIDED") >= 0): # guided, return to auto mode.
            vehicle.mode = VehicleMode("AUTO")
            print "End guided mode ", str(vehicle.mode.name)

# Callback to monitor mode changes. 'value' is the updated value.
# If mode changes to "steering" start autonomous action (picture);
# any other mode change cancels autonomous function through this code.
def mode_callback(self, attr_name, value):
    print "Mode: ", value
    if str(value).find("STEERING") >= 0:
        myPhoto.Photoing = True
        myPhoto.pmode = "UNDERWAY"
        myPhoto.pmsg = "n3m0 received request" + time.strftime(" %Y-%m-%d %H:%M:%S")
        r = requests.post('http://sailbot.holdentechnology.com/postlatlon.php',
                          data={'b_no': 2, 'lat': myPhoto.plat, 'lon': myPhoto.plon,
                                'mode': myPhoto.pmode, 'debug': myPhoto.pmsg})
        print "should be guided now", myPhoto.get_distance_meters(myPhoto.point1, vehicle.location.global_relative_frame)
    else:
        if str(value).find("GUIDED") < 0: # not changed to guided mode
            myPhoto.Photoing = False # RTL etc
            print("new mode set, Photoing off")

# Add a callback `location_callback` for the `global_frame` attribute.
vehicle.add_attribute_listener('location.global_frame', location_callback)
vehicle.add_attribute_listener('mode', mode_callback)

# Loop, interrupts are running things now.
while not myPhoto.time_to_quit:
    time.sleep(4)
    print myPhoto.get_distance_meters(myPhoto.point1, vehicle.location.global_relative_frame)
    # getting parameters is a little buggy
    #print "Param: %s" % vehicle.parameters['WP_RADIUS']
    myPhoto.message = time.strftime("%Y-%m-%d %H:%M:%S ") + str(vehicle.battery) + " " + str(vehicle.gps_0)
    myPhoto.mode = str(vehicle.mode.name)
    myPhoto.update_n3m0_location()
    myPhoto.message = " "
    myPhoto.mode = " "
    myPhoto.take_photo(640, 480, '/home/pi/Desktop/testcap.jpg')
    myPhoto.post_photo('/home/pi/Desktop/testcap.jpg', 'tphoto.jpg')
    #fname = 'Tn3m0_' + time.strftime("%Y%m%d-%H%M%S") + '.jpg'
    #myPhoto.post_photo('/home/pi/Desktop/testcap.jpg', fname)
    #myPhoto.time_to_quit = True
    myPhoto.get_pic_requests()

# Remove observer - specifying the attribute and previously
# registered callback function
vehicle.remove_message_listener('location.global_frame', location_callback)
vehicle.remove_message_listener('mode', mode_callback)

# Close vehicle object before exiting script
vehicle.close()
print("Completed")
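The `get_distance_meters`/`get_bearing` helpers above can be exercised without a vehicle. This is a minimal standalone sketch (my own illustration, not part of the script) that takes bare lat/lon floats instead of dronekit Location objects; the `1.113195e5` factor is roughly metres per degree on the flat-earth approximation:

```python
import math

def flat_distance_meters(lat1, lon1, lat2, lon2):
    # Same flat-earth approximation as the ArduPilot test code:
    # degrees of difference scaled by ~111319.5 m per degree.
    dlat = lat2 - lat1
    dlong = lon2 - lon1
    return math.sqrt(dlat * dlat + dlong * dlong) * 1.113195e5

def flat_bearing(lat1, lon1, lat2, lon2):
    # Bearing in degrees clockwise from north, same approximation.
    off_x = lon2 - lon1
    off_y = lat2 - lat1
    bearing = 90.00 + math.atan2(-off_y, off_x) * 57.2957795
    if bearing < 0:
        bearing += 360.00
    return bearing

# one degree of latitude is ~111.3 km under this approximation
print(flat_distance_meters(38.0, -122.0, 39.0, -122.0))  # ~111319.5 m
print(flat_bearing(38.0, -122.0, 39.0, -122.0))          # ~0 degrees (due north)
print(flat_bearing(38.0, -122.0, 38.0, -121.0))          # ~90 degrees (due east)
```

Note the longitude axis is not corrected for latitude here (unlike `get_location_meters`), which is part of why the script warns the approximation degrades near the poles.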
# Copyright (c) 2016-2019 <NAME>
# License: MIT License
import ezdxf

dwg = ezdxf.new('R2000')  # underlay requires the DXF R2000 format or newer

pdf_underlay_def = dwg.add_underlay_def(filename='underlay.pdf', name='1')  # name = page to display
dwf_underlay_def = dwg.add_underlay_def(filename='underlay.dwf', name="Underlay_R2013-Model")  # don't know how to get this name
dgn_underlay_def = dwg.add_underlay_def(filename='underlay.dgn', name='default')  # name = 'default' just works

# The (PDF)DEFINITION entity is like a block definition, it just defines the underlay
msp = dwg.modelspace()

# add first underlay
msp.add_underlay(pdf_underlay_def, insert=(0, 0, 0), scale=1.)
# The (PDF)UNDERLAY entity is like the INSERT entity, it creates an underlay reference,
# and there can be multiple references to the same underlay in a drawing.
msp.add_underlay(pdf_underlay_def, insert=(10, 0, 0), scale=.5, rotation=30)
# use dgn format
msp.add_underlay(dgn_underlay_def, insert=(0, 30, 0), scale=1.)
# use dwf format
msp.add_underlay(dwf_underlay_def, insert=(0, 15, 0), scale=1.)

# get existing underlay definitions, Important: UNDERLAYDEFs resides in the objects section
pdf_defs = dwg.objects.query('PDFDEFINITION')  # get all pdf underlay defs in drawing

dwg.saveas("underlay.dxf")
# The (PDF)UNDERLAY entity", "to display dwf_underlay_def = dwg.add_underlay_def(filename='underlay.dwf', name=\"Underlay_R2013-Model\") # don't know how to get this", "(c) 2016-2019 <NAME> # License: MIT License import ezdxf dwg = ezdxf.new('R2000') #", "msp.add_underlay(pdf_underlay_def, insert=(10, 0, 0), scale=.5, rotation=30) # use dgn format msp.add_underlay(dgn_underlay_def, insert=(0, 30,", "scale=.5, rotation=30) # use dgn format msp.add_underlay(dgn_underlay_def, insert=(0, 30, 0), scale=1.) # use", "License import ezdxf dwg = ezdxf.new('R2000') # underlay requires the DXF R2000 format", "name='default') # name = 'default' just works # The (PDF)DEFINITION entity is like", "dwg.modelspace() # add first underlay msp.add_underlay(pdf_underlay_def, insert=(0, 0, 0), scale=1.) # The (PDF)UNDERLAY", "get existing underlay definitions, Important: UNDERLAYDEFs resides in the objects section pdf_defs =", "scale=1.) # get existing underlay definitions, Important: UNDERLAYDEFs resides in the objects section", "creates an underlay reference, # and there can be multiple references to the", "# License: MIT License import ezdxf dwg = ezdxf.new('R2000') # underlay requires the", "the same underlay in a drawing. msp.add_underlay(pdf_underlay_def, insert=(10, 0, 0), scale=.5, rotation=30) #", "= page to display dwf_underlay_def = dwg.add_underlay_def(filename='underlay.dwf', name=\"Underlay_R2013-Model\") # don't know how to", "format or newer pdf_underlay_def = dwg.add_underlay_def(filename='underlay.pdf', name='1') # name = page to display", "dwf format msp.add_underlay(dwf_underlay_def, insert=(0, 15, 0), scale=1.) # get existing underlay definitions, Important:", "dwf_underlay_def = dwg.add_underlay_def(filename='underlay.dwf', name=\"Underlay_R2013-Model\") # don't know how to get this name dgn_underlay_def", "there can be multiple references to the same underlay in a drawing. 
msp.add_underlay(pdf_underlay_def,", "like a block definition, it just defines the underlay msp = dwg.modelspace() #", "get this name dgn_underlay_def = dwg.add_underlay_def(filename='underlay.dgn', name='default') # name = 'default' just works", "ezdxf dwg = ezdxf.new('R2000') # underlay requires the DXF R2000 format or newer", "or newer pdf_underlay_def = dwg.add_underlay_def(filename='underlay.pdf', name='1') # name = page to display dwf_underlay_def", "block definition, it just defines the underlay msp = dwg.modelspace() # add first", "existing underlay definitions, Important: UNDERLAYDEFs resides in the objects section pdf_defs = dwg.objects.query('PDFDEFINITION')", "underlay msp = dwg.modelspace() # add first underlay msp.add_underlay(pdf_underlay_def, insert=(0, 0, 0), scale=1.)", "entity is like a block definition, it just defines the underlay msp =", "msp.add_underlay(pdf_underlay_def, insert=(0, 0, 0), scale=1.) # The (PDF)UNDERLAY entity is like the INSERT", "dwg = ezdxf.new('R2000') # underlay requires the DXF R2000 format or newer pdf_underlay_def", "rotation=30) # use dgn format msp.add_underlay(dgn_underlay_def, insert=(0, 30, 0), scale=1.) 
# use dwf", "0, 0), scale=.5, rotation=30) # use dgn format msp.add_underlay(dgn_underlay_def, insert=(0, 30, 0), scale=1.)", "Copyright (c) 2016-2019 <NAME> # License: MIT License import ezdxf dwg = ezdxf.new('R2000')", "it just defines the underlay msp = dwg.modelspace() # add first underlay msp.add_underlay(pdf_underlay_def,", "License: MIT License import ezdxf dwg = ezdxf.new('R2000') # underlay requires the DXF", "just defines the underlay msp = dwg.modelspace() # add first underlay msp.add_underlay(pdf_underlay_def, insert=(0,", "it creates an underlay reference, # and there can be multiple references to", "ezdxf.new('R2000') # underlay requires the DXF R2000 format or newer pdf_underlay_def = dwg.add_underlay_def(filename='underlay.pdf',", "don't know how to get this name dgn_underlay_def = dwg.add_underlay_def(filename='underlay.dgn', name='default') # name", "a block definition, it just defines the underlay msp = dwg.modelspace() # add", "# use dwf format msp.add_underlay(dwf_underlay_def, insert=(0, 15, 0), scale=1.) # get existing underlay", "= dwg.add_underlay_def(filename='underlay.dwf', name=\"Underlay_R2013-Model\") # don't know how to get this name dgn_underlay_def =", "0), scale=1.) # get existing underlay definitions, Important: UNDERLAYDEFs resides in the objects", "use dgn format msp.add_underlay(dgn_underlay_def, insert=(0, 30, 0), scale=1.) # use dwf format msp.add_underlay(dwf_underlay_def,", "scale=1.) # use dwf format msp.add_underlay(dwf_underlay_def, insert=(0, 15, 0), scale=1.) # get existing", "dgn_underlay_def = dwg.add_underlay_def(filename='underlay.dgn', name='default') # name = 'default' just works # The (PDF)DEFINITION", "(PDF)DEFINITION entity is like a block definition, it just defines the underlay msp", "in a drawing. msp.add_underlay(pdf_underlay_def, insert=(10, 0, 0), scale=.5, rotation=30) # use dgn format", "underlay msp.add_underlay(pdf_underlay_def, insert=(0, 0, 0), scale=1.) 
# The (PDF)UNDERLAY entity is like the", "<NAME> # License: MIT License import ezdxf dwg = ezdxf.new('R2000') # underlay requires", "UNDERLAYDEFs resides in the objects section pdf_defs = dwg.objects.query('PDFDEFINITION') # get all pdf", "in the objects section pdf_defs = dwg.objects.query('PDFDEFINITION') # get all pdf underlay defs", "= dwg.add_underlay_def(filename='underlay.pdf', name='1') # name = page to display dwf_underlay_def = dwg.add_underlay_def(filename='underlay.dwf', name=\"Underlay_R2013-Model\")", "# use dgn format msp.add_underlay(dgn_underlay_def, insert=(0, 30, 0), scale=1.) # use dwf format", "dwg.add_underlay_def(filename='underlay.dgn', name='default') # name = 'default' just works # The (PDF)DEFINITION entity is", "= dwg.add_underlay_def(filename='underlay.dgn', name='default') # name = 'default' just works # The (PDF)DEFINITION entity", "0), scale=1.) # use dwf format msp.add_underlay(dwf_underlay_def, insert=(0, 15, 0), scale=1.) # get", "underlay requires the DXF R2000 format or newer pdf_underlay_def = dwg.add_underlay_def(filename='underlay.pdf', name='1') #", "name='1') # name = page to display dwf_underlay_def = dwg.add_underlay_def(filename='underlay.dwf', name=\"Underlay_R2013-Model\") # don't", "msp = dwg.modelspace() # add first underlay msp.add_underlay(pdf_underlay_def, insert=(0, 0, 0), scale=1.) #", "30, 0), scale=1.) # use dwf format msp.add_underlay(dwf_underlay_def, insert=(0, 15, 0), scale=1.) #" ]
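The comments in the underlay example describe a definition/reference split: one UNDERLAYDEF (block-like, stored once) can be pointed at by many UNDERLAY entities (INSERT-like placements). A minimal stdlib sketch of that relationship, with hypothetical names that are not part of ezdxf's internals:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UnderlayDef:
    # like PDFDEFINITION: stored once, only describes the underlay source
    filename: str
    page: str

@dataclass
class UnderlayRef:
    # like PDFUNDERLAY: a placement in modelspace that points at a definition
    definition: UnderlayDef
    insert: tuple
    scale: float = 1.0
    rotation: float = 0.0

pdf_def = UnderlayDef('underlay.pdf', page='1')
refs = [
    UnderlayRef(pdf_def, insert=(0, 0, 0)),
    UnderlayRef(pdf_def, insert=(10, 0, 0), scale=0.5, rotation=30),
]

# Both placements share the single definition object, mirroring how the
# DXF file stores one UNDERLAYDEF and multiple UNDERLAY references.
assert refs[0].definition is refs[1].definition
```

This is why deleting a definition invalidates every reference to it, while deleting one reference leaves the others intact.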
<filename>setup.py
""" Cursor glamor prettiness for bash """
from setuptools import setup, find_packages
from codecs import open
from os import path

here = path.abspath(path.dirname(__file__))

with open(path.join(here, 'README.md'), encoding='utf-8') as f:
    long_description = f.read()

setup(
    name='pybusy',
    version='0.0.1',
    description='Bash progress decoration',
    long_description=long_description,
    long_description_content_type='text/markdown',
    url='https://github.com/krazybean/pybusy',
    author='krazybean',
    author_email='<EMAIL>',
    classifiers=[
        'Development Status :: 1 - Pre-Alpha',
        'Intended Audience :: Developers',
        'Topic :: Software Development :: Build Tools',
        'License :: OSI Approved :: MIT License',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.4',
        'Programming Language :: Python :: 3.5',
        'Programming Language :: Python :: 3.6',
    ],
    keywords='python3 bash progress',
    packages=find_packages(exclude=['contrib', 'docs', 'tests']),
    install_requires=['cursor', 'ansicolors']
)
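The setup script reads `README.md` through `codecs.open` to populate `long_description`; on Python 3 the builtin `open` handles the `encoding` argument the same way. A standalone sketch of that read pattern against a throwaway file (the file contents here are made up):

```python
import os
import tempfile

# Write a stand-in README so the read below has something to load.
tmpdir = tempfile.mkdtemp()
readme = os.path.join(tmpdir, 'README.md')
with open(readme, 'w', encoding='utf-8') as f:
    f.write('# pybusy\nBash progress decoration\n')

# Same shape as the setup.py read: explicit UTF-8, whole file into a string.
with open(readme, encoding='utf-8') as f:
    long_description = f.read()

assert long_description.startswith('# pybusy')
```

Pinning the encoding matters because README files routinely contain non-ASCII characters, and the platform default encoding is not guaranteed to be UTF-8.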
[ "== Track.track_id, Remix.child_track_id == track_id ) ) .filter( Track.is_current == True, Track.is_unlisted ==", "and_ from src.models import Track, Remix from src.utils import helpers from src.utils.db_session import", "src.queries.query_helpers import get_current_user_id, populate_track_metadata, \\ paginate_query, add_users_to_tracks def get_remix_track_parents(track_id, args): db = get_db_read_replica()", "= ( session.query(Track) .join( Remix, and_( Remix.parent_track_id == Track.track_id, Remix.child_track_id == track_id )", "paginate_query, add_users_to_tracks def get_remix_track_parents(track_id, args): db = get_db_read_replica() with db.scoped_session() as session: base_query", "session: base_query = ( session.query(Track) .join( Remix, and_( Remix.parent_track_id == Track.track_id, Remix.child_track_id ==", ") .order_by( desc(Track.created_at), desc(Track.track_id) ) ) tracks = paginate_query(base_query).all() tracks = helpers.query_result_to_list(tracks) track_ids", "desc(Track.created_at), desc(Track.track_id) ) ) tracks = paginate_query(base_query).all() tracks = helpers.query_result_to_list(tracks) track_ids = list(map(lambda", "from sqlalchemy import desc, and_ from src.models import Track, Remix from src.utils import", "list(map(lambda track: track[\"track_id\"], tracks)) current_user_id = get_current_user_id(required=False) tracks = populate_track_metadata(session, track_ids, tracks, current_user_id)", "helpers.query_result_to_list(tracks) track_ids = list(map(lambda track: track[\"track_id\"], tracks)) current_user_id = get_current_user_id(required=False) tracks = populate_track_metadata(session,", "with db.scoped_session() as session: base_query = ( session.query(Track) .join( Remix, and_( Remix.parent_track_id ==", "get_current_user_id, populate_track_metadata, \\ paginate_query, add_users_to_tracks def get_remix_track_parents(track_id, args): db = get_db_read_replica() with db.scoped_session()", "Remix, and_( Remix.parent_track_id == 
Track.track_id, Remix.child_track_id == track_id ) ) .filter( Track.is_current ==", "= get_current_user_id(required=False) tracks = populate_track_metadata(session, track_ids, tracks, current_user_id) if args.get(\"with_users\", False): add_users_to_tracks(session, tracks)", "= paginate_query(base_query).all() tracks = helpers.query_result_to_list(tracks) track_ids = list(map(lambda track: track[\"track_id\"], tracks)) current_user_id =", ") tracks = paginate_query(base_query).all() tracks = helpers.query_result_to_list(tracks) track_ids = list(map(lambda track: track[\"track_id\"], tracks))", "Remix.child_track_id == track_id ) ) .filter( Track.is_current == True, Track.is_unlisted == False )", "get_current_user_id(required=False) tracks = populate_track_metadata(session, track_ids, tracks, current_user_id) if args.get(\"with_users\", False): add_users_to_tracks(session, tracks) return", ".join( Remix, and_( Remix.parent_track_id == Track.track_id, Remix.child_track_id == track_id ) ) .filter( Track.is_current", "= get_db_read_replica() with db.scoped_session() as session: base_query = ( session.query(Track) .join( Remix, and_(", "src.utils.db_session import get_db_read_replica from src.queries.query_helpers import get_current_user_id, populate_track_metadata, \\ paginate_query, add_users_to_tracks def get_remix_track_parents(track_id,", "import helpers from src.utils.db_session import get_db_read_replica from src.queries.query_helpers import get_current_user_id, populate_track_metadata, \\ paginate_query,", "<reponame>mikedotexe/audius-protocol<filename>discovery-provider/src/queries/get_remix_track_parents.py from sqlalchemy import desc, and_ from src.models import Track, Remix from src.utils", "Track.track_id, Remix.child_track_id == track_id ) ) .filter( Track.is_current == True, Track.is_unlisted == False", "and_( Remix.parent_track_id == Track.track_id, Remix.child_track_id == track_id ) ) .filter( Track.is_current == True,", "desc, and_ from src.models 
import Track, Remix from src.utils import helpers from src.utils.db_session", "( session.query(Track) .join( Remix, and_( Remix.parent_track_id == Track.track_id, Remix.child_track_id == track_id ) )", "desc(Track.track_id) ) ) tracks = paginate_query(base_query).all() tracks = helpers.query_result_to_list(tracks) track_ids = list(map(lambda track:", "session.query(Track) .join( Remix, and_( Remix.parent_track_id == Track.track_id, Remix.child_track_id == track_id ) ) .filter(", "import desc, and_ from src.models import Track, Remix from src.utils import helpers from", "False ) .order_by( desc(Track.created_at), desc(Track.track_id) ) ) tracks = paginate_query(base_query).all() tracks = helpers.query_result_to_list(tracks)", "get_db_read_replica from src.queries.query_helpers import get_current_user_id, populate_track_metadata, \\ paginate_query, add_users_to_tracks def get_remix_track_parents(track_id, args): db", "as session: base_query = ( session.query(Track) .join( Remix, and_( Remix.parent_track_id == Track.track_id, Remix.child_track_id", "src.utils import helpers from src.utils.db_session import get_db_read_replica from src.queries.query_helpers import get_current_user_id, populate_track_metadata, \\", "def get_remix_track_parents(track_id, args): db = get_db_read_replica() with db.scoped_session() as session: base_query = (", "from src.queries.query_helpers import get_current_user_id, populate_track_metadata, \\ paginate_query, add_users_to_tracks def get_remix_track_parents(track_id, args): db =", "track_ids = list(map(lambda track: track[\"track_id\"], tracks)) current_user_id = get_current_user_id(required=False) tracks = populate_track_metadata(session, track_ids,", ") ) tracks = paginate_query(base_query).all() tracks = helpers.query_result_to_list(tracks) track_ids = list(map(lambda track: track[\"track_id\"],", "== True, Track.is_unlisted == False ) .order_by( desc(Track.created_at), desc(Track.track_id) ) ) tracks =", "tracks = 
# Query helper: return the parent tracks of a remix.
from sqlalchemy import desc, and_

from src.models import Track, Remix
from src.utils import helpers
from src.utils.db_session import get_db_read_replica
from src.queries.query_helpers import get_current_user_id, populate_track_metadata, \
    paginate_query, add_users_to_tracks


def get_remix_track_parents(track_id, args):
    db = get_db_read_replica()
    with db.scoped_session() as session:
        base_query = (
            session.query(Track)
            .join(
                Remix,
                and_(
                    Remix.parent_track_id == Track.track_id,
                    Remix.child_track_id == track_id
                )
            )
            .filter(
                Track.is_current == True,
                Track.is_unlisted == False
            )
            .order_by(
                desc(Track.created_at),
                desc(Track.track_id)
            )
        )

        tracks = paginate_query(base_query).all()
        tracks = helpers.query_result_to_list(tracks)
        track_ids = list(map(lambda track: track["track_id"], tracks))
        current_user_id = get_current_user_id(required=False)
        tracks = populate_track_metadata(session, track_ids, tracks, current_user_id)
        if args.get("with_users", False):
            add_users_to_tracks(session, tracks)

    return tracks
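The helper above follows a common read-path pattern: run a paginated query, convert the rows to plain dicts, then enrich each dict with per-user metadata. A minimal, self-contained sketch of that pipeline — all names here (`fake_rows`, `populate_metadata`, the `has_current_user_saved` field) are hypothetical stand-ins, not the real `src.queries` helpers:

```python
# Sketch of the query -> list -> enrich pipeline; stubs are illustrative only.

def query_result_to_list(rows):
    # Convert row objects (here already plain dicts) to independent dicts.
    return [dict(r) for r in rows]

def populate_metadata(track_ids, tracks, current_user_id):
    # Attach a per-user field to each track dict.
    for t in tracks:
        t["has_current_user_saved"] = current_user_id is not None
    return tracks

fake_rows = [{"track_id": 1}, {"track_id": 2}]
tracks = query_result_to_list(fake_rows)
track_ids = [t["track_id"] for t in tracks]
tracks = populate_metadata(track_ids, tracks, current_user_id=None)
print(track_ids)                            # [1, 2]
print(tracks[0]["has_current_user_saved"])  # False
```

The real helpers additionally hit the database and honor pagination arguments; the shape of the data flow is the same.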
# Question reformulator for query expansion (English and Swedish).
import re
import string


class Reformulator(object):
    def __init__(self, question, qclass, lang='en', stopwords=None):
        self.__original_question = question
        punctuation = re.sub(r"[-+/&']", '', string.punctuation)
        self.__punctuation_re = r'[{}]'.format(punctuation)

        # Lowercase the first character, protect abbreviation periods behind a
        # QQQ placeholder, strip punctuation, then restore the periods.
        question = question[0].lower() + question[1:]
        question = re.sub(r'(?<=[A-Z])\.', 'QQQ', question)
        question = re.sub(self.__punctuation_re, '', question)
        self.__question = re.sub(r'QQQ', '.', question)

        self.__stopwords = stopwords
        self.__qclass = qclass.split(':')[1]

        if lang == 'en':
            question_words = ['what', 'which', 'who', 'whom', 'when', 'where', 'why', 'how']
            conj_prep_words = ['of', 'not']
        elif lang == 'sv':
            question_words = ['vilket', 'vilken', 'vem', 'whom', 'när', 'var', 'varför', 'hur']
            conj_prep_words = ['av', 'inte', 'ej']
        else:
            # Raise the exception class, not the NotImplemented singleton.
            raise NotImplementedError('This language is not available')

        self.__exact_stop_words = set(stopwords) - set(conj_prep_words)
        self.__expansion_rules = {
            'dismed': 'disease',
            'instru': 'instrument',
            'lang': 'language',
            'other': '',
            'techmeth': 'technique',
            'termeq': 'term',
            'veh': 'vehicle',
            'dist': 'distance',
            'ord': 'order',
            'perc': 'percentage',
            'speed': 'speed',
            'temp': 'temperature',
            'volsize': 'size'
        }

        if qclass == 'ABBR:abb':
            # stopwords may arrive as a list or a set.
            try:
                self.__stopwords.append('abbreviation')
            except AttributeError:
                self.__stopwords.add('abbreviation')
            # __exact_stop_words is a set, so use add, not append.
            self.__exact_stop_words.add('abbreviation')

    def question(self):
        return self.__question

    def reformulate(self):
        query = [w for w in self.__question.split() if w not in self.__stopwords]
        query.append(self.__expansion_rules.get(self.__qclass, ''))
        return " ".join(query)

    def reformulate_exact(self):
        query = [w for w in self.__question.split() if w not in self.__exact_stop_words]
        query.append(self.__expansion_rules.get(self.__qclass, ''))
        return " ".join(query)
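The core reformulation idea — drop stopwords from the question, then append an expansion term keyed by the question class — can be demonstrated standalone. The stopword set and expansion map below are small illustrative samples, not the class's full configuration:

```python
# Standalone sketch of stopword removal plus class-based query expansion.

expansion_rules = {'dist': 'distance', 'temp': 'temperature'}
stopwords = {'how', 'far', 'is', 'from', 'what'}

def reformulate(question, qclass):
    kept = [w for w in question.lower().split() if w not in stopwords]
    kept.append(expansion_rules.get(qclass, ''))
    return " ".join(kept).strip()

print(reformulate("how far is Stockholm from Oslo", 'dist'))
# stockholm oslo distance
```

Appending `distance` for a `dist`-class question biases a downstream search engine toward passages that actually state a distance, which is the point of the expansion rules.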
# example.py
# coding=utf8
from result2 import Result, Ok, Err


def get_valid_user_by_email(email):
    """Return the user for this email, wrapped in Ok/Err."""
    user = get_user(email)
    if user:
        if user.valid is False:
            return Err("user not valid")
        return Ok(user)
    return Err("user not exists")


result = get_valid_user_by_email('<EMAIL>')
if result == Result.Ok:
    # do something if the user exists
    ...
else:
    # show the create-new-user page, using the Err message as the reason
    ...
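If the `result2` package is not installed, the same Ok/Err pattern can be sketched with a tiny tagged-union class. This stand-in is hypothetical and does not reproduce the real library's API (in particular the `result == Result.Ok` comparison); it only shows the control-flow idea:

```python
# Minimal Ok/Err result type, a hand-rolled stand-in for result2.

class Result:
    def __init__(self, ok, value):
        self.ok = ok          # True for Ok, False for Err
        self.value = value    # payload on success, error message on failure

def Ok(value):
    return Result(True, value)

def Err(message):
    return Result(False, message)

def get_valid_user(user):
    # user is a dict or None, standing in for an ORM lookup result.
    if user is None:
        return Err("user not exists")
    if not user.get("valid", False):
        return Err("user not valid")
    return Ok(user)

res = get_valid_user({"valid": True, "email": "a@b.c"})
print(res.ok)     # True
res = get_valid_user(None)
print(res.value)  # user not exists
```

Returning a tagged result instead of `None` forces the caller to branch on success explicitly and carries the failure reason along for free.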
# repo: Kitingu/restplus
import datetime

from flask_restplus import reqparse


class RideParser:
    parser = reqparse.RequestParser()
    parser.add_argument('start_point', type=str, required=True, location='json',
                        help="Please enter a valid starting point")
    parser.add_argument('destination', type=str, required=True, location='json',
                        help="This field cannot be blank")
    parser.add_argument('seats_available', type=int, required=True, location='json',
                        help="This field cannot be blank")
    parser.add_argument('date',
                        type=lambda x: datetime.datetime.strptime(x, '%d/%m/%Y').strftime('%d/%m/%Y'),
                        required=True, location='json',
                        help="Please enter a valid date in DD/MM/YYYY format")
    parser.add_argument('time',
                        type=lambda x: datetime.datetime.strptime(x, '%H:%M').strftime('%H:%M'),
                        required=True, location='json',
                        help="Use the 24 hour clock system")
"using format\") parser.add_argument('time', type=lambda x: datetime.datetime.strptime(x, '%H:%M').strftime('%H:%M'), required=True, location='json', help=\"Use 24 hour clock", "reqparse class RideParser: parser = reqparse.RequestParser() parser.add_argument('start_point', type=str, required=True, location='json', help=\"Please enter a", "location='json', help=\"This field cannot be blank\") parser.add_argument('date', type=lambda x: datetime.datetime.strptime(x, '%d/%m/%Y').strftime('%d/%m/%Y'), required=True, location='json',", "parser.add_argument('destination', type=str, required=True, location='json', help=\"This field cannot be blank\") parser.add_argument('seats_available', type=int, required=True, location='json',", "RideParser: parser = reqparse.RequestParser() parser.add_argument('start_point', type=str, required=True, location='json', help=\"Please enter a valid starting", "required=True, location='json', help=\"This field cannot be blank\") parser.add_argument('seats_available', type=int, required=True, location='json', help=\"This field", "field cannot be blank\") parser.add_argument('seats_available', type=int, required=True, location='json', help=\"This field cannot be blank\")", "import datetime from flask_restplus import reqparse class RideParser: parser = reqparse.RequestParser() parser.add_argument('start_point', type=str," ]
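The two `type=lambda` validators in the parser above can be exercised outside Flask with nothing but the standard library. This sketch shows the behavior reqparse relies on: the callable parses and re-formats the string, and raises `ValueError` on malformed input, which reqparse turns into a 400 response carrying the argument's `help` message.

```python
import datetime

# The same callables RideParser passes as type=...: parse the string,
# then re-format it, so a well-formed value round-trips unchanged and
# a malformed one raises ValueError.
parse_date = lambda x: datetime.datetime.strptime(x, '%d/%m/%Y').strftime('%d/%m/%Y')
parse_time = lambda x: datetime.datetime.strptime(x, '%H:%M').strftime('%H:%M')

print(parse_date('01/12/2018'))   # round-trips: '01/12/2018'
print(parse_time('09:30'))        # round-trips: '09:30'

try:
    parse_date('2018-12-01')      # wrong format -> ValueError
except ValueError as err:
    print('rejected:', err)
```

Note that the lambdas return the normalized string, not a `datetime` object, so downstream code receives exactly the canonical `dd/mm/YYYY` and `HH:MM` text.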
#!/usr/bin/env python
# -*- coding: utf8 -*-
# all3g/pieces
from scapy.all import *
import logging
import threading
import Queue

logging.basicConfig(level=logging.DEBUG, format='[*] %(name)s - %(message)s')
logger = logging.getLogger('arpscanner')
# disable scapy verbose mode
conf.verb = 0
# disable scapy warning
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)


def arpscanner(iplist, lock):
    """Scan internal mac addresses"""
    while 1:
        try:
            lock.acquire()
            ip = iplist.get_nowait()
            lock.release()
            # create a ether object
            ether = Ether(type=0x0806)
            # create a arp object
            arp = ARP(op=1, hwdst='ff:ff:ff:ff:ff:ff', pdst=ip)
            # send arp request and receive response
            arpres = srp1(ether/arp, timeout=0.05)
            if arpres and arpres.haslayer('ARP'):
                logger.info('%s \t %s' % (ip, arpres['ARP'].hwsrc))
            else:
                logger.debug('%s \t %s' % (ip, None))
        except Queue.Empty:
            lock.release()
            break
    return


if __name__ == "__main__":
    iplist = Queue.Queue()
    lock = threading.Lock()
    for i in range(1, 255, 1):
        ip = "192.168.1.%s" % i
        iplist.put(ip)
    for n in range(30):
        t = threading.Thread(target=arpscanner, args=(iplist, lock))
        t.start()
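The scanner above is Python 2 (`import Queue`) and needs scapy plus root privileges to send ARP frames. The queue-draining worker pattern it uses can be sketched in Python 3 without either; `fake_arp_probe` below is a hypothetical stand-in for the `srp1()` round-trip, not a scapy call.

```python
import queue
import threading

def fake_arp_probe(ip):
    # Hypothetical stand-in for scapy's srp1() ARP exchange: pretend
    # hosts whose last octet is even answer with a MAC address.
    last = int(ip.rsplit('.', 1)[1])
    return 'aa:bb:cc:dd:ee:%02x' % last if last % 2 == 0 else None

results = {}
results_lock = threading.Lock()

def arpscanner(iplist):
    # queue.Queue is thread-safe on its own, so unlike the original
    # script no external lock is needed around get_nowait(); Empty
    # is the clean signal that the work is exhausted.
    while True:
        try:
            ip = iplist.get_nowait()
        except queue.Empty:
            return
        mac = fake_arp_probe(ip)
        with results_lock:
            results[ip] = mac

iplist = queue.Queue()
for i in range(1, 255):
    iplist.put('192.168.1.%d' % i)

threads = [threading.Thread(target=arpscanner, args=(iplist,)) for _ in range(30)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))              # 254: every queued address was probed
print(results['192.168.1.2'])    # aa:bb:cc:dd:ee:02
```

Joining the threads (which the original omits) guarantees all results are collected before the process exits.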
#!/usr/bin/env python2

import argparse
import itertools
import os
import re
import subprocess
import sys
import tempfile

from utils import FindBaseNaCl, GetObjdumpCmd, shellcmd


def TargetAssemblerFlags(target, sandboxed):
  # TODO: find out exactly we need to add here for Mips32.
  flags = {
      'x8632': ['-triple=%s' % ('i686-nacl' if sandboxed else 'i686')],
      'x8664': ['-triple=%s' % ('x86_64-nacl' if sandboxed else 'x86_64')],
      'arm32': ['-triple=%s' % ('armv7a-nacl' if sandboxed else 'armv7a'),
                '-mcpu=cortex-a9', '-mattr=+neon'],
      'mips32': ['-triple=%s' % ('mipsel-nacl' if sandboxed else 'mipsel'),
                 '-mcpu=mips32']
  }
  return flags[target]


def TargetDisassemblerFlags(target):
  flags = {
      'x8632': ['-Mintel'],
      'x8664': ['-Mintel'],
      'arm32': [],
      'mips32': []
  }
  return flags[target]


def main():
  """Run the pnacl-sz compiler on an llvm file.

  Takes an llvm input file, freezes it into a pexe file, converts it to a
  Subzero program, and finally compiles it.
  """
  argparser = argparse.ArgumentParser(
      description='    ' + main.__doc__,
      formatter_class=argparse.ArgumentDefaultsHelpFormatter)
  argparser.add_argument('--input', '-i', required=True,
                         help='LLVM source file to translate')
  argparser.add_argument('--output', '-o', required=False,
                         help='Output file')
  argparser.add_argument('--insts', required=False, action='store_true',
                         help='Stop after translating to ' +
                              'Subzero instructions')
  argparser.add_argument('--no-local-syms', required=False, action='store_true',
                         help="Don't keep local symbols in the pexe file")
  argparser.add_argument('--llvm', required=False, action='store_true',
                         help='Parse pexe into llvm IR first, then ' +
                              'convert to Subzero')
  argparser.add_argument('--llvm-source', required=False, action='store_true',
                         help='Parse source directly into llvm IR ' +
                              '(without generating a pexe), then ' +
                              'convert to Subzero')
  argparser.add_argument('--pnacl-sz', required=False, default='./pnacl-sz',
                         metavar='PNACL-SZ',
                         help="Subzero translator 'pnacl-sz'")
  argparser.add_argument('--pnacl-bin-path', required=False,
                         default=(
                             '{root}/toolchain/linux_x86/pnacl_newlib_raw/bin'
                         ).format(root=FindBaseNaCl()),
                         help='Path to LLVM & Binutils executables ' +
                              '(e.g. for building PEXE files)')
  argparser.add_argument('--assemble', required=False, action='store_true',
                         help='Assemble the output')
  argparser.add_argument('--disassemble', required=False, action='store_true',
                         help='Disassemble the assembled output')
  argparser.add_argument('--dis-flags', required=False, action='append',
                         default=[], help='Add a disassembler flag')
  argparser.add_argument('--filetype', default='iasm', dest='filetype',
                         choices=['obj', 'asm', 'iasm'],
                         help='Output file type.  Default %(default)s')
  argparser.add_argument('--forceasm', required=False, action='store_true',
                         help='Force --filetype=asm')
  argparser.add_argument('--target', default='x8632', dest='target',
                         choices=['x8632', 'x8664', 'arm32', 'mips32'],
                         help='Target architecture.  Default %(default)s')
  argparser.add_argument('--echo-cmd', required=False, action='store_true',
                         help='Trace command that generates ICE instructions')
  argparser.add_argument('--tbc', required=False, action='store_true',
                         help='Input is textual bitcode (not .ll)')
  argparser.add_argument('--expect-fail', required=False, action='store_true',
                         help='Negate success of run by using LLVM not')
  argparser.add_argument('--allow-pnacl-reader-error-recovery',
                         action='store_true',
                         help='Continue parsing after first error')
  argparser.add_argument('--args', '-a', nargs=argparse.REMAINDER, default=[],
                         help='Remaining arguments are passed to pnacl-sz')
  argparser.add_argument('--sandbox', required=False, action='store_true',
                         help='Sandboxes the generated code')
  args = argparser.parse_args()
  pnacl_bin_path = args.pnacl_bin_path
  llfile = args.input

  if args.llvm and args.llvm_source:
    raise RuntimeError("Can't specify both '--llvm' and '--llvm-source'")

  if args.llvm_source and args.no_local_syms:
    raise RuntimeError("Can't specify both '--llvm-source' and "
                       "'--no-local-syms'")

  if args.llvm_source and args.tbc:
    raise RuntimeError("Can't specify both '--tbc' and '--llvm-source'")

  if args.llvm and args.tbc:
    raise RuntimeError("Can't specify both '--tbc' and '--llvm'")

  if args.forceasm:
    if args.expect_fail:
      args.forceasm = False
    elif args.filetype == 'asm':
      pass
    elif args.filetype == 'iasm':
      # TODO: implement forceasm for iasm.
      pass
    elif args.filetype == 'obj':
      args.filetype = 'asm'
      args.assemble = True

  cmd = []
  if args.tbc:
    cmd = [os.path.join(pnacl_bin_path, 'pnacl-bcfuzz'), llfile,
           '-bitcode-as-text', '-output', '-', '|']
  elif not args.llvm_source:
    cmd = [os.path.join(pnacl_bin_path, 'llvm-as'), llfile, '-o', '-', '|',
           os.path.join(pnacl_bin_path, 'pnacl-freeze')]
    if not args.no_local_syms:
      cmd += ['--allow-local-symbol-tables']
    cmd += ['|']
  if args.expect_fail:
    cmd += [os.path.join(pnacl_bin_path, 'not')]
  cmd += [args.pnacl_sz]
  cmd += ['--target', args.target]
  if args.sandbox:
    cmd += ['-sandbox']
  if args.insts:
    # If the tests are based on '-verbose inst' output, force
    # single-threaded translation because dump output may not get
    # reassembled into order.
    cmd += ['-verbose', 'inst,global_init', '-notranslate', '-threads=0']
  elif args.allow_pnacl_reader_error_recovery:
    cmd += ['-allow-pnacl-reader-error-recovery', '-threads=0']
  if not args.llvm_source:
    cmd += ['--bitcode-format=pnacl']
    if not args.no_local_syms:
      cmd += ['--allow-local-symbol-tables']
  if args.llvm or args.llvm_source:
    cmd += ['--build-on-read=0']
  else:
    cmd += ['--build-on-read=1']
  cmd += ['--filetype=' + args.filetype]
  cmd += ['--emit-revision=0']
  script_name = os.path.basename(sys.argv[0])
  for _, arg in enumerate(args.args):
    # Redirecting the output file needs to be done through the script
    # because forceasm may introduce a new temporary file between pnacl-sz
    # and llvm-mc.  Similar issues could occur when setting filetype, target,
    # or sandbox through '--args'.  Filter them and report an error.
    if re.search('^-?-(o|output|filetype|target|sandbox)(=.+)?$', arg):
      preferred_option = '--output' if re.search('^-?-o(=.+)?$', arg) else arg
      print 'Option should be set using:'
      print '    %s ... %s ... --args' % (script_name, preferred_option)
      print 'rather than:'
      print '    %s ... --args %s ...' % (script_name, arg)
      exit(1)
  asm_temp = None
  output_file_name = None
  keep_output_file = False
  if args.output:
    output_file_name = args.output
    keep_output_file = True
  cmd += args.args
  if args.llvm_source:
    cmd += [llfile]
  if args.assemble or args.disassemble:
    if not output_file_name:
      # On windows we may need to close the file first before it can be
      # re-opened by the other tools, so don't do delete-on-close,
      # and instead manually delete.
      asm_temp = tempfile.NamedTemporaryFile(delete=False)
      asm_temp.close()
      output_file_name = asm_temp.name
  if args.assemble and args.filetype != 'obj':
    cmd += (['|', os.path.join(pnacl_bin_path, 'llvm-mc')] +
            TargetAssemblerFlags(args.target, args.sandbox) +
            ['-filetype=obj', '-o', output_file_name])
  elif output_file_name:
    cmd += ['-o', output_file_name]
  if args.disassemble:
    # Show wide instruction encodings, diassemble, show relocs and
    # dissasemble zeros.
    cmd += (['&&', os.path.join(pnacl_bin_path, GetObjdumpCmd(args.target))] +
            args.dis_flags +
            ['-w', '-d', '-r', '-z'] + TargetDisassemblerFlags(args.target) +
            [output_file_name])

  stdout_result = shellcmd(cmd, echo=args.echo_cmd)
  if not args.echo_cmd:
    sys.stdout.write(stdout_result)
  if asm_temp and not keep_output_file:
    os.remove(output_file_name)


if __name__ == '__main__':
  main()
'not')] cmd += [args.pnacl_sz] cmd", "['--build-on-read=1'] cmd += ['--filetype=' + args.filetype] cmd += ['--emit-revision=0'] script_name = os.path.basename(sys.argv[0]) for", "help='Parse source directly into llvm IR ' + '(without generating a pexe), then", "help='Negate success of run by using LLVM not') argparser.add_argument('--allow-pnacl-reader-error-recovery', action='store_true', help='Continue parsing after", "if re.search('^-?-(o|output|filetype|target|sandbox)(=.+)?$', arg): preferred_option = '--output' if re.search('^-?-o(=.+)?$', arg) else arg print 'Option", "['-o', output_file_name] if args.disassemble: # Show wide instruction encodings, diassemble, show relocs and", "flags[target] def TargetDisassemblerFlags(target): flags = { 'x8632': ['-Mintel'], 'x8664': ['-Mintel'], 'arm32': [], 'mips32':[]", "PEXE files)') argparser.add_argument('--assemble', required=False, action='store_true', help='Assemble the output') argparser.add_argument('--disassemble', required=False, action='store_true', help='Disassemble the", "If the tests are based on '-verbose inst' output, force # single-threaded translation", "TODO(sehr) implement forceasm for iasm. pass elif args.filetype == 'obj': args.filetype = 'asm'", "building PEXE files)') argparser.add_argument('--assemble', required=False, action='store_true', help='Assemble the output') argparser.add_argument('--disassemble', required=False, action='store_true', help='Disassemble", "for Mips32. flags = { 'x8632': ['-triple=%s' % ('i686-nacl' if sandboxed else 'i686')],", "it to a Subzero program, and finally compiles it. \"\"\" argparser = argparse.ArgumentParser(", "error. 
if re.search('^-?-(o|output|filetype|target|sandbox)(=.+)?$', arg): preferred_option = '--output' if re.search('^-?-o(=.+)?$', arg) else arg print", "args.filetype] cmd += ['--emit-revision=0'] script_name = os.path.basename(sys.argv[0]) for _, arg in enumerate(args.args): #", "+= (['&&', os.path.join(pnacl_bin_path, GetObjdumpCmd(args.target))] + args.dis_flags + ['-w', '-d', '-r', '-z'] + TargetDisassemblerFlags(args.target)", "'x8664': ['-Mintel'], 'arm32': [], 'mips32':[] } return flags[target] def main(): \"\"\"Run the pnacl-sz", "[] if args.tbc: cmd = [os.path.join(pnacl_bin_path, 'pnacl-bcfuzz'), llfile, '-bitcode-as-text', '-output', '-', '|'] elif", "not output_file_name: # On windows we may need to close the file first", "args.llvm and args.llvm_source: raise RuntimeError(\"Can't specify both '--llvm' and '--llvm-source'\") if args.llvm_source and", "args.forceasm = False elif args.filetype == 'asm': pass elif args.filetype == 'iasm': #", "between pnacl-sz # and llvm-mc. Similar issues could occur when setting filetype, target,", "if args.llvm_source and args.tbc: raise RuntimeError(\"Can't specify both '--tbc' and '--llvm-source'\") if args.llvm", "description=' ' + main.__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter) argparser.add_argument('--input', '-i', required=True, help='LLVM source file to compile')", "+ '(without generating a pexe), then ' + 'convert to Subzero') argparser.add_argument( '--pnacl-sz',", "{ 'x8632': ['-Mintel'], 'x8664': ['-Mintel'], 'arm32': [], 'mips32':[] } return flags[target] def main():", "of run by using LLVM not') argparser.add_argument('--allow-pnacl-reader-error-recovery', action='store_true', help='Continue parsing after first error')", "elif not args.llvm_source: cmd = [os.path.join(pnacl_bin_path, 'llvm-as'), llfile, '-o', '-', '|', os.path.join(pnacl_bin_path, 'pnacl-freeze')]", "%(default)s') argparser.add_argument('--echo-cmd', required=False, action='store_true', help='Trace command that generates 
ICE instructions') argparser.add_argument('--tbc', required=False, action='store_true',", "'x86_64')], 'arm32': ['-triple=%s' % ( 'armv7a-nacl' if sandboxed else 'armv7a'), '-mcpu=cortex-a9', '-mattr=+neon'], 'mips32':", "re.search('^-?-(o|output|filetype|target|sandbox)(=.+)?$', arg): preferred_option = '--output' if re.search('^-?-o(=.+)?$', arg) else arg print 'Option should", "output_file_name = asm_temp.name if args.assemble and args.filetype != 'obj': cmd += (['|', os.path.join(pnacl_bin_path,", "to a Subzero program, and finally compiles it. \"\"\" argparser = argparse.ArgumentParser( description='", "pexe into llvm IR first, then ' + 'convert to Subzero') argparser.add_argument('--llvm-source', required=False,", "and # dissasemble zeros. cmd += (['&&', os.path.join(pnacl_bin_path, GetObjdumpCmd(args.target))] + args.dis_flags + ['-w',", "TargetDisassemblerFlags(target): flags = { 'x8632': ['-Mintel'], 'x8664': ['-Mintel'], 'arm32': [], 'mips32':[] } return", "= args.input if args.llvm and args.llvm_source: raise RuntimeError(\"Can't specify both '--llvm' and '--llvm-source'\")", "(['&&', os.path.join(pnacl_bin_path, GetObjdumpCmd(args.target))] + args.dis_flags + ['-w', '-d', '-r', '-z'] + TargetDisassemblerFlags(args.target) +", "else 'armv7a'), '-mcpu=cortex-a9', '-mattr=+neon'], 'mips32': ['-triple=%s' % ( 'mipsel-nacl' if sandboxed else 'mipsel'),", "formatter_class=argparse.ArgumentDefaultsHelpFormatter) argparser.add_argument('--input', '-i', required=True, help='LLVM source file to compile') argparser.add_argument('--output', '-o', required=False, help='Output", "args.filetype == 'asm': pass elif args.filetype == 'iasm': # TODO(sehr) implement forceasm for", "can be # re-opened by the other tools, so don't do delete-on-close, #", "through --args. Filter and report an error. if re.search('^-?-(o|output|filetype|target|sandbox)(=.+)?$', arg): preferred_option = '--output'", "converts it to a Subzero program, and finally compiles it. 
\"\"\" argparser =", "argparser.add_argument('--assemble', required=False, action='store_true', help='Assemble the output') argparser.add_argument('--disassemble', required=False, action='store_true', help='Disassemble the assembled output')", "+ args.dis_flags + ['-w', '-d', '-r', '-z'] + TargetDisassemblerFlags(args.target) + [output_file_name]) stdout_result =", "target, # or sandbox through --args. Filter and report an error. if re.search('^-?-(o|output|filetype|target|sandbox)(=.+)?$',", "required=False, action='store_true', help=\"Don't keep local symbols in the pexe file\") argparser.add_argument('--llvm', required=False, action='store_true',", "compiles it. \"\"\" argparser = argparse.ArgumentParser( description=' ' + main.__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter) argparser.add_argument('--input', '-i',", "args.allow_pnacl_reader_error_recovery: cmd += ['-allow-pnacl-reader-error-recovery', '-threads=0'] if not args.llvm_source: cmd += ['--bitcode-format=pnacl'] if not", "action='store_true', help='Continue parsing after first error') argparser.add_argument('--args', '-a', nargs=argparse.REMAINDER, default=[], help='Remaining arguments are", "flag') argparser.add_argument('--filetype', default='iasm', dest='filetype', choices=['obj', 'asm', 'iasm'], help='Output file type. Default %(default)s') argparser.add_argument('--forceasm',", "action='store_true', help='Sandboxes the generated code') args = argparser.parse_args() pnacl_bin_path = args.pnacl_bin_path llfile =", "= None keep_output_file = False if args.output: output_file_name = args.output keep_output_file = True", "dest='target', choices=['x8632','x8664','arm32','mips32'], help='Target architecture. 
Default %(default)s') argparser.add_argument('--echo-cmd', required=False, action='store_true', help='Trace command that generates", "+= ['--bitcode-format=pnacl'] if not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] if args.llvm or args.llvm_source: cmd", "it can be # re-opened by the other tools, so don't do delete-on-close,", "'-o', required=False, help='Output file to write') argparser.add_argument('--insts', required=False, action='store_true', help='Stop after translating to", "cmd += [args.pnacl_sz] cmd += ['--target', args.target] if args.sandbox: cmd += ['-sandbox'] if", "+ \"'--no-local-syms'\") if args.llvm_source and args.tbc: raise RuntimeError(\"Can't specify both '--tbc' and '--llvm-source'\")", "Default %(default)s') argparser.add_argument('--forceasm', required=False, action='store_true', help='Force --filetype=asm') argparser.add_argument('--target', default='x8632', dest='target', choices=['x8632','x8664','arm32','mips32'], help='Target architecture.", "re.search('^-?-o(=.+)?$', arg) else arg print 'Option should be set using:' print ' %s", "Show wide instruction encodings, diassemble, show relocs and # dissasemble zeros. cmd +=", "into llvm IR first, then ' + 'convert to Subzero') argparser.add_argument('--llvm-source', required=False, action='store_true',", "!= 'obj': cmd += (['|', os.path.join(pnacl_bin_path, 'llvm-mc')] + TargetAssemblerFlags(args.target, args.sandbox) + ['-filetype=obj', '-o',", "FindBaseNaCl, GetObjdumpCmd, shellcmd def TargetAssemblerFlags(target, sandboxed): # TODO(reed kotler). 
Need to find out", "that generates ICE instructions') argparser.add_argument('--tbc', required=False, action='store_true', help='Input is textual bitcode (not .ll)')", "the assembled output') argparser.add_argument('--dis-flags', required=False, action='append', default=[], help='Add a disassembler flag') argparser.add_argument('--filetype', default='iasm',", "'arm32': ['-triple=%s' % ( 'armv7a-nacl' if sandboxed else 'armv7a'), '-mcpu=cortex-a9', '-mattr=+neon'], 'mips32': ['-triple=%s'", "+= ['-sandbox'] if args.insts: # If the tests are based on '-verbose inst'", "+ args.filetype] cmd += ['--emit-revision=0'] script_name = os.path.basename(sys.argv[0]) for _, arg in enumerate(args.args):", "files)') argparser.add_argument('--assemble', required=False, action='store_true', help='Assemble the output') argparser.add_argument('--disassemble', required=False, action='store_true', help='Disassemble the assembled", "args.no_local_syms: cmd += ['--allow-local-symbol-tables'] if args.llvm or args.llvm_source: cmd += ['--build-on-read=0'] else: cmd", "help='LLVM source file to compile') argparser.add_argument('--output', '-o', required=False, help='Output file to write') argparser.add_argument('--insts',", "temporary file between pnacl-sz # and llvm-mc. 
Similar issues could occur when setting", "output_file_name = None keep_output_file = False if args.output: output_file_name = args.output keep_output_file =", "subprocess import sys import tempfile from utils import FindBaseNaCl, GetObjdumpCmd, shellcmd def TargetAssemblerFlags(target,", "+= (['|', os.path.join(pnacl_bin_path, 'llvm-mc')] + TargetAssemblerFlags(args.target, args.sandbox) + ['-filetype=obj', '-o', output_file_name]) elif output_file_name:", "force # single-threaded translation because dump output does not get # reassembled into", "IR ' + '(without generating a pexe), then ' + 'convert to Subzero')", "['-Mintel'], 'x8664': ['-Mintel'], 'arm32': [], 'mips32':[] } return flags[target] def main(): \"\"\"Run the", "wide instruction encodings, diassemble, show relocs and # dissasemble zeros. cmd += (['&&',", "args.echo_cmd: sys.stdout.write(stdout_result) if asm_temp and not keep_output_file: os.remove(output_file_name) if __name__ == '__main__': main()", "because forceasm may introduce a new temporary file between pnacl-sz # and llvm-mc.", "do delete-on-close, # and instead manually delete. asm_temp = tempfile.NamedTemporaryFile(delete=False) asm_temp.close() output_file_name =", "input file, freezes it into a pexe file, converts it to a Subzero", "and llvm-mc. 
Similar issues could occur when setting filetype, target, # or sandbox", "import itertools import os import re import subprocess import sys import tempfile from", "if args.llvm_source: cmd += [llfile] if args.assemble or args.disassemble: if not output_file_name: #", "required=False, action='store_true', help='Disassemble the assembled output') argparser.add_argument('--dis-flags', required=False, action='append', default=[], help='Add a disassembler", "['--emit-revision=0'] script_name = os.path.basename(sys.argv[0]) for _, arg in enumerate(args.args): # Redirecting the output", "sandboxed else 'armv7a'), '-mcpu=cortex-a9', '-mattr=+neon'], 'mips32': ['-triple=%s' % ( 'mipsel-nacl' if sandboxed else", "#!/usr/bin/env python2 import argparse import itertools import os import re import subprocess import", "args.sandbox: cmd += ['-sandbox'] if args.insts: # If the tests are based on", "( 'armv7a-nacl' if sandboxed else 'armv7a'), '-mcpu=cortex-a9', '-mattr=+neon'], 'mips32': ['-triple=%s' % ( 'mipsel-nacl'", "not args.llvm_source: cmd = [os.path.join(pnacl_bin_path, 'llvm-as'), llfile, '-o', '-', '|', os.path.join(pnacl_bin_path, 'pnacl-freeze')] if", "llfile, '-bitcode-as-text', '-output', '-', '|'] elif not args.llvm_source: cmd = [os.path.join(pnacl_bin_path, 'llvm-as'), llfile,", "+= ['--build-on-read=0'] else: cmd += ['--build-on-read=1'] cmd += ['--filetype=' + args.filetype] cmd +=", "args.assemble and args.filetype != 'obj': cmd += (['|', os.path.join(pnacl_bin_path, 'llvm-mc')] + TargetAssemblerFlags(args.target, args.sandbox)", "may introduce a new temporary file between pnacl-sz # and llvm-mc. 
Similar issues", "if args.forceasm: if args.expect_fail: args.forceasm = False elif args.filetype == 'asm': pass elif", "into a pexe file, converts it to a Subzero program, and finally compiles", "if args.sandbox: cmd += ['-sandbox'] if args.insts: # If the tests are based", "( 'x86_64-nacl' if sandboxed else 'x86_64')], 'arm32': ['-triple=%s' % ( 'armv7a-nacl' if sandboxed", "cmd += ['--build-on-read=0'] else: cmd += ['--build-on-read=1'] cmd += ['--filetype=' + args.filetype] cmd", "'iasm': # TODO(sehr) implement forceasm for iasm. pass elif args.filetype == 'obj': args.filetype", "required=False, default='./pnacl-sz', metavar='PNACL-SZ', help=\"Subzero translator 'pnacl-sz'\") argparser.add_argument('--pnacl-bin-path', required=False, default=( '{root}/toolchain/linux_x86/pnacl_newlib_raw/bin' ).format(root=FindBaseNaCl()), metavar='PNACL_BIN_PATH', help='Path", "args.sandbox) + ['-filetype=obj', '-o', output_file_name]) elif output_file_name: cmd += ['-o', output_file_name] if args.disassemble:", "file to compile') argparser.add_argument('--output', '-o', required=False, help='Output file to write') argparser.add_argument('--insts', required=False, action='store_true',", "to compile') argparser.add_argument('--output', '-o', required=False, help='Output file to write') argparser.add_argument('--insts', required=False, action='store_true', help='Stop", "flags = { 'x8632': ['-Mintel'], 'x8664': ['-Mintel'], 'arm32': [], 'mips32':[] } return flags[target]", "asm_temp.name if args.assemble and args.filetype != 'obj': cmd += (['|', os.path.join(pnacl_bin_path, 'llvm-mc')] +", "pass elif args.filetype == 'iasm': # TODO(sehr) implement forceasm for iasm. 
pass elif", "'-r', '-z'] + TargetDisassemblerFlags(args.target) + [output_file_name]) stdout_result = shellcmd(cmd, echo=args.echo_cmd) if not args.echo_cmd:", "cmd += ['--target', args.target] if args.sandbox: cmd += ['-sandbox'] if args.insts: # If", "= 'asm' args.assemble = True cmd = [] if args.tbc: cmd = [os.path.join(pnacl_bin_path,", "'pnacl-freeze')] if not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] cmd += ['|'] if args.expect_fail: cmd", "# TODO(sehr) implement forceasm for iasm. pass elif args.filetype == 'obj': args.filetype =", "the pnacl-sz compiler on an llvm file. Takes an llvm input file, freezes", "action='store_true', help='Trace command that generates ICE instructions') argparser.add_argument('--tbc', required=False, action='store_true', help='Input is textual", "and '--llvm-source'\") if args.llvm_source and args.no_local_syms: raise RuntimeError(\"Can't specify both '--llvm-source' and \"", "after first error') argparser.add_argument('--args', '-a', nargs=argparse.REMAINDER, default=[], help='Remaining arguments are passed to pnacl-sz')", "first before it can be # re-opened by the other tools, so don't", "'-d', '-r', '-z'] + TargetDisassemblerFlags(args.target) + [output_file_name]) stdout_result = shellcmd(cmd, echo=args.echo_cmd) if not", "+= ['--target', args.target] if args.sandbox: cmd += ['-sandbox'] if args.insts: # If the", "llvm IR ' + '(without generating a pexe), then ' + 'convert to", "...' 
% (script_name, arg) exit(1) asm_temp = None output_file_name = None keep_output_file =", "else: cmd += ['--build-on-read=1'] cmd += ['--filetype=' + args.filetype] cmd += ['--emit-revision=0'] script_name", "or args.llvm_source: cmd += ['--build-on-read=0'] else: cmd += ['--build-on-read=1'] cmd += ['--filetype=' +", "symbols in the pexe file\") argparser.add_argument('--llvm', required=False, action='store_true', help='Parse pexe into llvm IR", "import argparse import itertools import os import re import subprocess import sys import", "+= [llfile] if args.assemble or args.disassemble: if not output_file_name: # On windows we", "a new temporary file between pnacl-sz # and llvm-mc. Similar issues could occur", "['-sandbox'] if args.insts: # If the tests are based on '-verbose inst' output,", "file between pnacl-sz # and llvm-mc. Similar issues could occur when setting filetype,", "'armv7a-nacl' if sandboxed else 'armv7a'), '-mcpu=cortex-a9', '-mattr=+neon'], 'mips32': ['-triple=%s' % ( 'mipsel-nacl' if", "don't do delete-on-close, # and instead manually delete. asm_temp = tempfile.NamedTemporaryFile(delete=False) asm_temp.close() output_file_name", "relocs and # dissasemble zeros. 
cmd += (['&&', os.path.join(pnacl_bin_path, GetObjdumpCmd(args.target))] + args.dis_flags +", "[os.path.join(pnacl_bin_path, 'llvm-as'), llfile, '-o', '-', '|', os.path.join(pnacl_bin_path, 'pnacl-freeze')] if not args.no_local_syms: cmd +=", "inst' output, force # single-threaded translation because dump output does not get #", "cmd += (['|', os.path.join(pnacl_bin_path, 'llvm-mc')] + TargetAssemblerFlags(args.target, args.sandbox) + ['-filetype=obj', '-o', output_file_name]) elif", "and args.no_local_syms: raise RuntimeError(\"Can't specify both '--llvm-source' and \" + \"'--no-local-syms'\") if args.llvm_source", "Need to find out exactly we need to # add here for Mips32.", "+ ['-w', '-d', '-r', '-z'] + TargetDisassemblerFlags(args.target) + [output_file_name]) stdout_result = shellcmd(cmd, echo=args.echo_cmd)", "preferred_option = '--output' if re.search('^-?-o(=.+)?$', arg) else arg print 'Option should be set", "output does not get # reassembled into order. cmd += ['-verbose', 'inst,global_init', '-notranslate',", "if sandboxed else 'mipsel'), '-mcpu=mips32'] } return flags[target] def TargetDisassemblerFlags(target): flags = {", "elif args.filetype == 'iasm': # TODO(sehr) implement forceasm for iasm. 
pass elif args.filetype", "pass elif args.filetype == 'obj': args.filetype = 'asm' args.assemble = True cmd =", "specify both '--llvm' and '--llvm-source'\") if args.llvm_source and args.no_local_syms: raise RuntimeError(\"Can't specify both", "llvm IR first, then ' + 'convert to Subzero') argparser.add_argument('--llvm-source', required=False, action='store_true', help='Parse", "main.__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter) argparser.add_argument('--input', '-i', required=True, help='LLVM source file to compile') argparser.add_argument('--output', '-o', required=False,", "args.llvm_source: cmd += ['--bitcode-format=pnacl'] if not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] if args.llvm or", "dest='filetype', choices=['obj', 'asm', 'iasm'], help='Output file type. Default %(default)s') argparser.add_argument('--forceasm', required=False, action='store_true', help='Force", "default='x8632', dest='target', choices=['x8632','x8664','arm32','mips32'], help='Target architecture. Default %(default)s') argparser.add_argument('--echo-cmd', required=False, action='store_true', help='Trace command that", "'(e.g. for building PEXE files)') argparser.add_argument('--assemble', required=False, action='store_true', help='Assemble the output') argparser.add_argument('--disassemble', required=False,", "'arm32': [], 'mips32':[] } return flags[target] def main(): \"\"\"Run the pnacl-sz compiler on", "args.filetype == 'iasm': # TODO(sehr) implement forceasm for iasm. pass elif args.filetype ==", "manually delete. 
asm_temp = tempfile.NamedTemporaryFile(delete=False) asm_temp.close() output_file_name = asm_temp.name if args.assemble and args.filetype", "if args.llvm and args.tbc: raise RuntimeError(\"Can't specify both '--tbc' and '--llvm'\") if args.forceasm:", "may need to close the file first before it can be # re-opened", "IR first, then ' + 'convert to Subzero') argparser.add_argument('--llvm-source', required=False, action='store_true', help='Parse source", "'inst,global_init', '-notranslate', '-threads=0'] elif args.allow_pnacl_reader_error_recovery: cmd += ['-allow-pnacl-reader-error-recovery', '-threads=0'] if not args.llvm_source: cmd", "'-mcpu=mips32'] } return flags[target] def TargetDisassemblerFlags(target): flags = { 'x8632': ['-Mintel'], 'x8664': ['-Mintel'],", "% ( 'armv7a-nacl' if sandboxed else 'armv7a'), '-mcpu=cortex-a9', '-mattr=+neon'], 'mips32': ['-triple=%s' % (", "'--llvm-source'\") if args.llvm and args.tbc: raise RuntimeError(\"Can't specify both '--tbc' and '--llvm'\") if", "metavar='PNACL-SZ', help=\"Subzero translator 'pnacl-sz'\") argparser.add_argument('--pnacl-bin-path', required=False, default=( '{root}/toolchain/linux_x86/pnacl_newlib_raw/bin' ).format(root=FindBaseNaCl()), metavar='PNACL_BIN_PATH', help='Path to LLVM", "argparser.add_argument('--input', '-i', required=True, help='LLVM source file to compile') argparser.add_argument('--output', '-o', required=False, help='Output file", "argparser.add_argument('--no-local-syms', required=False, action='store_true', help=\"Don't keep local symbols in the pexe file\") argparser.add_argument('--llvm', required=False,", "help='Output file type. 
Default %(default)s') argparser.add_argument('--forceasm', required=False, action='store_true', help='Force --filetype=asm') argparser.add_argument('--target', default='x8632', dest='target',", "= shellcmd(cmd, echo=args.echo_cmd) if not args.echo_cmd: sys.stdout.write(stdout_result) if asm_temp and not keep_output_file: os.remove(output_file_name)", "to be done through the script # because forceasm may introduce a new", "assembled output') argparser.add_argument('--dis-flags', required=False, action='append', default=[], help='Add a disassembler flag') argparser.add_argument('--filetype', default='iasm', dest='filetype',", "run by using LLVM not') argparser.add_argument('--allow-pnacl-reader-error-recovery', action='store_true', help='Continue parsing after first error') argparser.add_argument('--args',", "pexe), then ' + 'convert to Subzero') argparser.add_argument( '--pnacl-sz', required=False, default='./pnacl-sz', metavar='PNACL-SZ', help=\"Subzero", "'-o', '-', '|', os.path.join(pnacl_bin_path, 'pnacl-freeze')] if not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] cmd +=", "'mips32': ['-triple=%s' % ( 'mipsel-nacl' if sandboxed else 'mipsel'), '-mcpu=mips32'] } return flags[target]", "help='Assemble the output') argparser.add_argument('--disassemble', required=False, action='store_true', help='Disassemble the assembled output') argparser.add_argument('--dis-flags', required=False, action='append',", "if args.tbc: cmd = [os.path.join(pnacl_bin_path, 'pnacl-bcfuzz'), llfile, '-bitcode-as-text', '-output', '-', '|'] elif not", "zeros. cmd += (['&&', os.path.join(pnacl_bin_path, GetObjdumpCmd(args.target))] + args.dis_flags + ['-w', '-d', '-r', '-z']", "dissasemble zeros. 
cmd += (['&&', os.path.join(pnacl_bin_path, GetObjdumpCmd(args.target))] + args.dis_flags + ['-w', '-d', '-r',", "pexe file\") argparser.add_argument('--llvm', required=False, action='store_true', help='Parse pexe into llvm IR first, then '", "'-a', nargs=argparse.REMAINDER, default=[], help='Remaining arguments are passed to pnacl-sz') argparser.add_argument('--sandbox', required=False, action='store_true', help='Sandboxes", "RuntimeError(\"Can't specify both '--llvm-source' and \" + \"'--no-local-syms'\") if args.llvm_source and args.tbc: raise", "choices=['obj', 'asm', 'iasm'], help='Output file type. Default %(default)s') argparser.add_argument('--forceasm', required=False, action='store_true', help='Force --filetype=asm')", "'|', os.path.join(pnacl_bin_path, 'pnacl-freeze')] if not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] cmd += ['|'] if", "['--allow-local-symbol-tables'] cmd += ['|'] if args.expect_fail: cmd += [os.path.join(pnacl_bin_path, 'not')] cmd += [args.pnacl_sz]", "Redirecting the output file needs to be done through the script # because", "to close the file first before it can be # re-opened by the", "%s ...' % (script_name, arg) exit(1) asm_temp = None output_file_name = None keep_output_file", "help=\"Subzero translator 'pnacl-sz'\") argparser.add_argument('--pnacl-bin-path', required=False, default=( '{root}/toolchain/linux_x86/pnacl_newlib_raw/bin' ).format(root=FindBaseNaCl()), metavar='PNACL_BIN_PATH', help='Path to LLVM &", "= { 'x8632': ['-triple=%s' % ('i686-nacl' if sandboxed else 'i686')], 'x8664': ['-triple=%s' %", "in enumerate(args.args): # Redirecting the output file needs to be done through the", "tempfile from utils import FindBaseNaCl, GetObjdumpCmd, shellcmd def TargetAssemblerFlags(target, sandboxed): # TODO(reed kotler).", "pexe file, converts it to a Subzero program, and finally compiles it. 
\"\"\"", "and args.tbc: raise RuntimeError(\"Can't specify both '--tbc' and '--llvm-source'\") if args.llvm and args.tbc:", "On windows we may need to close the file first before it can", "choices=['x8632','x8664','arm32','mips32'], help='Target architecture. Default %(default)s') argparser.add_argument('--echo-cmd', required=False, action='store_true', help='Trace command that generates ICE", "argparser.add_argument('--allow-pnacl-reader-error-recovery', action='store_true', help='Continue parsing after first error') argparser.add_argument('--args', '-a', nargs=argparse.REMAINDER, default=[], help='Remaining arguments", "and args.tbc: raise RuntimeError(\"Can't specify both '--tbc' and '--llvm'\") if args.forceasm: if args.expect_fail:", "= '--output' if re.search('^-?-o(=.+)?$', arg) else arg print 'Option should be set using:'", "encodings, diassemble, show relocs and # dissasemble zeros. cmd += (['&&', os.path.join(pnacl_bin_path, GetObjdumpCmd(args.target))]", "code') args = argparser.parse_args() pnacl_bin_path = args.pnacl_bin_path llfile = args.input if args.llvm and", "+= ['--allow-local-symbol-tables'] cmd += ['|'] if args.expect_fail: cmd += [os.path.join(pnacl_bin_path, 'not')] cmd +=", "get # reassembled into order. 
cmd += ['-verbose', 'inst,global_init', '-notranslate', '-threads=0'] elif args.allow_pnacl_reader_error_recovery:", "raise RuntimeError(\"Can't specify both '--tbc' and '--llvm'\") if args.forceasm: if args.expect_fail: args.forceasm =", "shellcmd(cmd, echo=args.echo_cmd) if not args.echo_cmd: sys.stdout.write(stdout_result) if asm_temp and not keep_output_file: os.remove(output_file_name) if", "['--filetype=' + args.filetype] cmd += ['--emit-revision=0'] script_name = os.path.basename(sys.argv[0]) for _, arg in", "os.path.join(pnacl_bin_path, GetObjdumpCmd(args.target))] + args.dis_flags + ['-w', '-d', '-r', '-z'] + TargetDisassemblerFlags(args.target) + [output_file_name])", "['-verbose', 'inst,global_init', '-notranslate', '-threads=0'] elif args.allow_pnacl_reader_error_recovery: cmd += ['-allow-pnacl-reader-error-recovery', '-threads=0'] if not args.llvm_source:", "pnacl_bin_path = args.pnacl_bin_path llfile = args.input if args.llvm and args.llvm_source: raise RuntimeError(\"Can't specify", "argparser.add_argument('--output', '-o', required=False, help='Output file to write') argparser.add_argument('--insts', required=False, action='store_true', help='Stop after translating", "occur when setting filetype, target, # or sandbox through --args. Filter and report", "(script_name, preferred_option) print 'rather than:' print ' %s ... --args %s ...' %", "Takes an llvm input file, freezes it into a pexe file, converts it", "True cmd += args.args if args.llvm_source: cmd += [llfile] if args.assemble or args.disassemble:", "if args.assemble or args.disassemble: if not output_file_name: # On windows we may need", "# On windows we may need to close the file first before it", "pnacl-sz compiler on an llvm file. Takes an llvm input file, freezes it", "'-', '|'] elif not args.llvm_source: cmd = [os.path.join(pnacl_bin_path, 'llvm-as'), llfile, '-o', '-', '|',", "%s ... --args %s ...' % (script_name, arg) exit(1) asm_temp = None output_file_name", "forceasm for iasm. 
pass elif args.filetype == 'obj': args.filetype = 'asm' args.assemble =", "['--bitcode-format=pnacl'] if not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] if args.llvm or args.llvm_source: cmd +=", "arg) else arg print 'Option should be set using:' print ' %s ...", "set using:' print ' %s ... %s ... --args' % (script_name, preferred_option) print", "file to write') argparser.add_argument('--insts', required=False, action='store_true', help='Stop after translating to ' + 'Subzero", "flags = { 'x8632': ['-triple=%s' % ('i686-nacl' if sandboxed else 'i686')], 'x8664': ['-triple=%s'", "diassemble, show relocs and # dissasemble zeros. cmd += (['&&', os.path.join(pnacl_bin_path, GetObjdumpCmd(args.target))] +", "'-output', '-', '|'] elif not args.llvm_source: cmd = [os.path.join(pnacl_bin_path, 'llvm-as'), llfile, '-o', '-',", "kotler). Need to find out exactly we need to # add here for", "output') argparser.add_argument('--dis-flags', required=False, action='append', default=[], help='Add a disassembler flag') argparser.add_argument('--filetype', default='iasm', dest='filetype', choices=['obj',", "+= ['--allow-local-symbol-tables'] if args.llvm or args.llvm_source: cmd += ['--build-on-read=0'] else: cmd += ['--build-on-read=1']", "cmd += ['--build-on-read=1'] cmd += ['--filetype=' + args.filetype] cmd += ['--emit-revision=0'] script_name =", "if args.output: output_file_name = args.output keep_output_file = True cmd += args.args if args.llvm_source:", "TargetAssemblerFlags(target, sandboxed): # TODO(reed kotler). Need to find out exactly we need to", "through the script # because forceasm may introduce a new temporary file between", "cmd += (['&&', os.path.join(pnacl_bin_path, GetObjdumpCmd(args.target))] + args.dis_flags + ['-w', '-d', '-r', '-z'] +", "help='Add a disassembler flag') argparser.add_argument('--filetype', default='iasm', dest='filetype', choices=['obj', 'asm', 'iasm'], help='Output file type.", "sandbox through --args. 
Filter and report an error. if re.search('^-?-(o|output|filetype|target|sandbox)(=.+)?$', arg): preferred_option =", "GetObjdumpCmd, shellcmd def TargetAssemblerFlags(target, sandboxed): # TODO(reed kotler). Need to find out exactly", "None output_file_name = None keep_output_file = False if args.output: output_file_name = args.output keep_output_file", "'--tbc' and '--llvm-source'\") if args.llvm and args.tbc: raise RuntimeError(\"Can't specify both '--tbc' and", "... --args %s ...' % (script_name, arg) exit(1) asm_temp = None output_file_name =", "argparser.add_argument('--insts', required=False, action='store_true', help='Stop after translating to ' + 'Subzero instructions') argparser.add_argument('--no-local-syms', required=False,", "file, converts it to a Subzero program, and finally compiles it. \"\"\" argparser", "args.filetype = 'asm' args.assemble = True cmd = [] if args.tbc: cmd =", "--args %s ...' % (script_name, arg) exit(1) asm_temp = None output_file_name = None", "'-bitcode-as-text', '-output', '-', '|'] elif not args.llvm_source: cmd = [os.path.join(pnacl_bin_path, 'llvm-as'), llfile, '-o',", "['--allow-local-symbol-tables'] if args.llvm or args.llvm_source: cmd += ['--build-on-read=0'] else: cmd += ['--build-on-read=1'] cmd", "output_file_name = args.output keep_output_file = True cmd += args.args if args.llvm_source: cmd +=", "both '--llvm-source' and \" + \"'--no-local-syms'\") if args.llvm_source and args.tbc: raise RuntimeError(\"Can't specify", "import subprocess import sys import tempfile from utils import FindBaseNaCl, GetObjdumpCmd, shellcmd def", "passed to pnacl-sz') argparser.add_argument('--sandbox', required=False, action='store_true', help='Sandboxes the generated code') args = argparser.parse_args()", "help='Continue parsing after first error') argparser.add_argument('--args', '-a', nargs=argparse.REMAINDER, default=[], help='Remaining arguments are passed", "args.filetype != 'obj': cmd += (['|', os.path.join(pnacl_bin_path, 
'llvm-mc')] + TargetAssemblerFlags(args.target, args.sandbox) + ['-filetype=obj',", "out exactly we need to # add here for Mips32. flags = {", "type. Default %(default)s') argparser.add_argument('--forceasm', required=False, action='store_true', help='Force --filetype=asm') argparser.add_argument('--target', default='x8632', dest='target', choices=['x8632','x8664','arm32','mips32'], help='Target", "llvm file. Takes an llvm input file, freezes it into a pexe file,", "it. \"\"\" argparser = argparse.ArgumentParser( description=' ' + main.__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter) argparser.add_argument('--input', '-i', required=True,", "argparser.add_argument('--disassemble', required=False, action='store_true', help='Disassemble the assembled output') argparser.add_argument('--dis-flags', required=False, action='append', default=[], help='Add a", "help='Force --filetype=asm') argparser.add_argument('--target', default='x8632', dest='target', choices=['x8632','x8664','arm32','mips32'], help='Target architecture. Default %(default)s') argparser.add_argument('--echo-cmd', required=False, action='store_true',", "both '--llvm' and '--llvm-source'\") if args.llvm_source and args.no_local_syms: raise RuntimeError(\"Can't specify both '--llvm-source'", "args.llvm_source: cmd = [os.path.join(pnacl_bin_path, 'llvm-as'), llfile, '-o', '-', '|', os.path.join(pnacl_bin_path, 'pnacl-freeze')] if not", "argparser.add_argument('--forceasm', required=False, action='store_true', help='Force --filetype=asm') argparser.add_argument('--target', default='x8632', dest='target', choices=['x8632','x8664','arm32','mips32'], help='Target architecture. Default %(default)s')", "import tempfile from utils import FindBaseNaCl, GetObjdumpCmd, shellcmd def TargetAssemblerFlags(target, sandboxed): # TODO(reed", "LLVM & Binutils executables ' + '(e.g. 
for building PEXE files)') argparser.add_argument('--assemble', required=False,", "'asm' args.assemble = True cmd = [] if args.tbc: cmd = [os.path.join(pnacl_bin_path, 'pnacl-bcfuzz'),", "on '-verbose inst' output, force # single-threaded translation because dump output does not", "args.tbc: raise RuntimeError(\"Can't specify both '--tbc' and '--llvm'\") if args.forceasm: if args.expect_fail: args.forceasm", "dump output does not get # reassembled into order. cmd += ['-verbose', 'inst,global_init',", "if sandboxed else 'x86_64')], 'arm32': ['-triple=%s' % ( 'armv7a-nacl' if sandboxed else 'armv7a'),", "program, and finally compiles it. \"\"\" argparser = argparse.ArgumentParser( description=' ' + main.__doc__,", "argparser.parse_args() pnacl_bin_path = args.pnacl_bin_path llfile = args.input if args.llvm and args.llvm_source: raise RuntimeError(\"Can't", "translator 'pnacl-sz'\") argparser.add_argument('--pnacl-bin-path', required=False, default=( '{root}/toolchain/linux_x86/pnacl_newlib_raw/bin' ).format(root=FindBaseNaCl()), metavar='PNACL_BIN_PATH', help='Path to LLVM & Binutils", "== 'asm': pass elif args.filetype == 'iasm': # TODO(sehr) implement forceasm for iasm.", "'--tbc' and '--llvm'\") if args.forceasm: if args.expect_fail: args.forceasm = False elif args.filetype ==", "cmd += ['--bitcode-format=pnacl'] if not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] if args.llvm or args.llvm_source:", "sys import tempfile from utils import FindBaseNaCl, GetObjdumpCmd, shellcmd def TargetAssemblerFlags(target, sandboxed): #", "raise RuntimeError(\"Can't specify both '--llvm-source' and \" + \"'--no-local-syms'\") if args.llvm_source and args.tbc:", "specify both '--tbc' and '--llvm'\") if args.forceasm: if args.expect_fail: args.forceasm = False elif", "to ' + 'Subzero instructions') argparser.add_argument('--no-local-syms', required=False, action='store_true', help=\"Don't keep local symbols in", "or args.disassemble: if not output_file_name: # On 
windows we may need to close", "sandboxed else 'x86_64')], 'arm32': ['-triple=%s' % ( 'armv7a-nacl' if sandboxed else 'armv7a'), '-mcpu=cortex-a9',", "'rather than:' print ' %s ... --args %s ...' % (script_name, arg) exit(1)", "(['|', os.path.join(pnacl_bin_path, 'llvm-mc')] + TargetAssemblerFlags(args.target, args.sandbox) + ['-filetype=obj', '-o', output_file_name]) elif output_file_name: cmd", "and '--llvm'\") if args.forceasm: if args.expect_fail: args.forceasm = False elif args.filetype == 'asm':", "the output file needs to be done through the script # because forceasm", "'-threads=0'] if not args.llvm_source: cmd += ['--bitcode-format=pnacl'] if not args.no_local_syms: cmd += ['--allow-local-symbol-tables']", "to LLVM & Binutils executables ' + '(e.g. for building PEXE files)') argparser.add_argument('--assemble',", "pnacl-sz # and llvm-mc. Similar issues could occur when setting filetype, target, #", "be done through the script # because forceasm may introduce a new temporary", "'--output' if re.search('^-?-o(=.+)?$', arg) else arg print 'Option should be set using:' print", "preferred_option) print 'rather than:' print ' %s ... --args %s ...' % (script_name,", "to # add here for Mips32. flags = { 'x8632': ['-triple=%s' % ('i686-nacl'", "cmd += ['-sandbox'] if args.insts: # If the tests are based on '-verbose", "'{root}/toolchain/linux_x86/pnacl_newlib_raw/bin' ).format(root=FindBaseNaCl()), metavar='PNACL_BIN_PATH', help='Path to LLVM & Binutils executables ' + '(e.g. for", "arg print 'Option should be set using:' print ' %s ... %s ...", "argparse import itertools import os import re import subprocess import sys import tempfile", "instructions') argparser.add_argument('--tbc', required=False, action='store_true', help='Input is textual bitcode (not .ll)') argparser.add_argument('--expect-fail', required=False, action='store_true',", "sandboxed): # TODO(reed kotler). 
Need to find out exactly we need to #", "'x8632': ['-triple=%s' % ('i686-nacl' if sandboxed else 'i686')], 'x8664': ['-triple=%s' % ( 'x86_64-nacl'", "' + main.__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter) argparser.add_argument('--input', '-i', required=True, help='LLVM source file to compile') argparser.add_argument('--output',", "'--llvm-source' and \" + \"'--no-local-syms'\") if args.llvm_source and args.tbc: raise RuntimeError(\"Can't specify both", "'-verbose inst' output, force # single-threaded translation because dump output does not get", "arguments are passed to pnacl-sz') argparser.add_argument('--sandbox', required=False, action='store_true', help='Sandboxes the generated code') args", "to write') argparser.add_argument('--insts', required=False, action='store_true', help='Stop after translating to ' + 'Subzero instructions')", "Default %(default)s') argparser.add_argument('--echo-cmd', required=False, action='store_true', help='Trace command that generates ICE instructions') argparser.add_argument('--tbc', required=False,", "import FindBaseNaCl, GetObjdumpCmd, shellcmd def TargetAssemblerFlags(target, sandboxed): # TODO(reed kotler). 
Need to find", "'--llvm'\") if args.forceasm: if args.expect_fail: args.forceasm = False elif args.filetype == 'asm': pass", "'--pnacl-sz', required=False, default='./pnacl-sz', metavar='PNACL-SZ', help=\"Subzero translator 'pnacl-sz'\") argparser.add_argument('--pnacl-bin-path', required=False, default=( '{root}/toolchain/linux_x86/pnacl_newlib_raw/bin' ).format(root=FindBaseNaCl()), metavar='PNACL_BIN_PATH',", "= False elif args.filetype == 'asm': pass elif args.filetype == 'iasm': # TODO(sehr)", "= [os.path.join(pnacl_bin_path, 'pnacl-bcfuzz'), llfile, '-bitcode-as-text', '-output', '-', '|'] elif not args.llvm_source: cmd =", "('i686-nacl' if sandboxed else 'i686')], 'x8664': ['-triple=%s' % ( 'x86_64-nacl' if sandboxed else", "based on '-verbose inst' output, force # single-threaded translation because dump output does", "cmd = [] if args.tbc: cmd = [os.path.join(pnacl_bin_path, 'pnacl-bcfuzz'), llfile, '-bitcode-as-text', '-output', '-',", "print ' %s ... --args %s ...' % (script_name, arg) exit(1) asm_temp =", "cmd += ['-allow-pnacl-reader-error-recovery', '-threads=0'] if not args.llvm_source: cmd += ['--bitcode-format=pnacl'] if not args.no_local_syms:", "pnacl-sz') argparser.add_argument('--sandbox', required=False, action='store_true', help='Sandboxes the generated code') args = argparser.parse_args() pnacl_bin_path =", "['-allow-pnacl-reader-error-recovery', '-threads=0'] if not args.llvm_source: cmd += ['--bitcode-format=pnacl'] if not args.no_local_syms: cmd +=", "translation because dump output does not get # reassembled into order. cmd +=", "required=False, action='store_true', help='Force --filetype=asm') argparser.add_argument('--target', default='x8632', dest='target', choices=['x8632','x8664','arm32','mips32'], help='Target architecture. 
Default %(default)s') argparser.add_argument('--echo-cmd',", "generated code') args = argparser.parse_args() pnacl_bin_path = args.pnacl_bin_path llfile = args.input if args.llvm", "% ( 'mipsel-nacl' if sandboxed else 'mipsel'), '-mcpu=mips32'] } return flags[target] def TargetDisassemblerFlags(target):", "elif args.filetype == 'asm': pass elif args.filetype == 'iasm': # TODO(sehr) implement forceasm", "if args.insts: # If the tests are based on '-verbose inst' output, force", "+= ['--emit-revision=0'] script_name = os.path.basename(sys.argv[0]) for _, arg in enumerate(args.args): # Redirecting the", "cmd += ['-o', output_file_name] if args.disassemble: # Show wide instruction encodings, diassemble, show", "RuntimeError(\"Can't specify both '--tbc' and '--llvm'\") if args.forceasm: if args.expect_fail: args.forceasm = False", "file first before it can be # re-opened by the other tools, so", "# Show wide instruction encodings, diassemble, show relocs and # dissasemble zeros. cmd", "'llvm-as'), llfile, '-o', '-', '|', os.path.join(pnacl_bin_path, 'pnacl-freeze')] if not args.no_local_syms: cmd += ['--allow-local-symbol-tables']", "= args.pnacl_bin_path llfile = args.input if args.llvm and args.llvm_source: raise RuntimeError(\"Can't specify both", "args.output keep_output_file = True cmd += args.args if args.llvm_source: cmd += [llfile] if", "keep_output_file = True cmd += args.args if args.llvm_source: cmd += [llfile] if args.assemble", "(not .ll)') argparser.add_argument('--expect-fail', required=False, action='store_true', help='Negate success of run by using LLVM not')", "'-threads=0'] elif args.allow_pnacl_reader_error_recovery: cmd += ['-allow-pnacl-reader-error-recovery', '-threads=0'] if not args.llvm_source: cmd += ['--bitcode-format=pnacl']", "than:' print ' %s ... --args %s ...' 
% (script_name, arg) exit(1) asm_temp", "= True cmd = [] if args.tbc: cmd = [os.path.join(pnacl_bin_path, 'pnacl-bcfuzz'), llfile, '-bitcode-as-text',", "Similar issues could occur when setting filetype, target, # or sandbox through --args.", "and finally compiles it. \"\"\" argparser = argparse.ArgumentParser( description=' ' + main.__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter)", "True cmd = [] if args.tbc: cmd = [os.path.join(pnacl_bin_path, 'pnacl-bcfuzz'), llfile, '-bitcode-as-text', '-output',", "= True cmd += args.args if args.llvm_source: cmd += [llfile] if args.assemble or", "[output_file_name]) stdout_result = shellcmd(cmd, echo=args.echo_cmd) if not args.echo_cmd: sys.stdout.write(stdout_result) if asm_temp and not", "if re.search('^-?-o(=.+)?$', arg) else arg print 'Option should be set using:' print '", "'-mattr=+neon'], 'mips32': ['-triple=%s' % ( 'mipsel-nacl' if sandboxed else 'mipsel'), '-mcpu=mips32'] } return", "the script # because forceasm may introduce a new temporary file between pnacl-sz", "other tools, so don't do delete-on-close, # and instead manually delete. asm_temp =", "because dump output does not get # reassembled into order. cmd += ['-verbose',", "= { 'x8632': ['-Mintel'], 'x8664': ['-Mintel'], 'arm32': [], 'mips32':[] } return flags[target] def", "'obj': cmd += (['|', os.path.join(pnacl_bin_path, 'llvm-mc')] + TargetAssemblerFlags(args.target, args.sandbox) + ['-filetype=obj', '-o', output_file_name])", "action='store_true', help='Stop after translating to ' + 'Subzero instructions') argparser.add_argument('--no-local-syms', required=False, action='store_true', help=\"Don't", "implement forceasm for iasm. pass elif args.filetype == 'obj': args.filetype = 'asm' args.assemble", "+ 'convert to Subzero') argparser.add_argument('--llvm-source', required=False, action='store_true', help='Parse source directly into llvm IR", "into llvm IR ' + '(without generating a pexe), then ' + 'convert", "add here for Mips32. 
flags = { 'x8632': ['-triple=%s' % ('i686-nacl' if sandboxed", "the tests are based on '-verbose inst' output, force # single-threaded translation because", "cmd += [os.path.join(pnacl_bin_path, 'not')] cmd += [args.pnacl_sz] cmd += ['--target', args.target] if args.sandbox:", "{ 'x8632': ['-triple=%s' % ('i686-nacl' if sandboxed else 'i686')], 'x8664': ['-triple=%s' % (", "output_file_name] if args.disassemble: # Show wide instruction encodings, diassemble, show relocs and #", "'llvm-mc')] + TargetAssemblerFlags(args.target, args.sandbox) + ['-filetype=obj', '-o', output_file_name]) elif output_file_name: cmd += ['-o',", "forceasm may introduce a new temporary file between pnacl-sz # and llvm-mc. Similar", "'-i', required=True, help='LLVM source file to compile') argparser.add_argument('--output', '-o', required=False, help='Output file to", "for iasm. pass elif args.filetype == 'obj': args.filetype = 'asm' args.assemble = True", "exactly we need to # add here for Mips32. flags = { 'x8632':", "'-notranslate', '-threads=0'] elif args.allow_pnacl_reader_error_recovery: cmd += ['-allow-pnacl-reader-error-recovery', '-threads=0'] if not args.llvm_source: cmd +=", ").format(root=FindBaseNaCl()), metavar='PNACL_BIN_PATH', help='Path to LLVM & Binutils executables ' + '(e.g. for building", "# add here for Mips32. 
flags = { 'x8632': ['-triple=%s' % ('i686-nacl' if", "re-opened by the other tools, so don't do delete-on-close, # and instead manually", "we may need to close the file first before it can be #", "instructions') argparser.add_argument('--no-local-syms', required=False, action='store_true', help=\"Don't keep local symbols in the pexe file\") argparser.add_argument('--llvm',", "need to close the file first before it can be # re-opened by", "required=True, help='LLVM source file to compile') argparser.add_argument('--output', '-o', required=False, help='Output file to write')", "'|'] elif not args.llvm_source: cmd = [os.path.join(pnacl_bin_path, 'llvm-as'), llfile, '-o', '-', '|', os.path.join(pnacl_bin_path,", "show relocs and # dissasemble zeros. cmd += (['&&', os.path.join(pnacl_bin_path, GetObjdumpCmd(args.target))] + args.dis_flags", "help='Parse pexe into llvm IR first, then ' + 'convert to Subzero') argparser.add_argument('--llvm-source',", "bitcode (not .ll)') argparser.add_argument('--expect-fail', required=False, action='store_true', help='Negate success of run by using LLVM", "to pnacl-sz') argparser.add_argument('--sandbox', required=False, action='store_true', help='Sandboxes the generated code') args = argparser.parse_args() pnacl_bin_path", "None keep_output_file = False if args.output: output_file_name = args.output keep_output_file = True cmd", "# Redirecting the output file needs to be done through the script #", "enumerate(args.args): # Redirecting the output file needs to be done through the script", "a disassembler flag') argparser.add_argument('--filetype', default='iasm', dest='filetype', choices=['obj', 'asm', 'iasm'], help='Output file type. 
Default", "+= ['-verbose', 'inst,global_init', '-notranslate', '-threads=0'] elif args.allow_pnacl_reader_error_recovery: cmd += ['-allow-pnacl-reader-error-recovery', '-threads=0'] if not", "%(default)s') argparser.add_argument('--forceasm', required=False, action='store_true', help='Force --filetype=asm') argparser.add_argument('--target', default='x8632', dest='target', choices=['x8632','x8664','arm32','mips32'], help='Target architecture. Default", "+= ['|'] if args.expect_fail: cmd += [os.path.join(pnacl_bin_path, 'not')] cmd += [args.pnacl_sz] cmd +=", "} return flags[target] def main(): \"\"\"Run the pnacl-sz compiler on an llvm file.", "if args.expect_fail: args.forceasm = False elif args.filetype == 'asm': pass elif args.filetype ==", "and args.llvm_source: raise RuntimeError(\"Can't specify both '--llvm' and '--llvm-source'\") if args.llvm_source and args.no_local_syms:", "args.target] if args.sandbox: cmd += ['-sandbox'] if args.insts: # If the tests are", "and '--llvm-source'\") if args.llvm and args.tbc: raise RuntimeError(\"Can't specify both '--tbc' and '--llvm'\")", "... %s ... --args' % (script_name, preferred_option) print 'rather than:' print ' %s", "= os.path.basename(sys.argv[0]) for _, arg in enumerate(args.args): # Redirecting the output file needs", "done through the script # because forceasm may introduce a new temporary file", "False if args.output: output_file_name = args.output keep_output_file = True cmd += args.args if", "'--llvm' and '--llvm-source'\") if args.llvm_source and args.no_local_syms: raise RuntimeError(\"Can't specify both '--llvm-source' and", "an error. 
if re.search('^-?-(o|output|filetype|target|sandbox)(=.+)?$', arg): preferred_option = '--output' if re.search('^-?-o(=.+)?$', arg) else arg", "not') argparser.add_argument('--allow-pnacl-reader-error-recovery', action='store_true', help='Continue parsing after first error') argparser.add_argument('--args', '-a', nargs=argparse.REMAINDER, default=[], help='Remaining", "+ 'Subzero instructions') argparser.add_argument('--no-local-syms', required=False, action='store_true', help=\"Don't keep local symbols in the pexe", "args.insts: # If the tests are based on '-verbose inst' output, force #", "utils import FindBaseNaCl, GetObjdumpCmd, shellcmd def TargetAssemblerFlags(target, sandboxed): # TODO(reed kotler). Need to", "action='store_true', help='Force --filetype=asm') argparser.add_argument('--target', default='x8632', dest='target', choices=['x8632','x8664','arm32','mips32'], help='Target architecture. Default %(default)s') argparser.add_argument('--echo-cmd', required=False,", "\"\"\"Run the pnacl-sz compiler on an llvm file. Takes an llvm input file,", "raise RuntimeError(\"Can't specify both '--tbc' and '--llvm-source'\") if args.llvm and args.tbc: raise RuntimeError(\"Can't", "False elif args.filetype == 'asm': pass elif args.filetype == 'iasm': # TODO(sehr) implement", "required=False, help='Output file to write') argparser.add_argument('--insts', required=False, action='store_true', help='Stop after translating to '", "['-triple=%s' % ( 'armv7a-nacl' if sandboxed else 'armv7a'), '-mcpu=cortex-a9', '-mattr=+neon'], 'mips32': ['-triple=%s' %", "or sandbox through --args. Filter and report an error. if re.search('^-?-(o|output|filetype|target|sandbox)(=.+)?$', arg): preferred_option", "we need to # add here for Mips32. flags = { 'x8632': ['-triple=%s'", "'mipsel-nacl' if sandboxed else 'mipsel'), '-mcpu=mips32'] } return flags[target] def TargetDisassemblerFlags(target): flags =", "def main(): \"\"\"Run the pnacl-sz compiler on an llvm file. 
Takes an llvm", "asm_temp = tempfile.NamedTemporaryFile(delete=False) asm_temp.close() output_file_name = asm_temp.name if args.assemble and args.filetype != 'obj':", "['-triple=%s' % ('i686-nacl' if sandboxed else 'i686')], 'x8664': ['-triple=%s' % ( 'x86_64-nacl' if", "an llvm input file, freezes it into a pexe file, converts it to", "args.llvm_source and args.no_local_syms: raise RuntimeError(\"Can't specify both '--llvm-source' and \" + \"'--no-local-syms'\") if", "+= ['--build-on-read=1'] cmd += ['--filetype=' + args.filetype] cmd += ['--emit-revision=0'] script_name = os.path.basename(sys.argv[0])", "args.assemble or args.disassemble: if not output_file_name: # On windows we may need to", "= tempfile.NamedTemporaryFile(delete=False) asm_temp.close() output_file_name = asm_temp.name if args.assemble and args.filetype != 'obj': cmd", "'convert to Subzero') argparser.add_argument( '--pnacl-sz', required=False, default='./pnacl-sz', metavar='PNACL-SZ', help=\"Subzero translator 'pnacl-sz'\") argparser.add_argument('--pnacl-bin-path', required=False,", "== 'obj': args.filetype = 'asm' args.assemble = True cmd = [] if args.tbc:", "required=False, action='store_true', help='Stop after translating to ' + 'Subzero instructions') argparser.add_argument('--no-local-syms', required=False, action='store_true',", "args.llvm_source and args.tbc: raise RuntimeError(\"Can't specify both '--tbc' and '--llvm-source'\") if args.llvm and", "is textual bitcode (not .ll)') argparser.add_argument('--expect-fail', required=False, action='store_true', help='Negate success of run by", "( 'mipsel-nacl' if sandboxed else 'mipsel'), '-mcpu=mips32'] } return flags[target] def TargetDisassemblerFlags(target): flags", "output_file_name]) elif output_file_name: cmd += ['-o', output_file_name] if args.disassemble: # Show wide instruction", "cmd = [os.path.join(pnacl_bin_path, 'pnacl-bcfuzz'), llfile, '-bitcode-as-text', '-output', '-', '|'] elif not args.llvm_source: cmd", "issues could occur 
when setting filetype, target, # or sandbox through --args. Filter", "'asm', 'iasm'], help='Output file type. Default %(default)s') argparser.add_argument('--forceasm', required=False, action='store_true', help='Force --filetype=asm') argparser.add_argument('--target',", "os.path.join(pnacl_bin_path, 'llvm-mc')] + TargetAssemblerFlags(args.target, args.sandbox) + ['-filetype=obj', '-o', output_file_name]) elif output_file_name: cmd +=", "# because forceasm may introduce a new temporary file between pnacl-sz # and", "a Subzero program, and finally compiles it. \"\"\" argparser = argparse.ArgumentParser( description=' '", "+ TargetDisassemblerFlags(args.target) + [output_file_name]) stdout_result = shellcmd(cmd, echo=args.echo_cmd) if not args.echo_cmd: sys.stdout.write(stdout_result) if", "llvm input file, freezes it into a pexe file, converts it to a", "after translating to ' + 'Subzero instructions') argparser.add_argument('--no-local-syms', required=False, action='store_true', help=\"Don't keep local", "argparser = argparse.ArgumentParser( description=' ' + main.__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter) argparser.add_argument('--input', '-i', required=True, help='LLVM source", "here for Mips32. flags = { 'x8632': ['-triple=%s' % ('i686-nacl' if sandboxed else", "'not')] cmd += [args.pnacl_sz] cmd += ['--target', args.target] if args.sandbox: cmd += ['-sandbox']", "'pnacl-sz'\") argparser.add_argument('--pnacl-bin-path', required=False, default=( '{root}/toolchain/linux_x86/pnacl_newlib_raw/bin' ).format(root=FindBaseNaCl()), metavar='PNACL_BIN_PATH', help='Path to LLVM & Binutils executables", "main(): \"\"\"Run the pnacl-sz compiler on an llvm file. Takes an llvm input", "flags[target] def main(): \"\"\"Run the pnacl-sz compiler on an llvm file. 
Takes an", "action='append', default=[], help='Add a disassembler flag') argparser.add_argument('--filetype', default='iasm', dest='filetype', choices=['obj', 'asm', 'iasm'], help='Output", "# single-threaded translation because dump output does not get # reassembled into order.", "'asm': pass elif args.filetype == 'iasm': # TODO(sehr) implement forceasm for iasm. pass", "single-threaded translation because dump output does not get # reassembled into order. cmd", "introduce a new temporary file between pnacl-sz # and llvm-mc. Similar issues could", "asm_temp = None output_file_name = None keep_output_file = False if args.output: output_file_name =", "setting filetype, target, # or sandbox through --args. Filter and report an error.", "when setting filetype, target, # or sandbox through --args. Filter and report an", "argparse.ArgumentParser( description=' ' + main.__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter) argparser.add_argument('--input', '-i', required=True, help='LLVM source file to", "should be set using:' print ' %s ... %s ... --args' % (script_name,", "for building PEXE files)') argparser.add_argument('--assemble', required=False, action='store_true', help='Assemble the output') argparser.add_argument('--disassemble', required=False, action='store_true',", "if not output_file_name: # On windows we may need to close the file", "TODO(reed kotler). Need to find out exactly we need to # add here", "+ main.__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter) argparser.add_argument('--input', '-i', required=True, help='LLVM source file to compile') argparser.add_argument('--output', '-o',", "metavar='PNACL_BIN_PATH', help='Path to LLVM & Binutils executables ' + '(e.g. 
for building PEXE", "if sandboxed else 'armv7a'), '-mcpu=cortex-a9', '-mattr=+neon'], 'mips32': ['-triple=%s' % ( 'mipsel-nacl' if sandboxed", "'convert to Subzero') argparser.add_argument('--llvm-source', required=False, action='store_true', help='Parse source directly into llvm IR '", "does not get # reassembled into order. cmd += ['-verbose', 'inst,global_init', '-notranslate', '-threads=0']", "TargetDisassemblerFlags(args.target) + [output_file_name]) stdout_result = shellcmd(cmd, echo=args.echo_cmd) if not args.echo_cmd: sys.stdout.write(stdout_result) if asm_temp", "TargetAssemblerFlags(args.target, args.sandbox) + ['-filetype=obj', '-o', output_file_name]) elif output_file_name: cmd += ['-o', output_file_name] if", "# If the tests are based on '-verbose inst' output, force # single-threaded", "and report an error. if re.search('^-?-(o|output|filetype|target|sandbox)(=.+)?$', arg): preferred_option = '--output' if re.search('^-?-o(=.+)?$', arg)", "if not args.llvm_source: cmd += ['--bitcode-format=pnacl'] if not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] if", "+ '(e.g. 
for building PEXE files)') argparser.add_argument('--assemble', required=False, action='store_true', help='Assemble the output') argparser.add_argument('--disassemble',", "cmd += ['--emit-revision=0'] script_name = os.path.basename(sys.argv[0]) for _, arg in enumerate(args.args): # Redirecting", "= None output_file_name = None keep_output_file = False if args.output: output_file_name = args.output", "'x86_64-nacl' if sandboxed else 'x86_64')], 'arm32': ['-triple=%s' % ( 'armv7a-nacl' if sandboxed else", "output_file_name: cmd += ['-o', output_file_name] if args.disassemble: # Show wide instruction encodings, diassemble,", "to Subzero') argparser.add_argument('--llvm-source', required=False, action='store_true', help='Parse source directly into llvm IR ' +", "not args.llvm_source: cmd += ['--bitcode-format=pnacl'] if not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] if args.llvm", "action='store_true', help='Parse source directly into llvm IR ' + '(without generating a pexe),", "\"\"\" argparser = argparse.ArgumentParser( description=' ' + main.__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter) argparser.add_argument('--input', '-i', required=True, help='LLVM", "args.expect_fail: cmd += [os.path.join(pnacl_bin_path, 'not')] cmd += [args.pnacl_sz] cmd += ['--target', args.target] if", "args.args if args.llvm_source: cmd += [llfile] if args.assemble or args.disassemble: if not output_file_name:", "output_file_name: # On windows we may need to close the file first before", "action='store_true', help=\"Don't keep local symbols in the pexe file\") argparser.add_argument('--llvm', required=False, action='store_true', help='Parse", "+= args.args if args.llvm_source: cmd += [llfile] if args.assemble or args.disassemble: if not", "elif args.allow_pnacl_reader_error_recovery: cmd += ['-allow-pnacl-reader-error-recovery', '-threads=0'] if not args.llvm_source: cmd += ['--bitcode-format=pnacl'] if", "['-triple=%s' % ( 'mipsel-nacl' if sandboxed else 
'mipsel'), '-mcpu=mips32'] } return flags[target] def", "parsing after first error') argparser.add_argument('--args', '-a', nargs=argparse.REMAINDER, default=[], help='Remaining arguments are passed to", "the pexe file\") argparser.add_argument('--llvm', required=False, action='store_true', help='Parse pexe into llvm IR first, then", "argparser.add_argument('--echo-cmd', required=False, action='store_true', help='Trace command that generates ICE instructions') argparser.add_argument('--tbc', required=False, action='store_true', help='Input", "' + '(without generating a pexe), then ' + 'convert to Subzero') argparser.add_argument(", "'armv7a'), '-mcpu=cortex-a9', '-mattr=+neon'], 'mips32': ['-triple=%s' % ( 'mipsel-nacl' if sandboxed else 'mipsel'), '-mcpu=mips32']", "else 'mipsel'), '-mcpu=mips32'] } return flags[target] def TargetDisassemblerFlags(target): flags = { 'x8632': ['-Mintel'],", "default=[], help='Remaining arguments are passed to pnacl-sz') argparser.add_argument('--sandbox', required=False, action='store_true', help='Sandboxes the generated", "+= ['--filetype=' + args.filetype] cmd += ['--emit-revision=0'] script_name = os.path.basename(sys.argv[0]) for _, arg", "cmd += args.args if args.llvm_source: cmd += [llfile] if args.assemble or args.disassemble: if", "args.no_local_syms: cmd += ['--allow-local-symbol-tables'] cmd += ['|'] if args.expect_fail: cmd += [os.path.join(pnacl_bin_path, 'not')]", "== 'iasm': # TODO(sehr) implement forceasm for iasm. 
pass elif args.filetype == 'obj':", "script # because forceasm may introduce a new temporary file between pnacl-sz #", "else 'i686')], 'x8664': ['-triple=%s' % ( 'x86_64-nacl' if sandboxed else 'x86_64')], 'arm32': ['-triple=%s'", "os.path.basename(sys.argv[0]) for _, arg in enumerate(args.args): # Redirecting the output file needs to", "directly into llvm IR ' + '(without generating a pexe), then ' +", "default=[], help='Add a disassembler flag') argparser.add_argument('--filetype', default='iasm', dest='filetype', choices=['obj', 'asm', 'iasm'], help='Output file", "if sandboxed else 'i686')], 'x8664': ['-triple=%s' % ( 'x86_64-nacl' if sandboxed else 'x86_64')],", "Filter and report an error. if re.search('^-?-(o|output|filetype|target|sandbox)(=.+)?$', arg): preferred_option = '--output' if re.search('^-?-o(=.+)?$',", "+= [os.path.join(pnacl_bin_path, 'not')] cmd += [args.pnacl_sz] cmd += ['--target', args.target] if args.sandbox: cmd", "re import subprocess import sys import tempfile from utils import FindBaseNaCl, GetObjdumpCmd, shellcmd", "cmd += ['-verbose', 'inst,global_init', '-notranslate', '-threads=0'] elif args.allow_pnacl_reader_error_recovery: cmd += ['-allow-pnacl-reader-error-recovery', '-threads=0'] if", "required=False, action='store_true', help='Input is textual bitcode (not .ll)') argparser.add_argument('--expect-fail', required=False, action='store_true', help='Negate success", "= args.output keep_output_file = True cmd += args.args if args.llvm_source: cmd += [llfile]", "' %s ... --args %s ...' % (script_name, arg) exit(1) asm_temp = None", "from utils import FindBaseNaCl, GetObjdumpCmd, shellcmd def TargetAssemblerFlags(target, sandboxed): # TODO(reed kotler). Need", "--filetype=asm') argparser.add_argument('--target', default='x8632', dest='target', choices=['x8632','x8664','arm32','mips32'], help='Target architecture. 
Default %(default)s') argparser.add_argument('--echo-cmd', required=False, action='store_true', help='Trace", "translating to ' + 'Subzero instructions') argparser.add_argument('--no-local-syms', required=False, action='store_true', help=\"Don't keep local symbols", "+ TargetAssemblerFlags(args.target, args.sandbox) + ['-filetype=obj', '-o', output_file_name]) elif output_file_name: cmd += ['-o', output_file_name]", "# TODO(reed kotler). Need to find out exactly we need to # add", "argparser.add_argument('--expect-fail', required=False, action='store_true', help='Negate success of run by using LLVM not') argparser.add_argument('--allow-pnacl-reader-error-recovery', action='store_true',", "args.llvm_source: cmd += ['--build-on-read=0'] else: cmd += ['--build-on-read=1'] cmd += ['--filetype=' + args.filetype]", "arg in enumerate(args.args): # Redirecting the output file needs to be done through", "'-mcpu=cortex-a9', '-mattr=+neon'], 'mips32': ['-triple=%s' % ( 'mipsel-nacl' if sandboxed else 'mipsel'), '-mcpu=mips32'] }", "instruction encodings, diassemble, show relocs and # dissasemble zeros. cmd += (['&&', os.path.join(pnacl_bin_path,", "'-o', output_file_name]) elif output_file_name: cmd += ['-o', output_file_name] if args.disassemble: # Show wide", "llfile, '-o', '-', '|', os.path.join(pnacl_bin_path, 'pnacl-freeze')] if not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] cmd", "compiler on an llvm file. Takes an llvm input file, freezes it into", "# or sandbox through --args. Filter and report an error. if re.search('^-?-(o|output|filetype|target|sandbox)(=.+)?$', arg):", "file. Takes an llvm input file, freezes it into a pexe file, converts", "args.expect_fail: args.forceasm = False elif args.filetype == 'asm': pass elif args.filetype == 'iasm':", "'Option should be set using:' print ' %s ... %s ... --args' %", "--args. Filter and report an error. 
if re.search('^-?-(o|output|filetype|target|sandbox)(=.+)?$', arg): preferred_option = '--output' if", "reassembled into order. cmd += ['-verbose', 'inst,global_init', '-notranslate', '-threads=0'] elif args.allow_pnacl_reader_error_recovery: cmd +=", "in the pexe file\") argparser.add_argument('--llvm', required=False, action='store_true', help='Parse pexe into llvm IR first,", "output') argparser.add_argument('--disassemble', required=False, action='store_true', help='Disassemble the assembled output') argparser.add_argument('--dis-flags', required=False, action='append', default=[], help='Add", "file, freezes it into a pexe file, converts it to a Subzero program,", "specify both '--tbc' and '--llvm-source'\") if args.llvm and args.tbc: raise RuntimeError(\"Can't specify both", "a pexe), then ' + 'convert to Subzero') argparser.add_argument( '--pnacl-sz', required=False, default='./pnacl-sz', metavar='PNACL-SZ',", ".ll)') argparser.add_argument('--expect-fail', required=False, action='store_true', help='Negate success of run by using LLVM not') argparser.add_argument('--allow-pnacl-reader-error-recovery',", "[args.pnacl_sz] cmd += ['--target', args.target] if args.sandbox: cmd += ['-sandbox'] if args.insts: #", "specify both '--llvm-source' and \" + \"'--no-local-syms'\") if args.llvm_source and args.tbc: raise RuntimeError(\"Can't", "if not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] if args.llvm or args.llvm_source: cmd += ['--build-on-read=0']", "[], 'mips32':[] } return flags[target] def main(): \"\"\"Run the pnacl-sz compiler on an", "using:' print ' %s ... %s ... 
--args' % (script_name, preferred_option) print 'rather", "are passed to pnacl-sz') argparser.add_argument('--sandbox', required=False, action='store_true', help='Sandboxes the generated code') args =", "error') argparser.add_argument('--args', '-a', nargs=argparse.REMAINDER, default=[], help='Remaining arguments are passed to pnacl-sz') argparser.add_argument('--sandbox', required=False,", "action='store_true', help='Negate success of run by using LLVM not') argparser.add_argument('--allow-pnacl-reader-error-recovery', action='store_true', help='Continue parsing", "default='./pnacl-sz', metavar='PNACL-SZ', help=\"Subzero translator 'pnacl-sz'\") argparser.add_argument('--pnacl-bin-path', required=False, default=( '{root}/toolchain/linux_x86/pnacl_newlib_raw/bin' ).format(root=FindBaseNaCl()), metavar='PNACL_BIN_PATH', help='Path to", "and instead manually delete. asm_temp = tempfile.NamedTemporaryFile(delete=False) asm_temp.close() output_file_name = asm_temp.name if args.assemble", "not args.echo_cmd: sys.stdout.write(stdout_result) if asm_temp and not keep_output_file: os.remove(output_file_name) if __name__ == '__main__':", "if args.assemble and args.filetype != 'obj': cmd += (['|', os.path.join(pnacl_bin_path, 'llvm-mc')] + TargetAssemblerFlags(args.target,", "if args.llvm_source and args.no_local_syms: raise RuntimeError(\"Can't specify both '--llvm-source' and \" + \"'--no-local-syms'\")", "the output') argparser.add_argument('--disassemble', required=False, action='store_true', help='Disassemble the assembled output') argparser.add_argument('--dis-flags', required=False, action='append', default=[],", "exit(1) asm_temp = None output_file_name = None keep_output_file = False if args.output: output_file_name", "Binutils executables ' + '(e.g. 
for building PEXE files)') argparser.add_argument('--assemble', required=False, action='store_true', help='Assemble", "required=False, action='store_true', help='Assemble the output') argparser.add_argument('--disassemble', required=False, action='store_true', help='Disassemble the assembled output') argparser.add_argument('--dis-flags',", "first, then ' + 'convert to Subzero') argparser.add_argument('--llvm-source', required=False, action='store_true', help='Parse source directly", "' + '(e.g. for building PEXE files)') argparser.add_argument('--assemble', required=False, action='store_true', help='Assemble the output')", "nargs=argparse.REMAINDER, default=[], help='Remaining arguments are passed to pnacl-sz') argparser.add_argument('--sandbox', required=False, action='store_true', help='Sandboxes the", "args.output: output_file_name = args.output keep_output_file = True cmd += args.args if args.llvm_source: cmd", "\"'--no-local-syms'\") if args.llvm_source and args.tbc: raise RuntimeError(\"Can't specify both '--tbc' and '--llvm-source'\") if", "argparser.add_argument('--sandbox', required=False, action='store_true', help='Sandboxes the generated code') args = argparser.parse_args() pnacl_bin_path = args.pnacl_bin_path", "help='Output file to write') argparser.add_argument('--insts', required=False, action='store_true', help='Stop after translating to ' +", "'-', '|', os.path.join(pnacl_bin_path, 'pnacl-freeze')] if not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] cmd += ['|']", "args.llvm and args.tbc: raise RuntimeError(\"Can't specify both '--tbc' and '--llvm'\") if args.forceasm: if", "% (script_name, preferred_option) print 'rather than:' print ' %s ... 
--args %s ...'", "using LLVM not') argparser.add_argument('--allow-pnacl-reader-error-recovery', action='store_true', help='Continue parsing after first error') argparser.add_argument('--args', '-a', nargs=argparse.REMAINDER,", "'pnacl-bcfuzz'), llfile, '-bitcode-as-text', '-output', '-', '|'] elif not args.llvm_source: cmd = [os.path.join(pnacl_bin_path, 'llvm-as'),", "tests are based on '-verbose inst' output, force # single-threaded translation because dump", "['--target', args.target] if args.sandbox: cmd += ['-sandbox'] if args.insts: # If the tests", "delete. asm_temp = tempfile.NamedTemporaryFile(delete=False) asm_temp.close() output_file_name = asm_temp.name if args.assemble and args.filetype !=", "order. cmd += ['-verbose', 'inst,global_init', '-notranslate', '-threads=0'] elif args.allow_pnacl_reader_error_recovery: cmd += ['-allow-pnacl-reader-error-recovery', '-threads=0']", "into order. cmd += ['-verbose', 'inst,global_init', '-notranslate', '-threads=0'] elif args.allow_pnacl_reader_error_recovery: cmd += ['-allow-pnacl-reader-error-recovery',", "args = argparser.parse_args() pnacl_bin_path = args.pnacl_bin_path llfile = args.input if args.llvm and args.llvm_source:", "RuntimeError(\"Can't specify both '--tbc' and '--llvm-source'\") if args.llvm and args.tbc: raise RuntimeError(\"Can't specify", "= [os.path.join(pnacl_bin_path, 'llvm-as'), llfile, '-o', '-', '|', os.path.join(pnacl_bin_path, 'pnacl-freeze')] if not args.no_local_syms: cmd", "default=( '{root}/toolchain/linux_x86/pnacl_newlib_raw/bin' ).format(root=FindBaseNaCl()), metavar='PNACL_BIN_PATH', help='Path to LLVM & Binutils executables ' + '(e.g.", "help='Sandboxes the generated code') args = argparser.parse_args() pnacl_bin_path = args.pnacl_bin_path llfile = args.input", "arg) exit(1) asm_temp = None output_file_name = None keep_output_file = False if args.output:", "['-Mintel'], 'arm32': [], 'mips32':[] } return flags[target] def main(): \"\"\"Run the pnacl-sz compiler", "on an llvm 
file. Takes an llvm input file, freezes it into a", "help='Path to LLVM & Binutils executables ' + '(e.g. for building PEXE files)')", "shellcmd def TargetAssemblerFlags(target, sandboxed): # TODO(reed kotler). Need to find out exactly we", "then ' + 'convert to Subzero') argparser.add_argument('--llvm-source', required=False, action='store_true', help='Parse source directly into", "by the other tools, so don't do delete-on-close, # and instead manually delete.", "argparser.add_argument('--pnacl-bin-path', required=False, default=( '{root}/toolchain/linux_x86/pnacl_newlib_raw/bin' ).format(root=FindBaseNaCl()), metavar='PNACL_BIN_PATH', help='Path to LLVM & Binutils executables '", "argparser.add_argument( '--pnacl-sz', required=False, default='./pnacl-sz', metavar='PNACL-SZ', help=\"Subzero translator 'pnacl-sz'\") argparser.add_argument('--pnacl-bin-path', required=False, default=( '{root}/toolchain/linux_x86/pnacl_newlib_raw/bin' ).format(root=FindBaseNaCl()),", "architecture. Default %(default)s') argparser.add_argument('--echo-cmd', required=False, action='store_true', help='Trace command that generates ICE instructions') argparser.add_argument('--tbc',", "could occur when setting filetype, target, # or sandbox through --args. Filter and", "'-z'] + TargetDisassemblerFlags(args.target) + [output_file_name]) stdout_result = shellcmd(cmd, echo=args.echo_cmd) if not args.echo_cmd: sys.stdout.write(stdout_result)", "script_name = os.path.basename(sys.argv[0]) for _, arg in enumerate(args.args): # Redirecting the output file", "before it can be # re-opened by the other tools, so don't do", "and \" + \"'--no-local-syms'\") if args.llvm_source and args.tbc: raise RuntimeError(\"Can't specify both '--tbc'", "def TargetAssemblerFlags(target, sandboxed): # TODO(reed kotler). 
Need to find out exactly we need", "import os import re import subprocess import sys import tempfile from utils import", "= [] if args.tbc: cmd = [os.path.join(pnacl_bin_path, 'pnacl-bcfuzz'), llfile, '-bitcode-as-text', '-output', '-', '|']", "if not args.echo_cmd: sys.stdout.write(stdout_result) if asm_temp and not keep_output_file: os.remove(output_file_name) if __name__ ==", "['--build-on-read=0'] else: cmd += ['--build-on-read=1'] cmd += ['--filetype=' + args.filetype] cmd += ['--emit-revision=0']", "required=False, action='store_true', help='Sandboxes the generated code') args = argparser.parse_args() pnacl_bin_path = args.pnacl_bin_path llfile", "not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] if args.llvm or args.llvm_source: cmd += ['--build-on-read=0'] else:", "required=False, action='store_true', help='Trace command that generates ICE instructions') argparser.add_argument('--tbc', required=False, action='store_true', help='Input is", "argparser.add_argument('--filetype', default='iasm', dest='filetype', choices=['obj', 'asm', 'iasm'], help='Output file type. Default %(default)s') argparser.add_argument('--forceasm', required=False,", "print 'Option should be set using:' print ' %s ... %s ... --args'", "write') argparser.add_argument('--insts', required=False, action='store_true', help='Stop after translating to ' + 'Subzero instructions') argparser.add_argument('--no-local-syms',", "['|'] if args.expect_fail: cmd += [os.path.join(pnacl_bin_path, 'not')] cmd += [args.pnacl_sz] cmd += ['--target',", "itertools import os import re import subprocess import sys import tempfile from utils", "file type. 
Default %(default)s') argparser.add_argument('--forceasm', required=False, action='store_true', help='Force --filetype=asm') argparser.add_argument('--target', default='x8632', dest='target', choices=['x8632','x8664','arm32','mips32'],", "_, arg in enumerate(args.args): # Redirecting the output file needs to be done", "'x8632': ['-Mintel'], 'x8664': ['-Mintel'], 'arm32': [], 'mips32':[] } return flags[target] def main(): \"\"\"Run", "source file to compile') argparser.add_argument('--output', '-o', required=False, help='Output file to write') argparser.add_argument('--insts', required=False,", "local symbols in the pexe file\") argparser.add_argument('--llvm', required=False, action='store_true', help='Parse pexe into llvm", "[os.path.join(pnacl_bin_path, 'not')] cmd += [args.pnacl_sz] cmd += ['--target', args.target] if args.sandbox: cmd +=", "'i686')], 'x8664': ['-triple=%s' % ( 'x86_64-nacl' if sandboxed else 'x86_64')], 'arm32': ['-triple=%s' %", "% ('i686-nacl' if sandboxed else 'i686')], 'x8664': ['-triple=%s' % ( 'x86_64-nacl' if sandboxed", "= argparse.ArgumentParser( description=' ' + main.__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter) argparser.add_argument('--input', '-i', required=True, help='LLVM source file", "[llfile] if args.assemble or args.disassemble: if not output_file_name: # On windows we may", "import re import subprocess import sys import tempfile from utils import FindBaseNaCl, GetObjdumpCmd,", "'obj': args.filetype = 'asm' args.assemble = True cmd = [] if args.tbc: cmd", "compile') argparser.add_argument('--output', '-o', required=False, help='Output file to write') argparser.add_argument('--insts', required=False, action='store_true', help='Stop after", "if args.llvm and args.llvm_source: raise RuntimeError(\"Can't specify both '--llvm' and '--llvm-source'\") if args.llvm_source", "command that generates ICE instructions') argparser.add_argument('--tbc', required=False, action='store_true', help='Input is textual bitcode 
(not", "instead manually delete. asm_temp = tempfile.NamedTemporaryFile(delete=False) asm_temp.close() output_file_name = asm_temp.name if args.assemble and", "if args.llvm or args.llvm_source: cmd += ['--build-on-read=0'] else: cmd += ['--build-on-read=1'] cmd +=", "print ' %s ... %s ... --args' % (script_name, preferred_option) print 'rather than:'", "be # re-opened by the other tools, so don't do delete-on-close, # and", "required=False, default=( '{root}/toolchain/linux_x86/pnacl_newlib_raw/bin' ).format(root=FindBaseNaCl()), metavar='PNACL_BIN_PATH', help='Path to LLVM & Binutils executables ' +", "be set using:' print ' %s ... %s ... --args' % (script_name, preferred_option)", "for _, arg in enumerate(args.args): # Redirecting the output file needs to be", "# and instead manually delete. asm_temp = tempfile.NamedTemporaryFile(delete=False) asm_temp.close() output_file_name = asm_temp.name if", "tools, so don't do delete-on-close, # and instead manually delete. asm_temp = tempfile.NamedTemporaryFile(delete=False)", "file\") argparser.add_argument('--llvm', required=False, action='store_true', help='Parse pexe into llvm IR first, then ' +", "output, force # single-threaded translation because dump output does not get # reassembled", "return flags[target] def TargetDisassemblerFlags(target): flags = { 'x8632': ['-Mintel'], 'x8664': ['-Mintel'], 'arm32': [],", "os import re import subprocess import sys import tempfile from utils import FindBaseNaCl,", "filetype, target, # or sandbox through --args. Filter and report an error. if", "if not args.no_local_syms: cmd += ['--allow-local-symbol-tables'] cmd += ['|'] if args.expect_fail: cmd +=", "to Subzero') argparser.add_argument( '--pnacl-sz', required=False, default='./pnacl-sz', metavar='PNACL-SZ', help=\"Subzero translator 'pnacl-sz'\") argparser.add_argument('--pnacl-bin-path', required=False, default=(", "to find out exactly we need to # add here for Mips32. 
flags", "--args' % (script_name, preferred_option) print 'rather than:' print ' %s ... --args %s", "generating a pexe), then ' + 'convert to Subzero') argparser.add_argument( '--pnacl-sz', required=False, default='./pnacl-sz',", "print 'rather than:' print ' %s ... --args %s ...' % (script_name, arg)", "echo=args.echo_cmd) if not args.echo_cmd: sys.stdout.write(stdout_result) if asm_temp and not keep_output_file: os.remove(output_file_name) if __name__", "required=False, action='append', default=[], help='Add a disassembler flag') argparser.add_argument('--filetype', default='iasm', dest='filetype', choices=['obj', 'asm', 'iasm']," ]
[ ".context import monad from .context import tangle from .context import feeder from .context", "tangle from .context import feeder from .context import prototype from .context import magica", "import tangle from .context import feeder from .context import prototype from .context import", "from .context import hako from connectors import ml_100k_conn as conn ip='10.141.246.29'; port=27017; version='100k';", ".context import feeder from .context import prototype from .context import magica from .context", "import hako from connectors import ml_100k_conn as conn ip='10.141.246.29'; port=27017; version='100k'; batch_size=100; connector=conn.MLConnector(ip,port,version);", "import feeder from .context import prototype from .context import magica from .context import", "from .context import prototype from .context import magica from .context import hako from", "import monad from .context import tangle from .context import feeder from .context import", "from .context import tangle from .context import feeder from .context import prototype from", "from .context import feeder from .context import prototype from .context import magica from", ".context import prototype from .context import magica from .context import hako from connectors", "prototype from .context import magica from .context import hako from connectors import ml_100k_conn", "from .context import monad from .context import tangle from .context import feeder from", "from .context import magica from .context import hako from connectors import ml_100k_conn as", "import magica from .context import hako from connectors import ml_100k_conn as conn ip='10.141.246.29';", "magica from .context import hako from connectors import ml_100k_conn as conn ip='10.141.246.29'; port=27017;", "monad from .context import tangle from .context import feeder from .context import prototype", "hako from connectors import ml_100k_conn as conn ip='10.141.246.29'; port=27017; version='100k'; batch_size=100; 
connector=conn.MLConnector(ip,port,version); print(connector.feed_train_batch(batch_size));", ".context import hako from connectors import ml_100k_conn as conn ip='10.141.246.29'; port=27017; version='100k'; batch_size=100;", "<reponame>ravenSanstete/hako from .context import monad from .context import tangle from .context import feeder", "from connectors import ml_100k_conn as conn ip='10.141.246.29'; port=27017; version='100k'; batch_size=100; connector=conn.MLConnector(ip,port,version); print(connector.feed_train_batch(batch_size)); #", ".context import magica from .context import hako from connectors import ml_100k_conn as conn", "import prototype from .context import magica from .context import hako from connectors import", ".context import tangle from .context import feeder from .context import prototype from .context", "feeder from .context import prototype from .context import magica from .context import hako" ]
[ "for i in xrange(0, k): ret += (a[i][k] / a[i][i]) * (k+1)**i return", "if i == j: continue d = a[i][j] / a[j][j] for l in", "j: continue d = a[i][j] / a[j][j] for l in xrange(j, k+1): a[i][l]", "__future__ import division def u(n): ret = 0 for i in xrange(0, 11):", "for i in xrange(0, 11): ret += (-1)**i * n**i return ret def", "for n in xrange(1, k+1): x = [n**x for x in xrange(0, k)]", "continue d = a[i][j] / a[j][j] for l in xrange(j, k+1): a[i][l] -=", "+= (-1)**i * n**i return ret def solve(k): a = [] for n", "solve(k): a = [] for n in xrange(1, k+1): x = [n**x for", "ret def solve(k): a = [] for n in xrange(1, k+1): x =", "a[i][l] -= a[j][l] * d ret = 0 for i in xrange(0, k):", "i in xrange(0, k): if i == j: continue d = a[i][j] /", "k)] x.append(u(n)) a.append(x) for j in xrange(0, k): for i in xrange(0, k):", "in xrange(0, k)] x.append(u(n)) a.append(x) for j in xrange(0, k): for i in", "d = a[i][j] / a[j][j] for l in xrange(j, k+1): a[i][l] -= a[j][l]", "+= (a[i][k] / a[i][i]) * (k+1)**i return ret ans = sum(map(solve, xrange(1, 11)))", "j in xrange(0, k): for i in xrange(0, k): if i == j:", "a[i][j] / a[j][j] for l in xrange(j, k+1): a[i][l] -= a[j][l] * d", "ret += (-1)**i * n**i return ret def solve(k): a = [] for", "0 for i in xrange(0, 11): ret += (-1)**i * n**i return ret", "a[j][l] * d ret = 0 for i in xrange(0, k): ret +=", "for i in xrange(0, k): if i == j: continue d = a[i][j]", "-= a[j][l] * d ret = 0 for i in xrange(0, k): ret", "i == j: continue d = a[i][j] / a[j][j] for l in xrange(j,", "in xrange(0, k): if i == j: continue d = a[i][j] / a[j][j]", "d ret = 0 for i in xrange(0, k): ret += (a[i][k] /", "k+1): x = [n**x for x in xrange(0, k)] x.append(u(n)) a.append(x) for j", "i in xrange(0, 11): ret += (-1)**i * n**i return ret def solve(k):", "a = [] for n in xrange(1, k+1): x = [n**x for x", "l in xrange(j, k+1): a[i][l] -= a[j][l] * d ret = 0 for", "in xrange(0, 11): ret += (-1)**i * n**i return ret def solve(k): a", "== j: continue d = 
a[i][j] / a[j][j] for l in xrange(j, k+1):", "for j in xrange(0, k): for i in xrange(0, k): if i ==", "xrange(0, k)] x.append(u(n)) a.append(x) for j in xrange(0, k): for i in xrange(0,", "ret += (a[i][k] / a[i][i]) * (k+1)**i return ret ans = sum(map(solve, xrange(1,", "= [n**x for x in xrange(0, k)] x.append(u(n)) a.append(x) for j in xrange(0,", "xrange(0, k): ret += (a[i][k] / a[i][i]) * (k+1)**i return ret ans =", "0 for i in xrange(0, k): ret += (a[i][k] / a[i][i]) * (k+1)**i", "from __future__ import division def u(n): ret = 0 for i in xrange(0,", "k): if i == j: continue d = a[i][j] / a[j][j] for l", "def solve(k): a = [] for n in xrange(1, k+1): x = [n**x", "n in xrange(1, k+1): x = [n**x for x in xrange(0, k)] x.append(u(n))", "ret = 0 for i in xrange(0, 11): ret += (-1)**i * n**i", "xrange(0, k): for i in xrange(0, k): if i == j: continue d", "/ a[i][i]) * (k+1)**i return ret ans = sum(map(solve, xrange(1, 11))) print ans", "return ret def solve(k): a = [] for n in xrange(1, k+1): x", "u(n): ret = 0 for i in xrange(0, 11): ret += (-1)**i *", "x in xrange(0, k)] x.append(u(n)) a.append(x) for j in xrange(0, k): for i", "= 0 for i in xrange(0, k): ret += (a[i][k] / a[i][i]) *", "import division def u(n): ret = 0 for i in xrange(0, 11): ret", "[n**x for x in xrange(0, k)] x.append(u(n)) a.append(x) for j in xrange(0, k):", "a.append(x) for j in xrange(0, k): for i in xrange(0, k): if i", "11): ret += (-1)**i * n**i return ret def solve(k): a = []", "xrange(j, k+1): a[i][l] -= a[j][l] * d ret = 0 for i in", "in xrange(0, k): for i in xrange(0, k): if i == j: continue", "<reponame>huangshenno1/project_euler from __future__ import division def u(n): ret = 0 for i in", "= [] for n in xrange(1, k+1): x = [n**x for x in", "x.append(u(n)) a.append(x) for j in xrange(0, k): for i in xrange(0, k): if", "i in xrange(0, k): ret += (a[i][k] / a[i][i]) * (k+1)**i return ret", "= a[i][j] / a[j][j] for l in xrange(j, k+1): a[i][l] -= a[j][l] *", "k+1): a[i][l] -= a[j][l] 
* d ret = 0 for i in xrange(0,", "xrange(0, 11): ret += (-1)**i * n**i return ret def solve(k): a =", "x = [n**x for x in xrange(0, k)] x.append(u(n)) a.append(x) for j in", "(a[i][k] / a[i][i]) * (k+1)**i return ret ans = sum(map(solve, xrange(1, 11))) print", "(-1)**i * n**i return ret def solve(k): a = [] for n in", "for l in xrange(j, k+1): a[i][l] -= a[j][l] * d ret = 0", "n**i return ret def solve(k): a = [] for n in xrange(1, k+1):", "for x in xrange(0, k)] x.append(u(n)) a.append(x) for j in xrange(0, k): for", "xrange(1, k+1): x = [n**x for x in xrange(0, k)] x.append(u(n)) a.append(x) for", "k): ret += (a[i][k] / a[i][i]) * (k+1)**i return ret ans = sum(map(solve,", "in xrange(1, k+1): x = [n**x for x in xrange(0, k)] x.append(u(n)) a.append(x)", "a[j][j] for l in xrange(j, k+1): a[i][l] -= a[j][l] * d ret =", "[] for n in xrange(1, k+1): x = [n**x for x in xrange(0,", "division def u(n): ret = 0 for i in xrange(0, 11): ret +=", "* n**i return ret def solve(k): a = [] for n in xrange(1,", "k): for i in xrange(0, k): if i == j: continue d =", "= 0 for i in xrange(0, 11): ret += (-1)**i * n**i return", "ret = 0 for i in xrange(0, k): ret += (a[i][k] / a[i][i])", "* d ret = 0 for i in xrange(0, k): ret += (a[i][k]", "in xrange(j, k+1): a[i][l] -= a[j][l] * d ret = 0 for i", "xrange(0, k): if i == j: continue d = a[i][j] / a[j][j] for", "/ a[j][j] for l in xrange(j, k+1): a[i][l] -= a[j][l] * d ret", "def u(n): ret = 0 for i in xrange(0, 11): ret += (-1)**i", "in xrange(0, k): ret += (a[i][k] / a[i][i]) * (k+1)**i return ret ans" ]
[ "<reponame>tanshuai/reference-wallet # Copyright (c) The Diem Core Contributors # SPDX-License-Identifier: Apache-2.0 def test_tautology():", "# Copyright (c) The Diem Core Contributors # SPDX-License-Identifier: Apache-2.0 def test_tautology(): ..." ]
[ "email: <EMAIL> GitHub: phuycke \"\"\" #%% total_grains = 0 multiplier = 1 for", "65): total_grains += multiplier multiplier *= 2 print('Total amount of wheat: {}'.format(total_grains)) #%%", "2 print('Total amount of wheat: {}'.format(total_grains)) #%% print('Weight of wheat (in tons): {}'.format(total_grains", "in range(1, 65): total_grains += multiplier multiplier *= 2 print('Total amount of wheat:", "#%% print('Weight of wheat (in tons): {}'.format(total_grains * 0.05 / 1000)) #%% #", "utf-8 -*- \"\"\" @author: <NAME> email: <EMAIL> GitHub: phuycke \"\"\" #%% total_grains =", "#!/usr/bin/env python3 # -*- coding: utf-8 -*- \"\"\" @author: <NAME> email: <EMAIL> GitHub:", "amount of wheat: {}'.format(total_grains)) #%% print('Weight of wheat (in tons): {}'.format(total_grains * 0.05", "python3 # -*- coding: utf-8 -*- \"\"\" @author: <NAME> email: <EMAIL> GitHub: phuycke", "<EMAIL> GitHub: phuycke \"\"\" #%% total_grains = 0 multiplier = 1 for i", "= 1 for i in range(1, 65): total_grains += multiplier multiplier *= 2", "print('Weight of wheat (in tons): {}'.format(total_grains * 0.05 / 1000)) #%% # I", "wheat (in tons): {}'.format(total_grains * 0.05 / 1000)) #%% # I don't understand", "multiplier multiplier *= 2 print('Total amount of wheat: {}'.format(total_grains)) #%% print('Weight of wheat", "\"\"\" #%% total_grains = 0 multiplier = 1 for i in range(1, 65):", "0 multiplier = 1 for i in range(1, 65): total_grains += multiplier multiplier", "<reponame>phuycke/Practice-of-computing-using-Python #!/usr/bin/env python3 # -*- coding: utf-8 -*- \"\"\" @author: <NAME> email: <EMAIL>", "GitHub: phuycke \"\"\" #%% total_grains = 0 multiplier = 1 for i in", "range(1, 65): total_grains += multiplier multiplier *= 2 print('Total amount of wheat: {}'.format(total_grains))", "(in tons): {}'.format(total_grains * 0.05 / 1000)) #%% # I don't understand question", "tons): {}'.format(total_grains * 0.05 / 1000)) #%% # I don't understand question c", "@author: <NAME> email: 
<EMAIL> GitHub: phuycke \"\"\" #%% total_grains = 0 multiplier =", "*= 2 print('Total amount of wheat: {}'.format(total_grains)) #%% print('Weight of wheat (in tons):", "print('Total amount of wheat: {}'.format(total_grains)) #%% print('Weight of wheat (in tons): {}'.format(total_grains *", "= 0 multiplier = 1 for i in range(1, 65): total_grains += multiplier", "phuycke \"\"\" #%% total_grains = 0 multiplier = 1 for i in range(1,", "#%% total_grains = 0 multiplier = 1 for i in range(1, 65): total_grains", "# -*- coding: utf-8 -*- \"\"\" @author: <NAME> email: <EMAIL> GitHub: phuycke \"\"\"", "-*- coding: utf-8 -*- \"\"\" @author: <NAME> email: <EMAIL> GitHub: phuycke \"\"\" #%%", "for i in range(1, 65): total_grains += multiplier multiplier *= 2 print('Total amount", "wheat: {}'.format(total_grains)) #%% print('Weight of wheat (in tons): {}'.format(total_grains * 0.05 / 1000))", "multiplier *= 2 print('Total amount of wheat: {}'.format(total_grains)) #%% print('Weight of wheat (in", "1 for i in range(1, 65): total_grains += multiplier multiplier *= 2 print('Total", "total_grains = 0 multiplier = 1 for i in range(1, 65): total_grains +=", "total_grains += multiplier multiplier *= 2 print('Total amount of wheat: {}'.format(total_grains)) #%% print('Weight", "of wheat: {}'.format(total_grains)) #%% print('Weight of wheat (in tons): {}'.format(total_grains * 0.05 /", "<NAME> email: <EMAIL> GitHub: phuycke \"\"\" #%% total_grains = 0 multiplier = 1", "-*- \"\"\" @author: <NAME> email: <EMAIL> GitHub: phuycke \"\"\" #%% total_grains = 0", "i in range(1, 65): total_grains += multiplier multiplier *= 2 print('Total amount of", "of wheat (in tons): {}'.format(total_grains * 0.05 / 1000)) #%% # I don't", "+= multiplier multiplier *= 2 print('Total amount of wheat: {}'.format(total_grains)) #%% print('Weight of", "multiplier = 1 for i in range(1, 65): total_grains += multiplier multiplier *=", "\"\"\" @author: <NAME> email: <EMAIL> GitHub: phuycke \"\"\" #%% total_grains = 0 
multiplier", "{}'.format(total_grains)) #%% print('Weight of wheat (in tons): {}'.format(total_grains * 0.05 / 1000)) #%%", "coding: utf-8 -*- \"\"\" @author: <NAME> email: <EMAIL> GitHub: phuycke \"\"\" #%% total_grains" ]
<filename>nova/objects/volume_usage.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from nova import db
from nova.objects import base
from nova.objects import fields


@base.NovaObjectRegistry.register
class VolumeUsage(base.NovaPersistentObject, base.NovaObject):
    # Version 1.0: Initial version
    VERSION = '1.0'

    fields = {
        'id': fields.IntegerField(read_only=True),
        'volume_id': fields.UUIDField(),
        'instance_uuid': fields.UUIDField(nullable=True),
        'project_id': fields.StringField(nullable=True),
        'user_id': fields.StringField(nullable=True),
        'availability_zone': fields.StringField(nullable=True),
        'tot_last_refreshed': fields.DateTimeField(nullable=True,
                                                   read_only=True),
        'tot_reads': fields.IntegerField(read_only=True),
        'tot_read_bytes': fields.IntegerField(read_only=True),
        'tot_writes': fields.IntegerField(read_only=True),
        'tot_write_bytes': fields.IntegerField(read_only=True),
        'curr_last_refreshed': fields.DateTimeField(nullable=True,
                                                    read_only=True),
        'curr_reads': fields.IntegerField(),
        'curr_read_bytes': fields.IntegerField(),
        'curr_writes': fields.IntegerField(),
        'curr_write_bytes': fields.IntegerField()
    }

    @staticmethod
    def _from_db_object(context, vol_usage, db_vol_usage):
        for field in vol_usage.fields:
            setattr(vol_usage, field, db_vol_usage[field])
        vol_usage._context = context
        vol_usage.obj_reset_changes()
        return vol_usage

    @base.remotable
    def save(self, update_totals=False):
        db_vol_usage = db.vol_usage_update(
            self._context, self.volume_id, self.curr_reads,
            self.curr_read_bytes, self.curr_writes, self.curr_write_bytes,
            self.instance_uuid, self.project_id, self.user_id,
            self.availability_zone, update_totals=update_totals)
        self._from_db_object(self._context, self, db_vol_usage)
name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_read_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'('", "op|':' newline|'\\n' indent|' ' name|'db_vol_usage' op|'=' name|'db' op|'.' name|'vol_usage_update' op|'(' nl|'\\n' name|'self' op|'.'", "string|\"'tot_last_refreshed'\" op|':' name|'fields' op|'.' name|'DateTimeField' op|'(' name|'nullable' op|'=' name|'True' op|',' nl|'\\n' DECL|variable|read_only name|'read_only'", "op|'.' name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_writes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')'", "name|'True' op|',' nl|'\\n' DECL|variable|read_only name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'curr_reads'\" op|':' name|'fields'", "op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_write_bytes'\" op|':'", "name|'vol_usage' op|'.' name|'fields' op|':' newline|'\\n' indent|' ' name|'setattr' op|'(' name|'vol_usage' op|',' name|'field' op|','", "name|'import' name|'fields' newline|'\\n' nl|'\\n' nl|'\\n' op|'@' name|'base' op|'.' name|'NovaObjectRegistry' op|'.' name|'register' newline|'\\n' DECL|class|VolumeUsage", "name|'True' op|')' op|',' nl|'\\n' string|\"'tot_reads'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True'", "name|'self' op|'.' name|'curr_read_bytes' op|',' name|'self' op|'.' name|'curr_writes' op|',' name|'self' op|'.' name|'curr_write_bytes' op|',' nl|'\\n'", "name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_write_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField'", "name|'NovaPersistentObject' op|',' name|'base' op|'.' name|'NovaObject' op|')' op|':' newline|'\\n' comment|'# Version 1.0: Initial version'", "op|',' nl|'\\n' string|\"'user_id'\" op|':' name|'fields' op|'.' 
name|'StringField' op|'(' name|'nullable' op|'=' name|'True' op|')' op|','", "name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_writes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only'", "op|'.' name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_read_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')'", "name|'nova' op|'.' name|'objects' name|'import' name|'fields' newline|'\\n' nl|'\\n' nl|'\\n' op|'@' name|'base' op|'.' name|'NovaObjectRegistry' op|'.'", "nl|'\\n' name|'self' op|'.' name|'_context' op|',' name|'self' op|'.' name|'volume_id' op|',' name|'self' op|'.' name|'curr_reads' op|','", "specific language governing permissions and limitations' nl|'\\n' comment|'# under the License.' nl|'\\n' nl|'\\n'", "not use this file except in compliance with the License. You may obtain'", "an \"AS IS\" BASIS, WITHOUT' nl|'\\n' comment|'# WARRANTIES OR CONDITIONS OF ANY KIND,", "name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_read_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField'", "use this file except in compliance with the License. You may obtain' nl|'\\n'", "BASIS, WITHOUT' nl|'\\n' comment|'# WARRANTIES OR CONDITIONS OF ANY KIND, either express or", "nl|'\\n' string|\"'curr_write_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' nl|'\\n' op|'}' newline|'\\n' nl|'\\n' op|'@'", "required by applicable law or agreed to in writing, software' nl|'\\n' comment|'# distributed", "name|'register' newline|'\\n' DECL|class|VolumeUsage name|'class' name|'VolumeUsage' op|'(' name|'base' op|'.' name|'NovaPersistentObject' op|',' name|'base' op|'.' name|'NovaObject'", "permissions and limitations' nl|'\\n' comment|'# under the License.' nl|'\\n' nl|'\\n' name|'from' name|'nova' name|'import'", "op|'.' name|'objects' name|'import' name|'fields' newline|'\\n' nl|'\\n' nl|'\\n' op|'@' name|'base' op|'.' 
name|'NovaObjectRegistry' op|'.' name|'register'", "the' nl|'\\n' comment|'# License for the specific language governing permissions and limitations' nl|'\\n'", "name|'self' op|'.' name|'availability_zone' op|',' name|'update_totals' op|'=' name|'update_totals' op|')' newline|'\\n' name|'self' op|'.' name|'_from_db_object' op|'('", "op|'@' name|'base' op|'.' name|'NovaObjectRegistry' op|'.' name|'register' newline|'\\n' DECL|class|VolumeUsage name|'class' name|'VolumeUsage' op|'(' name|'base' op|'.'", "op|'.' name|'NovaObjectRegistry' op|'.' name|'register' newline|'\\n' DECL|class|VolumeUsage name|'class' name|'VolumeUsage' op|'(' name|'base' op|'.' name|'NovaPersistentObject' op|','", "newline|'\\n' name|'self' op|'.' name|'_from_db_object' op|'(' name|'self' op|'.' name|'_context' op|',' name|'self' op|',' name|'db_vol_usage' op|')'", "op|',' name|'base' op|'.' name|'NovaObject' op|')' op|':' newline|'\\n' comment|'# Version 1.0: Initial version' nl|'\\n'", "op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'curr_reads'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' op|','", "comment|'# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the'", "writing, software' nl|'\\n' comment|'# distributed under the License is distributed on an \"AS", "You may obtain' nl|'\\n' comment|'# a copy of the License at' nl|'\\n' comment|'#'", "distributed on an \"AS IS\" BASIS, WITHOUT' nl|'\\n' comment|'# WARRANTIES OR CONDITIONS OF", "op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_writes'\" op|':'", "dedent|'' name|'vol_usage' op|'.' name|'_context' op|'=' name|'context' newline|'\\n' name|'vol_usage' op|'.' name|'obj_reset_changes' op|'(' op|')' newline|'\\n'", "op|')' op|',' nl|'\\n' string|\"'tot_reads'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')'", "this file except in compliance with the License. 
You may obtain' nl|'\\n' comment|'#", "op|'.' name|'curr_reads' op|',' nl|'\\n' name|'self' op|'.' name|'curr_read_bytes' op|',' name|'self' op|'.' name|'curr_writes' op|',' name|'self'", "name|'VERSION' op|'=' string|\"'1.0'\" newline|'\\n' nl|'\\n' DECL|variable|fields name|'fields' op|'=' op|'{' nl|'\\n' string|\"'id'\" op|':' name|'fields'", "DECL|member|_from_db_object name|'def' name|'_from_db_object' op|'(' name|'context' op|',' name|'vol_usage' op|',' name|'db_vol_usage' op|')' op|':' newline|'\\n' indent|'", "name|'return' name|'vol_usage' newline|'\\n' nl|'\\n' dedent|'' op|'@' name|'base' op|'.' name|'remotable' newline|'\\n' DECL|member|save name|'def' name|'save'", "string|\"'tot_read_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_writes'\"", "indent|' ' name|'for' name|'field' name|'in' name|'vol_usage' op|'.' name|'fields' op|':' newline|'\\n' indent|' ' name|'setattr'", "newline|'\\n' nl|'\\n' dedent|'' op|'@' name|'base' op|'.' name|'remotable' newline|'\\n' DECL|member|save name|'def' name|'save' op|'(' name|'self'", "op|':' newline|'\\n' indent|' ' name|'for' name|'field' name|'in' name|'vol_usage' op|'.' name|'fields' op|':' newline|'\\n' indent|'", "name|'update_totals' op|'=' name|'False' op|')' op|':' newline|'\\n' indent|' ' name|'db_vol_usage' op|'=' name|'db' op|'.' name|'vol_usage_update'", "governing permissions and limitations' nl|'\\n' comment|'# under the License.' nl|'\\n' nl|'\\n' name|'from' name|'nova'", "op|'.' name|'curr_read_bytes' op|',' name|'self' op|'.' name|'curr_writes' op|',' name|'self' op|'.' name|'curr_write_bytes' op|',' nl|'\\n' name|'self'", "nl|'\\n' comment|'# a copy of the License at' nl|'\\n' comment|'#' nl|'\\n' comment|'# http://www.apache.org/licenses/LICENSE-2.0'", "name|'self' op|'.' name|'_from_db_object' op|'(' name|'self' op|'.' 
name|'_context' op|',' name|'self' op|',' name|'db_vol_usage' op|')' newline|'\\n'", "string|\"'curr_reads'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_read_bytes'\" op|':' name|'fields' op|'.'", "op|',' name|'self' op|'.' name|'user_id' op|',' nl|'\\n' name|'self' op|'.' name|'availability_zone' op|',' name|'update_totals' op|'=' name|'update_totals'", "newline|'\\n' DECL|member|_from_db_object name|'def' name|'_from_db_object' op|'(' name|'context' op|',' name|'vol_usage' op|',' name|'db_vol_usage' op|')' op|':' newline|'\\n'", "op|'(' name|'self' op|',' name|'update_totals' op|'=' name|'False' op|')' op|':' newline|'\\n' indent|' ' name|'db_vol_usage' op|'='", "name|'fields' newline|'\\n' nl|'\\n' nl|'\\n' op|'@' name|'base' op|'.' name|'NovaObjectRegistry' op|'.' name|'register' newline|'\\n' DECL|class|VolumeUsage name|'class'", "op|':' name|'fields' op|'.' name|'UUIDField' op|'(' name|'nullable' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'project_id'\" op|':'", "name|'field' op|',' name|'db_vol_usage' op|'[' name|'field' op|']' op|')' newline|'\\n' dedent|'' name|'vol_usage' op|'.' name|'_context' op|'='", "name|'db' newline|'\\n' name|'from' name|'nova' op|'.' name|'objects' name|'import' name|'base' newline|'\\n' name|'from' name|'nova' op|'.' name|'objects'", "distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT' nl|'\\n'", "op|')' op|',' nl|'\\n' string|\"'tot_read_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')'", "newline|'\\n' name|'return' name|'vol_usage' newline|'\\n' nl|'\\n' dedent|'' op|'@' name|'base' op|'.' 
name|'remotable' newline|'\\n' DECL|member|save name|'def'", "name|'def' name|'save' op|'(' name|'self' op|',' name|'update_totals' op|'=' name|'False' op|')' op|':' newline|'\\n' indent|' '", "nl|'\\n' comment|'# License for the specific language governing permissions and limitations' nl|'\\n' comment|'#", "name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'volume_id'\" op|':' name|'fields' op|'.' name|'UUIDField'", "op|'=' name|'context' newline|'\\n' name|'vol_usage' op|'.' name|'obj_reset_changes' op|'(' op|')' newline|'\\n' name|'return' name|'vol_usage' newline|'\\n' nl|'\\n'", "nl|'\\n' DECL|variable|read_only name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'curr_reads'\" op|':' name|'fields' op|'.' name|'IntegerField'", "op|',' name|'db_vol_usage' op|')' op|':' newline|'\\n' indent|' ' name|'for' name|'field' name|'in' name|'vol_usage' op|'.' name|'fields'", "name|'fields' op|'.' name|'IntegerField' op|'(' op|')' nl|'\\n' op|'}' newline|'\\n' nl|'\\n' op|'@' name|'staticmethod' newline|'\\n' DECL|member|_from_db_object", "op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'volume_id'\" op|':' name|'fields' op|'.'", "nl|'\\n' name|'self' op|'.' name|'instance_uuid' op|',' name|'self' op|'.' name|'project_id' op|',' name|'self' op|'.' name|'user_id' op|','", "op|'.' name|'user_id' op|',' nl|'\\n' name|'self' op|'.' name|'availability_zone' op|',' name|'update_totals' op|'=' name|'update_totals' op|')' newline|'\\n'", "or implied. 
See the' nl|'\\n' comment|'# License for the specific language governing permissions", "op|'=' name|'True' op|',' nl|'\\n' DECL|variable|read_only name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'curr_reads'\" op|':'", "name|'fields' op|':' newline|'\\n' indent|' ' name|'setattr' op|'(' name|'vol_usage' op|',' name|'field' op|',' name|'db_vol_usage' op|'['", "op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'volume_id'\" op|':' name|'fields' op|'.' name|'UUIDField' op|'(' op|')' op|','", "name|'True' op|',' nl|'\\n' DECL|variable|read_only name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_reads'\" op|':' name|'fields'", "name|'nullable' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'project_id'\" op|':' name|'fields' op|'.' name|'StringField' op|'(' name|'nullable'", "See the' nl|'\\n' comment|'# License for the specific language governing permissions and limitations'", "name|'fields' op|'.' name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_writes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'('", "op|'.' name|'_from_db_object' op|'(' name|'self' op|'.' name|'_context' op|',' name|'self' op|',' name|'db_vol_usage' op|')' newline|'\\n' dedent|''", "nl|'\\n' comment|'# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See", "op|')' nl|'\\n' op|'}' newline|'\\n' nl|'\\n' op|'@' name|'staticmethod' newline|'\\n' DECL|member|_from_db_object name|'def' name|'_from_db_object' op|'(' name|'context'", "and limitations' nl|'\\n' comment|'# under the License.' nl|'\\n' nl|'\\n' name|'from' name|'nova' name|'import' name|'db'", "comment|'# Unless required by applicable law or agreed to in writing, software' nl|'\\n'", "op|':' name|'fields' op|'.' 
name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'volume_id'\" op|':'", "comment|'#' nl|'\\n' comment|'# Unless required by applicable law or agreed to in writing,", "name|'user_id' op|',' nl|'\\n' name|'self' op|'.' name|'availability_zone' op|',' name|'update_totals' op|'=' name|'update_totals' op|')' newline|'\\n' name|'self'", "op|',' name|'self' op|'.' name|'curr_write_bytes' op|',' nl|'\\n' name|'self' op|'.' name|'instance_uuid' op|',' name|'self' op|'.' name|'project_id'", "name|'update_totals' op|'=' name|'update_totals' op|')' newline|'\\n' name|'self' op|'.' name|'_from_db_object' op|'(' name|'self' op|'.' name|'_context' op|','", "name|'instance_uuid' op|',' name|'self' op|'.' name|'project_id' op|',' name|'self' op|'.' name|'user_id' op|',' nl|'\\n' name|'self' op|'.'", "op|')' op|',' nl|'\\n' string|\"'curr_reads'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_read_bytes'\"", "op|',' name|'self' op|'.' name|'volume_id' op|',' name|'self' op|'.' name|'curr_reads' op|',' nl|'\\n' name|'self' op|'.' name|'curr_read_bytes'", "op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'curr_last_refreshed'\" op|':' name|'fields' op|'.' name|'DateTimeField' op|'('", "name|'class' name|'VolumeUsage' op|'(' name|'base' op|'.' name|'NovaPersistentObject' op|',' name|'base' op|'.' name|'NovaObject' op|')' op|':' newline|'\\n'", "name|'self' op|'.' name|'instance_uuid' op|',' name|'self' op|'.' name|'project_id' op|',' name|'self' op|'.' name|'user_id' op|',' nl|'\\n'", "op|'.' name|'project_id' op|',' name|'self' op|'.' name|'user_id' op|',' nl|'\\n' name|'self' op|'.' name|'availability_zone' op|',' name|'update_totals'", "name|'import' name|'base' newline|'\\n' name|'from' name|'nova' op|'.' name|'objects' name|'import' name|'fields' newline|'\\n' nl|'\\n' nl|'\\n' op|'@'", "name|'self' op|'.' name|'volume_id' op|',' name|'self' op|'.' 
name|'curr_reads' op|',' nl|'\\n' name|'self' op|'.' name|'curr_read_bytes' op|','", "op|',' nl|'\\n' string|\"'tot_writes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|','", "op|',' nl|'\\n' string|\"'curr_write_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' nl|'\\n' op|'}' newline|'\\n' nl|'\\n'", "WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the' nl|'\\n'", "op|':' newline|'\\n' comment|'# Version 1.0: Initial version' nl|'\\n' DECL|variable|VERSION indent|' ' name|'VERSION' op|'='", "op|'(' op|')' newline|'\\n' name|'return' name|'vol_usage' newline|'\\n' nl|'\\n' dedent|'' op|'@' name|'base' op|'.' name|'remotable' newline|'\\n'", "op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_read_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField'", "op|'=' string|\"'1.0'\" newline|'\\n' nl|'\\n' DECL|variable|fields name|'fields' op|'=' op|'{' nl|'\\n' string|\"'id'\" op|':' name|'fields' op|'.'", "by applicable law or agreed to in writing, software' nl|'\\n' comment|'# distributed under", "op|'(' name|'nullable' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'availability_zone'\" op|':' name|'fields' op|'.' name|'StringField' op|'('", "op|'=' name|'True' op|',' nl|'\\n' DECL|variable|read_only name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_reads'\" op|':'", "newline|'\\n' DECL|member|save name|'def' name|'save' op|'(' name|'self' op|',' name|'update_totals' op|'=' name|'False' op|')' op|':' newline|'\\n'", "op|'=' name|'False' op|')' op|':' newline|'\\n' indent|' ' name|'db_vol_usage' op|'=' name|'db' op|'.' name|'vol_usage_update' op|'('", "name|'base' op|'.' name|'remotable' newline|'\\n' DECL|member|save name|'def' name|'save' op|'(' name|'self' op|',' name|'update_totals' op|'=' name|'False'", "newline|'\\n' name|'from' name|'nova' op|'.' 
name|'objects' name|'import' name|'base' newline|'\\n' name|'from' name|'nova' op|'.' name|'objects' name|'import'", "string|\"'curr_last_refreshed'\" op|':' name|'fields' op|'.' name|'DateTimeField' op|'(' name|'nullable' op|'=' name|'True' op|',' nl|'\\n' DECL|variable|read_only name|'read_only'", "string|\"'curr_writes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_write_bytes'\" op|':' name|'fields' op|'.'", "name|'nullable' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_last_refreshed'\" op|':' name|'fields' op|'.' name|'DateTimeField' op|'(' name|'nullable'", "name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_read_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' op|','", "op|')' op|',' nl|'\\n' string|\"'availability_zone'\" op|':' name|'fields' op|'.' name|'StringField' op|'(' name|'nullable' op|'=' name|'True' op|')'", "name|'self' op|'.' name|'user_id' op|',' nl|'\\n' name|'self' op|'.' name|'availability_zone' op|',' name|'update_totals' op|'=' name|'update_totals' op|')'", "op|')' op|',' nl|'\\n' string|\"'tot_write_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')'", "op|'.' name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_write_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')'", "name|'curr_write_bytes' op|',' nl|'\\n' name|'self' op|'.' name|'instance_uuid' op|',' name|'self' op|'.' name|'project_id' op|',' name|'self' op|'.'", "newline|'\\n' indent|' ' name|'db_vol_usage' op|'=' name|'db' op|'.' name|'vol_usage_update' op|'(' nl|'\\n' name|'self' op|'.' name|'_context'", "comment|'# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT'", "nl|'\\n' string|\"'curr_read_bytes'\" op|':' name|'fields' op|'.' 
name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_writes'\" op|':' name|'fields'", "or agreed to in writing, software' nl|'\\n' comment|'# distributed under the License is", "name|'True' op|')' op|',' nl|'\\n' string|\"'volume_id'\" op|':' name|'fields' op|'.' name|'UUIDField' op|'(' op|')' op|',' nl|'\\n'", "name|'base' op|'.' name|'NovaObjectRegistry' op|'.' name|'register' newline|'\\n' DECL|class|VolumeUsage name|'class' name|'VolumeUsage' op|'(' name|'base' op|'.' name|'NovaPersistentObject'", "1.0: Initial version' nl|'\\n' DECL|variable|VERSION indent|' ' name|'VERSION' op|'=' string|\"'1.0'\" newline|'\\n' nl|'\\n' DECL|variable|fields", "op|',' nl|'\\n' string|\"'volume_id'\" op|':' name|'fields' op|'.' name|'UUIDField' op|'(' op|')' op|',' nl|'\\n' string|\"'instance_uuid'\" op|':'", "op|'(' nl|'\\n' name|'self' op|'.' name|'_context' op|',' name|'self' op|'.' name|'volume_id' op|',' name|'self' op|'.' name|'curr_reads'", "name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_writes'\" op|':' name|'fields'", "name|'field' name|'in' name|'vol_usage' op|'.' name|'fields' op|':' newline|'\\n' indent|' ' name|'setattr' op|'(' name|'vol_usage' op|','", "the Apache License, Version 2.0 (the \"License\"); you may' nl|'\\n' comment|'# not use", "op|'.' name|'availability_zone' op|',' name|'update_totals' op|'=' name|'update_totals' op|')' newline|'\\n' name|'self' op|'.' name|'_from_db_object' op|'(' name|'self'", "with the License. You may obtain' nl|'\\n' comment|'# a copy of the License", "name|'from' name|'nova' op|'.' name|'objects' name|'import' name|'fields' newline|'\\n' nl|'\\n' nl|'\\n' op|'@' name|'base' op|'.' name|'NovaObjectRegistry'", "name|'self' op|'.' name|'curr_write_bytes' op|',' nl|'\\n' name|'self' op|'.' name|'instance_uuid' op|',' name|'self' op|'.' name|'project_id' op|','", "newline|'\\n' name|'vol_usage' op|'.' 
name|'obj_reset_changes' op|'(' op|')' newline|'\\n' name|'return' name|'vol_usage' newline|'\\n' nl|'\\n' dedent|'' op|'@'", "name|'_context' op|'=' name|'context' newline|'\\n' name|'vol_usage' op|'.' name|'obj_reset_changes' op|'(' op|')' newline|'\\n' name|'return' name|'vol_usage' newline|'\\n'", "nl|'\\n' comment|'# under the License.' nl|'\\n' nl|'\\n' name|'from' name|'nova' name|'import' name|'db' newline|'\\n' name|'from'", "op|',' nl|'\\n' name|'self' op|'.' name|'availability_zone' op|',' name|'update_totals' op|'=' name|'update_totals' op|')' newline|'\\n' name|'self' op|'.'", "nl|'\\n' string|\"'availability_zone'\" op|':' name|'fields' op|'.' name|'StringField' op|'(' name|'nullable' op|'=' name|'True' op|')' op|',' nl|'\\n'", "name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_write_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' nl|'\\n'", "op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'project_id'\" op|':' name|'fields' op|'.' name|'StringField' op|'(' name|'nullable' op|'='", "op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_writes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'('", "newline|'\\n' nl|'\\n' nl|'\\n' op|'@' name|'base' op|'.' name|'NovaObjectRegistry' op|'.' name|'register' newline|'\\n' DECL|class|VolumeUsage name|'class' name|'VolumeUsage'", "' name|'for' name|'field' name|'in' name|'vol_usage' op|'.' name|'fields' op|':' newline|'\\n' indent|' ' name|'setattr' op|'('", "name|'availability_zone' op|',' name|'update_totals' op|'=' name|'update_totals' op|')' newline|'\\n' name|'self' op|'.' name|'_from_db_object' op|'(' name|'self' op|'.'", "op|')' op|',' nl|'\\n' string|\"'curr_last_refreshed'\" op|':' name|'fields' op|'.' 
name|'DateTimeField' op|'(' name|'nullable' op|'=' name|'True' op|','", "op|',' nl|'\\n' DECL|variable|read_only name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_reads'\" op|':' name|'fields' op|'.'", "nl|'\\n' string|\"'curr_reads'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_read_bytes'\" op|':' name|'fields'", "http://www.apache.org/licenses/LICENSE-2.0' nl|'\\n' comment|'#' nl|'\\n' comment|'# Unless required by applicable law or agreed to", "op|':' name|'fields' op|'.' name|'StringField' op|'(' name|'nullable' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'availability_zone'\" op|':'", "file except in compliance with the License. You may obtain' nl|'\\n' comment|'# a", "op|',' nl|'\\n' name|'self' op|'.' name|'curr_read_bytes' op|',' name|'self' op|'.' name|'curr_writes' op|',' name|'self' op|'.' name|'curr_write_bytes'", "op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' nl|'\\n' op|'}' newline|'\\n' nl|'\\n' op|'@' name|'staticmethod' newline|'\\n'", "for the specific language governing permissions and limitations' nl|'\\n' comment|'# under the License.'", "op|'{' nl|'\\n' string|\"'id'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|','", "newline|'\\n' nl|'\\n' DECL|variable|fields name|'fields' op|'=' op|'{' nl|'\\n' string|\"'id'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'('", "op|',' nl|'\\n' string|\"'tot_read_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|','", "op|',' name|'update_totals' op|'=' name|'False' op|')' op|':' newline|'\\n' indent|' ' name|'db_vol_usage' op|'=' name|'db' op|'.'", "2.0 (the \"License\"); you may' nl|'\\n' comment|'# not use this file except in", "op|')' op|',' nl|'\\n' string|\"'tot_last_refreshed'\" op|':' name|'fields' op|'.' 
name|'DateTimeField' op|'(' name|'nullable' op|'=' name|'True' op|','", "name|'DateTimeField' op|'(' name|'nullable' op|'=' name|'True' op|',' nl|'\\n' DECL|variable|read_only name|'read_only' op|'=' name|'True' op|')' op|','", "nl|'\\n' string|\"'tot_write_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n'", "op|',' name|'self' op|'.' name|'curr_reads' op|',' nl|'\\n' name|'self' op|'.' name|'curr_read_bytes' op|',' name|'self' op|'.' name|'curr_writes'", "comment|'# License for the specific language governing permissions and limitations' nl|'\\n' comment|'# under", "name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'curr_reads'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')'", "Licensed under the Apache License, Version 2.0 (the \"License\"); you may' nl|'\\n' comment|'#", "\"License\"); you may' nl|'\\n' comment|'# not use this file except in compliance with", "nl|'\\n' op|'}' newline|'\\n' nl|'\\n' op|'@' name|'staticmethod' newline|'\\n' DECL|member|_from_db_object name|'def' name|'_from_db_object' op|'(' name|'context' op|','", "string|\"'curr_write_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' nl|'\\n' op|'}' newline|'\\n' nl|'\\n' op|'@' name|'staticmethod'", "name|'True' op|')' op|',' nl|'\\n' string|\"'curr_reads'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' op|',' nl|'\\n'", "op|')' op|':' newline|'\\n' indent|' ' name|'for' name|'field' name|'in' name|'vol_usage' op|'.' name|'fields' op|':' newline|'\\n'", "name|'vol_usage' op|'.' 
name|'obj_reset_changes' op|'(' op|')' newline|'\\n' name|'return' name|'vol_usage' newline|'\\n' nl|'\\n' dedent|'' op|'@' name|'base'", "applicable law or agreed to in writing, software' nl|'\\n' comment|'# distributed under the", "License is distributed on an \"AS IS\" BASIS, WITHOUT' nl|'\\n' comment|'# WARRANTIES OR", "License for the specific language governing permissions and limitations' nl|'\\n' comment|'# under the", "DECL|variable|fields name|'fields' op|'=' op|'{' nl|'\\n' string|\"'id'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'='", "either express or implied. See the' nl|'\\n' comment|'# License for the specific language", "comment|'#' nl|'\\n' comment|'# http://www.apache.org/licenses/LICENSE-2.0' nl|'\\n' comment|'#' nl|'\\n' comment|'# Unless required by applicable law", "name|'save' op|'(' name|'self' op|',' name|'update_totals' op|'=' name|'False' op|')' op|':' newline|'\\n' indent|' ' name|'db_vol_usage'", "op|',' nl|'\\n' string|\"'curr_read_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' op|')' op|',' nl|'\\n' string|\"'curr_writes'\" op|':'", "name|'self' op|'.' name|'_context' op|',' name|'self' op|',' name|'db_vol_usage' op|')' newline|'\\n' dedent|'' dedent|'' endmarker|'' end_unit", "op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'volume_id'\" op|':' name|'fields' op|'.' name|'UUIDField' op|'('", "name|'db_vol_usage' op|'[' name|'field' op|']' op|')' newline|'\\n' dedent|'' name|'vol_usage' op|'.' name|'_context' op|'=' name|'context' newline|'\\n'", "string|\"'user_id'\" op|':' name|'fields' op|'.' name|'StringField' op|'(' name|'nullable' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'availability_zone'\"", "version' nl|'\\n' DECL|variable|VERSION indent|' ' name|'VERSION' op|'=' string|\"'1.0'\" newline|'\\n' nl|'\\n' DECL|variable|fields name|'fields' op|'='", "KIND, either express or implied. 
See the' nl|'\\n' comment|'# License for the specific", "string|\"'tot_reads'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_read_bytes'\"", "name|'True' op|')' op|',' nl|'\\n' string|\"'tot_writes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True'", "nl|'\\n' comment|'# not use this file except in compliance with the License. You", "IS\" BASIS, WITHOUT' nl|'\\n' comment|'# WARRANTIES OR CONDITIONS OF ANY KIND, either express", "op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_read_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'('", "name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'curr_last_refreshed'\" op|':' name|'fields' op|'.' name|'DateTimeField'", "name|'True' op|')' op|',' nl|'\\n' string|\"'project_id'\" op|':' name|'fields' op|'.' name|'StringField' op|'(' name|'nullable' op|'=' name|'True'", "op|'.' name|'obj_reset_changes' op|'(' op|')' newline|'\\n' name|'return' name|'vol_usage' newline|'\\n' nl|'\\n' dedent|'' op|'@' name|'base' op|'.'", "nl|'\\n' string|\"'tot_read_bytes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n'", "op|',' nl|'\\n' name|'self' op|'.' name|'instance_uuid' op|',' name|'self' op|'.' name|'project_id' op|',' name|'self' op|'.' name|'user_id'", "op|')' newline|'\\n' dedent|'' name|'vol_usage' op|'.' name|'_context' op|'=' name|'context' newline|'\\n' name|'vol_usage' op|'.' name|'obj_reset_changes' op|'('", "newline|'\\n' indent|' ' name|'setattr' op|'(' name|'vol_usage' op|',' name|'field' op|',' name|'db_vol_usage' op|'[' name|'field' op|']'", "License at' nl|'\\n' comment|'#' nl|'\\n' comment|'# http://www.apache.org/licenses/LICENSE-2.0' nl|'\\n' comment|'#' nl|'\\n' comment|'# Unless required", "op|'@' name|'base' op|'.' 
name|'remotable' newline|'\\n' DECL|member|save name|'def' name|'save' op|'(' name|'self' op|',' name|'update_totals' op|'='", "op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_read_bytes'\" op|':' name|'fields' op|'.'", "op|'.' name|'_context' op|',' name|'self' op|'.' name|'volume_id' op|',' name|'self' op|'.' name|'curr_reads' op|',' nl|'\\n' name|'self'", "op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n' string|\"'tot_read_bytes'\" op|':'", "op|',' name|'self' op|'.' name|'project_id' op|',' name|'self' op|'.' name|'user_id' op|',' nl|'\\n' name|'self' op|'.' name|'availability_zone'", "nl|'\\n' string|\"'tot_writes'\" op|':' name|'fields' op|'.' name|'IntegerField' op|'(' name|'read_only' op|'=' name|'True' op|')' op|',' nl|'\\n'", "DECL|class|VolumeUsage name|'class' name|'VolumeUsage' op|'(' name|'base' op|'.' name|'NovaPersistentObject' op|',' name|'base' op|'.' name|'NovaObject' op|')' op|':'" ]
[ "# the Business Source License, use of this software will be governed #", "Optional[TopicExists] = None) -> None: self.name = name self.topic = topic def get_watermarks(self)", "contributors. All rights reserved. # # Use of this software is governed by", "the Business Source License, use of this software will be governed # by", "software will be governed # by the Apache License, Version 2.0. from typing", "None) -> None: self.name = name self.topic = topic def get_watermarks(self) -> Watermarks:", "import Capability from materialize.zippy.kafka_capabilities import TopicExists from materialize.zippy.watermarks import Watermarks class SourceExists(Capability): def", "License, use of this software will be governed # by the Apache License,", "of this software will be governed # by the Apache License, Version 2.0.", "License # included in the LICENSE file at the root of this repository.", "at the root of this repository. # # As of the Change Date", "from materialize.zippy.framework import Capability from materialize.zippy.kafka_capabilities import TopicExists from materialize.zippy.watermarks import Watermarks class", "file at the root of this repository. # # As of the Change", "governed by the Business Source License # included in the LICENSE file at", "will be governed # by the Apache License, Version 2.0. from typing import", "# Use of this software is governed by the Business Source License #", "Source License, use of this software will be governed # by the Apache", "be governed # by the Apache License, Version 2.0. from typing import Optional", "reserved. # # Use of this software is governed by the Business Source", "included in the LICENSE file at the root of this repository. 
# #", "name self.topic = topic def get_watermarks(self) -> Watermarks: assert self.topic is not None", "# # Use of this software is governed by the Business Source License", "the Business Source License # included in the LICENSE file at the root", "def __init__(self, name: str, topic: Optional[TopicExists] = None) -> None: self.name = name", "License, Version 2.0. from typing import Optional from materialize.zippy.framework import Capability from materialize.zippy.kafka_capabilities", "Inc. and contributors. All rights reserved. # # Use of this software is", "and contributors. All rights reserved. # # Use of this software is governed", "repository. # # As of the Change Date specified in that file, in", "SourceExists(Capability): def __init__(self, name: str, topic: Optional[TopicExists] = None) -> None: self.name =", "Version 2.0. from typing import Optional from materialize.zippy.framework import Capability from materialize.zippy.kafka_capabilities import", "specified in that file, in accordance with # the Business Source License, use", "__init__(self, name: str, topic: Optional[TopicExists] = None) -> None: self.name = name self.topic", "this repository. # # As of the Change Date specified in that file,", "in the LICENSE file at the root of this repository. # # As", "-> None: self.name = name self.topic = topic def get_watermarks(self) -> Watermarks: assert", "str, topic: Optional[TopicExists] = None) -> None: self.name = name self.topic = topic", "file, in accordance with # the Business Source License, use of this software", "Capability from materialize.zippy.kafka_capabilities import TopicExists from materialize.zippy.watermarks import Watermarks class SourceExists(Capability): def __init__(self,", "the LICENSE file at the root of this repository. 
# # As of", "topic: Optional[TopicExists] = None) -> None: self.name = name self.topic = topic def", "that file, in accordance with # the Business Source License, use of this", "name: str, topic: Optional[TopicExists] = None) -> None: self.name = name self.topic =", "All rights reserved. # # Use of this software is governed by the", "accordance with # the Business Source License, use of this software will be", "the root of this repository. # # As of the Change Date specified", "in that file, in accordance with # the Business Source License, use of", "from materialize.zippy.kafka_capabilities import TopicExists from materialize.zippy.watermarks import Watermarks class SourceExists(Capability): def __init__(self, name:", "import TopicExists from materialize.zippy.watermarks import Watermarks class SourceExists(Capability): def __init__(self, name: str, topic:", "LICENSE file at the root of this repository. # # As of the", "materialize.zippy.framework import Capability from materialize.zippy.kafka_capabilities import TopicExists from materialize.zippy.watermarks import Watermarks class SourceExists(Capability):", "2.0. from typing import Optional from materialize.zippy.framework import Capability from materialize.zippy.kafka_capabilities import TopicExists", "from typing import Optional from materialize.zippy.framework import Capability from materialize.zippy.kafka_capabilities import TopicExists from", "the Apache License, Version 2.0. 
from typing import Optional from materialize.zippy.framework import Capability", "Optional from materialize.zippy.framework import Capability from materialize.zippy.kafka_capabilities import TopicExists from materialize.zippy.watermarks import Watermarks", "import Optional from materialize.zippy.framework import Capability from materialize.zippy.kafka_capabilities import TopicExists from materialize.zippy.watermarks import", "materialize.zippy.watermarks import Watermarks class SourceExists(Capability): def __init__(self, name: str, topic: Optional[TopicExists] = None)", "Use of this software is governed by the Business Source License # included", "use of this software will be governed # by the Apache License, Version", "Source License # included in the LICENSE file at the root of this", "Business Source License, use of this software will be governed # by the", "is governed by the Business Source License # included in the LICENSE file", "# As of the Change Date specified in that file, in accordance with", "with # the Business Source License, use of this software will be governed", "# Copyright Materialize, Inc. and contributors. All rights reserved. # # Use of", "= topic def get_watermarks(self) -> Watermarks: assert self.topic is not None return self.topic.watermarks", "root of this repository. # # As of the Change Date specified in", "by the Apache License, Version 2.0. from typing import Optional from materialize.zippy.framework import", "Change Date specified in that file, in accordance with # the Business Source", "from materialize.zippy.watermarks import Watermarks class SourceExists(Capability): def __init__(self, name: str, topic: Optional[TopicExists] =", "this software will be governed # by the Apache License, Version 2.0. from", "governed # by the Apache License, Version 2.0. from typing import Optional from", "rights reserved. # # Use of this software is governed by the Business", "Materialize, Inc. and contributors. All rights reserved. 
# # Use of this software", "of this software is governed by the Business Source License # included in", "= None) -> None: self.name = name self.topic = topic def get_watermarks(self) ->", "# # As of the Change Date specified in that file, in accordance", "import Watermarks class SourceExists(Capability): def __init__(self, name: str, topic: Optional[TopicExists] = None) ->", "# by the Apache License, Version 2.0. from typing import Optional from materialize.zippy.framework", "= name self.topic = topic def get_watermarks(self) -> Watermarks: assert self.topic is not", "self.topic = topic def get_watermarks(self) -> Watermarks: assert self.topic is not None return", "Date specified in that file, in accordance with # the Business Source License,", "by the Business Source License # included in the LICENSE file at the", "Copyright Materialize, Inc. and contributors. All rights reserved. # # Use of this", "Watermarks class SourceExists(Capability): def __init__(self, name: str, topic: Optional[TopicExists] = None) -> None:", "this software is governed by the Business Source License # included in the", "TopicExists from materialize.zippy.watermarks import Watermarks class SourceExists(Capability): def __init__(self, name: str, topic: Optional[TopicExists]", "Apache License, Version 2.0. from typing import Optional from materialize.zippy.framework import Capability from", "typing import Optional from materialize.zippy.framework import Capability from materialize.zippy.kafka_capabilities import TopicExists from materialize.zippy.watermarks", "the Change Date specified in that file, in accordance with # the Business", "of the Change Date specified in that file, in accordance with # the", "Business Source License # included in the LICENSE file at the root of", "As of the Change Date specified in that file, in accordance with #", "in accordance with # the Business Source License, use of this software will", "of this repository. 
# # As of the Change Date specified in that", "class SourceExists(Capability): def __init__(self, name: str, topic: Optional[TopicExists] = None) -> None: self.name", "# included in the LICENSE file at the root of this repository. #", "software is governed by the Business Source License # included in the LICENSE", "None: self.name = name self.topic = topic def get_watermarks(self) -> Watermarks: assert self.topic", "materialize.zippy.kafka_capabilities import TopicExists from materialize.zippy.watermarks import Watermarks class SourceExists(Capability): def __init__(self, name: str,", "self.name = name self.topic = topic def get_watermarks(self) -> Watermarks: assert self.topic is" ]
[ "c.name = v.pop() if len(c.name) == 1: c.name += ' ' + v.pop()", "= f.readlines() content = [line.strip() for line in content] ans = [] for", "!= 0: return parseRevClass(v) else: return [] for i in range(2): v.pop() c.sem", "+= c.credits cr = num/den/10 return cr def getSemCR(sem): num = 0.0 den", "else: return [] for i in range(2): v.pop() c.sem = v.pop() if len(v)", "= v.pop() if len(c.type) != 1: #it doesn't count for i in range(3):", "= {'2S2015', '1S2016', '2S2018'} while year <= 2018: sem = str(pr) + 'S'", "+= 1 if pr == 3: pr = 1 year += 1 print(acc)", "%.4f' % (sem, cacc)) labels.append(sem) acc.append(cacc) partial.append(cpartial) pr += 1 if pr ==", "def parseFromFile(fname): def parseRevClass(v): c = Course() c.name = v.pop() if len(c.name) ==", "+ str(year) if sem not in ignored: cacc = getCRUntil(sem) cpartial = getSemCR(sem)", "return self.name + ' ' + self.type + ' ' + str(self.credits) +", "== year and cpr <= pr): num += c.grade*c.credits den += c.credits cr", "= [] ignored = {'2S2015', '1S2016', '2S2018'} while year <= 2018: sem =", "self.credits = 0 self.grade = 0.0 self.sem = '' def __str__(self): return self.name", "1: c.name += ' ' + v.pop() crd = v.pop() if len(crd) ==", "getSemCR(sem): num = 0.0 den = 0 pr = int(sem[0]) year = int(sem[2:])", "[] ignored = {'2S2015', '1S2016', '2S2018'} while year <= 2018: sem = str(pr)", "c.grade = float(grade) c.sem = sem courses.append(c) def getCR(): return getCRUntil('2S3016') def getCRUntil(sem):", "ValueError: for i in range(3): v.pop() if len(v) != 0: return parseRevClass(v) else:", "range(2): v.pop() c.sem = v.pop() if len(v) != 0: return [c] + parseRevClass(v)", "== 1: c.name += ' ' + v.pop() crd = v.pop() if len(crd)", "c.credits cr = num/den/10 return cr def plot_charts(): year = 2012 pr =", "self.sem = '' def __str__(self): return self.name + ' ' + self.type +", "print(partial) plt.plot(labels, acc) plt.plot(labels, partial) plt.legend(('Accumulated', 'Partial')) plt.xlabel('Semester') 
plt.ylabel('CR') plt.title('CR evolution') plt.show() def", "+= ret return ans courses = [] def addNewCourse(name, credits, grade, sem): c", "grade, sem): c = Course() c.name = name c.credits = int(credits) c.grade =", "def parseRevClass(v): c = Course() c.name = v.pop() if len(c.name) == 1: c.name", "+= 1 print(acc) print(partial) plt.plot(labels, acc) plt.plot(labels, partial) plt.legend(('Accumulated', 'Partial')) plt.xlabel('Semester') plt.ylabel('CR') plt.title('CR", "str(year) if sem not in ignored: cacc = getCRUntil(sem) cpartial = getSemCR(sem) print('%s:", "parseRevClass(v) else: return [] try: grd = list(v.pop()) for i in range(len(grd)): if", "credits, grade, sem): c = Course() c.name = name c.credits = int(credits) c.grade", "getSemCR(sem) print('%s: %.4f' % (sem, cacc)) labels.append(sem) acc.append(cacc) partial.append(cpartial) pr += 1 if", "int(c.sem[0]) cyear = int(c.sem[2:]) if cyear == year and cpr == pr: num", "= '.' break grd = ''.join(grd) c.grade = float(grd) except ValueError: for i", "float(grd) except ValueError: for i in range(3): v.pop() if len(v) != 0: return", "sem = str(pr) + 'S' + str(year) if sem not in ignored: cacc", "as plt class Course: def __init__(self): self.name = '' self.type = '' self.credits", "with open(fname) as f: content = f.readlines() content = [line.strip() for line in", "= 1 labels = [] acc = [] partial = [] ignored =", "str(pr) + 'S' + str(year) if sem not in ignored: cacc = getCRUntil(sem)", "< year or (cyear == year and cpr <= pr): num += c.grade*c.credits", "c.sem = sem courses.append(c) def getCR(): return getCRUntil('2S3016') def getCRUntil(sem): num = 0.0", "v.pop() if len(c.name) == 1: c.name += ' ' + v.pop() crd =", "cyear = int(c.sem[2:]) if cyear == year and cpr == pr: num +=", "getCRUntil(sem) cpartial = getSemCR(sem) print('%s: %.4f' % (sem, cacc)) labels.append(sem) acc.append(cacc) partial.append(cpartial) pr", "year or (cyear == year and cpr <= pr): num += c.grade*c.credits den", "= '' self.type = 
'' self.credits = 0 self.grade = 0.0 self.sem =", "c in courses: cpr = int(c.sem[0]) cyear = int(c.sem[2:]) if cyear < year", "cyear == year and cpr == pr: num += c.grade*c.credits den += c.credits", "parseFromFile(fname): def parseRevClass(v): c = Course() c.name = v.pop() if len(c.name) == 1:", "= Course() c.name = v.pop() if len(c.name) == 1: c.name += ' '", "cacc)) labels.append(sem) acc.append(cacc) partial.append(cpartial) pr += 1 if pr == 3: pr =", "c = Course() c.name = name c.credits = int(credits) c.grade = float(grade) c.sem", "cpr <= pr): num += c.grade*c.credits den += c.credits cr = num/den/10 return", "acc.append(cacc) partial.append(cpartial) pr += 1 if pr == 3: pr = 1 year", "'' self.type = '' self.credits = 0 self.grade = 0.0 self.sem = ''", "year += 1 print(acc) print(partial) plt.plot(labels, acc) plt.plot(labels, partial) plt.legend(('Accumulated', 'Partial')) plt.xlabel('Semester') plt.ylabel('CR')", "list(v.pop()) for i in range(len(grd)): if grd[i] == ',': grd[i] = '.' 
break", "for line in content: ret = parseRevClass(line.split()[::-1]) ans += ret return ans courses", "+ 'S' + str(year) if sem not in ignored: cacc = getCRUntil(sem) cpartial", "v.pop() c.sem = v.pop() if len(v) != 0: return [c] + parseRevClass(v) else:", "0: return parseRevClass(v) else: return [] try: grd = list(v.pop()) for i in", "+ v.pop() crd = v.pop() if len(crd) == 1: crd = v.pop() c.credits", "line in content: ret = parseRevClass(line.split()[::-1]) ans += ret return ans courses =", "= parseRevClass(line.split()[::-1]) ans += ret return ans courses = [] def addNewCourse(name, credits,", "int(credits) c.grade = float(grade) c.sem = sem courses.append(c) def getCR(): return getCRUntil('2S3016') def", "partial) plt.legend(('Accumulated', 'Partial')) plt.xlabel('Semester') plt.ylabel('CR') plt.title('CR evolution') plt.show() def main(): global courses courses", "if cyear < year or (cyear == year and cpr <= pr): num", "= 0.0 self.sem = '' def __str__(self): return self.name + ' ' +", "' + self.type + ' ' + str(self.credits) + ' ' + str(self.grade)", "or (cyear == year and cpr <= pr): num += c.grade*c.credits den +=", "sem): c = Course() c.name = name c.credits = int(credits) c.grade = float(grade)", "= v.pop() c.credits = int(crd) c.type = v.pop() if len(c.type) != 1: #it", "= [] def addNewCourse(name, credits, grade, sem): c = Course() c.name = name", "grd[i] == ',': grd[i] = '.' break grd = ''.join(grd) c.grade = float(grd)", "v.pop() if len(v) != 0: return [c] + parseRevClass(v) else: return [c] with", "sem not in ignored: cacc = getCRUntil(sem) cpartial = getSemCR(sem) print('%s: %.4f' %", "range(len(grd)): if grd[i] == ',': grd[i] = '.' break grd = ''.join(grd) c.grade", "2012 pr = 1 labels = [] acc = [] partial = []", "grd[i] = '.' 
break grd = ''.join(grd) c.grade = float(grd) except ValueError: for", "' ' + self.type + ' ' + str(self.credits) + ' ' +", "= float(grade) c.sem = sem courses.append(c) def getCR(): return getCRUntil('2S3016') def getCRUntil(sem): num", "= v.pop() if len(crd) == 1: crd = v.pop() c.credits = int(crd) c.type", "cyear = int(c.sem[2:]) if cyear < year or (cyear == year and cpr", "in courses: cpr = int(c.sem[0]) cyear = int(c.sem[2:]) if cyear == year and", "ret return ans courses = [] def addNewCourse(name, credits, grade, sem): c =", "return parseRevClass(v) else: return [] try: grd = list(v.pop()) for i in range(len(grd)):", "== 3: pr = 1 year += 1 print(acc) print(partial) plt.plot(labels, acc) plt.plot(labels,", "<reponame>gbuenoandrade/Integralization-Simulator---Unicamp import numpy as np import matplotlib.pyplot as plt class Course: def __init__(self):", "range(3): v.pop() if len(v) != 0: return parseRevClass(v) else: return [] try: grd", "plt.ylabel('CR') plt.title('CR evolution') plt.show() def main(): global courses courses += parseFromFile('grades.txt') plot_charts() if", "pr = 1 labels = [] acc = [] partial = [] ignored", "+ str(self.grade) def parseFromFile(fname): def parseRevClass(v): c = Course() c.name = v.pop() if", "<= 2018: sem = str(pr) + 'S' + str(year) if sem not in", "if len(v) != 0: return parseRevClass(v) else: return [] try: grd = list(v.pop())", "as np import matplotlib.pyplot as plt class Course: def __init__(self): self.name = ''", "line in content] ans = [] for line in content: ret = parseRevClass(line.split()[::-1])", "[] def addNewCourse(name, credits, grade, sem): c = Course() c.name = name c.credits", "= 0 self.grade = 0.0 self.sem = '' def __str__(self): return self.name +", "self.name + ' ' + self.type + ' ' + str(self.credits) + '", "+= c.grade*c.credits den += c.credits cr = num/den/10 return cr def plot_charts(): year", "[] for line in content: ret = parseRevClass(line.split()[::-1]) ans += ret return ans", "[] partial = [] 
ignored = {'2S2015', '1S2016', '2S2018'} while year <= 2018:", "parseRevClass(v): c = Course() c.name = v.pop() if len(c.name) == 1: c.name +=", "for i in range(len(grd)): if grd[i] == ',': grd[i] = '.' break grd", "getCRUntil('2S3016') def getCRUntil(sem): num = 0.0 den = 0 pr = int(sem[0]) year", "den += c.credits cr = num/den/10 return cr def getSemCR(sem): num = 0.0", "partial.append(cpartial) pr += 1 if pr == 3: pr = 1 year +=", "!= 1: #it doesn't count for i in range(3): v.pop() if len(v) !=", "if len(v) != 0: return parseRevClass(v) else: return [] for i in range(2):", "numpy as np import matplotlib.pyplot as plt class Course: def __init__(self): self.name =", "+= c.credits cr = num/den/10 return cr def plot_charts(): year = 2012 pr", "ans courses = [] def addNewCourse(name, credits, grade, sem): c = Course() c.name", "self.grade = 0.0 self.sem = '' def __str__(self): return self.name + ' '", "cr = num/den/10 return cr def getSemCR(sem): num = 0.0 den = 0", "= [] partial = [] ignored = {'2S2015', '1S2016', '2S2018'} while year <=", "0 pr = int(sem[0]) year = int(sem[2:]) for c in courses: cpr =", "self.type + ' ' + str(self.credits) + ' ' + str(self.grade) def parseFromFile(fname):", "if grd[i] == ',': grd[i] = '.' 
break grd = ''.join(grd) c.grade =", "v.pop() if len(v) != 0: return parseRevClass(v) else: return [] for i in", "= int(credits) c.grade = float(grade) c.sem = sem courses.append(c) def getCR(): return getCRUntil('2S3016')", "= int(sem[0]) year = int(sem[2:]) for c in courses: cpr = int(c.sem[0]) cyear", "= str(pr) + 'S' + str(year) if sem not in ignored: cacc =", "open(fname) as f: content = f.readlines() content = [line.strip() for line in content]", "c.grade*c.credits den += c.credits cr = num/den/10 return cr def getSemCR(sem): num =", "'S' + str(year) if sem not in ignored: cacc = getCRUntil(sem) cpartial =", "return [c] with open(fname) as f: content = f.readlines() content = [line.strip() for", "if len(crd) == 1: crd = v.pop() c.credits = int(crd) c.type = v.pop()", "'1S2016', '2S2018'} while year <= 2018: sem = str(pr) + 'S' + str(year)", "= getSemCR(sem) print('%s: %.4f' % (sem, cacc)) labels.append(sem) acc.append(cacc) partial.append(cpartial) pr += 1", "= num/den/10 return cr def plot_charts(): year = 2012 pr = 1 labels", "= float(grd) except ValueError: for i in range(3): v.pop() if len(v) != 0:", "return ans courses = [] def addNewCourse(name, credits, grade, sem): c = Course()", "plt.show() def main(): global courses courses += parseFromFile('grades.txt') plot_charts() if __name__ == \"__main__\":", "cpr = int(c.sem[0]) cyear = int(c.sem[2:]) if cyear < year or (cyear ==", "+ str(self.credits) + ' ' + str(self.grade) def parseFromFile(fname): def parseRevClass(v): c =", "cr def plot_charts(): year = 2012 pr = 1 labels = [] acc", "in courses: cpr = int(c.sem[0]) cyear = int(c.sem[2:]) if cyear < year or", "if cyear == year and cpr == pr: num += c.grade*c.credits den +=", "= 0 pr = int(sem[0]) year = int(sem[2:]) for c in courses: cpr", "for i in range(2): v.pop() c.sem = v.pop() if len(v) != 0: return", "in range(2): v.pop() c.sem = v.pop() if len(v) != 0: return [c] +", "acc) plt.plot(labels, partial) plt.legend(('Accumulated', 'Partial')) 
plt.xlabel('Semester') plt.ylabel('CR') plt.title('CR evolution') plt.show() def main(): global", "print('%s: %.4f' % (sem, cacc)) labels.append(sem) acc.append(cacc) partial.append(cpartial) pr += 1 if pr", "ans += ret return ans courses = [] def addNewCourse(name, credits, grade, sem):", "= int(c.sem[0]) cyear = int(c.sem[2:]) if cyear == year and cpr == pr:", "len(c.name) == 1: c.name += ' ' + v.pop() crd = v.pop() if", "'' self.credits = 0 self.grade = 0.0 self.sem = '' def __str__(self): return", "else: return [] try: grd = list(v.pop()) for i in range(len(grd)): if grd[i]", "return [] for i in range(2): v.pop() c.sem = v.pop() if len(v) !=", "{'2S2015', '1S2016', '2S2018'} while year <= 2018: sem = str(pr) + 'S' +", "1 year += 1 print(acc) print(partial) plt.plot(labels, acc) plt.plot(labels, partial) plt.legend(('Accumulated', 'Partial')) plt.xlabel('Semester')", "den = 0 pr = int(sem[0]) year = int(sem[2:]) for c in courses:", "cyear < year or (cyear == year and cpr <= pr): num +=", "year = 2012 pr = 1 labels = [] acc = [] partial", "not in ignored: cacc = getCRUntil(sem) cpartial = getSemCR(sem) print('%s: %.4f' % (sem,", "c = Course() c.name = v.pop() if len(c.name) == 1: c.name += '", "labels.append(sem) acc.append(cacc) partial.append(cpartial) pr += 1 if pr == 3: pr = 1", "= Course() c.name = name c.credits = int(credits) c.grade = float(grade) c.sem =", "def getCRUntil(sem): num = 0.0 den = 0 pr = int(sem[0]) year =", "self.name = '' self.type = '' self.credits = 0 self.grade = 0.0 self.sem", "= int(crd) c.type = v.pop() if len(c.type) != 1: #it doesn't count for", "= 0.0 den = 0 pr = int(sem[0]) year = int(sem[2:]) for c", "= name c.credits = int(credits) c.grade = float(grade) c.sem = sem courses.append(c) def", "plt class Course: def __init__(self): self.name = '' self.type = '' self.credits =", "== pr: num += c.grade*c.credits den += c.credits cr = num/den/10 return cr", "pr = 1 year += 1 print(acc) print(partial) plt.plot(labels, acc) 
plt.plot(labels, partial) plt.legend(('Accumulated',", "year and cpr == pr: num += c.grade*c.credits den += c.credits cr =", "0 self.grade = 0.0 self.sem = '' def __str__(self): return self.name + '", "== ',': grd[i] = '.' break grd = ''.join(grd) c.grade = float(grd) except", "''.join(grd) c.grade = float(grd) except ValueError: for i in range(3): v.pop() if len(v)", "def main(): global courses courses += parseFromFile('grades.txt') plot_charts() if __name__ == \"__main__\": main()", "' + str(self.grade) def parseFromFile(fname): def parseRevClass(v): c = Course() c.name = v.pop()", "1 if pr == 3: pr = 1 year += 1 print(acc) print(partial)", "'' def __str__(self): return self.name + ' ' + self.type + ' '", "1: #it doesn't count for i in range(3): v.pop() if len(v) != 0:", "acc = [] partial = [] ignored = {'2S2015', '1S2016', '2S2018'} while year", "0: return [c] + parseRevClass(v) else: return [c] with open(fname) as f: content", "return cr def getSemCR(sem): num = 0.0 den = 0 pr = int(sem[0])", "Course() c.name = name c.credits = int(credits) c.grade = float(grade) c.sem = sem", "i in range(2): v.pop() c.sem = v.pop() if len(v) != 0: return [c]", "getCR(): return getCRUntil('2S3016') def getCRUntil(sem): num = 0.0 den = 0 pr =", "def getCR(): return getCRUntil('2S3016') def getCRUntil(sem): num = 0.0 den = 0 pr", "class Course: def __init__(self): self.name = '' self.type = '' self.credits = 0", "def addNewCourse(name, credits, grade, sem): c = Course() c.name = name c.credits =", "year = int(sem[2:]) for c in courses: cpr = int(c.sem[0]) cyear = int(c.sem[2:])", "f: content = f.readlines() content = [line.strip() for line in content] ans =", "% (sem, cacc)) labels.append(sem) acc.append(cacc) partial.append(cpartial) pr += 1 if pr == 3:", "for c in courses: cpr = int(c.sem[0]) cyear = int(c.sem[2:]) if cyear <", "in range(3): v.pop() if len(v) != 0: return parseRevClass(v) else: return [] for", "num/den/10 return cr def getSemCR(sem): num = 0.0 den = 0 pr =", 
"import matplotlib.pyplot as plt class Course: def __init__(self): self.name = '' self.type =", "doesn't count for i in range(3): v.pop() if len(v) != 0: return parseRevClass(v)", "c.name += ' ' + v.pop() crd = v.pop() if len(crd) == 1:", "as f: content = f.readlines() content = [line.strip() for line in content] ans", "!= 0: return [c] + parseRevClass(v) else: return [c] with open(fname) as f:", "num = 0.0 den = 0 pr = int(sem[0]) year = int(sem[2:]) for", "grd = ''.join(grd) c.grade = float(grd) except ValueError: for i in range(3): v.pop()", "Course() c.name = v.pop() if len(c.name) == 1: c.name += ' ' +", "except ValueError: for i in range(3): v.pop() if len(v) != 0: return parseRevClass(v)", "try: grd = list(v.pop()) for i in range(len(grd)): if grd[i] == ',': grd[i]", "ignored = {'2S2015', '1S2016', '2S2018'} while year <= 2018: sem = str(pr) +", "cacc = getCRUntil(sem) cpartial = getSemCR(sem) print('%s: %.4f' % (sem, cacc)) labels.append(sem) acc.append(cacc)", "= [line.strip() for line in content] ans = [] for line in content:", "def getSemCR(sem): num = 0.0 den = 0 pr = int(sem[0]) year =", "',': grd[i] = '.' 
break grd = ''.join(grd) c.grade = float(grd) except ValueError:", "= [] for line in content: ret = parseRevClass(line.split()[::-1]) ans += ret return", "plt.legend(('Accumulated', 'Partial')) plt.xlabel('Semester') plt.ylabel('CR') plt.title('CR evolution') plt.show() def main(): global courses courses +=", "' ' + str(self.grade) def parseFromFile(fname): def parseRevClass(v): c = Course() c.name =", "for c in courses: cpr = int(c.sem[0]) cyear = int(c.sem[2:]) if cyear ==", "Course: def __init__(self): self.name = '' self.type = '' self.credits = 0 self.grade", "[] acc = [] partial = [] ignored = {'2S2015', '1S2016', '2S2018'} while", "int(sem[2:]) for c in courses: cpr = int(c.sem[0]) cyear = int(c.sem[2:]) if cyear", "else: return [c] with open(fname) as f: content = f.readlines() content = [line.strip()", "0.0 self.sem = '' def __str__(self): return self.name + ' ' + self.type", "<= pr): num += c.grade*c.credits den += c.credits cr = num/den/10 return cr", "pr): num += c.grade*c.credits den += c.credits cr = num/den/10 return cr def", "sem courses.append(c) def getCR(): return getCRUntil('2S3016') def getCRUntil(sem): num = 0.0 den =", "int(c.sem[2:]) if cyear < year or (cyear == year and cpr <= pr):", "len(crd) == 1: crd = v.pop() c.credits = int(crd) c.type = v.pop() if", "c.credits = int(credits) c.grade = float(grade) c.sem = sem courses.append(c) def getCR(): return", "ignored: cacc = getCRUntil(sem) cpartial = getSemCR(sem) print('%s: %.4f' % (sem, cacc)) labels.append(sem)", "c in courses: cpr = int(c.sem[0]) cyear = int(c.sem[2:]) if cyear == year", "= '' self.credits = 0 self.grade = 0.0 self.sem = '' def __str__(self):", "courses: cpr = int(c.sem[0]) cyear = int(c.sem[2:]) if cyear < year or (cyear", "c.credits cr = num/den/10 return cr def getSemCR(sem): num = 0.0 den =", "[] for i in range(2): v.pop() c.sem = v.pop() if len(v) != 0:", "v.pop() if len(v) != 0: return parseRevClass(v) else: return [] try: grd =", "getCRUntil(sem): num = 0.0 den = 0 
import numpy as np
import matplotlib.pyplot as plt


class Course:
    def __init__(self):
        self.name = ''
        self.type = ''
        self.credits = 0
        self.grade = 0.0
        self.sem = ''

    def __str__(self):
        return self.name + ' ' + self.type + ' ' + str(self.credits) + ' ' + str(self.grade)


def parseFromFile(fname):
    def parseRevClass(v):
        c = Course()
        c.name = v.pop()
        if len(c.name) == 1:
            c.name += ' ' + v.pop()
        crd = v.pop()
        if len(crd) == 1:
            crd = v.pop()
        c.credits = int(crd)
        c.type = v.pop()
        if len(c.type) != 1:  # it doesn't count
            for i in range(3):
                v.pop()
            if len(v) != 0:
                return parseRevClass(v)
            else:
                return []
        try:
            grd = list(v.pop())
            for i in range(len(grd)):
                if grd[i] == ',':
                    grd[i] = '.'
                    break
            grd = ''.join(grd)
            c.grade = float(grd)
        except ValueError:
            for i in range(3):
                v.pop()
            if len(v) != 0:
                return parseRevClass(v)
            else:
                return []
        for i in range(2):
            v.pop()
        c.sem = v.pop()
        if len(v) != 0:
            return [c] + parseRevClass(v)
        else:
            return [c]

    with open(fname) as f:
        content = f.readlines()
    content = [line.strip() for line in content]
    ans = []
    for line in content:
        ret = parseRevClass(line.split()[::-1])
        ans += ret
    return ans


courses = []


def addNewCourse(name, credits, grade, sem):
    c = Course()
    c.name = name
    c.credits = int(credits)
    c.grade = float(grade)
    c.sem = sem
    courses.append(c)


def getCR():
    # '2S3016' is far enough in the future that every course is included.
    return getCRUntil('2S3016')


def getCRUntil(sem):
    num = 0.0
    den = 0
    pr = int(sem[0])
    year = int(sem[2:])
    for c in courses:
        cpr = int(c.sem[0])
        cyear = int(c.sem[2:])
        if cyear < year or (cyear == year and cpr <= pr):
            num += c.grade*c.credits
            den += c.credits
    cr = num/den/10
    return cr


def getSemCR(sem):
    num = 0.0
    den = 0
    pr = int(sem[0])
    year = int(sem[2:])
    for c in courses:
        cpr = int(c.sem[0])
        cyear = int(c.sem[2:])
        if cyear == year and cpr == pr:
            num += c.grade*c.credits
            den += c.credits
    cr = num/den/10
    return cr


def plot_charts():
    year = 2012
    pr = 1
    labels = []
    acc = []
    partial = []
    ignored = {'2S2015', '1S2016', '2S2018'}
    while year <= 2018:
        sem = str(pr) + 'S' + str(year)
        if sem not in ignored:
            cacc = getCRUntil(sem)
            cpartial = getSemCR(sem)
            print('%s: %.4f' % (sem, cacc))
            labels.append(sem)
            acc.append(cacc)
            partial.append(cpartial)
        pr += 1
        if pr == 3:
            pr = 1
            year += 1
    print(acc)
    print(partial)
    plt.plot(labels, acc)
    plt.plot(labels, partial)
    plt.legend(('Accumulated', 'Partial'))
    plt.xlabel('Semester')
    plt.ylabel('CR')
    plt.title('CR evolution')
    plt.show()


def main():
    global courses
    courses += parseFromFile('grades.txt')
    plot_charts()


if __name__ == '__main__':
    main()
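getCRUntil() computes a credit-weighted grade average: each grade is weighted by the course's credits, summed over every semester up to the cutoff, and the result is divided by 10. Here is a minimal self-contained sketch of that cutoff-and-average logic; the course rows and grade scale below are dummy values for illustration (the real data comes from grades.txt, whose format is not shown here):

```python
# Sketch of the credit-weighted CR computation; only the cutoff test and
# the num/den/10 formula mirror the script. Course rows are made up.
sample_courses = [
    # (semester, credits, grade)
    ('1S2012', 4, 80.0),
    ('2S2012', 6, 90.0),
    ('1S2013', 4, 70.0),
]


def cr_until(sem, courses=sample_courses):
    """Accumulated CR over every course up to and including semester `sem`."""
    pr, year = int(sem[0]), int(sem[2:])
    num, den = 0.0, 0
    for csem, credits, grade in courses:
        cpr, cyear = int(csem[0]), int(csem[2:])
        # A course counts if taken in an earlier year, or in the same
        # year during this period or an earlier one.
        if cyear < year or (cyear == year and cpr <= pr):
            num += grade * credits
            den += credits
    return num / den / 10


print(cr_until('2S2012'))  # only the two 2012 courses count
```

Asking for `cr_until('1S2013')` additionally pulls in the 2013 course, which lowers the weighted average.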
[ "import pymysql import re import random import datetime import sys import argparse import", "Scrape articles from Wikipedia and store into MySQl Database choose_link = starting_url skipping", "{starting_url}') # Scrape articles from Wikipedia and store into MySQl Database choose_link =", "python3.6 import pymysql import re import random import datetime import sys import argparse", "\"-sql_tn\", help=\"The table name that will be used to store data.\", required=True) parser.add_argument(\"--max_sql_store_num\",", "os ######################### # Main-Routine # ######################### def main(): #Initialization print('> Crawler Initialization...') iter_num", "maximum number that stores in MySQL table.\", required=True) parser.add_argument(\"--unix_socket\", \"-sql_un_sock\", help=\"The unix_socket that", "__name__ == '__main__': import sys this_script_path = os.path.realpath(__file__) this_script_folder = os.path.dirname(this_script_path) crawler_nba_pkg_path =", "table, unix_socket, database_name) #Sideband Setting current_time = datetime.datetime.now() print(f'current_time = {current_time}') random.seed(datetime.datetime.now()) starting_url", "read from MySQL Database sql_ex = 'SELECT id, title, created, LEFT(content, 32) FROM", "args.unix_socket: unix_socket = args.unix_socket if args.database_name: database_name = args.database_name return(password, table, max_sql_store_num, unix_socket,", "'SELECT id, title, created, LEFT(content, 32) FROM {table_name} WHERE id=4;'.format(table_name=table) crawler_nba.cur.execute(sql_ex) results =", "= str(row[2]) content_name = row[3] print('{x:<2s}, {y:<2s}, {z:<2s}, {k:<2s}'.format(x=id_name, y=title_name, z=created_name, k=content_name)) #", "import random import datetime import sys import argparse import os ######################### # Main-Routine", "parser.add_argument(\"--mysql_table_name\", \"-sql_tn\", help=\"The table name that will be used to store data.\", required=True)", "this_script_path = 
os.path.realpath(__file__) this_script_folder = os.path.dirname(this_script_path) crawler_nba_pkg_path = this_script_folder+'/../../crawler' print('Add to sys.path :", "\"http://en.wikipedia.org\"+all_internal_links_loop[random.randint(0, total_num_internal_links_loop-1)].attrs['href'] if(skipping == 0): iter_num += 1 # Test to read from", "id=4;'.format(table_name=table) crawler_nba.cur.execute(sql_ex) results = crawler_nba.cur.fetchall() print(f'-------------------Execution {sql_ex}-------------------') print(f'table = {table}') for row in", "def ArgumentParser(): password = \"\" table = \"\" database_name = \"\" unix_socket =", "{z:<2s}, {k:<2s}'.format(x=id_name, y=title_name, z=created_name, k=content_name)) # Close the connection of MySQL Database crawler_nba.MySQLDBClose(crawler_nba.cur,", "if(total_num_internal_links_loop > 0): choose_link = \"http://en.wikipedia.org\"+all_internal_links_loop[random.randint(0, total_num_internal_links_loop-1)].attrs['href'] if(skipping == 0): iter_num += 1", "table.\", required=True) parser.add_argument(\"--unix_socket\", \"-sql_un_sock\", help=\"The unix_socket that is used to mypysql connection.\", required=True)", "total_num_internal_links_loop = len(all_internal_links_loop) if(total_num_internal_links_loop > 0): choose_link = \"http://en.wikipedia.org\"+all_internal_links_loop[random.randint(0, total_num_internal_links_loop-1)].attrs['href'] if(skipping == 0):", "crawler_nba.cur, table) total_num_internal_links_loop = len(all_internal_links_loop) if(total_num_internal_links_loop > 0): choose_link = \"http://en.wikipedia.org\"+all_internal_links_loop[random.randint(0, total_num_internal_links_loop-1)].attrs['href'] if(skipping", "= str(row[0]) title_name = row[1] created_name = str(row[2]) content_name = row[3] print('{x:<2s}, {y:<2s},", "######################### # Main-Routine # ######################### def main(): #Initialization print('> Crawler Initialization...') iter_num =", "#Argument Parser (password, table, 
max_sql_store_num, unix_socket, database_name) = ArgumentParser() #DB Initialization print('> DB", "args.max_sql_store_num: max_sql_store_num = int(args.max_sql_store_num) if args.unix_socket: unix_socket = args.unix_socket if args.database_name: database_name =", "datetime import sys import argparse import os ######################### # Main-Routine # ######################### def", "iter_num = 0 crawler_nba.init() #Argument Parser (password, table, max_sql_store_num, unix_socket, database_name) = ArgumentParser()", "parser.add_argument(\"--max_sql_store_num\", \"-sql_mx_sn\", help=\"The maximum number that stores in MySQL table.\", required=True) parser.add_argument(\"--unix_socket\", \"-sql_un_sock\",", "Wikipedia and store into MySQl Database choose_link = starting_url skipping = 0 while(iter_num", "connect to MySQL server.\", required=True) parser.add_argument(\"--mysql_table_name\", \"-sql_tn\", help=\"The table name that will be", "number that stores in MySQL table.\", required=True) parser.add_argument(\"--unix_socket\", \"-sql_un_sock\", help=\"The unix_socket that is", "= this_script_folder+'/../../crawler' print('Add to sys.path : {x}'.format(x=crawler_nba_pkg_path)) sys.path.append(crawler_nba_pkg_path) import package_crawler_nba.crawler_nba as crawler_nba print('Import", "print('iter_num = {}. 
Get Wiki Links and store the content to MySQL...'.format(iter_num)) print(f'choose_link", "starting_url = \"https://en.wikipedia.org/wiki/Kevin_Bacon\" print(f'starting_url = {starting_url}') # Scrape articles from Wikipedia and store", "= {choose_link}') all_internal_links_loop, skipping = crawler_nba.GetWikiLinksContent(choose_link, crawler_nba.cur, table) total_num_internal_links_loop = len(all_internal_links_loop) if(total_num_internal_links_loop >", "choose_link = \"http://en.wikipedia.org\"+all_internal_links_loop[random.randint(0, total_num_internal_links_loop-1)].attrs['href'] if(skipping == 0): iter_num += 1 # Test to", "content to MySQL...'.format(iter_num)) print(f'choose_link = {choose_link}') all_internal_links_loop, skipping = crawler_nba.GetWikiLinksContent(choose_link, crawler_nba.cur, table) total_num_internal_links_loop", "table) total_num_internal_links_loop = len(all_internal_links_loop) if(total_num_internal_links_loop > 0): choose_link = \"http://en.wikipedia.org\"+all_internal_links_loop[random.randint(0, total_num_internal_links_loop-1)].attrs['href'] if(skipping ==", "to read from MySQL Database sql_ex = 'SELECT id, title, created, LEFT(content, 32)", "print('> DB Initialization...') crawler_nba.MySQLDBInitialize(password, table, unix_socket, database_name) #Sideband Setting current_time = datetime.datetime.now() print(f'current_time", "{sql_ex}-------------------') print(f'table = {table}') for row in results: id_name = str(row[0]) title_name =", "######################### def main(): #Initialization print('> Crawler Initialization...') iter_num = 0 crawler_nba.init() #Argument Parser", "from MySQL Database sql_ex = 'SELECT id, title, created, LEFT(content, 32) FROM {table_name}", "will be used to store data.\", required=True) parser.add_argument(\"--max_sql_store_num\", \"-sql_mx_sn\", help=\"The maximum number that", "random.seed(datetime.datetime.now()) starting_url = \"https://en.wikipedia.org/wiki/Kevin_Bacon\" print(f'starting_url = 
{starting_url}') # Scrape articles from Wikipedia and", "= \"\" table = \"\" database_name = \"\" unix_socket = \"\" max_sql_store_num =", "= \"http://en.wikipedia.org\"+all_internal_links_loop[random.randint(0, total_num_internal_links_loop-1)].attrs['href'] if(skipping == 0): iter_num += 1 # Test to read", "\"-sql_un_sock\", help=\"The unix_socket that is used to mypysql connection.\", required=True) parser.add_argument(\"--database_name\", \"-database_name\", help=\"The", "= 0 crawler_nba.init() #Argument Parser (password, table, max_sql_store_num, unix_socket, database_name) = ArgumentParser() #DB", "return(password, table, max_sql_store_num, unix_socket, database_name) #-----------------Execution------------------# if __name__ == '__main__': import sys this_script_path", "= argparse.ArgumentParser() parser.add_argument(\"--mysql_password\", \"-sql_p\", help=\"The password to connect to MySQL server.\", required=True) parser.add_argument(\"--mysql_table_name\",", "database_name = \"\" unix_socket = \"\" max_sql_store_num = 10 parser = argparse.ArgumentParser() parser.add_argument(\"--mysql_password\",", "re import random import datetime import sys import argparse import os ######################### #", "import os ######################### # Main-Routine # ######################### def main(): #Initialization print('> Crawler Initialization...')", "> 0): choose_link = \"http://en.wikipedia.org\"+all_internal_links_loop[random.randint(0, total_num_internal_links_loop-1)].attrs['href'] if(skipping == 0): iter_num += 1 #", "created_name = str(row[2]) content_name = row[3] print('{x:<2s}, {y:<2s}, {z:<2s}, {k:<2s}'.format(x=id_name, y=title_name, z=created_name, k=content_name))", "to mypysql connection.\", required=True) args = parser.parse_args() if args.mysql_password: password = args.mysql_password if", "\"\" unix_socket = \"\" max_sql_store_num = 10 parser = argparse.ArgumentParser() parser.add_argument(\"--mysql_password\", \"-sql_p\", help=\"The", "for row in 
results: id_name = str(row[0]) title_name = row[1] created_name = str(row[2])", "32) FROM {table_name} WHERE id=4;'.format(table_name=table) crawler_nba.cur.execute(sql_ex) results = crawler_nba.cur.fetchall() print(f'-------------------Execution {sql_ex}-------------------') print(f'table =", "args = parser.parse_args() if args.mysql_password: password = args.mysql_password if args.mysql_table_name: table = args.mysql_table_name", "Initialization print('> DB Initialization...') crawler_nba.MySQLDBInitialize(password, table, unix_socket, database_name) #Sideband Setting current_time = datetime.datetime.now()", "parser.add_argument(\"--unix_socket\", \"-sql_un_sock\", help=\"The unix_socket that is used to mypysql connection.\", required=True) parser.add_argument(\"--database_name\", \"-database_name\",", "mypysql connection.\", required=True) parser.add_argument(\"--database_name\", \"-database_name\", help=\"The unix_socket that is used to mypysql connection.\",", "MySQL server.\", required=True) parser.add_argument(\"--mysql_table_name\", \"-sql_tn\", help=\"The table name that will be used to", "unix_socket, database_name) #-----------------Execution------------------# if __name__ == '__main__': import sys this_script_path = os.path.realpath(__file__) this_script_folder", "that will be used to store data.\", required=True) parser.add_argument(\"--max_sql_store_num\", \"-sql_mx_sn\", help=\"The maximum number", "os.path.dirname(this_script_path) crawler_nba_pkg_path = this_script_folder+'/../../crawler' print('Add to sys.path : {x}'.format(x=crawler_nba_pkg_path)) sys.path.append(crawler_nba_pkg_path) import package_crawler_nba.crawler_nba as", "table, max_sql_store_num, unix_socket, database_name) #-----------------Execution------------------# if __name__ == '__main__': import sys this_script_path =", "str(row[0]) title_name = row[1] created_name = str(row[2]) content_name = row[3] print('{x:<2s}, {y:<2s}, {z:<2s},", "# Close the connection of MySQL Database 
crawler_nba.MySQLDBClose(crawler_nba.cur, crawler_nba.conn) ######################### # Sub-Routine #", "total_num_internal_links_loop-1)].attrs['href'] if(skipping == 0): iter_num += 1 # Test to read from MySQL", "Parser (password, table, max_sql_store_num, unix_socket, database_name) = ArgumentParser() #DB Initialization print('> DB Initialization...')", "database_name) #-----------------Execution------------------# if __name__ == '__main__': import sys this_script_path = os.path.realpath(__file__) this_script_folder =", "the connection of MySQL Database crawler_nba.MySQLDBClose(crawler_nba.cur, crawler_nba.conn) ######################### # Sub-Routine # ######################### def", "table, max_sql_store_num, unix_socket, database_name) = ArgumentParser() #DB Initialization print('> DB Initialization...') crawler_nba.MySQLDBInitialize(password, table,", "#Sideband Setting current_time = datetime.datetime.now() print(f'current_time = {current_time}') random.seed(datetime.datetime.now()) starting_url = \"https://en.wikipedia.org/wiki/Kevin_Bacon\" print(f'starting_url", "= os.path.realpath(__file__) this_script_folder = os.path.dirname(this_script_path) crawler_nba_pkg_path = this_script_folder+'/../../crawler' print('Add to sys.path : {x}'.format(x=crawler_nba_pkg_path))", "random import datetime import sys import argparse import os ######################### # Main-Routine #", "password to connect to MySQL server.\", required=True) parser.add_argument(\"--mysql_table_name\", \"-sql_tn\", help=\"The table name that", "y=title_name, z=created_name, k=content_name)) # Close the connection of MySQL Database crawler_nba.MySQLDBClose(crawler_nba.cur, crawler_nba.conn) #########################", "args.mysql_table_name if args.max_sql_store_num: max_sql_store_num = int(args.max_sql_store_num) if args.unix_socket: unix_socket = args.unix_socket if args.database_name:", "starting_url skipping = 0 while(iter_num < max_sql_store_num): print('iter_num = {}. 
Get Wiki Links", "args.unix_socket if args.database_name: database_name = args.database_name return(password, table, max_sql_store_num, unix_socket, database_name) #-----------------Execution------------------# if", "required=True) parser.add_argument(\"--mysql_table_name\", \"-sql_tn\", help=\"The table name that will be used to store data.\",", "= \"https://en.wikipedia.org/wiki/Kevin_Bacon\" print(f'starting_url = {starting_url}') # Scrape articles from Wikipedia and store into", "\"-sql_mx_sn\", help=\"The maximum number that stores in MySQL table.\", required=True) parser.add_argument(\"--unix_socket\", \"-sql_un_sock\", help=\"The", "# Scrape articles from Wikipedia and store into MySQl Database choose_link = starting_url", "sys import argparse import os ######################### # Main-Routine # ######################### def main(): #Initialization", "required=True) parser.add_argument(\"--unix_socket\", \"-sql_un_sock\", help=\"The unix_socket that is used to mypysql connection.\", required=True) parser.add_argument(\"--database_name\",", "\"https://en.wikipedia.org/wiki/Kevin_Bacon\" print(f'starting_url = {starting_url}') # Scrape articles from Wikipedia and store into MySQl", "if args.max_sql_store_num: max_sql_store_num = int(args.max_sql_store_num) if args.unix_socket: unix_socket = args.unix_socket if args.database_name: database_name", "results = crawler_nba.cur.fetchall() print(f'-------------------Execution {sql_ex}-------------------') print(f'table = {table}') for row in results: id_name", "from Wikipedia and store into MySQl Database choose_link = starting_url skipping = 0", "crawler_nba.GetWikiLinksContent(choose_link, crawler_nba.cur, table) total_num_internal_links_loop = len(all_internal_links_loop) if(total_num_internal_links_loop > 0): choose_link = \"http://en.wikipedia.org\"+all_internal_links_loop[random.randint(0, total_num_internal_links_loop-1)].attrs['href']", "args.database_name: database_name = args.database_name return(password, 
table, max_sql_store_num, unix_socket, database_name) #-----------------Execution------------------# if __name__ ==", "crawler_nba.MySQLDBInitialize(password, table, unix_socket, database_name) #Sideband Setting current_time = datetime.datetime.now() print(f'current_time = {current_time}') random.seed(datetime.datetime.now())", "used to mypysql connection.\", required=True) parser.add_argument(\"--database_name\", \"-database_name\", help=\"The unix_socket that is used to", "parser.parse_args() if args.mysql_password: password = args.mysql_password if args.mysql_table_name: table = args.mysql_table_name if args.max_sql_store_num:", "len(all_internal_links_loop) if(total_num_internal_links_loop > 0): choose_link = \"http://en.wikipedia.org\"+all_internal_links_loop[random.randint(0, total_num_internal_links_loop-1)].attrs['href'] if(skipping == 0): iter_num +=", "== '__main__': import sys this_script_path = os.path.realpath(__file__) this_script_folder = os.path.dirname(this_script_path) crawler_nba_pkg_path = this_script_folder+'/../../crawler'", "{k:<2s}'.format(x=id_name, y=title_name, z=created_name, k=content_name)) # Close the connection of MySQL Database crawler_nba.MySQLDBClose(crawler_nba.cur, crawler_nba.conn)", "table = \"\" database_name = \"\" unix_socket = \"\" max_sql_store_num = 10 parser", "'__main__': import sys this_script_path = os.path.realpath(__file__) this_script_folder = os.path.dirname(this_script_path) crawler_nba_pkg_path = this_script_folder+'/../../crawler' print('Add", "unix_socket, database_name) = ArgumentParser() #DB Initialization print('> DB Initialization...') crawler_nba.MySQLDBInitialize(password, table, unix_socket, database_name)", "in MySQL table.\", required=True) parser.add_argument(\"--unix_socket\", \"-sql_un_sock\", help=\"The unix_socket that is used to mypysql", "help=\"The password to connect to MySQL server.\", required=True) parser.add_argument(\"--mysql_table_name\", \"-sql_tn\", help=\"The table name", "if 
args.mysql_password: password = args.mysql_password if args.mysql_table_name: table = args.mysql_table_name if args.max_sql_store_num: max_sql_store_num", "help=\"The unix_socket that is used to mypysql connection.\", required=True) args = parser.parse_args() if", "max_sql_store_num, unix_socket, database_name) #-----------------Execution------------------# if __name__ == '__main__': import sys this_script_path = os.path.realpath(__file__)", "= parser.parse_args() if args.mysql_password: password = args.mysql_password if args.mysql_table_name: table = args.mysql_table_name if", "0 while(iter_num < max_sql_store_num): print('iter_num = {}. Get Wiki Links and store the", "pymysql import re import random import datetime import sys import argparse import os", "args.mysql_password: password = args.mysql_password if args.mysql_table_name: table = args.mysql_table_name if args.max_sql_store_num: max_sql_store_num =", "\"-database_name\", help=\"The unix_socket that is used to mypysql connection.\", required=True) args = parser.parse_args()", "mypysql connection.\", required=True) args = parser.parse_args() if args.mysql_password: password = args.mysql_password if args.mysql_table_name:", "server.\", required=True) parser.add_argument(\"--mysql_table_name\", \"-sql_tn\", help=\"The table name that will be used to store", "= args.database_name return(password, table, max_sql_store_num, unix_socket, database_name) #-----------------Execution------------------# if __name__ == '__main__': import", "== 0): iter_num += 1 # Test to read from MySQL Database sql_ex", "print(f'table = {table}') for row in results: id_name = str(row[0]) title_name = row[1]", "print(f'starting_url = {starting_url}') # Scrape articles from Wikipedia and store into MySQl Database", "10 parser = argparse.ArgumentParser() parser.add_argument(\"--mysql_password\", \"-sql_p\", help=\"The password to connect to MySQL server.\",", "import argparse import os ######################### # Main-Routine # 
######################### def main(): #Initialization print('>", "store the content to MySQL...'.format(iter_num)) print(f'choose_link = {choose_link}') all_internal_links_loop, skipping = crawler_nba.GetWikiLinksContent(choose_link, crawler_nba.cur,", "print('Add to sys.path : {x}'.format(x=crawler_nba_pkg_path)) sys.path.append(crawler_nba_pkg_path) import package_crawler_nba.crawler_nba as crawler_nba print('Import package_crawler_nba successfully.')", "DB Initialization...') crawler_nba.MySQLDBInitialize(password, table, unix_socket, database_name) #Sideband Setting current_time = datetime.datetime.now() print(f'current_time =", "= args.mysql_table_name if args.max_sql_store_num: max_sql_store_num = int(args.max_sql_store_num) if args.unix_socket: unix_socket = args.unix_socket if", "datetime.datetime.now() print(f'current_time = {current_time}') random.seed(datetime.datetime.now()) starting_url = \"https://en.wikipedia.org/wiki/Kevin_Bacon\" print(f'starting_url = {starting_url}') # Scrape", "to MySQL server.\", required=True) parser.add_argument(\"--mysql_table_name\", \"-sql_tn\", help=\"The table name that will be used", "str(row[2]) content_name = row[3] print('{x:<2s}, {y:<2s}, {z:<2s}, {k:<2s}'.format(x=id_name, y=title_name, z=created_name, k=content_name)) # Close", "crawler_nba.cur.execute(sql_ex) results = crawler_nba.cur.fetchall() print(f'-------------------Execution {sql_ex}-------------------') print(f'table = {table}') for row in results:", "required=True) parser.add_argument(\"--database_name\", \"-database_name\", help=\"The unix_socket that is used to mypysql connection.\", required=True) args", "name that will be used to store data.\", required=True) parser.add_argument(\"--max_sql_store_num\", \"-sql_mx_sn\", help=\"The maximum", "/usr/bin/env python3.6 import pymysql import re import random import datetime import sys import", "table = args.mysql_table_name if args.max_sql_store_num: max_sql_store_num = int(args.max_sql_store_num) if 
#!/usr/bin/env python3.6
import pymysql
import re
import random
import datetime
import sys
import argparse
import os

#########################
#     Main-Routine      #
#########################
def main():
    # Initialization
    print('> Crawler Initialization...')
    iter_num = 0
    crawler_nba.init()

    # Argument Parser
    (password, table, max_sql_store_num, unix_socket, database_name) = ArgumentParser()

    # DB Initialization
    print('> DB Initialization...')
    crawler_nba.MySQLDBInitialize(password, table, unix_socket, database_name)

    # Sideband Setting
    current_time = datetime.datetime.now()
    print(f'current_time = {current_time}')
    random.seed(datetime.datetime.now())
    starting_url = "https://en.wikipedia.org/wiki/Kevin_Bacon"
    print(f'starting_url = {starting_url}')

    # Scrape articles from Wikipedia and store them into the MySQL database
    choose_link = starting_url
    skipping = 0
    while iter_num < max_sql_store_num:
        print('iter_num = {}. Get Wiki Links and store the content to MySQL...'.format(iter_num))
        print(f'choose_link = {choose_link}')
        all_internal_links_loop, skipping = crawler_nba.GetWikiLinksContent(choose_link, crawler_nba.cur, table)
        total_num_internal_links_loop = len(all_internal_links_loop)
        if total_num_internal_links_loop > 0:
            # Follow a randomly chosen internal link on the next iteration
            choose_link = "http://en.wikipedia.org" + all_internal_links_loop[random.randint(0, total_num_internal_links_loop - 1)].attrs['href']
        if skipping == 0:
            iter_num += 1

    # Test: read one row back from the MySQL database
    sql_ex = 'SELECT id, title, created, LEFT(content, 32) FROM {table_name} WHERE id=4;'.format(table_name=table)
    crawler_nba.cur.execute(sql_ex)
    results = crawler_nba.cur.fetchall()
    print(f'-------------------Execution {sql_ex}-------------------')
    print(f'table = {table}')
    for row in results:
        id_name = str(row[0])
        title_name = row[1]
        created_name = str(row[2])
        content_name = row[3]
        print('{x:<2s}, {y:<2s}, {z:<2s}, {k:<2s}'.format(x=id_name, y=title_name, z=created_name, k=content_name))

    # Close the connection to the MySQL database
    crawler_nba.MySQLDBClose(crawler_nba.cur, crawler_nba.conn)

#########################
#      Sub-Routine      #
#########################
def ArgumentParser():
    password = ""
    table = ""
    database_name = ""
    unix_socket = ""
    max_sql_store_num = 10
    parser = argparse.ArgumentParser()
    parser.add_argument("--mysql_password", "-sql_p", help="The password to connect to the MySQL server.", required=True)
    parser.add_argument("--mysql_table_name", "-sql_tn", help="The table name that will be used to store data.", required=True)
    parser.add_argument("--max_sql_store_num", "-sql_mx_sn", help="The maximum number of rows to store in the MySQL table.", required=True)
    parser.add_argument("--unix_socket", "-sql_un_sock", help="The unix_socket that is used for the pymysql connection.", required=True)
    parser.add_argument("--database_name", "-database_name", help="The database name that is used for the pymysql connection.", required=True)
    args = parser.parse_args()
    if args.mysql_password:
        password = args.mysql_password
    if args.mysql_table_name:
        table = args.mysql_table_name
    if args.max_sql_store_num:
        max_sql_store_num = int(args.max_sql_store_num)
    if args.unix_socket:
        unix_socket = args.unix_socket
    if args.database_name:
        database_name = args.database_name
    return (password, table, max_sql_store_num, unix_socket, database_name)

#-----------------Execution------------------#
if __name__ == '__main__':
    this_script_path = os.path.realpath(__file__)
    this_script_folder = os.path.dirname(this_script_path)
    crawler_nba_pkg_path = this_script_folder + '/../../crawler'
    print('Add to sys.path : {x}'.format(x=crawler_nba_pkg_path))
    sys.path.append(crawler_nba_pkg_path)
    import package_crawler_nba.crawler_nba as crawler_nba
    print('Import package_crawler_nba successfully.')
    main()